ACM Transactions on Interactive Intelligent Systems (TIIS)

Latest Articles

Chronodes: Interactive Multifocus Exploration of Event Sequences

VisForum: A Visual Analysis System for Exploring User Groups in Online Forums

A Visual Analytics Framework for Exploring Theme Park Dynamics

Visualizing Research Impact through Citation Data

A Visual Approach for Interactive Keyterm-Based Clustering

NEWS

The TiiS 2017 Best Paper Award winners are Marius Kaminskas and Derek Bridge for "Diversity, Serendipity, Novelty, and Coverage: A Survey and Empirical Analysis of Beyond-Accuracy Objectives in Recommender Systems", from TiiS 7(1)!

The TiiS 2016 Best Paper Award is granted to Weike Pan, Qiang Yang, Yuchao Duan, and Zhong Ming for their article "Transfer Learning for Semi-Supervised Collaborative Recommendation", which appeared in TiiS 6(2). Congratulations to all the authors!

Forthcoming Articles

Introduction to the Special Issue on Human-Centered Machine Learning

Bi-Level Thresholding: Analyzing the Effect of Repeated Errors in Gesture Input

In gesture recognition, one challenge that researchers and developers face is the need for recognition strategies that mediate between false positives and false negatives. In this paper, we examine bi-level thresholding, a recognition strategy that uses two thresholds: a tighter threshold limits false positives and recognition errors, and a looser threshold prevents repeated errors (false negatives) by analyzing movements in sequence. We first describe early observations that led to the development of the bi-level thresholding algorithm. Next, using a Wizard-of-Oz recognizer, we hold recognition rates constant and compare fixed versus bi-level thresholding; we show that systems using bi-level thresholding result in significantly lower workload scores on the NASA-TLX and significantly lower accelerometer variance during gesture input. Finally, we examine the effect that bi-level thresholding has on a real-world data set of wrist and finger gestures, showing an ability to significantly improve measures of precision and recall. Overall, these results argue for the viability of bi-level thresholding as an effective technique for balancing false positives, recognition errors, and false negatives.
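
A minimal Python sketch of the two-threshold idea described above, for illustration only: it is not the authors' algorithm, and the score values, threshold levels, and retry window are hypothetical.

# Illustrative sketch (not the paper's implementation): the recognizer normally
# requires a high match score (tight threshold) to fire, but after a near-miss
# it temporarily accepts a lower score (loose threshold) so that the user's
# immediate retry of the same gesture is not rejected again.

def bilevel_recognize(scores, tight=0.85, loose=0.70, retry_window=2):
    """scores: per-attempt match scores in [0, 1]; returns one accept flag per attempt."""
    accepted = []
    retries_left = 0  # attempts remaining in the "loose" window opened by a near-miss
    for s in scores:
        threshold = loose if retries_left > 0 else tight
        if s >= threshold:
            accepted.append(True)
            retries_left = 0  # success resets the recognizer to the tight threshold
        else:
            accepted.append(False)
            # a near-miss (above the loose bar) opens a short loose window,
            # which is what prevents repeated rejections (false negatives)
            retries_left = retry_window if s >= loose else max(retries_left - 1, 0)
    return accepted

print(bilevel_recognize([0.80, 0.78, 0.90, 0.50]))  # [False, True, True, False]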

Creating new technologies for companionable agents to support isolated older adults

This paper reports on the development of capabilities for (on-screen) virtual agents and robots to support isolated older adults in their homes. A real-time architecture was developed to use a virtual agent or a robot interchangeably to interact with a human user via dialog and gesture. Users could interact with either agent on twelve different activities, including on-screen games and forms to complete. The paper reports on a pre-study that guided the choice of interaction activities. A month-long study with 44 adults between the ages of 55 and 91 assessed differences in the use of the robot and the virtual agent.

Modeling and Computational Characterization of Twitter Customer Service Conversations

Given the increasing popularity of customer service dialogue on Twitter, analysis of conversation data is essential to understand trends in customer and agent behavior for the purpose of automating customer service interactions. In this work, we develop a novel taxonomy of fine-grained "dialogue acts" frequently observed in customer service, showcasing acts that are more suited to the domain than the more generic existing taxonomies. Using a sequential SVM-HMM model, we model conversation flow, predicting the dialogue act of a given turn in real time, and showcase this using our "PredDial" portal. We characterize differences between customer and agent behavior in Twitter customer service conversations, and investigate the effect of testing our system on different customer service industries. Finally, we use a data-driven approach to predict important conversation outcomes: customer satisfaction, customer frustration, and overall problem resolution. We show that the type and location of certain dialogue acts in a conversation have a significant effect on the probability of desirable and undesirable outcomes, and present actionable rules based on our findings. We explore the correlations between different dialogue acts and the outcome of the conversations in detail through an actionable-rule discovery task, leveraging a state-of-the-art sequential rule mining algorithm while modeling a set of conversations as a set of sequences. The patterns and rules we derive can be used as guidelines for outcome-driven automated customer service platforms.
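
The Python fragment below is a deliberately simplified stand-in for the sequential model mentioned above (the paper uses an SVM-HMM over turn features): it only learns a first-order transition table between dialogue-act tags and predicts the most likely next act. The act labels and example conversations are hypothetical.

from collections import Counter, defaultdict

# Hypothetical labeled conversations: each one is a sequence of dialogue-act tags.
conversations = [
    ["complaint", "apology", "request_info", "inform", "thanks"],
    ["complaint", "request_info", "inform", "resolution", "thanks"],
]

# Count how often each act follows each other act.
transitions = defaultdict(Counter)
for acts in conversations:
    for prev, nxt in zip(acts, acts[1:]):
        transitions[prev][nxt] += 1

def predict_next_act(prev_act):
    """Return the most frequent follow-up act seen in training (None if unseen)."""
    counts = transitions.get(prev_act)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next_act("complaint"))  # 'apology' (ties broken by first occurrence)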

Using Machine Learning to Support Qualitative Coding in Social Science: Shifting The Focus to Ambiguity

Machine learning (ML) has become increasingly influential to human society, yet the primary advancements and applications of ML are driven by research in only a few computational disciplines. Even applications that affect or analyze human behaviors and social structures are often developed with limited input from experts outside of computational fields. Social scientists, experts trained to examine and explain the complexity of human behavior and interactions in the world, have considerable expertise to contribute to the development of ML applications for human-generated data, and their analytic practices could benefit from more human-centered ML methods. In this work, we highlight some of the gaps in applying ML to social science research. Building upon content analysis of social media papers, a survey study, and interviews, we summarize the current use and challenges of ML in social sciences. Additionally, we utilize our experience designing a visual analytics tool for collaborative qualitative coding as a case study to illustrate how we might re-imagine the way ML could support workflows for social scientists. Finally, we propose three research directions to ground ML applications for social science with the ultimate goal of achieving truly human-centered machine learning.

Crowdsourcing Ground Truth for Medical Relation Extraction

Cognitive computing systems require human-labeled data for evaluation, and often for training. The standard practice used in gathering this data minimizes disagreement between annotators, and we have found that this results in data that fails to account for the ambiguity inherent in language. We have proposed the CrowdTruth method for collecting ground truth through crowdsourcing, which reconsiders the role of people in machine learning based on the observation that disagreement between annotators provides a useful signal for phenomena such as ambiguity in the text. We report on using this method to build an annotated data set for medical relation extraction for the "cause" and "treat" relations, and how this data performed in a supervised training experiment. We demonstrate that by modeling ambiguity, labeled data gathered from crowd workers can (1) reach the level of quality of domain experts for this task while reducing the cost, and (2) provide better training data at scale than distant supervision. We further propose and validate new weighted measures for precision, recall, and F-measure that account for ambiguity in both human and machine performance on this task.
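
A small Python sketch of the general idea of ambiguity-weighted evaluation: each example carries a weight, for instance the fraction of crowd annotators who agreed on its label, so clear-cut examples count more than ambiguous ones. The actual CrowdTruth measures are defined differently; the labels and weights below are hypothetical.

def weighted_precision_recall(y_true, y_pred, weights):
    """Precision/recall/F1 in which each example contributes its weight instead of 1."""
    tp = sum(w for t, p, w in zip(y_true, y_pred, weights) if t and p)
    fp = sum(w for t, p, w in zip(y_true, y_pred, weights) if not t and p)
    fn = sum(w for t, p, w in zip(y_true, y_pred, weights) if t and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true  = [1, 1, 0, 1]          # crowd-derived relation labels
y_pred  = [1, 0, 1, 1]          # machine predictions
weights = [0.9, 0.4, 0.8, 1.0]  # per-example annotator agreement (hypothetical)
print(weighted_precision_recall(y_true, y_pred, weights))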

MobInsight: A Framework Using Semantic Neighborhood Features for Localized Interpretations of Urban Mobility

Collective urban mobility embodies residents' local insights about their city. Mobility practices of the residents are produced from their spatial choices, which involve various considerations such as the atmosphere of destinations, distance, past experiences, and preferences. The advances in mobile computing and the rise of geo-social platforms have provided the means for capturing these mobility practices; however, interpreting residents' insights is challenging due to the scale and complexity of an urban environment and its unique context. In this paper, we present MobInsight, a framework for making localized interpretations of urban mobility that reflect various aspects of urbanism. MobInsight extracts a rich set of neighborhood features through holistic semantic aggregation and models the mobility between all pairs of neighborhoods. We evaluate MobInsight with mobility data from Barcelona and demonstrate diverse localized and semantically rich interpretations.

An Active Sleep Monitoring Framework Using Wearables

Sleep is the most important aspect of healthy and active living. The right amount of sleep at the right time helps an individual protect their physical, mental, and cognitive health and maintain their quality of life. Sleep, the most durative of the Activities of Daily Living (ADL), has a major synergic influence on a person's functional, behavioral, and cognitive health. A deep understanding of sleep behavior and its relationship with physiological signals and contexts (such as eye or body movements) is necessary to design and develop a robust intelligent sleep monitoring system. In this paper, we propose an intelligent algorithm to detect the microscopic states of sleep, which fundamentally constitute the components of good and bad sleeping behavior and thus help shape a formative assessment of sleep quality. Our initial analysis includes the investigation of several classification techniques to identify and correlate the relationship of microscopic sleep states with overall sleep behavior. Subsequently, we also propose an online algorithm based on change point detection to process and classify the microscopic sleep states. We also develop a lightweight version of the proposed algorithm for real-time sleep monitoring, recognition, and assessment at scale. For a larger deployment of our proposed model across a community of individuals, we propose an active-learning-based methodology to reduce the effort of ground truth data collection and labeling. Finally, we evaluate the performance of our proposed algorithms on real data traces, and demonstrate the efficacy of our models for detecting and assessing fine-grained sleep states beyond an individual.
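
As a rough illustration of online change-point detection of the kind alluded to above, here is a CUSUM-style Python sketch; the paper's actual algorithm, features, and parameters are not specified here, so everything below is an assumption.

def cusum_changepoints(signal, target_mean, drift=0.05, threshold=0.5):
    """Flag indices where the accumulated deviation from target_mean exceeds threshold."""
    pos, neg, changes = 0.0, 0.0, []
    for i, x in enumerate(signal):
        pos = max(0.0, pos + (x - target_mean) - drift)
        neg = max(0.0, neg - (x - target_mean) - drift)
        if pos > threshold or neg > threshold:
            changes.append(i)
            pos, neg = 0.0, 0.0  # restart the accumulators after reporting a change
    return changes

# Hypothetical stream of per-epoch accelerometer variance: quiet sleep, then movement.
stream = [0.10, 0.12, 0.09, 0.11, 0.45, 0.50, 0.48, 0.10, 0.11]
print(cusum_changepoints(stream, target_mean=0.1))  # [5]: the movement burst is flagged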

A Review of User Interface Design for Interactive Machine Learning

Interactive Machine Learning (IML) seeks to complement human perception and intelligence by tightly integrating these strengths with the computational power and speed of computers. The interactive process is designed to involve input from the user but does not require the background knowledge or experience that might be necessary to work with more traditional machine learning techniques. Under the IML process, non-experts can apply their domain knowledge and insight over otherwise unwieldy datasets to find patterns of interest or develop complex data driven applications. This process is co-adaptive in nature and relies on careful management of the interaction between human and machine. Design of the interface is fundamental to the success of this approach, yet there is a lack of consolidated principles on how such an interface should be implemented. This article presents a detailed review and characterization of Interactive Machine Learning from an interactive systems perspective. We propose and describe a structural and behavioural model of a generalized IML system and identify solution principles for building effective interfaces for IML. Where possible, these emergent solution principles are contextualized by reference to the broader human-computer interaction literature. Finally, we identify strands of user interface research key to unlocking more efficient and productive non-expert interactive machine learning applications.

A Human-in-the-loop System for Sound Event Detection and Annotation

Tagging of environmental audio events is essential for many tasks. However, finding sound events and labeling them within a long audio file is tedious and time-consuming. In cases where there is very little labeled data (e.g. a single labeled example), it is often not feasible to train an automatic labeler, because many techniques (e.g. deep learning) require a large number of human-labeled training examples. Also, fully-automated labeling may not show sufficient agreement with human labeling for many uses. We describe a human-in-the-loop labeling approach that lets a single user greatly reduce the time required to label audio that is tediously long for a human (e.g. 20 hours), has target sounds that are sparse in the audio (10% or less of the audio contains the target), and has too few prior labeled examples (e.g. one) to train a state-of-the-art machine audio labeling system. In this work we describe an interactive sound annotator for this use case. Results from a human-subject study show our tool helped participants label all target sound events within a recording twice as fast as labeling them manually. We present a method to decompose the overall performance of the proposed system into two key factors, interaction overhead and machine accuracy, by measuring each of them separately. These results indicate a future system should be able to speed labeling by as much as a factor of four.

It's Not Just About Accuracy: Metrics that Matter when Modeling Expert Sketching Ability

Design sketching is an important tool for designers and creative professionals to express their ideas and concepts in a visual medium. Because it is a critical and versatile skill for many different disciplines, courses on design sketching are taught at some universities. Courses today predominantly rely on pen and paper; however, this traditional pedagogy is limited by the availability of human instructors who can provide personalized feedback. Using a stylus-based intelligent tutoring system called PerSketchTivity, we aim to mimic the feedback given by an instructor and assess student-drawn sketches to give students insight into the areas they need to improve. In order to provide effective feedback to users, it is important to identify what features of their sketches they should work on to improve their sketching ability. After consulting with several domain experts in sketching, we compiled an initial list of 22 different metrics that could potentially differentiate expert and novice sketches. We gathered over 2000 sketches from 20 novices and four experts for analysis. Seven metrics were shown to significantly correlate with the quality of expert sketches and provided insight into providing intelligent user feedback as well as an overall model of expert sketching ability.

Toward an Understanding of Trust Repair in Human-Robot Interaction: Current Research and Future Directions

Gone are the days of robots solely operating in isolation, without direct interaction with people. Rather, robots are increasingly being deployed in environments and roles that require complex social interaction with humans. The implementation of human-robot teams continues to increase as technology develops in tandem with the state of human-robot interaction (HRI) research. Trust, a major component of much human interaction, is an important facet of HRI. However, the ideas of trust repair and trust violations are understudied in the HRI literature. Trust repair is the activity of rebuilding trust after one party breaks the trust of another. These trust breaks are referred to as trust violations. As HRI becomes widespread, so will trust violations; as a result, a clear understanding of the process of HRI trust repair must be developed in order to ensure that a human-robot team can continue to perform well after trust is violated. Previous research on human-automation trust and human-human trust can serve as starting places for exploring trust repair in HRI. Although existing models of human-automation and human-human trust are helpful, they do not account for some of the complexities of building and maintaining trust in unique relationships between humans and robots. As such, the purpose of this paper is to provide a foundation for exploring human-robot trust repair by drawing upon prior work in the human-robot and human-human trust literature, concluding with recommendations for advancing this body of work.

Dynamic Handwriting Signal Features Predict Domain Expertise

As pen-centric systems increase in the marketplace, they create a parallel need for learning analytic techniques based on dynamic writing. Recent empirical research has shown that signal-level features of dynamic handwriting, such as stroke distance, pressure, and duration, are adapted to conserve total energy expenditure as individuals consolidate expertise in a domain. The aim of this research was to examine how accurately three different machine learning algorithms could automatically classify students by their level of domain expertise, without conducting any written content analysis. Compared with an unsupervised classification accuracy of 71%, a hybrid approach that combined empirical-statistical guidance of machine learning consistently led to correctly classifying 79-92% of students by their expertise level. The hybrid approach also enabled deriving a causal understanding of the basis for prediction success, improved transparency, and a foundation for generalizing results. These findings open up opportunities to design new student-adaptive educational technologies based on individualized data for existing pen-centric systems.

Perceptual Validation for the Generation of Expressive Movements from End-Effector Trajectories

Endowing animated virtual characters with emotionally expressive behaviors is paramount to improve the quality of the interactions between humans and virtual characters. Full-body motion, in particular its subtle kinematic variations, represents an effective way of conveying emotionally expressive content. However, before synthesizing expressive full-body movements, it is necessary to identify and understand what qualities of human motion are salient to the perception of emotions and how these qualities can be exploited when generating novel and equally expressive full-body movements. Based on previous studies, we argue that it is possible to perceive and generate expressive full-body movements from end-effector trajectories alone. Hence, end-effector trajectories define a reduced motion space that is adequate for the characterization of the expressive qualities of human motion and that is both fitting for the analysis and generation of emotionally expressive full-body movements. The purpose and main contribution of this work is the methodological framework we defined and used to assess the validity and applicability of the end-effector trajectories for the perception and generation of expressive full-body movements. This framework consists of the creation of a motion capture database of expressive theatrical movements, the development of a motion synthesis system based on trajectories re-played or re-sampled and inverse kinematics, and two perceptual studies.

Predicting User's Confidence During Visual Decision Making

People are not infallible, consistent "oracles": their confidence in decision-making may vary significantly between tasks and over time. We have previously reported the benefits of using an interface and algorithms that explicitly captured and exploited users' confidence: error rates were reduced by up to 50% for an industrial multi-class learning problem, and the number of interactions required in a design optimisation context was reduced by 33%. Having access to users' confidence judgements could significantly benefit intelligent interactive systems in industry, in areas such as intelligent tutoring systems, and in healthcare. There are many reasons for wanting to capture information about confidence implicitly. Some are ergonomic, but others are more 'social', such as wishing to understand (and possibly take account of) users' cognitive state without interrupting them. We investigate the hypothesis that users' confidence can be accurately predicted from measurements of their behaviour. Eye-tracking systems were used to capture users' gaze patterns as they undertook a series of visual decision tasks, after each of which they reported their confidence on a 5-point Likert scale. Subsequently, predictive models were built using "conventional" machine learning approaches for numerical summary features derived from users' behaviour. We also investigate the extent to which the deep learning paradigm can reduce the need to design features specific to each application, by creating "gazemaps" (visual representations of the trajectories and durations of users' gaze fixations) and then training deep convolutional networks on these images. Treating the prediction of user confidence as a two-class problem (confident/not confident), we attained classification accuracy of 88% for the scenario of new users on known tasks, and 87% for known users on new tasks. Considering confidence as an ordinal variable, we produced regression models with a mean absolute error of approximately 0.7 in both cases. Capturing just a simple subset of non-task-specific numerical features gave slightly worse, but still quite high, accuracy (e.g. MAE of approximately 1.0). Results obtained with gazemaps and convolutional networks are competitive, despite not having access to longer-term information about users and tasks, which was vital for the "summary" feature sets. This suggests that the gazemap-based approach forms a viable, transferable alternative to hand-crafting features for each different application. These results provide significant evidence to confirm our hypothesis, and offer a way of substantially improving many interactive artificial intelligence applications via the addition of cheap, non-intrusive hardware and computationally cheap prediction algorithms.
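
One plausible way to rasterize fixations into a "gazemap" image is to place a Gaussian blob at each fixation, weighted by dwell time; the abstract does not specify the rendering, so the Python sketch below (image size, sigma, normalization) is an assumption rather than the authors' method.

import numpy as np

def gazemap(fixations, size=(64, 64), sigma=3.0):
    """fixations: list of (x, y, duration_seconds) with x, y in pixel coordinates."""
    h, w = size
    ys, xs = np.mgrid[0:h, 0:w]
    img = np.zeros(size, dtype=float)
    for x, y, dur in fixations:
        # longer fixations contribute brighter, duration-weighted blobs
        img += dur * np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    if img.max() > 0:
        img /= img.max()  # normalize to [0, 1] before feeding a convolutional network
    return img

demo = gazemap([(10, 12, 0.8), (40, 30, 2.5), (42, 33, 1.2)])
print(demo.shape, round(float(demo.max()), 2))  # (64, 64) 1.0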

Trust-based Multi-Robot Symbolic Motion Planning with a Human-in-the-Loop

Symbolic motion planning for robots is the process of specifying and planning robot tasks in a discrete space, then carrying them out in a continuous space in a manner that preserves the discrete-level task specifications. Despite progress in symbolic motion planning, many challenges remain, including addressing scalability for multi-robot systems and improving solutions by incorporating human intelligence. In this paper, distributed symbolic motion planning for multi-robot systems is developed to address scalability. More specifically, compositional reasoning approaches are developed to decompose the global planning problem, and atomic propositions for observation, communication, and control are proposed to address inter-robot collision avoidance. To improve solution quality and adaptability, a dynamic, quantitative, and probabilistic human-to-robot trust model is developed to aid this decomposition. Furthermore, a trust-based real-time switching framework is proposed to switch between autonomous and manual motion planning for tradeoffs between task safety and efficiency. Deadlock- and livelock-free algorithms are designed to guarantee reachability of goals with a human-in-the-loop. A set of non-trivial multi-robot simulations with direct human input and trust evaluation is provided, demonstrating the successful implementation of the trust-based multi-robot symbolic motion planning methods.
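
To make the switching idea concrete, here is a toy Python sketch of a scalar trust estimate that is updated from task outcomes and used to fall back to manual planning when it drops below a threshold; the update rule, gains, and threshold are hypothetical and much simpler than the probabilistic trust model described above.

def update_trust(trust, success, gain=0.1, penalty=0.3):
    """Raise trust slightly after a successful task, lower it more sharply after a failure."""
    return min(1.0, trust + gain) if success else max(0.0, trust - penalty)

def choose_mode(trust, threshold=0.5):
    return "autonomous" if trust >= threshold else "manual"

trust = 0.8
for outcome in [True, True, False, False, True]:  # hypothetical sequence of task outcomes
    trust = update_trust(trust, outcome)
    print(f"trust={trust:.2f} -> {choose_mode(trust)}")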

Motion-Sound Mapping through Interaction: An Approach to User-Centered Design of Auditory Feedback using Machine Learning

Technologies for sensing movement are expanding towards everyday use in virtual reality, gaming, and artistic practices. In this context, there is a need for methodologies and frameworks to help designers and users create meaningful movement experiences. Mapping through Interaction is a conceptual and computational framework for crafting sonic interactions from demonstrations of embodied associations between motion and sound. It draws upon existing literature emphasizing the importance of bodily experience in sound perception and cognition, and uses interactive machine learning to build the mapping iteratively from user demonstrations. We present a method for modeling the mapping between motion and sound parameter sequences using probability distributions. In particular, we examine Gaussian Mixture Regression and a hierarchical extension to Hidden Markov Regression for continuous movement recognition and sound parameter generation. We discuss the role and interpretation of the model parameters for user-centered interaction design. We review two applications of the approach where users can personalize hand gesture control strategies for continuous interaction with sound textures or vocalizations.
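
For readers unfamiliar with Gaussian Mixture Regression, the Python sketch below shows the core regression step in one dimension: given a joint mixture over (motion parameter x, sound parameter y), the prediction is a responsibility-weighted sum of per-component conditional means. The mixture parameters are hypothetical, and the paper's models (including the hierarchical Hidden Markov Regression extension) are substantially richer.

import math

# Two hypothetical mixture components: (weight, mean_x, mean_y, var_x, cov_xy)
components = [(0.5, 0.0, 0.0, 1.0, 0.8),
              (0.5, 4.0, 3.0, 1.0, -0.5)]

def gmr_predict(x):
    """E[y | x] under the joint Gaussian mixture defined by `components`."""
    resp, cond_means = [], []
    for w, mx, my, vx, cxy in components:
        # responsibility of this component for the observed x
        resp.append(w * math.exp(-0.5 * (x - mx) ** 2 / vx) / math.sqrt(2 * math.pi * vx))
        # conditional mean of y given x within this component
        cond_means.append(my + cxy / vx * (x - mx))
    total = sum(resp)
    return sum(r / total * m for r, m in zip(resp, cond_means))

print(round(gmr_predict(0.5), 3))  # close to 0.4: the first component dominates near x = 0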

Cues of Violent Intergroup Conflict Diminish Perceptions of Robotic Personhood

Convergent lines of evidence indicate that anthropomorphic robots are represented using neurocognitive mechanisms typically employed in social reasoning about other people. Relatedly, a growing literature documents that contexts of threat can exacerbate coalitional biases in social perceptions. Integrating these research programs, the present studies test whether cues of violent intergroup conflict modulate perceptions of the intelligence, emotional experience, or overall personhood of robots. In Studies 1 and 2, participants evaluated a large, bipedal all-terrain robot; in Study 3, participants evaluated a small, social robot with humanlike facial and vocal characteristics. Across all studies, cues of violent conflict caused significant decreases in perceived robotic personhood, and this shift was mediated by parallel reductions in emotional sympathy with the robot (with no significant effects of threat on attributions of intelligence). In addition, in Study 2, participants in the conflict condition estimated the large bipedal robot to be less effective in military combat, and this difference was mediated by the reduction in perceived robotic personhood. These results are discussed as they motivate future investigation into the links between threat, coalitional bias and human-robot interaction.

Visualizing Ubiquitously Sensed Measures of Motor Ability in Multiple Sclerosis: Reflections on communicating machine learning in practice

Sophisticated ubiquitous sensing systems are being used to measure motor ability in clinical settings. Intended to augment clinical decision-making, the interpretability of the underlying machine learning measurements becomes critical to their use. We explore how visualization can support the interpretability of machine learning measures through the case of Assess MS, a system to support the clinical assessment of Multiple Sclerosis. A substantial design challenge is to make visible the algorithm's decision-making process in a way that allows clinicians to integrate the algorithm's result into their own decision process. To this end, we present an iterative design research study that draws out challenges of supporting interpretability in a real-world system. The key contribution of this paper is to illustrate that simply making visible the algorithmic decision-making process is not helpful in supporting clinicians in their own decision-making process. It disregards that people and algorithms make decisions in different ways. Instead, we propose that visualisation can provide context to algorithmic decision-making, rendering observable a range of internal workings of the algorithm, from data quality issues to the web of relationships generated in the machine learning process.

Bibliometrics

Publication Years 2011-2018
Publication Count 164
Citation Count 718
Available for Download 164
Downloads (6 weeks) 1628
Downloads (12 Months) 13557
Downloads (cumulative) 65870
Average downloads per article 402
Average citations per article 4
First Name Last Name Award
Gregory Abowd ACM Eugene L. Lawler Award for Humanitarian Contributions within Computer Science and Informatics (2009)
ACM Fellows (2008)
ACM Senior Member (2008)
Craig Boutilier ACM Fellows (2012)
Oliver Brdiczka ACM Senior Member (2015)
Peter Brusilovsky ACM Senior Member (2008)
Margaret Burnett ACM Fellows (2017)
ACM Distinguished Member (2015)
Yolanda Gil ACM Fellows (2016)
Michael L Gleicher ACM Distinguished Member (2011)
Tracy Anne Hammond ACM Senior Member (2015)
Andreas Kerren ACM Senior Member (2013)
Joseph A Konstan ACM Software System Award (2010)
ACM Fellows (2008)
ACM Distinguished Member (2006)
Wessel Kraaij ACM Distinguished Member (2017)
ACM Senior Member (2007)
Sarit Kraus ACM Fellows (2014)
Tsvi Kuflik ACM Distinguished Member (2013)
ACM Senior Member (2012)
Robin R Murphy ACM Eugene L. Lawler Award for Humanitarian Contributions within Computer Science and Informatics (2014)
Jeffrey Nichols ACM Senior Member (2013)
Fabio Paterno ACM Distinguished Member (2009)
Stefano Piana ACM Gordon Bell Prize Special Category (2009)
John T Riedl ACM Software System Award (2010)
ACM Fellows (2009)
ACM Distinguished Member (2007)
Tom Rodden ACM Fellows (2014)
Domenico Sacca ACM Senior Member (2007)
Ben Shneiderman ACM Fellows (1997)
Matthew A Turk ACM Senior Member (2007)
Qiang Yang ACM Fellows (2017)
ACM Distinguished Member (2011)

First Name Last Name Paper Counts
Kazuhiro Otsuka 3
Albert Salah 3
John Riedl 3
Joseph LaViola 3
Anthony Jameson 3
Frédéric Bevilacqua 3
Shiro Kumano 3
Ryo Ishii 3
Elisabeth André 3
Yang Wang 2
Ana Paiva 2
Junji Yamato 2
Magalie Ochs 2
Bilge Mutlu 2
Yukiko Nakano 2
Kristiina Jokinen 2
Oya Aran 2
Catherine Pélachaud 2
Heriberto Cuayáhuitl 2
Nina Dethlefs 2
Louis Morency 2
Catherine Havasi 2
Federica Cena 2
Lola Cañamero 2
Giuseppe Carenini 2
Alexander Felfernig 2
Henry Lieberman 2
Cristina Gena 2
Ginevra Castellano 2
Shlomo Berkovsky 2
Iolanda Leite 2
Matthew Turk 2
Chen Yu 2
Ulf Blanke 2
Nan Cao 2
Michael Jugovac 2
Bart Knijnenburg 2
Kim Bard 2
Hatice Gunes 2
Gregory Abowd 2
Dietmar Jannach 2
Hiroshi Ishiguro 2
Sidney D'mello 2
Eduardo Veas 2
Evangelos Milios 2
Hayley Hung 2
Eugene Taranta 2
Yolanda Gil 2
Sigrid Knust 1
Minsuk Kahng 1
Duenhorng Chau 1
Yuru Lin 1
Yong Wang 1
Qingqing Bi 1
Thomas Kirste 1
Akiko Yamazaki 1
Keiko Ikeda 1
Todd Kulesza 1
Ian Oberst 1
Steven Hoi 1
Sandra Okita 1
Brittany Duncan 1
Branislav Kveton 1
Jaclyn Ocumpaugh 1
Caleb Southern 1
Fiora Pirri 1
Takeo Igarashi 1
Kyriakos Kritikos 1
Mark D'Inverno 1
Saleema Amershi 1
Jill Freyne 1
Ya'akov Gal 1
Jin Zhao 1
Marco De Gemmis 1
Lutz Frommberger 1
Matthew Marge 1
Tessa Lau 1
Anat Mirelman 1
Maya Sappelli 1
Joao Oliveira 1
Ilhan Aslan 1
Kenji Sagae 1
Jean Martens 1
Augusto Pieracci 1
Dairazalia Sanchez-Cortes 1
Priscilla Moraes 1
Pascal Poupart 1
Andrew Monk 1
Bo Yin 1
Ronnie Taib 1
Petteri Nurmi 1
Antti Salovaara 1
Fernando Nos 1
Elia Bruni 1
Nicu Sebe 1
Conglei Shi 1
Luca Console 1
Mario Mirabelli 1
Federica Protti 1
Giulia Biamino 1
Franco Fassio 1
Wei Song 1
Florian Eyben 1
Laurel Riek 1
Christopher Peters 1
Brett Stevens 1
Hirohisa Furukawa 1
Evelien Van De Garde-Perik 1
Elise Hoven 1
Laura Pomarjanschi 1
David Rozado 1
Francisco Rodrıguez 1
Takayuki Kanda 1
Ludger Van Elst 1
Peter Weller 1
Peter McOwan 1
Alessandra Staglianò 1
Enamul Hoque 1
Chris Newell 1
Gordon Cheng 1
Gary McKeown 1
Cecilia Sciascio 1
Nicholas Davis 1
Brian Magerko 1
Kai Kunze 1
Sarah Fdili Alaoui 1
Misato Yatsushiro 1
Hanghang Tong 1
Peter Polack 1
Yi Yang 1
Weiyi Wang 1
Abdulmalik Ofemile 1
Yoshinori Kobayashi 1
Brandon Paulson 1
Sylvie Gibet 1
Wengkeen Wong 1
Ravi Sarvadevabhatla 1
Fang Chen 1
Yi Yang 1
Sakti Sakriani 1
Hideki Negoro 1
Lixiu Yu 1
Kostiantyn Kucher 1
Yuchao Duan 1
Cheng Zhang 1
Simon Dobson 1
Berardina Carolis 1
Desney Tan 1
Dimitris Plexousakis 1
Paolo Cremonesi 1
Soheil Bahreini 1
Dimitrios Rafailidis 1
Steven Bethard 1
James Martin 1
Simon Keizer 1
Sean Andrist 1
Michael Gleicher 1
Jane Hsu 1
Rosalind Picard 1
John Dill 1
Chris Shaw 1
Peng Wu 1
Charles Greenbacker 1
Daniel Chester 1
Jesse Hoey 1
M Khawaja 1
Danilo Rodrigues 1
Jasper Uijlings 1
Andreza Sartori 1
Arthur Graesser 1
Livio Robaldo 1
Pierluigi Grillo 1
Michele Mioli 1
Rossana Simeoni 1
Kumiko Tanaka-Ishii 1
Keiji Yasuda 1
Michael Glodek 1
Jean Martin 1
Jongseok Lee 1
Joe Finney 1
Monique Lu 1
Serge Offermans 1
Paul Schermerhorn 1
Matthias Scheutz 1
Dan Tasse 1
Francesca Odone 1
Amy Swanson 1
Nilanjan Sarkar 1
Fabian Christoffel 1
Longfei Zhang 1
Masahiko Inami 1
Joseph Konstan 1
Shangtse Chen 1
Kaya De Barbaro 1
Rahul Basole 1
Michael Steptoe 1
Ross Maciejewski 1
Belgin Mutlu 1
Ariel Rosenfeld 1
Mihoko Fukushima 1
John O'Donovan 1
Thibaut Naour 1
Simone Stumpf 1
Stephen Perona 1
Andrew Ko 1
Andreas Kerren 1
Jie Lu 1
Satoshi Nakamura 1
Angelo Cafaro 1
Hannes Vilhjálmsson 1
Nigel Bosch 1
Valerie Shute 1
Rosa Arriaga 1
Eyal Dim 1
Tsvi Kuflik 1
James Young 1
Rohit Kumar 1
Petros Daras 1
Souneil Park 1
Maurits Kaptein 1
Emile Aarts 1
Nicholas Mattei 1
Judy Goldsmith 1
Alfred Kobsa 1
Tessa Lau 1
Heather Leary 1
Giovanni Semeraro 1
Mary Ellen Foster 1
Thaddeus Simons 1
Helmut Prendinger 1
Antonio Sánchez-Ruiz 1
Alexander Meschtscherjakov 1
Yasmine El-Glaly 1
Catherine Plaisant 1
Karthik Dinakar 1
Carlos Correa 1
Yingjievictor Chen 1
Rita Cucchiara 1
Birgit Lugrin 1
Patrick Olivier 1
Eric Choi 1
Varun Ratnakar 1
Paul Groth 1
Julien Epps 1
Patrik Floréen 1
Charles Callaway 1
Remco Chang 1
Massimo Poesio 1
Jon Chamberlain 1
Alessandro Marcengo 1
Monica Perrero 1
Amon Rapp 1
Ilaria Torre 1
Fabio Torta 1
Eiichiro Sumita 1
Nick Campbell 1
Toyoaki Nishida 1
Seiichi Yamamoto 1
Hans Gellersen 1
Mathieu Boussard 1
Jodi Forlizzi 1
Chiara Pulice 1
Ziad Bawab 1
Lukas Lerche 1
Derek Bridge 1
Gangyi Ding 1
Tianyu Huang 1
Brenda Lin 1
Masa Ogata 1
Yuta Sugiura 1
Christian Jacquemin 1
David Meignan 1
Fangzhou Guo 1
Steven Sutherland 1
Tom Rodden 1
Keiichi Yamazaki 1
Rong Jin 1
Nargess Nourbakhsh 1
Hana Boukricha 1
Ipke Wachsmuth 1
Hiroki Tanaka 1
Katrien Verbert 1
Magnus Sahlgren 1
Juan Ye 1
Graeme Stevenson 1
Domenico Redavid 1
Geert Kruijff 1
Jun Rekimoto 1
Fabio Paternò 1
Jonas Etzold 1
Panos Markopoulos 1
Philipp Wetzler 1
Li Chen 1
Ngoanh Vien 1
Ivana Kruijff-Korbayová 1
Sinziana Mazilu 1
Gerhard Tröster 1
Daniel Gatica-Perez 1
Sandra Carberry 1
David Oliver 1
Fang Chen 1
Timothy Chklovski 1
Oliviero Stock 1
Victoria Yanulevskaya 1
Fabrizio Antonelli 1
Claudia Picardi 1
Daniele Dupré 1
Elisa Chiabrando 1
Matteo Demichelis 1
Andrew Finch 1
Günther Palm 1
Martin WöLlmer 1
Yale Song 1
Joyce Chai 1
Kris Luyten 1
Pierrick Thébault 1
Koen Van Boerdonk 1
Javier San Agustin 1
Samer Al Moubayed 1
Jens Edlund 1
Ralf Biedert 1
Fabio Zanzotto 1
Ben Steichen 1
Gawesh Jawaheer 1
Georg Buscher 1
Geert Houben 1
Markus Strohmaier 1
Zachary Warren 1
Roman Bednarik 1
Donald Glowinski 1
Hui Zhang 1
Michael Zehetleitner 1
Anne Meyer 1
Ningxiao Sun 1
Chihpin Hsiao 1
Gilles Pesant 1
Ryan Kiros 1
Ehsan Sherkat 1
Rosane Minghim 1
Xing Liang 1
Huamin Qu 1
Leigh Clark 1
Mercan Topkara 1
Hidemi Iwasaka 1
Peter Robinson 1
Peter Brusilovsky 1
Andrés Vargas 1
Dingtian Zhang 1
Carita Paradis 1
Weike Pan 1
Stefano Ferilli 1
Martin Cooney 1
Eelke Folmer 1
Andreas Bulling 1
Jeffrey Allen 1
Roberto Turrin 1
Marco Gillies 1
Franca Garzotto 1
Sangyoung Chung 1
Boris De Ruyter 1
Kirsten Butcher 1
Tamara Sumner 1
Hung Ngo 1
Matthew Luciw 1
Antoine Raux 1
Zhuoran Wang 1
James Deng 1
Tomislav Pejša 1
Nahum Álvarez 1
Manfred Tscheligi 1
Kwanliu Ma 1
Roberto Vezzani 1
Paolo Santinelli 1
Gregor Mehlmann 1
Florian Lingenfelser 1
Kathleen McCoy 1
Edward Schwartz 1
Alex Mihailidis 1
Sunghyun Park 1
Andrew Gordon 1
Andreas Forsblom 1
David Ebert 1
Nadir Weibel 1
Bob Kummerfeld 1
Luca Ducceschi 1
Marina Geymonat 1
Piercarlo Grimaldi 1
Vincenzo Cuciti 1
Fausto Giunchiglia 1
Kostas Karpouzis 1
Ashkan Yazdani 1
Andreas Dengel 1
Seungjun Kim 1
Jean Crespo 1
André Pereira 1
Floriane Dardard 1
Giorgio Gnecco 1
Bibek Paudel 1
Karinne Ramirez-Amaro 1
Humera Minhas 1
Ting Zhang 1
Yuting Li 1
Marius Kaminskas 1
Shuo Yan 1
Yufeng Wu 1
Kunwar Singh 1
Yihsuan Yang 1
Yuanching Teng 1
Jared Bott 1
Jean Frayret 1
Nicolas Gaud 1
Axel Soto 1
Franklin Harper 1
Siwei Fu 1
Huamin Qu 1
Martijn Willemsen 1
Kyle Duarte 1
Margaret Burnett 1
Victor Ng-Thow-Hing 1
Rafael Calvo 1
Denis Parra 1
Anhong Guo 1
Atau Tanaka 1
Kostas Bekris 1
Mario Gianni 1
Navid Fallah 1
Stavroula Manolopoulou 1
Thomas Dodson 1
Apostolos Axenopoulos 1
Jeffrey Nichols 1
Ofra Amir 1
Alfredo Milani 1
Luciana Benotti 1
Martin Villalba 1
Rahul Sukthankar 1
Wessel Kraaij 1
Marc Cavazza 1
Joao Catarino 1
Hansuk Shim 1
Yenling Kuo 1
Jesse Vig 1
Birago Jones 1
Zhenyucheryl Qian 1
Ionut Damian 1
Denny Vrandečić 1
Elyon DeKoven 1
Udo Kruschwitz 1
Roberto Furnari 1
Ilaria Lombardi 1
Dario Mana 1
Friedhelm Schwenker 1
Masafumi Nishida 1
Melanie Hartman 1
Erhardt Barth 1
Pablo Varona 1
Lorenzo Ferrone 1
Elio Masciari 1
Antonio Camurri 1
Lian Zhang 1
Dayi Bian 1
Medha Sarkar 1
Sana Malik 1
Fan Du 1
Juan Wachs 1
Katsutoshi Masai 1
Takashi Yoshino 1
Yutaka Takase 1
Seyednaser Nourashrafeddin 1
Liangyue Li 1
Moushumi Sharmin 1
Rolando Garcia 1
Christoph Trattner 1
Sarit Kraus 1
Michael Young 1
Kristina Yordanova 1
Valentin Enescu 1
Chen Liu 1
Nava Tintarev 1
Tracy Hammond 1
Nicolas Courty 1
Shimei Pan 1
Graham Neubig 1
Tadas Baltrušaitis 1
Ryan Baker 1
Spencer Compton 1
Zhong Ming 1
Baptiste Caramiaux 1
Shuichi Nishio 1
Ilias Apostolopoulos 1
Carolyn Rosé 1
Bruno Zamborlin 1
Seungwoo Kang 1
Junehwa Song 1
Joshua Guerin 1
Pierre Andrews 1
Pasquale Lops 1
Alexander Förster 1
Jürgen Schmidhuber 1
Hendrik Zender 1
Oliver Lemon 1
Clement Leung 1
Li Chen 1
Moran Dorfman 1
Eran Gazit 1
Jeffrey Hausdorff 1
Suzan Verberne 1
Rui Prada 1
Shuji Fujimoto 1
Andreas Uhl 1
Francis Quek 1
Shilad Sen 1
Yuhsuan Chan 1
Martino Lombardi 1
Johannes Wagner 1
Seniz Demir 1
Craig Boutilier 1
Natalie Ruiz 1
Reid Swanson 1
Andrea Toso 1
Francesca Carmagnola 1
Fabiana Vernero 1
David Robertson 1
Stefan Scherer 1
Björn Schuller 1
Aryel Beck 1
Marina Davila-Ross 1
Jean Vesin 1
David Demirdjian 1
Randall Davis 1
Daniel Schreiber 1
Max Mühlhäuser 1
Oliver Brdiczka 1
Antoine Hiolle 1
Kars Lenssen 1
Jessica Hodgins 1
Joyce Chai 1
Anind Dey 1
Nunziato Cassavia 1
Yi Fang 1
Cristina Conati 1
Carlos Martinho 1
Stefano Piana 1
Joshua Wade 1
Amy Weitlauf 1
Hunghsuan Huang 1
Tian(Linger) Xu 1
Margrét Bjarnadóttir 1
David Gotz 1
Vedran Sabol 1
Maki Sugimoto 1
Vlado Kešelj 1
Yong Wang 1
Robert Krüger 1
Casper Harteveld 1
Svenja Adolphs 1
Hichem Sahli 1
Yoshinori Kuno 1
Robin Murphy 1
Yangqiu Song 1
Tomoki Toda 1
Marwa Mahmoud 1
Brian Ravenet 1
Qiang Yang 1
Yang Li 1
Nicola Montecchio 1
Ehud Sharlin 1
Daisuke Sakamoto 1
Jalal Mahmud 1
Gregory Smith 1
German Ruiz 1
Francesco Ricci 1
Jawad Nagi 1
Mika Shigematsu 1
Moitreya Chatterjee 1
Robert Woodbury 1
Maria Riveiro 1
Tobias Baur 1
Patrick Gebhard 1
Stephanie Elzer 1
Fabian Bohnert 1
Daniel Keim 1
Emily Grenader 1
Judy Kay 1
Jeffrey Nickerson 1
Fabrizio Franceschi 1
Silvia Likavec 1
Touradj Ebrahimi 1
Yukiko Nakano 1
David Molyneaux 1
Dominique Decotter 1
Michael Dorr 1
Jonas Beskow 1
Domenico Saccà 1
Patty Kostkova 1
Jing Fan 1
Ben Shneiderman 1
Abraham Bernstein 1
Michael Beetz 1
Shun Sun 1
Rita Kundu 1
Hongsong Li 1
Zheng Guan 1

Affiliation Paper Counts
Tokyo University of Technology 1
University of New South Wales 1
Palo Alto Research Center Incorporated 1
University of Memphis 1
Northeastern University 1
Macalester College 1
TELECOM ParisTech 1
Lund University 1
Bournemouth University 1
Pontificia Universidad Catolica de Chile 1
Queen's University Belfast 1
Rutgers, The State University of New Jersey 1
National Institute of Infectious Diseases 1
Swedish Institute of Computer Science 1
Ritsumeikan University 1
Laobratoire d'Informatique pour la Mecanique et les Sciences de l'Ingenieur 1
Fulda University of Applied Sciences 1
Reykjavik University 1
Laboratoire Traitement et Communication de l'Information 1
Harvard School of Engineering and Applied Sciences 1
Fondazione Bruno Kessler 1
Istituto di Scienza e Tecnologie dell'Informazione A. Faedo 1
British Broadcasting Corporation 1
Institutions Markets Technologies, Lucca 1
Karlsruhe Institute of Technology 1
University of Eastern Finland 1
IBM, Argentina 1
Max Planck Institute for Informatics 1
University of Padua 1
University of Illinois at Urbana-Champaign 1
University of Michigan 1
University of Miyazaki 1
Florida State University 1
University of Calgary 1
University of Amsterdam 1
Harvard Medical School 1
National Technical University of Athens 1
University of Perugia 1
Western Washington University 1
University of Eastern Piedmont Amedeo Avogadro 1
University of Geneva 1
Yale University 1
Newcastle University, United Kingdom 1
University of Pennsylvania 1
University of Koblenz-Landau 1
University of Sao Paulo 1
Vrije Universiteit Amsterdam 1
University of Manitoba 1
University of Skovde 1
Hasselt University 1
Santa Clara University 1
University of Kent 1
University of Dublin, Trinity College 1
BBN Technologies 1
Norwegian University of Science and Technology 1
Institut de Recherche et Coordination Acoustique Musique 1
Beihang University 1
Federal University of Sao Carlos 1
University of Southern California, Information Sciences Institute 1
Japan Science and Technology Agency 1
University of Kentucky 1
University of Tennessee at Martin 1
Yonsei University 1
National University of Singapore 1
Coventry University 1
IBM Thomas J. Watson Research Center 1
Middle Tennessee State University 1
IT University of Copenhagen 1
Monash University 1
University of Louisville 1
West Virginia University 1
Microsoft Research 1
Canon Inc. 1
University College London 1
Universite de Technologie Belfort-Montbeliard 1
Osaka University 1
National Institute of Advanced Industrial Science and Technology 1
Universite Paris-Sud XI 1
University of California, Santa Cruz 1
University of Maryland, Baltimore County 1
Catholic University of Leuven, Leuven 1
University of Utah 1
University of Konstanz 1
Lawrence Livermore National Laboratory 1
Microsoft Corporation 1
Clemson University 1
Nokia Corporation 1
Ben-Gurion University of the Negev 1
Complutense University of Madrid 1
Stevens Institute of Technology 1
Kansai University 1
National University of Cordoba 2
Millersville University 2
Bar-Ilan University 2
University of Helsinki 2
University of Haifa 2
University of Rostock 2
National Taiwan University 2
Nanyang Technological University 2
Bogazici University 2
Google Inc. 2
University of California, Irvine 2
Osnabruck University 2
Free University of Bozen-Bolzano 2
Kyoto University 2
Kyushu University 2
IBM Research 2
Technical University of Darmstadt 2
Lubeck University 2
Nara University of Education 2
University of Waterloo 2
University of California, Davis 2
University of Washington, Seattle 2
University of Pittsburgh 2
University of Bielefeld 2
University of Stuttgart 2
University of York 2
Philips Research 2
Academia Sinica Taiwan 2
Polytechnic School of Montreal 2
Graz University of Technology 2
University of Birmingham 2
Texas A and M University 2
National University of Cuyo 2
Tufts University 2
Politecnico di Milano 2
University College Cork 2
University of Edinburgh 2
University of Roma Tor Vergata 2
Ludwig Maximilian University of Munich 2
Research Organization of Information and Systems National Institute of Informatics 2
University of Minnesota Twin Cities 2
Southern Illinois University at Carbondale 2
University of California, San Diego 2
Michigan State University 2
University of Roma La Sapienza 2
Institute of Computer Science Crete 2
Know-Center, Graz 2
Tongji University 2
University of Gastronomic Sciences 2
Laboratoire des sciences de l'information et des sytemes, Marseille 2
Linnaeus University, Vaxjo 2
Shenzhen University 3
University of Essex 3
University of Nevada, Reno 3
Delft University of Technology 3
CNRS Centre National de la Recherche Scientifique 3
Bremen University 3
Doshisha University 3
University of Zurich 3
Radboud University Nijmegen 3
University of Toronto 3
Texas A and M University System 3
Nokia Bell Labs 3
Queen Mary, University of London 3
University of St Andrews 3
Massachusetts Institute of Technology 3
Lancaster University 3
City University London 3
Universidad Autonoma de Madrid 3
Vrije Universiteit Brussel 3
University of California, Santa Barbara 3
Commonwealth Scientific and Industrial Research Organization 3
Institut Dalle Molle D'intelligence Artificielle Perceptive 3
Columbia University 3
University of Cambridge 3
Japan National Institute of Information and Communications Technology 3
Universite Paris Saclay 3
Helsinki Institute for Information Technology 4
Advanced Telecommunications Research Institute International (ATR) 4
Royal Institute of Technology 4
University of Salzburg 4
Universite de Bretagne-Sud 4
University of Minnesota System 4
Tel Aviv Sourasky Medical Center 4
University of Ulm 4
Goldsmiths, University of London 4
Hong Kong Baptist University 4
University of Nottingham 4
University of Trento 4
Simon Fraser University 4
Technical University of Munich 4
Indiana University 4
University of Notre Dame 4
The University of North Carolina at Chapel Hill 4
University of Genoa 4
University of Tokyo 4
University of Portsmouth 4
University of Sydney 4
Swiss Federal Institute of Technology, Zurich 4
Keio University 4
Swiss Federal Institute of Technology, Lausanne 4
Saitama University 4
Korea Advanced Institute of Science & Technology 4
University of Hertfordshire 4
University of Modena and Reggio Emilia 5
The University of British Columbia 5
Oregon State University 5
Dalle Molle Institute for Artificial Intelligence 5
University of Maryland 5
Nara Institute of Science and Technology 5
University of Wisconsin Madison 5
TU Dortmund University 5
Dalhousie University 6
University of Colorado at Boulder 6
Arizona State University 6
University of Delaware 6
University of Bari 6
Seikei University 6
Purdue University 6
Hong Kong University of Science and Technology 7
Instituto Superior Tecnico 7
MIT Media Laboratory 7
Beijing Institute of Technology 7
Vanderbilt University 8
Heriot-Watt University, Edinburgh 8
University of Central Florida 8
University of Southern California 8
University of Augsburg 9
German Research Center for Artificial Intelligence (DFKI) 9
Eindhoven University of Technology 9
Nippon Telegraph and Telephone Corporation 10
Telecom Italia 10
Carnegie Mellon University 11
CSIRO Data61 11
Georgia Institute of Technology 17
University of Turin 20

ACM Transactions on Interactive Intelligent Systems (TiiS) - Special Issue on Interactive Visual Analysis of Human Crowd Behaviors and Regular Paper
Archive


2018
Volume 8 Issue 1, March 2018 Special Issue on Interactive Visual Analysis of Human Crowd Behaviors and Regular Paper

2017
Volume 7 Issue 4, December 2017 Special Issue on IUI 2016 Highlights
Volume 7 Issue 3, October 2017
Volume 7 Issue 2, July 2017
Volume 7 Issue 1, March 2017

2016
Volume 6 Issue 4, December 2016 Special Issue on Human Interaction with Artificial Advice Givers
Volume 6 Issue 3, October 2016 Regular Articles and Special Issue on Highlights of ICMI 2014 (Part 2 of 2)
Volume 6 Issue 2, August 2016 Regular Articles, Special Issue on Highlights of IUI 2015 (Part 2 of 2) and Special Issue on Highlights of ICMI 2014 (Part 1 of 2)
Volume 6 Issue 1, May 2016 Special Issue on New Directions in Eye Gaze for Interactive Intelligent Systems (Part 2 of 2), Regular Articles and Special Issue on Highlights of IUI 2015 (Part 1 of 2)
Volume 5 Issue 4, January 2016 Regular Articles and Special issue on New Directions in Eye Gaze for Interactive Intelligent Systems (Part 1 of 2)

2015
Volume 5 Issue 3, October 2015 Special Issue on Behavior Understanding for Arts and Entertainment (Part 2 of 2) and Regular Articles
Volume 5 Issue 2, July 2015 Special Issue on Behavior Understanding for Arts and Entertainment (Part 1 of 2)
Volume 5 Issue 1, March 2015
Volume 4 Issue 4, January 2015 Special Issue on Activity Recognition for Interaction and Regular Article

2014
Volume 4 Issue 3, October 2014 Special Issue on Multiple Modalities in Interactive Systems and Robots
Volume 4 Issue 2, July 2014
Volume 4 Issue 1, April 2014 Special Issue on Interactive Computational Visual Analytics
Volume 3 Issue 4, January 2014

2013
Volume 3 Issue 3, October 2013
Volume 3 Issue 2, July 2013 Special issue on interaction with smart objects, Special section on eye gaze and conversation
Volume 3 Issue 1, April 2013 Special section on internet-scale human problem solving and regular papers

2012
Volume 2 Issue 4, December 2012 Special issue on highlights of the decade in interactive intelligent systems
Volume 2 Issue 3, September 2012 Special Issue on Common Sense for Interactive Systems
Volume 2 Issue 2, June 2012
Volume 2 Issue 1, March 2012 Special Issue on Affective Interaction in Natural Environments
Volume 1 Issue 2, January 2012

2011
Volume 1 Issue 1, October 2011
 
