ACM Transactions on Interactive Intelligent Systems (TIIS)

Latest Articles

Analysis of Movement Quality in Full-Body Physical Activities

Full-body human movement is characterized by fine-grained expressive qualities that humans are easily capable of exhibiting and recognizing in...

Toward Effective Robot--Child Tutoring: Internal Motivation, Behavioral Intervention, and Learning Outcomes

Personalized learning environments have the potential to improve learning outcomes for children in a variety of educational domains, as they can tailor instruction based on the unique learning needs of individuals. Robot tutoring systems can further engage users by leveraging their potential for embodied social interaction and take into account...

Miscommunication Detection and Recovery in Situated Human–Robot Dialogue

Even without speech recognition errors, robots may face difficulties interpreting natural-language instructions. We present a method for robustly...

Visual Exploration of Air Quality Data with a Time-correlation-partitioning Tree Based on Information Theory

Discovering the correlations among variables of air quality data is challenging,...

Enhancing Deep Learning with Visual Interactions

Deep learning has emerged as a powerful tool for feature-driven labeling of datasets. However, for it to be effective, it requires a large and finely labeled training dataset. Precisely labeling a large training dataset is expensive, time-consuming, and error prone. In this article, we present a visually driven deep-learning approach that starts...

Developing a Hand Gesture Recognition System for Mapping Symbolic Hand Gestures to Analogous Emojis in Computer-Mediated Communication

Recent trends in computer-mediated communication (CMC) have not only led to expanded instant...

NEWS

The TiiS 2017 Best Paper Award winners are Marius Kaminskas and Derek Bridge for "Diversity, Serendipity, Novelty, and Coverage: A Survey and Empirical Analysis of Beyond-Accuracy Objectives in Recommender Systems," which appeared in TiiS 7(1)!

The TiiS 2016 Best Paper Award was granted to Weike Pan, Qiang Yang, Yuchao Duan, and Zhong Ming for their article "Transfer Learning for Semi-Supervised Collaborative Recommendation," which appeared in TiiS 6(2). Congratulations to all the authors!

Forthcoming Articles
A User-Adaptive Modeling for Eating Action Identification from Wristband

Eating activity monitoring through wearable sensors can potentially enable interventions based on eating speed to mitigate the risks of critical healthcare problems such as obesity or diabetes. Eating actions are poly-componential gestures composed of sequential arrangements of three distinct components, interspersed with gestures that may be unrelated to eating, which makes it extremely challenging to identify eating actions accurately. The primary reasons for the lack of acceptance of state-of-the-art eating action monitoring techniques include: i) the need to install wearable sensors that are cumbersome to wear or limit the mobility of the user, ii) the need for manual input from the user, and iii) poor accuracy if adequate manual input is not available. In this work, we propose a novel methodology, IDEA, that performs accurate eating action identification in eating episodes with an average F1-score of 0.92. IDEA uses only a single wrist-worn sensor and provides feedback on eating speed every 2 minutes without obtaining any manual input from the user. It can also be used to automatically annotate other poly-componential gestures.
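
As a rough illustration of the kind of pipeline this abstract describes (not the IDEA method itself), the sketch below slides a window over a single wrist-worn accelerometer/gyroscope stream, extracts simple statistical features, and classifies each window as an eating gesture or not. The sampling rate, window length, feature set, classifier, and the synthetic data are all assumptions made for illustration.

```python
# Minimal sketch of windowed eating-gesture detection from one wrist IMU.
# Not the IDEA algorithm; parameters and features are illustrative guesses.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FS = 50                      # assumed sampling rate (Hz)
WINDOW = 2 * FS              # assumed 2-second analysis window
STEP = FS                    # 50% overlap between consecutive windows

def window_features(segment):
    """Mean and std of each IMU axis plus acceleration-magnitude range."""
    mag = np.linalg.norm(segment[:, :3], axis=1)   # accelerometer magnitude
    return np.concatenate([segment.mean(axis=0),
                           segment.std(axis=0),
                           [mag.max() - mag.min()]])

def segment_stream(imu, labels=None):
    """Slice a continuous (T, 6) accel+gyro stream into overlapping windows."""
    feats, ys = [], []
    for start in range(0, len(imu) - WINDOW, STEP):
        feats.append(window_features(imu[start:start + WINDOW]))
        if labels is not None:
            # call a window "eating" if most of its samples are labeled eating
            ys.append(int(labels[start:start + WINDOW].mean() > 0.5))
    return np.array(feats), (np.array(ys) if labels is not None else None)

# Toy data standing in for annotated wristband recordings.
rng = np.random.default_rng(0)
train_imu = rng.normal(size=(10_000, 6))
train_lbl = rng.integers(0, 2, size=10_000)

X, y = segment_stream(train_imu, train_lbl)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# At run time, classify each new window; the count of eating-gesture windows
# over a fixed interval (e.g., two minutes) could feed an eating-speed estimate.
live_X, _ = segment_stream(rng.normal(size=(6_000, 6)))
print("eating-gesture windows detected:", int(clf.predict(live_X).sum()))
```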

Towards User-Adaptive Visualizations: Comparing and Combining Eye-Tracking and Interaction Data for the Real-Time Prediction of User Cognitive Abilities

EventAction: A Visual Analytics Approach to Explainable Recommendation for Event Sequences

Human-in-the-Loop Learning for Personalized Diet Monitoring from Unstructured Mobile Data

Lifestyle interventions that focus on diet are crucial in the self-management and prevention of many chronic conditions such as obesity, cardiovascular disease, diabetes, and cancer. Such interventions require a diet monitoring approach to estimate overall dietary composition and energy intake. Although wearable sensors have been used to estimate eating context (e.g., food type and eating time), accurate monitoring of diet intake has remained a challenging problem. In particular, because monitoring diet intake is a self-administered task that requires the end user to record or report their nutrition intake, current diet monitoring technologies are prone to measurement errors related to the limitations of human memory, estimation, and bias. New approaches based on mobile devices have been proposed to facilitate the process of recording diet intake. These technologies require individuals to use mobile devices such as smartphones to record nutrition intake by either entering text or taking images of the food. Such approaches, however, suffer from errors due to low adherence to technology adoption and the time sensitivity of the dietary intake context. In this article, we introduce EZNutriPal, an interactive diet monitoring system that operates on unstructured mobile data such as speech and free text to facilitate dietary recording, real-time prompting, and personalized nutrition monitoring. EZNutriPal features a Natural Language Processing (NLP) unit that learns incrementally to add user-specific nutrition data and rules to the system. To prevent missing data that are required for dietary monitoring (e.g., calorie intake estimation), EZNutriPal devises an interactive operating mode that prompts the end user to complete missing data in real time. Additionally, we propose a combinatorial optimization approach to identify the most appropriate pairs of food name and portion size in complex input sentences. We evaluate the proposed approach using real data collected from 23 subjects who participated in two user studies of 13 days each. The results demonstrate that EZNutriPal achieves an accuracy of 89.7% in calorie intake estimation. We also assess the impact of incremental training and interactive prompting on the accuracy of calorie intake estimation and show that they improve the accuracy of dietary monitoring by 49.6% and 29.1%, respectively, compared to a system without these units.
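
The combinatorial step mentioned above, pairing food names with portion sizes in a sentence, can be framed as an assignment problem. The sketch below is not EZNutriPal's optimizer; it assumes a simple word-distance cost between mentions and uses SciPy's Hungarian-method solver purely to illustrate the idea, with hypothetical inputs standing in for the NLP front end.

```python
# Minimal sketch: pair detected food mentions with portion mentions by
# minimizing total token distance within one sentence (assumed cost model).
import numpy as np
from scipy.optimize import linear_sum_assignment

def pair_foods_with_portions(foods, portions):
    """foods/portions: lists of (text, token_index) mentions from one sentence."""
    cost = np.array([[abs(f_idx - p_idx) for _, p_idx in portions]
                     for _, f_idx in foods])
    rows, cols = linear_sum_assignment(cost)   # minimum-total-distance pairing
    return [(foods[r][0], portions[c][0]) for r, c in zip(rows, cols)]

# "I had two slices of toast and a cup of oatmeal"
foods = [("toast", 5), ("oatmeal", 10)]
portions = [("two slices", 2), ("a cup", 8)]
print(pair_foods_with_portions(foods, portions))
# [('toast', 'two slices'), ('oatmeal', 'a cup')]
```

Framing the pairing globally, rather than greedily attaching each portion to its nearest food, avoids conflicts when two foods compete for the same portion phrase in a long sentence.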

Special Issue on Highlights of ACM Intelligent User Interface

User Evaluations on Review-based Recommendation Explanations

The explanation interface has been recognized as important in recommender systems because it allows users to better judge the relevance of recommendations to their preferences and hence make more informed decisions. In different product domains, the specific purpose of explanation can differ. For high-investment products (e.g., digital cameras, laptops), it is crucial to educate new buyers about the product domain and consequently improve their preference certainty and decision quality. With this objective, we have developed a novel tradeoff-oriented explanation interface that takes into account sentiment features extracted from product reviews to generate recommendations and explanations in a category structure. In this manuscript, we report two user studies conducted on this interface. The first is an online user study (in both before-after and within-subjects setups) that compared our prototype system with a traditional one that considers only static specifications for explanation. The experimental results reveal that adding sentiment-based explanations can help increase users' product knowledge, preference certainty, perceived information usefulness, perceived recommendation transparency and quality, and purchase intention. Inspired by those findings, we performed a follow-up eye-tracking lab experiment to investigate in depth how users view information on the interface. This study shows that integrating sentiment features with static specifications in the tradeoff-oriented explanations prompted users not only to view more recommendations from various categories but also to spend more time reading explanations. The results also suggest that users have an inherent information need for sentiment features during product evaluation and decision making. Finally, we discuss the work's practical implications from three major aspects: new users, the category interface, and the purpose of explanation.
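
As a rough sketch of what a sentiment-augmented, tradeoff-oriented explanation might look like (not the authors' interface), the snippet below aggregates per-feature sentiment scores from reviews and combines them with static specifications into a short explanation string. The feature names, scoring scale, and wording template are assumptions; the aspect-level sentiment extraction itself is not shown.

```python
# Minimal sketch: combine static specifications with review sentiment
# aggregated per feature into one explanation line (illustrative only).
from statistics import mean

# Per-review sentiment scores in [-1, 1] for each feature, e.g., from an
# off-the-shelf aspect-based sentiment model (not shown here).
review_sentiment = {
    "battery life": [0.8, 0.6, 0.9],
    "weight":       [-0.4, -0.2, -0.5],
}
static_specs = {"battery life": "12 h", "weight": "1.8 kg"}

def explain(product_name):
    parts = []
    for feature, scores in review_sentiment.items():
        s = mean(scores)
        opinion = "praised" if s > 0.2 else "criticized" if s < -0.2 else "mixed"
        parts.append(f"{feature} ({static_specs[feature]}), {opinion} in reviews")
    return f"{product_name}: " + "; ".join(parts)

print(explain("Laptop A"))
# Laptop A: battery life (12 h), praised in reviews; weight (1.8 kg), criticized in reviews
```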

Unobtrusive Activity Recognition and Position Estimation for Work Surfaces using RF-radar Sensing

Activity recognition is a core component of many intelligent and context-aware systems. We present a solution for discreetly and unobtrusively recognizing common work activities above a work surface without using cameras. We demonstrate our approach, which utilizes an RF-radar sensor mounted under the work surface, in three domains: recognizing work activities at a convenience-store counter, recognizing common office deskwork activities, and estimating the position of customers in a showroom environment. Our examples illustrate potential benefits both for post-hoc business analytics and for real-time applications. Our solution was able to classify seven clerk activities with 94.9% accuracy using data collected in a lab environment and to recognize six common deskwork activities, collected in real offices, with 95.3% accuracy. Using two sensors simultaneously, we demonstrate coarse position estimation around a large surface with 95.4% accuracy. We show that using multiple projections of the RF signal leads to improved recognition accuracy. Finally, we show how smartwatches worn by users can be used to attribute an activity, recognized with the RF sensor, to a particular user in multi-user scenarios. We believe our solution can mitigate some of the privacy concerns users associate with cameras and is useful for a wide range of intelligent systems.
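
To illustrate the multiple-projection idea in the abstract (not the authors' signal-processing pipeline), the sketch below classifies synthetic feature vectors standing in for two radar projections, first one projection alone and then both concatenated. The feature sizes, activity labels, data generator, and classifier are all assumptions.

```python
# Minimal sketch: concatenating features from two radar "projections"
# (e.g., range and Doppler profiles) before classification. The radar front
# end is replaced by random feature vectors for illustration only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
ACTIVITIES = ["typing", "writing", "phone", "reading", "idle", "eating"]
N_PER_CLASS = 40

def fake_projection(dim, label, scale):
    """Stand-in for one projection's feature vector for a given activity."""
    return rng.normal(loc=label * scale, scale=1.0, size=dim)

X_range, X_doppler, y = [], [], []
for label in range(len(ACTIVITIES)):
    for _ in range(N_PER_CLASS):
        X_range.append(fake_projection(32, label, 0.3))
        X_doppler.append(fake_projection(32, label, 0.2))
        y.append(label)

X_range, X_doppler, y = np.array(X_range), np.array(X_doppler), np.array(y)
X_both = np.hstack([X_range, X_doppler])     # combine the two projections

clf = SVC(kernel="rbf", gamma="scale")
for name, X in [("range only", X_range), ("both projections", X_both)]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f} cross-validated accuracy")
```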

