
ACM Transactions on Interactive Intelligent Systems (TIIS)

Latest Articles

AttentiveVideo: A Multimodal Approach to Quantify Emotional Responses to Mobile Advertisements

Understanding a target audience's emotional responses to a video advertisement is crucial for evaluating the advertisement's effectiveness. However, traditional methods for collecting such information are slow, expensive, and coarse-grained. We propose AttentiveVideo, a scalable intelligent mobile interface with corresponding inference...

Wearables and Social Signal Processing for Smarter Public Presentations

Social Signal Processing techniques have provided the opportunity for in-depth analysis of human behavior in social face-to-face interactions. With recent...

Trusting Virtual Agents: The Effect of Personality

We present artificially intelligent (AI) agents that act as interviewers to engage with a user in a text-based conversation and automatically infer the user's personality traits. We investigate how the personality of an AI interviewer and the inferred personality of a user influence the user's trust in the AI interviewer from two...

Profiling Personality Traits with Games

Trying to understand a player's characteristics with regard to a computer game is a major line of research known as player modeling. The purpose of player modeling is typically the adaptation of the game itself. We present two studies that extend player modeling into player profiling by trying to identify abstract personality traits, such...

Toward Universal Spatialization Through Wikipedia-Based Semantic Enhancement

This article introduces Cartograph, a visualization system that harnesses the vast world knowledge...

Interactive Quality Analytics of User-generated Content: An Integrated Toolkit for the Case of Wikipedia

Digital libraries and services enable users to access large amounts of data on demand. Yet, quality assessment of information encountered on the...

A Comparison of Techniques for Sign Language Alphabet Recognition Using Armband Wearables

Recent research has shown that reliable recognition of sign language words and phrases using user-friendly and noninvasive armbands is feasible and...

Bi-Level Thresholding: Analyzing the Effect of Repeated Errors in Gesture Input

In gesture recognition, one challenge that researchers and developers face is the need for recognition strategies that mediate between false positives and false negatives. In this article, we examine bi-level thresholding, a recognition strategy that uses two thresholds: a tighter threshold limits false positives and recognition errors, and a...
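The snippet above only hints at how the second threshold is used, so the following is a minimal, hypothetical Python sketch: it assumes the tight threshold governs first attempts and a looser threshold is applied when the user immediately repeats a gesture after a rejection. The function name, threshold values, and retry rule are illustrative assumptions, not the article's implementation.

# Hypothetical sketch of a two-threshold ("bi-level") gesture recognizer.
# The threshold values and the rule for when the looser threshold applies
# are assumptions for illustration, not the article's implementation.

def accept_gesture(score, previous_attempt_rejected,
                   tight_threshold=0.85, loose_threshold=0.70):
    """Decide whether to accept a gesture given its recognizer confidence.

    First attempts are checked against the tight threshold to limit false
    positives; an immediately repeated attempt after a rejection is assumed
    to be checked against the looser threshold to limit repeated errors.
    """
    threshold = loose_threshold if previous_attempt_rejected else tight_threshold
    return score >= threshold

# Example: a first attempt scoring 0.80 is rejected, but the same score
# on an immediate retry is accepted under the looser threshold.
print(accept_gesture(0.80, previous_attempt_rejected=False))  # False
print(accept_gesture(0.80, previous_attempt_rejected=True))   # True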

HILC: Domain-Independent PbD System Via Computer Vision and Follow-Up Questions

Creating automation scripts for tasks involving Graphical User Interface (GUI) interactions is hard: not all software applications allow access to a program's internal state, nor do they all have accessibility APIs. Although much of the internal state is exposed to the user through the GUI, it is hard to...

A Comparison of Adaptive View Techniques for Exploratory 3D Drone Teleoperation

Drone navigation in complex environments poses many problems to teleoperators. Especially in three...

Modeling and Computational Characterization of Twitter Customer Service Conversations

Given the increasing popularity of customer service dialogue on Twitter, analysis of conversation data is essential to understanding trends in...

News

The TiiS 2017 Best Paper Award winners are Marius Kaminskas and Derek Bridge for "Diversity, Serendipity, Novelty, and Coverage: A Survey and Empirical Analysis of Beyond-Accuracy Objectives in Recommender Systems," from TiiS 7(1)!

The TiiS 2016 Best Paper Award goes to Weike Pan, Qiang Yang, Yuchao Duan, and Zhong Ming for their article "Transfer Learning for Semi-Supervised Collaborative Recommendation," which appeared in TiiS 6(2). Congratulations to all the authors!

Forthcoming Articles

Learning from Sets of Items in Recommender Systems

Algorithmic and HCI aspects for explaining recommendations of artistic images

Understanding why automatic recommendation systems make decisions is an important area of research, because a user's satisfaction improves when she understands the reasoning behind the suggestions. In the area of visual art recommendation, explanation is a critical part of the process of selling artwork. Traditionally, artwork has been sold in art galleries, where people can see different physical artworks and artists have the chance to persuade people to buy their work. Online sales of artwork only offer the user the ability to navigate through the catalog, and nobody plays the key role of the artist: persuading people to buy the artwork. In the music industry, another artistic domain, recommendation systems have been very successful and play a key role by showing users what they would like to hear. There is a great deal of research on this type of recommendation, but there are very few works on explaining content-based recommendations of visual art, even though both belong to the artistic domain. Current works do not provide a perspective on the many variables involved in the user's perception of several aspects of the system, such as domain knowledge, relevance, explainability, and trust. In this paper, we aim to fill this gap by studying several aspects of the user experience with a recommender system of artistic images. We conducted two user studies on Amazon Mechanical Turk to evaluate different levels of explainability combined with different algorithms, interfaces, and devices, in order to learn how these variables interact and what effects those interactions have on the user experience. Our experiments confirm that explanations of recommendations in the image domain are useful and increase user satisfaction as well as the perception of explainability and relevance. In the first study, our results show that the observed effects depend on the underlying recommendation algorithm used. In the second study, our results show that these effects also depend on the device used in the study. Our general results indicate that algorithms should not be studied in isolation, but rather in conjunction with interfaces and devices, since all of them play a significant role in the perception of explainability and trust for image recommendation. Finally, using the framework by Knijnenburg et al., we provide a comprehensive model for each study that synthesizes the effects among the different variables involved in the user experience with explainable visual recommender systems for artistic images.

A User-Adaptive Modeling for Eating Action Identification from Wristband

Eating activity monitoring through wearable sensors can potentially enable interventions based on eating speed to mitigate the risks of critical healthcare problems such as obesity or diabetes. Eating actions are poly-componential gestures composed of sequential arrangements of three distinct components interspersed with gestures that may be unrelated to eating. This makes it extremely challenging to accurately identify eating actions. The primary reasons for the lack of acceptance of state-of-the-art eating action monitoring techniques include: i) the need to install wearable sensors that are cumbersome to wear or limit the mobility of the user, ii) the need for manual input from the user, and iii) poor accuracy if adequate manual input is not available. In this work, we propose a novel methodology, IDEA, that performs accurate eating action identification in eating episodes with an average F1-score of 0.92. IDEA uses only a single wrist-worn sensor and provides feedback on eating speed every 2 minutes without obtaining any manual input from the user. It can also be used to automatically annotate other poly-componential gestures.

Towards User-Adaptive Visualizations: Comparing and Combining Eye-Tracking and Interaction Data for the Real-Time Prediction of User Cognitive Abilities

EventAction: A Visual Analytics Approach to Explainable Recommendation for Event Sequences

Human-in-the-Loop Learning for Personalized Diet Monitoring from Unstructured Mobile Data

Lifestyle interventions that focus on diet are crucial in self-management and prevention of many chronic conditions such as obesity, cardiovascular disease, diabetes, and cancer. Such interventions require a diet monitoring approach to estimate overall dietary composition and energy intake. Although wearable sensors have been used to estimate eating context (e.g., food type and eating time), accurate monitoring of diet intake has remained a challenging problem. In particular, because monitoring diet intake is a self-administered task that requires the end-user to record or report their nutrition intake, current diet monitoring technologies are prone to measurement errors related to challenges of human memory, estimation, and bias. New approaches based on mobile devices have been proposed to facilitate the process of diet intake recording. These technologies require individuals to use mobile devices such as smartphones to record nutrition intake by either entering text or taking images of the food. Such approaches, however, suffer from errors due to low adherence to technology adoption and time sensitivity to the dietary intake context. In this article, we introduce EZNutriPal, an interactive diet monitoring system that operates on unstructured mobile data such as speech and free text to facilitate dietary recording, real-time prompting, and personalized nutrition monitoring. EZNutriPal features a Natural Language Processing (NLP) unit that learns incrementally to add user-specific nutrition data and rules to the system. To prevent missing data that are required for dietary monitoring (e.g., calorie intake estimation), EZNutriPal devises an interactive operating mode that prompts the end-user to complete missing data in real time. Additionally, we propose a combinatorial optimization approach to identify the most appropriate pairs of food names and portion sizes in complex input sentences. We evaluate the proposed approach using real data collected from 23 subjects who participated in two user studies conducted over 13 days each. The results demonstrate that EZNutriPal achieves 89.7% accuracy in calorie intake estimation. We also assess the impacts of incremental training and interactive prompting on the accuracy of calorie intake estimation and show that they improve accuracy by 49.6% and 29.1%, respectively, compared to a system without these components.
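The abstract mentions a combinatorial optimization step that pairs food names with portion sizes in a sentence but does not specify its formulation. As a rough illustration only, the Python sketch below frames the pairing as a minimum-cost assignment over token positions, with distance in the sentence as the cost; the function name, cost function, and example are hypothetical and are not EZNutriPal's actual method.

# Hypothetical sketch: pairing food-name tokens with portion-size tokens as a
# minimum-cost assignment problem. The cost (distance between token positions)
# and all names here are illustrative assumptions, not EZNutriPal's method.
import numpy as np
from scipy.optimize import linear_sum_assignment

def pair_foods_with_portions(food_positions, portion_positions):
    """Return (food index, portion index) pairs minimizing total token distance."""
    # Cost matrix: absolute distance between each food token and each portion token.
    cost = np.abs(np.subtract.outer(food_positions, portion_positions))
    food_idx, portion_idx = linear_sum_assignment(cost)  # Hungarian algorithm
    return [(int(i), int(j)) for i, j in zip(food_idx, portion_idx)]

# Example sentence: "I had two slices of toast and a cup of oatmeal"
# with food mentions at token positions [5, 10] and portions at [2, 8].
print(pair_foods_with_portions([5, 10], [2, 8]))  # -> [(0, 0), (1, 1)]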

Toward a Unified Theory of Learned Trust in Interpersonal and Human-Machine Interactions

A proposal for a unified theory of learned trust implemented in a cognitive architecture is presented. A published computational cognitive model of learned trust is critically reviewed. A revised model is proposed to overcome the limitations of the published model and expand its scope of applicability. The revised model integrates several seemingly unrelated categories of findings from the literature on interpersonal and human-machine interactions and makes unintuitive predictions for future studies. The implications of the model for the advancement of the theory on trust are discussed.
