Learning from Sets of Items in Recommender Systems
Understanding why automatic recommendation systems make decisions is an important area of research, because a user's satisfaction improves when they understand the reasoning behind the suggestions. In the area of visual art recommendation, explanation is a critical part of the process of selling artwork. Traditionally, artwork has been sold in art galleries, where people can see different physical artworks and artists have the chance to persuade them to buy their work. Online art sales only allow the user to navigate through the catalog, and nobody plays the key role of the artist: persuading people to buy the artwork. In the music industry, another artistic domain, recommender systems have been very successful and play a key role by showing users what they would like to hear. There is a great deal of research on this type of recommendation, but very few works address explaining content-based recommendations of visual art, even though both belong to the artistic domain. Current works do not provide a perspective on the many variables involved in the user's perception of several aspects of the system, such as domain knowledge, relevance, explainability, and trust. In this paper, we aim to fill this gap by studying several aspects of the user experience of a recommender system for artistic images. We conducted two user studies on Amazon Mechanical Turk to evaluate different levels of explainability, combined with different algorithms, interfaces, and devices, in order to learn how these variables interact and what effects those interactions have on the user experience. Our experiments confirm that explanations of recommendations in the image domain are useful and increase user satisfaction as well as the perception of explainability and relevance. In the first study, our results show that the observed effects depend on the underlying recommendation algorithm used.
In the second study, our results show that these effects also depend on the device used. Our general results indicate that algorithms should not be studied in isolation, but rather in conjunction with interfaces and devices, since all of them play a significant role in the perception of explainability and trust for image recommendation. Finally, using the framework by Knijnenburg et al., we provide a comprehensive model for each study that synthesizes the effects among the different variables involved in the user experience with explainable visual recommender systems of artistic images.
Eating activity monitoring through wearable sensors can potentially enable interventions based on eating speed to mitigate the risks of critical healthcare problems such as obesity or diabetes. Eating actions are poly-componential gestures composed of sequential arrangements of three distinct components, interspersed with gestures that may be unrelated to eating. This makes it extremely challenging to accurately identify eating actions. The primary reasons for the lack of acceptance of state-of-the-art eating action monitoring techniques include: i) the need to install wearable sensors that are cumbersome to wear or limit the mobility of the user, ii) the need for manual input from the user, and iii) poor accuracy if adequate manual input is not available. In this work, we propose a novel methodology, IDEA, that performs accurate eating action identification in eating episodes with an average F1-score of 0.92. IDEA uses only a single wrist-worn sensor and provides feedback on eating speed every 2 minutes without obtaining any manual input from the user.
Towards User-Adaptive Visualizations: Comparing and Combining Eye-Tracking and Interaction Data for the Real-Time Prediction of User Cognitive Abilities
EventAction: A Visual Analytics Approach to Explainable Recommendation for Event Sequences
Lifestyle interventions that focus on diet are crucial in the self-management and prevention of many chronic conditions such as obesity, cardiovascular disease, diabetes, and cancer. Such interventions require a diet monitoring approach to estimate overall dietary composition and energy intake. Although wearable sensors have been used to estimate eating context (e.g., food type and eating time), accurate monitoring of diet intake has remained a challenging problem. In particular, because monitoring diet intake is a self-administered task that requires the end-user to record or report their nutrition intake, current diet monitoring technologies are prone to measurement errors related to challenges of human memory, estimation, and bias. New approaches based on mobile devices have been proposed to facilitate the process of recording diet intake. These technologies require individuals to use mobile devices such as smartphones to record nutrition intake by either entering text or taking images of the food. Such approaches, however, suffer from errors due to low adherence to technology adoption and the time sensitivity of the dietary intake context. In this article, we introduce EZNutriPal, an interactive diet monitoring system that operates on unstructured mobile data, such as speech and free text, to facilitate dietary recording, real-time prompting, and personalized nutrition monitoring. EZNutriPal features a Natural Language Processing (NLP) unit that learns incrementally to add user-specific nutrition data and rules to the system. To prevent missing data that are required for dietary monitoring (e.g., calorie intake estimation), EZNutriPal devises an interactive operating mode that prompts the end-user to complete missing data in real time. Additionally, we propose a combinatorial optimization approach to identify the most appropriate pairs of food name and portion size in complex input sentences.
We evaluate the proposed approach using real data collected from 23 subjects who participated in two user studies, each conducted over 13 days. The results demonstrate that EZNutriPal achieves 89.7% accuracy in calorie intake estimation. We also assess the impacts of incremental training and interactive prompting on the accuracy of calorie intake estimation and show that they improve the accuracy of dietary monitoring by 49.6% and 29.1%, respectively, compared to a system without such computing units.
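The abstract above mentions a combinatorial optimization step that pairs food names with portion sizes in complex sentences, but does not specify the formulation. A minimal brute-force sketch of one plausible formulation follows: treat the pairing as a one-to-one assignment between extracted food mentions and portion mentions, scored by token distance in the sentence. The function name, the distance-based cost, and the input representation are illustrative assumptions, not EZNutriPal's actual method.

```python
from itertools import permutations

def pair_foods_portions(foods, portions):
    """Choose the one-to-one pairing of food and portion mentions that
    minimizes total token distance (an assumed stand-in for the paper's
    unspecified scoring function).

    `foods` and `portions` are lists of (text, token_index) tuples
    extracted from one sentence.
    """
    best, best_cost = None, float("inf")
    k = min(len(foods), len(portions))
    # Exhaustively try every assignment of portions to foods.
    for perm in permutations(portions, k):
        cost = sum(abs(f[1] - p[1]) for f, p in zip(foods, perm))
        if cost < best_cost:
            best, best_cost = list(zip(foods, perm)), cost
    return [(f[0], p[0]) for f, p in best]

# Hypothetical parse of "I had two slices of pizza and a cup of rice":
foods = [("pizza", 5), ("rice", 9)]
portions = [("two slices", 2), ("a cup", 7)]
print(pair_foods_portions(foods, portions))
```

For sentences with many mentions, the same assignment problem could be solved in polynomial time with the Hungarian algorithm (e.g., SciPy's `linear_sum_assignment`) instead of enumerating permutations.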
A proposal for a unified theory of learned trust implemented in a cognitive architecture is presented. A published computational cognitive model of learned trust is critically reviewed. A revised model is proposed to overcome the limitations of the published model and expand its scope of applicability. The revised model integrates several seemingly unrelated categories of findings from the literature on interpersonal and human-machine interactions and makes unintuitive predictions for future studies. The implications of the model for the advancement of the theory on trust are discussed.