Learning from Sets of Items in Recommender Systems
E-learning systems can support real-time monitoring of learners' learning desires and effects, offering opportunities for enhanced personalized learning. Recognizing the determinants of dyslexic users' motivation to use e-learning systems is important: it helps developers improve the design of e-learning systems and lets educators direct their efforts toward the factors most relevant to enhancing dyslexic students' motivation. Existing research has rarely attempted to model dyslexic users' motivation in the e-learning context from a comprehensive perspective. This paper presents a hybrid approach to motivation modeling that combines the strengths of qualitative and quantitative analysis methods. It examines a variety of factors that affect dyslexic students' motivation to engage with e-learning systems from psychological, behavioral, and technical perspectives, and establishes their interrelationships. Specifically, the study collects data from a multi-item Likert-style questionnaire to measure the factors relevant to conceptual motivation modeling. It then applies Structural Equation Modeling to determine the quantitative mapping between dyslexic students' continued-use intention and motivational factors, followed by a discussion of theoretical findings and design guidance derived from our motivation model. Our research has led to a novel motivation model with the new constructs of Learning Experience, Reading Experience, Perceived Control, and Perceived Privacy. Initial results indicate direct effects of Attitudes Toward School, Visual Attractiveness, Reading Experience, and Utilization on continued-use intention.
Successful social robot services depend on how robots interact with users. Effective service arises from smooth, engaged, human-like interactions in which the robot reacts appropriately to the user's affective state. This paper proposes a novel empathy model for humanoid robots that aims to achieve longer and more engaged human-robot interactions (HRI) by considering human emotions and responding to them appropriately. The proposed model continuously detects the affective state of a user and generates the desired empathic behavior, either parallel or reactive, adapted to the user's personality. Affective states are detected using a stacked autoencoder network trained and tested on the RAVDESS dataset. The overall empathy model is verified through a scenario in which different emotions are triggered in participants and the robot then responds empathically. The results provide supporting evidence for the effectiveness of the proposed model in terms of participants' perceived social and friendly qualities of the robot.
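The greedy layer-wise pretraining behind a stacked autoencoder can be sketched in a few lines. The following is a minimal, hypothetical NumPy illustration on toy two-cluster data, not the authors' RAVDESS model: the layer sizes, tied-weight decoding, learning rate, and nearest-centroid readout are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, n_hidden, epochs=200, lr=0.5):
    """Train one tied-weight sigmoid autoencoder layer; return the encoder (W, b)."""
    n_in = X.shape[1]
    W = rng.normal(0.0, 0.1, (n_in, n_hidden))
    b = np.zeros(n_hidden)   # encoder bias
    c = np.zeros(n_in)       # decoder bias
    for _ in range(epochs):
        H = sigmoid(X @ W + b)        # encode
        R = sigmoid(H @ W.T + c)      # decode with tied weights
        dR = (R - X) * R * (1 - R)    # squared-error delta at the output
        dH = (dR @ W) * H * (1 - H)   # delta backpropagated to the hidden layer
        W -= lr * (X.T @ dH + dR.T @ H) / len(X)  # both uses of the tied W
        b -= lr * dH.mean(axis=0)
        c -= lr * dR.mean(axis=0)
    return W, b

# Toy "affective features": two noisy clusters standing in for two emotions.
X0 = rng.normal(0.2, 0.05, (50, 8))
X1 = rng.normal(0.8, 0.05, (50, 8))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# Greedy layer-wise pretraining: stack two encoders.
W1, b1 = train_autoencoder(X, 6)
H1 = sigmoid(X @ W1 + b1)
W2, b2 = train_autoencoder(H1, 3)
H2 = sigmoid(H1 @ W2 + b2)

# A simple nearest-centroid readout on the learned codes.
c0, c1 = H2[y == 0].mean(axis=0), H2[y == 1].mean(axis=0)
pred = (np.linalg.norm(H2 - c1, axis=1) < np.linalg.norm(H2 - c0, axis=1)).astype(int)
acc = (pred == y).mean()
print(f"training accuracy on toy data: {acc:.2f}")
```

In a real system each encoder would typically be fine-tuned end-to-end with a supervised classifier head after pretraining; the readout above only serves to show that the stacked codes preserve class structure.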
Understanding why automatic recommendation systems make decisions is an important area of research, because a user's satisfaction improves when she understands the reasoning behind the suggestions. In the area of visual art recommendation, explanation is a critical part of the process of selling artwork. Traditionally, artwork has been sold in art galleries, where people can see different physical pieces and artists have the chance to persuade people to buy their work. Online art sales only let the user navigate through the catalog; nobody plays the artist's key role of persuading people to buy the artwork. In the music industry, another artistic domain, recommendation systems have been very successful and play a key role by showing users what they would like to hear. There is much research on this type of recommendation, but very few works on explaining content-based recommendations of visual art, even though both belong to the artistic domain. Current works do not account for the many variables involved in the user's perception of aspects of the system such as domain knowledge, relevance, explainability, and trust. In this paper we aim to fill this gap by studying several aspects of the user experience of a recommender system for artistic images. We conducted two user studies on Amazon Mechanical Turk to evaluate different levels of explainability combined with different algorithms, interfaces, and devices, in order to learn how these variables interact and what effects those interactions have on the user experience. Our experiments confirm that explanations of recommendations in the image domain are useful and increase user satisfaction, perceived explainability, and perceived relevance. In the first study, our results show that the observed effects depend on the underlying recommendation algorithm.
In the second study, our results show that these effects also depend on the device used. Overall, our results indicate that algorithms should not be studied in isolation, but rather in conjunction with interfaces and devices, since all of them play a significant role in the perception of explainability and trust for image recommendation. Finally, using the framework by Knijnenburg et al., we provide a comprehensive model for each study that synthesizes the effects between the variables involved in the user experience with explainable visual recommender systems for artistic images.
Eating activity monitoring through wearable sensors can enable interventions based on eating speed to mitigate the risks of critical health problems such as obesity and diabetes. Eating actions are poly-componential gestures: sequential arrangements of three distinct components interspersed with gestures that may be unrelated to eating, which makes accurately identifying eating actions extremely challenging. The primary reasons for the lack of acceptance of state-of-the-art eating action monitoring techniques include: i) the need for wearable sensors that are cumbersome to wear or limit the user's mobility, ii) the need for manual input from the user, and iii) poor accuracy when adequate manual input is not available. In this work, we propose a novel methodology, IDEA, that performs accurate eating action identification in eating episodes with an average F1-score of 0.92. IDEA uses only a single wrist-worn sensor and provides feedback on eating speed every 2 minutes without requiring any manual input from the user. It can also be used to automatically annotate other poly-componential gestures.
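Interval-level evaluation of eating action identification, as summarized by an F1-score like the one above, can be illustrated with a small self-contained sketch. The IoU threshold, the greedy one-to-one matching strategy, and the toy intervals below are our own assumptions for illustration, not details taken from IDEA.

```python
def interval_iou(a, b):
    """Intersection-over-union of two (start, end) time intervals."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def eating_f1(predicted, truth, iou_thresh=0.5):
    """Greedily match predicted eating actions to ground-truth ones,
    then compute the F1-score over the matches."""
    unmatched = list(truth)
    tp = 0
    for p in predicted:
        best = max(unmatched, key=lambda t: interval_iou(p, t), default=None)
        if best is not None and interval_iou(p, best) >= iou_thresh:
            unmatched.remove(best)  # each true action is matched at most once
            tp += 1
    fp = len(predicted) - tp
    fn = len(unmatched)
    precision = tp / (tp + fp) if predicted else 0.0
    recall = tp / (tp + fn) if truth else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical ground-truth and detected eating actions (times in seconds).
truth = [(0, 4), (10, 14), (20, 24), (30, 34)]
predicted = [(0.5, 4.5), (10, 13), (21, 25), (40, 44)]
print(f"F1 = {eating_f1(predicted, truth):.2f}")  # → F1 = 0.75
```

Here three of the four detections overlap a true action well enough to count (IoU ≥ 0.5), one detection is spurious, and one true action is missed, giving precision = recall = F1 = 0.75.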
This paper proposes a novel study of personality recognition in different scenarios. Our goal is to jointly model nonverbal behavioral cues and contextual information for a robust, multi-scenario personality recognition system. To this end, we build a novel multi-stream Convolutional Neural Network (CNN) framework that considers multiple sources of information. From a given scenario, we extract spatio-temporal motion descriptors for every individual in the scene, spatio-temporal motion descriptors encoding social group dynamics, and proxemic descriptors encoding the interaction with the surrounding context. All the proposed descriptors are mapped to the same feature space, easing the overall learning effort. Experiments on two public datasets demonstrate the effectiveness of jointly modeling mutual person-context information, outperforming state-of-the-art results for personality recognition in two different scenarios. Lastly, we present CNN class activation maps for each personality trait, shedding light on behavioral patterns linked to personality attributes.
Towards User-Adaptive Visualizations: Comparing and Combining Eye-Tracking and Interaction Data for the Real-Time Prediction of User Cognitive Abilities
EventAction: A Visual Analytics Approach to Explainable Recommendation for Event Sequences
Lifestyle interventions that focus on diet are crucial in the self-management and prevention of many chronic conditions such as obesity, cardiovascular disease, diabetes, and cancer. Such interventions require a diet monitoring approach to estimate overall dietary composition and energy intake. Although wearable sensors have been used to estimate eating context (e.g., food type and eating time), accurate monitoring of dietary intake has remained a challenging problem. In particular, because monitoring dietary intake is a self-administered task that requires the end-user to record or report their nutrition intake, current diet monitoring technologies are prone to measurement errors stemming from the limits of human memory, estimation, and bias. New approaches based on mobile devices have been proposed to facilitate the recording of dietary intake. These technologies require individuals to use mobile devices such as smartphones to record nutrition intake by either entering text or taking images of the food. Such approaches, however, suffer from errors due to low adherence to technology adoption and the time sensitivity of the dietary intake context. In this article, we introduce EZNutriPal, an interactive diet monitoring system that operates on unstructured mobile data such as speech and free text to facilitate dietary recording, real-time prompting, and personalized nutrition monitoring. EZNutriPal features a Natural Language Processing (NLP) unit that learns incrementally to add user-specific nutrition data and rules to the system. To prevent missing data that are required for dietary monitoring (e.g., calorie intake estimation), EZNutriPal provides an interactive operating mode that prompts the end-user to complete missing data in real time. Additionally, we propose a combinatorial optimization approach to identify the most appropriate pairs of food name and portion size in complex input sentences.
We evaluate the proposed approach using real data collected from 23 subjects who participated in two user studies, each conducted over 13 days. The results demonstrate that EZNutriPal achieves 89.7% accuracy in calorie intake estimation. We also assess the impact of incremental training and interactive prompting on the accuracy of calorie intake estimation, and show that they improve the accuracy of dietary monitoring by 49.6% and 29.1%, respectively, compared to a system without these components.
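The food/portion pairing step can be framed as a small assignment problem. The sketch below is purely illustrative: the example sentence, the compatibility scores, and the brute-force search over permutations are our own assumptions, and the paper's actual optimization formulation may differ.

```python
from itertools import permutations

# Hypothetical compatibility scores between food mentions and portion
# phrases parsed from a sentence like
# "I had two slices of pizza and a cup of rice".
foods = ["pizza", "rice"]
portions = ["two slices", "a cup"]
score = {
    ("pizza", "two slices"): 0.9, ("pizza", "a cup"): 0.1,
    ("rice", "two slices"): 0.2, ("rice", "a cup"): 0.8,
}

def best_assignment(foods, portions, score):
    """Exhaustively search one-to-one food/portion pairings (assumes equal
    counts; fine for the handful of mentions found in a single sentence)."""
    best, best_total = None, float("-inf")
    for perm in permutations(portions):
        pairs = list(zip(foods, perm))
        total = sum(score[p] for p in pairs)
        if total > best_total:
            best, best_total = pairs, total
    return best

print(best_assignment(foods, portions, score))
# → [('pizza', 'two slices'), ('rice', 'a cup')]
```

For sentence-sized inputs the factorial search is cheap; a system handling longer inputs would swap in a polynomial assignment algorithm (e.g., the Hungarian method) over the same compatibility scores.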
In this paper, we propose novel techniques to predict a user's movie genre preferences and rating behavior from psycholinguistic attributes obtained from her social media interactions. The motivation for this work comes from psychological studies demonstrating that attributes such as personality and values can influence one's decisions and choices in real life. In this work, we integrate user interactions on Twitter and IMDb to derive interesting relations between human psychological attributes and movie preferences. In particular, we first predict a user's movie genre preferences from personality and value scores derived from her tweets. Second, we develop models to predict a user's movie rating behavior from her tweets on Twitter and her genre and storyline preferences on IMDb. We further strengthen the movie rating model by incorporating user reviews. In these models, we investigate the role of personality and values both independently and in combination when predicting movie genre preferences and rating behavior. We find that our combined models are significantly more accurate than single models built using personality or values alone. We also compare our technique with traditional movie genre and rating prediction techniques. The experimental results show that our models are effective in recommending movies to users.
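One simple way to picture a "combined" model over personality and value features is a weighted blend of similarities in the two feature spaces. Everything below is hypothetical: the Big Five-style scores, the genre "prototypes", and the 50/50 blend weight are invented for illustration and are not the paper's actual models.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical personality scores (5 dims) and value scores (3 dims) for one
# user, plus made-up genre prototypes in the same two feature spaces.
user_personality = [0.8, 0.3, 0.6, 0.5, 0.2]
user_values = [0.7, 0.4, 0.6]
genres = {
    "thriller": ([0.9, 0.2, 0.5, 0.4, 0.3], [0.6, 0.5, 0.5]),
    "romance":  ([0.2, 0.8, 0.4, 0.7, 0.6], [0.3, 0.8, 0.4]),
}

def genre_score(genre, w_personality=0.5):
    """Combined model: weighted blend of personality and value similarity."""
    p_proto, v_proto = genres[genre]
    return (w_personality * cosine(user_personality, p_proto)
            + (1 - w_personality) * cosine(user_values, v_proto))

ranked = sorted(genres, key=genre_score, reverse=True)
print(ranked)  # → ['thriller', 'romance']
```

Setting `w_personality` to 1.0 or 0.0 recovers the single-feature models, which is one way to compare the independent and combined settings on the same footing.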
The explanation interface has been recognized as important in recommender systems because it allows users to better judge the relevance of recommendations to their preferences and hence make more informed decisions. The specific purpose of explanation can differ across product domains. For high-investment products (e.g., digital cameras, laptops), it is crucial to educate the typical new buyer about product knowledge and consequently improve their preference certainty and decision quality. With this objective, we have developed a novel tradeoff-oriented explanation interface that takes into account sentiment features extracted from product reviews to generate recommendations and explanations in a category structure. In this manuscript, we report two user studies conducted on this interface. The first is an online user study (in both before-after and within-subjects setups) that compared our prototype system with a traditional one that considers only static specifications for explanation. The experimental results reveal that adding sentiment-based explanations can help increase users' product knowledge, preference certainty, perceived information usefulness, perceived recommendation transparency and quality, and purchase intention. Inspired by those findings, we performed a follow-up eye-tracking lab experiment to investigate in depth how users view information on the interface. This study shows that integrating sentiment features with static specifications in the tradeoff-oriented explanations prompted users not only to view more recommendations from various categories, but also to spend longer reading explanations. The results also reveal users' inherent need for sentiment information during product evaluation and decision making. Finally, we discuss the work's practical implications from three major aspects: new users, the category interface, and the purpose of explanation.
A proposal for a unified theory of learned trust implemented in a cognitive architecture is presented. A published computational cognitive model of learned trust is critically reviewed. A revised model is proposed to overcome the limitations of the published model and expand its scope of applicability. The revised model integrates several seemingly unrelated categories of findings from the literature on interpersonal and human-machine interactions and makes unintuitive predictions for future studies. The implications of the model for the advancement of the theory on trust are discussed.