Throughout nearly all sectors of society, humans increasingly need to interact with intelligent systems. Just as in human-human interactions, trust plays a critical role in the success of human-machine interactions. Leveraging this role requires the design of intelligent systems that are responsive to changes in human trust level, which in turn necessitates an online trust sensor. In this paper, it is shown that psychophysiological measurements can be used to sense human trust in intelligent systems in real time. Two approaches for developing classifier-based empirical trust sensor models are presented that map psychophysiological measurements, specifically electroencephalography (EEG) and galvanic skin response (GSR), to human trust level. Human subject data collected from 33 participants were used for feature extraction, feature selection, classifier training, and model validation. The first approach uses a common set of psychophysiological features across all participants as the input variables, resulting in a general trust sensor model. The second approach selects a customized feature set for each individual and trains a classifier-based model on that feature set, resulting in improved mean accuracy but at the expense of increased training time. This work represents the first use of real-time psychophysiological measurements for the development of a human trust sensor; implications of the work for the design of trust management algorithms for intelligent systems are also discussed.
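As a rough sketch of the classifier-based approach described above, the snippet below trains an SVM to map per-epoch psychophysiological features to a binary trust label. The feature names, the synthetic data, and the choice of an RBF SVM are illustrative assumptions, not the paper's exact pipeline.

```python
# Illustrative sketch: mapping psychophysiological features to a binary
# trust label with a classifier. Features and data are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical per-epoch features, e.g. [alpha power, beta power,
# GSR mean, GSR peak count]; real features would come from EEG/GSR.
n_epochs = 200
X = rng.normal(size=(n_epochs, 4))
w = np.array([1.0, -0.8, 0.6, 0.4])
y = (X @ w + rng.normal(scale=0.5, size=n_epochs) > 0).astype(int)  # 1 = trust

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)          # fit scaling on training data only
clf = SVC(kernel="rbf").fit(scaler.transform(X_tr), y_tr)
acc = clf.score(scaler.transform(X_te), y_te)
print(f"held-out accuracy: {acc:.2f}")
```

The same skeleton covers both approaches in the abstract: the general model uses one shared feature set for everyone, while the customized variant would repeat feature selection and training per participant.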
The automatic detection and classification of stance (e.g., certainty or agreement) in text data using natural language processing and machine learning methods create an opportunity to gain insight into speakers' attitudes towards their own and other people's utterances. However, identifying stance in text presents many challenges related to training data collection and classifier training. To facilitate the entire process of training a stance classifier, we propose a visual analytics approach, called ALVA, for text data annotation and visualization. ALVA's interplay with the stance classifier follows an active learning strategy to select suitable candidate utterances for manual annotation. Our approach supports annotation process management and provides annotators with a clean user interface for labeling utterances with multiple stance categories. ALVA also contains a visualization method that helps analysts of the annotation and training process gain a better understanding of the categories used by the annotators. The visualization uses a novel visual representation, called CatCombos, which groups individual annotation items by their combination of stance categories. Additionally, our system provides a visualization of an utterance-based vector space model. ALVA is already being used by our domain experts in linguistics and computational linguistics to improve the understanding of stance phenomena and to build a stance classifier for applications such as social media monitoring.
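The active learning strategy mentioned above can be illustrated with pool-based uncertainty (margin) sampling, a common way to pick candidate utterances for manual annotation. The `select_for_annotation` helper, the toy probabilities, and the batch size are hypothetical, not ALVA's actual implementation.

```python
# Minimal sketch of pool-based active learning via margin sampling:
# ask annotators to label the utterances the classifier is least sure about.
import numpy as np

def select_for_annotation(proba, batch_size=3):
    """Pick the unlabeled utterances with the smallest top-2 class margin.

    proba: (n_utterances, n_classes) predicted stance-class probabilities.
    Returns indices sorted by ascending margin (most uncertain first).
    """
    part = np.sort(proba, axis=1)
    margin = part[:, -1] - part[:, -2]       # small margin = high uncertainty
    return np.argsort(margin, kind="stable")[:batch_size]

# Toy predictions for five utterances over three stance categories.
proba = np.array([
    [0.90, 0.05, 0.05],   # confident
    [0.40, 0.35, 0.25],   # uncertain
    [0.34, 0.33, 0.33],   # most uncertain
    [0.70, 0.20, 0.10],
    [0.50, 0.44, 0.06],   # uncertain
])
picked = select_for_annotation(proba)
print(picked)  # indices of the utterances to send to annotators
```

After the selected utterances are labeled, the classifier is retrained and the cycle repeats, which is the interplay between ALVA and the classifier described above.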
This paper presents a novel smart eyewear that recognizes the wearer's facial expressions in daily scenes. We evaluated our device and demonstrated its robustness to noise from changes in the wearer's facial direction, to repeated wearing, and to positional drift of the glasses. The device uses embedded photo reflective sensors and machine learning to recognize the wearer's facial expressions, leveraging the skin deformation that occurs when the wearer changes facial expression. With small photo reflective sensors, we measure the proximity between the skin surface of the face and the eyewear frame, into which 17 sensors are integrated. A Support Vector Machine (SVM) algorithm was applied to the sensor data. The sensors can cover various facial muscle movements and can be integrated into everyday glasses. Possible application scenarios for our device include care systems for older adults and mental health management. The main contributions of our work are as follows. (1) We evaluated the recognition accuracy in daily scenes. The device achieved 92.8% accuracy regardless of facial direction and of taking the glasses on and off, when trained on data covering those conditions. It recognized facial expressions with 78.1% accuracy under repeated wearing and with 87.7% accuracy under positional drift. (2) The device is designed and implemented with social acceptability in mind: it looks like normal eyewear, so users can wear it anytime, anywhere. (3) Initial field trials in daily life were undertaken. Our work is one of the first attempts to recognize and evaluate a variety of facial expressions in the form of an unobtrusive wearable.
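A minimal sketch of the sensing pipeline described above, assuming each sample is a 17-dimensional vector of skin-proximity readings classified into a facial expression with an SVM. The synthetic data and the three-expression set are illustrative assumptions, not the device's recorded data.

```python
# Sketch: 17 photo-reflective proximity readings per sample, one SVM
# classifier over expression classes. Data are simulated for illustration.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
expressions = ["neutral", "smile", "frown"]   # hypothetical label set
n_per_class, n_sensors = 60, 17

# Simulate class-dependent proximity patterns around distinct baselines,
# standing in for skin deformation under different expressions.
X = np.vstack([rng.normal(loc=k, scale=0.8, size=(n_per_class, n_sensors))
               for k in range(len(expressions))])
y = np.repeat(np.arange(len(expressions)), n_per_class)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```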
Full-body human movement is characterized by fine-grained expressive qualities that humans can easily exhibit and recognize in others' movements. In sports (e.g., martial arts) as well as in performing arts (e.g., dance), the same sequence of movements can be performed in a wide range of ways characterized by different qualities, often in terms of subtle (spatial and temporal) perturbations of the movement. Even a non-expert observer can distinguish between a top-level and an average performance by a dancer or martial artist. The difference lies not in the movements performed, which are the same in both cases, but in the quality of their performance. In this paper, we present a computational framework aiming at an automated approximate measure of movement quality in full-body physical activities. Starting from motion capture data, the framework computes low-level (e.g., the velocity of a limb) and high-level (e.g., the synchronization between different limbs) movement features. This vector of features is then integrated into a single value that provides a quantitative assessment of movement quality, approximating the evaluation an external expert observer would give of the same sequence of movements. Next, a system representing a concrete implementation of the framework is proposed, with karate adopted as a testbed. We selected two different katas (i.e., detailed choreographies of movements in karate), characterized by different overall attitudes and expression (aggressiveness, meditation), and asked seven athletes of varying experience levels and ages to perform them. Motion capture data were collected from the performances and analyzed with the system. The results of the automated analysis were compared with the scores given by fourteen karate experts who rated the same performances.
Results show that the movement quality scores computed by the system and the ratings given by the human observers are highly correlated (Pearson's correlations r = 0.84, p = 0.001 and r = 0.75, p = 0.005).
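The agreement measure reported above, Pearson's r, can be computed directly from paired scores as follows. The system scores and expert ratings below are invented for illustration; they are not the study's data.

```python
# Pearson's correlation coefficient between two paired score lists.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-performance values: system's quality score vs.
# an expert's rating (made up for this sketch).
system_scores = [0.61, 0.74, 0.55, 0.80, 0.68, 0.90, 0.47]
expert_ratings = [6.0, 7.5, 5.0, 8.0, 7.0, 9.0, 4.5]
print(f"r = {pearson_r(system_scores, expert_ratings):.2f}")
```

A value near 1 indicates that the automated scores rank performances similarly to the human judges, which is what the reported r = 0.84 and r = 0.75 convey.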
Research impact plays a critical role in evaluating the research quality and influence of a scholar, a journal, or a conference. Many researchers have attempted to quantify research impact by introducing metrics based on citation data, such as the h-index, citation count, and impact factor. These metrics are widely used in the academic community. However, quantitative metrics are highly aggregated in most cases and sometimes biased, which can result in the loss of impact details that are important for comprehensively understanding research impact. For example, in which research areas does a researcher have great impact? How does that impact change over time? How do collaborators affect an individual's research impact? Simple quantitative metrics can hardly help answer such questions, since they require a more detailed exploration of the citation data. Previous work on visualizing citation data usually shows only limited aspects of research impact and may suffer from other problems, including visual clutter and scalability issues. To fill this gap, we propose an interactive visualization tool, ImpactVis, for better exploration of research impact through citation data. Case studies and in-depth expert interviews are conducted to demonstrate the effectiveness of ImpactVis.
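Of the metrics mentioned above, the h-index is the simplest to make concrete: it is the largest h such that the author has h papers each cited at least h times. A minimal implementation (the citation counts are illustrative):

```python
# h-index: largest h such that h papers have >= h citations each.
def h_index(citations):
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i        # the i-th best paper still has >= i citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

Note how aggregated the result is: two authors with very different citation distributions can share the same h-index, which is exactly the loss of detail that motivates tools like ImpactVis.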