
    Designing an Educational and Intelligent Human-Computer Interface for Older Adults

    As computing devices continue to become more heavily integrated into our lives, proper design of human-computer interfaces becomes an increasingly important topic of discussion. Efficient and useful human-computer interfaces need to take into account the abilities of the humans who will be using such interfaces, and adapt to the difficulties that different users may encounter – such as those particular to older users. However, various issues in the design of human-computer interfaces for older users still exist: older adults display a wide variance of ability, which can be difficult to design for. Motions and notions found intuitive by younger users can be anything but for the older user. Properly-designed devices must also assist without injuring the pride and independence of the users – thus, it’s understood that devices designed “for the elderly” may encounter a poor reception when introduced to the ageing community. Affective computing gives current researchers in HCI a useful opportunity to develop applications with interfaces that detect mood and attention via nonverbal cues and take appropriate actions accordingly. Current work in affective computing applications with older adult users points to possibilities for reducing feelings of loneliness in the older adult population. However, we believe that everyday applications – such as chat programs or operating systems – can also take advantage of affective computing principles to make themselves more accessible to older adults via communication enhancement. In this thesis, we document a variety of work in the field of developing human-computer interfaces for the older adult user, and the requirements each of these studies confirms regarding human-computer interaction design for the elderly. We then explain how the integration of affective computing can positively affect these designs, and outline a design approach for proper human-computer interfaces for the elderly which takes affective computing principles into account. We then develop a case study around a chat application – ChitChat – which takes these principles and guidelines into account from the beginning, and give several examples of real-world applications also built with these guidelines. Finally, we conclude by summarizing the broader impacts of this work.
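
    To make the adaptation loop concrete, the following is a minimal sketch of how an affect-aware application of the kind described might map detected mood and attention to interface adjustments. The thesis does not publish ChitChat's implementation; the class, thresholds, and action names below are illustrative assumptions only.

```python
# Hypothetical sketch only: not ChitChat's actual code.
# Maps an estimated affective state to candidate interface adaptations.
from dataclasses import dataclass

@dataclass
class AffectEstimate:
    confusion: float  # 0.0 (none) to 1.0 (high), e.g. from facial cues
    attention: float  # 0.0 (distracted) to 1.0 (focused)

def choose_adaptations(affect: AffectEstimate) -> list[str]:
    """Return interface adaptations suggested by nonverbal cues.

    Thresholds are illustrative; a real system would tune them per user,
    which matters given the wide variance of ability among older adults.
    """
    actions = []
    if affect.confusion > 0.7:
        actions.append("enlarge_text")         # reduce perceptual load
        actions.append("offer_help_prompt")    # assist without being patronizing
    if affect.attention < 0.3:
        actions.append("defer_notifications")  # avoid interrupting a struggling user
    return actions

print(choose_adaptations(AffectEstimate(confusion=0.8, attention=0.2)))
# ['enlarge_text', 'offer_help_prompt', 'defer_notifications']
```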

    Predictive biometrics: A review and analysis of predicting personal characteristics from biometric data

    Interest in the exploitation of soft biometrics information has continued to develop over the last decade or so. In comparison with traditional biometrics, which focuses principally on person identification, the idea of soft biometrics processing is to study the utilisation of more general information regarding a system user, which is not necessarily unique. There are increasing indications that this type of data will have great value in providing complementary information for user authentication. However, the authors have also seen a growing interest in broadening the predictive capabilities of biometric data, encompassing both easily definable characteristics such as subject age and, most recently, 'higher level' characteristics such as emotional or mental states. This study presents a selective review of the predictive capabilities, in the widest sense, of biometric data processing, providing an analysis of the key issues still to be adequately addressed if this concept of predictive biometrics is to be fully exploited in the future.

    The Effects of Cognitive Disequilibrium on Student Question Generation While Interacting with AutoTutor

    The purpose of this study was to test the effects of cognitive disequilibrium on student question generation while interacting with an intelligent tutoring system. Students were placed in a state of cognitive disequilibrium while they interacted with AutoTutor on topics of computer literacy. The students were tutored on three topics in computer literacy: hardware, operating systems, and the Internet. During the course of the study, a confederate was present to answer any questions that the participant may have had. Additional analyses examined any potential influence the confederates had on student question asking. Lastly, the study explored the relationship between emotions and cognitive disequilibrium. More specifically, the study examined the temporal relationship between confusion and student-generated questions. Based on previous cognitive disequilibrium literature, it was predicted that students who were placed in a state of cognitive disequilibrium would generate a significantly higher proportion of questions than participants who were not placed in a state of cognitive disequilibrium. Additionally, it was predicted that students who were placed in a state of cognitive disequilibrium would generate “better” questions than participants who were not in a state of cognitive disequilibrium. Results revealed that participants who were not placed in a state of cognitive disequilibrium generated a significantly higher proportion of questions. Furthermore, there were no significant differences found between participants for deep or intermediate questions. Results did reveal significant main effects as a function of time for certain action units. Lastly, it was discovered that certain measures of individual differences were significant predictors of student question generation.
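
    The central comparison in such a study is between the proportions of questions generated under the disequilibrium and control conditions. The abstract does not state which statistical test was used; the sketch below shows one standard way such a difference in proportions might be tested, with invented counts.

```python
# Invented counts for illustration; not the study's data.
from scipy.stats import chi2_contingency

# Rows: condition (control, disequilibrium); columns: (questions, other turns).
# The study found the control condition produced the higher proportion.
table = [[60, 140],   # control: 60 questions out of 200 student turns
         [35, 165]]   # disequilibrium: 35 questions out of 200 student turns

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # small p: proportions differ reliably
```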

    Before they can teach they must talk: on some aspects of human-computer interaction


    Human desire inference process and analysis

    Ubiquitous computing has become an increasingly fascinating research area, since it may offer an unobtrusive way to help users in environments that integrate surrounding objects and activities. To date, numerous studies have focused on how a user's activity can be identified and predicted, without considering the motivation driving an action. However, understanding the underlying motivation is key to activity analysis. On the other hand, a user's desires often generate the motivation to engage in activities that fulfill them. Thus, we must study users' desires in order to provide proper services that make their lives more comfortable. In this study, we present how to design and implement a computational model for inferring a user's desire. First, we devised a hierarchical desire inference process based on Bayesian Belief Networks (BBNs) that considers the affective states, behavior contexts and environmental contexts of a user at given points in time to infer the user's desire. The inferred desire with the highest probability from the BBNs is then used in the subsequent decision making. Second, we extended this into a probabilistic framework based on Dynamic Bayesian Belief Networks (DBBNs), which models the observation sequences and draws on information theory. A generic hierarchical probabilistic framework for desire inference is introduced to model the context information and the visual sensory observations. This framework also dynamically evolves to account for temporal changes in context information along with changes in the user's desire. Third, we described which factors are relevant to determining a user's desire. To achieve this, a full-scale experiment was conducted. Raw data from sensors were interpreted as context information. We observed users' activities and recorded their emotions as part of the input parameters. Throughout the experiment, a complete analysis was conducted in which 30 factors were considered, and the most relevant factors were selected using correlation coefficients and delta values. Our results show that 11 factors (3 emotions, 7 behaviors and 1 location factor) are relevant to inferring a user's desire. Finally, we established an evaluation environment within the Smart Home Lab to validate our approach. In order to train and verify the desire inference model, multiple stimuli were provided to induce users' desires, and pilot data were collected during the experiments. For evaluation, we used recall and precision as basic measures. As a result, average precision was calculated to be 85% for human desire inference and 81% for Think-Aloud.
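
    As a rough illustration of the inference step, the sketch below computes a posterior over candidate desires from emotion, behavior, and location evidence. It collapses the study's hierarchical BBN/DBBN into a single naive-Bayes layer (evidence variables assumed conditionally independent given the desire), and every prior and likelihood value is invented for illustration.

```python
# Simplified, self-contained sketch of desire inference from context.
# All probabilities are invented; the real model is hierarchical and temporal.

PRIOR = {"drink": 0.3, "rest": 0.4, "entertainment": 0.3}

# P(observed value | desire) for each evidence variable.
LIKELIHOOD = {
    "emotion":  {"tired":   {"drink": 0.2, "rest": 0.7, "entertainment": 0.1}},
    "behavior": {"sitting": {"drink": 0.3, "rest": 0.5, "entertainment": 0.4}},
    "location": {"kitchen": {"drink": 0.7, "rest": 0.1, "entertainment": 0.2}},
}

def infer_desire(emotion: str, behavior: str, location: str) -> dict:
    """Return the normalised posterior P(desire | evidence)."""
    evidence = {"emotion": emotion, "behavior": behavior, "location": location}
    posterior = {}
    for desire, prior in PRIOR.items():
        p = prior
        for var, value in evidence.items():
            p *= LIKELIHOOD[var][value][desire]
        posterior[desire] = p
    total = sum(posterior.values())
    return {d: p / total for d, p in posterior.items()}

print(infer_desire("tired", "sitting", "kitchen"))
# The desire with the highest posterior drives subsequent decision making.
```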

    A framework for human-like behavior in an immersive virtual world

    Just as readers feel immersed when the story-line adheres to their experiences, users will more easily feel immersed in a virtual environment if the behavior of the characters in that environment adheres to their expectations, based on their life-long observations in the real world. This paper introduces a framework that allows authors to establish natural, human-like behavior, physical interaction and emotional engagement of characters living in a virtual environment. Represented by realistic virtual characters, this framework allows people to feel immersed in an Internet-based virtual world in which they can meet and share experiences in a natural way, just as they would in real life. Rather than just being visualized in a 3D space, the virtual characters (autonomous agents as well as avatars representing users) in the immersive environment facilitate social interaction and multi-party collaboration, mixing the virtual with the real.

    Virtual environments promoting interaction

    Virtual reality (VR) has been widely researched in the academic environment and is now breaking into industry. Regular companies do not have access to this technology as a collaboration tool because these solutions usually require specific devices that are not at hand for the common office user. There are other collaboration platforms based on video, speech and text, but VR allows users to share the same 3D space. In this 3D space, functionalities or information can be added that would not be possible in a real-world environment, something intrinsic to VR. This dissertation has produced a 3D framework that promotes nonverbal communication, which plays a fundamental role in human interaction and is mostly based on emotion. In academia, confusion is known to influence learning gains if it is properly managed. We designed a study to evaluate how lexical, syntactic and n-gram features influence perceived confusion, and found results (not statistically significant) suggesting that it is possible to build a machine learning model that predicts the level of confusion from these features. This model was used to manipulate the script of a given presentation, and user feedback shows a trend whereby manipulating these features to theoretically lower the level of confusion in a text not only reduces reported confusion but also increases the reported sense of presence. Another contribution of this dissertation comes from the intrinsic features of a 3D environment, where one can carry out actions that are not possible in the real world. We designed an automatic adaptive lighting system that reacts to the user's perceived engagement. This hypothesis was partially rejected, as the results go against what we hypothesized but lack statistical significance. Three lines of research may stem from this dissertation. First, more complex features, such as syntax trees, could be used to train the machine learning model. Also, in an Intelligent Tutoring System, this model could adjust the avatar's speech in real time if fed by a real-time confusion detector. In a social scenario, the set of basic emotions is well suited and can enrich the interaction. Facial emotion recognition can extend this effect to the avatar's body to fuel this synchronization and increase the sense of presence. Finally, we based this dissertation on the premise of using ubiquitous devices, but with the rapid evolution of technology we should consider that new devices will be present in offices. This opens new possibilities for other modalities.
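
    Below is a minimal sketch of the kind of confusion predictor described above, using lexical and n-gram features; syntactic features (e.g., syntax trees) would require a parser and are left out. The texts, labels, and model choice are placeholders, not the dissertation's actual pipeline.

```python
# Placeholder data and model; the dissertation's actual pipeline differs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "the results are shown in the next slide",
    "notwithstanding the aforementioned epistemological caveats",
]
labels = [0, 1]  # 0 = low perceived confusion, 1 = high (illustrative)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3)),  # unigram-to-trigram lexical features
    LogisticRegression(),
)
model.fit(texts, labels)
print(model.predict(["a short, plain sentence"]))  # predicted confusion level
```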

    Embodied interaction with visualization and spatial navigation in time-sensitive scenarios

    Paraphrasing the theory of embodied cognition, all aspects of our cognition are determined primarily by contextual information and by the means of physical interaction with data and information. In hybrid human-machine systems involving complex decision making, continuously maintaining a high level of attention, together with a deep understanding of the task performed and its context, is essential. Utilizing embodied interaction to interact with machines has the potential to promote thinking and learning, according to the theory of embodied cognition proposed by Lakoff. Additionally, a hybrid human-machine system utilizing natural and intuitive communication channels (e.g., gestures, speech, and body stances) should afford an array of cognitive benefits outstripping the more static forms of interaction (e.g., computer keyboard). This research proposes such a computational framework based on a Bayesian approach; this framework infers the operator's focus of attention from the operator's physical expressions. Specifically, this work aims to assess the effect of embodied interaction on attention during the solution of complex, time-sensitive, spatial navigational problems. Toward the goal of assessing the level of the operator's attention, we present a method linking the operator's interaction utility, inference, and reasoning. The level of attention was inferred through networks coined Bayesian Attentional Networks (BANs). BANs are structures describing cause-effect relationships between the operator's attention, physical actions and decision-making. The proposed framework also generated a representative BAN, called the Consensus (Majority) Model (CMM); the CMM consists of a graph iteratively derived from, and agreed upon among, candidate BANs obtained from experts and from the automatic learning process. Finally, the best combinations of interaction modalities and feedback were determined by the use of particular utility functions. This methodology was applied to a spatial navigational scenario wherein the operators interacted with dynamic images through a series of decision-making processes. Real-world experiments were conducted to assess the framework's ability to infer the operator's levels of attention. Users were instructed to complete a series of spatial-navigational tasks using an assigned pairing of an interaction modality out of five categories (vision-based gesture, glove-based gesture, speech, feet, or body balance) and a feedback modality out of two (visual-based or auditory-based). Experimental results confirmed that physical expressions are a determining factor in the quality of the solutions in a spatial navigational problem. Moreover, it was found that the combination of foot gestures with visual feedback resulted in the best task performance (p < .001). Results have also shown that the embodied interaction-based multimodal interface decreased execution errors in the cyber-physical scenarios (p < .001). We therefore conclude that appropriate use of interaction and feedback modalities allows operators to maintain their focus of attention, reduce errors, and enhance task performance in solving decision-making problems.
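
    As an illustration of the final step, the sketch below scores (interaction, feedback) pairings with a simple utility over inferred attention, execution errors, and completion time. The thesis's actual utility functions and measurements are not reproduced; all weights and numbers here are invented, though the winning pairing matches the reported foot-gesture-plus-visual-feedback result.

```python
# All weights and measurements below are invented for illustration.

def utility(attention: float, errors: float, time_s: float) -> float:
    # Reward inferred attention; penalize execution errors and completion time.
    return 2.0 * attention - 1.5 * errors - 0.01 * time_s

# (interaction modality, feedback modality) -> (attention, errors, time in s)
measures = {
    ("feet", "visual"):           (0.90, 0.5, 120.0),
    ("speech", "auditory"):       (0.75, 1.2, 140.0),
    ("vision_gesture", "visual"): (0.80, 0.9, 130.0),
}

best = max(measures, key=lambda pair: utility(*measures[pair]))
print(best)  # ('feet', 'visual'): the highest-utility pairing on these numbers
```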