    NeuroPlace: categorizing urban places according to mental states

    Urban spaces have a great impact on people's emotions and behaviour. A number of factors influence our brain responses to a space. This paper presents a novel urban place recommendation approach based on modelling in-situ EEG data. The research leverages newly affordable electroencephalogram (EEG) headsets, which can sense mental states such as meditation and attention levels. These emerging devices have been used to understand how human brains are affected by the surrounding built environments and natural spaces. In this paper, mobile EEG headsets were used to detect mental states at different types of urban places. By analysing and modelling brain activity data, we were able to classify three different places according to the mental state signatures of the users, and to create an association map that guides and recommends people to therapeutic places which lessen brain fatigue and increase mental rejuvenation. Our mental state classifier achieved an accuracy of 90.8%. NeuroPlace breaks new ground not only as a mobile ubiquitous brain monitoring system for urban computing, but also as a system that can advise urban planners on the impact of specific urban planning policies and structures. We present and discuss the challenges in making our initial prototype more practical, robust, and reliable as part of our ongoing research. In addition, we present some enabling applications using the proposed architecture.
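A minimal sketch of the classification step the abstract describes: mapping EEG-derived mental-state features to one of three place categories. The feature pairs (attention, meditation), the synthetic data, and the nearest-centroid model are all illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: classify urban places from EEG mental-state
# "signatures" (attention, meditation) using a nearest-centroid rule.
# All values and place labels are invented for illustration.
import random
import statistics

random.seed(0)

# Assumed per-place mean (attention, meditation) signatures.
signatures = {"busy street": (0.8, 0.3), "park": (0.4, 0.7), "mall": (0.6, 0.5)}

def sample(mu, n=50, sd=0.05):
    """Draw n synthetic EEG feature windows around a signature."""
    return [(random.gauss(mu[0], sd), random.gauss(mu[1], sd)) for _ in range(n)]

train = {place: sample(mu) for place, mu in signatures.items()}

# Each place is represented by the mean of its training windows.
centroids = {
    place: (statistics.mean(a for a, _ in xs), statistics.mean(m for _, m in xs))
    for place, xs in train.items()
}

def classify(window):
    """Assign a feature window to the place with the closest centroid."""
    return min(centroids, key=lambda p: (window[0] - centroids[p][0]) ** 2
                                        + (window[1] - centroids[p][1]) ** 2)

print(classify((0.78, 0.32)))  # near the "busy street" signature
```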

    Towards estimating computer users' mood from interaction behaviour with keyboard and mouse

    The purpose of this exploratory research was to study the relationship between the mood of computer users and their use of keyboard and mouse, to examine the possibility of creating a generic or individualized mood measure. To examine this, a field study (n = 26) and a controlled study (n = 16) were conducted. In the field study, interaction data and self-reported mood measurements were collected during normal PC use over several days. In the controlled study, participants worked on a programming task while listening to high- or low-arousal background music. Besides subjective mood measurement, galvanic skin response (GSR) data was also collected. The results showed no generic relationship between the interaction data and the mood data. However, the studies found significant average correlations between measured mood and the predictions of personalized regression models based on keyboard and mouse interaction data. Together the results suggest that individualized mood prediction is possible from interaction behaviour with keyboard and mouse.
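The individualized-model idea can be sketched as one regression per user, since the same interaction feature may relate to mood differently across individuals. The "typing speed" feature, the data, and the per-user linear fit below are invented for illustration and are not the study's actual features or models.

```python
# Illustrative sketch: a separate linear regression per user mapping a
# hypothetical interaction feature (typing speed) to self-reported mood.
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Invented data: for one user mood rises with typing speed, for the
# other it falls, so no single generic model would fit both.
users = {
    "user_a": ([3.0, 4.0, 5.0, 6.0], [2.0, 3.0, 4.0, 5.0]),
    "user_b": ([3.0, 4.0, 5.0, 6.0], [5.0, 4.0, 3.0, 2.0]),
}
models = {u: fit_line(xs, ys) for u, (xs, ys) in users.items()}

def predict(user, x):
    """Predict mood for a user from their personal model."""
    a, b = models[user]
    return a * x + b

print(predict("user_a", 5.5), predict("user_b", 5.5))  # 4.5 2.5
```

The design point mirrors the abstract's finding: pooling all users into one model washes out the signal, while per-user models recover it.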

    Toward conducting motivational interviewing with an on-demand clinician avatar for tailored health behavior change interventions

    In this article we describe work in progress on the development of avatar-based personalized assistants that can deliver motivational interviewing health behavior change interventions tailored to their specific users. Our approach combines the latest progress in Embodied Conversational Agents (ECAs), believable agents, and dialog systems. We discuss how we use different platforms to make the personalized health assistant accessible anytime, anywhere.

    Study of Human Affective Response on Multimedia Contents


    eEVA as a Real-Time Multimodal Agent Human-Robot Interface

    We posit that human-robot interfaces that integrate the multimodal communication features of a three-dimensional graphical social virtual agent with a high-degree-of-freedom robot are highly promising. We discuss the modular architecture of an interactive system that integrates two frameworks (our in-house virtual social agent framework and a robot agent framework) to enable social multimodal human-robot interaction with Toyota's Human Support Robot (HSR). We demonstrate HSR greeting gestures using motions inspired by diverse cultures, combined with our virtual social agent interface, and we provide the results of a pilot study designed to assess the effects of our multimodal virtual agent/robot system on users' experience. We discuss future directions for social interaction with a combined virtual agent/robot system.
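The modular integration the abstract describes can be sketched as two output modules behind a common interface, so one coordinator drives both the on-screen agent and the robot. Class and method names below are invented for illustration; the paper's actual architecture differs.

```python
# Hypothetical sketch: virtual agent and robot share one interface,
# letting a coordinator fan a communicative act out to both modalities.
from abc import ABC, abstractmethod

class OutputModule(ABC):
    @abstractmethod
    def perform(self, act: str) -> str:
        """Render one communicative act in this modality."""

class VirtualAgent(OutputModule):
    def perform(self, act):
        return f"avatar: speaks and animates '{act}'"

class RobotAgent(OutputModule):
    def perform(self, act):
        return f"robot: executes gesture for '{act}'"

def coordinate(modules, act):
    # Send the same act to every registered modality in turn.
    return [m.perform(act) for m in modules]

print(coordinate([VirtualAgent(), RobotAgent()], "greeting"))
```

Keeping the robot behind the same interface as the avatar is what lets the system swap or add embodiments without touching the dialog side.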