
    Multi-Moji: Combining Thermal, Vibrotactile and Visual Stimuli to Expand the Affective Range of Feedback

    This paper explores the combination of multiple concurrent modalities for conveying emotional information in HCI: temperature, vibration and abstract visual displays. Each modality has been studied individually, but can only convey a limited range of emotions within two-dimensional valence-arousal space. This paper is the first to systematically combine multiple modalities to expand the available affective range. Three studies were conducted: Study 1 measured the emotionality of vibrotactile feedback by itself; Study 2 measured the perceived emotional content of three bimodal combinations: vibrotactile + thermal, vibrotactile + visual, and visual + thermal. Study 3 then combined all three modalities. Results show that combining modalities increases the available range of emotional states, particularly in the problematic top-right and bottom-left quadrants of the dimensional model. We also provide a novel lookup resource for designers to identify stimuli to convey a range of emotions.

    Neuroanatomical correlates of perceived usability

    Usability has a distinct subjective component, yet surprisingly little is known about its neural basis and its relation to the neuroanatomy of aesthetics. To begin closing this gap, we conducted two functional magnetic resonance imaging studies in which participants were shown static webpages (in the first study) and videos of interaction with webpages (in the second study). The webpages were controlled so as to exhibit high and low levels of perceived usability and perceived aesthetics. Our results show unique links between perceived usability and brain areas involved in functions such as emotional processing (left fusiform gyrus, superior frontal gyrus), anticipation of physical interaction (precentral gyrus), task intention (anterior cingulate cortex), and linguistic processing (medial and bilateral superior frontal gyri). We use these findings to discuss the brain correlates of perceived usability and the use of fMRI for usability evaluation and for generating new user experiences.

    Orecchio: Extending Body-Language through Actuated Static and Dynamic Auricular Postures

    In this paper, we propose using the auricle – the visible part of the ear – as a means of expressive output to extend body language to convey emotional states. With an initial exploratory study, we provide an initial set of dynamic and static auricular postures. Using these results, we examined the relationship between emotions and auricular postures, noting that dynamic postures involving stretching the top helix at fast (e.g., 2 Hz) and slow (e.g., 1 Hz) speeds conveyed intense and mild pleasantness, while static postures involving bending the side or top helix towards the center of the ear were associated with intense and mild unpleasantness. Based on the results, we developed a prototype (called Orecchio) with miniature motors, custom-made robotic arms and other electronic components. A preliminary user evaluation showed that participants feel more comfortable using expressive auricular postures with people they are familiar with, and that the technique is a welcome addition to the vocabulary of human body language.

    Anthropomorphic Design: Emotional Perception for Deformable Object

    Despite the increasing number of studies on user experience (UX) and user interfaces (UI), few studies have examined emotional interaction between humans and deformable objects. In the current study, we investigated how the anthropomorphic design of a flexible display interacts with emotion. For 101 unique 3D images in which an object was bent at different axes, 281 participants were asked to report in an online survey how strongly the object evoked five elemental emotions (i.e., happiness, disgust, anger, fear, and sadness). People rated the object’s shape using three emotional categories: happiness, disgust–anger, and sadness–fear. It was also found that a combination of axis of bending (horizontal or diagonal) and convexity (bending convexly or concavely) predicted emotional valence, underpinning the anthropomorphic design of flexible displays. Our findings provide empirical evidence that axis of bending and convexity can be important antecedents of emotional interaction with flexible objects, triggering at least three types of emotion in users.

    Emergeables: Deformable Displays for Continuous Eyes-Free Mobile Interaction

    In this paper we present the concept of Emergeables: mobile surfaces that can deform or 'morph' to provide fully-actuated, tangible controls. Our goal in this work is to provide the flexibility of graphical touchscreens, coupled with the affordance and tactile benefits offered by physical widgets. In contrast to previous research in the area of deformable displays, our work focuses on continuous controls (e.g., dials or sliders), and strives for fully-dynamic positioning, providing versatile widgets that can change shape and location depending on the user's needs. We describe the design and implementation of two prototype emergeables built to demonstrate the concept, and present an in-depth evaluation that compares both with a touchscreen alternative. The results show the strong potential of emergeables for on-demand, eyes-free control of continuous parameters, particularly when comparing the accuracy and usability of a high-resolution emergeable to a standard GUI approach. We conclude with a discussion of the level of resolution that is necessary for future emergeables, and suggest how high-resolution versions might be achieved.

    Digital Fabrication Approaches for the Design and Development of Shape-Changing Displays

    Interactive shape-changing displays enable dynamic representations of data and information through physically reconfigurable geometry. The actuated physical deformations of these displays can be utilised in a wide range of new application areas, such as dynamic landscape and topographical modelling, architectural design, physical telepresence and object manipulation. Traditionally, shape-changing displays have a high development cost in terms of mechanical complexity, technical skills and the time and finances required for fabrication. There is still only a limited number of robust shape-changing displays that go beyond one-off prototypes. Specifically, there is limited focus on low-cost, accessible design and development approaches involving digital fabrication (e.g. 3D printing). To address this challenge, this thesis presents accessible digital fabrication approaches that support the development of shape-changing displays with a range of application examples – such as physical terrain modelling and interior design artefacts. Both laser cutting and 3D printing methods have been explored to ensure generalisability and accessibility for a range of potential users. The first design-led content generation explorations show that novice users, from the general public, can successfully design and present their own application ideas using the physical animation features of the display. By engaging with domain experts in designing shape-changing content to represent data specific to their work domains, the thesis demonstrates the utility of shape-changing displays beyond novel systems and describes practical use-case scenarios and applications through rapid prototyping methods. This thesis then demonstrates new ways of designing and building shape-changing displays that go beyond current implementation examples (e.g. pin arrays and continuous-surface shape-changing displays).
    To achieve this, the thesis demonstrates how laser cutting and 3D printing can be utilised to rapidly fabricate deformable surfaces for shape-changing displays with embedded electronics. The thesis concludes with a discussion of research implications and future directions for this work.

    Autonomous behaviour in tangible user interfaces as a design factor

    PhD Thesis. This thesis critically explores the design space of autonomous and actuated artefacts, considering how autonomous behaviours in interactive technologies might shape and influence users’ interactions and behaviours. Since the invention of gearing and clockwork, mechanical devices have been built that both fascinate and intrigue people through their mechanical actuation. There seems to be something magical about moving devices, which draws our attention and piques our interest. Progress in the development of computational hardware is allowing increasingly complex commercial products to become available to broad consumer markets. New technologies emerge very fast, ranging from personal devices with strong computational power to diverse user interfaces, like multi-touch surfaces or gestural input devices. Electronic systems are becoming smaller and smarter, as they combine sensing, control and actuation. From this, new opportunities arise in integrating more sensors and technology into physical objects. These trends raise some specific questions around the impacts smarter systems might have on people and interaction: how do people perceive smart systems that are tangible, and what implications does this perception have for user interface design? Which design opportunities are opened up through smart systems? There is a tendency in humans to attribute life-like qualities to non-animate objects, which evokes social behaviour towards technology. It might be possible to build user interfaces that utilise such behaviours to motivate people towards frequent use, or even motivate them to build relationships in which the users care for their devices. The aim of such interfaces is not to increase efficiency, but to be more engaging to interact with and to excite people to bond with these tangible objects. This thesis sets out to explore autonomous behaviours in physical interfaces.
    More specifically, I am interested in the factors that make a user interpret an interface as autonomous. Through a review of literature concerned with animated objects, autonomous technology and robots, I have mapped out a design space exploring the factors that are important in developing autonomous interfaces. Building on this, and utilising workshops conducted with other researchers, I have developed a framework that identifies key elements for the design of Tangible Autonomous Interfaces (TAIs). To validate the dimensions of this framework, and to further unpack the impacts on users of interacting with autonomous interfaces, I have adopted a ‘research through design’ approach. I have iteratively designed and realised a series of autonomous, interactive prototypes, which demonstrate the potential of such interfaces to establish themselves as social entities. Through two deeper case studies, consisting of an actuated helium balloon and a desktop lamp, I provide insights into how autonomy could be implemented in Tangible User Interfaces. My studies revealed that, through their autonomous behaviour (guided by the framework), these devices established themselves, in interaction, as social entities. They furthermore turned out to be acceptable, especially if people were able to find a purpose for them in their lives. This thesis closes with a discussion of findings and provides specific implications for the design of autonomous behaviour in interfaces.