
    Exploring the Affective Loop

    Research in psychology and neurology shows that both body and mind are involved when experiencing emotions (Damasio 1994, Davidson et al. 2003). People are also very physical when they try to communicate their emotions. Somewhere in between being consciously and unconsciously aware of it ourselves, we produce both verbal and physical signs to make other people understand how we feel. Simultaneously, this production of signs involves us in a stronger personal experience of the emotions we express. Emotions are also communicated in the digital world, but there is little focus on users' personal and physical experience of emotions in the available digital media. To explore whether and how we can expand existing media, we have designed, implemented and evaluated /eMoto/, a mobile service for sending affective messages to others. With eMoto, we explicitly aim to address both the cognitive and the physical experience of human emotions. By combining affective gestures for input with affective expressions that use colors, shapes and animations for the background of messages, the interaction "pulls" the user into an /affective loop/. In this thesis we define what we mean by affective loop and present a user-centered design approach expressed through four design principles inspired by previous work within Human-Computer Interaction (HCI) but adjusted to our purposes: /embodiment/ (Dourish 2001) as a means to address how people communicate emotions in real life; /flow/ (Csikszentmihalyi 1990) to reach a state of involvement that goes further than the current context; /ambiguity/ of the designed expressions (Gaver et al. 2003) to allow for open-ended interpretation by the end-users instead of simplistic one-emotion, one-expression pairs; and /natural but designed expressions/ to address people's natural couplings between cognitively and physically experienced emotions.
We also present results from an end-user study of eMoto indicating that subjects became both physically and emotionally involved in the interaction, and that the designed "openness" and ambiguity of the expressions was appreciated and understood by our subjects. Through the user study, we identified four potential design problems that have to be tackled in order to achieve an affective loop effect: the extent to which users /feel in control/ of the interaction; /harmony and coherence/ between cognitive and physical expressions; /timing/ of expressions and feedback in a communicational setting; and effects of users' /personality/ on their emotional expressions and experiences of the interaction.
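The gesture-to-expression coupling described above can be sketched as a small mapping from gesture qualities to expression parameters. Everything in this sketch is an illustrative assumption, not eMoto's actual design: the abstract only tells us that gesture input drives ambiguous background expressions (color, shape, animation) rather than fixed one-emotion, one-expression labels.

```python
# Hypothetical sketch of an affective-loop coupling: gesture qualities
# drive ambiguous background expressions. All mappings here are assumed
# for illustration; they are not eMoto's actual parameters.

def expression(effort, shape):
    """Map normalized gesture qualities in [0, 1] to expression parameters."""
    return {
        # assumed: high-effort gestures yield warm, saturated colors
        "hue": "warm" if effort > 0.5 else "cool",
        # assumed: expansive gestures yield large, rounded shapes
        "shape": "large-rounded" if shape > 0.5 else "small-angular",
        # assumed: animation tempo follows effort, sustaining the felt coupling
        "tempo": round(0.2 + 0.8 * effort, 2),
    }

print(expression(effort=0.9, shape=0.3))
```

Because the outputs stay continuous and open-ended rather than naming an emotion, a mapping like this leaves interpretation to the users, in the spirit of the /ambiguity/ principle.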

    Designing gestures for affective input: an analysis of shape, effort and valence

    We discuss a user-centered approach to incorporating affective expressions in interactive applications, and argue for a design that addresses both body and mind. In particular, we have studied the problem of finding a set of affective gestures. Based on previous work in movement analysis and emotion theory [Davies; Laban and Lawrence; Russell], and on a study of an actor expressing emotional states in body movements, we have identified three underlying dimensions of movements and emotions: shape, effort and valence. From these dimensions we have created a new affective interaction model, which we name the affective gestural plane model. We applied this model to the design of gestural affective input to a mobile service for affective messages.
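A three-dimensional model like this can be made concrete as a data structure plus a placement function. The sketch below is a minimal, assumed rendering of the idea: the three dimensions (shape, effort, valence) are treated as normalized scores, and the region labels are illustrative only, not the authors' actual mapping in the affective gestural plane model.

```python
from dataclasses import dataclass

# Hypothetical sketch of placing a gesture in a shape-effort plane,
# refined by valence. Score ranges and labels are assumptions made
# for illustration, not the paper's actual model.

@dataclass
class Gesture:
    shape: float    # -1 = contracted/closed .. +1 = expansive/open
    effort: float   # -1 = light/slow        .. +1 = strong/fast
    valence: float  # -1 = negative          .. +1 = positive

def plane_region(g: Gesture) -> str:
    """Name the quadrant of the shape-effort plane, qualified by valence."""
    if g.effort >= 0:
        quadrant = "excited" if g.shape >= 0 else "tense"
    else:
        quadrant = "calm" if g.shape >= 0 else "withdrawn"
    polarity = "positive" if g.valence >= 0 else "negative"
    return f"{quadrant} ({polarity})"

# An open, energetic, pleasant gesture lands in the excited/positive region.
print(plane_region(Gesture(shape=0.8, effort=0.7, valence=0.9)))
```

Representing the dimensions continuously, rather than as discrete emotion labels, matches the ambiguity-preserving design the surrounding abstracts argue for.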

    Proceedings of the 2nd IUI Workshop on Interacting with Smart Objects

    These are the proceedings of the 2nd IUI Workshop on Interacting with Smart Objects. Objects that we use in our everyday lives are expanding beyond their restricted interaction capabilities and provide functionality that goes far beyond their original purpose. They feature computing capabilities and are thus able to capture information, process and store it, and interact with their environments, turning them into smart objects.

    Emerging technologies for learning report (volume 3)


    Freeform User Interfaces for Graphical Computing

    Report number: 甲15222; date of degree conferral: 2000-03-29; type of degree: doctorate by coursework (課程博士); degree: Doctor of Engineering; degree certificate number: 博工第4717号; graduate school / department: Graduate School of Engineering, Information Engineering

    Kehollistuneet vuorovaikutuskoreografiat (Embodied interaction choreographies: a kinesthetic approach to the design of intelligent environments)

    This research investigates interaction design through the application of the concept of choreography. Special attention is paid to assessing what kinds of influence technological designs have on the user's body and movements. The choreographic approach to interaction design emphasizes the felt experience of movement as content for interaction design and offers methods for conducting multi-level choreographic analysis. The concept of kinesthesia, which refers to the felt sensation of movement, is regarded as foundational for both understanding and realizing the choreographic analysis. The choreographic method is applied in studying a future vision of intelligent information and communication environments. An intelligent environment refers to a development in which objects in everyday environments become connected and form a communicating-actuating network that can collect information about the environment and its users, and enables processing of this information to serve the user's needs. The research data consist of two visions of intelligent environments in video format, introduced by Microsoft. The visions are analyzed through choreographic analysis with the intention of investigating interactions between the user, the intelligent environment and the computer system. Micro-level choreographic analysis focuses on how the user experiences choreographies as movement continuums; local-level choreographies that address the broader interaction context are also analyzed. Task-based analysis focuses on two functions: first, sending and fetching digital information and, second, real-time re-modelling of data and visualizations. A phenomenological methodology that enabled embodiment of the choreographies through dancing was applied in the analysis. Dancing aimed at internalizing the choreographies and enabled the analysis of the felt sensation of movement.
A key finding of the study is that choreographic analysis and a hermeneutics of the body work well in tandem when conducting case-study research on intelligent ICT environments. Dancing is considered a choreographic practice that provides understanding of the unfolding of interactions in space, time and movement. Furthermore, dancing integrates the designer's explicit technological information into the design context and highlights the kinesthetic dimension of interaction. The presented methods provide relevant support for defining technological systems in intelligent ICT environments that are grounded in the embodied experience of interaction. I suggest that 'dancing as choreographic practice' be applied in the user-centered design of intelligent information and communication environments.

    Designing Interfaces to Support Collaboration in Information Retrieval

    Information retrieval systems should acknowledge the existence of collaboration in the search process. Collaboration can help users to be more effective both in learning systems and in using them. We consider some issues of viewing interfaces to information retrieval systems as collaborative notations and how to build systems that more actively support collaboration. We describe a system that embodies just one kind of explicit support: a graphical representation of the search process that can be manipulated and discussed by the users. By acknowledging the importance of other people in the search process, we can develop systems that not only improve help-giving by people but also lead to a more robust search activity, better able to cope with, and indeed exploit, the failures of any intelligent agents used.
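A manipulable, shared representation of the search process can be sketched as a tree of query nodes that collaborators annotate. This is only an assumed rendering of the kind of graphical notation the abstract describes; the class and field names below are hypothetical, not the paper's actual system.

```python
# Minimal sketch of a shared search-process representation: a tree of
# query refinements that collaborating users can annotate and discuss.
# All names here are illustrative assumptions.

class SearchNode:
    def __init__(self, query, parent=None):
        self.query = query
        self.parent = parent
        self.children = []
        self.annotations = []  # (user, comment) pairs left by collaborators
        if parent is not None:
            parent.children.append(self)

    def annotate(self, user, comment):
        """Attach a collaborator's remark to this step of the search."""
        self.annotations.append((user, comment))

    def path(self):
        """Reconstruct the refinement history leading to this query."""
        node, trail = self, []
        while node is not None:
            trail.append(node.query)
            node = node.parent
        return list(reversed(trail))

root = SearchNode("information retrieval")
refined = SearchNode("collaborative information retrieval", parent=root)
refined.annotate("alice", "try adding 'interfaces' as a term")
print(refined.path())  # ['information retrieval', 'collaborative information retrieval']
```

Making the search history an explicit, annotatable object is what turns the interface into a collaborative notation: the trail itself becomes something users can point at and discuss.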

    Interaction Design: Foundations, Experiments

    Interaction Design: Foundations, Experiments is the result of a series of projects, experiments and curricula aimed at investigating the foundations of interaction design in particular and design research in general. The first part of the book - Foundations - deals with foundational theoretical issues in interaction design. An analysis of two categorical mistakes (the empirical and interactive fallacies) forms a background to a discussion of interaction design as act design and of computational technology as material in design. The second part of the book - Experiments - describes a range of design methods, programs and examples that have been used to probe foundational issues through systematic questioning of what is given. Based on experimental design work such as Slow Technology, Abstract Information Displays, Design for Sound Hiders, Zero Expression Fashion, and IT+Textiles, this section also explores how design experiments can play a central role when developing new design theory.

    To Draw or Not to Draw: Recognizing Stroke-Hover Intent in Gesture-Free Bare-Hand Mid-Air Drawing Tasks

    Over the past several decades, technological advancements have introduced new modes of communication with computers, marking a shift from traditional mouse and keyboard interfaces. While touch-based interactions are abundant today, the latest developments in computer vision, body-tracking stereo cameras, and augmented and virtual reality now enable communicating with computers using spatial input in the physical 3D space. These techniques are being integrated into several design-critical tasks such as sketching and modeling, through sophisticated methodologies and the use of specialized instrumented devices. One of the prime challenges in design research is to make this spatial interaction with the computer as intuitive as possible for users. Drawing curves in mid-air with the fingers is a fundamental task with applications to 3D sketching, geometric modeling, handwriting recognition, and authentication. Sketching in general is a crucial mode of effective idea communication between designers. Mid-air curve input is typically accomplished through instrumented controllers, specific hand postures, or pre-defined hand gestures, in the presence of depth- and motion-sensing cameras. The user may use any of these modalities to express the intention to start or stop sketching. However, apart from suffering from issues such as a lack of robustness, the use of such gestures, specific postures, or instrumented controllers for design tasks places an additional cognitive load on the user. To address the problems associated with different mid-air curve input modalities, the presented research discusses the design, development, and evaluation of data-driven models for intent recognition in non-instrumented, gesture-free, bare-hand mid-air drawing tasks.
The research is motivated by a behavioral study that demonstrates the need for such an approach, given the lack of robustness and intuitiveness of hand postures and instrumented devices. The main objective is to study how users move during mid-air sketching, develop qualitative insights regarding such movements, and consequently implement a computational approach to determine when the user intends to draw in mid-air without an explicit mechanism (such as an instrumented controller or a specified hand posture). By recording the user's hand trajectory, the idea is to classify each recorded point as either hover or stroke; the resulting model thus labels every point on the user's spatial trajectory. Drawing inspiration from the way users sketch in mid-air, this research first establishes the need for an alternate approach that processes bare-hand mid-air curves in a continuous fashion. It then presents a novel drawing-intent recognition workflow applied to every recorded drawing point, using three different approaches. We begin by recording mid-air drawing data and developing a classification model based on geometric properties extracted from the recorded data; the goal of this model is to identify drawing intent from critical geometric and temporal features. In the second approach, we explore variations in the model's prediction quality as the dimensionality of the mid-air curve input is increased. In the third approach, we seek to understand drawing intention from mid-air curves using dimensionality-reduction neural networks such as autoencoders. Finally, the broader implications of this research are discussed, along with potential development areas in the design and research of mid-air interactions.
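The first approach, classifying each trajectory point as hover or stroke from geometric and temporal features, can be illustrated with a deliberately simple feature: instantaneous speed. The threshold, the feature choice, and the assumption that slow deliberate motion indicates drawing are all illustrative simplifications; the thesis itself uses richer learned models over many features.

```python
import math

# Illustrative sketch of hover/stroke labeling from one geometric-temporal
# feature (per-sample speed). Threshold and heuristic are assumptions for
# illustration, not the thesis's actual classifier.

def speeds(points, dt):
    """Per-segment speed along a trajectory of (x, y, z) samples taken dt apart."""
    return [
        math.dist(p0, p1) / dt
        for p0, p1 in zip(points, points[1:])
    ]

def classify(points, dt, hover_speed=0.5):
    """Label each segment: slow, deliberate motion -> stroke;
    fast repositioning between strokes -> hover (assumed heuristic)."""
    return ["stroke" if s <= hover_speed else "hover" for s in speeds(points, dt)]

# A small drawing motion followed by a large, fast jump of the hand.
pts = [(0.0, 0.0, 0.0), (0.01, 0.0, 0.0), (0.5, 0.0, 0.0)]
print(classify(pts, dt=0.033))  # ['stroke', 'hover']
```

A learned model would replace the fixed threshold with a classifier trained on labeled trajectories, and could add features such as curvature or acceleration, but the per-point labeling structure stays the same.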