
    Space time pixels

    This paper reports the design of a networked system whose aim is to provide an intermediate virtual space that establishes a connection and supports interaction between multiple participants in two distant physical spaces. The project explores the potential of digital space to generate new social relationships between people whose current (spatial or social) position makes it difficult to establish such connections, and to examine whether digital space can sustain these low-level connections over time by balancing the two contradicting needs of communication and anonymity. The generated intermediate digital space is a dynamic, reactive environment in which time and space information from the two physical places is superimposed to create a complex common ground where interaction can take place. The system provides awareness of activity in a distant space through an abstract, mutable virtual environment that can be perceived in several different ways – from a simple dynamic background image, to a common public space at the junction of two private spaces, to a fully opened window onto the other space – according to the participants' will. The thesis is that an intermediary environment that operates as an activity abstraction filter between several users, selectively communicating information, could give significance to the ambient data that people unconsciously transmit to others when co-existing. It can therefore generate a new layer of connections and original interactivity patterns, in contrast to a straightforward direct video-and-sound link, which, although functionally more feasible, preserves the existing social constraints that limit interaction to predefined patterns.
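
    The paper does not include its implementation, but the "activity abstraction filter" idea can be illustrated with a rough sketch: two per-pixel activity maps, one per physical space, are superimposed, and a single parameter stands in for the participants' control over how much detail is revealed, from ambient texture to an almost fully open window. The function name, the blending rule and the openness parameter below are illustrative assumptions, not the authors' design.

    import numpy as np

    def blend_spaces(activity_a, activity_b, openness=0.2):
        """Superimpose two per-pixel activity maps into one shared layer.

        openness near 0 yields an abstract ambient texture; openness near 1
        lets the raw superimposed activity show through (an 'open window').
        """
        shared = 0.5 * (activity_a + activity_b)     # common ground of both spaces
        detail = openness * shared                   # raw activity that shows through
        ambience = (1.0 - openness) * shared.mean()  # remainder flattened to a tone
        return np.clip(detail + ambience, 0.0, 1.0)

    # Stand-in activity maps sensed in the two physical spaces.
    a = np.random.rand(48, 64)
    b = np.random.rand(48, 64)
    layer = blend_spaces(a, b, openness=0.3)
    print(layer.shape, round(float(layer.mean()), 3))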

    Infant Neural Sensitivity to Dynamic Eye Gaze relates to quality of parent–infant interaction at 7-months in infants at risk for Autism

    Links between brain function measures and the quality of parent–child interactions in the early developmental period have been investigated in typical and atypical development. We examined such links in a group of 104 infants with and without a family history of autism in the first year of life. Our findings suggest robust associations between event-related potential responses to eye gaze and observed parent–infant interaction measures. In both groups, infants with more positive affect exhibited stronger differentiation between gaze stimuli. This association was observed with the earlier P100 waveform component in the control group but with the later P400 component in at-risk infants. These exploratory findings are critical in paving the way for a better understanding of how infant laboratory measures may relate to overt behavior, and how both can be combined in the context of predicting risk or clinical diagnosis in toddlerhood.

    What does touch tell us about emotions in touchscreen-based gameplay?

    Nowadays, more and more people play games on touch-screen mobile phones. This raises an interesting question: does touch behaviour reflect the player's emotional state? If so, this would be a valuable evaluation indicator for game designers, and could also support real-time personalization of the game experience. Psychology studies of acted touch behaviour show the existence of discriminative affective profiles. In this paper, finger-stroke features during gameplay on an iPod were extracted and their discriminative power analysed. Based on touch behaviour, machine learning algorithms were used to build systems for automatically discriminating between four emotional states (Excited, Relaxed, Frustrated, Bored), two levels of arousal and two levels of valence. The results were promising, reaching between 69% and 77% correct discrimination between the four emotional states; higher accuracy (~89%) was obtained for discriminating between two levels of arousal and between two levels of valence.
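
    The abstract does not reproduce the feature set or the classifiers used, but the general pipeline it describes – per-stroke features extracted from touch events, then a supervised classifier over the four emotional states – can be sketched roughly as below. The feature names, the synthetic data and the SVM choice are illustrative assumptions rather than the paper's actual method.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    STATES = ["Excited", "Relaxed", "Frustrated", "Bored"]

    def stroke_features(xs, ys, ts, pressures):
        """Summarise one finger stroke as a fixed-length feature vector."""
        dx, dy, dt = np.diff(xs), np.diff(ys), np.diff(ts)
        speed = np.hypot(dx, dy) / np.maximum(dt, 1e-6)
        return np.array([
            ts[-1] - ts[0],          # stroke duration
            np.hypot(dx, dy).sum(),  # stroke length
            speed.mean(),            # mean speed
            speed.max(),             # peak speed
            np.mean(pressures),      # mean pressure, if the device reports it
            np.max(pressures),       # peak pressure
        ])

    # Synthetic stand-in for a table of extracted stroke features and labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6))
    y = rng.integers(0, len(STATES), size=200)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    print(cross_val_score(clf, X, y, cv=5).mean())  # near chance on random data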

    DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation

    There is an undeniable communication barrier between deaf people and people with normal hearing ability. Although innovations in sign language translation technology aim to tear down this communication barrier, the majority of existing sign language translation systems are either intrusive or constrained by resolution or ambient lighting conditions. Moreover, these existing systems can only perform single-sign ASL translation rather than sentence-level translation, making them much less useful in daily-life communication scenarios. In this work, we fill this critical gap by presenting DeepASL, a transformative deep learning-based sign language translation technology that enables ubiquitous and non-intrusive American Sign Language (ASL) translation at both word and sentence levels. DeepASL uses infrared light as its sensing mechanism to non-intrusively capture the ASL signs. It incorporates a novel hierarchical bidirectional deep recurrent neural network (HB-RNN) and a probabilistic framework based on Connectionist Temporal Classification (CTC) for word-level and sentence-level ASL translation, respectively. To evaluate its performance, we have collected 7,306 samples from 11 participants, covering 56 commonly used ASL words and 100 ASL sentences. DeepASL achieves an average 94.5% word-level translation accuracy and an average 8.2% word error rate on translating unseen ASL sentences. Given its promising performance, we believe DeepASL represents a significant step towards breaking the communication barrier between deaf people and the hearing majority, and thus has significant potential to fundamentally change deaf people's lives.
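
    The abstract names a hierarchical bidirectional RNN for word-level recognition and a CTC-based framework for sentence-level translation, but gives no implementation detail. The following is a minimal generic sketch of that pattern – a bidirectional LSTM over per-frame hand features trained with CTC loss – not DeepASL's actual architecture; the feature dimension, hidden size and single-layer network are assumptions.

    import torch
    import torch.nn as nn

    NUM_WORDS = 56     # ASL word vocabulary size from the abstract; index 0 is the CTC blank
    FEATURE_DIM = 30   # per-frame hand-skeleton features (assumed)

    class SignRNN(nn.Module):
        def __init__(self, hidden=128):
            super().__init__()
            self.rnn = nn.LSTM(FEATURE_DIM, hidden, batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden, NUM_WORDS + 1)  # +1 for the CTC blank

        def forward(self, x):                    # x: (batch, time, FEATURE_DIM)
            h, _ = self.rnn(x)
            return self.out(h).log_softmax(-1)   # (batch, time, NUM_WORDS + 1)

    model = SignRNN()
    ctc = nn.CTCLoss(blank=0)

    frames = torch.randn(2, 100, FEATURE_DIM)           # two fake sign sequences
    log_probs = model(frames).permute(1, 0, 2)           # CTC expects (time, batch, classes)
    targets = torch.randint(1, NUM_WORDS + 1, (2, 5))    # two 5-word target "sentences"
    loss = ctc(log_probs, targets,
               torch.full((2,), 100, dtype=torch.long),  # input lengths
               torch.full((2,), 5, dtype=torch.long))    # target lengths
    loss.backward()
    print(float(loss))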

    Communicating with feeling

    Communication between users in shared editors takes place in a deprived environment - distributed users find it difficult to communicate. While many solutions to the problems this causes have been suggested, this paper presents a novel one: the use of haptics as a channel for communication between users. Users' telepointers are treated as haptic avatars, affording interactions such as haptically pushing and pulling each other. The use of homing forces to locate other users is also discussed, as is a proximity sensation based on viscosity. Evaluation of this system is currently underway.
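
    The abstract does not specify the force model, but the two cues it mentions can be sketched with standard formulations: a spring-like homing force pulling one user's telepointer toward another's, and a viscosity term that grows as the pointers approach, giving a sense of proximity. Function names, gains and the 100-pixel radius below are illustrative assumptions.

    import numpy as np

    def homing_force(own_pos, other_pos, stiffness=0.8):
        """Spring force drawing this telepointer toward the other user's pointer."""
        return stiffness * (np.asarray(other_pos) - np.asarray(own_pos))

    def viscous_drag(own_vel, own_pos, other_pos, max_damping=2.0, radius=100.0):
        """Damping that increases as the two pointers come closer together."""
        distance = np.linalg.norm(np.asarray(other_pos) - np.asarray(own_pos))
        damping = max_damping * max(0.0, 1.0 - distance / radius)
        return -damping * np.asarray(own_vel)

    # Net force sent to the haptic device each frame (example positions in pixels).
    force = (homing_force([10.0, 20.0], [60.0, 80.0])
             + viscous_drag([5.0, -2.0], [10.0, 20.0], [60.0, 80.0]))
    print(force)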

    Emotive computing may have a role in telecare

    This brief paper sets out arguments for the introduction of new technologies into telecare and lifestyle monitoring that can detect and monitor the emotive state of patients. The significantly increased use of computers by older people will enable elements of emotive computing to be integrated with familiar peripherals such as keyboards and webcams, providing additional information on emotional state. When this is combined with other data, there will be significant opportunities for system enhancement and for identifying changes in user status, and hence in need. The ubiquity of home computing makes the keyboard a very attractive, economical and non-intrusive means of data collection and analysis.

    Refining personal and social presence in virtual meetings

    Virtual worlds show promise for conducting meetings and conferences without the need for physical travel. Current experience suggests the major limitation to the more widespread adoption and acceptance of virtual conferences is the failure of existing environments to provide a sense of immersion and engagement, or of ‘being there’. These limitations are largely related to the appearance and control of avatars, and to the absence of means to convey non-verbal cues of facial expression and body language. This paper reports on a study involving the use of a mass-market motion sensor (Kinect™) and the mapping of participant action in the real world to avatar behaviour in the virtual world. This is coupled with full-motion video representation of participants' faces on their avatars to resolve both identity and facial-expression issues. The outcomes of a small-group trial meeting based on this technology show a very positive reaction from participants, and the potential for further exploration of these concepts.
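
    The abstract describes mapping sensed participant motion onto avatar behaviour but not how the mapping works. As a rough illustration only, the sketch below takes one frame of tracked joint positions (assumed to arrive as metre-scale 3-D points, as Kinect skeletal tracking provides), expresses them relative to a root joint and rescales them to the avatar's proportions; the joint names, scaling rule and avatar_scale value are assumptions.

    import numpy as np

    def to_avatar_pose(tracked_joints, root_name="spine", avatar_scale=1.2):
        """Convert sensed joint positions into avatar-local joint positions."""
        root = np.asarray(tracked_joints[root_name])
        return {name: (np.asarray(pos) - root) * avatar_scale
                for name, pos in tracked_joints.items()}

    # One fake frame of skeletal tracking data (x, y, z in metres).
    sensed = {
        "spine":      [0.0, 1.0, 2.0],
        "head":       [0.0, 1.6, 2.0],
        "hand_left":  [-0.4, 1.2, 1.8],
        "hand_right": [0.5, 1.1, 1.9],
    }
    pose = to_avatar_pose(sensed)
    print(pose["head"])  # avatar-local head position, recomputed every frame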

    When Intrusive Can Be Likable: Product Placement Effects on Multitasking Consumers

    Using movie scenes, this study examines how multitasking by viewers influences the product-plot integration effect. Findings indicate that multitasking dampens a well-integrated placement's brand-enhancing effect and mitigates an intrusive placement's brand-damaging effect. Well-integrated placement produces an assimilation effect, leading to convergence of viewers' attitudes toward the placed versus competing brands, while intrusive placement triggers a contrast effect that results in divergence of these attitudes. Among single-tasking viewers, the boomerang effect of an intrusive placement decreases the favorability of the placed brand and increases the favorability of the not-shown competitor. The opposite is true among multitasking viewers, however.

    Analyzing liquids
