
    Automatic Synchronization between Local and Remote Video Persons in Dining Improves Conversation

    Asynchronous exchange of video messages is a way to achieve time-shifted communication for people who find it difficult to enjoy daily family communication in real time because of time-zone or life-rhythm differences. However, face-to-face communication and video messaging differ significantly. Since mealtime is the most common opportunity for daily family communication, it has been proposed to synchronize the video message with the viewer in dining situations by changing its playback speed, thereby improving video messaging communication. This paper studies the influence of this synchronization method by means of Wizard of Oz (WoZ) control and by means of an implemented prototype system. In the synchronization method, the dining progress of the person in the video is matched with that of the viewer through real-time meal weight detection. The lab study found that synchronization via WoZ increased speech frequency, decreased the duration of switching pauses, and led to a higher ratio of eating actions immediately after the user's verbal responses, indicating more active engagement by the user. The prototype system, which controls the video more finely than WoZ, achieved comparable results in questionnaire scores, indicating the feasibility of a videoconferencing system with such a function.
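    As a rough sketch of the progress-matching idea described in the abstract (not the authors' implementation; the function names, control gain, and rate limits are assumptions), the following Python snippet estimates each person's eating progress from plate-weight readings and nudges the recorded video's playback rate so the two progresses stay aligned.

    # Sketch: match the recorded video person's dining progress to the viewer's
    # by adjusting playback speed, driven by real-time meal weight readings.
    # All names and constants below are illustrative assumptions.

    def eating_progress(initial_weight_g: float, current_weight_g: float) -> float:
        """Fraction of the meal consumed so far, clamped to [0, 1]."""
        if initial_weight_g <= 0:
            return 0.0
        consumed = initial_weight_g - current_weight_g
        return max(0.0, min(1.0, consumed / initial_weight_g))

    def playback_rate(viewer_progress: float,
                      video_progress: float,
                      gain: float = 0.5,
                      min_rate: float = 0.5,
                      max_rate: float = 1.5) -> float:
        """Speed the video up when it lags the viewer, slow it down when it leads."""
        rate = 1.0 + gain * (viewer_progress - video_progress)
        return max(min_rate, min(max_rate, rate))

    # Example: the viewer has eaten 60% of the meal, the recorded person 40%,
    # so the message is played slightly faster to catch up.
    print(playback_rate(viewer_progress=0.6, video_progress=0.4))  # 1.1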

    A Study of Systems Supporting Co-Dining Communication in Asynchronous Environments (非同期環境における共食コミュニケーション支援システムの研究)

    University of Tsukuba (筑波大学), 201

    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, like hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience or QoE), and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is getting renewed attention to overcome remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance, and the multiple disciplines it involves, a reference book on mediasync has become necessary. This book fills that gap. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space, from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area, and also to approach the challenges of ensuring the best mediated experiences by providing adequate synchronization between the media elements that constitute those experiences.
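    As a hedged illustration of the basic quantity most mediasync techniques work with (not an example taken from the handbook), the sketch below compares the playout positions reported by two receivers and decides whether the lagging one should adjust; the report format, receiver names, and the 80 ms tolerance are assumptions.

    from dataclasses import dataclass

    @dataclass
    class PlayoutReport:
        receiver_id: str
        media_time_s: float  # media-timeline position currently being presented

    def sync_action(a: PlayoutReport, b: PlayoutReport, tolerance_s: float = 0.08):
        """Return (lagging_receiver_id, seconds_behind), or None if within tolerance."""
        skew = a.media_time_s - b.media_time_s
        if abs(skew) <= tolerance_s:
            return None
        lagging = b if skew > 0 else a
        return lagging.receiver_id, abs(skew)

    # Example: the "tv" receiver is about 200 ms behind the "tablet" receiver,
    # so it would be asked to skip ahead or briefly speed up its playout.
    print(sync_action(PlayoutReport("tablet", 12.40), PlayoutReport("tv", 12.20)))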

    ESCOM 2017 Proceedings


    Supporting Relationships with Video Chat

    Video chat is often called the “closest thing to being there”, but anyone who has used video chat to maintain personal relationships or collaborate with others knows that it is not the same as face-to-face interaction. In this thesis, I focus on understanding how video chat can be designed and used most effectively to support relationships, helping to bridge the communication gap for distance-separated people. An important difference between video chat and face-to-face interaction is the potential effect of seeing oneself. In this thesis, I present two studies exploring this important caveat to supporting relationships remotely. The first study shows that the dominant interface design (which shows one’s own video feed) has measurable effects on people’s experiences and conversations in video-mediated communication (VMC). The second study focuses on a specific group of people, those with social anxiety, who may be particularly affected by self-view in video chat interfaces. This study shows that interfaces that focus on content (much like the media sharing system presented in this thesis) have the potential to minimize the effects of self-feedback in video chat. Another key difference between video chat and face-to-face interaction is the difficulty of engaging in shared activities. Colocated friends or family members can easily share activities such as walks, movies, or board games; distance-separated people have a much harder time doing the same. The work presented in this thesis introduces a synchronous media sharing system that can serve as a powerful tool for maintaining relationships. Building on this work, I show that synchronous media sharing is also useful for creating new relationships. Together, the system and studies presented in this thesis provide valuable new insights and techniques for the development of video chat tools that support new and sustained relationships over a distance.
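    The abstract does not describe the media sharing system's internals, but the following sketch, with an assumed event format and field names, illustrates the kind of shared playback state a synchronous media sharing tool has to reconcile so that play, pause, and seek actions keep distance-separated viewers in step.

    import time
    from dataclasses import dataclass

    @dataclass
    class SharedPlayback:
        """Playback state that both participants keep in agreement."""
        position_s: float = 0.0
        playing: bool = False
        updated_at: float = 0.0

        def current_position(self, now: float) -> float:
            """Position right now, extrapolated forward while playing."""
            if self.playing:
                return self.position_s + (now - self.updated_at)
            return self.position_s

        def apply(self, event: dict, now: float) -> None:
            """Apply a play/pause/seek event received from either participant."""
            self.position_s = self.current_position(now)
            if event["type"] == "seek":
                self.position_s = event["position_s"]
            elif event["type"] == "play":
                self.playing = True
            elif event["type"] == "pause":
                self.playing = False
            self.updated_at = now

    # Example: one participant starts playback, the other seeks to 42 s;
    # both clients apply the same events so their players stay aligned.
    state = SharedPlayback()
    state.apply({"type": "play"}, now=time.time())
    state.apply({"type": "seek", "position_s": 42.0}, now=time.time())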

    Social Intelligence Design 2007: Proceedings of the Sixth Workshop on Social Intelligence Design


    Accessibility of Health Data Representations for Older Adults: Challenges and Opportunities for Design

    Health data from consumer off-the-shelf wearable devices are often conveyed to users through visual data representations and analyses. However, these are not always accessible to people with disabilities or to older people, because of low vision, cognitive impairments, or literacy issues. Because of trade-offs between aesthetic predominance and information overload, real-time user feedback may not be conveyed easily from sensor devices through visual cues such as graphs and text. These difficulties may hinder understanding of critical data. Additional auditory and tactile feedback can provide immediate and accessible cues from these wearable devices, but the limitations of existing data representations need to be understood first. To avoid higher cognitive and visual overload, auditory and haptic cues can be designed to complement, replace, or reinforce visual cues. In this paper, we outline the challenges in existing data representations and the evidence needed to enhance the accessibility of health information from personal sensing devices used to monitor health parameters such as blood pressure, sleep, activity, heart rate, and more. By creating innovative and inclusive user feedback, users will be more likely to engage and interact with new devices and their own data.
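    As a purely illustrative sketch of complementing a visual reading with auditory and haptic cues (the zone boundaries and cue encodings are hypothetical examples, not recommendations from the paper), the snippet below maps a heart-rate value to a spoken summary, a tone pitch, and a vibration pattern.

    def heart_rate_cues(bpm: int) -> dict:
        """Map a heart-rate reading to redundant speech, tone, and vibration cues."""
        if bpm < 60:
            zone, pitch_hz, vibration = "low", 220, [300]                  # one long pulse (ms)
        elif bpm <= 100:
            zone, pitch_hz, vibration = "normal", 440, [100]               # one short pulse
        else:
            zone, pitch_hz, vibration = "elevated", 880, [100, 100, 100]   # three short pulses
        return {
            "speech": f"Heart rate {bpm}, {zone}",
            "tone_hz": pitch_hz,
            "vibration_ms": vibration,
        }

    # Example: an elevated reading yields a higher-pitched tone and a triple pulse,
    # so the information is available without reading a graph.
    print(heart_rate_cues(112))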