    Video Conferencing: Infrastructures, Practices, Aesthetics

    The COVID-19 pandemic has reorganized existing methods of exchange, turning comparatively marginal technologies into the new normal. Multipoint videoconferencing in particular has become a favored means for web-based forms of remote communication and collaboration without physical copresence. Taking the recent mainstreaming of videoconferencing as its point of departure, this anthology examines the complex mediality of this new form of social interaction. Connecting theoretical reflection with material case studies, the contributors question the practices, politics, and aesthetics of videoconferencing and the specific meanings it acquires in different historical, cultural, and social contexts.

    User interface for a better eye contact in videoconferencing

    Mutual Gaze Support in Videoconferencing Reviewed

    Videoconferencing allows geographically dispersed parties to communicate by simultaneous audio and video transmissions. It is used in a variety of application scenarios with a wide range of coordination needs and efforts, such as private chat, discussion meetings, and negotiation tasks. In particular, in scenarios requiring certain levels of trust and judgement, non-verbal communication cues are highly important for effective communication. Mutual gaze support plays a central role in these high-coordination-need scenarios but generally lacks adequate technical support from videoconferencing systems. In this paper, we review technical concepts and implementations for mutual gaze support in videoconferencing, classify them, evaluate them according to a defined set of criteria, and give recommendations for future developments. Our review gives decision makers, researchers, and developers a tool to systematically apply and further develop videoconferencing systems in serious settings requiring mutual gaze. This should lead to well-informed decisions regarding the use and development of this technology and to a more widespread exploitation of the benefits of videoconferencing in general. For example, if videoconferencing systems supported high-quality mutual gaze in an easy-to-set-up and easy-to-use way, we could hold more effective and efficient recruitment interviews, court hearings, or contract negotiations.
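
    The review's classification and criteria-based evaluation can be pictured as a simple weighted scoring scheme. The sketch below is a minimal illustration of that idea in Python; the criteria names, example concepts, ratings, and equal weighting are placeholders for illustration, not the set of criteria defined in the paper.

    from dataclasses import dataclass, field

    # Hypothetical evaluation criteria; the paper defines its own set, which may differ.
    CRITERIA = ["gaze_accuracy", "setup_effort", "hardware_cost", "scalability"]

    @dataclass
    class GazeSupportConcept:
        # One technical concept for mutual gaze support, e.g. software view
        # synthesis or a half-silvered-mirror setup.
        name: str
        scores: dict = field(default_factory=dict)  # criterion -> rating on a 1-5 scale

    def rank_concepts(concepts, weights):
        # Order concepts by a weighted sum of their per-criterion ratings.
        def total(concept):
            return sum(weights[c] * concept.scores.get(c, 0) for c in CRITERIA)
        return sorted(concepts, key=total, reverse=True)

    concepts = [
        GazeSupportConcept("software view synthesis",
                           {"gaze_accuracy": 4, "setup_effort": 4, "hardware_cost": 5, "scalability": 4}),
        GazeSupportConcept("half-silvered mirror",
                           {"gaze_accuracy": 5, "setup_effort": 2, "hardware_cost": 2, "scalability": 1}),
    ]
    weights = {c: 1.0 for c in CRITERIA}
    print([concept.name for concept in rank_concepts(concepts, weights)])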

    EyeGaze: Enabling eye contact over video

    FaceVR: Real-Time Facial Reenactment and Eye Gaze Control in Virtual Reality

    We introduce FaceVR, a novel method for gaze-aware facial reenactment in the Virtual Reality (VR) context. The key component of FaceVR is a robust algorithm to perform real-time facial motion capture of an actor who is wearing a head-mounted display (HMD), as well as a new data-driven approach for eye tracking from monocular videos. In addition to these face reconstruction components, FaceVR incorporates photo-realistic re-rendering in real time, thus allowing artificial modifications of face and eye appearance. For instance, we can alter facial expressions, change gaze directions, or remove the VR goggles in realistic re-renderings. In a live setup with a source and a target actor, we apply these newly introduced algorithmic components. We assume that the source actor is wearing a VR device, and we capture their facial expressions and eye movements in real time. For the target video, we mimic a similar tracking process; however, we use the source input to drive the animations of the target video, thus enabling gaze-aware facial reenactment. To render the modified target video on a stereo display, we augment our capture and reconstruction process with stereo data. In the end, FaceVR produces compelling results for a variety of applications, such as gaze-aware facial reenactment, reenactment in virtual reality, removal of VR goggles, and retargeting of somebody's gaze direction in a video conferencing call.
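
    The source-to-target reenactment described above can be read as fitting a parametric face model to both actors, swapping expression and gaze parameters, and re-rendering the target. The Python sketch below illustrates only that transfer step; the parameter layout, all function names, and the stubbed capture and re-rendering steps are assumptions for illustration, not the authors' implementation.

    from dataclasses import dataclass

    @dataclass
    class FaceParams:
        identity: list      # person-specific shape/appearance coefficients (assumed layout)
        expression: list    # blendshape-like expression coefficients
        pose: tuple         # head rotation and translation
        gaze: tuple         # per-eye gaze angles

    def capture_source(frame):
        # Placeholder: fit the face model to the HMD-wearing source actor and
        # estimate gaze from the monocular eye camera.
        return FaceParams(identity=[0.0], expression=[0.1, 0.3], pose=(0, 0, 0), gaze=(0.05, -0.02))

    def capture_target(frame):
        # Placeholder: fit the same face model to the unoccluded target video.
        return FaceParams(identity=[1.0], expression=[0.0, 0.0], pose=(0, 5, 0), gaze=(0.0, 0.0))

    def transfer(source, target):
        # Core reenactment step: drive the target with the source's expression
        # and gaze while keeping the target's identity and head pose.
        return FaceParams(identity=target.identity, expression=source.expression,
                          pose=target.pose, gaze=source.gaze)

    def rerender(frame, params):
        # Placeholder for photo-realistic re-rendering of the modified face region.
        return frame

    # Per-frame loop: capture both actors, transfer expression and gaze, re-render.
    output = rerender("target_frame", transfer(capture_source("source_frame"),
                                               capture_target("target_frame")))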

    A State of the Art Overview on Biosignal-based User-Adaptive Video Conferencing Systems

    Video conferencing systems are widely used in times of distributed teams since they support flexible work arrangements. However, they also have negative effects on users, such as a lack of eye contact or zoom fatigue. Adaptive interventions in video conferences based on user behavior offer promising ways to overcome these challenges, for example by alerting users when they look tired. Specifically, biosignals measured by sensors such as microphones or eye trackers are a promising basis for adaptive interventions. To provide an overview of current biosignal-based user-adaptive video conferencing systems, we conducted a systematic literature review and identified 24 publications. We summarize the existing knowledge in a morphological box and outline further research directions. The identified work shows a clear focus on bio-optical signals. Current adaptations target audience feedback, expression understanding, and eye gaze, mostly through image and representation modifications. In future work, we recommend including further biosignals and addressing a more diverse set of problems by investigating the adaptation capabilities of additional software elements.
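
    As a concrete illustration of such an adaptive intervention, the sketch below flags fatigue from a simulated blink-rate signal and maps it to a user-facing adaptation. The threshold, window length, and the blink-rate heuristic are assumptions made for this example, not findings reported in the review.

    import statistics

    # Assumed threshold and window; a real system would calibrate these per user.
    FATIGUE_BLINKS_PER_MIN = 25
    WINDOW_SECONDS = 60

    def looks_tired(blinks_per_second):
        # Flag fatigue when the mean blink rate over the recent window is high.
        recent = blinks_per_second[-WINDOW_SECONDS:]
        return statistics.mean(recent) * 60 > FATIGUE_BLINKS_PER_MIN

    def adapt_conference_ui(blinks_per_second):
        # Map the biosignal to an intervention (here: a simple break reminder).
        if looks_tired(blinks_per_second):
            return "Show break reminder and dim the self-view"
        return "No adaptation"

    # One minute of simulated samples: 0.5 blinks/s = 30 blinks/min -> reminder fires.
    samples = [0.5] * 60
    print(adapt_conference_ui(samples))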

    GazeDirector: Fully articulated eye gaze redirection in video

    We present GazeDirector, a new approach for eye gaze redirection that uses model fitting. Our method first tracks the eyes by fitting a multi-part eye region model to video frames using analysis-by-synthesis, thereby recovering eye region shape, texture, pose, and gaze simultaneously. It then redirects gaze by 1) warping the eyelids from the original image using a model-derived flow field, and 2) rendering and compositing synthesized 3D eyeballs onto the output image in a photorealistic manner. GazeDirector allows us to change where people are looking without person-specific training data and with full articulation, i.e., we can precisely specify new gaze directions in 3D. Quantitatively, we evaluate both model fitting and gaze synthesis, with experiments for gaze estimation and redirection on the Columbia gaze dataset. Qualitatively, we compare GazeDirector against recent work on gaze redirection, showing better results especially for large redirection angles. Finally, we demonstrate gaze redirection on YouTube videos by introducing new 3D gaze targets and by manipulating visual behavior.
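
    The two-step redirection (warping the eyelids along a model-derived flow field, then compositing a synthesized eyeball) can be sketched schematically as follows. The nearest-neighbour warp and the plain alpha compositing below are simplified stand-ins for the model-based rendering used in GazeDirector; array shapes and names are assumptions for illustration.

    import numpy as np

    def warp_eyelids(image, flow):
        # Move each eyelid pixel along the flow field (nearest-neighbour backward
        # warp, purely illustrative). image: (H, W, 3), flow: (H, W, 2) in pixels.
        h, w = image.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, h - 1)
        src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, w - 1)
        return image[src_y, src_x]

    def composite_eyeball(image, eyeball_rgba):
        # Alpha-composite a synthesized eyeball rendering (H, W, 4) onto the
        # warped eye region.
        alpha = eyeball_rgba[..., 3:4] / 255.0
        return (alpha * eyeball_rgba[..., :3] + (1 - alpha) * image).astype(image.dtype)

    def redirect_gaze(eye_patch, flow, eyeball_rgba):
        # Step 1: warp the eyelids; step 2: composite the eyeball at the new gaze.
        return composite_eyeball(warp_eyelids(eye_patch, flow), eyeball_rgba)

    # Toy usage with zero flow and a fully transparent eyeball layer.
    out = redirect_gaze(np.zeros((32, 64, 3), dtype=np.uint8),
                        np.zeros((32, 64, 2)),
                        np.zeros((32, 64, 4), dtype=np.uint8))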

    Eye contact over video
