
    User interface for a better eye contact in videoconferencing


    Dynamic Anamorphosis as a Special, Computer-Generated User Interface

    A classical or static anamorphic image requires a specific, usually highly oblique, viewing direction from which the observer sees the anamorphosis in its correct form. This paper describes dynamic anamorphosis, which adapts itself to the changing position of the observer so that, wherever the observer moves, they see the same undeformed image. Changing the anamorphic deformation in concert with the movement of the observer requires the system to track the 3D position of the observer’s eyes and to re-compute the anamorphic deformation in real time. This is achieved with computer vision methods that combine face detection with 3D tracking of the selected observer. An application of this system of dynamic anamorphosis in the context of an interactive art installation is described. We show that anamorphic deformation is also useful for improving eye contact in videoconferencing. Other possible applications involve novel user interfaces in which the user can move freely and still observe perspectively undeformed images.
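    The core geometric step — pre-distorting the image so that it appears undeformed from the tracked eye position — can be sketched as a homography between a virtual frontal image plane and the display plane. The sketch below is illustrative only: the plane geometry, eye position, and DLT-based homography fit are our own assumptions, not the paper's implementation.

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct Linear Transform: find 3x3 H with dst ~ H @ src (homogeneous)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)       # null-space vector = homography entries
    return H / H[2, 2]

def anamorphic_warp(eye, img_corners_3d, display_z=0.0):
    """Project the corners of a virtual image plane through the observer's
    eye onto the display plane z = display_z (line-plane intersection)."""
    eye = np.asarray(eye, float)
    proj = []
    for q in np.asarray(img_corners_3d, float):
        t = (display_z - eye[2]) / (q[2] - eye[2])
        p = eye + t * (q - eye)
        proj.append(p[:2])
    return np.asarray(proj)
```

    Given a tracked eye position, the projected corners define the homography used to warp the source image onto the display each frame.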

    A motion control method for a differential drive robot based on human walking for immersive telepresence

    Abstract. This thesis introduces an interface for controlling Differential Drive Robots (DDRs) in telepresence applications. Our goal is to enhance the immersive experience while reducing user discomfort when using Head-Mounted Displays (HMDs) and body trackers. The robot is equipped with a 360° camera that captures the Robot Environment (RE). Users wear an HMD and use body trackers to navigate within a Local Environment (LE). Through a live video stream from the robot-mounted camera, users perceive the RE within a virtual sphere known as the Virtual Environment (VE). A proportional controller enables the robot to replicate the movements of the user. The proposed method uses a chest tracker to control the telepresence robot and focuses on minimizing the vection and rotations induced by the robot’s motion by modifying the VE, for example by rotating and translating it. Experimental results demonstrate the accuracy of the robot in reaching target positions when controlled through the body-tracker interface. They also reveal an optimal VE size that reduces VR sickness and enhances the sense of presence.
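    The proportional control described above can be sketched as a standard P-controller on distance and heading error, mapped to differential-drive wheel speeds. The gains, wheel geometry, and the idea of feeding a tracker-derived target point into the controller are illustrative assumptions, not the thesis's actual parameters.

```python
import math

def ddr_proportional_control(robot_pose, target, k_v=0.8, k_w=1.5,
                             wheel_base=0.3, wheel_radius=0.05):
    """P-controller driving a differential-drive robot toward a target point.

    robot_pose: (x, y, theta); target: (x, y), e.g. derived from the chest
    tracker. Returns (left, right) wheel angular velocities in rad/s.
    """
    x, y, theta = robot_pose
    dx, dy = target[0] - x, target[1] - y
    rho = math.hypot(dx, dy)                              # distance error
    alpha = math.atan2(dy, dx) - theta                    # heading error
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))  # wrap to [-pi, pi]
    v = k_v * rho                                         # linear command
    w = k_w * alpha                                       # angular command
    left = (v - w * wheel_base / 2) / wheel_radius
    right = (v + w * wheel_base / 2) / wheel_radius
    return left, right
```

    A target straight ahead yields equal wheel speeds; a target off to one side produces a speed differential that turns the robot toward it.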

    Conformal Tracking For Virtual Environments

    A virtual environment is a set of surroundings that appears to exist to a user through sensory stimuli provided by a computer. By virtual environment, we mean to include environments spanning the full range from virtual reality to pure reality. A necessity for virtual environments is knowledge of the location of objects in the environment; this is referred to as the tracking problem, and it demands accurate and precise tracking. Marker-based tracking is a technique that employs fiducial marks to determine the pose of a tracked object. A collection of markers arranged in a rigid configuration is called a tracking probe. The performance of marker-based tracking systems depends upon the fidelity of the pose estimates provided by tracking probes. Because tracking performance is linked to probe performance, the design of tracking probes warrants careful investigation by proponents of marker-based tracking. The challenges of probe design include predicting the accuracy and precision of a tracking probe, creating arbitrarily-shaped tracking probes, and assessing the newly created probes. To address these issues, we present a pioneering framework for designing conformal tracking probes. Conformal in this work means adapting to the shape of the tracked objects and to the environmental constraints. As part of the framework, the accuracy in position and orientation of a given probe may be predicted given the system noise. The framework is a methodology for designing tracking probes based upon performance goals and environmental constraints. After presenting the conformal tracking framework, we discuss the elements used to complete its steps. We start with the application of optimization methods for determining the probe geometry. Two overall methods for mapping markers onto tracking probes are presented: the Intermediary Algorithm and the Viewpoints Algorithm.
    Next, we examine the method used for pose estimation and present a mathematical model of error propagation for predicting probe performance. The model uses first-order error propagation, perturbing the simulated marker locations with Gaussian noise. The marker locations with error are then traced through the pose estimation process and the effects of the noise are analyzed. Moreover, the effects of changing the probe size or the number of markers are discussed. Finally, the conformal tracking framework is validated experimentally. The assessment methods are divided into simulation and post-fabrication methods. Under simulation, we test the performance of each probe design; then post-fabrication assessment is performed, including accuracy measurements in orientation and position. The framework is validated with four tracking probes. The first probe is a six-marker planar probe; its predicted accuracy was 0.06 deg and its measured accuracy was 0.083 ± 0.015 deg. The second probe was a pair of concentric, planar tracking probes mounted together. The smaller probe had a predicted accuracy of 0.206 deg and a measured accuracy of 0.282 ± 0.03 deg; the larger probe had a predicted accuracy of 0.039 deg and a measured accuracy of 0.017 ± 0.02 deg. The third tracking probe was a semi-spherical head-tracking probe. Its predicted accuracy in orientation and position was 0.54 ± 0.24 deg and 0.24 ± 0.1 mm, respectively; its experimental accuracy was 0.60 ± 0.03 deg and 0.225 ± 0.05 mm, respectively. The last probe was an integrated, head-mounted display probe created using the conformal design process. Its predicted accuracy was 0.032 ± 0.02 deg in orientation and 0.14 ± 0.08 mm in position; its measured accuracy was 0.028 ± 0.01 deg in orientation and 0.11 ± 0.01 mm in position. These results constitute an order-of-magnitude improvement in orientation over current marker-based tracking probes, indicating the benefits of a conformal tracking approach. This result also translates to a predicted positional overlay error of less than 0.5 mm for a virtual object presented at 1 m, which surpasses reported overlay performance in virtual environments.
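    The noise-propagation idea above — perturb marker locations with Gaussian noise, re-estimate the pose, and measure the resulting orientation error — can be approximated numerically. The sketch below uses a Monte Carlo perturbation with a Kabsch (SVD) rigid fit in place of the thesis's analytic first-order model; the probe geometry and noise levels are illustrative assumptions.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation R and translation t with Q ≈ R @ P + t.
    P, Q are 3 x N arrays of corresponding points."""
    cP, cQ = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cP) @ (Q - cQ).T
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def orientation_error_mc(markers, sigma, trials=2000, rng=None):
    """Monte Carlo estimate of orientation error (deg) for a probe whose
    marker positions are perturbed by isotropic Gaussian noise of std sigma."""
    rng = np.random.default_rng(rng)
    P = np.asarray(markers, float).T             # 3 x N probe geometry
    errs = []
    for _ in range(trials):
        Q = P + rng.normal(0.0, sigma, P.shape)  # noisy measurement
        R, _ = kabsch(P, Q)
        # rotation angle of R relative to the identity (true pose)
        ang = np.degrees(np.arccos(np.clip((np.trace(R) - 1) / 2, -1, 1)))
        errs.append(ang)
    return float(np.mean(errs)), float(np.std(errs))
```

    Running this over candidate geometries shows how orientation error grows with noise and shrinks with probe extent or marker count, which is the trade-off the framework exploits.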

    Face and Body gesture recognition for a vision-based multimodal analyser

    Full text link
    To interact naturally with users, computers should be able to recognize emotions by analyzing the human's affective state, physiology and behavior. In this paper, we present a survey of research conducted on face and body gesture recognition. In order to make human-computer interfaces truly natural, we need to develop technology that tracks human movement, body behavior and facial expression, and interprets these movements in an affective way. Accordingly, we present a framework for a vision-based multimodal analyzer that combines face and body gesture, and we discuss relevant issues.

    Real-time face view correction for front-facing cameras

    Face view is particularly important in person-to-person communication. Disparity between the camera location and the face orientation can result in undesirable facial appearances of the participants during video conferencing. This phenomenon becomes particularly notable on devices where the front-facing camera is placed at unconventional locations, such as below the display or within the keyboard. In this paper, we take the video stream from a single RGB camera as input and generate a video stream that emulates the view from a virtual camera at a designated location. The most challenging aspect of this problem is that the corrected view often requires out-of-plane head rotation. To address this challenge, we reconstruct the 3D face shape and re-render it into synthesized frames according to the virtual camera location. To output the corrected video stream with a natural appearance in real time, we propose several novel techniques, including accurate eyebrow reconstruction, high-quality blending between the corrected face image and the background, and a template-based 3D reconstruction of glasses. Our system works well across lighting conditions and skin tones, and handles users wearing glasses. Extensive experiments and user studies demonstrate that our proposed method achieves high-quality results.
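    The re-rendering step — projecting the reconstructed 3D face as seen from a virtual camera at the designated location — reduces to a standard pinhole reprojection. The sketch below assumes known intrinsics K and a virtual-camera pose (R, t); these names and values are illustrative, not the paper's.

```python
import numpy as np

def reproject_to_virtual_camera(pts3d, K, R, t):
    """Project 3D face points into a virtual camera view.

    pts3d: N x 3 points in the real camera frame; K: 3x3 intrinsics;
    (R, t): pose of the virtual camera relative to the real one.
    Returns N x 2 pixel coordinates in the virtual view."""
    p = R @ np.asarray(pts3d, float).T + np.asarray(t, float).reshape(3, 1)
    uvw = K @ p                       # homogeneous image coordinates
    return (uvw[:2] / uvw[2]).T       # perspective divide
```

    In a full pipeline, the reprojected mesh would be rasterized with the input frame's texture to synthesize each corrected output frame.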