158 research outputs found

    Through the combining glass

    Reflective optical combiners like beam splitters and two-way mirrors are used in AR to overlay digital content on users' hands or bodies. Augmentations are usually unidirectional: either virtual content is reflected onto the user's body (Situated Augmented Reality) or the user's reflection is augmented with digital content (AR mirrors). Many other novel possibilities remain unexplored. For example, users' hands, reflected inside a museum AR cabinet, can allow visitors to interact with the artifacts on exhibit. Projecting onto the user's hands as their reflection cuts through an object can reveal the object's internals. Augmentations from both sides are blended by the combiner, so they are seen consistently by any number of users, independently of their location or even the side of the combiner through which they look. This paper explores the potential of optical combiners to merge the spaces in front of and behind them. We present this design space, identify novel augmentation and interaction opportunities, and explore the design space with three prototypes.
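    As a rough illustration of the underlying geometry (not taken from the paper): the virtual image of an object seen in a half-silvered combiner lies at its mirror reflection across the combiner plane, which is why content placed on one side of the glass can be aligned with reflections of users on the other. The plane parameters and tracked hand position below are made-up example values.

```python
import numpy as np

def reflect_point(p, plane_point, plane_normal):
    """Reflect a 3D point across the combiner plane.

    The virtual image of a real object seen in the combiner sits at the
    mirror reflection of that object across the plane, so content rendered
    behind the glass can be registered to a user's reflection in front of it.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    d = np.dot(p - plane_point, n)   # signed distance from the plane
    return p - 2.0 * d * n           # mirror image of p

# Example: a combiner standing upright in the x-y plane at z = 0.
plane_point = np.array([0.0, 0.0, 0.0])
plane_normal = np.array([0.0, 0.0, 1.0])

hand_position = np.array([0.10, 0.25, 0.40])  # tracked hand in front of the glass
virtual_image = reflect_point(hand_position, plane_point, plane_normal)
print(virtual_image)                           # -> [0.10, 0.25, -0.40]
```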

    MorphFace: a hybrid morphable face for a robopatient

    Physicians use pain expressions shown in a patient’s face to regulate their palpation methods during physical examination. Training to interpret the facial expressions of patients of different genders and ethnicities remains a challenge, and novices take a long time to learn it through experience. This paper presents MorphFace: a controllable 3D physical-virtual hybrid face that represents the pain expressions of patients from different ethnicity-gender backgrounds. It is also an intermediate step toward exposing trainee physicians to the gender and ethnic diversity of patients. We extracted four principal components from the Chicago Face Database to design a four-degree-of-freedom (DoF) physical face, controlled via tendons, that spans 85% of facial variation across gender and ethnicity. Details such as skin colour, skin texture, and facial expressions are synthesized by a virtual model and projected onto the 3D physical face via a front-mounted LED projector, yielding a hybrid, controllable patient face simulator. A user study revealed that certain differences in ethnicity between the observer and the MorphFace lead to different perceived pain intensity for the same pain level rendered by the MorphFace. This highlights the value of MorphFace as a controllable hybrid simulator for quantifying perceptual differences during physician training.
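    A minimal sketch of the principal-component step, assuming a generic landmark-style feature matrix in place of the actual Chicago Face Database measurements and NumPy's SVD in place of whatever tooling the authors used; the random data and the 4-DoF mapping are illustrative only.

```python
import numpy as np

# Hypothetical stand-in for facial measurements drawn from a face database
# (e.g. landmark-derived distances); shape: (n_faces, n_features).
rng = np.random.default_rng(0)
faces = rng.normal(size=(600, 40))

# Centre the data and keep the top four principal components, mirroring the
# idea of collapsing facial variation onto a 4-DoF physical mechanism.
mean_face = faces.mean(axis=0)
centred = faces - mean_face
_, singular_values, components = np.linalg.svd(centred, full_matrices=False)
top4 = components[:4]                       # (4, n_features) basis

# Share of total variance captured by the first four components.
variance = singular_values ** 2
print("explained variance:", variance[:4].sum() / variance.sum())

# Project a new face onto the 4-D space; in the real system such coefficients
# would be mapped to tendon actuation commands on the physical face.
new_face = rng.normal(size=40)
dof_coefficients = top4 @ (new_face - mean_face)
print("4-DoF coefficients:", dof_coefficients)
```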

    Makeup Lamps: Live Augmentation of Human Faces via Projection

    We propose the first system for live dynamic augmentation of human faces. Using projector‐based illumination, we alter the appearance of human performers during novel performances. The key challenge of live augmentation is latency: an image is generated according to a specific pose, but is displayed on a different facial configuration by the time it is projected. Our system therefore aims at reducing latency during every step of the process, from capture, through processing, to projection. Using infrared illumination, an optically and computationally aligned high‐speed camera detects facial orientation as well as expression. The estimated expression blendshapes are mapped onto a lower-dimensional space, and the facial motion and non‐rigid deformation are estimated, smoothed and predicted through adaptive Kalman filtering. Finally, the desired appearance is generated by interpolating precomputed offset textures according to time, global position, and expression. We have evaluated our system through an optimized CPU and GPU prototype, and demonstrated successful low-latency augmentation for different performers and performances with varying facial play and motion speed. In contrast to existing methods, the presented system is the first to fully support dynamic facial projection mapping without requiring any physical tracking markers while incorporating facial expressions.
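    The latency-compensation idea can be sketched with a plain constant-velocity Kalman filter that smooths one tracked quantity (e.g. a single blendshape coefficient) and extrapolates it forward by the projection latency. This is a simplified stand-in for the adaptive Kalman filtering described above; the frame rate, noise parameters and sample values are assumptions.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 1-D constant-velocity Kalman filter: smooths a tracked value
    and extrapolates it ahead by the capture-to-projection latency."""

    def __init__(self, dt, process_var=1e-3, meas_var=1e-2):
        self.x = np.zeros(2)                        # state: [value, velocity]
        self.P = np.eye(2)                          # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity model
        self.Q = process_var * np.eye(2)
        self.H = np.array([[1.0, 0.0]])
        self.R = np.array([[meas_var]])

    def step(self, z):
        # Predict one camera frame forward.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the new measurement z.
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x

    def predict_ahead(self, latency):
        # Extrapolate the filtered value to the moment the frame will
        # actually be projected, compensating for system latency.
        return self.x[0] + self.x[1] * latency

kf = ConstantVelocityKF(dt=1 / 240)                 # assumed high-speed camera rate
for z in [0.00, 0.01, 0.03, 0.06, 0.10]:            # noisy blendshape samples
    kf.step(z)
print(kf.predict_ahead(latency=0.010))              # value ~10 ms in the future
```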

    SELF-IMAGE MULTIMEDIA TECHNOLOGIES FOR FEEDFORWARD OBSERVATIONAL LEARNING

    This dissertation investigates the development and use of self-images in augmented reality systems for learning and learning-based activities. The work focuses on self-modeling, a particular form of learning actively employed in various settings for therapy or teaching. In particular, it aims to develop novel multimedia systems that support the display and rendering of augmented self-images, and to use interactivity (via games) as a means of obtaining imagery for creating augmented self-images. Two multimedia systems are developed, discussed and analyzed. The proposed systems are validated in terms of their technical innovation and their clinical efficacy in delivering behavioral interventions for young children on the autism spectrum.

    Augmented Reality

    Augmented Reality (AR) is a natural development from virtual reality (VR), which emerged several decades earlier, and it complements VR in many ways. Because the user can see both real and virtual objects simultaneously, AR is far more intuitive, though it is not free of human factors and other restrictions. AR applications also demand less time and effort, since the entire virtual scene and environment need not be constructed. In this book, several new and emerging application areas of AR are presented in three sections. The first section contains applications in outdoor and mobile AR, such as construction, restoration, security and surveillance. The second section deals with AR in medical and biological contexts and on human bodies. The third and final section contains a number of new and useful applications in daily living and learning.

    Intermediated reality

    Real-time solutions for reducing the gap between the virtual and physical worlds in photorealistic interactive Augmented Reality (AR) are presented. First, a method of texture deformation with image inpainting provides a proof of concept for convincingly re-animating fixed physical objects through digital displays with a seamless visual appearance. This, in combination with novel methods for image-based retargeting of real shadows to deformed virtual poses and environment illumination estimation using inconspicuous flat Fresnel lenses, brings real-world props to life in compelling, practical ways. Live AR animation provides the basis for interactive facial-performance-capture-driven deformation of real-world physical facial props. This enables Intermediated Reality (IR): a tele-present AR framework that drives mediated communication and collaboration for multiple users through the remote possession of toys brought to life. The IR framework provides the foundation for prototype applications in physical avatar chat communication, stop-motion animation movie production, and immersive video games. Specifically, a new approach is demonstrated that reduces the number of physical configurations needed for a stop-motion animation movie by generating the in-between frames digitally in AR. The AR-generated frames preserve the props' natural appearance and achieve smooth transitions between real-world keyframes and digitally generated in-betweens. Finally, the methods integrate across the entire Reality-Virtuality Continuum to target new game experiences called Multi-Reality games. This gaming experience makes an evolutionary step toward the convergence of real and virtual game characters for visceral digital experiences.
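    A minimal sketch of how digital in-betweens between two physically posed keyframes might be generated, assuming a rigid prop pose represented as a position plus a unit quaternion: linear interpolation handles position and spherical linear interpolation (slerp) handles orientation. The poses and frame count below are illustrative, not the paper's data.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:                      # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                   # nearly parallel: fall back to lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def in_betweens(pos0, pos1, rot0, rot1, n):
    """Digitally generated frames between two physically posed keyframes:
    linear interpolation for position, slerp for orientation."""
    frames = []
    for i in range(1, n + 1):
        t = i / (n + 1)
        frames.append((pos0 + t * (pos1 - pos0), slerp(rot0, rot1, t)))
    return frames

# Two real-world keyframe poses of a prop (position in metres, quaternion w-x-y-z).
key0 = (np.array([0.0, 0.0, 0.0]),  np.array([1.0, 0.0, 0.0, 0.0]))
key1 = (np.array([0.1, 0.0, 0.05]), np.array([0.9239, 0.0, 0.3827, 0.0]))  # ~45 deg about y

for pos, rot in in_betweens(key0[0], key1[0], key0[1], key1[1], n=3):
    print(pos, rot)
```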

    Facial Expression Rendering in Medical Training Simulators: Current Status and Future Directions

    Recent technological advances in robotic sensing and actuation methods have prompted the development of a range of new medical training simulators with multiple feedback modalities. Learning to interpret a patient's facial expressions during medical examinations or procedures has been one of the key focus areas in medical training. This paper reviews the facial expression rendering systems in medical training simulators that have been reported to date. Facial expression rendering approaches in other domains are also summarized so that knowledge from those works can inform the development of systems for medical training simulators. Classifications and comparisons of medical training simulators with facial expression rendering are presented, and important design features, merits and limitations are outlined. Medical educators, students and developers are identified as the three key stakeholders involved with these systems, and their considerations and needs are presented. Physical-virtual (hybrid) approaches provide multimodal feedback, render facial expressions accurately, and can simulate patients of different ages, genders and ethnic groups, which makes them more versatile than purely virtual or purely physical systems. The overall findings of this review and the proposed future directions will benefit researchers interested in initiating or developing facial expression rendering systems for medical training simulators. This work was supported by the Robopatient project funded by the EPSRC Grant No EP/T00519X/