88 research outputs found

    3D Medical Collaboration Technology to Enhance Emergency Healthcare

    Two-dimensional (2D) videoconferencing has been explored widely over the past 15–20 years to support collaboration in healthcare. Two issues that arise in most evaluations of 2D videoconferencing in telemedicine are the difficulty of obtaining optimal camera views and poor depth perception. To address these problems, we are exploring the use of a small array of cameras to reconstruct dynamic three-dimensional (3D) views of a remote environment and of the events taking place within it. The 3D views could be sent across wired or wireless networks to remote healthcare professionals equipped with fixed displays or with mobile devices such as personal digital assistants (PDAs). The remote professionals’ viewpoints could be specified manually or automatically (continuously) via user head or PDA tracking, giving the remote viewers head-slaved or hand-slaved virtual cameras for monoscopic or stereoscopic viewing of the dynamic reconstructions. We call this idea remote 3D medical collaboration. In this article we motivate and explain the vision for 3D medical collaboration technology; we describe the relevant computer vision, computer graphics, display, and networking research; we present a proof-of-concept prototype system; and we present evaluation results supporting the general hypothesis that remote 3D medical collaboration technology could offer benefits over conventional 2D videoconferencing in emergency healthcare.
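    The head-slaved virtual camera described above amounts to rebuilding a view matrix from the tracker's latest head pose every frame. A minimal sketch in Python (function and parameter names are illustrative, not from the article):

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Build a right-handed view matrix from a tracked head (eye) position.

    A head-slaved virtual camera re-derives this matrix every frame from the
    latest tracker sample, so the remote viewer's motion steers the viewpoint
    into the 3D reconstruction. Names here are illustrative, not the paper's.
    """
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    f = target - eye
    f /= np.linalg.norm(f)                        # forward axis
    s = np.cross(f, up)
    s /= np.linalg.norm(s)                        # right axis
    u = np.cross(s, f)                            # true up axis
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye             # translate world into eye space
    return view

# Each tracker update yields a fresh view matrix for the reconstruction renderer.
V = look_at(eye=(0.2, 1.6, 2.0), target=(0.0, 1.0, 0.0))
```

    Hand-slaved (PDA-tracked) viewpoints work the same way, with the device pose substituted for the head pose.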

    Guidelines for the use of a feed additive tested in an industrial environment on highly productive commercial laying hens

    These methodological recommendations were developed under Grant Agreement No. 14.W03.31.0013 dated February 20, 2017, as part of the implementation of Russian Federation Government Resolution No. 220 dated April 9, 2010, on the topic “Development of modern biotechnologies for assessing gene expression in connection with productivity and resistance to diseases in poultry farming.” The recommendations are intended for specialists and managers of poultry farms, farmers, researchers, teachers, graduate students and undergraduates of agricultural universities, and participants in advanced training programs.

    Dynamic eye convergence for head-mounted displays improves user performance in virtual environments

    In Virtual Environments (VE), users often face tasks that involve direct manipulation of virtual objects at close distances, such as touching, grabbing, and placing them. In immersive systems that employ head-mounted displays, these tasks can be quite challenging due to the lack of convergence of the virtual cameras. We present a mechanism that dynamically converges the left and right cameras on target objects in the VE, automatically simulating the natural vergence process that takes place in real life. As a result, the rendering system maintains optimal conditions for stereoscopic viewing of target objects at varying depths, in real time. Building on our previous work, which introduced the eye convergence algorithm [Sherstyuk and State 2010], we developed a Virtual Reality (VR) system and conducted an experimental study of the effects of eye convergence in an immersive VE. This paper gives a full description of the system, the study design, and a detailed analysis of the results obtained.
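    The geometric core of dynamic convergence is small: given the inter-camera separation and the depth of the fixated object, each virtual camera toes in so both optical axes meet at the target. A sketch of that geometry, assuming symmetric toe-in (names are illustrative, not the paper's implementation):

```python
import math

def convergence_angles(ipd, target_distance):
    """Toe-in angle (radians) for each camera of a stereo pair so that both
    optical axes intersect at the fixation target. A geometric sketch of the
    dynamic eye convergence idea; parameter names are illustrative."""
    half = math.atan2(ipd / 2.0, target_distance)
    return half, -half  # left camera yaws inward by +half, right by -half

# 64 mm interpupillary distance, target 0.5 m away: ~3.66 degrees per eye.
left, right = convergence_angles(0.064, 0.5)
```

    Re-evaluating these angles every frame as the target depth changes keeps the stereo pair converged in real time.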

    Modeling Real Objects Using Video See-Through Augmented Reality

    This paper presents an interactive “what-you-see-is-what-you-get” (WYSIWYG) method for creating textured 3-D models of real objects using video see-through augmented reality. We use a tracked probe to sample the objects’ geometries, and we acquire video images from the head-mounted cameras to capture textures. Our system provides visual feedback during modeling by overlaying the model onto the real object in the user’s field of view. This visual feedback makes the modeling process interactive and intuitive.
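    Texture capture of the kind described above hinges on projecting each probe-sampled surface point into a head-mounted camera image to look up its texture coordinate. A minimal pinhole-camera sketch; the names K, R, t for intrinsics and pose are illustrative, not the paper's notation:

```python
import numpy as np

def project_to_texture(point_w, K, R, t, img_w, img_h):
    """Project a probe-sampled world-space point into a camera image and
    return a normalized texture coordinate, or None if the point is not
    visible. Standard pinhole model; a sketch, not the paper's code."""
    p_cam = R @ point_w + t                 # world -> camera coordinates
    if p_cam[2] <= 0:
        return None                         # point is behind the camera
    u, v, w = K @ p_cam                     # perspective projection
    u, v = u / w, v / w
    if not (0.0 <= u < img_w and 0.0 <= v < img_h):
        return None                         # falls outside the captured frame
    return (u / img_w, v / img_h)           # normalized (s, t) coordinates

# A 640x480 camera looking down +z from the origin, sample point 2 m ahead:
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
st = project_to_texture(np.array([0.0, 0.0, 2.0]), K, np.eye(3), np.zeros(3), 640, 480)
```

    Points that project behind the camera or outside the frame are simply left untextured until another view covers them.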

    An interactive camera placement and visibility simulator for image-based VR applications

    We describe an interactive software simulator that assists with the design of multi-camera setups for applications such as image-based virtual reality, three-dimensional reconstruction from still or video imagery, surveillance, etc. Instead of automating the camera placement process, our goal is to assist a user by means of a simulator that supports interactive placement and manipulation of multiple cameras within a pre-modeled three-dimensional environment. It provides a real-time 3D rendering of the environment, depicting the exact coverage of each camera (including indications of occluded and overlap regions) and the effective spatial resolution on the surfaces. The simulator can also indicate the dynamic coverage of pan-tilt-zoom cameras using “traces” to highlight areas that are reachable within a user-selectable interval. We describe the simulator, its underlying “engine” and its interface, and we show an example multi-camera setup for remote 3D medical consultation, including preliminary 3D reconstruction results.
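    The two per-camera quantities the simulator displays, coverage and effective spatial resolution, can be sketched with simple geometry. Assuming a symmetric conical field of view and ignoring occlusion (names and simplifications are ours, not the paper's):

```python
import math

def in_fov(cam_pos, cam_dir, fov_deg, point):
    """True if `point` lies inside a symmetric conical field of view.
    A simplified per-camera coverage test with no occlusion handling;
    `cam_dir` is assumed to be a unit vector."""
    to_p = [p - c for p, c in zip(point, cam_pos)]
    dist = math.sqrt(sum(v * v for v in to_p))
    cos_angle = sum(a * b for a, b in zip(to_p, cam_dir)) / dist
    cos_angle = max(-1.0, min(1.0, cos_angle))    # guard acos domain
    return math.degrees(math.acos(cos_angle)) <= fov_deg / 2.0

def pixels_per_meter(h_res, fov_deg, dist):
    """Effective spatial resolution on a surface facing the camera:
    horizontal pixel count divided by the frustum's width at `dist`."""
    footprint = 2.0 * dist * math.tan(math.radians(fov_deg) / 2.0)
    return h_res / footprint

# A 1280-pixel-wide, 60-degree camera imaging a surface 2 m away:
ppm = pixels_per_meter(1280, 60.0, 2.0)
```

    A full simulator would additionally ray-cast against the modeled environment to mark occluded regions and scale the resolution estimate by surface obliquity.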

    Simulation-based design and rapid prototyping of a parallax-free, orthoscopic video see-through head-mounted display

    We built a video see-through head-mounted display (HMD) with zero eye offset from commercial components and a mount fabricated via rapid prototyping. The orthoscopic HMD’s layout was created and optimized with a software simulator. We describe the simulator and the HMD design, show the HMD in use, and demonstrate zero parallax.