
    Visual Scoping and Personal Space on Shared Tabletop Surfaces

    Information is often shared between participants in meetings using a projector or a large display. Shared touch-based tabletop surfaces are an emerging technology for this purpose. The shared display may not be able to accommodate all the information that participants want on it. Moreover, large amounts of displayed information increase complexity and clutter, making it harder for participants to locate specific pieces of information. Key challenges are thus how to eliminate or hide irrelevant information and how participants can add information without unintentionally distracting the other participants. This study reports a novel approach that addresses these challenges by globally hiding information that is not relevant to all participants and by introducing a private area on the public display.

    Stereo Viewing and Virtual Reality Technologies in Mobile Robot Teleguide

    DOI: 10.1109/TRO.2009.2028765
    The use of 3-D stereoscopic visualization may provide a user with higher comprehension of remote environments in teleoperation when compared with 2-D viewing: in particular, higher perception of environment depth characteristics, spatial localization, and remote ambient layout, as well as faster system learning and decision performance. Works in the literature have demonstrated how stereo vision contributes to improved perception of some depth cues, often for abstract tasks, while it is hard to find works addressing stereoscopic visualization in mobile robot teleguide applications. This paper contributes to this aspect by investigating stereoscopic robot teleguide under different conditions, including typical navigation scenarios and the use of synthetic and real images. It also investigates how user performance may vary when employing different display technologies.
Results from a set of test trials run on seven virtual reality systems, from laptop to large panorama and from head-mounted display to Cave Automatic Virtual Environment (CAVE), emphasized a few aspects that represent a base for further investigations as well as a guide when designing specific systems for telepresence. Peer reviewed.
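The depth cue that stereoscopic viewing adds can be quantified with the standard pinhole-stereo disparity relation d = f·b/Z. The sketch below is not from the paper; the baseline and focal length are illustrative values chosen to show why binocular depth cues are strongest for nearby obstacles, which is where teleguide navigation needs them most:

```python
def disparity_px(baseline_m: float, focal_px: float, depth_m: float) -> float:
    """Horizontal binocular disparity in pixels for a point at depth_m.

    Pinhole stereo relation d = f * b / Z: disparity falls off as 1/Z,
    so stereo viewing contributes the strongest depth cue for nearby
    objects and almost nothing for distant ones.
    """
    return focal_px * baseline_m / depth_m

# Illustrative values only: 6.5 cm camera baseline (roughly the human
# interocular distance) and a 700 px focal length.
near = disparity_px(0.065, 700.0, 1.0)    # obstacle 1 m ahead, ~45.5 px
far = disparity_px(0.065, 700.0, 10.0)    # wall 10 m away, ~4.6 px
print(near, far)
```

Once disparity drops below roughly one display pixel the binocular cue vanishes, which suggests why display resolution interacts with the usable depth range when comparing display technologies.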

    Vitreo-retinal eye surgery robot: sustainable precision

    Vitreo-retinal eye surgery encompasses the surgical procedures performed on the vitreous humor and the retina. A procedure typically consists of the removal of the vitreous humor, the peeling of a membrane and/or the repair of a retinal detachment. Vitreo-retinal surgery is performed minimally invasively: small, needle-shaped instruments are inserted into the eye and manipulated by hand in four degrees of freedom about the insertion point. Two rotations move the instrument tip laterally, in addition to a translation in the axial instrument direction and a rotation about the longitudinal axis. The manipulation of the instrument tip, e.g. a gripping motion, can be considered a fifth degree of freedom. While performing vitreo-retinal surgery manually, the surgeon faces various challenges. Typically, delicate tissue only micrometers thick is operated on, which requires steady hand movements and highly accurate instrument manipulation. Lateral instrument movements are inverted by the pivoting insertion point and scaled depending on the instrument insertion depth. A maximum of two instruments can be used simultaneously. There is nearly no perception of surgical forces, since most forces are below the human detection limit; the surgeon therefore relies on visual feedback alone, obtained via a microscope or endoscope. Both vision systems force the surgeon to work in a static and non-ergonomic body posture. Although the surgeon’s proficiency improves throughout his career, hand tremor becomes a problem at a higher age. Robotically assisted surgery with a master-slave system can assist the surgeon with these challenges. The slave system performs the actual surgery by means of instrument manipulators which handle the instruments. The surgeon remains in control of the instruments by operating haptic interfaces on the master side. The master and slave are connected through electronic hardware and control software.
Advantages such as tremor filtering, up-scaled force feedback, down-scaled motions and stabilized instrument positioning will enhance dexterity in surgical tasks. Furthermore, providing the surgeon with an ergonomic body posture will prolong the surgeon’s career. This thesis focuses on the design and realization of a high-precision slave system for eye surgery. The master-slave system uses a table-mounted design, making the system compact, lightweight, easy to set up and equipped to perform a complete intervention. The slave system consists of two main parts: the instrument manipulators and their passive support system. Requirements are derived from manual eye surgery, conversations with medical specialists, and analysis of the human anatomy and vitreo-retinal interventions. The passive support system provides a stiff connection between the instrument manipulator, the patient and the surgical table. Given human anatomical diversity, pre-surgical adjustments can be made to position the instrument manipulators over each eye. Most of the support system is integrated within the patient’s headrest. On either the left or right side, two exchangeable manipulator-support arms can be installed onto the support system, depending on the eye being operated upon. The compact, lightweight and easy-to-install design allows for a short setup time and quick removal in case of a complication. The slave system’s surgical reach is optimized to emulate manually performed surgery. For bimanual instrument operation, two instrument manipulators are used. Additional instrument manipulators can be used for non-active tools, e.g. an illumination probe or an endoscope. An instrument manipulator allows the same degrees of freedom and a similar reach as manually performed surgery. Instrument forces are measured to supply force feedback to the surgeon via the haptic interfaces.
The instrument manipulator is designed for high stiffness, is play-free and has low friction, allowing tissue manipulation with high accuracy. Each instrument manipulator is equipped with an on-board instrument change system, by which instruments can be changed quickly and securely. A compact design near the instrument allows easy access to the surgical area, leaving room for the microscope and peripheral equipment. The acceptance of a surgical robot for eye surgery relies largely on equipment safety and reliability. The design of the slave system features various safety measures, e.g. a quick-release mechanism for the instrument manipulator and additional locks on the pre-surgical adjustment fixation clamp. Additional safety measures are proposed, such as a hard cover over the instrument manipulator and redundant control loops in the controlling FPGA. A method to fixate the patient’s head to the headrest by means of a custom-shaped polymer mask is also proposed. Two instrument manipulators and their passive support system have been realized so far, and the first experimental results confirm the designed low actuation torque and high-precision performance.
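The inversion and depth-dependent scaling of lateral instrument movements described above follow directly from the geometry of the pivoting insertion point, and can be sketched with a small-angle approximation. The lengths below are illustrative, not taken from the thesis:

```python
def tip_displacement(hand_dx_mm: float, inside_len_mm: float,
                     outside_len_mm: float) -> float:
    """Lateral tip motion for a small lateral hand motion at the
    instrument handle, pivoting about a fixed insertion point.

    The pivot flips the sign (motion inversion) and scales the motion
    by the ratio of instrument length inside vs. outside the eye, so
    the hand-to-tip mapping changes continuously with insertion depth.
    Small-angle approximation.
    """
    return -(inside_len_mm / outside_len_mm) * hand_dx_mm

shallow = tip_displacement(1.0, 10.0, 40.0)  # barely inserted: -0.25 mm
deep = tip_displacement(1.0, 30.0, 20.0)     # deep insertion:  -1.5 mm
print(shallow, deep)
```

A master-slave system can hide exactly this varying, inverted mapping from the surgeon by letting the controller apply a fixed, configurable motion scaling instead.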

    Remote Visual Observation of Real Places Through Virtual Reality Headsets

    Virtual Reality has always represented a fascinating yet powerful opportunity that has attracted studies and technology developments, especially since the latest release on the market of powerful high-resolution, wide field-of-view VR headsets. While the great potential of such VR systems is common and accepted knowledge, issues remain related to how to design systems and setups capable of fully exploiting the latest hardware advances. The aim of the proposed research is to study and understand how to increase the perceived level of realism and sense of presence when remotely observing real places through VR headset displays, and hence to produce a set of guidelines that direct system designers in optimizing the display-camera setup to enhance performance, focusing on remote visual observation of real places. The outcome of this investigation represents unique knowledge that is believed to be very beneficial for better VR headset designs towards improved remote observation systems. To achieve this goal, this thesis presents a thorough, systematic investigation of the existing literature and previous research to identify the most important factors governing realism, depth perception, comfort, and sense of presence in VR headset observation. Once identified, these factors are further discussed and assessed through a series of experiments and usability studies based on a predefined set of research questions. More specifically, the roles of familiarity with the observed place, of the environment characteristics shown to the viewer, and of the display used for the remote observation of the virtual environment are further investigated. To gain more insights, two usability studies are proposed with the aim of defining guidelines and best practices.
The main outcomes from the two studies demonstrate that test users experience a more realistic observation when natural features, higher-resolution displays, natural illumination, and high image contrast are used in mobile VR. In terms of comfort, simple scene layouts and relaxing environments are ideal to reduce visual fatigue and eye strain. Furthermore, sense of presence increases when observed environments induce strong emotions, and depth perception improves in VR when several monocular cues, such as lights and shadows, are combined with binocular depth cues. Based on these results, this investigation then presents a focused evaluation of the outcomes and introduces an innovative eye-adapted High Dynamic Range (HDR) approach, which the author believes to be a significant improvement in the context of remote observation when combined with eye-tracked VR headsets. To this end, a third user study is proposed to compare static HDR and eye-adapted HDR observation in VR, to assess whether the latter can improve realism, depth perception, sense of presence, and in certain cases even comfort. Results from this last study confirmed the author's expectations, proving that eye-adapted HDR and eye tracking should be used to achieve the best visual performance for remote observation in modern VR systems.
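The eye-adapted HDR idea can be illustrated with a minimal sketch: use the luminance under the tracked gaze point to drive the exposure of a global tone-mapping operator, mimicking retinal adaptation. Everything below is an assumption-laden illustration; the function names and the choice of the Reinhard operator belong to this sketch, not to the thesis:

```python
def eye_adapted_exposure(gaze_luminance: float, key: float = 0.18) -> float:
    """Exposure scale that maps the luminance currently under the
    user's gaze to a mid-grey 'key' value, so the fixated region is
    always well exposed -- a crude model of retinal adaptation."""
    return key / max(gaze_luminance, 1e-6)

def tonemap(luminance: float, exposure: float) -> float:
    """Reinhard global operator L/(1+L), applied after exposure
    scaling, compresses HDR luminance into the display's [0, 1) range."""
    scaled = luminance * exposure
    return scaled / (1.0 + scaled)

# Whatever the gaze lands on -- a bright window (100 cd/m^2) or a dark
# corner (0.01 cd/m^2) -- is rendered at the same mid-grey level, while
# the rest of the frame brightens or darkens around it.
print(tonemap(100.0, eye_adapted_exposure(100.0)))
print(tonemap(0.01, eye_adapted_exposure(0.01)))
```

Static HDR, by contrast, fixes the exposure once per scene, so fixating a bright or dark region cannot reveal additional detail there.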

    19th Conference of the Association of Christians in the Mathematical Sciences

    Association of Christians in the Mathematical Sciences 19th Biennial Conference Proceedings, May 29 - June 1, 2011, Bethel University

    3D Visualization Technologies for Teleguided Robots
