6 research outputs found

    Stereoscopic human interfaces

    This article focuses on the use of stereoscopic video interfaces for telerobotics. Topics concerning human visual perception, binocular image capture, and stereoscopic devices are described. There is a wide variety of video interfaces for telerobotic systems, and choosing the best one depends on the requirements of the telerobotic application. Simple monoscopic cameras are good enough for watching remote robot movements or for teleprogramming a sequence of commands. However, when operators need to guide the robot precisely or manipulate objects, a better perception of the remote environment must be achieved, which requires more advanced visual interfaces. This implies a higher degree of telepresence, and therefore the most suitable visual interface has to be chosen. The aim of this article is to describe the two main aspects of using stereoscopic interfaces: the capture of binocular video images according to the disparity limits of human perception, and the proper selection of the visualization interface for stereoscopic images.
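    The disparity-limit constraint mentioned above can be illustrated with a short sketch: it checks whether a captured point's on-screen disparity stays within a commonly cited one-degree comfort limit of visual angle. The parallel-rig camera model, parameter names, and all numeric values are illustrative assumptions, not taken from the article.

```python
import math

def screen_disparity_deg(baseline_m, focal_px, depth_m, convergence_m,
                         sensor_to_screen_scale, viewer_distance_m,
                         screen_px_size_m):
    """On-screen horizontal disparity, in degrees of visual angle, for a
    point at depth_m captured with a parallel stereo rig whose images are
    shifted to converge at convergence_m (all parameters illustrative)."""
    # disparity on the sensor, in pixels (parallel-axis pinhole model)
    d_sensor_px = focal_px * baseline_m * (1.0 / depth_m - 1.0 / convergence_m)
    # disparity on the display, in metres
    d_screen_m = d_sensor_px * sensor_to_screen_scale * screen_px_size_m
    # visual angle subtended at the viewer's eyes
    return math.degrees(2.0 * math.atan(d_screen_m / (2.0 * viewer_distance_m)))

# A common rule of thumb bounds comfortable screen disparity near 1 degree.
COMFORT_LIMIT_DEG = 1.0

d = screen_disparity_deg(baseline_m=0.065, focal_px=1200, depth_m=1.5,
                         convergence_m=2.0, sensor_to_screen_scale=1.0,
                         viewer_distance_m=2.0, screen_px_size_m=0.0005)
print(abs(d) <= COMFORT_LIMIT_DEG)
```

    In a capture pipeline, a check like this would be run for the nearest and farthest scene points to decide whether the chosen camera baseline keeps the whole scene fusible.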

    Virtual Reality to Simulate Visual Tasks for Robotic Systems

    Virtual reality (VR) can be used as a tool to analyze the interactions between the visual system of a robotic agent and the environment, with the aim of designing the algorithms needed to solve the visual tasks required to behave properly in the 3D world. The novelty of our approach lies in the use of VR as a tool to simulate the behavior of vision systems. The visual system of a robot (e.g., an autonomous vehicle, an active vision system, or a driving-assistance system) and its interplay with the environment can be modeled through the geometrical relationships between the virtual stereo cameras and the virtual 3D world. Unlike conventional applications, where VR is used for the perceptual rendering of visual information to a human observer, in the proposed approach a virtual world is rendered to simulate the actual projections on the cameras of a robotic system. In this way, machine vision algorithms can be quantitatively validated using the ground-truth data provided by knowledge of both the structure of the environment and the vision system.
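    The validation idea above can be sketched in a few lines: because the virtual scene geometry is fully known, the exact disparity a virtual stereo rig produces for any 3D point can be computed analytically and used as ground truth for a stereo-matching algorithm. The pinhole model and values below are a generic illustration, not the paper's implementation.

```python
def project(point, cam_x, focal_px, cx):
    """Pinhole projection of a 3D point (X, Y, Z) onto a camera whose
    optical centre is shifted by cam_x along the baseline axis."""
    X, Y, Z = point
    u = focal_px * (X - cam_x) / Z + cx
    v = focal_px * Y / Z  # principal-point offset omitted for brevity
    return u, v

def ground_truth_disparity(point, baseline, focal_px, cx=320.0):
    """Exact disparity the virtual rig would produce for this point."""
    uL, _ = project(point, -baseline / 2.0, focal_px, cx)
    uR, _ = project(point, +baseline / 2.0, focal_px, cx)
    return uL - uR  # equals focal_px * baseline / Z for a parallel rig

# A stereo-matching algorithm's estimate can now be scored against this
# exact value, since the virtual environment's structure is known.
d = ground_truth_disparity((0.2, 0.1, 2.0), baseline=0.1, focal_px=500.0)
```

    Comparing the algorithm's output against `d` over many rendered points gives the quantitative validation described in the abstract.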

    Integrated Virtual Reality Game Interaction: The Archery Game

    The aim of this research is to develop an innovative technology framework that allows a new type of human-computer interaction, in which recognition of the user's motion is combined with immersive visualisation. The demonstrator program allows the user to visualise a virtual arrow on top of a real physical crossbow used as a game controller. Stereoscopic 3D visualisation is implemented using two virtual cameras with a variable angle between them (automated toe-in). An algorithm is also used to calculate the maximum acceptable dimension of the stereoscopic arrow, in relation to the screen size and the data provided by the motion-capture system about the user's position. The motion-capture data are transmitted to the video game through a network interface, which also depacketizes and processes the motion data. Tests show that the stereoscopic 3D effect is strong enough to visualise the virtual arrow in a realistic position on top of the crossbow. The theory formulated may be used to develop a new generation of stereoscopic applications that are easy to use and immersive.
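    The automated toed-in setup described above reduces to one geometric relation: each virtual camera rotates inward so the optical axes cross at the tracked convergence distance. The function and the numbers below are illustrative assumptions; the paper's actual algorithm, including the arrow-size bound, is not reproduced here.

```python
import math

def toe_in_angle_deg(baseline_m, convergence_m):
    """Inward rotation for each camera of a toed-in virtual stereo rig,
    chosen so the optical axes cross at the convergence distance."""
    return math.degrees(math.atan((baseline_m / 2.0) / convergence_m))

# As the motion-capture data report the user (and hence the virtual
# arrow) moving closer, the convergence distance shrinks and the
# toe-in angle must grow to keep the arrow at zero parallax.
near = toe_in_angle_deg(0.065, 0.5)   # arrow roughly 0.5 m away
far = toe_in_angle_deg(0.065, 3.0)    # arrow roughly 3 m away
print(near > far)
```

    Updating this angle every frame from the tracked user position is what makes the toe-in "automated" rather than fixed at authoring time.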

    Realization of a demonstrator slave for robotic minimally invasive surgery

    Robots for Minimally Invasive Surgery (MIS) can improve the surgeon's working conditions with respect to conventional MIS and enable MIS for more complex procedures. This requires providing the surgeon with tactile feedback to feel forces exerted on, e.g., tissue and sutures, which is partially lost in conventional MIS. Additionally, the use of a robot should improve the approach possibilities of a target organ, by means of instrument degrees of freedom (DoFs), and of the entry points, with a compact set-up. These requirements add to those set by the most common commercially available system, the da Vinci, which are: (i) dexterity, (ii) natural hand-eye coordination, (iii) a comfortable body posture, (iv) intuitive utilization, and (v) a stereoscopic '3D' view of the operation site. The purpose of Sofie (Surgeon's operating force-feedback interface Eindhoven) is to evaluate the possible benefit of force-feedback and the approach of both patient and target organ. Sofie integrates master, slave, electronic hardware and control. This thesis focuses on the design and realization of a technology demonstrator of the Slave. To provide good accuracy and valuable force-feedback, good dynamic behavior and limited hysteresis are required. To this end the Slave includes (i) a relatively short force-path between its instrument-tips and between tip and patient, and (ii) a passive instrument-support by means of a remote, kinematically fixed point of rotation. The incision tissue does not support the instrument. The Slave is connected directly to the table. It provides a 20 DoF, highly adaptable, stiff frame (pre-surgical set-up) with a short force-path between the instrument-tips and between instrument-tip and patient. During surgery this frame supports three 4 DoF manipulators: two for exchangeable 4 DoF instruments and one for an endoscope. The pre-surgical set-up of the Slave consists of a 5 DoF platform-adjustment with a platform.
This platform can hold three 5 DoF manipulator-adjustments in line-up. The set-up is compact to avoid interference with the surgical team, entirely mechanical, and allows fast manual adjustment and fixation of the joints. It provides a stiff frame during surgery. A weight-compensation mechanism for the platform-adjustment has been proposed. Measurements indicate all natural frequencies are above 25 Hz. The manipulator moves the instrument in 4 DoFs (??, ??, ?? and Z). Each manipulator passively supports its instrument with a parallelogram mechanism, providing a kinematically fixed point of rotation. Two manipulators have been designed in consecutive order. The first manipulator is driven with a worm and worm-wheel; the second design uses a ball-screw drive. The ball-screw drive reduces friction, which is preferred for the next generations of the manipulator, since the worm and worm-wheel drive shows a relatively low coherence at low frequencies. The compact ??Z-manipulator moves the instrument in ?? by rotating a drum. Friction wheels in the drum provide Z. Eventually, the drum will be removable from the manipulator for sterilization. This layout of the manipulator results in a small motion envelope and obstructs the team at the table the least. Force sensors measuring the forces exerted with the instrument are integrated in the manipulator instead of at the instrument tip, to avoid any risk of electrical signals being introduced into the patient. Measurements indicate the separate sensors function properly. Integrated in the manipulator, the sensors provide a good indication of the force, but they do suffer from some hysteresis, which might be caused by moving wires. The instrument as realized consists of a drive-box, an instrument-tube and a 4 DoF tip. It provides the surgeon with three DoFs in addition to the gripper of conventional MIS instruments. These DoFs include two lateral rotations (pitch and pivot) to improve the approach possibilities, while the roll DoF will contribute to stitching.
Pitch and roll are driven by means of bevel gears, driven with concentric tubes. Cables drive the pivot and gripper-close DoFs. The transmissions are backdrivable for safety. The theoretical torques that can be achieved with this instrument closely approximate the requirements. Further research needs to reveal the torques achieved in practice, and whether the requirements obtained from the literature are actually necessary for these 4 DoF instruments. Force sensors are proposed and can be integrated. Sofie currently consists of a Master prototype with two 5 DoF haptic interfaces, the Slave, and an electronic hardware cabinet. The surgeon uses the haptic interfaces of the Master to control the manipulators and instruments of the Slave, while the actuated DoFs of the Master provide the surgeon with force-feedback. This project resulted in a demonstrator of the Slave with force sensors incorporated, compact for easy approach of the patient, and with additional DoFs to increase the approach possibilities of the target organ. This Slave and Master provide a good starting point for implementing haptic controllers. These additional features may ultimately benefit both surgeon and patient.
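    The master-slave coupling described above can be sketched, in a deliberately generic form, as one tick of a position-forward / force-back bilateral loop: the slave tracks the master's position while the force measured at the slave is reflected to the master's actuated DoFs. The scheme, gains, and names below are illustrative assumptions and do not represent Sofie's actual controllers.

```python
def bilateral_step(master_pos, slave_pos, slave_force, kp=50.0, k_ff=1.0):
    """One control tick for a single DoF: the slave is commanded to track
    the master's position, and the force measured at the slave side is
    reflected back to the master (signs and gains illustrative)."""
    slave_cmd = kp * (master_pos - slave_pos)   # slave tracks the master
    master_torque = -k_ff * slave_force         # reflected tip force
    return slave_cmd, master_torque

# Example: the master leads by 0.1 rad while the tip senses 2.0 N;
# the slave is pushed forward and the surgeon feels the resistance.
cmd, torque = bilateral_step(master_pos=0.1, slave_pos=0.0, slave_force=2.0)
```

    Limited hysteresis and good dynamic behavior matter here precisely because `slave_force` must reflect the tip interaction faithfully for the feedback to feel right.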

    Remote Visual Observation of Real Places Through Virtual Reality Headsets

    Virtual Reality has always represented a fascinating yet powerful opportunity that has attracted studies and technology developments, especially since the latest high-resolution, wide field-of-view VR headsets were released on the market. While the great potential of such VR systems is common and accepted knowledge, issues remain regarding how to design systems and setups capable of fully exploiting the latest hardware advances. The aim of the proposed research is to study and understand how to increase the perceived level of realism and the sense of presence when remotely observing real places through VR headset displays, and hence to produce a set of guidelines that give system designers directions on how to optimize the display-camera setup to enhance performance, focusing on remote visual observation of real places. The outcome of this investigation represents unique knowledge that is believed to be beneficial for better VR headset designs and improved remote observation systems. To achieve this goal, this thesis presents a thorough, systematic investigation of the existing literature and previous research to identify the most important factors governing realism, depth perception, comfort, and sense of presence in VR headset observation. Once identified, these factors are further discussed and assessed through a series of experiments and usability studies based on a predefined set of research questions. More specifically, the roles of familiarity with the observed place, of the characteristics of the environment shown to the viewer, and of the display used for the remote observation of the virtual environment are further investigated. To gain more insights, two usability studies are proposed with the aim of defining guidelines and best practices.
The main outcomes of the two studies demonstrate that test users experience a more realistic observation when natural features, higher-resolution displays, natural illumination, and high image contrast are used in mobile VR. In terms of comfort, simple scene layouts and relaxing environments are considered ideal to reduce visual fatigue and eye strain. Furthermore, the sense of presence increases when the observed environments induce strong emotions, and depth perception improves in VR when several monocular cues, such as lights and shadows, are combined with binocular depth cues. Based on these results, the investigation then presents a focused evaluation of the outcomes and introduces an innovative eye-adapted High Dynamic Range (HDR) approach, which the author believes to be a substantial improvement for remote observation when combined with eye-tracked VR headsets. To this end, a third user study is proposed that compares static HDR and eye-adapted HDR observation in VR, to assess whether the latter can improve realism, depth perception, sense of presence, and, in certain cases, even comfort. Results from this last study confirmed the author's expectations, showing that eye-adapted HDR and eye tracking should be used to achieve the best visual performance for remote observation in modern VR systems.
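    The eye-adapted HDR idea can be illustrated with a minimal sketch: the tone-mapping exposure follows the luminance around the tracked gaze point, so looking at a bright region darkens the mapped image, much as the eye itself adapts. The windowed-mean adaptation and Reinhard-style operator below are generic assumptions, not the thesis's actual pipeline.

```python
def eye_adapted_tonemap(luminance, gaze, radius=1):
    """Gaze-driven exposure: adapt to the mean HDR luminance in a small
    window around the gaze point (row, col), then apply a simple
    Reinhard-style global operator scaled by that adaptation level."""
    rows, cols = len(luminance), len(luminance[0])
    gy, gx = gaze
    window = [luminance[y][x]
              for y in range(max(0, gy - radius), min(rows, gy + radius + 1))
              for x in range(max(0, gx - radius), min(cols, gx + radius + 1))]
    adapt = sum(window) / len(window)            # adaptation luminance
    return [[(L / adapt) / (1.0 + L / adapt) for L in row]
            for row in luminance]

# A dim scene with one bright patch: adapting to the patch compresses it,
# while adapting to a dark corner leaves the patch near display white.
scene = [[1.0] * 5 for _ in range(5)]
scene[2][2] = 100.0
bright_gaze = eye_adapted_tonemap(scene, gaze=(2, 2))
dark_gaze = eye_adapted_tonemap(scene, gaze=(0, 0))
print(bright_gaze[2][2] < dark_gaze[2][2])
```

    Driving `gaze` from an eye-tracked headset, rather than from a fixed exposure, is what distinguishes the eye-adapted approach from static HDR in the comparison above.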

    Development of a 3D vision system for integration into a social mobile robot

    In a branch of engineering as important as robotics, with devices in ever closer contact with the world around us, suitable human-machine interaction systems have become essential to make this relationship as natural as possible. Computer vision is one of the fields that offers the most support for the development of such interaction systems. This project addresses the understanding, design, and implementation of a 3D computer vision system with the goal of controlling and interacting with a social mobile robot. Departamento de Ingeniería de Sistemas y Automática. Grado en Ingeniería en Electrónica Industrial y Automática.