
    Stereoscopic human interfaces

    This article focuses on the use of stereoscopic video interfaces for telerobotics. Topics concerning human visual perception, binocular image capture, and stereoscopic devices are described. There is a wide variety of video interfaces for telerobotic systems, and choosing the best one depends on the requirements of the telerobotic application. Simple monoscopic cameras are good enough for watching remote robot movements or for teleprogramming a sequence of commands. However, when operators seek precise robot guidance or wish to manipulate objects, a better perception of the remote environment must be achieved, for which more advanced visual interfaces are required. This implies a higher degree of telepresence, and, therefore, the most suitable visual interface has to be chosen. The aim of this article is to describe the two main aspects of using stereoscopic interfaces: the capture of binocular video images according to the disparity limits of human perception, and the proper selection of the visualization interface for stereoscopic images.
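
    As an illustration of the disparity-limit consideration above, the sketch below checks whether the pixel disparity produced by a parallel stereo rig stays within the on-screen parallax budget implied by the common rule of thumb of roughly one degree of screen parallax. All numbers are illustrative assumptions (the article prescribes no specific formula or limit), and captured pixels are assumed to map one-to-one onto display pixels.

        import math

        def pixel_disparity(f_px, baseline_m, z_m, z_conv_m):
            """Horizontal disparity (pixels) of a point at depth z_m for a
            parallel stereo rig whose sensors are shifted to converge at
            z_conv_m (positive result = crossed, i.e. in front of screen)."""
            return f_px * baseline_m * (1.0 / z_m - 1.0 / z_conv_m)

        def parallax_budget_px(view_dist_m, px_pitch_m, max_angle_deg=1.0):
            """On-screen parallax budget in display pixels, from the ~1 degree
            comfort rule of thumb (an assumed, conservative limit)."""
            return math.tan(math.radians(max_angle_deg)) * view_dist_m / px_pitch_m

        # Example: display with 0.265 mm pixel pitch viewed from 0.7 m.
        budget = parallax_budget_px(view_dist_m=0.7, px_pitch_m=0.000265)
        d_near = pixel_disparity(f_px=1400, baseline_m=0.065, z_m=0.8, z_conv_m=2.0)
        print(f"nearest object: {d_near:.1f} px, budget: {budget:.1f} px")
        # Here 68.2 px exceeds the ~46 px budget, so this rig would need a
        # shorter baseline or a closer convergence distance.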

    Stereo Viewing and Virtual Reality Technologies in Mobile Robot Teleguide

    The use of 3-D stereoscopic visualization may provide a user with higher comprehension of remote environments in teleoperation when compared with 2-D viewing: in particular, better perception of environment depth characteristics, spatial localization, and remote ambient layout, plus faster system learning and decision performance. Works in the literature have demonstrated how stereo vision contributes to improving the perception of some depth cues, often for abstract tasks, while it is hard to find works addressing stereoscopic visualization in mobile robot teleguide applications. This paper contributes to this aspect by investigating stereoscopic robot teleguide under different conditions, including typical navigation scenarios and the use of synthetic and real images. It also investigates how user performance may vary when employing different display technologies. Results from a set of test trials run on seven virtual reality systems, from laptop to large panorama and from head-mounted display to Cave Automatic Virtual Environment (CAVE), emphasized a few aspects that represent a basis for further investigation as well as a guide when designing specific systems for telepresence. Peer reviewed. DOI: 10.1109/TRO.2009.2028765.

    Virtual Reality to Simulate Visual Tasks for Robotic Systems

    Virtual reality (VR) can be used as a tool to analyze the interactions between the visual system of a robotic agent and the environment, with the aim of designing the algorithms needed to solve visual tasks and behave properly in the 3D world. The novelty of our approach lies in the use of VR as a tool to simulate the behavior of vision systems. The visual system of a robot (e.g., an autonomous vehicle, an active vision system, or a driving assistance system) and its interplay with the environment can be modeled through the geometrical relationships between the virtual stereo cameras and the virtual 3D world. Unlike conventional applications, where VR is used for the perceptual rendering of visual information to a human observer, in the proposed approach a virtual world is rendered to simulate the actual projections on the cameras of a robotic system. In this way, machine vision algorithms can be quantitatively validated using the ground-truth data provided by knowledge of both the structure of the environment and the vision system.
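
    A minimal sketch of this ground-truth idea follows (Python with NumPy; the camera parameters are invented for illustration and are not taken from the paper): project points of the virtual 3D world into both virtual cameras, then recover the exact disparity a stereo algorithm should output.

        import numpy as np

        def project(K, R, t, X):
            """Pinhole projection of world points X (N x 3) to pixels (N x 2)."""
            Xc = X @ R.T + t               # world -> camera coordinates
            uv = Xc @ K.T                  # apply intrinsics (homogeneous)
            return uv[:, :2] / uv[:, 2:3]  # perspective divide

        # Two identical virtual cameras in a parallel rig, baseline b along x.
        K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
        b = 0.1
        X = np.array([[0.2, 0.0, 2.0], [-0.5, 0.3, 5.0]])  # known virtual scene
        uv_left = project(K, np.eye(3), np.zeros(3), X)
        uv_right = project(K, np.eye(3), np.array([-b, 0.0, 0.0]), X)
        gt_disparity = uv_left[:, 0] - uv_right[:, 0]
        print(gt_disparity)                # [40. 16.]
        print(K[0, 0] * b / X[:, 2])       # same values from f*b/Z: [40. 16.]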

    Perception-driven approaches to real-time remote immersive visualization

    In remote immersive visualization systems, real-time 3D perception through RGB-D cameras, combined with modern virtual reality (VR) interfaces, enhances the user's sense of presence in a remote scene through 3D reconstruction rendered in an immersive display, particularly when there is a need to visualize, explore, and perform tasks in environments that are inaccessible, hazardous, or distant. However, a remote visualization system requires that the entire pipeline, from 3D data acquisition to VR rendering, satisfy demands on speed, throughput, and visual realism. Especially when using point clouds, there is a fundamental quality difference between the acquired data of the physical world and the displayed data, because network latency and throughput limitations negatively impact the sense of presence and provoke cybersickness. This thesis presents research that addresses these problems by taking the human visual system as inspiration, from sensor data acquisition to VR rendering. The human visual system does not have uniform vision across the field of view: visual acuity is sharpest at the center and falls off towards the periphery. Peripheral vision provides lower resolution that guides eye movements so that central vision visits all the crucial parts of a scene. As a first contribution, the thesis develops remote visualization strategies that exploit this acuity fall-off to facilitate the processing, transmission, buffering, and VR rendering of 3D reconstructed scenes while simultaneously reducing throughput requirements and latency. As a second contribution, the thesis examines attentional mechanisms that select and draw user engagement to specific information in a dynamic spatio-temporal environment. It proposes a strategy for analyzing the remote scene in terms of its 3D structure, its layout, and the spatial, functional, and semantic relationships between objects, using models of human visual perception; the strategy allocates a greater proportion of computational resources to objects of interest and thereby creates a more realistic visualization. As a supplementary contribution, a new volumetric point-cloud density-based Peak Signal-to-Noise Ratio (PSNR) metric is proposed to evaluate the introduced techniques. An in-depth evaluation of the presented systems, a comparative examination of the proposed point-cloud metric, user studies, and experiments demonstrate that the introduced methods are visually superior while significantly reducing latency and throughput requirements.
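
    The abstract does not spell out the reduction algorithm, but a toy sketch of acuity-driven point-cloud subsampling might look like the following (Python/NumPy; the exponential fall-off and its 30-degree constant are assumptions for illustration, not the thesis's actual model): points far from the gaze direction are kept with lower probability, cutting the data that must be processed, transmitted, and rendered.

        import numpy as np

        def foveated_subsample(points, gaze_dir, falloff_deg=30.0, rng=None):
            """Keep points with a probability that decays with angular
            eccentricity from the gaze direction, mimicking the acuity
            fall-off of human vision. points: (N, 3) array in the viewer's
            frame (viewer at origin); gaze_dir: unit 3-vector."""
            rng = rng or np.random.default_rng(0)
            d = points / np.linalg.norm(points, axis=1, keepdims=True)
            ecc = np.degrees(np.arccos(np.clip(d @ gaze_dir, -1.0, 1.0)))
            p_keep = np.exp(-ecc / falloff_deg)   # assumed exponential decay
            return points[rng.random(len(points)) < p_keep]

        cloud = np.random.default_rng(1).normal(size=(100_000, 3)) + [0.0, 0.0, 3.0]
        kept = foveated_subsample(cloud, np.array([0.0, 0.0, 1.0]))
        print(len(cloud), "->", len(kept))  # far fewer peripheral points survive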

    Spatial Displays and Spatial Instruments

    The conference proceedings topics are divided into two main areas: (1) issues of spatial and picture perception raised by graphical electronic displays of spatial information, and (2) design questions raised by the practical experience of designers actually defining new spatial instruments for use in new aircraft and spacecraft. Each topic is considered from both a theoretical and an applied direction. Emphasis is placed on the discussion of phenomena and the determination of design principles.

    A family of stereoscopic image compression algorithms using wavelet transforms

    With the standardization of JPEG-2000, wavelet-based image and video compression technologies are gradually replacing the popular DCT-based methods. In parallel to this, recent developments in autostereoscopic display technology are threatening to revolutionize the way consumers are used to enjoying traditional 2-D display based electronic media such as television, computers, and movies. However, due to the two-fold bandwidth/storage requirement of stereoscopic imaging, an essential requirement of a stereo imaging system is efficient data compression. In this thesis, seven wavelet-based stereo image compression algorithms are proposed, to take advantage of the higher data compaction capability and better flexibility of wavelets. [Continues.]
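
    As a rough illustration of this class of codec (not one of the seven proposed algorithms), the Python sketch below, using the PyWavelets package, codes the left image directly and the right image as a disparity-compensated residual, keeping only the largest wavelet coefficients. The single global shift stands in for real per-block disparity estimation, and the thresholding stands in for real quantization and entropy coding.

        import numpy as np
        import pywt

        def compress_stereo(left, right, shift, wavelet="db2", level=3, frac=0.05):
            """Toy stereo codec: wavelet-code the left image and the
            right-minus-predicted residual, keeping only the largest
            `frac` of coefficients in each. Assumes image dimensions
            divisible by 2**level so reconstruction shapes match."""
            left = np.asarray(left, dtype=float)
            right = np.asarray(right, dtype=float)
            pred = np.roll(left, shift, axis=1)   # predict right view from left
            coded = []
            for img in (left, right - pred):
                coeffs = pywt.wavedec2(img, wavelet, level=level)
                arr, slices = pywt.coeffs_to_array(coeffs)
                cutoff = np.quantile(np.abs(arr), 1.0 - frac)
                arr[np.abs(arr) < cutoff] = 0.0   # discard small coefficients
                coded.append((arr, slices))
            return pred, coded

        def decompress_stereo(pred, coded, wavelet="db2"):
            rec = [pywt.waverec2(
                       pywt.array_to_coeffs(a, s, output_format="wavedec2"),
                       wavelet)
                   for a, s in coded]
            return rec[0], rec[1] + pred          # right = residual + prediction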

    Scalable multi-view stereo camera array for real world real-time image capture and three-dimensional displays

    Thesis (S.M.) by Samuel L. Hill, Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2004. Includes bibliographical references (leaves 71-75). The number of three-dimensional displays available is escalating, and yet the capturing devices for multiple-view content are focused on either single-camera precision rigs that are limited to stationary objects or the use of synthetically created animations. In this work we use inexpensive digital CMOS cameras to explore a multi-image capture paradigm and the gathering of real-world, real-time data of active and static scenes. The capturing system can be developed and employed for a wide range of applications, such as portrait-based images for multi-view facial recognition systems, hypostereo surgical training systems, and stereo surveillance by unmanned aerial vehicles. The system is adaptable, capturing the correct stereo views based on the environmental scene and the desired three-dimensional display. Issues explored by the system include image calibration, geometric correction, the possibility of object tracking, and transfer of the array technology into other image-capturing systems. These features give the user more freedom to interact with their specific 3-D content while allowing the computer to take on the difficult role of stereoscopic cinematographer.
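
    The abstract mentions image calibration and geometric correction; a minimal OpenCV sketch of one common way to do this for a camera array is shown below (the checkerboard-homography alignment is an assumed illustration, not necessarily the method used in the thesis): each camera is undistorted with its own calibration and then warped into a reference view.

        import cv2

        def homography_to_reference(img_cam, img_ref, pattern=(9, 6)):
            """Estimate a camera-to-reference homography from a shared
            checkerboard seen by both cameras."""
            ok1, c1 = cv2.findChessboardCorners(img_cam, pattern)
            ok2, c2 = cv2.findChessboardCorners(img_ref, pattern)
            assert ok1 and ok2, "checkerboard not visible in both views"
            H, _ = cv2.findHomography(c1, c2, cv2.RANSAC)
            return H

        def correct_view(img, K, dist, H_to_ref):
            """Geometric correction for one array camera: undo lens
            distortion using its calibration (K, dist), then warp the
            result into the reference camera's frame."""
            undistorted = cv2.undistort(img, K, dist)
            h, w = img.shape[:2]
            return cv2.warpPerspective(undistorted, H_to_ref, (w, h))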

    Virtual Reality and Oceanography: Overview, Applications, and Perspective

    With the ongoing, exponential increase in ocean data from autonomous platforms, satellites, models, and, in particular, the growing field of quantitative imaging, there arises a need for scalable and cost-efficient visualization tools to interpret these large volumes of data. With the recent proliferation of consumer-grade head-mounted displays, the emerging field of virtual reality (VR) has demonstrated its benefit in numerous disciplines, ranging from medicine to archeology. However, these benefits have not received as much attention in the ocean sciences. Here, we summarize some of the ways that virtual reality has been applied to this field and highlight a few examples in which we (the authors) demonstrate the utility of VR as a tool for ocean scientists. For oceanic datasets that are well suited to three-dimensional visualization, virtual reality has the potential to enhance the practice of ocean science.