101 research outputs found

    Machine vision based teleoperation aid

    When teleoperating a robot using video from a remote camera, it is difficult for the operator to gauge depth and orientation from a single view. In addition, there are situations where a camera mounted for viewing during a teleoperation task may not be able to see the tool tip, or the viewing angle may not be intuitive, requiring extensive training to reduce the risk of incorrect or dangerous moves by the teleoperator. A machine vision based teleoperator aid is presented which uses the operator's camera view to compute an object's pose (position and orientation), and then overlays information on the object's current and desired positions onto the operator's screen. The operator can choose to display orientation and translation information as graphics and/or text. This aid provides easily assimilated depth and relative orientation information to the teleoperator. The camera may be mounted at any known orientation relative to the tool tip. A preliminary experiment with human operators showed that task accuracies were significantly greater with the aid than without it.
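    The abstract does not name the pose-estimation algorithm; a minimal sketch of one standard approach, assuming OpenCV, a calibrated camera, and known 2D-3D feature correspondences on the object (all function and variable names here are illustrative, not the paper's):

        # Hedged sketch: estimate object pose via PnP and overlay the object's
        # axes and remaining distance to the goal, in the spirit of the aid.
        import cv2
        import numpy as np

        def draw_pose_aid(frame, obj_pts, img_pts, K, dist, desired_t):
            """Estimate the object's pose from 2D-3D correspondences, then draw
            its coordinate axes and the remaining translation to the goal."""
            ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
            if not ok:
                return frame
            # Project short coordinate axes to visualise orientation.
            axes = np.float32([[0, 0, 0], [.05, 0, 0], [0, .05, 0], [0, 0, .05]])
            pts, _ = cv2.projectPoints(axes, rvec, tvec, K, dist)
            origin = tuple(pts[0].ravel().astype(int))
            for p, colour in zip(pts[1:], [(0, 0, 255), (0, 255, 0), (255, 0, 0)]):
                cv2.line(frame, origin, tuple(p.ravel().astype(int)), colour, 2)
            # Text overlay: distance remaining to the desired position (metres).
            err = desired_t - tvec.ravel()
            cv2.putText(frame, f"d={np.linalg.norm(err):.3f} m", (10, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 255, 255), 2)
            return frame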

    Shape Perception of Clear Water in Photo-Realistic Images

    Light plays a vital role in the perception of the transparency, depth and shape of liquids. Perceiving the surface of a liquid requires an understanding of the refraction of light and knowledge of the underlying texture geometry. Given this, what specific characteristics of the natural optical environment are essential to the perception of transparent liquids, specifically with respect to efficiency and realism? In this thesis, a light path triangulation method for recovering transparent surface shape and a system for estimating the perceived shape of any arbitrarily shaped object with a refractive surface are proposed. A psychophysical experiment investigated the perceived shape of water in stereo images using a real-time stereoscopic 3-D depth gauge. The results suggest that people can consistently perceive the shape of liquids from photo-realistic images and that regularity in the underlying texture facilitates human judgement of surface shape.
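    Light path triangulation through a refractive surface rests on Snell's law; a minimal sketch of the refraction step in vector form, assuming unit directions (the helper name and example values are mine, not the thesis's):

        # Refracted ray direction via Snell's law: d is the unit incident
        # direction, n the unit surface normal opposing d, eta = n1 / n2.
        import numpy as np

        def refract(d, n, eta):
            cos_i = -np.dot(d, n)
            sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
            if sin2_t > 1.0:
                return None  # total internal reflection
            cos_t = np.sqrt(1.0 - sin2_t)
            return eta * d + (eta * cos_i - cos_t) * n

        # Air-to-water ray (n_air = 1.0, n_water ~ 1.33) at 45 deg incidence:
        d = np.array([0.0, -np.sqrt(0.5), -np.sqrt(0.5)])
        n = np.array([0.0, 0.0, 1.0])
        print(refract(d, n, 1.0 / 1.33))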

    Shared-Frustum stereo rendering

    Thesis (S.M.) -- Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (p. 52-54). By Michael Vincent Capps, S.M.

    Stereoscopic 3D user interfaces: exploring the potentials and risks of 3D displays in cars

    During recent years, rapid advancements in stereoscopic digital display technology have led to the acceptance of high-quality 3D in the entertainment sector and even created enthusiasm for the technology. The advent of autostereoscopic displays (i.e., glasses-free 3D) allows 3D technology to be introduced into other application domains, including but not limited to mobile devices, public displays, and automotive user interfaces, the last of which is the focus of this work. Prior research demonstrates that 3D improves the visualization of complex structures and augments virtual environments. We envision its use to enhance the in-car user interface by structuring the presented information via depth. Thus, content that requires attention can be shown close to the user, and distances, for example to other traffic participants, gain a direct mapping in 3D space.
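    The "direct mapping in 3D space" suggests converting real distances into display depth; a minimal sketch of one such mapping, assuming a simple parallel-camera stereo model (the constants and the comfort band are my assumptions, not the paper's):

        # Screen parallax for a point at virtual depth z, viewed from
        # distance VIEW_DIST with interocular distance IPD (metres).
        IPD = 0.065
        VIEW_DIST = 0.8

        def parallax(z):
            # Zero on the screen plane, positive behind it, negative in front.
            return IPD * (z - VIEW_DIST) / z

        def depth_for_distance(dist, near=0.7, far=1.2):
            # Compress the first 50 m of real distance into a comfortable
            # depth band so closer hazards render nearer to the driver.
            t = min(dist / 50.0, 1.0)
            return near + t * (far - near)

        print(parallax(depth_for_distance(10.0)))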

    Exploring 3D User Interface Technologies for Improving the Gaming Experience

    3D user interface technologies have the potential to make games more immersive and engaging and thus potentially provide a better user experience to gamers. Although 3D user interface technologies are available for games, it is still unclear how their usage affects game play and whether there are any user performance benefits. A systematic study of these technologies in game environments is required to understand how game play is affected and how their usage can be optimized to achieve a better game play experience. This dissertation seeks to improve the gaming experience by exploring several 3DUI technologies. In this work, we focused on stereoscopic 3D viewing (to improve the viewing experience) coupled with motion based control, head tracking (to make games more engaging), and faster gesture based menu selection (to reduce the cognitive burden associated with menu interaction while playing). We first studied each of these technologies in isolation to understand their benefits for games. We present the results of our experiments to evaluate the benefits of stereoscopic 3D (when coupled with motion based control) and head tracking in games. We discuss the reasons behind these findings and provide recommendations for game designers who want to make use of these technologies to enhance gaming experiences. We also present the results of our experiments with finger-based menu selection techniques, with the aim of finding the fastest technique. Based on these findings, we custom designed an air-combat game prototype which simultaneously uses stereoscopic 3D, head tracking, and finger-count shortcuts to show that these technologies can be useful for games if the game is designed with them in mind. Additionally, to enhance depth discrimination and minimize visual discomfort, the game dynamically optimizes stereoscopic 3D parameters (convergence and separation) based on the user's look direction. We conducted a within-subjects experiment in which we examined performance data and self-reported data on users' perception of the game. Our results indicate that participants performed significantly better when all the 3DUI technologies (stereoscopic 3D, head tracking and finger-count gestures) were available simultaneously, with head tracking as the dominant factor. We explore the individual contribution of each of these technologies to the overall gaming experience and discuss the reasons behind our findings. Our experiments indicate that 3D user interface technologies can make the gaming experience better if used effectively; games must be designed to make use of the available 3D user interface technologies in order to provide a better experience to the user. We explored a few technologies as part of this work and derived design guidelines for future game designers. We hope that our work will serve as a framework for future explorations of making games better using 3D user interface technologies.
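    The dynamic optimization of convergence and separation from look direction could be sketched as follows, assuming the depth under the user's gaze is already known per frame (the class name, limits, and smoothing factor are my assumptions, not the dissertation's):

        class StereoController:
            """Retarget the convergence plane to the gazed-at depth and scale
            separation down for near objects to limit visual discomfort."""

            def __init__(self, convergence=5.0, max_separation=0.03, smoothing=0.1):
                self.convergence = convergence
                self.max_separation = max_separation
                self.smoothing = smoothing

            def update(self, gaze_depth):
                # Low-pass filter so the convergence plane never jumps abruptly.
                self.convergence += self.smoothing * (gaze_depth - self.convergence)
                # Nearer convergence -> proportionally smaller eye separation.
                separation = self.max_separation * min(self.convergence / 10.0, 1.0)
                return self.convergence, separation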

    Instrumental vision

    This chapter interrogates stereo-immersive ‘virtual reality’ (VR), the technology that enables a perceiver to experience what it is like to be immersed in a simulated environment. While the simulation is powered by the “geometry engine” (Cutting, 1997: 31) associated with high-end computer imaging technology, the visual experience itself is powered by ordinary human vision: the vision system’s innate capacity to see “in 3D”. To understand and critically appraise stereo-immersive VR, we should study not its purported ‘virtuality’, but its specific visuality, because the ‘reality’ of a so-called ‘virtual environment’ is afforded by the stereoacuity of binocular vision itself. By way of such a critique of the visuality of stereo-immersive VR, this chapter suggests that we think about the ‘practice’ of vision, and consider on what basis vision can have its own ‘materiality’. Pictorial perception is proposed as an exemplary visual mode in which the possibilities of perception might emerge. Against the ‘possibilities’ of vision associated with pictures, the visuality of stereo-immersive VR emerges as a harnessing, or ‘instrumentalisation’, of vision’s innate capabilities. James J. Gibson’s ‘ecological’ approach to vision studies is referenced to show the degree to which developers of VR have sought, successfully, to mimic the ‘realness’ of ordinary perceptual reality. This raises the question of whether the success of stereo-immersive VR is simultaneously the source of its own perceptual redundancy: to bring into being the perceptual basis of ordinary ‘real’ reality is to return the perceiver to what is already familiar and known.

    Development of an augmented reality guided computer assisted orthopaedic surgery system

    Previously held under moratorium from 1st December 2016 until 1st December 2021. This body of work documents the development of a proof-of-concept augmented reality guided computer assisted orthopaedic surgery system (ARgCAOS). After initial investigation, a visible-spectrum single-camera tool-mounted tracking system based upon fiducial planar markers was implemented. The use of visible-spectrum cameras, as opposed to the infra-red cameras typically used by surgical tracking systems, allowed the captured image to be streamed to a display in an intelligible fashion. The tracking information defined the location of physical objects relative to the camera, allowing virtual models to be overlaid onto the camera image. This produced a convincing augmented experience, whereby the virtual objects appeared to be within the physical world, moving with both the camera and markers as expected of physical objects. Analysis of the first-generation system identified both accuracy and graphical inadequacies, prompting the development of a second-generation system. This too was based upon a tool-mounted fiducial marker system, and improved performance to near-millimetre probing accuracy. A resection system was incorporated, and, utilising the tracking information, controlled resection was performed, producing sub-millimetre accuracies. Several complications resulted from the tool-mounted approach, so a third-generation system was developed. This final generation deployed a stereoscopic visible-spectrum camera system affixed to a head-mounted display worn by the user. The system allowed the augmentation of the user's natural view, providing convincing and immersive three-dimensional augmented guidance, with probing and resection accuracies of 0.55±0.04 and 0.34±0.04 mm, respectively.
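    The thesis's visible-spectrum fiducial tracking could be approximated today with OpenCV's ArUco module plus a PnP solve; a minimal sketch, assuming a calibrated camera and a known marker size (the dictionary choice and marker length are my assumptions, not the thesis's):

        import cv2
        import numpy as np

        MARKER_LEN = 0.02  # marker side length in metres (assumed)
        # 3D corners of a square marker in its own frame (z = 0 plane), ordered
        # to match ArUco's top-left, top-right, bottom-right, bottom-left.
        OBJ_PTS = np.float32([[-1, 1, 0], [1, 1, 0],
                              [1, -1, 0], [-1, -1, 0]]) * MARKER_LEN / 2

        def track_markers(frame, K, dist):
            """Detect planar fiducials and return their camera-frame poses,
            so virtual models can be overlaid at those poses."""
            dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
            detector = cv2.aruco.ArucoDetector(dictionary)
            corners, ids, _ = detector.detectMarkers(frame)
            poses = {}
            if ids is not None:
                for marker_id, c in zip(ids.ravel(), corners):
                    ok, rvec, tvec = cv2.solvePnP(OBJ_PTS, c.reshape(4, 2), K, dist)
                    if ok:
                        poses[int(marker_id)] = (rvec, tvec)
                        cv2.drawFrameAxes(frame, K, dist, rvec, tvec, MARKER_LEN)
            return poses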