
    Semi-autonomous wheelchair developed using a unique camera system configuration biologically inspired by equine vision

    This paper concerns the design and development of a semi-autonomous wheelchair system using cameras in a configuration modeled on the vision system of a horse. The configuration combines stereoscopic vision, for three-dimensional (3D) depth perception and mapping ahead of the wheelchair, with a spherical camera system providing 360 degrees of monocular vision. This unique combination allows the static components of an unknown environment to be mapped and any surrounding dynamic obstacles to be detected during real-time autonomous navigation, minimizing blind spots and preventing accidental collisions with people or obstacles. This novel vision system, combined with shared control strategies, provides intelligent assistive guidance during wheelchair navigation and can accompany any hands-free wheelchair control technology. In trials leading up to experiments with patients at the Royal Rehabilitation Centre (RRC) in Ryde, results have shown that the system effectively assists the user in navigating safely within the RRC while avoiding potential collisions. © 2011 IEEE
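    The abstract's stereoscopic depth perception rests on triangulation over a rectified camera pair. As a minimal sketch (the focal length, baseline, and disparity values below are illustrative, not from the paper), depth follows from disparity as:

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Depth (metres) from stereo disparity for a rectified camera pair.

    disparity_px: horizontal pixel offset of a feature between left/right views
    focal_px:     focal length expressed in pixels
    baseline_m:   distance between the two camera centres in metres
    """
    if disparity_px <= 0:
        raise ValueError("feature must have positive disparity")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 12 cm baseline, 42 px disparity
depth = disparity_to_depth(42, 700, 0.12)  # ≈ 2.0 m ahead of the wheelchair
```

    Nearby obstacles produce large disparities and hence precise depth estimates, which is why the stereo pair covers the region ahead of the wheelchair while the spherical camera handles peripheral detection.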

    An Empirical Evaluation of Visual Cues for 3D Flow Field Perception

    Three-dimensional vector fields are common datasets throughout the sciences. They often represent physical phenomena that are largely invisible to us in the real world, like wind patterns and ocean currents. Computer-aided visualization is a powerful tool that can represent data in any way we choose through digital graphics. Visualizing 3D vector fields is inherently difficult due to issues such as visual clutter, self-occlusion, and the difficulty of providing depth cues that adequately support the perception of flow direction in 3D space. Cutting planes are often used to overcome these issues by presenting slices of data that are more cognitively manageable. The existing literature provides many techniques for visualizing the flow through these cutting planes; however, there is a lack of empirical studies focused on the underlying perceptual cues that make popular techniques successful. The most valuable depth cue for the perception of other kinds of 3D data, notably 3D networks and 3D point clouds, is structure-from-motion (also called the Kinetic Depth Effect); another powerful depth cue is stereoscopic viewing. Neither of these cues has been fully examined in the context of flow visualization. This dissertation presents a series of quantitative human factors studies that evaluate depth and direction cues in the context of cutting-plane glyph designs for exploring and analyzing 3D flow fields. The results of the studies are distilled into a set of design guidelines for improving the effectiveness of 3D flow field visualizations, and those guidelines are implemented as an immersive, interactive 3D flow visualization proof-of-concept application.
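    A cutting-plane glyph design starts by sampling the vector field on a regular grid over the plane and splitting each sample into an in-plane component (drawable as a flat arrow) and an out-of-plane component (the part that depth cues such as stereo or structure-from-motion must convey). A minimal sketch, with a toy swirl field standing in for real data:

```python
import numpy as np

def sample_cutting_plane(field, z, extent, n):
    """Sample a 3D vector field on the plane z = const for glyph placement.

    field:  callable (x, y, z) -> (vx, vy, vz)
    extent: half-width of the sampled square region
    n:      glyphs per axis
    Returns glyph positions, in-plane components, and out-of-plane components.
    """
    xs = np.linspace(-extent, extent, n)
    pos, in_plane, out_of_plane = [], [], []
    for x in xs:
        for y in xs:
            vx, vy, vz = field(x, y, z)
            pos.append((x, y))
            in_plane.append((vx, vy))   # rendered as the glyph's 2D arrow
            out_of_plane.append(vz)     # needs a depth cue to be perceived
    return np.array(pos), np.array(in_plane), np.array(out_of_plane)

# Toy field: rotation about the z-axis plus a uniform vertical component
swirl = lambda x, y, z: (-y, x, 0.5)
pos, uv, w = sample_cutting_plane(swirl, z=0.0, extent=1.0, n=5)
```

    The perceptual question the dissertation studies is precisely how to encode `w`, the component pointing out of the slice, so that viewers judge flow direction accurately.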

    Interactive natural user interfaces

    For many years, science fiction entertainment has showcased holographic technology and futuristic user interfaces that have stimulated the world's imagination. Movies such as Star Wars and Minority Report portray characters interacting with free-floating 3D displays and manipulating virtual objects as though they were tangible. While these futuristic concepts are intriguing, it is difficult to find a commercial, interactive holographic video solution in an everyday electronics store. As used in this work, the term holography refers to artificially created, free-floating objects, whereas the traditional term refers to the recording and reconstruction of 3D image data from 2D mediums. This research addresses the need for a feasible technological solution that allows users to work with projected, interactive, touch-sensitive 3D virtual environments. It aims to construct an interactive holographic user interface system by consolidating existing commodity hardware and interaction algorithms, and it studies best design practices for human-centric factors related to 3D user interfaces. The problem of 3D user interfaces has been well researched. When portrayed in science fiction, futuristic user interfaces usually consist of a holographic display, interaction controls, and feedback mechanisms; in reality, holographic displays are usually realized with volumetric or multi-parallax technology. In this work, a novel holographic display is presented that leverages a mini-projector to produce a free-floating image on a fog-like surface. The holographic user interface system consists of a display component, to project a free-floating image; a tracking component, to allow the user to interact with the 3D display via gestures; and a software component, which drives the complete hardware system. After examining this research, readers will be well informed on how to build an intuitive, eye-catching holographic user interface system for various application arenas.
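    The coupling between the tracking and display components can be sketched as a hit test: a tracked fingertip registers a "touch" when it comes close enough to the projection surface inside a UI element's bounds. This is an illustrative sketch under assumed coordinates (surface at z = 0, hypothetical button layout), not the system's actual interaction algorithm:

```python
def hit_test(finger, buttons, touch_depth=0.02):
    """Map a tracked fingertip to a UI element on the projection plane.

    finger:  (x, y, z) fingertip position, with the fog surface at z = 0
    buttons: dict name -> (xmin, ymin, xmax, ymax) in surface coordinates
    A touch is registered when the fingertip is within touch_depth of the
    surface and inside a button's bounds; otherwise None is returned.
    """
    x, y, z = finger
    if abs(z) > touch_depth:
        return None                      # hovering, not touching
    for name, (x0, y0, x1, y1) in buttons.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

# Hypothetical single-button layout on the fog screen
ui = {"play": (0.1, 0.1, 0.3, 0.2)}
hit_test((0.2, 0.15, 0.01), ui)   # → "play"
```

    Because a fog surface gives no tactile feedback, the `touch_depth` threshold effectively substitutes for the pressure sensing a physical touchscreen would provide.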

    Visualization techniques to aid in the analysis of multi-spectral astrophysical data sets

    This report describes our project activities for the period Sep. 1991 - Oct. 1992. Our activities included stabilizing the STAR software system, porting STAR to IDL/widgets (improving the user interface), targeting new visualization techniques for multi-dimensional data visualization (emphasizing 3D visualization), and exploring leading-edge 3D interface devices. During the past project year we emphasized high-end visualization techniques: exploring new tools offered by state-of-the-art visualization software (such as AVS and IDL/widgets), experimenting with tools still under research at the Department of Computer Science (e.g., the use of glyphs for multidimensional data visualization), and surveying current 3D input/output devices as they could be used to explore 3D astrophysical data. As always, all project activity is driven by the need to interpret astrophysical data more effectively.

    3D-TV Production from Conventional Cameras for Sports Broadcast

    3D-TV production of live sports events presents a challenging problem, with the conflicting requirements of maintaining broadcast stereo picture quality and the practical difficulty of developing robust systems for cost-effective deployment. In this paper we propose an alternative approach to stereo production for sports events that uses the conventional monocular broadcast cameras for 3D reconstruction of the event and subsequent stereo rendering. This approach has the potential advantage over stereo camera rigs of recovering full scene depth, allowing inter-ocular distance and convergence to be adapted to the requirements of the target display, and enabling stereo coverage from both existing and ‘virtual’ camera positions without additional cameras. A prototype system is presented with results of sports TV production trials for rendering stereo and free-viewpoint video sequences of soccer and rugby.
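    The adjustable inter-ocular distance mentioned above is straightforward once full scene depth is recovered: the stereo pair is rendered from two virtual cameras offset along the view's right vector, with the separation chosen per target display rather than fixed by a physical rig. A minimal sketch with illustrative values (the 6.5 cm separation is a common human inter-ocular figure, not a parameter from the paper):

```python
import numpy as np

def stereo_eye_positions(cam_pos, forward, up, interocular):
    """Left/right virtual camera centres for stereo rendering.

    The two centres sit half the inter-ocular distance either side of
    cam_pos, along the right vector implied by forward and up.
    """
    f = np.asarray(forward, float)
    f /= np.linalg.norm(f)
    right = np.cross(f, np.asarray(up, float))
    right /= np.linalg.norm(right)
    half = 0.5 * interocular * right
    cam = np.asarray(cam_pos, float)
    return cam - half, cam + half

# Virtual broadcast camera at eye height, looking down -z, 6.5 cm separation
left_eye, right_eye = stereo_eye_positions(
    (0.0, 1.8, 0.0), (0.0, 0.0, -1.0), (0.0, 1.0, 0.0), 0.065)
```

    Re-rendering with a different `interocular` value (or from a 'virtual' camera position) is then just a change of arguments, which is the flexibility a fixed stereo rig cannot offer.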

    A Virtual Testbed for Fish-Tank Virtual Reality: Improving Calibration with a Virtual-in-Virtual Display

    With the development of novel calibration techniques for multimedia projectors and curved projection surfaces, volumetric 3D displays are becoming easier and more affordable to build. The basic requirements include a display shape that defines the volume (e.g. a sphere, cylinder, or cuboid) and a tracking system that provides each user's location for perspective-corrected rendering. When coupled with modern graphics cards, these displays are capable of high resolution, low latency, high frame rate, and even stereoscopic rendering; however, as many previous studies have shown, every component must be precisely calibrated for a compelling 3D effect. While human perceptual requirements have been extensively studied for head-tracked displays, most studies featured seated users in front of a flat display. It remains unclear whether results from these flat-display studies apply to newer, walk-around displays with enclosed or curved shapes. To investigate these issues, we developed a virtual testbed for volumetric head-tracked displays that can measure calibration accuracy of the entire system in real time. We used this testbed to investigate visual distortions of prototype curved displays, improve existing calibration techniques, study the importance of stereo to performance and perception, and validate perceptual calibration with novice users. Our experiments show that stereo is important for task performance but requires more accurate calibration, and that novice users can make effective use of perceptual calibration tools. We also propose a novel, real-time calibration method that can be used to fine-tune an existing calibration using perceptual feedback. The findings from this work can be used to build better head-tracked volumetric displays with an unprecedented amount of 3D realism and intuitive calibration tools for novice users.
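    The perspective-corrected rendering at the heart of fish-tank VR uses an off-axis (asymmetric) view frustum computed from the tracked eye position relative to the physical display. As a sketch for a single flat panel (curved displays generalize this per projector), assuming a coordinate frame with the display plane at z = 0 and the eye at z > 0:

```python
def off_axis_frustum(eye, screen_lo, screen_hi, near):
    """Asymmetric view-frustum bounds for head-tracked rendering.

    eye:        tracked eye position (x, y, z), display plane at z = 0
    screen_lo:  (left, bottom) corner of the physical display
    screen_hi:  (right, top) corner of the physical display
    Returns (l, r, b, t) at the near plane, in the form consumed by a
    glFrustum-style projection.
    """
    ex, ey, ez = eye
    scale = near / ez                      # similar triangles: screen -> near plane
    l = (screen_lo[0] - ex) * scale
    r = (screen_hi[0] - ex) * scale
    b = (screen_lo[1] - ey) * scale
    t = (screen_hi[1] - ey) * scale
    return l, r, b, t

# Eye centred 60 cm in front of a 40 x 30 cm display (illustrative numbers)
bounds = off_axis_frustum((0.0, 0.0, 0.6), (-0.2, -0.15), (0.2, 0.15), 0.1)
```

    Because the frustum must be recomputed from the tracked head pose every frame, any tracking or display-geometry miscalibration distorts the frustum directly, which is why the testbed measures calibration accuracy of the whole pipeline rather than of each component in isolation.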

    Interaction in an immersive virtual Beijing courtyard house

    Courtyard housing has been a standard dwelling type in China for more than 3000 years, integrating tightly with local customs, aesthetics, philosophy, and natural conditions. As the representative form of Chinese courtyard housing, Beijing's style has unique features of structure, plan layout, and urban form. Presenting these features effectively is of great importance to understanding Beijing courtyard housing. The current major visualization methods in architecture (physical models, digital imaging, and hand drawing) share two limitations: small dimensions and a lack of interaction. As an alternative, VR offers two advantages: immersion and interactivity. In a fully immersive VR environment, such as the C6, users can examine virtual buildings at full scale and operate models interactively in real time. This project therefore attempts to implement an interactive simulation of a Beijing courtyard house in the C6 and to find out whether architectural knowledge can be presented through this environment. The methodological steps include VR modeling, interaction planning, and C6 implementation. A four-yard house in Beijing was used as the prototype for VR modeling. By generating the model in six versions with different node counts and textures, it was found that the fewer nodes a model has, the faster it runs in the C6. The main interaction mechanism demonstrates the main hall's structure interactively through menu selection. The sequence in which the structure is shown follows its construction process: each menu item is named after a structural component, and clicking a menu item shows the corresponding construction step in the C6. Five viewers were invited to see the simulation and comment on the functionality of full immersion and interactivity in this product. Overall, the results are positive: a fully immersive and interactive VR environment is potentially effective for presenting architectural knowledge. 
A major suggestion from the viewers was that more detail could be added to the simulation, such as characters and furniture. Through the completion of this project, a method to implement architectural simulations efficiently in the C6 could be found. In the future, this study could involve more complex interactions, such as virtual inhabitants, as a means to show Chinese culture vividly.