76 research outputs found

    Multimodal metaphors for generic interaction tasks in virtual environments

    Virtual Reality (VR) systems provide additional input and output channels for human-computer interaction in virtual environments. Such VR technologies give users better insight into highly complex data sets, but they also place high demands on the user's ability to interact with virtual objects. This thesis presents and discusses the development and evaluation of new multimodal interaction metaphors for generic interaction tasks in virtual environments. Using a VR system, the application of these concepts is demonstrated in two case studies from the domains of 3D city visualization and seismic volume rendering.

    Novel haptic interface For viewing 3D images

    In recent years there has been an explosion of devices and systems capable of displaying stereoscopic 3D images. While these systems provide an improved experience over traditional two-dimensional displays, they often fall short on user immersion, usually improving only depth perception by relying on the stereopsis phenomenon. We propose a system that improves user experience and immersion through position-dependent rendering of the scene and the ability to touch the scene. The system uses depth maps to represent the geometry of the scene. Depth maps can be obtained easily during the rendering process or derived from binocular stereo images by computing their horizontal disparity. This geometry is then used as input to render the scene on a 3D display, perform the haptic rendering calculations, and produce a position-dependent rendering of the scene. The author presents two main contributions. First, since haptic devices have a finite workspace and limited resolution, we use what we call detail mapping algorithms. These algorithms compress the geometry information contained in a depth map, by reducing the contrast among pixels, so that it can be rendered on a limited-resolution display medium without losing detail. Second, the unique combination of a depth camera as a motion-capture system, a 3D display, and a haptic device to enhance user experience. While developing this system we paid special attention to the cost and availability of the hardware, deciding to use only off-the-shelf, mass-consumer hardware so that our experiments can be easily implemented and replicated. As an additional benefit, the total hardware cost did not exceed one thousand dollars, making the system affordable for many individuals and institutions.
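    The depth-from-disparity step and the contrast-reducing detail mapping described above can be sketched as follows. This is a generic illustration of the standard pinhole stereo relation and a toy range compression, not the thesis's actual algorithms; the focal length and baseline values are assumptions.

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m, eps=1e-6):
    """Convert a horizontal-disparity map (pixels) to metric depth
    using the standard pinhole stereo relation: depth = f * B / d."""
    return focal_px * baseline_m / np.maximum(disparity, eps)

def compress_detail(depth, out_levels=256):
    """Toy 'detail mapping': reduce contrast among pixels so the
    depth range fits a limited-resolution medium (e.g. 8 bits)."""
    d_min, d_max = depth.min(), depth.max()
    normalized = (depth - d_min) / max(d_max - d_min, 1e-6)
    return np.round(normalized * (out_levels - 1)).astype(np.uint8)
```

    For example, with an assumed 700 px focal length and 0.1 m baseline, a 10 px disparity maps to a depth of 7 m, and the full depth range is then remapped onto the 0-255 levels of an 8-bit medium.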

    Towards System Agnostic Calibration of Optical See-Through Head-Mounted Displays for Augmented Reality

    This dissertation examines the development and progress of spatial calibration procedures for Optical See-Through (OST) Head-Mounted Display (HMD) devices for visual Augmented Reality (AR) applications. Rapid developments in commercial AR systems have created an explosion of OST device options, not only for research and industrial purposes but also for the consumer market. This expansion in hardware availability is matched by a need for intuitive, standardized calibration procedures that are easily completed by novice users and readily applicable across the largest possible range of hardware. This demand for robust, uniform calibration schemes is the driving motive behind the original contributions of this work. A review of prior surveys and a canonical description of AR and OST display developments is provided before narrowing the scope to the research questions evolving within the calibration domain. Both established and state-of-the-art calibration techniques and their general implementations are explored, along with prior user study assessments and the prevailing evaluation metrics and practices employed within them. The original contributions begin with a user study comparing and contrasting the accuracy and precision of an established manual calibration method against a state-of-the-art semi-automatic technique. This is the first formal evaluation of any non-manual approach and provides insight into the current usability limitations of present techniques and the complexities of next-generation methods yet to be solved. The second study investigates the viability of a user-centric approach to OST HMD calibration through a novel adaptation of manual calibration to consumer-level hardware.
    Additional contributions describe the development of a complete demonstration application incorporating user-centric methods, a novel strategy for visualizing both calibration results and registration error from the user's perspective, and a robust, intuitive presentation style for binocular manual calibration. The final study further investigates the accuracy differences observed between user-centric and environment-centric methodologies. The dissertation concludes with a summary of the contributions and their impact on existing AR systems and research endeavors, as well as a short look ahead at future extensions and paths that continued calibration research should explore.
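    Manual OST calibration of the kind evaluated here is commonly formulated as in the Single Point Active Alignment Method (SPAAM): the user repeatedly aligns an on-screen reticle with a tracked 3D point, and the 2D-3D correspondences are solved for a 3x4 projection matrix via a direct linear transform (DLT). The following sketch shows that generic least-squares solve; it is an illustration of the standard technique, not the dissertation's implementation.

```python
import numpy as np

def estimate_projection(points_3d, points_2d):
    """Solve for a 3x4 projection matrix from 2D-3D alignment
    correspondences (DLT). Needs at least 6 non-degenerate points."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        # Each correspondence contributes two linear constraints on
        # the 12 entries of the projection matrix.
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # The right singular vector for the smallest singular value
    # minimizes |A p| subject to |p| = 1 (solution up to scale).
    _, _, vh = np.linalg.svd(np.asarray(A, dtype=float))
    return vh[-1].reshape(3, 4)
```

    The recovered matrix is defined only up to scale, which cancels when projecting (dividing by the homogeneous coordinate), so reprojection error is the natural quality measure.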

    Proceedings of the Second PHANToM Users Group Workshop : October 19-22, 1997 : Endicott House, Dedham, MA, Massachusetts Institute of Technology, Cambridge, MA

    "December, 1997." Cover title. Includes bibliographical references. Sponsored by SensAble Technologies, Inc., Cambridge, MA. Edited by J. Kenneth Salisbury and Mandayam A. Srinivasan.

    ISMCR 1994: Topical Workshop on Virtual Reality. Proceedings of the Fourth International Symposium on Measurement and Control in Robotics

    This symposium on measurement and control in robotics included sessions on: (1) rendering, including tactile perception and applied virtual reality; (2) applications in simulated medical procedures and telerobotics; (3) tracking sensors in a virtual environment; (4) displays for virtual reality applications; (5) sensory feedback, including a virtual environment application with partial gravity simulation; and (6) applications in education, entertainment, technical writing, and animation.

    Designing multi-sensory displays for abstract data

    The rapid increase in available information has led to many attempts to automatically locate patterns in large, abstract, multi-attributed information spaces. These techniques, often called data mining, have met with varying degrees of success. An alternative to automatic pattern detection is to keep the user in the exploration loop by developing displays for perceptual data mining. This approach allows a domain expert to search the data for useful relationships and can be effective when automated rules are hard to define. However, designing models of the abstract data and defining appropriate displays are critical tasks in building a useful system. Designing displays of abstract data is especially difficult when multi-sensory interaction is considered. New technology, such as Virtual Environments, enables such multi-sensory interaction. For example, interfaces can be designed that immerse the user in a 3D space and provide visual, auditory and haptic (tactile) feedback. It has been a goal of Virtual Environments to use multi-sensory interaction in an attempt to increase the human-to-computer bandwidth. This approach may assist the user in understanding large information spaces and finding patterns in them. However, while the motivation is simple enough, actually designing appropriate mappings between the abstract information and the human sensory channels is quite difficult. Designing intuitive multi-sensory displays of abstract data is complex and must carefully consider human perceptual capabilities; yet we interact with the real world every day in a multi-sensory way. Metaphors can describe mappings between the natural world and an abstract information space. This thesis develops a division of the multi-sensory design space called the MS-Taxonomy. The MS-Taxonomy provides a concept map of the design space based on temporal, spatial and direct metaphors. The detailed concepts within the taxonomy allow for discussion of low-level design issues.
    Furthermore, the concepts abstract to higher levels, allowing general design issues to be compared and discussed across the different senses. The MS-Taxonomy provides a categorisation of multi-sensory design options. However, designing effective multi-sensory displays requires more than a thorough understanding of design options. It is also useful to have guidelines to follow, and a process to describe the design steps. This thesis uses the structure of the MS-Taxonomy to develop the MS-Guidelines and the MS-Process. The MS-Guidelines capture design recommendations and the problems associated with different design choices. The MS-Process integrates the MS-Guidelines into a methodology for developing and evaluating multi-sensory displays. A detailed case study is used to validate the MS-Taxonomy, the MS-Guidelines and the MS-Process. The case study explores the design of multi-sensory displays within a domain where users wish to explore abstract data for patterns. This area, called Technical Analysis, involves the interpretation of patterns in stock market data. Following the MS-Process and using the MS-Guidelines, some new multi-sensory displays are designed for pattern detection in stock market data. The outcomes from the case study include some novel haptic-visual and auditory-visual designs that are prototyped and evaluated.
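    A direct metaphor that maps a data attribute onto an auditory channel, of the kind the taxonomy categorises, can be illustrated with a minimal sketch. The linear value-to-pitch mapping and the frequency range are illustrative assumptions, not the thesis's actual display designs.

```python
def map_to_pitch(values, f_min=220.0, f_max=880.0):
    """Map an abstract data series onto pitch (Hz): a simple direct
    auditory metaphor where a higher value sounds a higher tone."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for flat data
    return [f_min + (v - lo) / span * (f_max - f_min) for v in values]
```

    A closing stock price series, for instance, could be played back as a pitch contour, letting the listener scan for trend reversals while the eyes attend to a visual chart; choosing a perceptually appropriate mapping is exactly the kind of decision the MS-Guidelines address.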

    Sensory Communication

    Contains table of contents for Section 2, an introduction, and reports on fourteen research projects. Funding: National Institutes of Health Grant RO1 DC00117; National Institutes of Health Grant RO1 DC02032; National Institutes of Health/National Institute on Deafness and Other Communication Disorders Grant R01 DC00126; National Institutes of Health Grant R01 DC00270; National Institutes of Health Contract N01 DC52107; U.S. Navy - Office of Naval Research/Naval Air Warfare Center Contract N61339-95-K-0014; U.S. Navy - Office of Naval Research/Naval Air Warfare Center Contract N61339-96-K-0003; U.S. Navy - Office of Naval Research Grant N00014-96-1-0379; U.S. Air Force - Office of Scientific Research Grant F49620-95-1-0176; U.S. Air Force - Office of Scientific Research Grant F49620-96-1-0202; U.S. Navy - Office of Naval Research Subcontract 40167; U.S. Navy - Office of Naval Research/Naval Air Warfare Center Contract N61339-96-K-0002; National Institutes of Health Grant R01-NS33778; U.S. Navy - Office of Naval Research Grant N00014-92-J-184.

    Spatial cognition in virtual environments

    Since the last decades of the twentieth century, Virtual Reality (VR) has developed not only into a set of helpful applications in the medical field (training for surgeons, but also rehabilitation tools) but also into a research methodology. There is still no scientific agreement on whether the use of this technology in research on cognitive processes allows results found in a Virtual Environment (VE) to be generalized to human behavior or cognition in the real world. Differences found in basic perceptual processes (for example, depth perception) suggest a large gap between the visual environmental representations afforded by virtual scenarios and those of the real world. On the other hand, the literature contains many studies demonstrating the reliability of VEs in more than one field (training and rehabilitation, but also some research paradigms). The main aim of this thesis is to investigate whether, and in which cases, these two views can be integrated, shedding new light on the use of VR in research. Through experiments conducted in the "Virtual Development and Training Center" of the Fraunhofer Institute in Magdeburg, we addressed both low-level spatial processes (within a distance-evaluation paradigm) and high-level spatial cognition (using a navigation and visuospatial planning task called "3D Maps"), while also addressing practical problems such as the use of stereoscopy in VEs and "simulator sickness" during navigation in immersive VEs. The results fill some gaps in the literature on spatial cognition in VR and suggest that the use of VEs in research is quite reliable, mainly when the investigated processes are of a higher level of complexity.
    In such cases the human brain adapts rather well even to a "new" reality like the one offered by VR, provided there is a familiarization period and the possibility to interact with the environment; behavior is then "as if" the environment were real. What is strongly lacking at the moment is a fully multisensory experience, an important requirement for getting the best from this kind of "visualization" of an artificial world. From a low-level point of view, we confirm earlier findings that there are basic differences in how our visual system perceives important spatial cues such as depth and the relationships between objects; VR and reality therefore cannot be treated as "similar environments". The idea of VR as a "different" reality, offering potentially unlimited possibilities of use, even overcoming some physical limits of the real world, and one that our cognitive system can acquire simply by interacting with it, is discussed in the conclusions of this work.
