
    Donald P. Brutzman: a biography

    Design and implement large-scale networked underwater virtual worlds using Web-accessible 3D graphics and network streams. Integrate sensors, models and datasets for real-time interactive use by scientists, underwater robots, ships and students of all ages.

    Computational haptics: the Sandpaper system for synthesizing texture for a force-feedback display

    Thesis (Ph.D.), Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1995. Includes bibliographical references (p. 155-180). By Margaret Diane Rezvan Minsky.

    A new taxonomy for locomotion in virtual environments

    The concept of virtual reality, although evolving with technological advances, has always been fundamentally defined as a revolutionary way for humans to interact with computers. The revolution comes from the concept of immersion, which is the essence of virtual reality. Users are no longer passive observers of information but active participants who have leaped through the computer screen and are now part of the information. This has tremendous implications for how users interact with computer information in the virtual world.

    Perhaps the most common form of interaction in a virtual environment is locomotion, a term used here to denote a user's control of movement through the virtual environment. There are many ways for a user to change his or her viewpoint in the virtual world. Because virtual reality is a relatively young field, no standard interfaces exist for interaction, and particularly for locomotion, in a virtual world. There have been few attempts to formally classify the ways in which virtual locomotion can occur, and the existing classification schemes take into account neither the various interaction devices, such as joysticks and vehicle mock-ups, that are used to perform the locomotion, nor the differences in display devices, such as head-mounted displays, monitors, or projected walls.

    This work creates a new classification system for virtual locomotion methods. The classification gives designers of new VR applications guidelines on which types of locomotion are best suited to their applications' requirements. Unlike previous taxonomies, this work incorporates display devices, interaction devices, and travel tasks, along with identifying two major components of travel, translation and rotation, and the important sub-components of each.

    In addition, we have experimentally validated the importance of the display device and the rotation method in this new classification system through a large-scale user experiment in which users performed an architectural walkthrough of a virtual building. Both objective and subjective measures indicate that the choice of display device is extremely important to the task of locomotion, and that for each display device the choice of rotation method is also important.
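    To make the shape of such a classification concrete, the sketch below models its major axes as a small Python data structure. The specific category names are illustrative assumptions, not the labels defined in this work.

```python
from dataclasses import dataclass
from enum import Enum, auto

class DisplayDevice(Enum):
    HEAD_MOUNTED_DISPLAY = auto()
    MONITOR = auto()
    PROJECTED_WALL = auto()

class TranslationMethod(Enum):
    PHYSICAL_WALKING = auto()
    STEERING = auto()            # continuous control, e.g. joystick-driven flying
    TARGET_SELECTION = auto()    # jump or glide to a chosen destination

class RotationMethod(Enum):
    PHYSICAL_TURNING = auto()    # tracked head/body rotation
    VIRTUAL_TURNING = auto()     # device-driven view rotation

@dataclass(frozen=True)
class LocomotionTechnique:
    """One point in the classification space: display and interaction
    devices combined with the two major travel components."""
    display: DisplayDevice
    interaction_device: str      # e.g. "joystick", "vehicle mock-up"
    translation: TranslationMethod
    rotation: RotationMethod

# A desktop architectural walkthrough might classify roughly as:
walkthrough = LocomotionTechnique(
    display=DisplayDevice.MONITOR,
    interaction_device="joystick",
    translation=TranslationMethod.STEERING,
    rotation=RotationMethod.VIRTUAL_TURNING,
)
```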

    ISMCR 1994: Topical Workshop on Virtual Reality. Proceedings of the Fourth International Symposium on Measurement and Control in Robotics

    This symposium on measurement and control in robotics included sessions on: (1) rendering, including tactile perception and applied virtual reality; (2) applications in simulated medical procedures and telerobotics; (3) tracking sensors in a virtual environment; (4) displays for virtual reality applications; (5) sensory feedback, including a virtual environment application with partial gravity simulation; and (6) applications in education, entertainment, technical writing, and animation.

    Scene understanding through semantic image segmentation in augmented reality

    Semantic image segmentation, the task of assigning a label to each pixel in an image, is a major challenge in the field of computer vision. Semantic image segmentation using fully convolutional neural networks (FCNNs) offers an online solution to scene understanding while having a simple training procedure and fast inference speed if designed efficiently. The semantic information provided by semantic segmentation is a detailed understanding of the current context, and this scene understanding is vital for scene modification in augmented reality (AR), especially if one aims to perform destructive scene augmentation. Augmented reality systems, by nature, aim at real-time modification of the context through head-mounted see-through or video see-through displays, and thus require efficiency in each step. Although the literature offers many solutions to semantic image segmentation, such as DeeplabV3+ and Deeplab DPC, they fail to offer low-latency inference because their complex architectures are designed to achieve the best possible accuracy. As part of this thesis work, we provide an efficient architecture for semantic image segmentation using an FCNN model and achieve real-time performance on smartphones at 19.65 frames per second (fps) while maintaining a high mean intersection over union (mIOU) of 67.7% on the Cityscapes validation set with our "Basic" variant, and 15.41 fps and 70.3% mIOU on the Cityscapes test set with our "DPC" variant. The implementation is open-sourced and compatible with Tensorflow Lite, and is thus able to run on embedded and mobile devices.

    Furthermore, the thesis work demonstrates an augmented reality implementation where semantic segmentation masks are tracked online in a 3D environment using Google ARCore. We show that frequent recalculation of the semantic information is not necessary: by tracking the calculated semantic information in 3D space using the inertial-visual odometry provided by the ARCore framework, we can achieve savings in battery and CPU usage while maintaining a high mIOU. We further demonstrate a possible use case of the system by inpainting the objects in 3D space that are found by the semantic image segmentation network. The implemented Android application performs real-time augmented reality at 30 fps while running, in parallel, the computationally efficient network proposed as part of this thesis work.
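    Since the implementation is stated to be compatible with Tensorflow Lite, inference could be driven by a few lines of Python such as the following sketch. The model filename, input preprocessing and output layout are assumptions for illustration, not details of the released code.

```python
import numpy as np
import tensorflow as tf

# The model path is a placeholder; any segmentation model converted
# to the TFLite format with a float32 input would fit this pattern.
interpreter = tf.lite.Interpreter(model_path="segmentation_basic.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def segment(frame: np.ndarray) -> np.ndarray:
    """Return a per-pixel class-label map for one RGB frame."""
    h, w = int(inp["shape"][1]), int(inp["shape"][2])
    x = tf.image.resize(frame[np.newaxis].astype(np.float32), (h, w)).numpy()
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    logits = interpreter.get_tensor(out["index"])  # (1, h, w, n_classes) assumed
    return np.argmax(logits[0], axis=-1).astype(np.uint8)
```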

    Mixed reality interaction applied to underwater archaeology

    Archaeology's interest in virtual reality is growing. Virtual reality has become a necessary tool for the exploration and study of archaeological sites, and more particularly of underwater archaeological sites, which can prove difficult to access. Current studies propose virtual reality or augmented reality solutions in the form of virtual environments with virtual and/or augmented interaction, but no study has really tried to compare these two aspects of interaction. In this thesis we present three virtual reality environments and one augmented reality environment in which we propose new interaction methods. We evaluate their functionality from an archaeological point of view, study the influence of the level of immersion on interaction performance, and compare interaction in virtual reality with interaction in augmented reality.

    DIVE on the internet

    This dissertation reports research and development of a platform for Collaborative Virtual Environments (CVEs). It focuses in particular on two major challenges: supporting the rapid development of scalable applications and easing their deployment on the Internet. This work employs a research method based on prototyping and refinement and promotes the use of this method for application development. A number of the solutions herein are in line with other CVE systems. One of the strengths of this work is its global approach to the issues raised by CVEs and its recognition that such complex problems are best tackled using a multi-disciplinary approach that understands both user and system requirements.

    CVE application deployment is aided by an overlay network that is able to complement any IP multicast infrastructure in place. Apart from complementing a weakly deployed worldwide multicast, this infrastructure provides a certain degree of introspection, remote control and visualisation. As such, it forms an important aid in assessing the scalability of running applications. This scalability is further facilitated by specialised object distribution algorithms and an open framework for the implementation of novel partitioning techniques.

    CVE application development is eased by a scripting language that enables rapid development and favours experimentation. This scripting language interfaces with many aspects of the system and enables the prototyping of distribution-related components as well as user interfaces. It is the key construct of a distributed environment to which components written in different languages connect, and on which they operate in a network-abstracted manner.

    The solutions proposed are exemplified and strengthened by three collaborative applications. The Dive room system is a virtual environment modelled after the room metaphor and supporting asynchronous and synchronous cooperative work. WebPath is a companion application to a Web browser that seeks to make the current history of page visits more visible and usable. Finally, the London travel demonstrator supports travellers by providing an environment where they can explore the city, use group collaboration facilities, rehearse particular journeys and access tourist information data.
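    As a rough illustration of the partitioning idea, and not of DIVE's actual distribution algorithms, a grid-based scheme that maps entities to per-cell distribution channels could look like this Python sketch:

```python
import math
from collections import defaultdict

class GridPartitioner:
    """Assign entities to square world cells; each cell's updates can then
    be sent on its own multicast group or overlay channel, so a client only
    subscribes to the cells near its avatar."""

    def __init__(self, cell_size: float):
        self.cell_size = cell_size
        self.members = defaultdict(set)   # (i, j) -> set of entity ids
        self.location = {}                # entity id -> (i, j)

    def cell_of(self, x: float, z: float):
        return (math.floor(x / self.cell_size), math.floor(z / self.cell_size))

    def update(self, entity_id: str, x: float, z: float):
        """Move an entity; return (old_cell, new_cell) when it changes cells,
        so the caller can re-subscribe its channels, else return None."""
        new = self.cell_of(x, z)
        old = self.location.get(entity_id)
        if old == new:
            return None
        if old is not None:
            self.members[old].discard(entity_id)
        self.members[new].add(entity_id)
        self.location[entity_id] = new
        return (old, new)
```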

    Remote AFIS: Development and Validation of low-cost Remote Tower Concepts for uncontrolled Aerodromes

    Remote tower systems are widely established as a means to provide efficient air traffic control (ATC) from a remote location. However, even these cost-effective systems cause procurement, implementation, and maintenance costs, which make them unaffordable for non-ATC aerodromes with low revenues, often only offering an aerodrome flight information service (AFIS) or UNICOM information service. In this paper, two more inexpensive concepts enabling remote control at these aerodromes are presented. They are based on a simplified camera set-up comprising a pan-tilt-zoom-camera (PTZ-camera) and a simple panoramic camera. A virtual reality-headset (VR-headset) is used to display the video streams and to control the PTZ-camera. The results of a validation study with nine ATC and AFIS officers using live traffic at the Braunschweig-Wolfsburg aerodrome (BWE) are presented. They are discussed with respect to perceived usability, virtual reality induced cybersickness, and operator acceptance. The system's cost is compared to that of a conventional remote tower set-up. Furthermore, measured objective data in the form of angular head rotation velocities are presented and requirements for the camera set-up are deduced. In conclusion, the developed concept utilizing both the panorama camera and PTZ-camera was found to be sufficiently usable and accepted by the validation participants. In contrast, the concept based only on the PTZ-camera suffered from jerky and delayed camera movements leading to considerable cybersickness and making it barely usable.
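    The angular head-rotation velocities mentioned above can be derived from successive head-pose samples. Below is a minimal Python sketch assuming quaternion orientation samples at a fixed rate; it is not the study's actual measurement pipeline.

```python
import numpy as np

def angular_speed_deg_s(q0: np.ndarray, q1: np.ndarray, dt: float) -> float:
    """Mean angular speed between two unit orientation quaternions sampled
    dt seconds apart: the rotation angle between them is 2*arccos(|<q0,q1>|)."""
    dot = min(abs(float(np.dot(q0, q1))), 1.0)   # clamp rounding error
    return np.degrees(2.0 * np.arccos(dot)) / dt

# Example: two (hypothetical) headset samples 20 ms apart, the second
# rotated 0.1 rad (~5.7 degrees) about the vertical axis.
q_a = np.array([1.0, 0.0, 0.0, 0.0])                    # identity orientation
q_b = np.array([np.cos(0.05), 0.0, np.sin(0.05), 0.0])  # half-angle 0.05 rad
print(angular_speed_deg_s(q_a, q_b, 0.02))              # ~286 deg/s
```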

    Mobile three-dimensional city maps

    Maps are visual representations of environments and the objects within them, depicting their spatial relations. They are mainly used in navigation, where they act as external information sources supporting observation and decision-making processes. Map design, or the art-science of cartography, has led to simplification of the environment: the naturally three-dimensional environment has been abstracted into a two-dimensional representation populated with simple geometrical shapes and symbols. Such an abstract representation, however, requires map-reading ability.

    Modern technology has reached the level where maps can be expressed in digital form, with selectable, scalable, browsable and updatable content. Maps may no longer even be limited to two dimensions, nor to an abstract form. When a virtual environment based on the real world is created, a 3D map is born. Given a realistic representation, would the user no longer need to interpret the map, and be able to navigate in an inherently intuitive manner? To answer this question, one needs a mobile test platform. But can a 3D map, a resource-hungry real virtual environment, exist on such resource-limited devices?

    This dissertation approaches the technical challenges posed by mobile 3D maps in a constructive manner, identifying the problems, developing solutions and providing answers by creating a functional system. The case focuses on urban environments. First, optimization methods for rendering large, static 3D city models are researched, and a solution suited to mobile 3D maps is provided by combining visibility culling, level-of-detail management and out-of-core rendering. Then the potential of mobile networking is addressed by developing efficient and scalable methods for progressive content downloading and dynamic entity management. Finally, a 3D navigation interface is developed for mobile devices, and the research is validated with measurements and field experiments.

    It is found that near-realistic mobile 3D city maps can exist on current mobile phones, and that rendering rates are excellent on devices with 3D hardware. Such 3D maps can also be transferred and rendered on the fly over cellular networks, sufficiently fast for navigation use. Real-world entities such as pedestrians or public transportation can be tracked and presented in a scalable manner. Mobile 3D maps are useful for navigation, but their usability depends heavily on the interaction methods: the potentially intuitive representation does not imply, for example, faster navigation than with a professional 2D street map. In addition, the physical interface limits the usability.
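    To illustrate how the level-of-detail management described above combines with visibility culling, the following Python sketch selects a detail level per city block after a visibility test. The thresholds and data layout are invented for illustration and are not taken from the dissertation.

```python
import math
from dataclasses import dataclass

@dataclass
class Block:
    center: tuple   # (x, z) position of the city block, in metres
    meshes: list    # meshes[0] is the most detailed representation

def select_lod(distance: float, thresholds: list) -> int:
    """Pick a mesh index: 0 while the viewer is close, coarser levels
    as each switch-out distance in the ascending `thresholds` is passed."""
    for level, limit in enumerate(thresholds):
        if distance < limit:
            return level
    return len(thresholds)          # coarsest impostor beyond the last limit

def blocks_to_draw(blocks, cam, in_view, thresholds):
    """Visibility culling first, then per-block LOD selection."""
    for b in blocks:
        if not in_view(b.center):   # frustum/occlusion test supplied by caller
            continue
        d = math.hypot(b.center[0] - cam[0], b.center[1] - cam[1])
        yield b, select_lod(d, thresholds)

# Example: three detail levels switching out at 50 m, 200 m and 800 m.
blocks = [Block(center=(120.0, 40.0), meshes=["hi", "mid", "lo", "impostor"])]
for b, lod in blocks_to_draw(blocks, cam=(0.0, 0.0),
                             in_view=lambda c: True,
                             thresholds=[50.0, 200.0, 800.0]):
    print(b.center, "-> level", lod)   # distance ~126 m -> level 1
```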

    Proceedings of the Second PHANToM Users Group Workshop: October 19-22, 1997: Endicott House, Dedham, MA, Massachusetts Institute of Technology, Cambridge, MA

    "December, 1997." Cover title.Includes bibliographical references.Sponsored by SensAble Technologies, Inc., Cambridge, MA."[edited by J. Kennedy Salisbury and Mandayam A. Srinivasan]