
    Viewing the Future? Virtual Reality In Journalism

    Journalism underwent a flurry of virtual reality content creation, production and distribution starting in the final months of 2015. The New York Times distributed more than 1 million cardboard virtual reality viewers and released an app showing a spherical video short about displaced refugees. The Los Angeles Times landed people next to a crater on Mars. USA TODAY took visitors on a ride-along in the "Back to the Future" car on the Universal Studios lot and on a spin through Old Havana in a bright pink '57 Ford. ABC News went to North Korea for a spherical view of a military parade and to Syria to see artifacts threatened by war. The Emblematic Group, a company that creates virtual reality content, followed a woman navigating a gauntlet of anti-abortion demonstrators at a family planning clinic and allowed people to witness a murder-suicide stemming from domestic violence. In short, the period from October 2015 through February 2016 was one of significant experimentation with virtual reality (VR) storytelling. These efforts are part of an initial foray into determining whether VR is a feasible way to present news. The year 2016 is shaping up as a period of further testing and careful monitoring of potential growth in the use of virtual reality among consumers.

    Enabling Self-aware Smart Buildings by Augmented Reality

    Conventional HVAC control systems are usually incognizant of the physical structures and materials of buildings. These systems merely follow pre-set HVAC control logic based on abstract building thermal response models, which are rough approximations of true physical models and ignore dynamic spatial variations in built environments. To enable more accurate and responsive HVAC control, this paper introduces the notion of "self-aware" smart buildings, i.e., buildings that are able to explicitly construct physical models of themselves (e.g., incorporating building structures, materials, and thermal flow dynamics). The question is how to enable self-aware buildings that automatically acquire dynamic knowledge of themselves. This paper presents a novel approach using "augmented reality". The extensive user-environment interactions in augmented reality not only provide intuitive user interfaces for building systems but also capture the physical structures, and possibly the materials, of buildings accurately, enabling real-time building simulation and control. This paper presents a building system prototype incorporating augmented reality and discusses its applications. Comment: This paper appears in ACM International Conference on Future Energy Systems (e-Energy), 201
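
    As a rough illustration of how AR-captured geometry and materials could feed such a self-aware model, the sketch below (Python, with made-up class names and material values, not the paper's prototype) plugs per-surface areas and U-values from a hypothetical room scan into a first-order lumped thermal model of the kind an HVAC controller might simulate against.

        # Hypothetical sketch: AR-captured room geometry/materials feeding a lumped
        # thermal (RC) model for HVAC control. All names and values are illustrative,
        # not taken from the paper's prototype.
        from dataclasses import dataclass

        @dataclass
        class Surface:
            area_m2: float          # captured from the AR scan
            u_value: float          # W/(m^2*K), looked up for the recognised material

        @dataclass
        class Room:
            volume_m3: float
            surfaces: list

        AIR_HEAT_CAPACITY = 1006.0  # J/(kg*K)
        AIR_DENSITY = 1.2           # kg/m^3

        def step_temperature(room: Room, t_in: float, t_out: float,
                             hvac_power_w: float, dt_s: float) -> float:
            """Advance indoor temperature one step with a first-order RC model."""
            # Conductive loss through every AR-captured surface.
            loss_w = sum(s.area_m2 * s.u_value for s in room.surfaces) * (t_in - t_out)
            thermal_mass = room.volume_m3 * AIR_DENSITY * AIR_HEAT_CAPACITY  # J/K
            return t_in + (hvac_power_w - loss_w) * dt_s / thermal_mass

        # Example: a scanned room with one exterior wall and a window.
        room = Room(volume_m3=60.0,
                    surfaces=[Surface(area_m2=12.0, u_value=0.35),   # insulated wall
                              Surface(area_m2=2.0, u_value=2.8)])    # double glazing
        t = 21.0
        for _ in range(60):  # simulate one hour at 1-minute steps
            t = step_temperature(room, t, t_out=5.0, hvac_power_w=800.0, dt_s=60.0)
        print(f"Indoor temperature after 1 h: {t:.2f} degC")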

    Co-Fusion: Real-time Segmentation, Tracking and Fusion of Multiple Objects

    In this paper we introduce Co-Fusion, a dense SLAM system that takes a live stream of RGB-D images as input and segments the scene into different objects (using either motion or semantic cues) while simultaneously tracking and reconstructing their 3D shape in real time. We use a multiple model fitting approach where each object can move independently of the background and still be effectively tracked, its shape fused over time using only the information from pixels associated with that object label. Previous attempts to deal with dynamic scenes have typically treated moving regions as outliers and consequently do not model their shape or track their motion over time. In contrast, we enable the robot to maintain a 3D model for each of the segmented objects and to improve it over time through fusion. As a result, our system allows a robot to maintain a scene description at the object level, which has the potential to enable interactions with its working environment, even in the case of dynamic scenes. Comment: International Conference on Robotics and Automation (ICRA) 2017, http://visual.cs.ucl.ac.uk/pubs/cofusion, https://github.com/martinruenz/co-fusio
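
    A minimal, runnable sketch of the per-object fusion idea is shown below (NumPy, with a running-average depth map standing in for Co-Fusion's surfel models and with pose tracking omitted): each label owns its own model, and only pixels carrying that label are fused into it, instead of being discarded as outliers.

        # Simplified stand-in for per-object fusion: one model per label, fused
        # only from that label's pixels. Real Co-Fusion tracks 6-DoF poses and
        # fuses surfels; this sketch only illustrates the label-wise bookkeeping.
        import numpy as np

        class ObjectModel:
            def __init__(self, shape):
                self.depth_sum = np.zeros(shape)   # accumulated depth
                self.weight = np.zeros(shape)      # per-pixel fusion weight

            def fuse(self, depth, mask):
                """Fuse only the pixels belonging to this object."""
                self.depth_sum[mask] += depth[mask]
                self.weight[mask] += 1.0

            def fused_depth(self):
                with np.errstate(invalid="ignore", divide="ignore"):
                    return np.where(self.weight > 0, self.depth_sum / self.weight, np.nan)

        H, W = 4, 6
        models = {}                                # label -> ObjectModel
        rng = np.random.default_rng(0)

        for frame in range(3):
            depth = rng.uniform(0.5, 3.0, size=(H, W))   # stand-in RGB-D depth frame
            labels = np.zeros((H, W), dtype=int)          # 0 = background
            labels[1:3, 2:5] = 1                          # a segmented moving object
            for label in np.unique(labels):
                models.setdefault(label, ObjectModel((H, W))).fuse(depth, labels == label)

        print("background fused depth:\n", models[0].fused_depth())
        print("object 1 fused depth:\n", models[1].fused_depth())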

    MetaSpace II: Object and full-body tracking for interaction and navigation in social VR

    MetaSpace II (MS2) is a social Virtual Reality (VR) system in which multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and get tactile feedback. MS2 allows walking in physical space by tracking each user's skeleton in real time, and lets users feel objects by employing passive haptics, i.e., when users touch or manipulate an object in the virtual world, they simultaneously touch or manipulate a corresponding object in the physical world. To enable these elements in VR, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through this association between the real and virtual worlds, users are able to walk freely while wearing a head-mounted device, avoid obstacles like walls and furniture, and interact with people and objects. Most current virtual reality (VR) environments are designed for a single-user experience in which interactions with virtual objects are mediated by hand-held input devices or hand gestures, and users are only shown a representation of their hands floating in front of the camera as seen from a first-person perspective. We believe that representing each user as a full-body avatar controlled by the natural movements of the person in the real world (see Figure 1d) can greatly enhance believability and a user's sense of immersion in VR. Comment: 10 pages, 9 figures. Video: http://living.media.mit.edu/projects/metaspace-ii
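
    The sketch below illustrates, under assumed calibration values and placeholder joint positions (none of them from the paper), the real-to-virtual correspondence MS2 relies on: tracked skeleton joints in the physical room are mapped by a rigid transform into the virtual scene built on the room scan, so that grasping a virtual object coincides with touching its physical counterpart.

        # Hypothetical sketch of mapping tracked joints from the motion-capture
        # frame into the scanned-room (virtual) frame. Calibration and joint
        # values are made up for illustration.
        import numpy as np

        def rigid_transform(points, yaw_rad, translation):
            """Apply a yaw rotation plus translation (tracker frame -> scan frame)."""
            c, s = np.cos(yaw_rad), np.sin(yaw_rad)
            rot = np.array([[c, 0.0, -s],
                            [0.0, 1.0, 0.0],
                            [s, 0.0, c]])
            return points @ rot.T + translation

        # One-off calibration aligning the tracker frame with the 3D room scan.
        CALIB_YAW = np.deg2rad(3.0)
        CALIB_OFFSET = np.array([0.12, 0.0, -0.05])   # metres

        # Per-frame tracked joints (head, hands) in the physical tracker frame.
        joints_physical = np.array([[1.20, 1.70, 0.80],   # head
                                    [1.45, 1.10, 0.95],   # right hand
                                    [0.95, 1.05, 0.90]])  # left hand
        joints_virtual = rigid_transform(joints_physical, CALIB_YAW, CALIB_OFFSET)

        # If a hand lands on a scanned object's surface in the virtual frame,
        # the user simultaneously touches the real object: passive haptics.
        print(joints_virtual.round(3))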

    Remote Real-Time Collaboration Platform enabled by the Capture, Digitisation and Transfer of Human-Workpiece Interactions

    In today's highly globalised manufacturing ecosystem, product design and verification activities, production and inspection processes, and technical support services are spread across global supply chains and customer networks. A platform that lets global teams collaborate with each other in real time to perform complex tasks is therefore highly desirable. This work investigates the design and development of a remote real-time collaboration platform using human motion capture technology powered by infrared-light-based depth imaging sensors borrowed from the gaming industry. The unique functionality of the proposed platform is the sharing of physical context during a collaboration session by exchanging not only human actions but also the effects of those actions on the task environment. This enables teams to work remotely on a common task at the same time and to get immediate feedback from each other, which is vital for collaborative design, inspection and verification tasks in the factories of the future.
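
    One way to picture the "shared physical context" described above is as a message that carries both the captured human action and its effect on the workpiece. The sketch below is a hypothetical message format, not the authors' actual protocol; all field names and values are illustrative.

        # Illustrative update message for a remote collaboration session: each
        # packet bundles the sensed human action (skeleton joints) with the
        # resulting change to the workpiece, so remote peers see both.
        import json
        import time
        from dataclasses import dataclass, field, asdict

        @dataclass
        class ActionUpdate:
            user_id: str
            timestamp: float
            joints: dict                  # joint name -> [x, y, z] from the depth sensor
            workpiece_changes: list = field(default_factory=list)  # effects on the task

            def to_message(self) -> str:
                return json.dumps(asdict(self))

        update = ActionUpdate(
            user_id="inspector_01",
            timestamp=time.time(),
            joints={"right_hand": [0.42, 1.08, 0.67], "head": [0.40, 1.62, 0.55]},
            workpiece_changes=[{"part": "bracket_3", "event": "rotated",
                                "pose_deg": [0.0, 0.0, 15.0]}],
        )
        print(update.to_message())   # broadcast to remote collaborators in real time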

    From images via symbols to contexts: using augmented reality for interactive model acquisition

    Systems that perform in real environments need to bind their internal state to externally perceived objects, events, or complete scenes. How to learn this correspondence has been a long-standing problem in computer vision as well as artificial intelligence. Augmented Reality provides an interesting perspective on this problem because a human user can directly relate displayed system results to the real environment. In the following, we present a system that is able to bootstrap internal models from user-system interactions. Starting from pictorial representations, it learns symbolic object labels that provide the basis for storing observed episodes. In a second step, more complex relational information is extracted from the stored episodes, enabling the system to react to specific scene contexts.
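
    A minimal, hypothetical sketch of this bootstrapping pipeline is given below: user interactions attach symbolic labels to pictorial detections, labelled scenes are stored as episodes, and a simple relation (label co-occurrence) is mined from the episode store afterwards. All function and variable names are illustrative.

        # Hypothetical sketch: bind user-provided symbols to detections, store
        # episodes, then extract which symbols co-occur in the same scene context.
        from collections import Counter
        from itertools import combinations

        episodes = []                     # each episode: set of object labels seen together

        def record_episode(detections, user_labels):
            """Bind user-provided symbols to image-level detections and store the scene."""
            labels = {user_labels.get(det_id, "unknown") for det_id in detections}
            episodes.append(labels)

        def extract_relations(min_count=2):
            """Mine which symbols tend to appear in the same scene context."""
            pairs = Counter()
            for labels in episodes:
                pairs.update(combinations(sorted(labels), 2))
            return {pair: n for pair, n in pairs.items() if n >= min_count}

        # The user names detections during an AR session...
        record_episode(["det_0", "det_1"], {"det_0": "cup", "det_1": "table"})
        record_episode(["det_7", "det_8"], {"det_7": "cup", "det_8": "table"})
        record_episode(["det_3"], {"det_3": "chair"})
        # ...and the stored episodes yield relational context the system can react to.
        print(extract_relations())        # {('cup', 'table'): 2}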