18,641 research outputs found

    A Planning Pipeline for Large Multi-Agent Missions

    In complex multi-agent applications, human operators are often tasked with planning and managing large heterogeneous teams of humans and autonomous vehicles. Although the use of these autonomous vehicles broadens the scope of meaningful applications, many of their systems remain unintuitive and difficult to master for human operators whose expertise lies in the application domain rather than at the platform level. Current research focuses on developing the individual capabilities needed to plan multi-agent missions of this scope, placing little emphasis on integrating these components into a full pipeline. This paper presents a complete and user-agnostic planning pipeline for large multi-agent missions known as the HOLII GRAILLE. The system takes a holistic approach to mission planning by integrating capabilities in human-machine interaction, flight path generation, and validation and verification. Component modules of the pipeline are examined individually, as is their integration into a whole system. Lastly, implications for future mission planning are discussed.
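
    A rough sketch of how such a staged pipeline could be composed is given below. The module names, data shapes, and the distance check are hypothetical illustrations, not the actual HOLII GRAILLE components.

        # Hypothetical sketch of a staged mission-planning pipeline.
        # Module names and data shapes are illustrative, not the HOLII GRAILLE API.
        from dataclasses import dataclass, field

        @dataclass
        class MissionPlan:
            waypoints: list = field(default_factory=list)   # per-vehicle flight paths
            issues: list = field(default_factory=list)      # findings from V&V

        def capture_operator_intent(goals):
            """Stand-in for the human-machine interaction front end."""
            return [{"vehicle": i, "goal": g} for i, g in enumerate(goals)]

        def generate_flight_paths(tasks):
            """Stand-in for flight path generation: one straight-line leg per task."""
            return [[(0.0, 0.0), tuple(t["goal"])] for t in tasks]

        def verify_and_validate(plan, max_leg_km=50.0):
            """Stand-in for V&V: flag any leg longer than a simple limit."""
            for path in plan.waypoints:
                (x0, y0), (x1, y1) = path[0], path[-1]
                if ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 > max_leg_km:
                    plan.issues.append(f"leg too long: {path}")
            return plan

        def plan_mission(goals):
            tasks = capture_operator_intent(goals)
            plan = MissionPlan(waypoints=generate_flight_paths(tasks))
            return verify_and_validate(plan)

        print(plan_mission([(10.0, 5.0), (80.0, 2.0)]).issues)

    The point of the sketch is only that each capability (intent capture, path generation, verification) sits behind a narrow interface, so the stages can be developed and validated independently before being chained together.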

    MetaSpace II: Object and full-body tracking for interaction and navigation in social VR

    MetaSpace II (MS2) is a social Virtual Reality (VR) system where multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and get tactile feedback. MS2 allows walking in physical space by tracking each user's skeleton in real time, and lets users feel objects by employing passive haptics: when users touch or manipulate an object in the virtual world, they simultaneously touch or manipulate a corresponding object in the physical world. To enable these elements in VR, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through this association between the real and virtual worlds, users are able to walk freely while wearing a head-mounted device, avoid obstacles like walls and furniture, and interact with people and objects. Most current VR environments are designed for a single-user experience in which interactions with virtual objects are mediated by hand-held input devices or hand gestures, and users are only shown a representation of their hands floating in front of the camera from a first-person perspective. We believe that representing each user as a full-body avatar controlled by the natural movements of the person in the real world (see Figure 1d) can greatly enhance believability and a user's sense of immersion in VR. Comment: 10 pages, 9 figures. Video: http://living.media.mit.edu/projects/metaspace-ii
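
    As a rough illustration of the real-to-virtual correspondence the abstract describes, the sketch below maps a tracked joint position from room coordinates into the scanned virtual scene. The calibration transform, values, and joint format are assumptions for illustration, not the MS2 implementation.

        # Illustrative only: aligning tracked skeleton joints with a 3D-scanned scene.
        # The calibration transform and joint format are assumptions, not MS2's API.
        import numpy as np

        # Rigid transform (rotation + translation) from tracker space to scan space,
        # e.g. obtained once by registering the 3D scan against the tracking volume.
        R = np.eye(3)                      # assume axes already aligned
        t = np.array([1.2, 0.0, -0.5])     # metres of offset between the two origins

        def to_virtual(joint_xyz):
            """Map a tracked joint position (metres, tracker frame) into scan coordinates."""
            return R @ np.asarray(joint_xyz) + t

        # A touch in the real room then lands on the corresponding virtual object,
        # which is what makes passive haptics line up.
        print(to_virtual([0.3, 1.1, 2.0]))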

    Talk your way round: a speech interface to a virtual museum

    Purpose: To explore the development of a speech interface to a Virtual World and to consider its relevance for disabled users. Method: The system was developed using mainly software that is available at minimal cost. How well the system functioned was assessed by measuring the number of times a group of users with a range of voices had to repeat commands for them to be successfully recognised. During an initial session, these users were asked to use the system with no instruction to see how easy this was. Results: Most of the spoken commands had to be repeated fewer than twice on average for successful recognition. For a set of ‘teleportation’ commands this figure was higher (2.4), but it was clear why this was so and could easily be rectified. The system was easy to use without instruction, and comments on it were generally positive. Conclusions: While the system has some limitations, a Virtual World with a reasonably reliable speech interface has been developed almost entirely from software that is available at minimal cost. Improvements and further testing are considered. Such a system would clearly improve access to Virtual Reality technologies for those without the skills or physical ability to use a standard keyboard and mouse, and it is an example of both Assistive Technology and Universal Design.
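
    The evaluation metric described above, the average number of repetitions needed before a command is recognised, can be computed as in the following sketch. The log layout is invented for illustration, not the study's data.

        # Sketch of the reported metric: mean repetitions per command before recognition.
        # The log format below is invented for illustration.
        from collections import defaultdict

        # (command, attempts_until_recognised) pairs, one per utterance
        log = [("go forward", 1), ("go forward", 2),
               ("teleport library", 3), ("teleport library", 2)]

        totals = defaultdict(list)
        for command, attempts in log:
            totals[command].append(attempts)

        for command, attempts in totals.items():
            print(command, sum(attempts) / len(attempts))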

    An Introduction to 3D User Interface Design

    3D user interface design is a critical component of any virtual environment (VE) application. In this paper, we present a broad overview of three-dimensional (3D) interaction and user interfaces. We discuss the effect of common VE hardware devices on user interaction, as well as interaction techniques for generic 3D tasks and the use of traditional two-dimensional interaction styles in 3D environments. We divide most user interaction tasks into three categories: navigation, selection/manipulation, and system control. Throughout the paper, our focus is on presenting not only the available techniques but also practical guidelines for 3D interaction design and widely held myths. Finally, we briefly discuss two approaches to 3D interaction design and some example applications with complex 3D interaction requirements. We also present an annotated online bibliography as a reference companion to this article.
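
    As one concrete instance of the selection/manipulation category, the sketch below implements simple ray-casting selection against spherical proxies. It is a generic illustration of that class of technique, not code prescribed by the article.

        # Generic ray-casting selection sketch: pick the nearest object whose bounding
        # sphere is intersected by a ray from the user's hand or head. Illustrative only.
        import numpy as np

        def pick(ray_origin, ray_dir, objects):
            """objects: list of (name, centre, radius). Returns the closest hit name or None."""
            ray_dir = ray_dir / np.linalg.norm(ray_dir)
            best, best_t = None, float("inf")
            for name, centre, radius in objects:
                oc = np.asarray(centre) - np.asarray(ray_origin)
                t = float(np.dot(oc, ray_dir))           # distance along ray to closest point
                if t < 0:
                    continue                             # object is behind the user
                miss = np.linalg.norm(oc - t * ray_dir)  # perpendicular distance to centre
                if miss <= radius and t < best_t:
                    best, best_t = name, t
            return best

        scene = [("lamp", (0, 1, 3), 0.3), ("chair", (1, 0, 5), 0.8)]
        print(pick(np.array([0, 1, 0]), np.array([0, 0, 1]), scene))   # -> "lamp"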

    Exploring the Front Touch Interface for Virtual Reality Headsets

    In this paper, we propose a new interface for virtual reality headsets: a touchpad on the front of the headset. To demonstrate the feasibility of the front touch interface, we built a prototype device, explored the expansion of the VR UI design space, and performed various user studies. We started with preliminary tests to see how intuitively and accurately people can interact with the front touchpad. We then experimented with various user interfaces such as a binary selection, a typical menu layout, and a keyboard. Two-Finger and Drag-n-Tap were also explored to find the appropriate selection technique. As a low-cost, lightweight, and low-power technology, a touch sensor can make an ideal interface for a mobile headset. The front touch area can also be large enough to allow a wide range of interaction types, such as multi-finger interactions. With this novel front touch interface, we pave the way for new virtual reality interaction methods.
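
    For a feel of how a front touchpad might drive a typical menu layout, the sketch below maps normalised touchpad coordinates onto a grid of items. The grid layout and coordinate convention are guesses for illustration, not the paper's design.

        # Illustrative mapping from normalised front-touchpad coordinates to menu items.
        # Grid layout and coordinate convention are assumptions, not the paper's design.
        def menu_item(x, y, items, columns=3):
            """x, y in [0, 1) across the touchpad; items laid out row-major in a grid."""
            rows = -(-len(items) // columns)          # ceiling division
            col = min(int(x * columns), columns - 1)
            row = min(int(y * rows), rows - 1)
            index = row * columns + col
            return items[index] if index < len(items) else None

        menu = ["Home", "Library", "Settings", "Search", "Back"]
        print(menu_item(0.9, 0.1, menu))   # top-right cell -> "Settings"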

    Bridging the Semantic Gap with SQL Query Logs in Natural Language Interfaces to Databases

    A critical challenge in constructing a natural language interface to databases (NLIDB) is bridging the semantic gap between a natural language query (NLQ) and the underlying data. Two specific ways this challenge exhibits itself are keyword mapping and join path inference. Keyword mapping is the task of mapping individual keywords in the original NLQ to database elements (such as relations, attributes, or values); it is challenging due to the ambiguity in mapping the user's mental model and diction to the schema definition and contents of the underlying database. Join path inference is the process of selecting the relations and join conditions in the FROM clause of the final SQL query; it is difficult because NLIDB users lack knowledge of the database schema or SQL and therefore cannot explicitly specify the intermediate tables and joins needed to construct the final SQL query. In this paper, we propose leveraging information from the SQL query log of a database to enhance the performance of existing NLIDBs with respect to these challenges. We present Templar, a system that can be used to augment existing NLIDBs. Our extensive experimental evaluation demonstrates the effectiveness of our approach, leading to up to a 138% improvement in top-1 accuracy in existing NLIDBs by leveraging SQL query log information. Comment: Accepted to IEEE International Conference on Data Engineering (ICDE) 201
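
    A toy illustration of how query-log statistics could inform keyword mapping is sketched below: candidate schema elements for an NLQ keyword are ranked partly by how often they appear in past queries. The log, schema names, and scoring scheme are invented here and are not Templar's actual model.

        # Toy illustration: rank candidate database elements for an NLQ keyword by how
        # often they appear in the SQL query log. The scoring is invented, not Templar's.
        from collections import Counter
        import re

        query_log = [
            "SELECT name FROM author WHERE author.affiliation = 'MIT'",
            "SELECT title FROM publication JOIN writes ON writes.pid = publication.pid",
            "SELECT name FROM author JOIN writes ON writes.aid = author.aid",
        ]

        # Count how often each token (table or table.column reference) occurs in the log.
        usage = Counter(tok.lower() for q in query_log
                        for tok in re.findall(r"\w+(?:\.\w+)?", q))

        def rank_candidates(keyword, candidates):
            """Prefer candidates whose name matches the keyword; break ties by log frequency."""
            return sorted(candidates,
                          key=lambda c: (keyword.lower() not in c.lower(), -usage[c.lower()]))

        print(rank_candidates("author", ["author", "publication", "writes"]))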