    Augmented reality meeting table: a novel multi-user interface for architectural design

    Immersive virtual environments have received widespread attention as possible replacements for the media and systems that designers traditionally use, and more generally as support for collaborative work. Relatively little attention has been given to date, however, to the problem of how to merge immersive virtual environments into real-world work settings, and so to add to the media at the disposal of the designer and the design team rather than to replace them. In this paper we report on a research project in which optical see-through augmented reality displays have been developed together with prototype decision-support software for architectural and urban design. We suggest that a critical characteristic of multi-user augmented reality is its ability to generate visualisations from a first-person perspective in which the scale of rendition of the design model follows many of the conventions that designers are used to. Different scales of model appear to allow designers to focus on different aspects of the design under consideration. Augmenting the scene with simulations of pedestrian movement appears to assist both in scale recognition and in moving from a first-person to a third-person understanding of the design. This research project is funded by the European Commission IST programme (IST-2000-28559).
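
    The pedestrian-movement augmentation mentioned above can be approximated with a very simple agent model. The sketch below is a hypothetical illustration, not the project's actual software: agents walk between waypoints at a typical human walking speed so that they act as human-scale references in the rendered design model.

```python
import math
import random

WALK_SPEED = 1.4  # average human walking speed in m/s, used as a scale cue

class Pedestrian:
    """Minimal agent that walks between waypoints at human walking speed."""
    def __init__(self, waypoints):
        self.waypoints = waypoints
        self.pos = list(waypoints[0])
        self.target = 1

    def update(self, dt):
        tx, ty = self.waypoints[self.target]
        dx, dy = tx - self.pos[0], ty - self.pos[1]
        dist = math.hypot(dx, dy)
        if dist < 0.1:  # reached the waypoint, pick the next one
            self.target = (self.target + 1) % len(self.waypoints)
            return
        step = WALK_SPEED * dt / dist
        self.pos[0] += dx * step
        self.pos[1] += dy * step

# Example: a few agents circulating through a plaza defined in model coordinates (metres)
plaza = [(0, 0), (20, 0), (20, 15), (0, 15)]
agents = [Pedestrian(random.sample(plaza, len(plaza))) for _ in range(5)]
for frame in range(100):
    for a in agents:
        a.update(dt=1 / 30)  # 30 Hz simulation step
```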

    Haptic-GeoZui3D: Exploring the Use of Haptics in AUV Path Planning

    We have developed a desktop virtual reality system that we call Haptic-GeoZui3D, which brings together 3D user interaction and visualization to provide a compelling environment for AUV path planning. A key component in our system is the PHANTOM haptic device (SensAble Technologies, Inc.), which affords a sense of touch and force feedback – haptics – to provide cues and constraints to guide the user’s interaction. This paper describes our system, and how we use haptics to significantly augment our ability to lay out a vehicle path. We show how our system works well for quickly defining simple waypoint-to-waypoint (e.g. transit) path segments, and illustrate how it could be used in specifying more complex, highly segmented (e.g. lawnmower survey) paths.
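
    As a rough illustration of the two path types named above, the sketch below generates waypoint lists for a simple transit segment and for a lawnmower survey pattern. It is a hypothetical example, not part of Haptic-GeoZui3D itself, and the spacing and area values are arbitrary.

```python
def transit_path(start, end):
    """A simple waypoint-to-waypoint transit segment: just the two endpoints."""
    return [start, end]

def lawnmower_path(x_min, x_max, y_min, y_max, spacing):
    """Generate a boustrophedon (lawnmower) survey over a rectangular area.

    Tracks run parallel to the x-axis and are separated by `spacing` metres.
    """
    waypoints = []
    y = y_min
    left_to_right = True
    while y <= y_max:
        if left_to_right:
            waypoints += [(x_min, y), (x_max, y)]
        else:
            waypoints += [(x_max, y), (x_min, y)]
        left_to_right = not left_to_right
        y += spacing
    return waypoints

# Example: a short transit leg and a 200 m x 100 m survey box with 25 m track spacing
print(transit_path((0, 0), (500, 0)))
print(lawnmower_path(0, 200, 0, 100, 25))
```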

    Evaluating the Effects of Immersive Embodied Interaction on Cognition in Virtual Reality

    Virtual reality is on the verge of becoming mainstream household technology, as technologies such as head-mounted displays, trackers, and interaction devices become affordable and easily available. Virtual reality (VR) has immense potential to enhance the fields of education and training, and its power can be used to spark interest and enthusiasm among learners. It is, therefore, imperative to evaluate the risks and benefits that immersive virtual reality poses to the field of education. Research suggests that learning is an embodied process: learning depends on grounded aspects of the body including action, perception, and interactions with the environment. This research aims to study whether immersive embodiment through the means of virtual reality facilitates embodied cognition. A pedagogical VR solution that takes advantage of embodied cognition can lead to enhanced learning benefits. Towards achieving this goal, this research presents a linear continuum for immersive embodied interaction within virtual reality. This research evaluates the effects of three levels of immersive embodied interaction on cognitive thinking, presence, usability, and satisfaction among users in the fields of science, technology, engineering, and mathematics (STEM) education. Results from the presented experiments show that immersive virtual reality is greatly effective in knowledge acquisition and retention, and highly enhances user satisfaction, interest, and enthusiasm. Users experience high levels of presence and are profoundly engaged in the learning activities within the immersive virtual environments. The studies presented in this research evaluate pedagogical VR software to train and motivate students in STEM education, and provide an empirical analysis comparing desktop VR (DVR), immersive VR (IVR), and immersive embodied VR (IEVR) conditions for learning. This research also proposes a fully immersive embodied interaction metaphor (IEIVR) for learning computational concepts as a future direction, and presents the challenges faced in implementing the IEIVR metaphor due to extended periods of immersion. Results from the conducted studies help in formulating guidelines for virtual reality and education researchers working in STEM education and training, and for educators and curriculum developers seeking to improve student engagement in the STEM fields.

    Haptic Media Scenes

    The aim of this thesis is to apply new media phenomenological and enactive embodied cognition approaches to explain the role of haptic sensitivity and communication in personal computer environments for productivity. Prior theory has given little attention to the role of the haptic senses in influencing cognitive processes, and does not frame the richness of haptic communication in interaction design, as haptic interactivity in HCI has historically tended to be designed and analyzed from a perspective on communication as transmission: sending and receiving haptic signals. The haptic sense may mediate not only contact confirmation and affirmation but also rich semiotic and affective messages, yet there is a strong contrast between this inherent ability of haptic perception and current-day support for such haptic communication interfaces. I therefore ask: How do the haptic senses (touch and proprioception) impact our cognitive faculties when mediated through digital and sensor technologies? How may these insights be employed in interface design to facilitate rich haptic communication? To answer these questions, I use theoretical close readings that embrace two research fields, new media phenomenology and enactive embodied cognition. The theoretical discussion is supported by neuroscientific evidence and tested empirically through case studies centered on digital art. I use these insights to develop the concept of the haptic figura, an analytical tool to frame the communicative qualities of haptic media. The concept gauges rich machine-mediated haptic interactivity and communication in systems with a material solution supporting active haptic perception, and the mediation of semiotic and affective messages that are understood and felt. As such, the concept may function as a design tool for developers, but also for media critics evaluating haptic media. The tool is used to frame a discussion of the opportunities and shortcomings of haptic interfaces for productivity, differentiating between media systems for the hand and for the full body. The significance of this investigation lies in demonstrating that haptic communication is an underutilized element in personal computer environments for productivity, and in providing an analytical framework for a more nuanced understanding of haptic communication as enabling the mediation of a range of semiotic and affective messages beyond notification and confirmation interactivity.

    A Seeing Place – Connecting Physical and Virtual Spaces

    In the experience and design of spaces today, we meet both reality and virtuality. But how is the relation between the real and the virtual construed? How can we as researchers and designers contribute to resolving the physical-virtual divide regarding spaces? This thesis explores the relations between the physical and the virtual and investigates ways of connecting physical and virtual space, both in theory and in practice. The basic concepts of the thesis are Space, Place, and Stage. The central idea is that the stage is a strong conceptual metaphor that has the capacity to work as a unifying concept relating physical and virtual spaces and forming a place for attention, agreements, and experience: a Seeing Place. The concept of the seeing place comes from the Greek root of the word theatre, meaning a “place for seeing”, both in the sense of looking at and of understanding. In certain situations, the relations between physical and virtual spaces become important for users’ experience and understanding of these situations. This thesis presents seven cases of physical-virtual spaces in the field of architectural and exhibition design. The method of these studies is research by design. The discussion then focuses on how each setting works as a stage, and how conceptual metaphors can contribute to the connection between physical and virtual spaces. Building upon the explorations and experiments in different domains, the thesis contains a collection of seven papers concerning the relations between physical and virtual space in different contexts outside the world of theatre. These papers range from the more technical, about Virtual Reality (design of networked collaborative spaces), to the more conceptual, about staging (methods in interaction design) and virtual space (using a transdisciplinary approach). The results of these studies suggest that the Stage metaphor of a physical-virtual space can contribute to elucidating the relations between physical and virtual spaces in a number of ways. Conceptually, the stage metaphor links together the semiotic and the hermeneutic views of space and place. And, from a practice-based perspective, a Seeing Place view opens up the way to creating contemporary spaces and resolving the physical-virtual divide.

    Web GIS in practice V: 3-D interactive and real-time mapping in Second Life

    This paper describes technologies from Daden Limited for geographically mapping and accessing live news stories/feeds, as well as other real-time, real-world data feeds (e.g., Google Earth KML feeds and GeoRSS feeds), in the 3-D virtual world of Second Life, by plotting and updating the corresponding Earth location points on a globe or some other suitable form (in-world), and further linking those points to relevant information and resources. This approach enables users to visualise, interact with, and even walk or fly through, the plotted data in 3-D. Users can also do the reverse: put pins on a map in the virtual world, and then view the data points on the Web in Google Maps or Google Earth. The technologies presented thus serve as a bridge between mirror worlds like Google Earth and virtual worlds like Second Life. We explore the geo-data display potential of virtual worlds and their likely convergence with mirror worlds in the context of the future 3-D Internet or Metaverse, and reflect on the potential of such technologies and their future possibilities, e.g., their use to develop emergency/public health virtual situation rooms to effectively manage emergencies and disasters in real time. The paper also covers some of the issues associated with these technologies, namely user interface accessibility and individual privacy.
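
    The in-world globe plotting described above ultimately reduces to converting geographic coordinates from a feed into positions on a sphere. The sketch below shows that conversion under the assumption of a simple spherical globe model; it is illustrative only and not Daden Limited's actual code.

```python
import math

def latlon_to_globe(lat_deg, lon_deg, radius=10.0):
    """Convert latitude/longitude (degrees) to x, y, z on a sphere of the given radius.

    Assumes a plain sphere centred at the origin, with z as the polar axis.
    """
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    x = radius * math.cos(lat) * math.cos(lon)
    y = radius * math.cos(lat) * math.sin(lon)
    z = radius * math.sin(lat)
    return x, y, z

# Example: plot a GeoRSS item reported at 51.5 N, 0.13 W on a 10-unit in-world globe
print(latlon_to_globe(51.5, -0.13))
```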

    SpaceTop: integrating 2D and spatial 3D interactions in a see-through desktop environment

    SpaceTop is a concept that fuses 2D and spatial 3D interactions in a single workspace. It extends the traditional desktop interface with interaction technology and visualization techniques that enable seamless transitions between 2D and 3D manipulation. SpaceTop allows users to type, click, and draw in 2D, and to directly manipulate interface elements that float in the 3D space above the keyboard. It makes it possible to easily switch from one modality to another, or to use two modalities simultaneously with different hands. We introduce hardware and software configurations for co-locating these various interaction modalities in a unified workspace using depth cameras and a transparent display. We describe new interaction and visualization techniques that allow users to interact with 2D elements floating in 3D space. We present results from a preliminary user study that indicate the benefits of such a hybrid workspace.
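
    One way to picture the seamless 2D/3D switching described above is a simple rule on tracked hand height over the keyboard plane. The sketch below is a hypothetical approximation of that idea, not SpaceTop's actual implementation, and the threshold value is an assumed placeholder.

```python
LIFT_THRESHOLD_M = 0.04  # assumed height above the keyboard at which a hand switches to 3D

def classify_modality(left_hand_height, right_hand_height):
    """Return a per-hand interaction modality from depth-camera hand heights (metres).

    Hands resting on or near the keyboard stay in 2D (typing, clicking, drawing);
    lifted hands switch to direct 3D manipulation of floating elements.
    """
    modes = []
    for height in (left_hand_height, right_hand_height):
        modes.append("3D" if height > LIFT_THRESHOLD_M else "2D")
    return tuple(modes)

# Example: left hand typing on the keyboard, right hand lifted to grab a floating window
print(classify_modality(0.01, 0.12))  # -> ('2D', '3D')
```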

    Automatic Speed Control For Navigation in 3D Virtual Environment

    As technology progresses, the scale and complexity of 3D virtual environments can also increase proportionally. This leads to multiscale virtual environments, which are environments that contain groups of objects with extremely unequal levels of scale. Ideally, the user should be able to navigate such environments efficiently and robustly. Yet most previous methods to automatically control the speed of navigation do not generalize well to environments with widely varying scales. I present an improved method to automatically control the navigation speed of the user in 3D virtual environments. The main benefit of my approach is that it automatically adapts the navigation speed in multiscale environments in a manner that enables efficient navigation with maximum freedom, while still avoiding collisions. The results of a usability test show a significant reduction in the completion time for a multiscale navigation task.
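
    A common way to realise this kind of automatic speed control, and one plausible reading of the approach above, is to scale movement speed with the distance to the nearest visible geometry. The sketch below is a generic illustration of that idea, not the thesis' exact method, and the gain and clamp values are arbitrary.

```python
def navigation_speed(distance_to_nearest_surface, gain=1.0,
                     min_speed=0.01, max_speed=1000.0):
    """Scale navigation speed with the distance to the nearest surface.

    Close to geometry the user moves slowly (fine control, fewer collisions);
    far from any geometry the speed grows, so both room-scale and city-scale
    parts of a multiscale environment stay navigable. All values are in scene
    units per second and are illustrative defaults.
    """
    speed = gain * distance_to_nearest_surface
    return max(min_speed, min(speed, max_speed))

# Example: 0.5 units from a desk vs. 2 km above a city model
print(navigation_speed(0.5))     # slow, precise movement
print(navigation_speed(2000.0))  # clamped to max_speed for large-scale travel
```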

    Social Virtual Reality Platform Comparison and Evaluation Using a Guided Group Walkthrough Method

    As virtual reality (VR) headsets become more commercially accessible, a range of social platforms have been developed that exploit the immersive nature of these systems. There is growing interest in using these platforms in social and work contexts, but relatively little work examining the usability choices that have been made. We developed a usability inspection method based on the cognitive walkthrough that we call the guided group walkthrough. A guided group walkthrough is applied to existing social VR platforms by having a guide walk the participants through a series of abstract social tasks that are common across the platforms. Using this method, we compared six social VR platforms for the Oculus Quest. After constructing an appropriate task hierarchy and walkthrough question structure for social VR, we ran several groups of participants through the walkthrough process. We uncover usability challenges that are common across the platforms, identify specific design considerations, and comment on the utility of the walkthrough method in this situation.
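
    The task hierarchy and walkthrough question structure mentioned above can be represented very simply. The sketch below is one hypothetical encoding of abstract social tasks with per-step walkthrough questions, not the authors' actual instrument; the task names and question wording are illustrative.

```python
# Hypothetical encoding of an abstract social-VR task hierarchy with
# per-step walkthrough questions; all wording is illustrative only.
TASK_HIERARCHY = {
    "Join a friend's session": [
        "Find the friends list",
        "Locate the friend who is online",
        "Travel to the friend's current world",
    ],
    "Share an object with another user": [
        "Create or select the object",
        "Hand the object to the other user",
    ],
}

WALKTHROUGH_QUESTIONS = [
    "Will the user know what to do at this step?",
    "Will the user notice that the correct action is available?",
    "After the action, will the user understand the feedback?",
]

def run_walkthrough(platform_name):
    """Print the guide's script: each task step paired with every question."""
    for task, steps in TASK_HIERARCHY.items():
        print(f"[{platform_name}] Task: {task}")
        for step in steps:
            for question in WALKTHROUGH_QUESTIONS:
                print(f"  Step '{step}': {question}")

run_walkthrough("ExamplePlatform")
```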

    Exploiting fashion x-commerce through the empowerment of voice in the fashion virtual reality arena. Integrating voice assistant and virtual reality technologies for fashion communication

    The ongoing development of eXtended Reality (XR) technologies is supporting a rapid increase in their performance along with a progressive decrease in their cost, making them more and more attractive to a large class of consumers. As a result, their widespread use is expected within the next few years. This may foster new opportunities for e-commerce strategies, giving birth to an XR-based commerce (x-commerce) ecosystem. Compared with web- and mobile-based shopping experiences, x-commerce could more easily support brick-and-mortar store-like experiences. One interesting and well-established example is the interaction between customers and shop assistants inside fashion stores. In this work, we concentrate on such aspects through the design and implementation of an XR-based shopping experience in which vocal dialogues with an Amazon Alexa virtual assistant are supported, to experiment with a more natural and familiar contact with the store environment. To verify the validity of this approach, we asked a group of fashion experts to try two different XR store experiences: with and without the voice assistant integration. The users were then asked to answer a questionnaire rating their experiences. The results support the hypothesis that vocal interactions may contribute to increasing the acceptance and perceived comfort of XR-based fashion shopping.
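
    The voice-assistant integration described above can be thought of as routing recognised shopping intents to actions in the XR scene. The sketch below is a hypothetical dispatcher; the intent names, catalogue data, and scene events are assumptions and do not reflect the authors' actual Alexa skill or store implementation.

```python
# Hypothetical intent dispatcher for a voice-driven XR fashion store.
# Intent names and catalogue contents are illustrative, not the authors' skill.
CATALOGUE = {
    "silk scarf": {"price": 120, "sizes": ["one size"]},
    "leather jacket": {"price": 850, "sizes": ["S", "M", "L"]},
}

def handle_intent(intent, slots, scene_events):
    """Map a recognised voice intent to an action in the XR store scene."""
    item = slots.get("item")
    if intent == "ShowItemIntent" and item in CATALOGUE:
        scene_events.append(f"highlight:{item}")
        return f"Here is the {item}."
    if intent == "AskPriceIntent" and item in CATALOGUE:
        return f"The {item} costs {CATALOGUE[item]['price']} euros."
    if intent == "AddToCartIntent" and item in CATALOGUE:
        scene_events.append(f"cart:{item}")
        return f"I added the {item} to your cart."
    return "Sorry, I did not find that item."

# Example dialogue turn: the customer asks the assistant to show a product
scene_events = []
print(handle_intent("ShowItemIntent", {"item": "silk scarf"}, scene_events))
print(scene_events)  # the XR scene reacts by highlighting the product
```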