
    A haptic-enabled multimodal interface for the planning of hip arthroplasty

    Multimodal environments fuse a diverse range of sensory modalities, which is particularly important when integrating the complex data involved in surgical preoperative planning. The authors apply a multimodal interface to the preoperative planning of hip arthroplasty, with a user interface that integrates immersive stereo displays and haptic modalities. This article gives an overview of this multimodal application framework and discusses the benefits of incorporating the haptic modality in this area.

    Combining physical constraints with geometric constraint-based modeling for virtual assembly

    The research presented in this dissertation aims to create a virtual assembly environment capable of simulating the constant and subtle interactions (hand-part, part-part) that occur during manual assembly, and of providing appropriate feedback to the user in real time. A virtual assembly system called SHARP (System for Haptic Assembly and Realistic Prototyping) is created, which utilizes simulated physical constraints for part placement during assembly.

    The first approach taken in this research utilized Voxmap Point Shell (VPS) software to implement collision detection and physics-based modeling in SHARP. A volumetric approach, in which complex CAD models are represented by numerous small cubic voxel elements, was used to obtain fast physics update rates (500-1000 Hz). A novel dual-handed haptic interface was developed and integrated into the system, allowing the user to manipulate parts with both hands simultaneously. However, the coarse model approximations used for collision detection and physics-based modeling only allowed assembly when the minimum clearance was at least ∼8-10%.

    To address this low-clearance assembly problem, the second effort focused on importing accurate parametric CAD (B-Rep) models into SHARP. These accurate B-Rep representations are used for collision detection as well as for simulating physical contacts more accurately. A new hybrid approach is presented that combines simulated physical constraints with geometric constraints that can be defined at runtime. Different case studies are used to identify the combination of methods (collision detection, physical constraints, geometric constraints) best able to simulate intricate interactions and environment behavior during manual assembly. An innovative automatic constraint recognition algorithm is created and integrated into SHARP. The feature-based approach taken in the algorithm's design facilitates faster identification of the potential geometric constraints that need to be defined. This approach results in optimized system performance while providing a more natural user experience for assembly.
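    The voxel-based collision handling described in this abstract can be sketched in a few lines: one part becomes an occupancy map of voxel cells, the other a "point shell" of surface points, and shell points falling inside occupied voxels contribute a spring-like penalty force. This is a minimal illustrative sketch only; the function names, the 1 cm resolution, and the force model are assumptions for illustration and do not reflect the actual VPS implementation.

    ```python
    import numpy as np

    VOXEL_SIZE = 0.01  # assumed 1 cm voxel resolution

    def voxelize(points, voxel_size=VOXEL_SIZE):
        """Map a part's sample points to the set of voxel cells they occupy."""
        return {tuple(v) for v in np.floor(np.asarray(points) / voxel_size).astype(int)}

    def penalty_force(voxmap, shell_points, stiffness=500.0, voxel_size=VOXEL_SIZE):
        """Sum a spring-like penalty over shell points lying in occupied voxels."""
        force = np.zeros(3)
        for p in np.asarray(shell_points, dtype=float):
            cell = tuple(np.floor(p / voxel_size).astype(int))
            if cell in voxmap:  # shell point has penetrated the other part
                # Push the point away from the voxel center (crude depth estimate).
                center = (np.array(cell) + 0.5) * voxel_size
                force += stiffness * (p - center)
        return force
    ```

    Because each test is a constant-time set lookup, the cost per physics step is linear in the number of shell points, which is what makes the high update rates (500-1000 Hz) quoted above plausible at the price of the coarse, clearance-limiting approximation the abstract notes.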

    A multimodal framework for interactive sonification and sound-based communication


    Immersive Technologies in Virtual Companions: A Systematic Literature Review

    The emergence of virtual companions is transforming the evolution of intelligent systems that effortlessly cater to the unique requirements of users. These advanced systems not only take into account the user's present capabilities, preferences, and needs but can also adapt dynamically to changes in the environment, as well as to fluctuations in the user's emotional state or behavior. A virtual companion is an intelligent software application that offers support, assistance, and companionship across various aspects of users' lives. Various enabling technologies are involved in building virtual companions; among these, Augmented Reality (AR) and Virtual Reality (VR) are emerging as transformative tools. While their potential for use in virtual companions or digital assistants is promising, their applications in these domains remain relatively unexplored. To address this gap, a systematic review was conducted to investigate the applications of VR, AR, and Mixed Reality (MR) immersive technologies in the development of virtual companions. A comprehensive search across PubMed, Scopus, and Google Scholar yielded 28 relevant articles out of a pool of 644. The review revealed that immersive technologies, particularly VR and AR, play a significant role in creating digital assistants, offering a wide range of applications that benefit individuals' lives in areas such as addressing social isolation, enhancing cognitive abilities and dementia care, facilitating education, and more. Additionally, AR and MR hold potential for enhancing quality of life (QoL) within the context of virtual companion technology. The findings of this review provide a valuable foundation for further research in this evolving field.

    Understanding user interactions in stereoscopic head-mounted displays

    Spring 2022. Includes bibliographical references.

    Interacting in stereoscopic head-mounted displays can be difficult: there are not yet clear standards for how interactions in these environments should be performed. Virtual reality offers a number of well-designed interaction techniques; augmented reality interaction techniques, however, still need to be improved before they can be used easily. This dissertation covers work done toward understanding how users navigate and interact with virtual environments displayed in stereoscopic head-mounted displays. With this understanding, existing techniques from virtual reality devices can be transferred to augmented reality where appropriate, and where that is not the case, new interaction techniques can be developed. This work begins by observing how participants interact with virtual content using gesture alone, speech alone, and combined gesture+speech during a basic object-manipulation task in augmented reality. Later, a complex 3-dimensional data-exploration environment is developed and refined. That environment can be used in both augmented reality (AR) and virtual reality (VR), either asynchronously or simultaneously. The process of iteratively designing that system, and the design choices made during its implementation, are documented for future researchers working on complex systems. The dissertation concludes with a comparison of user interactions and navigation in that complex environment when using either an augmented or a virtual reality display. That comparison contributes new knowledge on how people perform object manipulations across the two devices. When viewing 3D visualizations, users need to feel able to navigate the environment. Without careful attention to proper interaction-technique design, people may struggle to use the developed system: these struggles may range from a system that is uncomfortable and unfit for long-term use to new users being unable to interact in these environments at all. Getting the interactions right for AR and VR environments is a step toward facilitating their widespread acceptance. This dissertation provides the groundwork needed to start designing interaction techniques around how people utilize their personal space, virtual space, body, tools, and feedback systems.

    Sonic Interactions in Virtual Environments: the Egocentric Audio Perspective of the Digital Twin

    The relationships between the listener, the physical world, and the virtual environment (VE) should not only inspire the design of natural multimodal interfaces but should be examined to make sense of the mediating action of VR technologies. This chapter aims to transform an archipelago of studies related to sonic interactions in virtual environments (SIVE) into a research field equipped with a first theoretical framework with an inclusive vision of the challenges to come: the egocentric perspective of the auditory digital twin. In a VE with immersive audio technologies implemented, the role of VR simulations must be enacted by a participatory exploration of sense-making in a network of human and non-human agents, called actors. The guardian of such a locus of agency is the auditory digital twin, which fosters intra-actions between humans and technology, dynamically and fluidly redefining all those configurations that are crucial for an immersive and coherent experience. The idea of entanglement theory is here articulated mainly in an egocentric-spatial perspective related to emerging knowledge of the listener's perceptual capabilities. This is an actively transformative relation, with the digital twin's potential to create movement, transparency, and provocative activities in VEs. The chapter contains an original theoretical perspective complemented by several bibliographical references and links to the other book chapters that have contributed significantly to the proposal presented here.

    Comment: 46 pages, 5 figures. Pre-print version of the introduction to the book "Sonic Interactions in Virtual Environments", in press for Springer's Human-Computer Interaction Series, Open Access license. The pre-print editors' copy of the book can be found at https://vbn.aau.dk/en/publications/sonic-interactions-in-virtual-environments - full book info: https://sive.create.aau.dk/index.php/sivebook