
    Sensory Properties in Fusion of Visual/Haptic Stimuli Using Mixed Reality

    When we recognize objects, information from multiple senses (e.g., vision, hearing, and touch) is fused. For example, both the eyes and the hands provide relevant information about an object's shape. We investigate how such sensory stimuli interact with each other. To that end, we developed a system that produces haptic/visual sensory fusion using a mixed reality technique. Our experiments show that the haptic stimulus appears to be affected by the visual stimulus when a discrepancy exists between the visual and haptic stimuli.
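
    The interaction effect reported here is commonly explained by reliability-weighted cue combination, in which the less noisy sense dominates the fused percept. A minimal Python sketch of that general model follows; the abstract does not state this formulation, so the weighting scheme and the numbers are illustrative assumptions only.

    # Minimal sketch of reliability-weighted visual-haptic fusion
    # (maximum-likelihood cue combination). Illustrative only; not the
    # model used in the paper.
    def fuse_cues(visual_est, visual_var, haptic_est, haptic_var):
        """Combine two noisy estimates; the less noisy cue dominates."""
        w_visual = haptic_var / (visual_var + haptic_var)
        w_haptic = visual_var / (visual_var + haptic_var)
        fused = w_visual * visual_est + w_haptic * haptic_est
        fused_var = (visual_var * haptic_var) / (visual_var + haptic_var)
        return fused, fused_var

    # With a reliable visual cue, a visual/haptic size discrepancy is
    # resolved mostly in favor of vision, matching the reported effect:
    size, _ = fuse_cues(visual_est=10.0, visual_var=0.1,
                        haptic_est=12.0, haptic_var=1.0)
    print(round(size, 2))  # 10.18 -- pulled strongly toward the visual cue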

    The scale of sense: spatial extent and multimodal urban design

    This paper is derived from the work of the UK AHRC/EPSRC 'Designing for the 21st Century' research project Multimodal Representation of Urban Space. This research group seeks to establish a new form of notation for urban design which pays attention to our entire sensory experience of place. This paper addresses one of the most important aspects of this endeavour: scale. Scale is of course a familiar abstraction to all architects and urban designers, allowing for representations tailored to different levels of detail and allowing drawings to be translated into built structures. Scale is also a factor in human experience: the spatial extent of each of our senses is different. Many forms of architectonic representation are founded upon the extension of the visual modality, and designs are accordingly tuned towards this sense. We can all speak from our own experience, however, that urban environments are a feast for all the senses. The visceral quality of walking down a wide tree-lined boulevard differs greatly from the subterranean crowds of the subway, or the meandering pause invited by the city square. Similarly, our experience of hearing and listening is more than just passive observation, by virtue of our own power of voice and the feedback created by our percussive movements across a surface or through a medium. Taste and smell are also excited by the urban environment: the social importance of food preparation and the associations between smell and public health are issues of sensory experience. The tactile experience of space, felt with the entire body as well as our more sensitive hands, allows for direct manipulation and interaction as well as sensations of mass, heat, proximity and texture. Our project team presents a series of tools for designers which explore the variety of sensory modalities and their associated scales. This suite of notations and analytical frameworks turns our attention to the sensory experience of places, and offers a method and pattern book for more holistic multi-sensory and multi-modal urban design.

    Optimization-Based Wearable Tactile Rendering

    Novel wearable tactile interfaces offer the possibility of simulating tactile interactions with virtual environments directly on our skin. However, unlike kinesthetic interfaces, for which haptic rendering is a well-explored problem, they pose new questions about the formulation of the rendering problem. In this work, we propose a formulation of tactile rendering as an optimization problem, which is general for a large family of tactile interfaces. Based on an accurate simulation of contact between a finger model and the virtual environment, we pose tactile rendering as the optimization of the device configuration, such that the contact surface between the device and the actual finger matches as closely as possible the contact surface in the virtual environment. We describe the optimization formulation in general terms, and we also demonstrate its implementation on a thimble-like wearable device. We validate the tactile rendering formulation by analyzing its force error, and we show that it outperforms other approaches.
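
    To make the formulation above concrete, the following Python sketch casts tactile rendering as numerical optimization over a device configuration, minimizing the mismatch between the device-induced contact surface and the target contact surface from the simulation. The contact model and cost function are deliberately simplified placeholders, not the paper's actual device model or formulation.

    import numpy as np
    from scipy.optimize import minimize

    def contact_on_device(config, points):
        """Toy model: a device pose (offset, tilt) yields contact depths."""
        offset, tilt = config
        return offset + tilt * points[:, 0]     # planar contact surface

    def tactile_rendering(target_depths, points, x0=(0.0, 0.0)):
        """Optimize the device configuration to match the target contact."""
        def cost(config):
            err = contact_on_device(config, points) - target_depths
            return np.sum(err ** 2)             # least-squares mismatch
        return minimize(cost, x0, method="Nelder-Mead").x

    points = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])  # skin samples
    target = np.array([0.5, 0.7, 0.9])  # contact depths from the simulation
    print(tactile_rendering(target, points))    # approx [0.5, 0.2]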

    Hybrid optical and magnetic manipulation of microrobots

    Microrobotic systems have the potential to provide precise manipulation at the cellular level for diagnostics, drug delivery and surgical interventions. These systems range from tethered to untethered microrobots, with sizes from below a micrometer to a few microns. However, their main disadvantage is that they do not have the same capabilities in terms of degrees of freedom, sensing and control as macroscale robotic systems. In particular, they lack on-board sensing for pose or force feedback, their control methods and interfaces for automated or manual user control are limited, and their geometry has few degrees of freedom, making three-dimensional manipulation more challenging. This PhD project develops a micromanipulation framework that can be used for single-cell analysis using the Optical Tweezers, as well as a combination of optical trapping and magnetic actuation for reconfigurable microassembly. The focus is on untethered microrobots with sizes up to a few tens of microns that can be used in enclosed environments for ex vivo and in vitro medical applications. The work presented investigates the following aspects of microrobots for single-cell analysis: i) the microfabrication procedure and design considerations that are taken into account in order to fabricate components for three-dimensional micromanipulation and microassembly, ii) vision-based methods to provide 6-degree-of-freedom position and orientation feedback, which is essential for closed-loop control, iii) manual and shared-control manipulation methodologies that take into account the user input for multiple-microrobot or three-dimensional microstructure manipulation, and iv) a methodology for reconfigurable microassembly combining the Optical Tweezers with magnetic actuation into a hybrid method of actuation for microassembly.
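
    Items ii) and iii) hinge on closing the loop between vision-based pose estimates and the actuation. As a rough illustration of that loop, the Python sketch below runs a proportional controller on a 6-degree-of-freedom pose error; the plant model and gain are invented for the example and stand in for the optical or magnetic actuation.

    import numpy as np

    def control_step(pose_measured, pose_target, kp=0.5):
        """One proportional update from vision-based pose feedback."""
        error = pose_target - pose_measured   # 6-DOF error (x, y, z, r, p, y)
        return kp * error                     # actuation command

    pose = np.zeros(6)                        # microrobot starts at origin
    target = np.array([5.0, 2.0, 0.0, 0.0, 0.0, 0.1])
    for _ in range(20):
        command = control_step(pose, target)
        pose = pose + command                 # toy plant: command moves pose
    print(np.round(pose, 3))                  # converges toward the target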

    Exploring the Use of Audio-Visual Feedback within 3D Virtual Environments to Provide Complex Sensory Cues for Scenario-Based Learning

    The continuous quest for ever-increasing fidelity in 3D virtual worlds is running parallel to the emergence and adoption of low-cost technologies to implement such environments. In education and training, complex simulations can now be implemented on standard desktop technologies. However, such tools lack the means to represent multisensory data beyond audio-visual feedback. This paper reports on a study that involved the design, development and implementation of a 3D learning environment for underground mine evacuation. The requirements of the environment are discussed in terms of the sensory information that needs to be conveyed, and techniques are described to achieve this using multiple modes of representation, appropriate levels of abstraction and synesthesia to make up for the lack of tactile and olfactory sensory cues. The study found that audio-visual cues that used such techniques were effective in communicating complex sensory information for novice miners.
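
    As an illustration of the substitution technique described above, the Python sketch below remaps a quantity with no native desktop display channel (here, a hypothetical gas concentration in the mine) onto visual and auditory cues. The thresholds and mappings are invented for the example, not taken from the study.

    def gas_to_cues(concentration):
        """Map a 0..1 gas level to a screen-tint alpha and an audio cue rate."""
        tint_alpha = min(1.0, concentration * 0.8)   # visual: haze overlay
        beep_rate_hz = 0.0 if concentration < 0.2 else concentration * 4.0
        return {"tint_alpha": tint_alpha, "beep_rate_hz": beep_rate_hz}

    print(gas_to_cues(0.1))   # below threshold: faint haze, no audio warning
    print(gas_to_cues(0.75))  # strong haze plus a 3 Hz warning beep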

    Ambient Intelligence for Next-Generation AR

    Next-generation augmented reality (AR) promises a high degree of context-awareness: a detailed knowledge of the environmental, user, social and system conditions in which an AR experience takes place. This will facilitate both the closer integration of the real and virtual worlds, and the provision of context-specific content or adaptations. However, environmental awareness in particular is challenging to achieve using AR devices alone; not only is these mobile devices' view of an environment spatially and temporally limited, but the data obtained by onboard sensors is frequently inaccurate and incomplete. This, combined with the fact that many aspects of core AR functionality and user experiences are impacted by properties of the real environment, motivates the use of ambient IoT devices (wireless sensors and actuators placed in the surrounding environment) for the measurement and optimization of environment properties. In this book chapter, we categorize and examine the wide variety of ways in which these IoT sensors and actuators can support or enhance AR experiences, including quantitative insights and proof-of-concept systems that will inform the development of future solutions. We outline the challenges and opportunities associated with several important research directions which must be addressed to realize the full potential of next-generation AR.
    Comment: This is a preprint of a book chapter which will appear in the Springer Handbook of the Metaverse.
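
    As a rough sketch of this sensing-and-adaptation pattern, the Python below lets ambient IoT readings drive AR rendering decisions. The sensor fields and the adaptation rules are hypothetical stand-ins, not the chapter's actual systems.

    from dataclasses import dataclass

    @dataclass
    class AmbientReading:
        lux: float       # room illuminance from a fixed IoT light sensor
        occupancy: int   # people detected by an ambient presence sensor

    def adapt_ar_content(reading: AmbientReading) -> dict:
        """Choose rendering parameters from ambient context rather than
        from the AR device's own limited onboard sensing."""
        brightness = 0.3 if reading.lux < 50 else 1.0  # dim content when dark
        social_mode = reading.occupancy > 1            # hide private overlays
        return {"content_brightness": brightness, "hide_private": social_mode}

    print(adapt_ar_content(AmbientReading(lux=20.0, occupancy=3)))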

    Design and Evaluation of Neurosurgical Training Simulator

    Surgical simulators are becoming more important in surgical training. Consumer smartphone technology has improved to allow the deployment of VR applications and is now being targeted for medical training simulators. A surgical simulator has been designed using a smartphone, Google Cardboard 3D glasses, and the Leap Motion (LM) hand controller. Two expert and 16 novice users were tasked with completing the same pointing tasks using both the LM and the medical simulator NeuroTouch. The novice users had an accuracy of 0.2717 bits (SD 0.3899) and the experts had an accuracy of 0.0925 bits (SD 0.1210) while using the NeuroTouch. Novices and experts improved their accuracy to 0.3585 bits (SD 0.4474) and 0.4581 bits (SD 0.3501) while using the LM. There were some tracking problems with the AR display and LM. Users were intrigued by the AR display, and most preferred the LM, as they found it to have better usability.
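
    Accuracy reported in bits suggests a Fitts'-law style measure of pointing performance. The Python sketch below shows the standard Shannon formulation of the index of difficulty and throughput; whether the study used exactly this computation is an assumption.

    import math

    def index_of_difficulty(distance, width):
        """Fitts' index of difficulty in bits for one pointing task."""
        return math.log2(distance / width + 1)

    def throughput(distance, width, movement_time_s):
        """Bits per second for one completed pointing movement."""
        return index_of_difficulty(distance, width) / movement_time_s

    print(index_of_difficulty(100.0, 20.0))  # ~2.58 bits
    print(throughput(100.0, 20.0, 1.2))      # ~2.15 bits/s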

    Doctor of Philosophy

    Virtual reality is becoming a common technology with applications in fields such as medical training, product development, and entertainment. Providing haptic (sense of touch) information along with visual and audio information can create an immersive vi…

    A Person-Centric Design Framework for At-Home Motor Learning in Serious Games

    In motor learning, real-time multi-modal feedback is a critical element in guided training. Serious games have been introduced as a platform for at-home motor training due to their highly interactive and multi-modal nature. This dissertation explores the design of a multimodal environment for at-home training in which an autonomous system observes and guides the user in the place of a live trainer, providing real-time assessment, feedback and difficulty adaptation as the subject masters a motor skill. After an in-depth review of the latest solutions in this field, this dissertation proposes a person-centric approach to the design of this environment, in contrast to the standard techniques implemented in related work, to address many of the limitations of those approaches. The unique advantages and restrictions of this approach are presented in the form of a case study in which a system entitled the "Autonomous Training Assistant", consisting of both hardware and software for guided at-home motor learning, is designed and adapted for a specific individual and trainer. In this work, the design of an autonomous motor learning environment is approached from three areas: motor assessment, multimodal feedback, and serious game design. For motor assessment, a three-dimensional assessment framework is proposed which comprises two spatial (posture, progression) domains and one temporal (pacing) domain of real-time motor assessment. For multimodal feedback, a rod-shaped device called the "Intelligent Stick" is combined with an audio-visual interface to provide feedback to the subject in three domains (audio, visual, haptic). Feedback domains are mapped to modalities, and feedback is provided whenever the user's performance deviates from the ideal performance level by more than an adaptive threshold. Approaches for multi-modal integration and feedback fading are discussed. Finally, a novel approach for stealth adaptation in serious game design is presented. This approach allows serious games to incorporate motor tasks in a more natural way, facilitating self-assessment by the subject. Three different stealth adaptation approaches are presented and evaluated using the flow-state ratio metric. The dissertation concludes with directions for future work in the integration of stealth adaptation techniques across the field of exergames.
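
    As a minimal illustration of the adaptive-threshold feedback rule described above, the Python sketch below fires feedback when performance deviates from the ideal by more than a threshold, and tightens that threshold while the user stays within tolerance (a simple form of feedback fading). The update rule is illustrative, not the dissertation's exact scheme.

    def feedback_step(performance, ideal, threshold, shrink=0.95, floor=0.05):
        """Return (fire_feedback, new_threshold) for one assessment cycle."""
        deviation = abs(performance - ideal)
        fire = deviation > threshold
        # Fading: while the user stays within tolerance, tighten the threshold.
        new_threshold = threshold if fire else max(floor, threshold * shrink)
        return fire, new_threshold

    threshold = 0.5
    for perf in [0.8, 0.95, 0.97, 0.4, 0.99]:  # performance vs. ideal = 1.0
        fire, threshold = feedback_step(perf, 1.0, threshold)
        print(fire, round(threshold, 3))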