20 research outputs found

    Contributing to VRPN with a new server for haptic devices (ext. version)

    This article is an extended version of the poster paper: Cuevas-Rodriguez, M., Gonzalez-Toledo, D., Molina-Tanco, L., Reyes-Lecuona, A., 2015, November. “Contributing to VRPN with a new server for haptic devices”. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM. http://dx.doi.org/10.1145/2821592.2821639 VRPN is a middleware for accessing Virtual Reality peripherals. The standard VRPN distribution supports Geomagic® (formerly Phantom) haptic devices through the now superseded GHOST library. This paper presents the VRPN OpenHaptics Server, a contribution to the VRPN library that fully reimplements VRPN support for Geomagic haptic devices. The implementation is based on the OpenHaptics v3.0 HLAPI layer, which supports all Geomagic haptic devices. We present the architecture of the contributed server, a detailed description of the offered API, and an analysis of its performance in a set of example scenarios. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
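    The contributed server's own API is not reproduced in this abstract, but a minimal VRPN client sketch shows how an application typically consumes reports that such a haptic-device server could publish. The device name "Phantom0@localhost" and the use of the generic tracker/button report types are assumptions for illustration, not the paper's documented interface.

```cpp
// Minimal VRPN client sketch: subscribe to pose and button reports from a
// haptic-device server. "Phantom0@localhost" is an assumed device name.
#include <cstdio>
#include <vrpn_Tracker.h>
#include <vrpn_Button.h>
#include <vrpn_Shared.h>

// Called whenever the server sends a new stylus pose (position + orientation).
void VRPN_CALLBACK handle_pose(void*, const vrpn_TRACKERCB t)
{
    std::printf("sensor %d pos (%.3f, %.3f, %.3f)\n",
                static_cast<int>(t.sensor), t.pos[0], t.pos[1], t.pos[2]);
}

// Called whenever a stylus button changes state.
void VRPN_CALLBACK handle_button(void*, const vrpn_BUTTONCB b)
{
    std::printf("button %d %s\n", static_cast<int>(b.button),
                b.state ? "pressed" : "released");
}

int main()
{
    vrpn_Tracker_Remote tracker("Phantom0@localhost");
    vrpn_Button_Remote  buttons("Phantom0@localhost");
    tracker.register_change_handler(nullptr, handle_pose);
    buttons.register_change_handler(nullptr, handle_button);

    for (;;) {              // pump the VRPN message loop
        tracker.mainloop();
        buttons.mainloop();
        vrpn_SleepMsecs(1); // avoid busy-waiting
    }
}
```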

    Analysis domain model for shared virtual environments

    The field of shared virtual environments, which also encompasses online games and social 3D environments, has a system landscape consisting of multiple solutions that share great functional overlap. However, there is little system interoperability between the different solutions. A shared virtual environment has an associated problem domain that is highly complex, raising difficult challenges for the development process, starting with the architectural design of the underlying system. This paper has two main contributions. The first is a broad domain analysis of shared virtual environments, which enables developers to have a better understanding of the whole rather than the part(s). The second is a reference domain model for discussing and describing solutions - the Analysis Domain Model

    An Augmented Interaction Strategy For Designing Human-Machine Interfaces For Hydraulic Excavators

    Lack of adequate information feedback and work visibility, and fatigue due to repetition, have been identified as the major usability gaps in the human-machine interface (HMI) design of modern hydraulic excavators; these gaps subject operators to undue mental and physical workload, resulting in poor performance. To address them, this work proposed an innovative interaction strategy, termed “augmented interaction”, for enhancing the usability of the hydraulic excavator. Augmented interaction involves the embodiment of heads-up display and coordinated control schemes into an efficient, effective and safe HMI. Augmented interaction was demonstrated using a framework consisting of three phases: Design, Implementation/Visualization, and Evaluation (D.IV.E). Guided by this framework, two alternative HMI design concepts (Design A: featuring heads-up display and coordinated control; and Design B: featuring heads-up display and joystick controls), in addition to the existing HMI design (Design C: featuring monitor display and joystick controls), were prototyped. A mixed reality seating buck simulator, named the Hydraulic Excavator Augmented Reality Simulator (H.E.A.R.S), was used to implement the designs and simulate a work environment along with a rock excavation task scenario. A usability evaluation was conducted with twenty participants to characterize the impact of the new HMI types using quantitative (task completion time, TCT; and operating error, OER) and qualitative (subjective workload and user preference) metrics. The results indicated that participants had a shorter TCT with Design A. For OER, there was a lower error probability due to collisions (PER1) with Design A, and a lower error probability due to misses (PER2) with Design B. The subjective measures showed a lower overall workload and a high preference for Design B. It was concluded that augmented interaction provides a viable solution for enhancing the usability of the HMI of a hydraulic excavator
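    As a rough illustration of how the reported quantitative metrics relate to logged trial data, the sketch below computes a mean TCT and simple per-trial error probabilities. The trial record layout, the denominator used for PER1/PER2, and all numeric values are assumptions; this is not the study's actual analysis code.

```cpp
// Illustrative metric computation over logged trials (assumed data layout).
#include <cstdio>
#include <vector>

struct Trial {
    double tct_seconds;  // task completion time (TCT)
    int collisions;      // errors counted toward PER1
    int misses;          // errors counted toward PER2
    int attempts;        // assumed denominator: excavation cycles attempted
};

int main()
{
    std::vector<Trial> trials = { {182.0, 1, 0, 12}, {171.5, 0, 2, 12} };  // made-up values
    double tct = 0.0, per1 = 0.0, per2 = 0.0;
    for (const Trial& t : trials) {
        tct  += t.tct_seconds;
        per1 += static_cast<double>(t.collisions) / t.attempts;
        per2 += static_cast<double>(t.misses) / t.attempts;
    }
    const double n = static_cast<double>(trials.size());
    std::printf("mean TCT %.1f s, PER1 %.3f, PER2 %.3f\n", tct / n, per1 / n, per2 / n);
}
```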

    Using brain-computer interaction and multimodal virtual-reality for augmenting stroke neurorehabilitation

    Every year millions of people suffer a stroke, resulting in initial paralysis, slow motor recovery and chronic conditions that require continuous rehabilitation and therapy. The increasing socio-economic and psychological impact of stroke makes it necessary to find new approaches to minimize its sequelae, as well as novel tools for effective, low cost and personalized rehabilitation. The integration of current ICT approaches and Virtual Reality (VR) training (based on exercise therapies) has shown significant improvements. Moreover, recent studies have shown that task performance is improved through mental practice and neurofeedback. To date, detailed information on which neurofeedback strategies lead to successful functional recovery is not available, and very little is known about how to optimally utilize neurofeedback paradigms in stroke rehabilitation. Based on these current limitations, the target of this project is to investigate and develop a novel upper-limb rehabilitation system using novel ICT technologies, including Brain-Computer Interfaces (BCIs) and VR systems. Here, through a set of studies, we illustrate the design of the RehabNet framework and its focus on integrative motor and cognitive therapy based on VR scenarios. Moreover, we broadened the inclusion criteria for low-mobility patients through the development of neurofeedback tools built on Brain-Computer Interfaces, while investigating the effects of a brain-to-VR interaction
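    To make the neurofeedback idea concrete, the sketch below shows one generic way an EEG band-power estimate could be mapped to a feedback value that a VR scene uses, for example to modulate the movement of a virtual arm. It is not the RehabNet implementation; the power estimate, the baseline normalization and the mapping are assumptions for illustration.

```cpp
// Generic neurofeedback sketch: band power of a filtered EEG window mapped to
// a [0,1] feedback gain relative to a per-user baseline.
#include <vector>

// Mean squared amplitude of a band-pass filtered window (simple power proxy).
// A real system would use a proper filter or an FFT-based estimate.
double band_power(const std::vector<double>& filtered_window)
{
    double p = 0.0;
    for (double x : filtered_window) p += x * x;
    return filtered_window.empty() ? 0.0 : p / filtered_window.size();
}

// Map power relative to a baseline into a [0,1] gain driving the VR feedback,
// e.g. scaling the speed of a virtual arm.
double feedback_gain(double power, double baseline)
{
    if (baseline <= 0.0) return 0.0;           // no calibration yet
    double g = (power - baseline) / baseline;  // relative change from baseline
    if (g < 0.0) g = 0.0;
    if (g > 1.0) g = 1.0;
    return g;
}
```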

    Presence 2005: the eighth annual international workshop on presence, 21-23 September, 2005 University College London (Conference proceedings)

    OVERVIEW (taken from the CALL FOR PAPERS) Academics and practitioners with an interest in the concept of (tele)presence are invited to submit their work for presentation at PRESENCE 2005 at University College London in London, England, September 21-23, 2005. The eighth in a series of highly successful international workshops, PRESENCE 2005 will provide an open discussion forum to share ideas regarding concepts and theories, measurement techniques, technology, and applications related to presence, the psychological state or subjective perception in which a person fails to accurately and completely acknowledge the role of technology in an experience, including the sense of 'being there' experienced by users of advanced media such as virtual reality. The concept of presence in virtual environments has been around for at least 15 years, and the earlier idea of telepresence at least since Minsky's seminal paper in 1980. Recently there has been a burst of funded research activity in this area for the first time with the European FET Presence Research initiative. What do we really know about presence and its determinants? How can presence be successfully delivered with today's technology? This conference invites papers that are based on empirical results from studies of presence and related issues and/or which contribute to the technology for the delivery of presence. Papers that make substantial advances in theoretical understanding of presence are also welcome. The interest is not solely in virtual environments but in mixed reality environments. Submissions will be reviewed more rigorously than in previous conferences. High quality papers are therefore sought which make substantial contributions to the field. Approximately 20 papers will be selected for two successive special issues for the journal Presence: Teleoperators and Virtual Environments. PRESENCE 2005 takes place in London and is hosted by University College London. The conference is organized by ISPR, the International Society for Presence Research and is supported by the European Commission's FET Presence Research Initiative through the Presencia and IST OMNIPRES projects and by University College London

    Multi-touch interaction with stereoscopically rendered 3D objects

    While touch technology has proven its usability for 2D interaction and has already become a standard input modality for many devices, the challenges of exploiting it with stereoscopically rendered content have barely been studied. In this thesis we exploit different hardware- and perception-based techniques to allow users to touch stereoscopically displayed objects when the input is constrained to a 2D surface. To this end, we analyze the relation between the 3D positions of stereoscopically displayed objects and the on-surface touch points where users touch the interactive surface, and we have conducted a series of experiments to investigate the user's ability to discriminate small induced shifts while performing a touch gesture. The results were then used to design practical interaction techniques, which are suitable for numerous application scenarios
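    The relation between a stereoscopically displayed point and a plausible on-surface touch location can be illustrated with simple ray geometry. The sketch below is not the thesis' method; the eye positions, the screen-plane convention and the choice of the midpoint between the two per-eye projections are assumptions.

```cpp
// For each eye, intersect the eye-to-object ray with the display plane z = 0;
// the midpoint of the two intersections is one plausible touch target.
#include <cstdio>

struct Vec3 { double x, y, z; };

// Intersect the ray from 'eye' through 'p' with the plane z = 0 (the display).
Vec3 project_to_screen(const Vec3& eye, const Vec3& p)
{
    const double t = eye.z / (eye.z - p.z);  // parameter where z reaches 0
    return { eye.x + t * (p.x - eye.x), eye.y + t * (p.y - eye.y), 0.0 };
}

int main()
{
    const Vec3 left_eye  {-0.032, 0.0, 0.6};  // ~64 mm eye separation, 60 cm away
    const Vec3 right_eye { 0.032, 0.0, 0.6};
    const Vec3 object    { 0.05, 0.02, 0.10}; // 10 cm in front of the display

    const Vec3 l = project_to_screen(left_eye, object);
    const Vec3 r = project_to_screen(right_eye, object);
    std::printf("approximate touch target (%.3f, %.3f) m on the surface\n",
                (l.x + r.x) / 2.0, (l.y + r.y) / 2.0);
}
```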

    Sensorimotor experience in virtual environments

    The goal of rehabilitation is to reduce impairment and provide functional improvements resulting in quality participation in activities of life. Plasticity and motor learning principles provide inspiration for therapeutic interventions, including movement repetition in a virtual reality environment. The objective of this research work was to investigate function-specific measurements (kinematic, behavioral) and neural correlates of the motor experience of hand gesture activities in virtual environments (VE) that stimulate sensory experience, using a hand agent model. The fMRI-compatible Virtual Environment Sign Language Instruction (VESLI) system was designed and developed to provide a number of rehabilitation and measurement features, to identify optimal learning conditions for individuals, and to track changes in performance over time. Therapies and measurements incorporated into VESLI target and track specific impairments underlying dysfunction. The goal of improved measurement is to develop targeted interventions embedded in higher-level tasks and to accurately track specific gains, in order to understand the responses to treatment and the impact those responses may have upon higher-level function such as participation in life. To further clarify the biological model of motor experiences and to understand the added value and role of virtual sensory stimulation and feedback, which includes seeing one's own hand movement, functional brain mapping was conducted with simultaneous kinematic analysis in healthy controls and in stroke subjects. It is believed that through the understanding of these neural activations, rehabilitation strategies taking advantage of the principles of plasticity and motor learning will become possible. The present research assessed successful practice conditions promoting gesture learning behavior in the individual. For the first time, functional imaging experiments mapped neural correlates of human interactions with complex virtual reality hand avatars moving synchronously with the subject's own hands. Findings indicate that healthy control subjects learned intransitive gestures in virtual environments using first- and third-person avatars, picture and text definitions, and while viewing visual feedback of their own hands, virtual hand avatars, and, in the control condition, hidden hands. Moreover, exercise in a virtual environment with a first-person avatar of hands recruited insular cortex activation over time, which might indicate that this activation is associated with a sense of agency. Sensory augmentation in virtual environments modulated activations of important brain regions associated with action observation and action execution. The quality of the visual feedback was modulated, and brain areas were identified where the amount of brain activation was positively or negatively correlated with the visual feedback. When subjects moved the right hand and saw an unexpected response (the left virtual avatar hand moved), neural activation increased in the motor cortex ipsilateral to the moving hand. This visual modulation might provide a helpful rehabilitation therapy for people with paralysis of a limb through visual augmentation of skills. A model was developed to study the effects of sensorimotor experience in virtual environments, together with findings on the effect of that experience upon brain activity and related behavioral measures. The research model represents a significant contribution to neuroscience research and translational engineering practice. A model of neural activations correlated with kinematics and behavior can profoundly influence the delivery of rehabilitative services in the coming years by giving clinicians a framework for engaging patients in a sensorimotor environment that can optimally facilitate neural reorganization

    A multimodal framework for interactive sonification and sound-based communication


    PolyVR - A Virtual Reality Authoring Framework for Engineering Applications

    Virtual reality is a fantastic place, free of constraints and full of possibilities. For engineers it is the perfect place to experience science and technology, yet the infrastructure to make virtual reality accessible, especially for engineering applications, is missing. This thesis describes the development of a software environment that makes it easier to develop virtual reality applications and to deploy them on immersive hardware setups. Virtual Engineering, the use of virtual environments for design reviews during the product development process, is used extremely rarely, particularly by small and medium-sized enterprises. The main reasons are no longer the high cost of professional virtual reality hardware, but the lack of automated virtualization workflows and the high cost of maintenance and software development. An important aspect of automating virtualization is the integration of intelligence into artificial environments. Ontologies are the foundation of human understanding and intelligence: categorizing our universe into concepts, properties and rules is a fundamental step in processes such as observation, learning or knowledge. This work aims to take a step towards a broader use of virtual reality applications in all areas of science and engineering. The approach is to build a virtual reality authoring tool, a software package that simplifies the creation of virtual worlds and their deployment on advanced immersive hardware environments such as distributed visualization systems. A further goal of this work is to enable the intuitive authoring of semantic elements in virtual worlds, which should revolutionize the creation of virtual content and the possibilities for interaction. Intelligent immersive environments are the key to supporting learning and training in virtual worlds, to planning and monitoring processes, and to paving the way for entirely new interaction paradigms
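    As a purely hypothetical illustration of the idea of attaching ontology concepts to scene content (this is not PolyVR's actual API; all type and member names are invented), the sketch below annotates scene objects with concepts and properties and queries the scene by meaning rather than by node name.

```cpp
// Hypothetical semantic annotation of scene-graph nodes.
#include <map>
#include <string>
#include <vector>

struct Concept {                                    // e.g. "Machine", "Workpiece"
    std::string name;
    std::map<std::string, std::string> properties;  // e.g. {"maxLoad", "500kg"}
};

struct SceneObject {
    std::string node_name;                          // name in the scene graph
    std::vector<Concept> semantics;                 // concepts assigned to this node
};

// Return all scene objects annotated with the given concept name.
std::vector<const SceneObject*> query_by_concept(const std::vector<SceneObject>& scene,
                                                 const std::string& concept)
{
    std::vector<const SceneObject*> hits;
    for (const SceneObject& o : scene)
        for (const Concept& c : o.semantics)
            if (c.name == concept) { hits.push_back(&o); break; }
    return hits;
}
```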