5,559 research outputs found

    Barehand Mode Switching in Touch and Mid-Air Interfaces

    Raskin defines a mode as a distinct setting within an interface where the same user input produces results different from those it would produce in other settings. Most interfaces have multiple modes in which input is mapped to different actions, and mode-switching is the transition from one mode to another. In touch interfaces, the current mode can change how a single touch is interpreted: for example, it could draw a line, pan the canvas, select a shape, or enter a command. In Virtual Reality (VR), a hand-gesture-based 3D modelling application may have different modes for object creation, selection, and transformation; depending on the mode, the movement of the hand is interpreted differently. One of the crucial factors determining the effectiveness of an interface is user productivity, and the mode-switching time of different input techniques, whether in a touch interface or a mid-air interface, affects that productivity. Moreover, when touch and mid-air interfaces such as VR are combined, making informed decisions about mode assignment becomes even more complicated. This thesis provides an empirical investigation to characterize the mode-switching phenomenon in barehand touch-based and mid-air interfaces. It explores the potential of using these input spaces together for a productivity application in VR, and it concludes with a step towards defining and evaluating the multi-faceted mode concept, its characteristics, and its utility when designing user interfaces more generally.

    Sensorimotor experience in virtual environments

    The goal of rehabilitation is to reduce impairment and provide functional improvements resulting in quality participation in activities of life. Plasticity and motor learning principles provide inspiration for therapeutic interventions, including movement repetition in a virtual reality environment. The objective of this research was to investigate function-specific measurements (kinematic, behavioral) and neural correlates of motor experience of hand gesture activities in virtual environments (VE) stimulating sensory experience, using a hand agent model. The fMRI-compatible Virtual Environment Sign Language Instruction (VESLI) System was designed and developed to provide a number of rehabilitation and measurement features, to identify optimal learning conditions for individuals, and to track changes in performance over time. Therapies and measurements incorporated into VESLI target and track specific impairments underlying dysfunction. The goal of improved measurement is to develop targeted interventions embedded in higher-level tasks and to accurately track specific gains, in order to understand responses to treatment and the impact a response may have upon higher-level function such as participation in life. To further clarify the biological model of motor experience, and to understand the added value and role of virtual sensory stimulation and feedback, which includes seeing one's own hand movement, functional brain mapping was conducted with simultaneous kinematic analysis in healthy controls and in stroke subjects. It is believed that, through an understanding of these neural activations, rehabilitation strategies exploiting the principles of plasticity and motor learning will become possible. The present research assessed practice conditions that successfully promote gesture learning in the individual.
For the first time, functional imaging experiments mapped neural correlates of human interactions with complex virtual reality hand avatars moving synchronously with the subject's own hands. Findings indicate that healthy control subjects learned intransitive gestures in virtual environments using first- and third-person avatars, picture and text definitions, and while viewing visual feedback of their own hands, virtual hand avatars, or, in the control condition, hidden hands. Moreover, exercise in a virtual environment with a first-person avatar of the hands recruited insular cortex activation over time, which might indicate that this activation is associated with a sense of agency. Sensory augmentation in virtual environments modulated activations of important brain regions associated with action observation and action execution. The quality of the visual feedback was modulated, and brain areas were identified where the amount of brain activation was positively or negatively correlated with the visual feedback. When subjects moved the right hand and saw an unexpected response (the left virtual avatar hand moved), neural activation increased in the motor cortex ipsilateral to the moving hand. This visual modulation might provide a helpful rehabilitation therapy for people with paralysis of a limb through visual augmentation of skills. A model was developed to study the effects of sensorimotor experience in virtual environments, yielding findings on the effect of sensorimotor experience in virtual environments upon brain activity and related behavioral measures. The research model represents a significant contribution to neuroscience research and translational engineering practice. A model of neural activations correlated with kinematics and behavior can profoundly influence the delivery of rehabilitative services in the coming years by giving clinicians a framework for engaging patients in a sensorimotor environment that can optimally facilitate neural reorganization.

    An ontology-based approach towards coupling task and path planning for the simulation of manipulation tasks

    This work deals with the simulation and validation of complex manipulation tasks under strong geometric constraints in virtual environments. The targeted applications relate to the Industry 4.0 framework; as up-to-date products are more and more integrated and economic competition increases, industrial companies express the need to validate, from the design stage on, not only the static CAD models of their products but also the tasks (e.g., assembly or maintenance) related to their Product Lifecycle Management (PLM). The scientific community has looked at this issue from two points of view:
- Task planning decomposes a manipulation task into a sequence of primitive actions (i.e., a task plan).
- Path planning computes collision-free trajectories, notably for the manipulated objects. It traditionally uses purely geometric data, which leads to classical limitations (possibly high computational processing times, low relevance of the proposed trajectory with respect to the task to be performed, or failure); recent works have shown the interest of using data at a higher abstraction level.
Joint task and path planning approaches found in the literature usually perform a classical task planning step and then check the feasibility of the path planning requests associated with the primitive actions of this task plan. The link between task and path planning has to be improved, notably because of the lack of loopback between the path planning level and the task planning level:
- The path planning information used to question the task plan is usually limited to motion feasibility, where richer information, such as the relevance or complexity of the proposed path, would be needed.
- Path planning queries traditionally use purely geometric data and/or "blind" path planning methods (e.g., RRT), and no task-related information is used at the path planning level.
Our work focuses on using task-level information at the path planning level.
The path planning algorithm considered is RRT; we chose such a probabilistic algorithm because we consider path planning for the simulation and validation of complex tasks under strong geometric constraints. We propose an ontology-based approach that uses task-level information to specify path planning queries for the primitive actions of a task plan. First, we propose an ontology to conceptualize the knowledge about the 3D environment in which the simulated task takes place. This environment is considered a closed part of 3D Cartesian space cluttered with mobile and fixed obstacles (treated as rigid bodies); it is represented by a digital model relying on a multilayer architecture involving semantic, topologic, and geometric data. The originality of the proposed ontology lies in the fact that it conceptualizes heterogeneous knowledge about both the obstacle and free-space models. Second, we exploit this ontology to automatically generate a path planning query associated with each primitive action of a task plan. Through a reasoning process involving the primitive actions instantiated in the ontology, we are able to infer the start and goal configurations, as well as task-related geometric constraints. Finally, a multi-level path planner is called to generate the corresponding trajectory. The contributions of this work have been validated by full simulation of several manipulation tasks under strong geometric constraints. The results obtained demonstrate that using task-related information allows better control over the RRT path planning algorithm used to check motion feasibility for the primitive actions of a task plan, leading to lower computational times and more relevant trajectories for primitive actions.
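The RRT planner mentioned above grows a tree of collision-free configurations by repeated sampling and steering. As a minimal illustration of that mechanism (not the thesis's multi-level planner), the following sketch plans a 2D path through a wall with a gap; the environment, step size, and goal bias are illustrative assumptions:

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, max_iters=20000, goal_tol=0.5, seed=0):
    """Minimal 2D RRT: grow a tree from start toward random samples,
    stop when a node lands within goal_tol of the goal."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        # Sample a point, biased toward the goal 10% of the time.
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 10), rng.uniform(0, 10))
        # Find the nearest existing tree node.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d == 0:
            continue
        # Steer one step from the nearest node toward the sample.
        new = (nx + step * (sample[0] - nx) / d, ny + step * (sample[1] - ny) / d)
        if not is_free(new):
            continue  # collision: discard this extension
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) <= goal_tol:
            # Walk parents back to the root to recover the path.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None  # no path found within the iteration budget

# Toy workspace: a 10x10 square with a wall at x ~ 5, except a gap at y in [4, 6].
def is_free(p):
    x, y = p
    if not (0 <= x <= 10 and 0 <= y <= 10):
        return False
    return not (4.5 <= x <= 5.5 and not (4 <= y <= 6))

path = rrt((1, 1), (9, 9), is_free)
```

The "blind" character criticized above is visible here: sampling is uniform over the workspace, so nothing about the task shapes where the tree grows; the thesis's contribution is precisely to inject task-level information (start/goal configurations, task-related constraints) into such queries.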

    BREP Identification During Voxel-Based Collision Detection for Haptic Manual Assembly

    This paper presents a novel method to tie a geometric boundary representation (BREP) to voxel-based collision detection for use in haptic manual assembly simulation. Virtual reality, in particular haptics, has been applied with promising results to improve preliminary product design, assembly prototyping, and maintenance operations. However, current methodologies do not provide support for low-clearance assembly tasks, reducing the applicability of haptics to a small subset of potential situations. This paper discusses a new approach that combines highly accurate CAD geometry (boundary representation) with voxel models to support a hybrid method involving both geometric constraint enforcement and voxel-based collision detection to provide stable haptic force feedback. With the methods presented here, BREP data can be accessed during voxel-based collision detection. This information can be used for constraint recognition and can lead to constraint guidance during the assembly process.
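The core idea of tying BREP data to voxels, tagging each occupied voxel with a back-reference to the BREP element that generated it so that a voxel-voxel overlap immediately names the surfaces in contact, can be sketched as follows. This is a hypothetical toy (axis-aligned boxes on an integer grid, a single face id per part), not the paper's implementation:

```python
def voxelize_box(lo, hi, face_id, pitch=1.0):
    """Fill the axis-aligned box [lo, hi] with voxels, each tagged with the
    id of the (hypothetical) BREP face that produced it."""
    cells = {}
    x0, y0, z0 = (int(c // pitch) for c in lo)
    x1, y1, z1 = (int(c // pitch) for c in hi)
    for x in range(x0, x1 + 1):
        for y in range(y0, y1 + 1):
            for z in range(z0, z1 + 1):
                cells[(x, y, z)] = face_id
    return cells

def colliding_faces(part_a, part_b):
    """Intersect the two voxel maps; instead of a bare yes/no collision,
    report which BREP face-id pairs are in contact."""
    return {(part_a[c], part_b[c]) for c in part_a.keys() & part_b.keys()}

# Two overlapping parts: the overlap reports the face pair, not just "collision".
a = voxelize_box((0, 0, 0), (4, 4, 4), face_id="A:top")
b = voxelize_box((3, 3, 3), (6, 6, 6), face_id="B:base")
contacts = colliding_faces(a, b)   # {('A:top', 'B:base')}
```

The payoff mirrored here is the one the abstract describes: because the collision test returns BREP identities rather than only occupancy, a downstream step can recognize mating constraints (e.g., face-on-face) and guide the low-clearance insertion instead of merely repelling the parts.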

    A virtual hand assessment system for efficient outcome measures of hand rehabilitation

    Previously held under moratorium from 1st December 2016 until 1st December 2021.
    Hand rehabilitation is an extremely complex and critical process in the medical rehabilitation field, mainly due to the highly articulated functionality of the hand. Recent research has focused on employing new technologies, such as robotics and system control, to improve the precision and efficiency of the standard clinical methods used in hand rehabilitation. However, the designs of these devices were either oriented toward a particular hand injury or heavily dependent on subjective assessment techniques to evaluate progress. These limitations reduce the efficiency of hand rehabilitation devices by providing less effective results for restoring the lost functionalities of dysfunctional hands. In this project, a novel technological solution and efficient hand assessment system is produced that can objectively measure the restoration outcome and dynamically evaluate its performance. The proposed system uses a data glove sensing device to measure the multiple ranges of motion of the hand joints, and a virtual reality system to provide an illustrative and safe visual assistance environment that can self-adjust to the subject's performance. The system implements an original finger performance measurement method for analysing the various hand functionalities. This is achieved by extracting multiple features of the hand digits' motions, such as speed, consistency of finger movements, and stability during hold positions. Furthermore, an advanced data glove calibration method was developed and implemented in order to accurately manipulate the virtual hand model and calculate the hand's kinematic movements in compliance with the biomechanical structure of the hand. The experimental studies were performed on a controlled group of 10 healthy subjects (aged 25 to 42 years).
The results showed intra-subject reliability between the trials (average cross-correlation ρ = 0.7) and inter-subject repeatability across the subjects' performance (p > 0.01 for the session with real objects, with a few departures in some of the virtual reality sessions). In addition, the finger performance values were found to be very efficient in detecting the multiple elements of the fingers' performance, including the load effect on the forearm. Moreover, the electromyography measurements in the virtual reality sessions showed high sensitivity in detecting the tremor effect (the mean power frequency difference on the right extensor digitorum muscle is 176 Hz). Also, the finger performance values for the virtual reality sessions have the same average distance as the real-life sessions (RSQ = 0.07). The system, besides offering an efficient and quantitative evaluation of hand performance, was proven compatible with different hand rehabilitation techniques, where it can outline the primarily affected parts in the hand dysfunction. It can also be easily adjusted to comply with the subject's specifications and clinical hand assessment procedures to autonomously detect the classification task events and analyse them with high reliability. The developed system is also adaptable to disciplines other than hand rehabilitation, such as ergonomic studies, hand robot control, brain-computer interfaces, and other fields involving hand control.
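The intra-subject reliability figure quoted above is an average cross-correlation between repeated trials. A minimal sketch of that computation at zero lag (Pearson's ρ), applied to two hypothetical finger flexion-angle traces rather than the system's actual data, might look like this:

```python
import math

def pearson(x, y):
    """Zero-lag normalized cross-correlation (Pearson's rho) of two
    equal-length trial traces; 1.0 means identical shape, 0 means unrelated."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

# Two hypothetical flexion-angle traces (degrees) from repeated trials
# of the same grasp gesture; close in shape, so rho should be near 1.
trial1 = [0, 10, 25, 40, 55, 60, 55, 40, 20, 5]
trial2 = [2, 12, 22, 38, 50, 62, 58, 42, 18, 8]
rho = pearson(trial1, trial2)
```

Averaging such per-trial-pair coefficients over a subject's sessions yields a single intra-subject reliability score of the kind reported (ρ = 0.7).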

    Virtual reality as an educational tool in interior architecture

    Ankara: The Department of Interior Architecture and Environmental Design and the Institute of Fine Arts, Bilkent University, 1997. Thesis (Master's) -- Bilkent University, 1997. Includes bibliographical references. This thesis discusses the use of virtual reality technology as an educational tool in interior architectural design. As a result of this discussion, it is proposed that virtual reality can be of use in aiding three-dimensional design and visualization, and may speed up the design process. It may also help get designers and students more involved in their design projects. Virtual reality can enhance the capacity of designers to design in three dimensions. The virtual reality environment used in designing should be capable of aiding both the design and the presentation process. The tradeoffs of the technology, newly emerging trends, and future directions in virtual reality are discussed. Aktaş, Orkun. M.S.

    Measuring user experience for virtual reality

    In recent years, Virtual Reality (VR) and 3D User Interfaces (3DUI) have seen a drastic increase in popularity, especially in terms of consumer-ready hardware and software. These technologies have the potential to create new experiences that combine the advantages of reality and virtuality. While the technology for input as well as output devices is market-ready, only a few solutions for everyday VR - online shopping, games, or movies - exist, and empirical knowledge about performance and user preferences is lacking. All this makes the development and design of human-centered user interfaces for VR a great challenge. This thesis investigates the evaluation and design of interactive VR experiences. We introduce the Virtual Reality User Experience (VRUX) model based on VR-specific external factors and evaluation metrics such as task performance and user preference. Based on our novel UX evaluation approach, we contribute by exploring the following directions: shopping in virtual environments, as well as text entry and menu control in the context of everyday VR. Along with this, we summarize our findings in design spaces and guidelines for choosing optimal interfaces and controls in VR.

    Tele-operation and Human-Robot Interactions

