
    Effect of weight perception on human performance in a haptic-enabled virtual assembly platform

    Virtual assembly platforms (VAPs) provide a means to interrogate product form, fit, and function, thereby shortening the design cycle, improving product manufacturability, and reducing assembly cost. VAPs lend themselves to training and can be used as offline programmable interfaces for planning and automation. Haptic devices are increasingly chosen as the mode of interaction for VAPs over conventional glove-based devices and 3D mice; the key benefit is the kinaesthetic feedback users receive while performing virtual assembly tasks in 2D/3D space, bringing the virtual world closer to the real one. The challenge in recent years, however, has been to understand and evaluate the added value of haptics. This paper reports on a haptic-enabled VAP with a view to questioning the awareness of the environment and associated assembly tasks. The objective is to evaluate and compare human performance during virtual and real-world assembly, and to identify conditions that may affect the performance of virtual assembly tasks. In particular, the effect of weight perception on virtual assembly tasks is investigated.
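
    As a rough illustration of how weight perception is typically rendered on a haptic device, the following minimal Python sketch computes the gravity force vector that a haptic servo loop would command when a virtual part is grasped. The function name, the y-up convention, and the attenuation factor are illustrative assumptions, not the platform described in the paper.

        import numpy as np

        G = np.array([0.0, -9.81, 0.0])  # gravitational acceleration (m/s^2), y-up convention (assumed)

        def gravity_force(mass_kg, scale=1.0):
            """Force (N) to render so a grasped virtual part feels heavy.

            `scale` < 1 attenuates the force to stay within device limits;
            varying it is one way to probe the effect of weight perception.
            """
            return scale * mass_kg * G

        # Example: a 0.5 kg part rendered at 80% of its physical weight
        print(gravity_force(0.5, scale=0.8))  # [ 0.    -3.924  0.   ]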

    Haptic Interaction with a Guide Robot in Zero Visibility

    Search and rescue operations are often undertaken in dark and noisy environments in which the rescue team must rely on haptic feedback for exploration and safe exit. However, little attention has been paid specifically to haptic sensitivity in such contexts, or to the possibility of enhancing communicational proficiency in the haptic mode as a life-preserving measure. The potential of robot swarms for search and rescue was shown by the Guardians project (EU, 2006-2010); however, the project also exposed the problem of human-robot interaction in smoky (zero-visibility) and noisy conditions. The REINS project (UK, 2011-2015) focused on human-robot interaction in such conditions. This research, carried out as part of the REINS project, investigates the haptic interaction of a person with a guide robot in zero visibility. The thesis first reflects upon real-world scenarios where people make use of the haptic sense to interact in zero visibility (such as interaction among firefighters and the symbiotic relationship between visually impaired people and guide dogs). In addition, it reflects on the sensitivity and trainability of the haptic sense as used for the interaction. The thesis presents an analysis and evaluation of the design of a physical interface (designed by the REINS project consortium) connecting the human and the robotic guide in poor visibility conditions. Finally, it lays a foundation for the design of test cases to evaluate human-robot haptic interaction, taking into consideration the two aspects of the interaction, namely locomotion guidance and environmental exploration.

    How touch and hearing influence visual processing in sensory substitution, synaesthesia and cross-modal correspondences

    Sensory substitution devices (SSDs) systematically turn visual dimensions into patterns of tactile or auditory stimulation. After training, a user of these devices learns to translate these audio or tactile sensations back into a mental visual picture. Most previous SSDs translate greyscale images using intuitive cross-sensory mappings to help users learn the devices; more recent SSDs, however, have started to incorporate additional colour dimensions such as saturation and hue. Chapter two examines how previous SSDs have translated the complexities of colour into hearing or touch. The chapter explores whether colour is useful for SSD users, how SSD and veridical colour perception differ, and how optimal cross-sensory mappings might be chosen. After long-term training, some blind users of SSDs report visual sensations from tactile or auditory stimulation. A related phenomenon is synaesthesia, a condition where stimulation of one modality (e.g. touch) produces an automatic, consistent and vivid sensation in another modality (e.g. vision). Tactile-visual synaesthesia is an extremely rare variant that can shed light on how the tactile-visual system is altered when touch can elicit visual sensations. Chapter three reports a series of investigations on the tactile discrimination abilities and phenomenology of tactile-vision synaesthetes, alongside questionnaire data from synaesthetes unavailable for testing. Chapter four introduces a new SSD to test whether the presentation of colour information in sensory substitution affects object and colour discrimination. Chapter five presents experiments on intuitive auditory-colour mappings across a wide variety of sounds; these findings are used to predict the colour hallucinations reported during LSD use while listening to these sounds. Chapter six uses a new sensory substitution device designed to test the utility of these intuitive sound-colour links for visual processing. These findings are discussed with reference to how cross-sensory links, LSD and synaesthesia can inform optimal SSD design for visual processing.
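
    To make the idea of a cross-sensory mapping concrete, here is a minimal Python sketch in the spirit of classic greyscale SSDs: image columns are scanned left to right over time, row position sets pitch (top = high), and pixel brightness sets loudness. The parameter values and function name are illustrative assumptions rather than any specific device from the thesis.

        import numpy as np

        def image_to_soundscape(img, sr=22050, col_dur=0.05, f_lo=500.0, f_hi=5000.0):
            """Map a greyscale image (rows x cols, values in 0..1) to audio:
            each column becomes a short time slice, each row a sine oscillator."""
            rows, cols = img.shape
            freqs = np.geomspace(f_hi, f_lo, rows)      # top row = highest pitch
            n = int(sr * col_dur)                       # samples per column
            t = np.arange(n) / sr
            out = []
            for c in range(cols):
                tones = np.sin(2 * np.pi * freqs[:, None] * t)   # (rows, n)
                out.append((img[:, c:c+1] * tones).sum(axis=0))  # brightness-weighted mix
            audio = np.concatenate(out)
            return audio / (np.abs(audio).max() + 1e-9)          # normalize to [-1, 1]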

    Social Touch

    Interpersonal or social touch is an intuitive and powerful way to express and communicate emotions: to comfort a friend, bond with teammates, comfort a child in pain, or soothe someone who is stressed. If there is one thing that the current pandemic is showing us, it is that social distancing can make some people crave physical interaction through social touch; the notion of “skin-hunger” has become tangible for many. Social touch differs at a functional and anatomical level from discriminative touch, and has clear effects at physiological, emotional, and behavioural levels. Social touch is a topic in psychology (perception, emotion, behaviour), neuroscience (neurophysiological pathways), computer science (mediated touch communication), engineering (haptic devices), robotics (social robots that can touch), the humanities (science and technology studies), and sociology (the social implications of touch). Our current scientific knowledge of social touch is scattered across disciplines and not yet adequate for meeting today's challenge of connecting human beings through the mediating channel of technology.

    Enhancing interaction in mixed reality

    With continuous technological innovation, we observe mixed reality emerging from research labs into the mainstream. The arrival of capable mixed reality devices transforms how we are entertained, consume information, and interact with computing systems; the most recent devices can present synthesized stimuli to any of the human senses and substantially blur the boundaries between the real and virtual worlds. To build expressive and practical mixed reality experiences, designers, developers, and stakeholders need to understand and meet the upcoming challenges. This research contributes a novel taxonomy for categorizing mixed reality experiences and guidelines for designing them. We present the results of seven studies examining the challenges and opportunities of mixed reality experiences, the impact of modalities and interaction techniques on the user experience, and how to enhance the experiences. We begin with a study determining user attitudes towards mixed reality in domestic and educational environments, followed by six research probes that each investigate an aspect of reality or virtuality. In the first, a levitating steerable projector enables us to investigate how the real world can be enhanced without instrumenting the user. We show that the presentation of in-situ instructions for navigational tasks leads to a significantly higher ability to observe and recall real-world landmarks. With the second probe, we enhance the perception of reality by superimposing information usually not visible to the human eye; in amplifying human vision, we enable users to perceive thermal radiation visually. Further, we examine the effect of substituting physical components with non-functional tangible proxies or entirely virtual representations. With the third research probe, we explore how to enhance virtuality to enable a user to input text on a physical keyboard while immersed in the virtual world. Our prototype tracked the user's hands and keyboard to enable generic text input, and our analysis of text entry performance showed the importance and effect of different hand representations. We then investigate how to touch virtuality by simulating generic haptic feedback for virtual reality and show how tactile feedback through quadcopters can significantly increase the sense of presence. Our final research probe investigates the usability and input space of smartphones within mixed reality environments, pairing the user's smartphone as an input device with a secondary physical screen. Based on our learnings from these individual research probes, we developed a novel taxonomy for categorizing mixed reality experiences and guidelines for designing them. The taxonomy is based on the human sensory system and human capabilities of articulation; we showcased its versatility and set our research probes into perspective by organizing them inside the taxonomic space. The design guidelines are divided into user-centered and technology-centered. It is our hope that these will contribute to the bright future of mixed reality systems while emphasizing the new underlying interaction paradigm.

    Motor patterns evaluation of people with neuromuscular disorders for biomechanical risk management and job integration/reintegration

    Neurological diseases are now the most common pathological condition and the leading cause of disability, progressively worsening the quality of life of those affected. Because of their high prevalence, they are also a social issue, burdening both the national health service and the working environment. It is therefore crucial to be able to characterize altered motor patterns in order to develop appropriate rehabilitation treatments, with the primary goal of restoring patients' daily lives and optimizing their working abilities. In this thesis, I present a collection of published scientific articles I co-authored, as well as two manuscripts in progress, in which we looked for appropriate indices for characterizing the motor patterns of people with neuromuscular disorders that could be used to plan rehabilitation and job accommodation programs. We used motion-analysis instrumentation and wearable inertial sensors to compute kinematic, kinetic and electromyographic indices. These indices proved to be a useful tool not only for developing and validating a clinical and ergonomic rehabilitation pathway, but also for designing more ergonomic prosthetic and orthotic devices and for controlling collaborative robots.
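
    As one concrete example of an electromyographic index of the kind mentioned above, the Python sketch below computes a moving-RMS envelope and a co-contraction index in the style of Falconer and Winter; the window length and exact formulation are assumptions for illustration, not necessarily the indices used in the thesis.

        import numpy as np

        def rms_envelope(emg, sr, win_ms=100):
            """Moving-window RMS envelope of a raw EMG signal (assumed band-pass filtered)."""
            w = max(1, int(sr * win_ms / 1000))
            return np.sqrt(np.convolve(emg ** 2, np.ones(w) / w, mode="same"))

        def cocontraction_index(env_agonist, env_antagonist):
            """Falconer-Winter style index: twice the common activation area over the
            total area (0 = no co-contraction, 1 = identical envelopes)."""
            common = np.minimum(env_agonist, env_antagonist).sum()
            total = env_agonist.sum() + env_antagonist.sum()
            return float(2 * common / (total + 1e-9))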

    A framework for cardio-pulmonary resuscitation (CPR) scene retrieval from medical simulation videos based on object and activity detection.

    In this thesis, we propose a framework to detect and retrieve CPR activity scenes from medical simulation videos. Medical simulation is a modern training method for medical students, where an emergency patient condition is simulated on human-like mannequins and the students act upon it. These simulation sessions are recorded by the physician for later debriefing. With the increasing number of simulation videos, automatic detection and retrieval of specific scenes has become necessary. The proposed framework for CPR scene retrieval eliminates the conventional approach of using shot detection and frame segmentation techniques. Firstly, our work explores the application of Histograms of Oriented Gradients in three dimensions (HOG3D) to retrieve the scenes containing CPR activity. Secondly, we investigate the use of Local Binary Patterns on Three Orthogonal Planes (LBP-TOP), the three-dimensional extension of the popular Local Binary Patterns; this robust feature can detect specific activities in scenes containing multiple actors and activities. Thirdly, we propose an improvement to the above-mentioned methods through a combination of HOG3D and LBP-TOP, using decision-level fusion techniques to combine the features. We show experimentally that the proposed techniques and their combination outperform the existing system for CPR scene retrieval. Finally, we devise a method to detect and retrieve the scenes containing breathing-bag activity from the medical simulation videos. The proposed framework is tested and validated using eight medical simulation videos, and the results are presented.
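
    The decision-level fusion step can be illustrated with a short Python sketch that combines the per-scene scores of a HOG3D-based and an LBP-TOP-based classifier with a weighted sum; the weight, threshold, and function name are illustrative assumptions, as the thesis may use a different fusion rule.

        import numpy as np

        def fuse_decisions(p_hog3d, p_lbptop, w=0.5, threshold=0.5):
            """Weighted-sum decision-level fusion of two per-scene CPR classifier
            scores (each in [0, 1]); returns fused scores and binary labels."""
            fused = w * np.asarray(p_hog3d) + (1 - w) * np.asarray(p_lbptop)
            return fused, fused >= threshold

        # Example: three candidate scenes scored by both classifiers
        scores, is_cpr = fuse_decisions([0.9, 0.4, 0.6], [0.8, 0.3, 0.7])
        print(scores, is_cpr)  # [0.85 0.35 0.65] [ True False  True]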

    Evaluation of an Actuated Wrist Orthosis for Use in Assistive Upper Extremity Rehabilitation

    Cerebral palsy (CP) is a neurological condition caused by damage to the motor control centers of the brain. This leads to physical and cognitive deficiencies that can reduce an individual’s quality of life. Specifically, motor deficiencies of the upper extremity can make it difficult for an individual to complete everyday tasks, including eating, drinking, getting dressed, or combing their hair. Physical therapy involving repetitive tasks has been shown to be effective in training normal motion of the limb by invoking the neuroplasticity of the brain and its ability to adapt in order to facilitate motor learning. Creating a device for use with Activities of Daily Living (ADLs) provides an additional tool for task-based therapy with the goal of improving functional outcome. A custom wrist orthosis has been designed and developed that assists flexion/extension of the wrist and rotation of the forearm while leaving the hand open for the grasp and manipulation of objects. Actuated joints are driven by geared brushless DC motors on a lightweight exoskeleton frame coupled to a passive arm that tracks positional changes within the task space. Actuation is controlled with a custom mapping strategy created from nominal movement profiles for five ADLs collected from healthy subjects; a simple relationship between position within the workspace and the orientation necessary for task completion determines the needed assistance. Validation of the design subjected the device to three conditions: robot guidance of the limb, co-contraction of the forearm, and the use of alternate approaches to complete the task. The co-contraction and alternate-approach conditions were used to simulate characteristics of impaired subjects, including rigidity, spasticity, and lack of muscle control. Robot guidance achieved an average orientation error of 5° or less in at least 75% of iterations across all tasks; co-contraction and alternate approaches matched this in flexion/extension but produced much higher errors in forearm rotation. Performance deficiencies were attributed to a lack of torque bandwidth at the motor and to response delay from signal filtering, aspects that will be corrected in the next iteration of the design.
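
    The reported success criterion can be illustrated with a small Python sketch that computes the fraction of task iterations whose orientation error stays within a tolerance; the error values below are made up for illustration.

        import numpy as np

        def within_tolerance(errors_deg, tol_deg=5.0):
            """Fraction of iterations with absolute orientation error <= tol_deg."""
            return float(np.mean(np.abs(np.asarray(errors_deg)) <= tol_deg))

        # Hypothetical flexion/extension errors (degrees) for 8 iterations of one task
        fe_err = [3.1, 4.7, 2.0, 6.2, 4.9, 3.3, 1.8, 4.4]
        print(within_tolerance(fe_err))  # 0.875 -> meets the "75% of iterations" criterion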
    • 

    corecore