
    A gaze-contingent framework for perceptually-enabled applications in healthcare

    Patient safety and quality of care remain the focus of the smart operating room of the future. Some of the most influential factors with a detrimental effect are related to suboptimal communication among the staff, poor flow of information, staff workload and fatigue, and ergonomics and sterility in the operating room. While technological developments constantly transform the operating room layout and the interaction between surgical staff and machinery, a vast array of opportunities arises for the design of systems and approaches that can enhance patient safety and improve workflow and efficiency. The aim of this research is to develop a real-time gaze-contingent framework for a "smart" operating suite that enhances the operator's ergonomics by allowing perceptually-enabled, touchless and natural interaction with the environment. The main feature of the proposed framework is its ability to acquire and utilise the wealth of information provided by the human visual system to allow touchless interaction with medical devices in the operating room. In this thesis, a gaze-guided robotic scrub nurse, a gaze-controlled robotised flexible endoscope and a gaze-guided assistive robotic system are proposed. Firstly, the gaze-guided robotic scrub nurse is presented: surgical teams performed a simulated surgical task with the assistance of a robotic scrub nurse, which complements the human scrub nurse in the delivery of surgical instruments following gaze selection by the surgeon. Then, the gaze-controlled robotised flexible endoscope is introduced: experienced endoscopists and novice users performed a simulated examination of the upper gastrointestinal tract using predominantly their natural gaze. Finally, a gaze-guided assistive robotic system is presented, which aims to facilitate activities of daily living. The results of this work provide valuable insights into the feasibility of integrating the developed gaze-contingent framework into clinical practice without significant workflow disruptions.
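
    The abstract does not detail how a gaze "selection" is triggered, but such interfaces commonly use dwell-time activation: a command fires once gaze has rested inside an instrument's on-screen region for a fixed interval. The sketch below is a minimal illustration of that idea under stated assumptions, not the author's implementation; the GazeSample type, the region table and the 0.8 s dwell threshold are all invented for the example.

        from dataclasses import dataclass

        @dataclass
        class GazeSample:
            x: float  # normalised gaze coordinates in [0, 1]
            y: float
            t: float  # timestamp in seconds

        # Hypothetical instrument regions on the selection display: name -> (x0, y0, x1, y1).
        REGIONS = {"scalpel": (0.00, 0.0, 0.25, 0.25), "forceps": (0.25, 0.0, 0.50, 0.25)}
        DWELL_S = 0.8  # assumed dwell threshold; real systems tune this per task

        def region_of(s: GazeSample):
            for name, (x0, y0, x1, y1) in REGIONS.items():
                if x0 <= s.x < x1 and y0 <= s.y < y1:
                    return name
            return None

        def dwell_select(samples):
            """Yield an instrument name once gaze has stayed in its region for DWELL_S."""
            current, entered = None, 0.0
            for s in samples:
                r = region_of(s)
                if r != current:
                    current, entered = r, s.t  # gaze entered a new region: restart the clock
                elif r is not None and s.t - entered >= DWELL_S:
                    yield r
                    current, entered = None, 0.0  # one dwell produces one selection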

    Human–Machine Interface in Transport Systems: An Industrial Overview for More Extended Rail Applications

    This paper provides an overview of Human–Machine Interface (HMI) design and command systems in commercial or experimental operation across transport modes. It presents and comments on different HMIs from the perspective of vehicle automation equipment and simulators in different application domains. Considering the fields of cognition and automation, this investigation highlights human factors and the experiences of different industries according to industrial and literature reviews. Moreover, to sharpen the objectives and broaden the industrial panorama under investigation, the analysis covers the most effective simulators in operation across various transport modes, used for the training of operators as well as for research in the fields of safety and ergonomics. Special focus is given to new technologies that are potentially applicable in future train cabins, e.g., visual displays and haptic shared controls. Finally, a synthesis of human factors and their limits regarding support for monitoring or driving assistance is proposed.

    AVEID: Automatic Video System for Measuring Engagement In Dementia

    Engagement in dementia is typically measured using behavior observational scales (BOS) that are tedious, involve intensive manual annotation, and are therefore not easily scalable. We propose AVEID, a low-cost and easy-to-use video-based engagement measurement tool to determine the engagement level of a person with dementia (PwD) during digital interaction. We show that the objective behavioral measures computed via AVEID correlate well with subjective expert impressions for the popular MPES and OME BOS, confirming its viability and effectiveness. Moreover, AVEID measures can be obtained for a variety of engagement designs, thereby facilitating large-scale studies with PwD populations.
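
    The validation step reduces to correlating the automatic measures with expert ratings on the two scales. A minimal sketch of that comparison is shown below, assuming per-session score arrays; the variable names and the example numbers are illustrative, not data from the paper.

        import numpy as np
        from scipy.stats import spearmanr

        # Hypothetical per-session values: one automatic measure from the video tool
        # and the corresponding expert rating on a behavior observational scale.
        aveid_scores = np.array([0.12, 0.55, 0.40, 0.83, 0.67, 0.25])
        expert_ratings = np.array([1, 3, 2, 4, 4, 1])

        # Rank correlation tolerates the ordinal nature of BOS ratings.
        rho, p = spearmanr(aveid_scores, expert_ratings)
        print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")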

    Requirement analysis and sensor specifications – First version

    In this first version of the deliverable, we make the following contributions: to design the WEKIT capturing platform and the associated experience capturing API, we use a methodology for system engineering that is relevant to different domains (such as aviation, space, and medicine) and different professions (such as technicians, astronauts, and medical staff). Furthermore, within this methodology, we explore the system engineering process and how it can be used in the project to support the different work packages and, more importantly, the deliverables that will follow the current one. Next, we provide a mapping of high-level functions or tasks (associated with experience transfer from expert to trainee) to low-level functions such as gaze, voice, video, body posture, hand gestures, bio-signals, fatigue levels, and the location of the user in the environment. In addition, we link the low-level functions to their associated sensors. Moreover, we provide a brief overview of state-of-the-art sensors in terms of their technical specifications, possible limitations, standards, and platforms. We outline a set of recommendations pertaining to the sensors that are most relevant for the WEKIT project, taking into consideration the environmental, technical and human factors described in other deliverables. We recommend the Microsoft Hololens (for augmented reality glasses), the MyndBand with Neurosky chipset (for EEG), the Microsoft Kinect and Lumo Lift (for body posture tracking), and the Leapmotion, Intel RealSense and Myo armband (for hand gesture tracking). For eye tracking, an existing eye-tracking system can be customised to complement the augmented reality glasses, and the built-in microphone of the augmented reality glasses can capture the expert's voice. We propose a modular approach for the design of the WEKIT experience capturing system, and recommend that the capturing system should have sufficient storage or transmission capabilities. Finally, we highlight common issues associated with the use of different sensors. We consider that this set of recommendations can be useful for the design and integration of the WEKIT capturing platform and the WEKIT experience capturing API, expediting the selection of the combination of sensors to be used in the first prototype.
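
    The deliverable's central output, the mapping from low-level capture functions to recommended sensors, lends itself to a simple lookup structure. The sketch below encodes the pairings exactly as stated in the abstract; the dictionary layout and helper function are illustrative, not the deliverable's actual schema.

        # Low-level capture function -> recommended sensors, as listed in the abstract.
        SENSOR_MAP = {
            "augmented_reality": ["Microsoft Hololens"],
            "eeg": ["MyndBand (Neurosky chipset)"],
            "body_posture": ["Microsoft Kinect", "Lumo Lift"],
            "hand_gestures": ["Leapmotion", "Intel RealSense", "Myo armband"],
            "gaze": ["customised eye tracker complementing the AR glasses"],
            "voice": ["built-in microphone of the AR glasses"],
        }

        def sensors_for(function: str) -> list[str]:
            """Return the recommended sensors for a low-level capture function."""
            return SENSOR_MAP.get(function, [])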

    Animated virtual agents to cue user attention: comparison of static and dynamic deictic cues on gaze and touch responses

    This paper describes an experiment developed to study the performance of animated virtual agent cues within digital interfaces. Increasingly, agents are used in virtual environments as part of the branding process and to guide user interaction. However, the level of agent detail required to establish and enhance efficient allocation of attention remains unclear. Although complex agent motion is now possible, it is costly to implement, and so should only be routinely implemented if a clear benefit can be shown. Previous methods of assessing the effect of gaze-cueing as a solution to scene complexity have relied principally on two-dimensional static scenes and manual peripheral inputs. Two experiments were run to address the question of agent cues in human-computer interfaces, measuring the efficiency of agent cues by analyzing participant responses via gaze and via touch, respectively. In the first experiment, an eye-movement recorder was used to directly assess the immediate overt allocation of attention by capturing the participant's eye fixations following presentation of a cueing stimulus. We found that a fully animated agent could speed up user interaction with the interface: when user attention was directed using a fully animated agent cue, users responded 35% faster than with stepped 2-image agent cues and 42% faster than with a static 1-image cue. The second experiment recorded participant responses on a touch screen using the same agent cues. Analysis of touch inputs confirmed the results of the gaze experiment: the fully animated agent again produced the fastest responses, although the differences between conditions were slightly smaller, with responses to the fully animated agent 17% and 20% faster than to the 2-image and 1-image cues, respectively. These results inform techniques aimed at engaging users' attention in complex scenes, such as computer games and digital transactions within public or social interaction contexts, by demonstrating the benefits of dynamic gaze and head cueing directly on users' eye movements and touch responses.
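
    The reported percentages are relative reductions in mean response time against each baseline cue. The helper below makes that arithmetic explicit; the millisecond figures are placeholders chosen only to reproduce the reported ratios, not the study's data.

        def speedup(rt_baseline: float, rt_condition: float) -> float:
            """Relative response-time reduction versus a baseline, in percent."""
            return 100.0 * (rt_baseline - rt_condition) / rt_baseline

        # Placeholder mean response times in milliseconds, purely illustrative:
        rt_static, rt_stepped, rt_animated = 950.0, 850.0, 550.0
        print(f"{speedup(rt_static, rt_animated):.0f}% faster than the 1-image cue")    # ~42%
        print(f"{speedup(rt_stepped, rt_animated):.0f}% faster than the 2-image cues")  # ~35%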

    Comparing attention and eye movements towards real objects versus image displays

    Images of objects are commonly used as proxies for real objects in studies of attention and eye movements. However, a growing body of research has revealed neural and behavioral differences between the perception of real objects and their pictorial representations. The goal of the current investigation was to examine whether covert attentional orienting and patterns of eye movements are influenced by properties of real objects such as stereoscopic cues and tangibility. In the first experiment, a modified version of the Posner cueing task was used to test for differences in spatial orienting between real objects (tools, fruits and vegetables) and their pictorial representations. The results showed that participants were faster to detect a target on the left side of real objects than of images, but only when the real objects were presented within reachable distance. The first study therefore showed that the graspability of a stimulus magnifies the leftward bias of visuospatial attention known as 'pseudoneglect'. The second study compared patterns of eye movements during categorization and grasping tasks involving real familiar tools, their images, and stereoscopic displays. The results showed that when participants were asked to categorize the objects, the display format did not affect eye-movement patterns. However, when they were asked to grasp the objects, their eye movements concentrated on the handles of real objects more than in any other display format. Both experiments thus demonstrate the effect of stimulus tangibility on perception. Moreover, the two studies used novel stimulus presentation systems that can be used in future research testing other aspects of the perception of real objects and their pictorial representations.
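
    The pseudoneglect effect in the first experiment amounts to a right-minus-left difference in detection times, computed separately per display format. A minimal sketch of that index follows, assuming per-condition reaction-time arrays; the numbers are placeholders, not the study's data.

        import numpy as np

        def pseudoneglect_index(rt_left, rt_right):
            """Leftward bias as the mean right-minus-left detection RT difference (ms).
            Positive values mean left-side targets were detected faster."""
            return float(np.mean(rt_right) - np.mean(rt_left))

        # Placeholder detection RTs (ms) per display format, illustrative only:
        rts = {
            "real_reachable": ([410, 395, 420], [455, 470, 460]),
            "image":          ([430, 440, 425], [435, 445, 430]),
        }
        for fmt, (left, right) in rts.items():
            print(fmt, pseudoneglect_index(left, right))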

    Driver behaviour characterization using artificial intelligence techniques in level 3 automated vehicle.

    Autonomous vehicles free drivers from driving and allow them to engage in non-driving related activities. However, engagement in such activities can reduce drivers' awareness of the driving environment, which poses a potential risk to the takeover process at the current automation level of intelligent vehicles. It is therefore of great importance to monitor the driver's behaviour when the vehicle is in automated driving mode. This research aims to develop a computer vision-based driver monitoring system for autonomous vehicles, which characterises driver behaviour inside the vehicle cabin by visual attention and hand movement, and demonstrates the feasibility of using such features to identify the driver's non-driving related activities. The research further proposes a system that employs both sources of information to identify driving related and non-driving related activities. A novel deep learning-based model has been developed for the classification of such activities. A lightweight model has also been developed for edge computing devices; it trades some recognition accuracy for suitability in further in-vehicle applications. The developed models outperform state-of-the-art methods in terms of classification accuracy. The research also investigates the impact of engagement in non-driving related activities on the takeover process, and proposes a categorisation method to group the activities, improving the extensibility of the driver monitoring system to unevaluated activities. The findings of this research are important for the design of takeover strategies that improve driving safety during the control transition in Level 3 automated vehicles.
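
    The thesis' models are not reproduced in the abstract, so the sketch below only illustrates the general shape of a lightweight in-cabin activity classifier built on a small pretrained backbone. The MobileNet choice, the activity list and the input size are assumptions for the example, not the author's architecture.

        import torch
        import torch.nn as nn
        from torchvision import models

        # Hypothetical activity labels mixing driving and non-driving related activities.
        ACTIVITIES = ["driving", "phone_use", "reading", "eating", "talking"]

        def build_edge_classifier(num_classes: int = len(ACTIVITIES)) -> nn.Module:
            """Small image classifier sized for an edge computing device."""
            net = models.mobilenet_v3_small(weights="DEFAULT")
            net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, num_classes)
            return net

        model = build_edge_classifier().eval()
        frame = torch.randn(1, 3, 224, 224)  # one RGB cabin frame, normalised upstream
        with torch.no_grad():
            label = ACTIVITIES[model(frame).argmax(dim=1).item()]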

    Mitigation Methods

    The section describes mitigation methods.

    Hands free adjustment of the microscope in microneurosurgery

    A wide array of medical errors plagues the healthcare system, and their repercussions are especially palpable in the operative microsurgical theater. The surgical microscope, although a key element within that theater, has a high propensity for errors. The two communication approaches evaluated in this study take advantage of the natural physiology of the human body by tracking and utilizing eye movements and body gestures to execute tasks that would typically require manual interaction with the microscope. Independent trials were conducted at the Charité Hospital in Berlin, using technological tools such as virtual reality for evaluation, with specialized tasks created for each trial. The results showed that these body-tracking approaches (body gestures and gaze) were almost 30% and 20% faster, respectively, than the current manual alternative. The diffusion of technology within medicine over the last 20 years has been enormous, and these new patient-oriented technological approaches could be revolutionary in controlling an existing critical element of the microsurgical theater.
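
    The abstract does not specify the command set, so the sketch below only illustrates how recognized gaze or gesture events might be dispatched to microscope functions. The event names and the stub actions are entirely hypothetical; a real system would call the microscope vendor's control API.

        from typing import Callable

        # Stub microscope actions standing in for a vendor control API.
        def zoom_in() -> None: print("zoom +")
        def zoom_out() -> None: print("zoom -")
        def refocus() -> None: print("autofocus")

        # Invented hands-free event vocabulary -> microscope commands.
        COMMANDS: dict[str, Callable[[], None]] = {
            "gaze_dwell_zoom": zoom_in,
            "head_tilt_back": zoom_out,
            "gesture_pinch": refocus,
        }

        def dispatch(event: str) -> None:
            action = COMMANDS.get(event)
            if action is not None:
                action()  # execute the mapped command without touching the microscope

        dispatch("gesture_pinch")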