    Design and Quantitative Assessment of Teleoperation-Based Human–Robot Collaboration Method for Robot-Assisted Sonography

    Tele-echography has emerged as a promising and effective solution, leveraging the expertise of sonographers and the autonomy of robots to perform ultrasound scanning for patients in remote areas without in-person visits by a sonographer. Designing effective and natural human-robot interfaces for tele-echography remains challenging, with patient safety being a critical concern. In this article, we develop a teleoperation system for robot-assisted sonography with two different interfaces, a haptic-device-based interface and a low-cost 3D-mouse-based interface, both of which achieve continuous and intuitive telemanipulation from a leader device with a small workspace. To achieve compliant interaction with patients, we design impedance controllers in Cartesian space to track the desired position and orientation for these two teleoperation interfaces. We also propose comprehensive evaluation metrics for robot-assisted sonography, including subjective and objective measures, to evaluate tele-echography interfaces and control performance. We evaluate ergonomic performance based on estimated muscle fatigue and acquired ultrasound image quality, and we conduct user studies based on the NASA Task Load Index to compare the two human-robot interfaces. The tracking performance and quantitative comparison of the two teleoperation interfaces are evaluated on a Franka Emika Panda robot. The results and findings provide guidance on the design and implementation of human-robot collaboration for robot-assisted sonography.
Note to Practitioners: Robot-assisted sonography has demonstrated efficacy in medical diagnosis during clinical trials. However, deploying fully autonomous robots for ultrasound scanning remains challenging due to practical constraints such as patient safety, dynamic tasks, and environmental uncertainties. Semi-autonomous or teleoperation-based robot sonography therefore represents a promising approach for practical deployment. Previous work has produced various expensive teleoperation interfaces but lacks user studies to guide interface selection. In this article, we present two typical teleoperation interfaces, implement a continuous and intuitive teleoperation control system, and propose a comprehensive evaluation metric for assessing their performance. Our findings show that the haptic device outperforms the 3D mouse, based on operators' feedback and acquired image quality; however, the haptic device requires more learning time and effort during training. Furthermore, the developed teleoperation system offers a solution for shared control and human-robot skill transfer. Our results provide valuable guidance for designing and implementing human-robot interfaces for robot-assisted sonography in practice.
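The Cartesian impedance behaviour this abstract describes can be sketched as a per-axis spring-damper law. The gains, poses, and axis choices below are illustrative assumptions, not values from the article:

```python
# Minimal sketch of a Cartesian impedance law: the commanded wrench pulls the
# end-effector toward the leader's desired pose with spring-damper behaviour,
# so contact with the patient stays compliant. All gains are hypothetical.

def impedance_wrench(x_des, x, xd_des, xd, k, d):
    """Per-axis impedance: F_i = k_i * (x_des_i - x_i) + d_i * (xd_des_i - xd_i)."""
    return [k_i * (p_d - p) + d_i * (v_d - v)
            for k_i, d_i, p_d, p, v_d, v in zip(k, d, x_des, x, xd_des, xd)]

# Translational axes only (x, y, z); a full controller would add orientation.
K = [400.0, 400.0, 200.0]   # N/m, softer along the probe axis (z)
D = [40.0, 40.0, 28.0]      # N*s/m
F = impedance_wrench([0.5, 0.0, 0.30], [0.5, 0.0, 0.32], [0.0] * 3, [0.0] * 3, K, D)
```

With the probe 2 cm deeper than commanded along z, the controller produces a gentle restoring force along that axis only, which is the compliant behaviour the abstract motivates.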

    Microcredentials to support PBL


    Digital Traces of the Mind: Using Smartphones to Capture Signals of Well-Being in Individuals

    General context and questions. Adolescents and young adults typically use their smartphones for several hours a day. Although there are concerns about how such behaviour might affect their well-being, the popularity of these powerful devices also opens novel opportunities for monitoring well-being in daily life. If successful, such monitoring could support future interventions that provide personalized support to individuals at the moment they need it (just-in-time adaptive interventions). Taking an interdisciplinary approach with insights from communication, computational, and psychological science, this dissertation investigated the relation between smartphone app use and well-being and developed machine learning models to estimate an individual's well-being from how they interact with their smartphone. To elucidate the relation between smartphone trace data and well-being and to contribute to the development of technologies for monitoring well-being in future clinical practice, this dissertation addressed two overarching questions. RQ1: Can we find empirical support for theoretically motivated relations between smartphone trace data and well-being in individuals? RQ2: Can we use smartphone trace data to monitor well-being in individuals?
Aims. The first aim of this dissertation was to quantify the relation between the collected smartphone trace data and momentary well-being, both at the sample level and for each individual, following recent conceptual insights and empirical findings in psychological, communication, and computational science. A strength of this personalized (or idiographic) approach is that it captures how individuals differ in how their smartphone app use relates to their well-being. Considering such interindividual differences is important for determining whether some individuals might benefit from spending more time on their smartphone apps whereas others do not, or even experience adverse effects. The second aim was to develop models for monitoring well-being in daily life. The present work pursued this transdisciplinary aim by taking a machine learning approach and evaluating to what extent an individual's well-being can be estimated from their smartphone trace data. If such traces help pinpoint when individuals are unwell, they might be a useful data source for future just-in-time adaptive interventions. With this aim, the dissertation follows current developments in psychoinformatics and psychiatry, where substantial research resources are invested in using smartphone traces and similar data (obtained with smartphone sensors and wearables) to develop technologies for detecting whether an individual is currently unwell or will be in the future.
Data collection and analysis. This work combined novel data collection techniques (digital phenotyping and experience sampling methodology) to measure smartphone use and well-being in the daily lives of 247 student participants. For a period of up to four months, a dedicated application installed on participants' smartphones collected smartphone trace data. Over the same period, participants completed a brief smartphone-based well-being survey five times a day (for 30 days in the first month and 30 days in the fourth month; up to 300 assessments in total). At each measurement, the survey asked about the participants' momentary level of procrastination, stress, and fatigue, while sleep duration was measured each morning. Taking a time-series and machine learning approach to analysing these data, I provide the following contributions: Chapter 2 investigates the person-specific relation between passively logged usage of different application types and momentary subjective procrastination; Chapter 3 develops machine learning methodology to estimate sleep duration from smartphone trace data; Chapter 4 combines machine learning and explainable artificial intelligence to discover smartphone-tracked digital markers of momentary subjective stress; and Chapter 5 uses a personalized machine learning approach to evaluate whether smartphone trace data contain behavioural signs of fatigue. Collectively, these empirical studies provide preliminary answers to the overarching questions of this dissertation.
Summary of results. With respect to the theoretically motivated relations between smartphone trace data and well-being (RQ1), we found that different patterns in smartphone trace data, from time spent on social network, messenger, video, and game applications to smartphone-tracked sleep proxies, are related to well-being in individuals. The strength and nature of this relation depend on the individual and the app usage pattern under consideration. The relation between smartphone app use patterns and well-being is limited in most individuals but relatively strong in a minority. Whereas some individuals might benefit from using specific app types, others might experience decreases in well-being when spending more time on these apps. With respect to whether smartphone trace data can be used to monitor well-being in individuals (RQ2), we found that such data might be useful for this purpose in some individuals and to some extent. They appear most relevant in the context of sleep monitoring (Chapter 3) and have the potential to be included as one of several data sources for monitoring momentary procrastination (Chapter 2), stress (Chapter 4), and fatigue (Chapter 5) in daily life.
Outlook. Future interdisciplinary research is needed to investigate whether the relationship between smartphone use and well-being depends on the nature of the activities performed on these devices, the content they present, and the context in which they are used. Answering these questions is essential to unravel the complex puzzle of developing technologies for monitoring well-being in daily life.
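The idiographic approach the dissertation describes, fitting the app-use/well-being relation separately for each person, can be illustrated with a minimal per-person correlation. The participants and numbers below are fabricated purely for illustration:

```python
# Illustrative sketch (made-up data): an idiographic analysis computes the
# app-use/well-being relation per individual, so its sign and strength can
# differ between people, as the dissertation reports.

def pearson_r(x, y):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Minutes on messenger apps vs. momentary fatigue (hypothetical participants).
traces = {
    "p01": ([10, 40, 5, 60, 25], [2, 4, 1, 5, 3]),   # strong positive relation
    "p02": ([30, 30, 35, 28, 32], [4, 1, 3, 5, 2]),  # weak relation
}
per_person_r = {pid: pearson_r(x, y) for pid, (x, y) in traces.items()}
```

Sample-level averaging would blur exactly this kind of between-person difference, which is the motivation for the personalized models in Chapters 2 and 5.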

    Evaluating Immersive Teleoperation Interfaces: Coordinating Robot Radiation Monitoring Tasks in Nuclear Facilities

    We present a virtual reality (VR) teleoperation interface for a ground-based robot, featuring dense 3D environment reconstruction and a low-latency video stream, with which operators can immersively explore remote environments. At the UK Atomic Energy Authority's (UKAEA) Remote Applications in Challenging Environments (RACE) facility, we applied the interface in a user study in which trained robotics operators completed simulated nuclear monitoring and decommissioning-style tasks, comparing VR and traditional teleoperation interface designs. We found that operators in the VR condition took longer to complete the experiment, had fewer collisions, and rated the generated 3D map as more important than non-VR operators did. Additional physiological data suggested that VR operators had a lower objective cognitive workload during the experiment but experienced increased physical demand. Overall, the presented results show that VR interfaces may benefit work patterns in teleoperation tasks within the nuclear industry, but further work is needed to investigate how such interfaces can be integrated into real-world decommissioning workflows.

    Introduction to the special issue on “designing the robot body: Critical perspectives on affective embodied interaction”

    Designing and evaluating the affectivity of the robot body has become a frontier topic in Human-Robot Interaction (HRI), with previous studies emphasizing the importance of robot embodiment for human-robot communication. In particular, there is growing interest in how the tactile, haptic materiality of the robot influences and mediates users’ affective and emotional states. Indeed, the sheer physicality of robotic systems is a crucial factor in the morphology of the robotic platform, and therefore in the robot's appearance to the user. How do the tactile properties of materials subtly influence user interaction? Why do certain morphologies prompt more empathetic interactions than others? How is nonverbal communication affected through the coordination of movements of the torso, head, and appendages to provide more naturalistic-seeming interaction? What is the role of nonverbal communication in the production of artificial empathy? And how do such factors encourage trust and foster confidence for nonexpert users to interact in the first place? This recognition of machinic corporeality has been of practical interest to designers and engineers working across a range of robot forms and functions. The objective of this special issue is to further this discussion, to consider theoretical, ethical, empirical, and methodological questions related to the design of robotic bodies in the context of affective HRI, and thus foster cross currents among engineering, design, social science, and artistic communities. It originally emerged as a set of conceptual and practical questions from a workshop at the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI’20) in Cambridge, UK, co-organized by two of the editors [3]. The workshop, like so many other events, was canceled because of the restrictions of the COVID-19 pandemic. 
Consequently, we sought to pursue a longer-term exchange of engineering, design, and conceptual considerations through the publication of this special issue. Building out from the more practically minded exchanges of an in-person workshop, this was an opportunity to invite more wide-ranging contributions addressing questions related to the design of robotic bodies in the context of affective HRI. The issue could thus explore topics bridging embodiment and affect, including touch, materials, and physical form, from the points of view of artists, designers, engineers, and social scientists alike.

    A Taxonomy of Freehand Grasping Patterns in Virtual Reality

    Grasping is the most natural and primary interaction paradigm people perform every day, allowing us to pick up and manipulate the objects around us, whether drinking from a coffee cup or writing with a pen. Grasping has been extensively explored in real environments to understand and structure the way people grasp and interact with objects, yielding categories, models, and theories of grasping. Due to the complexity of the human hand, classifying grasping knowledge into meaningful insights is challenging, which has led researchers to develop grasp taxonomies that systematically guide emerging grasping work in fields such as anthropology, robotics, and hand surgery. While this body of work exists for real grasping, the nuances of how grasping transfers to virtual environments remain unexplored. The emergence of robust hand-tracking sensors for virtual reality (VR) devices now allows the development of grasp models that enable VR to simulate real grasping interactions. However, existing work has not yet examined the differences between virtual and real object grasping, so virtual systems that build grasping models on real-world grasping knowledge may rest on unverified assumptions about how users intuitively grasp and interact with virtual objects. To address this gap, this thesis presents the first user elicitation studies to explore grasping patterns directly in VR. The first study identifies the main similarities and differences between real and virtual object grasping; the second explores how virtual object shape influences grasping patterns; the third focuses on visual thermal cues and their influence on grasp metrics; and the fourth examines how other object characteristics, such as stability and complexity, influence grasps in VR.
To provide structured insights on grasping interactions in VR, the results are synthesized in the first VR Taxonomy of Grasp Types, developed following current methods for constructing grasping and HCI taxonomies and iterated to present an updated, more complete taxonomy. Results show that users appear to mimic real grasping behaviour in VR, but they also have difficulty estimating object size and generally use a narrower range of grasp types. The taxonomy shows that only five grasps account for the majority of grasp data in VR, which can be exploited by systems aiming to achieve natural and intuitive interactions at lower computational cost. Further, findings show that virtual object characteristics such as shape, stability, and complexity, as well as visual cues for temperature, influence grasp metrics such as aperture, category, type, location, and dimension. These changes in grasping patterns, together with virtual object categorisation methods, can inform design decisions when developing intuitive interactions, virtual objects, and environments, taking a step toward natural grasping interaction in VR.
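The coverage claim behind the taxonomy ("only five grasps account for the majority of grasp data") can be made concrete with a small sketch. The grasp-type names and counts below are hypothetical, chosen only to show how a few types can dominate elicited data:

```python
# Sketch of a top-k coverage computation over elicited grasp observations.
# All counts are fabricated for illustration; the thesis reports the real ones.
from collections import Counter

observed = Counter({
    "medium wrap": 310, "precision pinch": 240, "lateral pinch": 150,
    "tripod": 120, "power sphere": 90,          # the dominant five
    "hook": 40, "flat palm": 30, "extension": 20,
})

def top_k_coverage(counts, k):
    """Fraction of all observations covered by the k most frequent types."""
    total = sum(counts.values())
    top = sum(n for _, n in counts.most_common(k))
    return top / total

coverage = top_k_coverage(observed, 5)  # share covered by five grasp types
```

A VR hand-pose recognizer that only needs to distinguish the top-covering types can run with fewer templates, which is the lower-computational-cost argument the abstract makes.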

    New perspectives in surgical treatment of aortic diseases


    User Experience Design and Evaluation of a Persuasive Social Robot as Language Tutor at University: Design and Learning Experiences from Design Research

    Human-Robot Interaction (HRI) is a field in which research and innovation are progressing rapidly, and one domain of sustained focus is education. Studies in this area have sought to design social robots with appropriate design guidelines, derived from user preferences, context, and technology, to help students and teachers improve their learning and teaching experience. Language learning has become popular in education as students gain opportunities to study subjects of interest in any language at their preferred universities around the world, motivating research on using social robots for language learning and teaching. In this context, this thesis explores the design of a language tutoring robot for students learning Finnish at university. In language learning, motivation, the learning experience, context, and user preferences are important considerations; this thesis therefore focuses on students learning Finnish through a language tutoring social robot at Tampere University. A design research methodology is used to design a persuasive language tutoring social robot that teaches Finnish to international students at Tampere University, and the design guidelines and the future tutoring robot design, with their benefits, are derived through this methodology. Elias Robot, a language tutoring application designed by Curious Technologies, a Finnish EdTech company, was used in an explorative user study. The study involved the Pepper social robot together with the Elias application running on a mobile device, and was conducted at the university with three male and four female student participants. The aim of the study was to gather design requirements based on learning experiences with a social robot tutor.
Based on the findings of this study and of the design research, the future language tutoring social robot was co-created through a co-design workshop. Drawing on the field study, the user study, technology acceptance model findings, design research findings, and student interviews, the persuasive social robot language tutor was designed. The findings revealed that all the robot's modalities are required for effective tutoring and that persuasive social robots foster students' motivation to learn the language. The design implications are discussed, and the social robot tutor design is presented through design scenarios.