1,432 research outputs found

    A haptic-enabled multimodal interface for the planning of hip arthroplasty

    Multimodal environments fuse a diverse range of sensory modalities, which is particularly important when integrating the complex data involved in surgical preoperative planning. The authors present a multimodal application for the preoperative planning of hip arthroplasty whose user interface integrates immersive stereo displays and haptic modalities. This article gives an overview of the application framework and discusses the benefits of incorporating the haptic modality in this domain.

    Touch Technology in Affective Human, Robot, Virtual-Human Interactions: A Survey

    Given the importance of affective touch in human interactions, technology designers are increasingly attempting to bring this modality to the core of interactive technology. Advances in haptics and touch-sensing technology have been critical to fostering interest in this area. In this survey, we review how affective touch is investigated to enhance and support the human experience with or through technology. We explore this question across three different research areas to highlight their epistemology, main findings, and the challenges that persist. First, we review affective touch technology through the human–computer interaction literature to understand how it has been applied to the mediation of human–human interaction and its roles in other human interactions, particularly with oneself, augmented objects/media, and affect-aware devices. We further highlight the datasets and methods that have been investigated for the automatic detection and interpretation of affective touch in this area. In addition, we discuss the modalities of affective touch expressions in both humans and technology in these interactions. Second, we separately review how affective touch has been explored in human–robot and real-human–virtual-human interactions, where the technical challenges encountered and the types of experience targeted differ. We conclude with a discussion of the gaps and challenges that emerge from the review to steer research in directions that are critical for advancing affective touch technology and recognition systems. In our discussion, we also raise ethical issues that should be considered for responsible innovation in this growing area.

    Telelocomotion—remotely operated legged robots

    © 2020 by the authors. Licensee MDPI, Basel, Switzerland. Teleoperated systems enable human control of robotic proxies and are particularly amenable to inaccessible environments unsuitable for autonomy. Examples include emergency response, underwater manipulation, and robot-assisted minimally invasive surgery. However, teleoperation architectures have predominantly been employed in manipulation tasks, and are thus only useful when the robot is within reach of the task. This work introduces the idea of extending teleoperation to enable online human remote control of legged robots, or telelocomotion, to traverse challenging terrain. Traversing unpredictable terrain remains a challenge for autonomous legged locomotion, as demonstrated by robots commonly falling in high-profile robotics contests. Telelocomotion can reduce the risk of mission failure by leveraging the high-level understanding of human operators to command the gaits of legged robots in real time. In this work, a haptic telelocomotion interface was developed. Two within-subject user studies validate the proof-of-concept interface: (i) the first compared basic interfaces with the haptic interface for control of a simulated hexapedal robot at various levels of traversal complexity; (ii) the second presents a physical implementation and investigated the efficacy of the proposed haptic virtual fixtures. Results are promising for the use of haptic feedback in telelocomotion for complex traversal tasks.
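The abstract does not specify how the haptic device is mapped to locomotion commands. As a hypothetical illustration only (the gains, deadband, and proportional mapping below are assumptions, not taken from the paper), one simple way such an interface can be wired is to map stylus displacement to a saturated body-velocity command while rendering a centering force back to the operator:

```python
import numpy as np

# Hypothetical telelocomotion mapping: stylus displacement -> body velocity
# command, plus a spring force that recenters the stylus. All constants
# are illustrative assumptions.
DEADBAND = 0.01      # m, ignore small jitter around the neutral pose
V_MAX = 0.5          # m/s, cap on commanded body speed
K_CENTER = 100.0     # N/m, stiffness of the recentering haptic force

def stylus_to_command(disp):
    """Map a 2D stylus displacement (m) to (velocity command, feedback force)."""
    if np.linalg.norm(disp) < DEADBAND:
        vel = np.zeros(2)                          # inside deadband: no motion
    else:
        vel = np.clip(disp * 5.0, -V_MAX, V_MAX)   # proportional, saturated
    force = -K_CENTER * disp                       # spring pulls stylus home
    return vel, force

vel, force = stylus_to_command(np.array([0.2, 0.0]))
print(vel, force)  # saturated forward command, restoring force on the stylus
```

The deadband avoids commanding the robot from hand tremor, and the saturation keeps operator input within the gait controller's feasible speed range.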

    Vision-based methods for state estimation and control of robotic systems with application to mobile and surgical robots

    For autonomous systems that need to perceive the surrounding environment to accomplish a given task, vision is a highly informative exteroceptive sensory source. The richness of visual data makes it possible to build a complete description of the environment, collecting geometrical and semantic information (e.g., object pose, distances, shapes, colors, lights). The large amount of collected data supports both methods that exploit the totality of the data (dense approaches) and methods that use a reduced set obtained from feature extraction procedures (sparse approaches). This manuscript presents dense and sparse vision-based methods for control and sensing of robotic systems. First, a safe navigation scheme is presented for mobile robots moving in unknown environments populated by obstacles. For this task, dense visual information is used to perceive the environment (i.e., detect the ground plane and obstacles) and, in combination with other sensory sources, to estimate the robot motion with a linear observer. In parallel, sparse visual data are extracted as geometric primitives in order to implement a visual servoing control scheme that realizes the desired navigation behaviours. This controller relies on the visually estimated information and is designed to guarantee safety during navigation. In addition, redundant structures are exploited to re-arrange the internal configuration of the robot and reduce its encumbrance when the workspace is highly cluttered. Vision-based estimation methods are also relevant in other contexts. In the field of surgical robotics, having reliable data about unmeasurable quantities is both highly important and critical. This manuscript presents a Kalman-based observer that estimates the 3D pose of a suturing needle held by a surgical manipulator for robot-assisted suturing. The method exploits images acquired by the endoscope of the robot platform to extract relevant geometrical information and obtain projected measurements of the tool pose. The method has also been validated with a novel simulator designed for the da Vinci robotic platform, built to ease interfacing and deployment in ideal conditions for testing and validation. The Kalman-based observers mentioned above are classical passive estimators, whose inputs are theoretically arbitrary: there is no mechanism to actively adapt the input trajectories in order to optimize specific requirements on estimation performance. For this purpose, the active estimation paradigm is introduced and some related strategies are presented. More specifically, a novel active sensing algorithm employing dense visual information is described for a typical Structure-from-Motion (SfM) problem. The algorithm generates an optimal estimate of a scene observed by a moving camera while minimizing the maximum uncertainty of the estimation. This approach can be applied to any robotic platform and has been validated with a manipulator arm equipped with a monocular camera.
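The manuscript does not give the observer's equations here; as a generic sketch of the predict/update cycle that Kalman-based observers of this kind build on (the constant-velocity model, 2D projected measurement, and all covariance values below are assumptions for illustration, not the paper's actual needle-pose model):

```python
import numpy as np

# Minimal linear Kalman filter cycle. State: [x, y, vx, vy]; the camera
# provides only a projected position measurement [x, y]. All matrices and
# noise values are illustrative assumptions.
dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)   # constant-velocity motion model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # position-only measurement
Q = 1e-4 * np.eye(4)                         # process noise covariance
R = 1e-2 * np.eye(2)                         # measurement noise covariance

def kf_step(x, P, z):
    """One predict/update cycle given measurement z."""
    x = F @ x                                # predict state
    P = F @ P @ F.T + Q                      # predict covariance
    y = z - H @ x                            # innovation
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ y                            # corrected state
    P = (np.eye(4) - K @ H) @ P              # corrected covariance
    return x, P

x, P = np.zeros(4), np.eye(4)
for t in range(50):
    z = np.array([0.5 * t * dt, 0.2 * t * dt])  # synthetic straight-line track
    x, P = kf_step(x, P, z)
print(x[2:])  # estimated velocity approaches the true [0.5, 0.2]
```

The "passive" character noted in the abstract is visible here: the filter consumes whatever measurements arrive, whereas an active-sensing scheme would additionally choose the camera trajectory to shrink the covariance P as fast as possible.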

    Overcoming barriers and increasing independence: service robots for elderly and disabled people

    This paper discusses the potential for service robots to overcome barriers and increase the independence of elderly and disabled people. It gives a brief overview of the existing uses of service robots by disabled and elderly people, surveys advances in technology that will make new uses possible, and suggests some of these new applications. The paper also considers the design and other conditions to be met for user acceptance, discusses the complementarity of assistive service robots and personal assistance, and considers the types of applications and users for which service robots are and are not suitable.

    Gemini Telepresence Robot System Design: A Low-Cost Solution for Manipulation and Enhanced Perception of Telepresence Robots

    Current telepresence robots are costly and only allow the operator to see the environment on a 2D screen and move around on a wheeled base. These devices are therefore severely limited by their high barrier of entry, and the operator is unable to manipulate objects or easily perceive the world in 3D. To address these gaps in capabilities, Gemini, an open-source telepresence humanoid robot and interface station, was designed to let the operator manipulate objects, to expand the human interface by placing the user in the 3D world through a virtual reality (VR) headset, and to remain low-cost. Gemini's simple, low-cost, and intuitive controls promote early adoption by businesses and medical personnel with growing telepresence needs. The platform can also be used by robotics enthusiasts and university researchers studying humanoid robotics or human-robot interaction. This paper presents an overview of the Gemini robot's mechanical, electrical, and programmatic systems. The study found that Gemini enables object manipulation and improved user perception with intuitive controls, while costing approximately 30% less than commercial telepresence robots. The paper concludes with remarks on future iterations of the project.

    Development of an intelligent object for grasp and manipulation research

    Kõiva R, Haschke R, Ritter H. Development of an intelligent object for grasp and manipulation research. Presented at ICAR 2011, Tallinn, Estonia. In this paper we introduce a novel device, called iObject, which is equipped with tactile and motion-tracking sensors that allow for the evaluation of human and robot grasping and manipulation actions. Contact location and contact force, object acceleration in space (6D), and orientation relative to the earth (3D magnetometer) are measured and transmitted wirelessly over a Bluetooth connection. By allowing human-human, human-robot, and robot-robot comparisons to be made, iObject is a versatile tool for studying manual interaction. To demonstrate the efficiency and flexibility of iObject for the study of bimanual interactions, we report on a physiological experiment and evaluate the main parameters of the dual-handed manipulation task considered.

    Evaluation of haptic guidance virtual fixtures and 3D visualization methods in telemanipulation—a user study

    © 2019, The Author(s). This work presents a user-study evaluation of various visual and haptic feedback modes on a real telemanipulation platform. Of particular interest is the potential for haptic guidance virtual fixtures and 3D-mapping techniques to enhance efficiency and awareness in a simple teleoperated valve-turning task. An RGB-D camera gathers real-time color and geometric data of the remote scene, and the operator is presented with either a monocular color video stream, a 3D voxel-map representation of the remote scene, or the ability to place a haptic guidance virtual fixture to help complete the telemanipulation task. The efficacy of the feedback modes is then explored experimentally through a user study, and the different modes are compared on the basis of objective and subjective metrics. Despite the simple task and the large number of evaluation metrics, results show that the haptic virtual fixture yielded significantly better collision avoidance than 3D visualization alone. The anticipated performance gains from moving from 2D to 3D visualization were also observed. The remaining comparisons lead to exploratory inferences that inform future directions for focused and statistically significant studies.
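The study does not describe its fixture controller; as a minimal sketch of the general idea behind a guidance virtual fixture (the spring law, gain, and line-segment geometry below are assumptions for illustration, not the platform's actual implementation), the haptic device can render a force that pulls the tool tip toward an operator-placed guide segment:

```python
import numpy as np

# Illustrative guidance virtual fixture: a spring force toward the closest
# point on an operator-placed line segment a-b. Gain and geometry are
# hypothetical, not taken from the paper.
def guidance_force(tip, a, b, k=50.0):
    """Return a spring force pulling `tip` toward segment a-b (stiffness k)."""
    ab = b - a
    t = np.clip(np.dot(tip - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    closest = a + t * ab          # nearest point on the segment
    return k * (closest - tip)    # Hooke's-law pull back onto the fixture

a = np.array([0.0, 0.0, 0.0])     # segment start (e.g., approach path)
b = np.array([1.0, 0.0, 0.0])     # segment end (e.g., the valve)
tip = np.array([0.5, 0.1, 0.0])   # tool tip drifting off the fixture
f = guidance_force(tip, a, b)
print(f)                          # force pushes the tip back toward the line
```

Because the force grows with the deviation from the guide, the operator feels the fixture as a soft groove, which is consistent with the collision-avoidance benefit reported in the study.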