676 research outputs found

    Decoding of Multiple Wrist and Hand Movements Using a Transient EMG Classifier

    The design of prosthetic controllers by means of neurophysiological signals still poses a crucial challenge to bioengineers. State-of-the-art electromyographic (EMG) continuous pattern-recognition controllers rely on the questionable assumption that repeated muscular contractions produce repeatable patterns of steady-state EMG signals. Conversely, we propose an algorithm that decodes wrist and hand movements by processing the signals that immediately follow the onset of contraction (i.e., the transient EMG). We collected EMG data from the forearms of 14 non-amputee and 5 transradial amputee participants while they performed wrist flexion/extension, pronation/supination, and four hand grasps (power, lateral, bi-digital, open). We first identified the combination of wrist and hand movements that yielded the best control performance for the same participant (intra-subject classification). Then, we assessed the ability of our algorithm to classify participant data that were not included in the training set (cross-subject classification). Our controller achieved a median accuracy of 96% with non-amputees, while it achieved heterogeneous outcomes with amputees, with a median accuracy of 89%. Importantly, for each amputee it produced at least one acceptable combination of wrist-hand movements (i.e., with accuracy >85%). Regarding the cross-subject classifier, our algorithm obtained promising results with non-amputees (accuracy up to 80%) but considerably weaker ones with amputees (accuracy up to 35%), suggesting the need for further assessment with domain-adaptation strategies. In general, our offline outcomes, together with a preliminary online assessment, support the hypothesis that transient EMG decoding could represent a viable pattern-recognition strategy, encouraging further online assessments.
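    The abstract names the transient-EMG idea but not its implementation. As a rough illustration only, the sketch below shows one common recipe that matches the description: detect contraction onset on the rectified signal, compute cheap time-domain features on the window that immediately follows onset, and train a standard classifier. The sampling rate, thresholds, synthetic signals, and the LDA choice are all assumptions for the sake of the example, not the authors' design.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    FS = 1000  # sampling rate in Hz (illustrative)

    def detect_onset(rectified, fs=FS, k=6.0, baseline_ms=200):
        """Index where the rectified EMG first exceeds a baseline-derived
        threshold (mean + k standard deviations; k is deliberately conservative)."""
        base = rectified[: int(fs * baseline_ms / 1000)]
        above = np.nonzero(rectified > base.mean() + k * base.std())[0]
        return int(above[0]) if above.size else None

    def transient_features(emg, onset, fs=FS, window_ms=300):
        """Per-channel mean absolute value and waveform length, computed on the
        window that immediately follows contraction onset (the transient)."""
        win = emg[:, onset : onset + int(fs * window_ms / 1000)]
        return np.concatenate([np.abs(win).mean(axis=1),
                               np.abs(np.diff(win, axis=1)).sum(axis=1)])

    # Synthetic two-channel EMG for two "movements" with different activation patterns
    rng = np.random.default_rng(0)
    X, y = [], []
    for label, amps in [(0, (1.0, 0.3)), (1, (0.3, 1.0))]:
        for _ in range(20):
            sig = rng.normal(0, 0.05, (2, 1000))       # rest-level noise
            for ch, a in enumerate(amps):
                sig[ch, 400:800] += rng.normal(0, a, 400)  # contraction burst
            onset = detect_onset(np.abs(sig).sum(axis=0))
            X.append(transient_features(sig, onset))
            y.append(label)

    clf = LinearDiscriminantAnalysis().fit(X, y)
    acc = clf.score(X, y)
    ```

    On this toy data the two movements are trivially separable; the point is only the structure of the pipeline, not the accuracy figure.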

    A Taxonomy of Freehand Grasping Patterns in Virtual Reality

    Grasping is the most natural and primary interaction paradigm people perform every day; it allows us to pick up and manipulate the objects around us, whether drinking a cup of coffee or writing with a pen. Grasping has been studied extensively in real environments in order to understand and structure the way people grasp and interact with objects, producing categories, models and theories of grasping. Due to the complexity of the human hand, classifying grasping knowledge to provide meaningful insights is a challenging task, which has led researchers to develop grasp taxonomies that guide emerging grasping work (in fields such as anthropology, robotics and hand surgery) in a systematic way. While this body of work exists for real grasping, how grasping transfers to virtual environments remains unexplored. The emergence of robust hand-tracking sensors for virtual devices now allows the development of grasp models that enable VR to simulate real grasping interactions. However, existing work has not yet explored the differences and nuances between virtual and real object grasping, so virtual systems that build grasping models on real-world grasping knowledge may rest on unverified assumptions about the way users intuitively grasp and interact with virtual objects. To address this gap, this thesis presents the first user elicitation studies to explore grasping patterns directly in VR. The first study establishes the main similarities and differences between real and virtual object grasping; the second explores how virtual object shape influences grasping patterns; the third focuses on visual thermal cues and how they influence grasp metrics; and the fourth examines other object characteristics, such as stability and complexity, and how they influence grasps in VR.
To provide structured insights on grasping interactions in VR, the results are synthesized in the first VR Taxonomy of Grasp Types, developed following current methods for building grasping and HCI taxonomies and iterated to present an updated, more complete taxonomy. The results show that users appear to mimic real grasping behaviour in VR, but they also reveal difficulties with object size estimation and a generally lower variability in the grasp types used. The taxonomy shows that only five grasps account for the majority of grasp data in VR, which computer systems can exploit to achieve natural and intuitive interactions at lower computational cost. Further, the findings show that virtual object characteristics such as shape, stability and complexity, as well as visual cues for temperature, influence grasp metrics such as aperture, category, type, location and dimension. These changes in grasping patterns, together with virtual object categorisation methods, can inform design decisions when developing intuitive interactions, virtual objects and environments, taking a step toward natural grasping interaction in VR.

    Surgical Subtask Automation for Intraluminal Procedures using Deep Reinforcement Learning

    Intraluminal procedures have opened up a new sub-field of minimally invasive surgery in which flexible instruments navigate through complex luminal structures of the body, resulting in reduced invasiveness and improved patient outcomes. One of the major challenges in this field is the accurate and precise control of the instrument inside the human body, and robotics has emerged as a promising solution. However, successful robotic intraluminal interventions require the control of the instrument to be automated to a large extent. The thesis first examines the state of the art in intraluminal surgical robotics and identifies the key challenges in this field, which include the need for safe and effective tool manipulation and the ability to adapt to unexpected changes in the luminal environment. To address these challenges, the thesis proposes several levels of autonomy that enable the robotic system to perform individual subtasks autonomously while the surgeon retains overall control of the procedure. This approach facilitates the development of specialized algorithms, such as Deep Reinforcement Learning (DRL), for subtasks like navigation and tissue manipulation, producing robust surgical gestures. Additionally, the thesis proposes a safety framework that provides formal guarantees to prevent risky actions. The presented approaches are evaluated through a series of experiments on simulation and robotic platforms. The experiments demonstrate that subtask automation can improve the accuracy and efficiency of tool positioning and tissue manipulation while reducing the cognitive load on the surgeon. The results of this research have the potential to improve the reliability and safety of intraluminal surgical interventions, ultimately leading to better outcomes for patients and surgeons.
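    The abstract pairs DRL-trained subtask policies with a safety framework that rules out risky actions. The toy sketch below shows only the bare mechanics of that pairing: tabular Q-learning on a one-dimensional "lumen", where a safety layer masks any action that would leave the workspace before the policy may choose it. The environment, reward, and masking rule are invented for illustration and bear no relation to the thesis's actual surgical platform.

    ```python
    import numpy as np

    N_STATES, GOAL, ACTIONS = 10, 9, (-1, +1)   # 1-D "lumen": move back or forward

    def safe(state, action):
        """Safety layer: forbid actions that would leave the lumen (a stand-in
        for the formal guarantees constraining the learned policy)."""
        return 0 <= state + action < N_STATES

    def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
        rng = np.random.default_rng(seed)
        Q = np.zeros((N_STATES, len(ACTIONS))))  # state-action value table
        for _ in range(episodes):
            s, steps = 0, 0
            while s != GOAL and steps < 200:
                # epsilon-greedy choice, restricted to actions the safety layer allows
                allowed = [i for i, a in enumerate(ACTIONS) if safe(s, a)]
                i = (rng.choice(allowed) if rng.random() < eps
                     else max(allowed, key=lambda j: Q[s, j]))
                s2 = s + ACTIONS[i]
                r = 1.0 if s2 == GOAL else -0.01    # small step penalty, goal bonus
                Q[s, i] += alpha * (r + gamma * Q[s2].max() - Q[s, i])
                s = s2
                steps += 1
        return Q

    Q = train()
    policy = [ACTIONS[int(np.argmax(Q[s]))] for s in range(N_STATES)]
    ```

    After training, the greedy policy moves forward toward the goal from every state, and the safety mask guarantees it never attempted an out-of-bounds action during learning.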

    Exploring Robot Teleoperation in Virtual Reality

    This thesis presents research on VR-based robot teleoperation, focusing on remote environment visualisation in virtual reality, the effect of the remote environment reconstruction scale on the human operator's ability to control the robot, and the operator's visual attention patterns when teleoperating a robot from virtual reality. A VR-based robot teleoperation framework was developed; it is compatible with various robotic systems and cameras, allowing teleoperation and supervised control with any ROS-compatible robot and visualisation of the environment through any ROS-compatible RGB and RGBD cameras. The framework includes mapping, segmentation, tactile exploration, and non-physically demanding VR interface navigation and controls through any Unity-compatible VR headset and controllers or haptic devices. Point clouds are a common way to visualise remote environments in 3D, but they often suffer from distortions and occlusions, making it difficult to represent object textures accurately. This can lead to poor decision-making during teleoperation if objects are inaccurately represented in the VR reconstruction. A study using an end-effector-mounted RGBD camera with OctoMap mapping of the remote environment was conducted to explore the remote environment with fewer point cloud distortions and occlusions while using relatively little bandwidth. Additionally, a tactile exploration study proposed a novel method for visually presenting information about objects' materials in the VR interface, to improve the operator's decision-making and address the challenges of point cloud visualisation. Two studies were conducted to understand the effect of dynamic virtual world scaling on teleoperation flow. The first study investigated rate mode control with constant and variable mapping of the operator's joystick position to the speed (rate) of the robot's end-effector, depending on the virtual world scale.
The results showed that variable mapping allowed participants to teleoperate the robot more effectively, but at the cost of increased perceived workload. The second study examined how operators used the virtual world scale in supervised control, comparing the scales participants chose at the beginning and end of a 3-day experiment. The results showed that as operators became more skilled at the task they, as a group, adopted a different virtual world scale, and that participants' prior video gaming experience also affected the scale they chose. Similarly, the visual attention study investigated how operators' visual attention changes as they become better at teleoperating a robot using the framework. The results revealed the most important objects in the VR-reconstructed remote environment, as indicated by operators' visual attention patterns, and showed that their visual priorities shifted as they became more proficient. The study also demonstrated that operators' prior video gaming experience affects both their ability to teleoperate the robot and their visual attention behaviours.
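    The abstract contrasts constant and scale-dependent rate-mode mappings without giving the mapping functions. The minimal sketch below shows one plausible form of each, assuming a gain, a deadzone, and a simple divide-by-scale rule, none of which come from the thesis itself.

    ```python
    def rate_command(stick, gain=0.1, deadzone=0.05):
        """Constant mapping: joystick deflection in [-1, 1] commands an
        end-effector rate (m/s). Deflections inside the deadzone are ignored
        to avoid drift from a slightly off-centre stick."""
        if abs(stick) < deadzone:
            return 0.0
        return gain * stick

    def rate_command_scaled(stick, world_scale, gain=0.1, deadzone=0.05):
        """Variable mapping: the commanded rate is divided by the current
        virtual-world scale, so a magnified (zoomed-in) view yields finer,
        slower motion and a shrunken view yields coarser, faster motion."""
        return rate_command(stick, gain, deadzone) / world_scale
    ```

    For example, full deflection at scale 2.0 (magnified) commands half the rate of full deflection at scale 1.0, which is one intuitive way variable mapping could trade speed for precision.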

    Design, fabrication and stiffening of soft pneumatic robots

    Although compliance allows a soft robot to be under-actuated and generalises its control, it also limits the robot's ability to exert forces on the environment: there is a trade-off between robots being compliant and being precise and strong. Many mechanisms that change a robot's stiffness on demand have been proposed, but none are perfect; they usually compromise the device's compliance and restrict its motion capabilities. Keeping these issues in mind, this thesis focuses on creating robust and reliable pneumatic actuators designed to be easily manufactured with simple tools. They are optimised towards linear behaviour, which simplifies modelling and improves control strategies. The principal idea behind this linearisation is a reinforcement strategy designed to amplify the desired, and limit the unwanted, deformation of the device. Such reinforcement can be achieved using fibres or 3D-printed structures. I have shown that the linearity of the actuation is, among other factors, a function of the reinforcement density and shape: the response of densely fibre-reinforced actuators with a circular cross-section is significantly more linear than that of non-reinforced or non-circular actuators. I have explored moulding manufacturing techniques and a mixture of 3D printing and moulding, optimising many aspects of these techniques for reliability, repeatability, and process simplification. I have proposed and implemented a novel moulding technique that uses disposable moulds and can easily be used by an inexperienced operator. I have also addressed the compliance-stiffness trade-off by proposing an intelligent structure that behaves differently depending on the conditions. Thanks to its properties, such a structure could be used in applications that require flexibility but also the ability to resist external disturbances when necessary.
Due to its nature, individual cells of the proposed system could be used to implement physical logic elements, resulting in embodied intelligent behaviours. As a proof of concept, I have demonstrated the use of my actuators in several applications, including prosthetic hands and octopus- and fish-inspired robots. Each of these devices benefits from a slightly different actuation system, but each is based on the same core idea: fibre-reinforced actuators. I have shown that the proposed design and manufacturing techniques have several advantages over the methods used so far: the manufacturing methods I developed are more reliable, more repeatable, and require less manual work than the various other methods described in the literature. I have also shown that the proposed actuators can be successfully used in real-life applications. Finally, one of the most important outcomes of my research is a contribution to an orthotic device based on soft pneumatic actuators. The device has been successfully deployed and, at the time of submission of this thesis, had been used by a patient for several months, with good results reported.
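    The claim that reinforcement makes the pressure-deformation response "more linear" can be quantified with an ordinary goodness-of-linear-fit measure. The sketch below does this on invented toy curves: an idealised near-linear reinforced response and a non-reinforced one with a ballooning (quadratic) term. The data, coefficients, and units are illustrative assumptions, not measurements from the thesis.

    ```python
    import numpy as np

    def linearity_r2(pressure, elongation):
        """R^2 of a least-squares line fit; 1.0 means a perfectly linear response."""
        slope, intercept = np.polyfit(pressure, elongation, 1)
        pred = slope * pressure + intercept
        ss_res = np.sum((elongation - pred) ** 2)
        ss_tot = np.sum((elongation - elongation.mean()) ** 2)
        return 1.0 - ss_res / ss_tot

    p = np.linspace(0, 100, 20)             # pressure, kPa (toy values)
    reinforced = 0.4 * p                    # fibre-reinforced: near-linear response
    unreinforced = 0.4 * p + 0.003 * p**2   # ballooning adds curvature

    r2_fibre = linearity_r2(p, reinforced)
    r2_plain = linearity_r2(p, unreinforced)
    ```

    Comparing such R^2 values across reinforcement densities and cross-section shapes is one simple way to back the linearity claim with a number.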

    Signal and Information Processing Methods for Embedded Robotic Tactile Sensing Systems

    The human skin contains several types of receptors with different properties and responses that detect stimuli resulting from mechanical stimulation. Pressure receptors are the most important type for the exploration and manipulation of objects. In recent decades, smart tactile sensing based on different sensing techniques has been developed, as its application in robotics and prosthetics is of huge interest, mainly driven by the prospect of autonomous and intelligent robots that can interact with the environment. However, regarding the estimation of object properties on robots, hardness detection is still a major limitation due to the lack of techniques to estimate it. Furthermore, finding processing methods that can interpret the measurements from multiple sensors and extract relevant information is a challenging task. Moreover, embedding processing methods and machine learning algorithms in robotic applications to extract meaningful information, such as object properties, from tactile data is an ongoing challenge, constrained by device limits (power, memory, etc.), the computational complexity of the processing and machine learning algorithms, and the application requirements (real-time operation, high prediction performance). In this dissertation, we focus on the design and implementation of pre-processing methods and machine learning algorithms that handle the aforementioned challenges for a tactile sensing system in a robotic application. First, we propose a tactile sensing system for robotic applications. Then we present efficient pre-processing and feature extraction methods for our tactile sensors. Next, we propose a learning strategy to reduce the computational cost of our processing unit in object classification using a sensorized Baxter robot. Finally, we present a real-time robotic tactile sensing system for hardness classification on a resource-constrained device.
The first study is a further assessment of the sensing system based on the PVDF sensors and the interface electronics developed in our lab. In particular, it presents the development of a skin patch (a multilayer structure) that allows the sensors to be used in several applications, such as robotic hands and grippers; it then characterizes the developed skin patch and validates the sensing system. Moreover, we designed a filter to remove noise and detect touch. The experimental assessment demonstrated that the developed skin patch and the interface electronics can indeed detect different touch patterns and stimulus waveforms. The experiments also defined the frequency range of interest and the response of the system to realistic interactions, such as grasp and release events. In the next study, we presented an easy integration of our tactile sensing system into the Baxter gripper. Computationally efficient pre-processing techniques were designed to filter the signal and extract relevant information from multiple sensor signals, in addition to feature extraction methods. These processing methods in turn also aim to reduce the computational complexity of the machine learning algorithms used for object classification. The proposed system and processing strategy were evaluated on an object classification application by integrating our system into the gripper and collecting data while grasping multiple objects. We further proposed a learning strategy to achieve a trade-off between generalization accuracy and the computational cost of the whole processing unit. The proposed pre-processing and feature extraction techniques, together with the learning strategy, led to models with extremely low complexity and very high generalization accuracy. Moreover, the support vector machine achieved the best trade-off between accuracy and computational cost on tactile data from our sensors.
Finally, we presented the development and edge implementation of a real-time tactile sensing system for hardness classification on the Baxter robot, based on machine learning and deep learning algorithms. We developed and implemented in plain C a set of functions that provide the fundamental layer functionalities of the machine learning and deep learning (ML and DL) models, along with the pre-processing methods to extract the features and normalize the data. The models can be deployed to any device that supports C code, since they do not rely on any existing libraries. Shallow ML/DL algorithms suitable for deployment on resource-constrained devices were designed. To evaluate our work, we collected data by grasping objects of different hardness and shape. Two classification problems were addressed: 5 levels of hardness classified on the same object shape, and 5 levels of hardness classified on two different object shapes. Furthermore, optimization techniques were employed. The models and pre-processing were implemented on a resource-constrained device, where we assessed the performance of the system in terms of accuracy, memory footprint, time latency, and energy consumption. For both classification problems we achieved real-time inference (< 0.08 ms), low power consumption (3.35 μJ per inference), extremely small models (1576 bytes), and high accuracy (above 98%).
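    The abstract reports that an SVM on cheap time-domain features gave the best accuracy/cost trade-off for hardness classification. The sketch below illustrates that style of pipeline on synthetic stand-in data, since the PVDF recordings are not available: three microcontroller-friendly features feed a linear SVM over five invented "hardness levels". The signal model, feature choice, and all numbers are assumptions made for the example.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def extract_features(signal):
        """Cheap time-domain features of the kind that fit on a microcontroller:
        mean absolute value, variance, and peak amplitude."""
        return [np.abs(signal).mean(), signal.var(), np.abs(signal).max()]

    # Synthetic stand-in for PVDF contact transients: harder objects are modelled
    # as producing larger, faster-decaying oscillations (an invented signal model).
    rng = np.random.default_rng(1)
    X, y = [], []
    for level in range(5):                       # 5 hardness levels
        for _ in range(30):
            t = np.linspace(0, 1, 200)
            sig = (1 + level) * np.exp(-t / (0.5 - 0.08 * level)) * np.sin(40 * t)
            sig += rng.normal(0, 0.05, t.size)   # sensor noise
            X.append(extract_features(sig))
            y.append(level)

    clf = SVC(kernel="linear").fit(X, y)
    acc = clf.score(X, y)
    ```

    A linear kernel keeps inference to a handful of dot products, which is the property that makes this family of models attractive for a plain-C, resource-constrained deployment.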

    Design and Experimental Evaluation of a Haptic Robot-Assisted System for Femur Fracture Surgery

    In the face of challenges encountered during femur fracture surgery, such as high rates of malalignment and X-ray exposure to operating personnel, robot-assisted surgery has emerged as an alternative to conventional state-of-the-art surgical methods. This paper introduces the development of Robossis, a haptic system for robot-assisted femur fracture surgery. Robossis comprises a 7-DOF haptic controller and a 6-DOF surgical robot. A unilateral control architecture is developed to address the kinematic mismatch and the motion transfer between the haptic controller and the Robossis surgical robot. A real-time motion control pipeline is designed to address the motion transfer and is evaluated through experimental testing. The analysis shows that the Robossis surgical robot can adhere to the desired trajectory from the haptic controller with an average translational error of 0.32 mm and a rotational error of 0.07 deg. Additionally, a haptic rendering pipeline is developed to resolve the kinematic mismatch by constraining the haptic controller (user hand) movement within the permissible joint limits of the Robossis surgical robot. Lastly, in a cadaveric lab test, the Robossis system assisted surgeons during a mock femur fracture surgery. The results show that Robossis can provide an intuitive solution for surgeons performing femur fracture surgery. Comment: This paper is to be submitted to an IEEE journal.
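    The paper reports average tracking errors of 0.32 mm (translation) and 0.07 deg (rotation). The sketch below shows one standard way such metrics are computed from logged desired and actual poses: mean Euclidean distance for positions and mean geodesic angle between unit quaternions for orientations. This is a generic formulation, not the authors' evaluation code.

    ```python
    import numpy as np

    def translational_error_mm(p_des, p_act):
        """Mean Euclidean distance between desired and actual positions.
        Inputs are (N, 3) arrays in millimetres."""
        return float(np.linalg.norm(p_des - p_act, axis=1).mean())

    def rotational_error_deg(q_des, q_act):
        """Mean geodesic angle between unit quaternions, in degrees.
        Inputs are (N, 4) arrays; |dot| handles the q / -q double cover."""
        dots = np.abs(np.sum(q_des * q_act, axis=1))
        angles = 2.0 * np.arccos(np.clip(dots, -1.0, 1.0))
        return float(np.degrees(angles).mean())
    ```

    For example, an actual trajectory offset from the desired one by 0.32 mm at every sample yields exactly a 0.32 mm mean translational error, and a constant 10-degree axis rotation yields a 10-degree mean rotational error.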

    Robots learn to behave: improving human-robot collaboration in flexible manufacturing applications

    The abstract is in the attachment.

    Methods, Models, and Datasets for Visual Servoing and Vehicle Localisation

    Machine autonomy has become a vibrant part of industrial and commercial aspirations, and a growing demand exists for dexterous and intelligent machines that can work in unstructured environments without human assistance. An autonomously operating machine should sense its surroundings, classify the different kinds of objects it observes, and interpret sensory information to perform the necessary operations. This thesis summarizes original methods aimed at enhancing a machine's capability for autonomous operation. These methods and the corresponding results are grouped into two main categories. The first category consists of research that focuses on improving visual servoing systems for robotic manipulators so that they can position workpieces accurately. We start our investigation with the hand-eye calibration problem, which concerns calibrating visual sensors with a robotic manipulator. We investigate the problem thoroughly from various perspectives and provide alternative formulations of the problem and of the error objectives. The experimental results demonstrate that the proposed methods are robust and yield accurate solutions when tested on real and simulated data. The work package is bundled as a toolkit and is available online for public use. In an extension, we propose a constrained multiview pose estimation approach for robotic manipulators. The approach exploits the geometric constraints available on the robotic system and infuses them directly into the pose estimation method. The empirical results demonstrate higher accuracy and significantly higher precision compared to other studies. In the second part of this research, we tackle problems pertaining to autonomous vehicles and related applications. First, we introduce a pose estimation and mapping scheme that extends the application of visual Simultaneous Localization and Mapping to unstructured dynamic environments. We identify, extract, and discard dynamic entities from the pose estimation step.
Moreover, we track the dynamic entities and actively update the map based on changes in the environment. Having observed the limitations of existing datasets during our earlier work, we introduce FinnForest, a novel dataset for testing and validating the performance of visual odometry and Simultaneous Localization and Mapping methods in an unstructured environment. We explored an environment with a forest landscape and recorded data with multiple stereo cameras, an IMU, and a GNSS receiver. The dataset offers unique challenges owing to the nature of the environment, the variety of trajectories, and changes in season, weather, and daylight conditions. Building upon the future work proposed with the FinnForest dataset, we introduce a novel scheme that can localize an observer under extreme perspective changes. More specifically, we tailor the problem to autonomous vehicles so that they can recognize a previously visited place irrespective of the direction in which they previously traveled the route. To the best of our knowledge, this is the first study to accomplish bi-directional loop closure on monocular images with a nominal field of view. To solve the localisation problem, we separate place identification from pose regression by using deep learning in two steps. We demonstrate that bi-directional loop closure on monocular images is indeed possible when the problem is posed correctly and the training data are adequately leveraged. All methodological contributions of this thesis are accompanied by extensive empirical analysis and discussion, demonstrating the need for, novelty of, and improvement in performance over existing methods for pose estimation, odometry, mapping, and place recognition.
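    Hand-eye calibration, the starting point of the thesis, is classically posed as solving AX = XB, where A and B are relative motions of the robot hand and the camera and X is the unknown hand-eye transform. The sketch below is a generic two-step (Tsai-Lenz-style) least-squares solver, not any of the thesis's alternative formulations: rotation first, by aligning rotation-vector pairs, then translation by linear least squares.

    ```python
    import numpy as np

    def rotvec_from_matrix(R):
        """Axis-angle (log map) vector of a 3x3 rotation matrix."""
        angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
        if angle < 1e-9:
            return np.zeros(3)
        axis = np.array([R[2, 1] - R[1, 2],
                         R[0, 2] - R[2, 0],
                         R[1, 0] - R[0, 1]]) / (2.0 * np.sin(angle))
        return angle * axis

    def solve_hand_eye(As, Bs):
        """Least-squares X with A_i X = X B_i for 4x4 homogeneous motions.
        Step 1: conjugation maps B's rotation vectors onto A's (alpha = Rx beta),
        so Rx is the Kabsch alignment of the two vector sets.
        Step 2: (R_A - I) t_x = Rx t_B - t_A, stacked over all pairs."""
        alphas = np.array([rotvec_from_matrix(A[:3, :3]) for A in As])
        betas = np.array([rotvec_from_matrix(B[:3, :3]) for B in Bs])
        U, _, Vt = np.linalg.svd(alphas.T @ betas)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
        Rx = U @ D @ Vt
        M = np.vstack([A[:3, :3] - np.eye(3) for A in As])
        d = np.concatenate([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
        tx = np.linalg.lstsq(M, d, rcond=None)[0]
        X = np.eye(4)
        X[:3, :3], X[:3, 3] = Rx, tx
        return X
    ```

    At least two motion pairs with non-parallel rotation axes are required for the problem to be well posed; the constrained multiview formulation mentioned in the abstract would add further geometric constraints on top of this baseline.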