
    A Multi-Robot Cooperation Framework for Sewing Personalized Stent Grafts

    This paper presents a multi-robot system for manufacturing personalized medical stent grafts. The proposed system adopts a modular design comprising a personalized mandrel module, a bimanual sewing module, and a vision module. The mandrel module incorporates the personalized geometry of patients, while the bimanual sewing module adopts a learning-by-demonstration approach to transfer human hand-sewing skills to the robots. The human demonstrations were first observed by the vision module and then encoded using a statistical model to generate the reference motion trajectories. During autonomous robot sewing, the vision module coordinates the multi-robot collaboration. Experimental results show that the robots can adapt to generalized stent designs. The proposed system can also be used for other manipulation tasks, especially for flexible production of customized products where bimanual or multi-robot cooperation is required.
    Comment: 10 pages, 12 figures, accepted by IEEE Transactions on Industrial Informatics. Keywords: modularity, medical device customization, multi-robot system, robot learning, visual servoing, robot sewing
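The abstract mentions encoding demonstrations with a statistical model to generate reference trajectories. As an illustrative sketch only (the paper's actual model is not specified here), time-aligned demonstrations can be summarized as a time-indexed Gaussian, giving a mean reference trajectory plus a per-step covariance envelope; the function name `encode_demonstrations` and the synthetic data are hypothetical:

```python
import numpy as np

def encode_demonstrations(demos):
    """Encode time-aligned demonstrations (N x T x D array) as a
    per-time-step Gaussian: a mean trajectory plus covariance envelope."""
    demos = np.asarray(demos)            # N demos, T steps, D dimensions
    mean = demos.mean(axis=0)            # (T, D) reference trajectory
    # Per-step covariance captures where the demonstrators varied most
    cov = np.stack([np.cov(demos[:, t, :].T) for t in range(demos.shape[1])])
    return mean, cov

# Three noisy arcs stand in for repeated hand-sewing demonstrations
t = np.linspace(0.0, np.pi, 50)
rng = np.random.default_rng(0)
demos = [np.stack([t, np.sin(t) + 0.01 * rng.standard_normal(50)], axis=1)
         for _ in range(3)]
mean, cov = encode_demonstrations(demos)
```

A richer encoding (e.g. a Gaussian mixture with regression over time) follows the same idea: the statistics of the demonstrations, not any single demonstration, define the reference motion.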

    Is the timed-up and go test feasible in mobile devices? A systematic review

    The number of older adults is increasing worldwide, and it is expected that by 2050 over 2 billion individuals will be more than 60 years old. Older adults are exposed to numerous pathological problems such as Parkinson’s disease, amyotrophic lateral sclerosis, post-stroke conditions, and orthopedic disturbances. Several physiotherapy methods that involve measurement of movement, such as the Timed-Up and Go test, can be used to support efficient and effective evaluation of pathological symptoms and promotion of health and well-being. In this systematic review, the authors aim to determine how the inertial sensors embedded in mobile devices are employed for the measurement of the different parameters involved in the Timed-Up and Go test. The main contribution of this paper is the identification of the different studies that utilize the sensors available in mobile devices to measure the results of the Timed-Up and Go test. The results show that the motion sensors embedded in mobile devices can be used for these types of studies, and the most commonly used sensors are the magnetometer, accelerometer, and gyroscope available in off-the-shelf smartphones. The features analyzed in this paper are categorized as quantitative, quantitative + statistic, dynamic balance, gait properties, state transitions, and raw statistics. These features utilize the accelerometer and gyroscope sensors and facilitate recognition of daily activities, accidents such as falls, and some diseases, as well as the measurement of the subject’s performance during the test execution.
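As a minimal sketch of how a smartphone inertial signal can yield a Timed-Up and Go result (this is illustrative, not a method from any of the reviewed studies), the total test duration can be estimated from the vertical acceleration by finding the span of above-threshold activity; `tug_duration` and the 0.5 m/s² threshold are hypothetical choices:

```python
import numpy as np

def tug_duration(acc_v, fs, thresh=0.5):
    """Estimate the Timed-Up and Go duration from the vertical
    accelerometer signal (gravity removed, m/s^2): the time between the
    first and last samples whose magnitude exceeds `thresh`."""
    active = np.flatnonzero(np.abs(acc_v) > thresh)
    if active.size == 0:
        return 0.0
    return (active[-1] - active[0]) / fs

# Synthetic 12 s recording at 100 Hz with movement between 1 s and 9 s
fs = 100
acc = np.zeros(12 * fs)
acc[1 * fs:9 * fs] = 1.0
duration = tug_duration(acc, fs)
```

Real recordings would first need gravity removal and smoothing, and finer features (sit-to-stand, turning) require segmenting the gyroscope signal rather than a single threshold.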

    An Overview of Self-Adaptive Technologies Within Virtual Reality Training

    This overview presents the current state of the art of self-adaptive technologies within virtual reality (VR) training. Virtual reality training and assessment is increasingly used in five key areas: medical, industrial & commercial training, serious games, rehabilitation, and remote training such as Massive Open Online Courses (MOOCs). Adaptation can be applied to five core technologies of VR, including haptic devices, stereo graphics, adaptive content, assessment, and autonomous agents. Automation of VR training can contribute to automation of actual procedures, including remote and robot-assisted surgery, which reduces injury and improves the accuracy of the procedure. Automated haptic interaction can enable tele-presence and tactile interaction with virtual artefacts in either remote or simulated environments. Automation, machine learning, and data-driven features play an important role in providing trainee-specific, individually adaptive training content. Data from trainee assessment can form an input to autonomous systems for customised training and automated difficulty levels that match individual requirements. Self-adaptive technology has previously been developed within individual technologies of VR training. One conclusion of this research is that an enhanced portable framework does not yet exist but is needed, and that it would be beneficial to combine automation of the core technologies into a reusable automation framework for VR training.
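The idea of feeding trainee assessment into automated difficulty levels can be illustrated with a simple staircase-style adapter (a generic sketch, not a mechanism described in the overview); `adapt_difficulty`, the 75% target rate, and the ±10% band are all hypothetical:

```python
def adapt_difficulty(level, scores, target=0.75, step=1, lo=1, hi=10):
    """One step of a staircase adapter: raise the difficulty level when
    the trainee's recent success rate (scores of 0/1 per trial) exceeds
    the target band, lower it when performance drops below the band."""
    rate = sum(scores) / len(scores)
    if rate > target + 0.1:
        level = min(hi, level + step)   # too easy: increase difficulty
    elif rate < target - 0.1:
        level = max(lo, level - step)   # too hard: decrease difficulty
    return level
```

Called after each block of trials, this keeps the trainee near a fixed success rate; a data-driven system would replace the fixed thresholds with a learned model of trainee performance.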

    Wearable Sensors and Smart Devices to Monitor Rehabilitation Parameters and Sports Performance: An Overview

    A quantitative evaluation of kinetic parameters, the joint’s range of motion, heart rate, and breathing rate can be employed in sports performance tracking and rehabilitation monitoring following injuries or surgical operations. However, many of the current detection systems are expensive and designed for clinical use, requiring the presence of a physician and medical staff to assist users in the device’s positioning and measurements. The goal of wearable sensors is to overcome the limitations of current devices, enabling the acquisition of a user’s vital signs directly from the body in an accurate and non-invasive way. In sports activities, wearable sensors allow athletes to monitor performance and body movements objectively, going beyond the limits of the coach’s subjective evaluation. The main goal of this review paper is to provide a comprehensive overview of wearable technologies and sensing systems to detect and monitor the physiological parameters of patients during post-operative rehabilitation and athletes’ training, and to present evidence that supports the efficacy of this technology for healthcare applications. First, a classification is introduced of the human physiological parameters acquired by sensors attached to sensitive skin locations or worn as a part of garments, which carry important feedback on the user’s health status. Then, a detailed description of the electromechanical transduction mechanisms allows a comparison of the technologies used in wearable applications to monitor sports and rehabilitation activities. This paves the way for an analysis of wearable technologies, providing a comprehensive comparison of the current state of the art of available sensors and systems. Comparative and statistical analyses are provided to point out useful insights for defining the best technologies and solutions for monitoring body movements. Lastly, the presented review is compared with similar reviews reported in the literature to highlight its strengths and novelties.

    Smart Navigation in Surgical Robotics

    Minimally invasive surgery, and specifically laparoscopic surgery, has brought about a major change in how abdominal surgical interventions are performed. Laparoscopic surgery has since evolved towards even less invasive techniques, such as Single Port Access Surgery. This technique consists of making a single incision through which the instruments and the laparoscopic camera are introduced via a single multi-port trocar. The main advantages of this technique are a shorter hospital stay for the patient and better cosmetic results, since the trocar is usually inserted through the navel, leaving the scar hidden within it. However, the fact that the instruments are introduced through the same trocar makes the intervention more difficult for the surgeon, who needs specific skills for this type of procedure. This thesis addresses the problem of navigating surgical instruments with teleoperated robotic platforms in single-port surgery. Specifically, a navigation method is proposed that features a virtual remote center of rotation coinciding with the insertion point of the instruments (the fulcrum point). To estimate this point, the forces exerted by the abdomen on the surgical instruments are used, measured by force sensors mounted at the base of the instruments. Because these instruments also interact with soft tissue inside the abdomen, which would distort the estimation of the insertion point, a method is needed to detect this circumstance. To solve this, a tissue-interaction detector based on hidden Markov models has been used, trained to detect four generic gestures.
This thesis also proposes the use of haptic guidance to improve the surgeon's experience when using teleoperated robotic platforms. Specifically, Learning from Demonstration is proposed to generate forces that can guide the surgeon during the execution of specific tasks. The proposed navigation method has been implemented on the CISOBOT surgical platform, developed at the Universidad de Málaga. The experimental results validate both the proposed navigation method and the soft-tissue interaction detector. In addition, a preliminary study of the haptic guidance system has been carried out: a generic task, peg insertion, was used for the experiments needed to show that the proposed method is valid for this task and for similar ones.
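The fulcrum-point estimation from base-mounted force sensing can be sketched as a least-squares problem under the idealised assumption that the measured torque is due only to the abdominal-wall reaction at the insertion point, so that tau = r × F for each measurement. This is an illustrative reconstruction, not the thesis's exact algorithm; `estimate_fulcrum` and the synthetic wrenches are hypothetical:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix so that skew(a) @ b == np.cross(a, b)."""
    x, y, z = v
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def estimate_fulcrum(forces, torques):
    """Least-squares estimate of the insertion point r from wrenches
    measured at the instrument base, using tau_i = r x F_i, which
    rearranges to skew(F_i) @ r = -tau_i for each measurement i."""
    A = np.vstack([skew(f) for f in forces])
    b = np.concatenate([-np.asarray(t) for t in torques])
    r, *_ = np.linalg.lstsq(A, b, rcond=None)
    return r

# Synthetic check: wrenches generated from a known fulcrum point
rng = np.random.default_rng(1)
r_true = np.array([0.10, -0.05, 0.30])
forces = rng.standard_normal((5, 3))
torques = [np.cross(r_true, f) for f in forces]
r_est = estimate_fulcrum(forces, torques)
```

Each single measurement constrains r only up to motion along the force direction (skew(F) has rank 2), so several non-parallel force samples are needed; this is also why tool-tissue interactions inside the abdomen, which violate the tau = r × F model, must be detected and excluded.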

    A gaze-contingent framework for perceptually-enabled applications in healthcare

    Patient safety and quality of care remain the focus of the smart operating room of the future. Some of the most influential factors with a detrimental effect are related to suboptimal communication among the staff, poor flow of information, staff workload and fatigue, and ergonomics and sterility in the operating room. While technological developments constantly transform the operating room layout and the interaction between surgical staff and machinery, a vast array of opportunities arises for the design of systems and approaches that can enhance patient safety and improve workflow and efficiency. The aim of this research is to develop a real-time gaze-contingent framework towards a "smart" operating suite that will enhance the operator's ergonomics by allowing perceptually-enabled, touchless and natural interaction with the environment. The main feature of the proposed framework is the ability to acquire and utilise the plethora of information provided by the human visual system to allow touchless interaction with medical devices in the operating room. In this thesis, a gaze-guided robotic scrub nurse, a gaze-controlled robotised flexible endoscope and a gaze-guided assistive robotic system are proposed. Firstly, the gaze-guided robotic scrub nurse is presented; surgical teams performed a simulated surgical task with the assistance of a robot scrub nurse, which complements the human scrub nurse in the delivery of surgical instruments, following gaze selection by the surgeon. Then, the gaze-controlled robotised flexible endoscope is introduced; experienced endoscopists and novice users performed a simulated examination of the upper gastrointestinal tract using predominantly their natural gaze. Finally, a gaze-guided assistive robotic system is presented, which aims to facilitate activities of daily living.
The results of this work provide valuable insights into the feasibility of integrating the developed gaze-contingent framework into clinical practice without significant workflow disruptions.
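Gaze selection of an instrument, as used by the robotic scrub nurse, is commonly implemented as dwell-time selection: a target fires once gaze stays within its region for a fixed duration. The following is a generic sketch of that idea, not the thesis's implementation; `dwell_select`, the region layout, and the 1 s dwell threshold are hypothetical:

```python
def dwell_select(gaze_samples, regions, fs, dwell_s=1.0):
    """Return the first region id on which gaze dwells continuously for
    dwell_s seconds. gaze_samples are (x, y) points sampled at fs Hz;
    regions maps an id to a circular target (cx, cy, radius)."""
    need = int(dwell_s * fs)            # consecutive samples required
    current, count = None, 0
    for gx, gy in gaze_samples:
        hit = None
        for rid, (cx, cy, rad) in regions.items():
            if (gx - cx) ** 2 + (gy - cy) ** 2 <= rad ** 2:
                hit = rid
                break
        if hit is not None and hit == current:
            count += 1                  # dwell continues on same target
        else:
            current, count = hit, (1 if hit is not None else 0)
        if current is not None and count >= need:
            return current              # dwell threshold reached
    return None

# Hypothetical instrument regions (id -> cx, cy, radius) on a tray image
regions = {"scissors": (0.0, 0.0, 1.0), "forceps": (5.0, 5.0, 1.0)}
# 40 samples at 30 Hz fixating near the scissors exceeds the 1 s dwell
selected = dwell_select([(0.1, 0.1)] * 40, regions, fs=30)
```

The dwell duration trades off selection speed against accidental activations from natural scanning (the "Midas touch" problem).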