Networking Architecture and Key Technologies for Human Digital Twin in Personalized Healthcare: A Comprehensive Survey
Digital twin (DT) refers to a promising technique for digitally and
accurately representing actual physical entities. One typical advantage of DT is
that it can be used not only to virtually replicate a system's detailed
operations but also to analyze its current condition, predict future behaviour,
and refine control optimization. Although DT has been widely implemented in
various fields, such as smart manufacturing and transportation, its
conventional paradigm is limited to embodying non-living entities, e.g., robots
and vehicles. For adoption in human-centric systems, a novel concept, called
human digital twin (HDT), has thus been proposed. In particular, HDT allows in
silico representation of an individual human body with the ability to dynamically
reflect molecular, physiological, emotional and psychological
status, as well as lifestyle evolution. These capabilities prompt the expected application
of HDT in personalized healthcare (PH), where it can facilitate remote monitoring,
diagnosis, prescription, surgery and rehabilitation. Despite this large
potential, however, HDT faces substantial research challenges in different aspects and
has recently become an increasingly popular topic. In this survey, with a specific
focus on the networking architecture and key technologies for HDT in PH
applications, we first discuss the differences between HDT and conventional
DTs, followed by the universal framework and essential functions of HDT. We
then analyze its design requirements and challenges in PH applications. After
that, we provide an overview of the networking architecture of HDT, including
data acquisition layer, data communication layer, computation layer, data
management layer, and data analysis and decision-making layer. After reviewing
the key technologies for implementing this networking architecture in detail,
we conclude this survey by presenting future research directions for HDT.
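The five-layer networking architecture listed above can be pictured as a simple processing pipeline. A minimal sketch, assuming only that data flows through the layers in the stated order; the per-layer processing shown is a hypothetical placeholder:

```python
# Illustrative pipeline for the five-layer HDT networking architecture.
# Layer names follow the survey; the processing at each layer (a
# pass-through that records the path) is a placeholder.

LAYERS = [
    "data acquisition",
    "data communication",
    "computation",
    "data management",
    "data analysis and decision making",
]

def process_sample(sample: dict) -> dict:
    """Pass a raw sensor reading through each layer in order."""
    sample = dict(sample, path=[])
    for layer in LAYERS:
        # a real HDT system would transform, route or store data here
        sample["path"].append(layer)
    return sample

result = process_sample({"heart_rate_bpm": 72})
print(" -> ".join(result["path"]))
```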
Cherry-Picking with Reinforcement Learning: Robust Dynamic Grasping in Unstable Conditions
Grasping small objects surrounded by unstable or non-rigid material plays a
crucial role in applications such as surgery, harvesting, construction,
disaster recovery, and assisted feeding. This task is especially difficult when
fine manipulation is required in the presence of sensor noise and perception
errors; errors inevitably trigger dynamic motion, which is challenging to model
precisely. Data-driven methods like reinforcement learning (RL) circumvent the
difficulty of building accurate models for contacts and dynamics by optimizing
task performance via trial and error. Applying RL methods to real robots, however, has been
hindered by factors such as prohibitively high sample complexity or the high
training infrastructure cost for providing resets on hardware. This work
presents CherryBot, an RL system that uses chopsticks for fine manipulation
that surpasses human reactiveness for some dynamic grasping tasks. By
integrating imprecise simulators, suboptimal demonstrations and external state
estimation, we study how to make a real-world robot learning system sample
efficient and general while reducing the human effort required for supervision.
Our system shows continual improvement through 30 minutes of real-world
interaction: through reactive retry, it achieves an almost 100% success rate on
the demanding task of using chopsticks to grasp small objects swinging in the
air. We demonstrate the reactiveness, robustness and generalizability of
CherryBot to varying object shapes and dynamics (e.g., external disturbances
like wind and human perturbations). Videos are available at
https://goodcherrybot.github.io/
Robotic Ultrasound Imaging: State-of-the-Art and Future Perspectives
Ultrasound (US) is one of the most widely used modalities for clinical
intervention and diagnosis due to the merits of providing non-invasive,
radiation-free, and real-time images. However, free-hand US examinations are
highly operator-dependent. Robotic US Systems (RUSS) aim at overcoming this
shortcoming by offering reproducibility, while also improving
dexterity and enabling intelligent, anatomy- and disease-aware imaging. In addition to
enhancing diagnostic outcomes, RUSS also holds the potential to provide medical
interventions for populations suffering from the shortage of experienced
sonographers. In this paper, we categorize RUSS as teleoperated or autonomous.
Regarding teleoperated RUSS, we summarize their technical developments and
clinical evaluations. This survey then focuses on the review of
recent work on autonomous robotic US imaging. We demonstrate that machine
learning and artificial intelligence provide the key techniques enabling
intelligent, patient- and process-specific, motion- and deformation-aware robotic
image acquisition. We also show that the research on artificial intelligence
for autonomous RUSS has directed the research community toward understanding
and modeling expert sonographers' semantic reasoning and action. Here, we call
this process the recovery of the "language of sonography". This side result of
research on autonomous robotic US acquisitions could be considered as valuable
and essential as the progress made in robotic US examination itself. This
article will provide both engineers and clinicians with a comprehensive
understanding of RUSS by surveying the underlying techniques.
Comment: Accepted by Medical Image Analysis
Ambient Intelligence for Next-Generation AR
Next-generation augmented reality (AR) promises a high degree of
context-awareness - a detailed knowledge of the environmental, user, social and
system conditions in which an AR experience takes place. This will facilitate
both the closer integration of the real and virtual worlds, and the provision
of context-specific content or adaptations. However, environmental awareness in
particular is challenging to achieve using AR devices alone; not only is these
mobile devices' view of an environment spatially and temporally limited, but
the data obtained by onboard sensors are frequently inaccurate and incomplete.
This, combined with the fact that many aspects of core AR functionality and
user experiences are impacted by properties of the real environment, motivates
the use of ambient IoT devices, wireless sensors and actuators placed in the
surrounding environment, for the measurement and optimization of environment
properties. In this book chapter we categorize and examine the wide variety of
ways in which these IoT sensors and actuators can support or enhance AR
experiences, including quantitative insights and proof-of-concept systems that
will inform the development of future solutions. We outline the challenges and
opportunities associated with several important research directions which must
be addressed to realize the full potential of next-generation AR.
Comment: This is a preprint of a book chapter which will appear in the
Springer Handbook of the Metaverse
A Framework for Tumor Localization in Robot-Assisted Minimally Invasive Surgery
Manual palpation of tissue is frequently used in open surgery, e.g., for localization of tumors and buried vessels and for tissue characterization. The overall objective of this work is to explore how tissue palpation can be performed in Robot-Assisted Minimally Invasive Surgery (RAMIS) using laparoscopic instruments conventionally used in RAMIS. This thesis presents a framework in which a surgical tool is moved teleoperatively in a manner analogous to the repetitive pressing motion of a finger during manual palpation. We interpret the changes in parameters due to this motion, such as the applied force and the resulting indentation depth, to accurately determine the variation in tissue stiffness. This approach requires sensorizing the laparoscopic tool for force sensing. In our work, we have used a da Vinci needle driver that was sensorized in our lab at CSTAR for force sensing using Fiber Bragg Gratings (FBG). A computer vision algorithm has been developed for 3D surgical tool-tip tracking using the da Vinci's stereo endoscope. This enables us to measure changes in surface indentation resulting from pressing the needle driver on the tissue. The proposed palpation framework is based on the hypothesis that the indentation depth is inversely proportional to the tissue stiffness when a constant pressing force is applied. This was validated in a telemanipulated setup using the da Vinci surgical system with a phantom in which artificial tumors were embedded to represent areas of different stiffness. The high-stiffness region (representing tumor) and the low-stiffness region (representing healthy tissue) showed average indentation depths of 5.19 mm and 10.09 mm, respectively, while a maximum force of 8 N was maintained during robot-assisted palpation. These indentation depth variations were then distinguished using the k-means clustering algorithm to classify groups of low and high stiffness, and the results were presented in a colour-coded map.
The unique feature of this framework is its use of a conventional laparoscopic tool and minimal re-design of the existing da Vinci surgical setup. Additional work includes a vision-based algorithm for tracking the motion of the tissue surface, such as that of the lung resulting from respiratory and cardiac motion. The extracted motion information was analyzed to characterize the lung tissue stiffness based on the lateral strain variations as the surface inflates and deflates.
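The stiffness-classification step described above can be sketched in a few lines: under a constant pressing force F, indentation depth d is inversely proportional to stiffness k (d ≈ F/k), so depth measurements separate into two clusters. The depth values in this sketch are illustrative, not the thesis measurements:

```python
# Minimal sketch of the stiffness-classification step described above.
# Under a constant pressing force F, indentation depth d is inversely
# proportional to stiffness k (d ≈ F / k), so stiff (tumor) regions
# indent less than soft (healthy) regions. The depth values below are
# illustrative, not the thesis measurements.

def kmeans_1d(values, k=2, iters=50):
    """Simple 1-D k-means; returns (label per value, cluster centers)."""
    centers = [min(values), max(values)]  # k = 2, initialised at the extremes
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: abs(v - centers[c]))
                  for v in values]
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels, centers

# Indentation depths (mm) measured under a constant pressing force.
depths = [5.0, 5.3, 5.2, 10.1, 9.9, 10.2, 5.1, 10.0]
labels, centers = kmeans_1d(depths)
# The cluster initialised at min(values) collects the shallow
# (stiff/tumor) readings; the other collects the deep (soft/healthy) ones.
```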
Acoustic-based Smart Tactile Sensing in Social Robots
The sense of touch is a crucial component of human social interaction and is unique
among the five senses. As the only proximal sense, touch requires close or direct physical
contact to register information. This fact makes touch an interaction modality
full of possibilities regarding social communication. Through touch, we are able to ascertain
the other person’s intention and communicate emotions. From this idea emerges the concept
of social touch as the act of touching another person in a social context. It can serve various purposes,
such as greeting, showing affection, persuasion, and regulating emotional and physical
well-being.
Recently, the number of people interacting with artificial systems and agents has increased,
mainly due to the rise of technological devices, such as smartphones or smart speakers. Still,
these devices are limited in their interaction capabilities. To deal with this issue, recent developments
in social robotics have improved the interaction possibilities to make agents more seamless
and useful. In this sense, social robots are designed to facilitate natural interactions between
humans and artificial agents. In this context, the sense of touch is revealed as a natural interaction
vehicle that can improve HRI due to its communicative relevance. Moreover, for a social
robot, the relationship between social touch and its embodiment is direct, having a physical
body to apply or receive touches.
From a technical standpoint, tactile sensing systems have recently been the subject of further
research, mostly devoted to comprehending this sense to create intelligent systems that can
improve people’s lives. Currently, social robots are popular devices that include technologies
for touch sensing. This is motivated by the fact that robots may encounter expected or unexpected
physical contact with humans, which can either enhance or interfere with the execution
of their behaviours. There is, therefore, a need to detect human touch in robot applications.
Some methods even include touch-gesture recognition, although these often require
significant hardware deployments with multiple sensors. Additionally, the dependability
of those sensing technologies is constrained because the majority of them still struggle with issues
like false positives or poor recognition rates. Acoustic sensing, in this sense, can provide a
set of features that can alleviate the aforementioned shortcomings. Even though it is a technology that has been utilised in various research fields, it has yet to be integrated into human-robot
touch interaction.
Therefore, in this work, we propose the ATR (Acoustic Touch Recognition) system, a smart tactile sensing system based on
acoustic sensing designed to improve human-robot social interaction. Our system is developed
to classify touch gestures and locate their source. It is also integrated into real social robotic platforms
and tested in real-world applications. Our proposal is approached from two standpoints,
one technical and the other related to social touch. Firstly, the technical motivation of this work
centred on achieving a cost-efficient, modular and portable tactile system. For that, we explore
the fields of touch sensing technologies, smart tactile sensing systems and their application in
HRI. On the other hand, part of the research is centred around the affective impact of touch
during human-robot interaction, resulting in two studies exploring this idea.
Programa de Doctorado en Ingeniería Eléctrica, Electrónica y Automática por la Universidad Carlos III de Madrid. Presidente: Pedro Manuel Urbano de Almeida Lima. Secretaria: María Dolores Blanco Rojas. Vocal: Antonio Fernández Caballer
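As a hypothetical illustration of the kind of classification such an acoustic tactile system performs, a contact-microphone signal can be reduced to simple features and thresholded; the feature set, thresholds and gesture labels below are illustrative assumptions, not taken from the thesis:

```python
# Hypothetical sketch, in the spirit of the ATR system, of classifying
# touch gestures from a contact-microphone signal using two simple
# features. The feature set, thresholds and gesture labels here are
# illustrative assumptions, not taken from the thesis.

def extract_features(signal, rate=16000, amp_threshold=0.05):
    """Return (contact duration in s, mean energy) of the active samples."""
    active = [s for s in signal if abs(s) > amp_threshold]
    duration = len(active) / rate
    energy = sum(s * s for s in active) / len(active) if active else 0.0
    return duration, energy

def classify(signal, rate=16000):
    """Threshold the contact duration: short impulses read as taps."""
    duration, _energy = extract_features(signal, rate)
    return "tap" if duration < 0.05 else "stroke"

# A 10 ms burst reads as a tap, a 200 ms burst as a stroke.
tap = [0.5] * 160       # 160 samples at 16 kHz = 10 ms
stroke = [0.2] * 3200   # 3200 samples = 200 ms
print(classify(tap), classify(stroke))  # -> tap stroke
```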
Cable-driven parallel mechanisms for minimally invasive robotic surgery
Minimally invasive surgery (MIS) has revolutionised surgery by providing faster recovery times, fewer post-operative complications, improved cosmesis and reduced pain for the patient. Surgical robotics are used to further decrease the invasiveness of procedures, by using even smaller and fewer incisions or using natural orifices as entry points. However, many robotic systems still suffer from technical challenges such as insufficient instrument dexterity and payloads, leading to limited adoption in clinical practice. Cable-driven parallel mechanisms (CDPMs) have unique properties which can be used to overcome existing challenges in surgical robotics. These beneficial properties include high end-effector payloads, efficient force transmission and a large configurable instrument workspace. However, the use of CDPMs in MIS is largely unexplored. This research presents the first structured exploration of CDPMs for MIS and demonstrates the potential of this type of mechanism through the development of multiple prototypes: the ESD CYCLOPS, CDAQS, SIMPLE, neuroCYCLOPS and microCYCLOPS. One key challenge for MIS is the access method used to introduce CDPMs into the body. Three different access methods are presented by the prototypes. By focusing on the minimally invasive access methods by which CDPMs are introduced into the body, the thesis provides a framework which can be used by researchers, engineers and clinicians to identify future opportunities for CDPMs in MIS. Additionally, through user studies and pre-clinical studies, these prototypes demonstrate that this type of mechanism has several key advantages for surgical applications in which haptic feedback, safe automation or a high payload are required. These advantages, combined with the different access methods, demonstrate that CDPMs can have a key role in the advancement of MIS technology.
Humanoid Robots
For many years, human beings have been trying, in many ways, to recreate the complex mechanisms that form the human body. This task is extremely complicated, and the results are not yet fully satisfactory. However, with increasing technological advances based on theoretical and experimental research, man has managed, to some extent, to copy or imitate some systems of the human body. These research efforts not only intend to create humanoid robots, a great part of them constituting autonomous systems, but also, in some way, to offer deeper knowledge of the systems that form the human body, with possible applications in the technology of human rehabilitation, bringing together studies related not only to Robotics but also to Biomechanics, Biomimetics, Cybernetics, and other areas. This book presents a series of research works inspired by this ideal, carried out by various researchers worldwide, seeking to analyze and discuss diverse subjects related to humanoid robots. The presented contributions explore aspects of robotic hands, learning, language, vision and locomotion.