Shared control for natural motion and safety in hands-on robotic surgery
In hands-on robotic surgery, the surgeon controls the tool's motion by applying forces and torques to the robot holding the tool, allowing the robot-environment interaction to be felt through the tool itself. To further improve results, shared control strategies are used to combine the strengths of the surgeon with those of the robot. One such strategy is active constraints, which prevent motion into regions deemed unsafe or unnecessary. While active constraints for rigid anatomy are well established, little work has addressed dynamic active constraints (DACs) for deformable soft tissue, particularly strategies that handle multiple sensing modalities. In addition, attaching the tool to the robot imposes the end effector dynamics on the surgeon, reducing dexterity and increasing fatigue. Current control policies on these systems compensate only for gravity, ignoring other dynamic effects. This thesis presents several research contributions to shared control in hands-on robotic surgery, which create a more natural motion for the surgeon and extend DACs to point clouds. A novel null-space-based optimization technique has been developed which minimizes the end effector friction, mass, and inertia of redundant robots, creating a more natural motion, closer to the feel of the tool when detached from the robot. By operating in the null space, the technique leaves the surgeon in full control of the procedure. A novel DACs approach has also been developed which operates on point clouds. This allows its application to various sensing technologies, such as 3D cameras or CT scans, and therefore to various surgeries. Experimental validation in point-to-point motion trials and a virtual reality ultrasound scenario demonstrates a reduction in work when maneuvering the tool and improvements in accuracy and speed when performing virtual ultrasound scans.
Overall, the results suggest that these techniques could increase ease of use for the surgeon and improve patient safety.
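The null-space idea behind the first contribution can be sketched in a few lines: a secondary joint-velocity command (e.g. the gradient of a cost on reflected end-effector friction, mass, and inertia) is projected into the null space of the task Jacobian, so it never disturbs the surgeon's commanded tool motion. A minimal NumPy sketch, where the Jacobian and the secondary command are random stand-ins for a hypothetical 7-DoF arm with a 3-DoF task, not the thesis's actual optimization:

```python
import numpy as np

np.random.seed(0)
J = np.random.randn(3, 7)           # hypothetical task Jacobian: 3-DoF task, 7-DoF arm
J_pinv = np.linalg.pinv(J)
N = np.eye(7) - J_pinv @ J          # null-space projector of J

# Task command: the surgeon's desired end-effector velocity
qdot_task = J_pinv @ np.array([0.1, 0.0, 0.05])

# Secondary command: e.g. the gradient of a cost penalizing reflected
# end-effector dynamics (random stand-in here)
qdot_secondary = np.random.randn(7)

qdot = qdot_task + N @ qdot_secondary

# The projected secondary motion is invisible at the end effector:
assert np.allclose(J @ (N @ qdot_secondary), np.zeros(3))
```

Because `N` annihilates any joint velocity it multiplies (as seen through `J`), the secondary optimization can run continuously without altering the commanded tool motion.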
Vision-Based Autonomous Control in Robotic Surgery
Robotic surgery has transformed surgical procedures. Enhanced dexterity, ergonomics, motion scaling, and tremor filtering are well-known advantages over classical laparoscopy. Over the past decade, robotics has played a fundamental role in Minimally Invasive Surgery (MIS), in which the da Vinci robotic system (Intuitive Surgical Inc., Sunnyvale, CA) is the most widely used system for robot-assisted laparoscopic procedures. Robots also hold great potential for microsurgical applications, where human limits are critical and sub-millimetric surgical gestures could benefit enormously from motion scaling and tremor compensation. However, surgical robots still lack advanced assistive control methods that could substantially support the surgeon's activity and perform surgical tasks autonomously for a higher quality of intervention.
In this scenario, images are the main feedback the surgeon can use to operate correctly at the surgical site. Therefore, in view of the increasing autonomy in surgical robotics, vision-based techniques play an important role and can be developed by extending computer vision algorithms to surgical scenarios. Moreover, many surgical tasks could benefit from the application of advanced control techniques, allowing the surgeon to work under less stressful conditions and to perform surgical procedures with more accuracy and safety. The thesis starts from these topics, providing surgical robots with the ability to perform complex tasks and helping the surgeon to skillfully manipulate the robotic system to meet the above requirements. An increase in safety and a reduction in mental workload are achieved through the introduction of active constraints, which can prevent the surgical tool from crossing a forbidden region and, similarly, generate constrained motion to guide the surgeon along a specific path or to accomplish autonomous robotic tasks. This leads to the development of a vision-based method for robot-aided dissection that allows the control algorithm to adapt autonomously to environmental changes during the intervention using stereo image processing. Computer vision is exploited to define a collision avoidance method for surgical tools that uses Forbidden Region Virtual Fixtures, rendering a repulsive force to the surgeon. Advanced control techniques based on an optimization approach are developed, allowing execution of multiple tasks with task definitions encoded through Control Barrier Functions (CBFs) and enhancing a haptic-guided teleoperation system during suturing procedures. The proposed methods are tested on different robotic platforms, including the da Vinci Research Kit (dVRK) and a new microsurgical robotic platform.
Finally, the integration of new sensors and instruments into surgical robots is considered, including a multi-functional tool for dexterous tissue manipulation and different visual sensing technologies.
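The forbidden-region idea shared by the active constraints and CBFs mentioned above can be illustrated concretely. For single-integrator dynamics and a circular forbidden region, the CBF safety filter reduces to a one-constraint quadratic program with a closed-form solution: the nominal command is minimally corrected whenever it would violate the barrier condition. A hedged sketch, where the dynamics, region, and gains are illustrative choices rather than the thesis's formulation:

```python
import numpy as np

def cbf_filter(x, u_des, x_obs, r, alpha=1.0):
    """Safety filter for single-integrator dynamics xdot = u.
    Keeps h(x) = ||x - x_obs||^2 - r^2 >= 0 (tool outside a circular
    forbidden region) by enforcing grad(h) @ u >= -alpha * h."""
    h = (x - x_obs) @ (x - x_obs) - r ** 2
    a = 2.0 * (x - x_obs)              # gradient of h at x
    b = -alpha * h                     # right-hand side of the CBF condition
    if a @ u_des >= b:
        return u_des                   # nominal command is already safe
    # Closed-form solution of the one-constraint QP: minimal correction
    return u_des + (b - a @ u_des) * a / (a @ a)

x_obs, r = np.zeros(2), 1.0            # forbidden disc (illustrative values)
x = np.array([1.5, 0.0])               # current tool position
u_des = np.array([-1.0, 0.0])          # nominal command pointing at the region
u = cbf_filter(x, u_des, x_obs, r)     # minimally corrected, safe command
```

The same correction direction, scaled by stiffness instead of applied as a hard constraint, yields the repulsive force of a Forbidden Region Virtual Fixture.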
A Comprehensive Survey of the Tactile Internet: State of the Art and Research Directions
The Internet has made several giant leaps over the years, from a fixed to a
mobile Internet, then to the Internet of Things, and now to a Tactile Internet.
The Tactile Internet goes far beyond data, audio and video delivery over fixed
and mobile networks, and even beyond allowing communication and collaboration
among things. It is expected to enable haptic communication and allow skill set
delivery over networks. Some examples of potential applications are
tele-surgery, vehicle fleets, augmented reality and industrial process
automation. Several papers already cover many of the Tactile Internet-related
concepts and technologies, such as haptic codecs, applications, and supporting
technologies. However, none of them offers a comprehensive survey of the
Tactile Internet, including its architectures and algorithms. Furthermore, none
of them provides a systematic and critical review of the existing solutions. To
address these lacunae, we provide a comprehensive survey of the architectures
and algorithms proposed to date for the Tactile Internet. In addition, we
critically review them using a well-defined set of requirements and discuss
some of the lessons learned as well as the most promising research directions.
Appearance Modelling and Reconstruction for Navigation in Minimally Invasive Surgery
Minimally invasive surgery is playing an increasingly important role for patient
care. Whilst its direct patient benefit in terms of reduced trauma,
improved recovery and shortened hospitalisation has been well established,
there is a sustained need for improved training of the existing procedures
and the development of new smart instruments to tackle the issue of visualisation,
ergonomic control, haptic and tactile feedback. For endoscopic
intervention, the small field of view in the presence of a complex anatomy
can easily introduce disorientation to the operator as the tortuous access
pathway is not always easy to predict and control with standard endoscopes.
Effective training through simulation devices, based on either virtual reality
or mixed-reality simulators, can help to improve the spatial awareness,
consistency and safety of these procedures.
This thesis examines the use of endoscopic videos for both simulation
and navigation purposes. More specifically, it addresses the challenging
problem of how to build high-fidelity subject-specific simulation environments
for improved training and skills assessment. Issues related to mesh
parameterisation and texture blending are investigated. With the maturity
of computer vision in terms of both 3D shape reconstruction and localisation
and mapping, vision-based techniques have enjoyed significant interest
in recent years for surgical navigation. The thesis also tackles the problem
of how to use vision-based techniques for providing a detailed 3D map and
dynamically expanded field of view to improve spatial awareness and avoid
operator disorientation. The key advantage of this approach is that it does
not require additional hardware, and thus introduces minimal interference
to the existing surgical workflow. The derived 3D map can be effectively
integrated with pre-operative data, allowing both global and local 3D navigation
by taking into account tissue structural and appearance changes.
Both simulation and laboratory-based experiments are conducted throughout
this research to assess the practical value of the proposed method.
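One building block of the vision-based 3D mapping described above is recovering a 3D point from two calibrated views. A minimal linear (DLT) triangulation sketch; the camera intrinsics, baseline, and point are hypothetical values, not data from the thesis:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views with
    3x4 projection matrices P1, P2 and pixel observations x1, x2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)        # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical stereo rig: left camera at the origin, right shifted 0.1 m
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 1.5])    # a tissue surface point 1.5 m away
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
assert np.allclose(triangulate(P1, P2, x1, x2), X_true)
```

Repeating this over matched features across endoscope frames produces the 3D map that supports the expanded field of view and navigation.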
Intuitive, iterative and assisted virtual guides programming for human-robot comanipulation
For a very long time, automation was driven by the use of traditional industrial robots placed in cages, programmed to repeat more or less complex tasks at their highest speed
and with maximum accuracy. This robot-oriented solution is heavily dependent on hard automation, which requires pre-specified fixtures and time-consuming programming, hindering
robots from becoming flexible and versatile tools. These robots have evolved towards a new generation of small, inexpensive, inherently safe and flexible systems that work hand
in hand with humans. In these new collaborative workspaces the human can be included in the loop as an active agent. As a teacher and as a co-worker he can influence the
decision-making process of the robot. In this context, virtual guides are an important tool used to assist the human worker by reducing physical effort and cognitive overload
during task accomplishment. However, the construction of virtual guides often requires expert knowledge and precise modeling of the task. These limitations restrict the usefulness of
virtual guides to scenarios with unchanging constraints. To overcome these challenges and enhance the flexibility of virtual guides programming, this thesis presents a novel
approach that allows the worker to create virtual guides by demonstration through an iterative method based on kinesthetic teaching and displacement splines. Thanks to this
approach, the worker is able to iteratively modify the guides while being assisted by them, making the process more intuitive and natural while reducing operator strain. Our approach allows local refinement of virtual guiding trajectories through physical interaction with the robot. The worker can modify a specific Cartesian keypoint of the guide or re-demonstrate a portion. We also extended our approach to 6D virtual guides, where displacement splines are defined via Akima interpolation (for translation) and quadratic
interpolation of quaternions (for orientation). The worker can initially define a virtual guiding trajectory and then use the assistance in translation to only concentrate on
defining the orientation along the path. We demonstrated that these innovations provide a novel and intuitive solution to increase the human's comfort during human-robot
comanipulation in two industrial scenarios with a collaborative robot (cobot).
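The translational channel of a 6D virtual guide as described above can be prototyped with an off-the-shelf Akima interpolator; the keypoints below are hypothetical demonstration data, and the quaternion (orientation) channel is omitted. Akima splines suit this use because they pass through every keypoint while avoiding the overshoot cubic splines can show near abrupt changes in a demonstration:

```python
import numpy as np
from scipy.interpolate import Akima1DInterpolator

# Hypothetical demonstrated keypoints: timestamps and Cartesian positions
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
positions = np.array([[0.0, 0.0, 0.20],
                      [0.1, 0.2, 0.25],
                      [0.3, 0.2, 0.30],
                      [0.5, 0.1, 0.30],
                      [0.6, 0.0, 0.20]])

# One Akima spline per Cartesian axis (axis=0 interpolates along time)
guide = Akima1DInterpolator(t, positions, axis=0)

t_query = np.linspace(0.0, 4.0, 50)
path = guide(t_query)                  # (50, 3) translational guide trajectory

# Iterative refinement: moving one Cartesian keypoint and refitting
# changes the guide only locally, since Akima weights are local.
positions[2] = [0.35, 0.25, 0.30]
guide = Akima1DInterpolator(t, positions, axis=0)
```

Because each spline segment depends only on nearby keypoints, a partial re-demonstration replaces a few keypoints without disturbing the rest of the guide.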
NASA SBIR abstracts of 1992, phase 1 projects
The objectives of 346 projects placed under contract by the Small Business Innovation Research (SBIR) program of the National Aeronautics and Space Administration (NASA) are described. These projects were selected competitively from among proposals submitted to NASA in response to the 1992 SBIR Program Solicitation. The basic document consists of edited, non-proprietary abstracts of the winning proposals submitted by small businesses. The abstracts are presented under the 15 technical topics within which Phase 1 proposals were solicited. Each project was assigned a sequential identifying number from 001 to 346, in order of its appearance in the body of the report. Appendixes are included to provide additional information about the SBIR program and to permit cross-referencing of the 1992 Phase 1 projects by company name, location by state, principal investigator, NASA Field Center responsible for management of each project, and NASA contract number.
Cognitive Hyperconnected Digital Transformation
Cognitive Hyperconnected Digital Transformation provides an overview of the current Internet of Things (IoT) landscape, ranging from research, innovation and development priorities to enabling technologies in a global context. It is intended as a standalone book in a series that covers the Internet of Things activities of the IERC-Internet of Things European Research Cluster, including both research and technological innovation, validation and deployment. The book builds on the ideas put forward by the European Research Cluster, the IoT European Platform Initiative (IoT-EPI) and the IoT European Large-Scale Pilots Programme, presenting global views and state-of-the-art results regarding the challenges facing IoT research, innovation, development and deployment in the coming years. Hyperconnected environments integrating industrial/business/consumer IoT technologies and applications require new open IoT system architectures integrated with network architecture (a knowledge-centric network for IoT), IoT system design and open, horizontal and interoperable platforms managing things that are digital, automated and connected and that function in real time with remote access and control based on Internet-enabled tools. The IoT is bridging the physical world with the virtual world by combining augmented reality (AR), virtual reality (VR), machine learning and artificial intelligence (AI) to support physical-digital integration in the Internet of mobile things based on sensors/actuators, communication, analytics technologies, cyber-physical systems, software, cognitive systems and IoT platforms with multiple functionalities. These IoT systems have the potential to understand, learn, predict, adapt and operate autonomously. They can change future behaviour, while the combination of extensive parallel processing power, advanced algorithms and data sets feeds the cognitive algorithms that allow IoT systems to develop new services and propose new solutions.
IoT technologies are moving into the industrial space and enhancing traditional industrial platforms with solutions that break free of device-, operating system- and protocol-dependency. Secure edge computing solutions replace local networks, web services replace software, and devices with networked programmable logic controllers (NPLCs) based on Internet protocols replace devices that use proprietary protocols. Information captured by edge devices on the factory floor is secure and accessible from any location in real time, opening the communication gateway both vertically (connecting machines across the factory and enabling the instant availability of data to stakeholders within operational silos) and horizontally (with one framework for the entire supply chain, across departments, business units, global factory locations and other markets). End-to-end security and privacy solutions in IoT space require agile, context-aware and scalable components with mechanisms that are both fluid and adaptive. The convergence of IT (information technology) and OT (operational technology) makes security and privacy by default a new important element where security is addressed at the architecture level, across applications and domains, using multi-layered distributed security measures. Blockchain is transforming industry operating models by adding trust to untrusted environments, providing distributed security mechanisms and transparent access to the information in the chain. Digital technology platforms are evolving, with IoT platforms integrating complex information systems, customer experience, analytics and intelligence to enable new capabilities and business models for digital business
Adviser's Guide to Health Care: Volume 1, An Era of Reform
Unmanned Vehicle Systems & Operations on Air, Sea, Land
Unmanned Vehicle Systems & Operations On Air, Sea, Land is our fourth textbook in a series covering the world of Unmanned Aircraft Systems (UAS) and Counter Unmanned Aircraft Systems (CUAS) (Nichols R. K., 2018) (Nichols R. K., et al., 2019) (Nichols R., et al., 2020). The authors have expanded their purview beyond UAS / CUAS systems. Our title reflects our concern for growth and unique cyber security unmanned vehicle technology and operations for unmanned vehicles in all theaters: Air, Sea and Land – especially maritime cybersecurity and China proliferation issues. Topics include: Information Advances, Remote ID, and Extreme Persistence ISR; Unmanned Aerial Vehicles & How They Can Augment Mesonet Weather Tower Data Collection; Tour de Drones for the Discerning Palate; Underwater Autonomous Navigation & other UUV Advances; Autonomous Maritime Asymmetric Systems; UUV Integrated Autonomous Missions & Drone Management; Principles of Naval Architecture Applied to UUV’s; Unmanned Logistics Operating Safely and Efficiently Across Multiple Domains; Chinese Advances in Stealth UAV Penetration Path Planning in Combat Environment; UAS, the Fourth Amendment and Privacy; UV & Disinformation / Misinformation Channels; Chinese UAS Proliferation along New Silk Road Sea / Land Routes; Automaton, AI, Law, Ethics, Crossing the Machine – Human Barrier and Maritime Cybersecurity. Unmanned Vehicle Systems are an integral part of the US national critical infrastructure. The authors have endeavored to bring a breadth and quality of information to the reader that is unparalleled in the unclassified sphere. Unmanned Vehicle (UV) Systems & Operations On Air, Sea, Land discusses state-of-the-art technology / issues facing U.S. UV system researchers / designers / manufacturers / testers.
We trust our newest look at Unmanned Vehicles in Air, Sea, and Land will enrich our students' and readers' understanding of the purview of this wonderful technology we call UV.