Ultraviolet disinfection (UV-D) robots: bridging the gaps in dentistry
Maintaining a microbe-free environment in healthcare facilities has become increasingly crucial for minimizing virus transmission, especially in the wake of recent epidemics like COVID-19. To meet the urgent need for ongoing sterilization, autonomous ultraviolet disinfection (UV-D) robots have emerged as vital tools. These robots are gaining popularity due to their automated nature, cost advantages, and ability to instantly disinfect rooms and workspaces without relying on human labor. Integrating disinfection robots into medical facilities reduces infection risk, lowers conventional cleaning costs, and instills greater confidence in patient safety. However, UV-D robots should complement rather than replace routine manual cleaning. To optimize the functionality of UV-D robots in medical settings, additional hospital and device design modifications are necessary to address visibility challenges. Achieving seamless integration requires more technical advancements and clinical investigations across various institutions. This mini-review presents an overview of advanced applications that demand disinfection, highlighting their limitations and challenges. Despite their potential, little comprehensive research has been conducted on the sterilizing impact of disinfection robots in the dental industry. By serving as a starting point for future research, this review aims to bridge the gaps in knowledge and identify unresolved issues. Our objective is to provide an extensive guide to UV-D robots, encompassing design requirements, technological breakthroughs, and in-depth use in healthcare and dentistry facilities. Understanding the capabilities and limitations of UV-D robots will aid in harnessing their potential to revolutionize infection control practices in the medical and dental fields.
Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5
This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The collected contributions of this volume have either been published or presented in international conferences, seminars, workshops and journals after the fourth volume was disseminated in 2015, or they are new. The contributions of each part of this volume are ordered chronologically.
The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignments in the fusion of sources of evidence, with their Matlab codes.
Because more applications of DSmT have emerged since the appearance of the fourth book in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision-making, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification.
Finally, the third part presents contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negators of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
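As a concrete illustration of the PCR rules mentioned above, the following sketch implements the PCR5 combination of two basic belief assignments whose focal elements are represented as sets. It is a minimal illustration of the published rule written for this review, not code from the volume; the frame and mass values are hypothetical, and strictly positive masses are assumed.

```python
def pcr5_combine(m1, m2):
    """Combine two basic belief assignments with the PCR5 rule.

    m1, m2: dicts mapping frozenset focal elements to strictly positive masses.
    Non-conflicting products m1(X)*m2(Y) go to X∩Y (conjunctive part);
    each conflicting product (X∩Y = ∅) is redistributed back to X and Y
    proportionally to m1(X) and m2(Y).
    """
    combined = {}
    for x, mx in m1.items():
        for y, my in m2.items():
            inter = x & y
            if inter:
                # conjunctive consensus on the intersection
                combined[inter] = combined.get(inter, 0.0) + mx * my
            else:
                # proportional conflict redistribution (PCR5)
                combined[x] = combined.get(x, 0.0) + mx * mx * my / (mx + my)
                combined[y] = combined.get(y, 0.0) + my * my * mx / (mx + my)
    return combined
```

Because every conflicting product is redistributed in full, the combined masses still sum to one, unlike Dempster's rule, which renormalizes the conflict away.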
RObotic MAnipulation Network (ROMAN) – Hybrid Hierarchical Learning for Solving Complex Sequential Tasks
Solving long sequential tasks poses a significant challenge in embodied artificial intelligence. Enabling a robotic system to perform diverse sequential tasks with a broad range of manipulation skills is an active area of research. In this work, we present a Hybrid Hierarchical Learning framework, the Robotic Manipulation Network (ROMAN), to address the challenge of solving multiple complex tasks over long time horizons in robotic manipulation. ROMAN achieves task versatility and robust failure recovery by integrating behavioural cloning, imitation learning, and reinforcement learning. It consists of a central manipulation network that coordinates an ensemble of various neural networks, each specialising in distinct re-combinable sub-tasks to generate their correct in-sequence actions for solving complex long-horizon manipulation tasks. Experimental results show that by orchestrating and activating these specialised manipulation experts, ROMAN generates correct sequential activations for accomplishing long sequences of sophisticated manipulation tasks and achieving adaptive behaviours beyond demonstrations, while exhibiting robustness to various sensory noises. These results demonstrate the significance and versatility of ROMAN's dynamic adaptability featuring autonomous failure recovery capabilities, and highlight its potential for various autonomous manipulation tasks that demand adaptive motor skills.
Comment: To appear in Nature Machine Intelligence. Includes the main and supplementary manuscript. Total of 70 pages, with 9 figures and 17 tables.
Hybrid hierarchical learning for solving complex sequential tasks using the robotic manipulation network ROMAN
Solving long sequential tasks remains a non-trivial challenge in the field of embodied artificial intelligence. Enabling a robotic system to perform diverse sequential tasks with a broad range of manipulation skills is a notable open problem and continues to be an active area of research. In this work, we present a hybrid hierarchical learning framework, the robotic manipulation network ROMAN, to address the challenge of solving multiple complex tasks over long time horizons in robotic manipulation. By integrating behavioural cloning, imitation learning and reinforcement learning, ROMAN achieves task versatility and robust failure recovery. It consists of a central manipulation network that coordinates an ensemble of various neural networks, each specializing in different recombinable subtasks to generate their correct in-sequence actions, to solve complex long-horizon manipulation tasks. Our experiments show that, by orchestrating and activating these specialized manipulation experts, ROMAN generates correct sequential activations accomplishing long sequences of sophisticated manipulation tasks and achieving adaptive behaviours beyond demonstrations, while exhibiting robustness to various sensory noises. These results highlight the significance and versatility of ROMAN's dynamic adaptability featuring autonomous failure recovery capabilities, and underline its potential for various autonomous manipulation tasks that require adaptive motor skills.
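The orchestration idea at the heart of this abstract, a central network that scores specialist sub-policies and activates the dominant one for the current observation, can be sketched in a few lines. This is a deliberately simplified, hypothetical stand-in (random linear "experts", a softmax gate, made-up dimensions), not ROMAN's actual architecture or training scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 8-D observations, 3-D actions, 4 specialist experts.
DIM_OBS, DIM_ACT, N_EXPERTS = 8, 3, 4

# Each specialist sub-network is stood in for by a random linear policy.
experts = [rng.normal(size=(DIM_ACT, DIM_OBS)) for _ in range(N_EXPERTS)]
gate_W = rng.normal(size=(N_EXPERTS, DIM_OBS))   # central network's weights

def central_step(obs):
    """Score every expert for the observation, convert the scores to softmax
    activation weights, and execute the dominant specialist's action."""
    scores = gate_W @ obs
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax activation weights
    k = int(np.argmax(weights))              # activate the dominant expert
    return k, experts[k] @ obs
```

Running `central_step` over a sequence of observations would yield a sequence of expert activations, which is the sense in which a gating hierarchy decomposes a long-horizon task into recombinable sub-tasks.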
Signal and Information Processing Methods for Embedded Robotic Tactile Sensing Systems
The human skin has several sensors with different properties and responses that are able to detect stimuli resulting from mechanical stimulation. Pressure sensors are the most important type of receptor for the exploration and manipulation of objects. In recent decades, smart tactile sensing based on different sensing techniques has been developed, as its application in robotics and prosthetics is of great interest, mainly driven by the prospect of autonomous and intelligent robots that can interact with the environment. However, regarding the estimation of object properties on robots, hardness detection is still a major limitation due to the lack of techniques to estimate it. Furthermore, finding processing methods that can interpret the measured information from multiple sensors and extract relevant information is a challenging task. Moreover, embedding processing methods and machine learning algorithms in robotic applications to extract meaningful information, such as object properties, from tactile data is an ongoing challenge, governed by device constraints (power, memory, etc.), the computational complexity of the processing and machine learning algorithms, and the application requirements (real-time operation, high prediction performance). In this dissertation, we focus on the design and implementation of pre-processing methods and machine learning algorithms to handle the aforementioned challenges for a tactile sensing system in robotic applications. First, we propose a tactile sensing system for robotic applications. We then present efficient pre-processing and feature extraction methods for our tactile sensors. Next, we propose a learning strategy to reduce the computational cost of our processing unit in object classification using a sensorized Baxter robot. Finally, we present a real-time robotic tactile sensing system for hardness classification on resource-constrained devices.
The first study represents a further assessment of the sensing system, which is based on the PVDF sensors and the interface electronics developed in our lab. In particular, it first presents the development of a skin patch (a multilayer structure) that allows the sensors to be used in several applications, such as robotic hands/grippers. Second, it shows the characterization of the developed skin patch. Third, it validates the sensing system. Moreover, we designed a filter to remove noise and detect touch. The experimental assessment demonstrated that the developed skin patch and the interface electronics can indeed detect different touch patterns and stimulus waveforms. The experiments also defined the frequency range of interest and characterized the system's response to realistic grasp and release interactions.
In the next study, we presented an easy integration of our tactile sensing system into the Baxter gripper. Computationally efficient pre-processing techniques were designed to filter the signals and extract relevant information from multiple sensors, together with feature extraction methods. These methods in turn also reduce the computational complexity of the machine learning algorithms used for object classification. The proposed system and processing strategy were evaluated on an object classification application: we integrated our system into the gripper and collected data by grasping multiple objects. We further proposed a learning strategy to achieve a trade-off between generalization accuracy and the computational cost of the whole processing unit. The proposed pre-processing and feature extraction techniques, together with the learning strategy, led to models with extremely low complexity and very high generalization accuracy. Moreover, the support vector machine achieved the best trade-off between accuracy and computational cost on tactile data from our sensors.
Finally, we presented the development and edge implementation of a real-time tactile sensing system for hardness classification on the Baxter robot, based on machine learning and deep learning algorithms. We developed and implemented in plain C a set of functions that provide the fundamental layer functionalities of the machine learning and deep learning (ML and DL) models, along with the pre-processing methods to extract the features and normalize the data. Since the implementation does not rely on any existing libraries, the models can be deployed on any device that supports C code. Shallow ML/DL algorithms were designed for deployment on resource-constrained devices. To evaluate our work, we collected data by grasping objects of different hardness and shape. Two classification problems were addressed: five levels of hardness classified on objects of the same shape, and five levels of hardness classified on two different object shapes. Furthermore, optimization techniques were employed. The models and pre-processing were implemented on a resource-constrained device, where we assessed the performance of the system in terms of accuracy, memory footprint, time latency, and energy consumption. For both classification problems we achieved real-time inference (< 0.08 ms), low power consumption (3.35 µJ), extremely small models (1576 bytes), and high accuracy (above 98%).
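In the same spirit as the dependency-free, plain-C layer functions described above, the following pure-Python sketch shows the kind of building blocks such an implementation involves: feature standardization, a dense layer, ReLU, and an argmax readout. The weights, sizes, and class indices here are illustrative inventions, not the dissertation's actual models.

```python
def normalize(x, mean, std):
    """Standardize features, as in the pre-processing step."""
    return [(v - m) / s for v, m, s in zip(x, mean, std)]

def dense(x, W, b):
    """Fully connected layer: y = W x + b (W as list of rows)."""
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) + b_j
            for row, b_j in zip(W, b)]

def relu(x):
    return [v if v > 0.0 else 0.0 for v in x]

def classify(features, layers, mean, std):
    """Run standardized features through dense+ReLU hidden layers and a
    final dense layer; the argmax index is the predicted hardness class."""
    x = normalize(features, mean, std)
    for W, b in layers[:-1]:
        x = relu(dense(x, W, b))
    W, b = layers[-1]
    logits = dense(x, W, b)
    return max(range(len(logits)), key=logits.__getitem__)
```

Written this way, each function maps directly onto a small C routine operating on fixed-size arrays, which is what makes the approach portable to resource-constrained targets.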
Robots learn to behave: improving human-robot collaboration in flexible manufacturing applications
The abstract is in the attachment.
Development and Characteristics of a Highly Biomimetic Robotic Shoulder Through Bionics-Inspired Optimization
This paper critically analyzes conventional and biomimetic robotic arms, underscoring the trade-offs between size, motion range, and load capacity in current biomimetic models. By delving into the human shoulder's mechanical intelligence, particularly the glenohumeral joint's intricate features such as its unique ball-and-socket structure and self-locking mechanism, we pinpoint innovations that bolster both stability and mobility while maintaining compactness. To substantiate these insights, we present a groundbreaking biomimetic robotic glenohumeral joint that authentically mirrors human musculoskeletal elements, from ligaments to tendons, integrating the biological joint's mechanical intelligence. Our exhaustive simulations and tests reveal enhanced flexibility and load capacity for the robotic joint. The advanced robotic arm demonstrates notable capabilities, including a significant range of motions and a 4 kg payload capacity, even exerting over 1.5 Nm torque. This study not only confirms the human shoulder joint's mechanical innovations but also introduces a pioneering design for a next-generation biomimetic robotic arm, setting a new benchmark in robotic technology.
Methods, Models, and Datasets for Visual Servoing and Vehicle Localisation
Machine autonomy has become a vibrant part of industrial and commercial aspirations. A growing demand exists for dexterous and intelligent machines that can work in unstructured environments without any human assistance. An autonomously operating machine should sense its surroundings, classify different kinds of observed objects, and interpret sensory information to perform necessary operations.
This thesis summarizes original methods aimed at enhancing a machine's autonomous operation capability. These methods and the corresponding results are grouped into two main categories. The first category consists of research works that focus on improving visual servoing systems for robotic manipulators to accurately position workpieces. We start our investigation with the hand-eye calibration problem, which concerns calibrating visual sensors with a robotic manipulator. We thoroughly investigate the problem from various perspectives and provide alternative formulations of the problem and error objectives. The experimental results demonstrate that the proposed methods are robust and yield accurate solutions when tested on real and simulated data. The work package is bundled as a toolkit and is available online for public use. As an extension, we propose a constrained multiview pose estimation approach for robotic manipulators. The approach exploits the geometric constraints available on the robotic system and infuses them directly into the pose estimation method. The empirical results demonstrate higher accuracy and significantly higher precision compared to other studies.
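For readers unfamiliar with the problem, hand-eye calibration is classically posed as solving AX = XB for the unknown sensor-to-gripper transform X, given paired camera motions A_i and robot motions B_i. The following numpy sketch evaluates one common error objective over such pairs; it is a generic illustration of the classical formulation, not one of the thesis's specific formulations or objectives.

```python
import numpy as np

def hand_eye_residual(X, motions):
    """Sum of Frobenius-norm residuals ||A_i X - X B_i||_F for the classical
    AX = XB hand-eye formulation. X, A_i, B_i are 4x4 homogeneous transforms:
    A_i are camera motions, B_i the corresponding robot gripper motions, and
    X the unknown camera-to-gripper transform."""
    return sum(np.linalg.norm(A @ X - X @ B) for A, B in motions)
```

Given an exact X, each camera motion satisfies A_i = X B_i X^{-1}, so the residual vanishes; calibration methods minimize this (or a related) objective over the rotation and translation parameters of X.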
In the second part of this research, we tackle problems pertaining to the field of autonomous vehicles and related applications. First, we introduce a pose estimation and mapping scheme that extends the application of visual Simultaneous Localization and Mapping to unstructured dynamic environments. We identify, extract, and discard dynamic entities from the pose estimation step. Moreover, we track the dynamic entities and actively update the map based on changes in the environment. Having observed the limitations of existing datasets during our earlier work, we introduce FinnForest, a novel dataset for testing and validating the performance of visual odometry and Simultaneous Localization and Mapping methods in an unstructured environment. We explored an environment with a forest landscape and recorded data with multiple stereo cameras, an IMU, and a GNSS receiver. The dataset offers unique challenges owing to the nature of the environment, the variety of trajectories, and changes in season, weather, and daylight conditions. Building upon the future work proposed with the FinnForest dataset, we introduce a novel scheme that can localize an observer under extreme perspective changes. More specifically, we tailor the problem for autonomous vehicles such that they can recognize a previously visited place irrespective of the direction in which they previously traveled the route. To the best of our knowledge, this is the first study to accomplish bi-directional loop closure on monocular images with a nominal field of view. To solve the localisation problem, we separate place identification from pose regression by using deep learning in two steps. We demonstrate that bi-directional loop closure on monocular images is indeed possible when the problem is posed correctly and the training data are adequately leveraged.
All methodological contributions of this thesis are accompanied by extensive empirical analysis and discussions demonstrating the need, novelty, and improvement in performance over existing methods for pose estimation, odometry, mapping, and place recognition.