
    Application of a mobile robot to spatial mapping of radioactive substances in indoor environment

    Nuclear medicine requires the use of radioactive substances that can contaminate critical (dangerous or hazardous) areas where human presence must be reduced or avoided. The present work uses a mobile robot, in a real environment and in 3D simulation, to develop a method for spatial mapping of radioactive substances. The robot must visit all waypoints arranged on a connectivity grid that represents the environment. The work presents the methodology for path planning, control and estimation of the robot's location. Two path-planning methods are considered: a heuristic method based on observation of the problem, and an adaptation of the operators of a genetic algorithm. Actuator control is based on two methodologies, the first following points and the second following trajectories. To localise the real mobile robot, an extended Kalman filter was used to fuse an ultra-wideband sensor with odometry, thereby estimating the position and orientation of the mobile agent. The results were validated using a low-cost system with a laser range finder.
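
    As an illustration of the UWB/odometry fusion described above, here is a minimal extended Kalman filter sketch, assuming a unicycle motion model, a UWB sensor that reports 2D position only, and placeholder noise parameters; it is not the authors' implementation.

        import numpy as np

        def ekf_predict(x, P, v, w, dt, Q):
            """Propagate pose [x, y, theta] with unicycle odometry (v, w)."""
            theta = x[2]
            x_pred = x + np.array([v * dt * np.cos(theta),
                                   v * dt * np.sin(theta),
                                   w * dt])
            # Jacobian of the motion model with respect to the state
            F = np.array([[1.0, 0.0, -v * dt * np.sin(theta)],
                          [0.0, 1.0,  v * dt * np.cos(theta)],
                          [0.0, 0.0,  1.0]])
            return x_pred, F @ P @ F.T + Q

        def ekf_update_uwb(x, P, z, R):
            """Correct the pose with a UWB position fix z = [x, y]."""
            H = np.array([[1.0, 0.0, 0.0],
                          [0.0, 1.0, 0.0]])    # UWB observes position only
            y = z - H @ x                       # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
            return x + K @ y, (np.eye(3) - K @ H) @ P

        # One predict/update cycle with assumed noise values
        x, P = np.zeros(3), np.eye(3) * 0.1
        Q = np.diag([0.01, 0.01, 0.005])        # odometry noise (assumed)
        R = np.diag([0.05, 0.05])               # UWB noise (assumed)
        x, P = ekf_predict(x, P, v=0.2, w=0.1, dt=0.1, Q=Q)
        x, P = ekf_update_uwb(x, P, z=np.array([0.03, 0.01]), R=R)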

    Towards automated visual flexible endoscope navigation

    Background: The design of flexible endoscopes has not changed significantly in the past 50 years. A trend is observed towards wider application of flexible endoscopes, with an increasing role in complex intraluminal therapeutic procedures. The non-intuitive and non-ergonomic steering mechanism now forms a barrier to extending flexible endoscope applications. Automating the navigation of endoscopes could be a solution to this problem. This paper summarizes the current state of the art in image-based navigation algorithms. The objectives are to find the most promising navigation system(s) to date and to indicate fields for further research.
    Methods: A systematic literature search was performed using three general search terms in two medical-technological literature databases. Papers were included according to the inclusion criteria. A total of 135 papers were analyzed; ultimately, 26 were included.
    Results: Navigation is often based on visual information, which means steering the endoscope using the images that the endoscope produces. Two main techniques are described: lumen centralization and visual odometry. Although the research results are promising, no successful, commercially available automated flexible endoscopy system exists to date.
    Conclusions: Automated systems that employ conventional flexible endoscopes show the most promising prospects in terms of cost and applicability. To produce such a system, the research focus should lie on finding low-cost mechatronics and technologically robust steering algorithms. Additional functionality and increased efficiency can be obtained through software development. The first priority is to find real-time, robust steering algorithms. These algorithms need to handle bubbles, motion blur, and other image artifacts without disrupting the steering process.
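
    A rough sketch of the lumen-centralization idea mentioned in the Results, assuming the lumen appears as the darkest region of the endoscopic image; the threshold choice and the steering mapping are assumptions for illustration, not taken from any of the reviewed systems.

        import numpy as np

        def lumen_steering_command(gray_image, dark_fraction=0.05):
            """Steer towards the darkest image region, assumed to be the lumen.

            gray_image: 2D array of intensities in [0, 255].
            Returns (pan, tilt) in [-1, 1], proportional to the offset of the
            dark-region centroid from the image centre.
            """
            h, w = gray_image.shape
            # Treat the darkest `dark_fraction` of pixels as the lumen candidate.
            threshold = np.quantile(gray_image, dark_fraction)
            ys, xs = np.nonzero(gray_image <= threshold)
            if len(xs) == 0:
                return 0.0, 0.0                  # no clear lumen: hold steering
            cx, cy = xs.mean(), ys.mean()        # centroid of the dark region
            pan = (cx - w / 2) / (w / 2)         # left/right offset, normalised
            tilt = (cy - h / 2) / (h / 2)        # up/down offset, normalised
            return float(pan), float(tilt)

        # Synthetic frame with a dark blob towards the upper-left
        frame = np.full((480, 640), 200.0)
        frame[100:260, 150:310] = 10.0
        print(lumen_steering_command(frame))      # negative: steer up and left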

    Magnetic-Visual Sensor Fusion-based Dense 3D Reconstruction and Localization for Endoscopic Capsule Robots

    Reliable and real-time 3D reconstruction and localization functionality is a crucial prerequisite for the navigation of actively controlled capsule endoscopic robots, an emerging minimally invasive diagnostic and therapeutic technology for use in the gastrointestinal (GI) tract. In this study, we propose a fully dense, non-rigidly deformable, strictly real-time, intraoperative map fusion approach for actively controlled endoscopic capsule robot applications, which combines magnetic and vision-based localization with non-rigid-deformation-based frame-to-model map fusion. The performance of the proposed method is demonstrated using four different ex-vivo porcine stomach models. Across different trajectories of varying speed and complexity, and four different endoscopic cameras, the root mean square surface reconstruction errors range from 1.58 to 2.17 cm.
    Comment: submitted to IROS 201
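
    For context, a small sketch of how a root-mean-square surface reconstruction error of the kind reported above is typically computed, assuming reconstructed points are compared against their nearest ground-truth surface points; this is a generic illustration, not the paper's evaluation code.

        import numpy as np

        def rms_surface_error(reconstructed, ground_truth):
            """RMS of nearest-neighbour distances from reconstructed points
            to a ground-truth point cloud (both Nx3 arrays, same units)."""
            # Brute-force nearest neighbour; fine for small clouds.
            diffs = reconstructed[:, None, :] - ground_truth[None, :, :]
            nearest = np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)
            return float(np.sqrt(np.mean(nearest ** 2)))

        # Toy example: a noisy copy of a small point set
        rng = np.random.default_rng(0)
        gt = rng.uniform(0, 10, size=(500, 3))
        rec = gt + rng.normal(scale=0.5, size=gt.shape)
        print(rms_surface_error(rec, gt))   # on the order of the added noise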

    Contemporary Robotics

    This book is a collection of 18 chapters written by internationally recognized experts and well-known professionals of the field. The chapters contribute to diverse facets of contemporary robotics and autonomous systems. The volume is organized in four thematic parts according to the main subjects of recent advances in contemporary robotics. The first thematic part of the book is devoted to theoretical issues. It includes the development of algorithms for automatic trajectory generation using a redundancy resolution scheme, intelligent algorithms for robotic grasping, a modelling approach for reactive mode handling in flexible manufacturing, and the design of an advanced controller for robot manipulators. The second part of the book deals with different aspects of robot calibration and sensing. It includes geometric and threshold calibration of a multiple robotic line-vision system, robot-based inline 2D/3D quality monitoring using imaging and laser triangulation, and a study on prospective polymer composite materials for flexible tactile sensors. The third part addresses mobile robots and multi-agent systems, including SLAM of mobile robots based on fusion of odometry and visual data, configuration of a localization system by a team of mobile robots, development of a generic real-time motion controller for differential mobile robots, control of fuel cells of mobile robots, modelling of omni-directional wheeled robots, building of a hunter-hybrid tracking environment, as well as design of cooperative control in a distributed population-based multi-agent approach. The fourth part presents recent approaches and results in humanoid and bio-inspired robotics. It deals with the design of adaptive control of anthropomorphic biped gait, dynamics-based simulation of humanoid robot walking, a controller for the perceptual motor control dynamics of humans, and a biomimetic approach to controlling mechatronic structures using smart materials.

    Inertial learning and haptics for legged robot state estimation in visually challenging environments

    Legged robots have enormous potential to automate dangerous or dirty jobs because they are capable of traversing a wide range of difficult terrains such as up stairs or through mud. However, a significant challenge preventing widespread deployment of legged robots is a lack of robust state estimation, particularly in visually challenging conditions such as darkness or smoke. In this thesis, I address these challenges by exploiting proprioceptive sensing from inertial, kinematic and haptic sensors to provide more accurate state estimation when visual sensors fail. Four different methods are presented, including the use of haptic localisation, terrain semantic localisation, learned inertial odometry, and deep learning to infer the evolution of IMU biases. The first approach exploits haptics as a source of proprioceptive localisation by comparing geometric information to a prior map. The second method expands on this concept by fusing both semantic and geometric information, allowing for accurate localisation on diverse terrain. Next, I combine new techniques in inertial learning with classical IMU integration and legged robot kinematics to provide more robust state estimation. This is further developed to use only IMU data, for an application entirely different from robotics: 3D reconstruction of bone with a handheld ultrasound scanner. Finally, I present the novel idea of using deep learning to infer the evolution of IMU biases, improving state estimation in exteroceptive systems where vision fails. Legged robots have the potential to benefit society by automating dangerous, dull, or dirty jobs and by assisting first responders in emergency situations. However, there remain many unsolved challenges to the real-world deployment of legged robots, including accurate state estimation in vision-denied environments. The work presented in this thesis takes a step towards solving these challenges and enabling the deployment of legged robots in a variety of applications
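
    As background to the bias-inference idea above, a small sketch of how estimated IMU biases are applied during dead-reckoning integration, assuming a planar (2D) simplification and placeholder bias values; the thesis's learned bias model itself is not reproduced here.

        import numpy as np

        def integrate_imu_2d(accel_meas, gyro_meas, dt, accel_bias, gyro_bias):
            """Dead-reckon a planar pose from IMU samples after bias correction.

            accel_meas: (N, 2) body-frame accelerations, gyro_meas: (N,) yaw rates.
            accel_bias / gyro_bias: current bias estimates (e.g. from a learned model).
            """
            pos, vel, yaw = np.zeros(2), np.zeros(2), 0.0
            for a_m, w_m in zip(accel_meas, gyro_meas):
                a = a_m - accel_bias        # remove estimated accelerometer bias
                w = w_m - gyro_bias         # remove estimated gyroscope bias
                yaw += w * dt
                c, s = np.cos(yaw), np.sin(yaw)
                a_world = np.array([c * a[0] - s * a[1],
                                    s * a[0] + c * a[1]])
                vel += a_world * dt
                pos += vel * dt
            return pos, vel, yaw

        # With zero true motion, uncorrected biases alone produce drift:
        N, dt = 200, 0.01
        accel = np.full((N, 2), [0.05, 0.0])    # pure bias, no real acceleration
        gyro = np.full(N, 0.002)
        print(integrate_imu_2d(accel, gyro, dt, np.zeros(2), 0.0))            # drifts
        print(integrate_imu_2d(accel, gyro, dt, np.array([0.05, 0.0]), 0.002)) # stays put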

    Vision-based legged robot navigation: localisation, local planning, learning

    The recent advances in legged locomotion control have made legged robots walk up staircases, go deep into underground caves, and walk in the forest. Nevertheless, achieving this autonomously is still a challenge. Navigating and accomplishing missions in the wild relies not only on robust low-level controllers but also on higher-level representations and perceptual systems that are aware of the robot's capabilities. This thesis addresses the navigation problem for legged robots. The contributions are four systems designed to exploit unique characteristics of these platforms, from the sensing setup to their advanced mobility skills over different terrain. The systems address localisation, scene understanding, and local planning, and advance the capabilities of legged robots in challenging environments. The first contribution tackles localisation with the multi-camera setups available on legged platforms. It proposes a strategy to actively switch between the cameras and stay localised while operating in a visual teach-and-repeat context, in spite of transient changes in the environment. The second contribution focuses on local planning, effectively adding a safety layer for robot navigation. The approach uses a local map built on the fly to generate efficient vector field representations that enable fast and reactive navigation. The third contribution demonstrates how to improve local planning in natural environments by learning robot-specific traversability from demonstrations. The approach leverages classical and learning-based methods to enable online, onboard traversability learning. These systems are demonstrated via robot deployments in industrial facilities, underground mines, and parklands. The thesis concludes by presenting a real-world application: an autonomous forest inventory system with legged robots. This last contribution presents a mission planning system for autonomous surveying as well as a data analysis pipeline to extract forestry attributes. The approach was experimentally validated in a field campaign in Finland, evidencing the potential that legged platforms offer for future applications in the wild.
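
    A toy sketch of the vector-field idea in the second contribution: an attractive field towards a goal combined with repulsive fields from obstacles in a local map. The gains and the combination rule are assumptions for illustration, not the thesis's actual formulation.

        import numpy as np

        def local_vector_field(robot_xy, goal_xy, obstacles_xy,
                               k_att=1.0, k_rep=0.5, influence=2.0):
            """Return a desired velocity direction from a simple composite field."""
            robot_xy = np.asarray(robot_xy, dtype=float)
            # Attractive component: unit vector towards the goal.
            to_goal = np.asarray(goal_xy, dtype=float) - robot_xy
            field = k_att * to_goal / (np.linalg.norm(to_goal) + 1e-9)
            # Repulsive components from obstacles within the influence radius.
            for obs in obstacles_xy:
                away = robot_xy - np.asarray(obs, dtype=float)
                d = np.linalg.norm(away)
                if d < influence:
                    field += k_rep * (1.0 / d - 1.0 / influence) * away / (d + 1e-9)
            norm = np.linalg.norm(field)
            return field / norm if norm > 1e-9 else field

        # Goal straight ahead, one obstacle slightly off the direct path:
        # the result points mostly towards the goal, tilted away from the obstacle.
        print(local_vector_field([0, 0], [5, 0], [[1.5, 0.3]]))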

    Design of Autonomous Cleaning Robot

    Today, research is concentrated on designing and developing robots to address the challenges of human life in everyday activities. Cleaning robots are a class of service robots whose demand is increasing exponentially. Nevertheless, the application of cleaning robots is mostly confined to smaller areas such as homes; few autonomous cleaning products are commercialized for large areas such as schools, hospitals and malls. In this thesis, a proof of concept is designed for an autonomous floor-cleaning robot and an autonomous board-cleaning robot for schools. A thorough background study of domestic service robots is conducted to understand the technologies involved in these robots. The components of the vacuum cleaner are assembled on a commercial robotic platform, and the principles of vacuum cleaning technology and airflow equations are employed for the component selection of the vacuum cleaner. As the autonomous board-cleaning robot acts against gravity, magnetic adhesion is used to adhere the robot to the classroom board. This system uses a belt drive mechanism to manoeuvre; the belt drive increases the area of magnetic attraction while the robot is in motion. A semi-systematic approach using patterned path-planning techniques for complete coverage of the working environment is discussed in this thesis. The outcome of this thesis is a new conceptual mechanical design of an autonomous floor-cleaning robot and an autonomous board-cleaning robot, forming a preliminary proof-of-concept design for these robots. The proof-of-concept design is developed from the basic equations of vacuum cleaning technology, airflow and magnetic adhesion. A general overview of collaboration between the two robots is also discussed. This research provides an extensive initial step towards the development of an autonomous cleaning robot and is further validated with quantitative data discussed in the thesis.
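
    A small worked sketch of the kind of airflow sizing the abstract refers to, using the generic relations Q = A * v (volumetric flow from nozzle area and air speed) and P = Q * dp (air power from flow and suction pressure); all numbers are placeholder assumptions, not the thesis's design values.

        # Rough vacuum-cleaner sizing from basic airflow relations (assumed values).

        nozzle_width = 0.20          # m, assumed cleaning nozzle width
        nozzle_gap = 0.005           # m, assumed gap between nozzle and floor
        air_speed = 20.0             # m/s, assumed air speed needed to lift dust
        suction_pressure = 2500.0    # Pa, assumed pressure drop across the nozzle
        fan_efficiency = 0.35        # assumed overall fan/motor efficiency

        nozzle_area = nozzle_width * nozzle_gap          # m^2
        volumetric_flow = nozzle_area * air_speed        # Q = A * v   [m^3/s]
        air_power = volumetric_flow * suction_pressure   # P = Q * dp  [W]
        electrical_power = air_power / fan_efficiency    # required input power [W]

        print(f"Q = {volumetric_flow * 1000:.1f} L/s, "
              f"air power = {air_power:.0f} W, "
              f"input power = {electrical_power:.0f} W")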

    An overview of robotics and autonomous systems for harsh environments

    Across a wide range of industries and applications, robotics and autonomous systems can fulfil crucial and challenging tasks such as inspection, exploration, monitoring, drilling, sampling and mapping in areas of scientific discovery, disaster prevention, human rescue and infrastructure management. However, in many situations the associated environment is either too dangerous or inaccessible to humans. Hence, a wide range of robots have been developed and deployed to replace or aid humans in these activities. A look at these harsh-environment applications of robotics demonstrates the diversity of the technologies developed. This paper reviews some key application areas of robotics that involve interaction with harsh environments (such as search and rescue, space exploration, and deep-sea operations), gives an overview of the developed technologies, and discusses the key trends and future directions common to many of these areas.