
    Developing a person guidance module for hospital robots

    This dissertation describes the design and implementation of the Person Guidance Module (PGM), which enables the IWARD (Intelligent Robot Swarm for Attendance, Recognition, Cleaning and Delivery) base robot to offer a route guidance service to patients and visitors inside a hospital. A common problem in large hospital buildings today is that visitors are unable to find their way around. Although a variety of guide robots currently exist on the market and offer a wide range of guidance and related activities, they do not fit into the modular concept of the IWARD project. The PGM features a robust, non-hierarchical sensor fusion approach combining an active RFID, stereo vision and a Cricket mote sensor for guiding a patient to the X-ray room, or a visitor to a patient's ward, in every possible scenario in a complex, dynamic and crowded hospital environment. Moreover, the speed of the robot is adjusted automatically to the pace of the follower for physical comfort. Furthermore, the module performs these tasks in any unstructured environment solely from the robot's onboard perceptual resources, in order to limit hardware installation costs and the required indoor infrastructure support. A similarly comprehensive solution on a single platform has remained elusive in the existing literature. The finished module can be connected to any IWARD base robot using quick-change mechanical connections and standard electrical connections. The PGM module box is equipped with a Gumstix embedded computer for all module computing, which is powered up automatically once the module box is inserted into the robot. In line with the general software architecture of the IWARD project, all software modules are developed as Orca2 components and cross-compiled for the Gumstix's XScale processor.
To support standardized communication between the different software components, the Internet Communications Engine (Ice) is used as middleware. Additionally, plug-and-play capabilities have been developed and incorporated so that the swarm system is aware at all times of which robot is equipped with the PGM. Finally, in several field trials in hospital environments, the person guidance module demonstrated its suitability for a challenging real-world application, as well as the necessary user acceptance.
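
As a rough illustration of the non-hierarchical fusion idea described above (combining active RFID, stereo vision and Cricket mote readings without a fixed master sensor), one standard scheme is inverse-variance weighting of the individual estimates. The readings and variances below are hypothetical; this is a sketch of the general principle, not the actual IWARD/PGM algorithm:

```python
def fuse_estimates(estimates):
    """Fuse independent (position, variance) estimates by inverse-variance
    weighting, a simple non-hierarchical fusion rule: no sensor acts as the
    master; each contributes in proportion to its confidence."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused = sum(w * pos for (pos, _), w in zip(estimates, weights)) / total
    return fused, 1.0 / total  # fused position and its variance

# Hypothetical 1D follower-distance readings (metres) and variances from
# active RFID, stereo vision and a Cricket mote (values invented)
readings = [(4.8, 0.50), (5.1, 0.05), (5.0, 0.10)]
pos, var = fuse_estimates(readings)
```

Note how the low-variance stereo reading dominates the fused estimate while the noisy RFID reading contributes only weakly, and the fused variance is smaller than any single sensor's.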

    A non-holonomic, highly human-in-the-loop compatible, assistive mobile robotic platform guidance navigation and control strategy

    The provision of assistive mobile robotics for empowering and providing independence to the infirm, disabled and elderly in society has been the subject of much research. The issue of providing navigation and control assistance to users, enabling them to drive their powered wheelchairs effectively, can be complex and wide-ranging: some users fatigue quickly and find that they are unable to operate the controls safely; others may have a brain injury resulting in periodic hand tremors; quadriplegics may use a straw-like switch in the mouth to provide a digital control signal. Advances in autonomous robotics have led to the development of smart wheelchair systems which attempt to address these issues; however, the autonomous approach has, according to research, not been successful, with users reporting that they want to be active drivers and not passengers. More recent methodologies use collaborative or shared control, which aims to predict or anticipate the need for the system to take over control when some pre-decided threshold has been met, yet these approaches still take control away from the user. This removal of human supervision and control by an autonomous system makes the responsibility for accidents seriously problematic. This thesis introduces a new human-in-the-loop control structure with real-time assistive levels. One of these levels offers improved dynamic modelling, and three offer unique and novel real-time solutions for: collision avoidance, localisation and waypoint identification, and assistive trajectory generation. This architecture and these assistive functions always allow the user to remain fully in control of any motion of the powered wheelchair, as shown in a series of experiments.
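
A human-in-the-loop structure of this kind can be pictured as a blend of the user's command with an assistive command, where the blend factor is capped so the user is never fully overridden. This is a hypothetical sketch of the general idea, not the thesis' controller; the commands and cap value are invented for illustration:

```python
def blend_command(user_cmd, assist_cmd, assist_level, cap=0.9):
    """Blend user and assistive (linear, angular) velocity commands.
    The assist level is clamped below 1.0 so the user always retains
    some authority over the wheelchair's motion (illustrative only)."""
    a = min(max(assist_level, 0.0), cap)
    return tuple((1.0 - a) * u + a * s for u, s in zip(user_cmd, assist_cmd))

# Hypothetical commands: the user drives straight ahead while the
# assistive layer suggests slowing down and turning away from an obstacle
cmd = blend_command(user_cmd=(0.8, 0.0), assist_cmd=(0.4, 0.3), assist_level=0.5)
```

Because the cap keeps the blend factor strictly below 1.0, the user's input always influences the final command, in contrast to threshold-based shared control that hands over full authority.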

    Microdrone-Based Indoor Mapping with Graph SLAM

    Unmanned aerial vehicles offer a safe and fast approach to the production of three-dimensional spatial data on the surrounding space. In this article, we present a low-cost SLAM-based drone for creating exploration maps of building interiors. The focus is on emergency response mapping in inaccessible or potentially dangerous places. For this purpose, we used a quadcopter microdrone equipped with six laser rangefinders (1D scanners) and an optical sensor for mapping and positioning. The employed SLAM method is designed to map indoor spaces with planar structures through graph optimization. It performs loop-closure detection and correction to recognize previously visited places and to correct the drift accumulated over time. The proposed methodology was validated in several indoor environments. We investigated the performance of our drone against a multilayer-LiDAR-carrying macrodrone, a vision-aided navigation helmet, and ground truth obtained with a terrestrial laser scanner. The experimental results indicate that our SLAM system is capable of creating quality exploration maps of small indoor spaces and of handling the loop-closure problem. The accumulated drift without loop closure was on average 1.1% (0.35 m) over a 31-m-long acquisition trajectory. Moreover, the comparison results demonstrated that our flying microdrone provided performance comparable to the multilayer-LiDAR-based macrodrone, given the low deviation between the point clouds built by the two drones. Approximately 85% of the cloud-to-cloud distances were less than 10 cm.
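
Graph SLAM with loop closure corrects accumulated drift by optimizing all poses jointly once a revisited place adds a constraint back to an earlier pose. The following toy 1D pose-graph optimizer (plain gradient descent, invented numbers, not the authors' implementation) shows the mechanism:

```python
def optimize_pose_graph(n_poses, edges, iters=500, lr=0.1):
    """Minimise the squared residuals of relative-pose measurements by
    gradient descent; pose 0 is held fixed to anchor the map. Each edge
    (i, j, z) says 'pose j should sit z metres after pose i'."""
    x = [0.0] * n_poses
    for _ in range(iters):
        grad = [0.0] * n_poses
        for i, j, z in edges:
            r = (x[j] - x[i]) - z          # measurement residual
            grad[j] += 2.0 * r
            grad[i] -= 2.0 * r
        for k in range(1, n_poses):        # pose 0 stays anchored
            x[k] -= lr * grad[k]
    return x

# Odometry claims 1 m per step; the loop-closure edge back to pose 0
# reveals 0.3 m of drift, which the optimizer spreads over all edges
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, -2.7)]
poses = optimize_pose_graph(4, edges)
```

After optimization, each edge absorbs an equal share of the 0.3 m inconsistency, which is exactly the drift-spreading effect loop closure provides in the article's mapping system.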

    Simultaneous Localization and Mapping Technologies

    The SLAM (Simultaneous Localization And Mapping) problem consists of mapping an unknown environment by means of a device moving within it, while simultaneously localizing that device. This thesis analyses the SLAM problem and the differences that distinguish it from the mapping and localization problems treated separately. It then analyses the main algorithms employed today for its solution, namely extended Kalman filters and particle filters. The various implementation technologies are then examined, including SONAR, LASER, vision and RADAR systems; the latter, at the state of the art, employ millimetre-wave (mmW) and ultra-wideband (UWB) signals, but also well-established radio technologies such as Wi-Fi. Finally, simulations of vision-based and LASER-based technologies are carried out with the help of two open-source MATLAB packages. The package designed for LASER systems is then modified in order to simulate a SLAM technology based on Wi-Fi signals. The use of low-cost and widely deployed technologies such as Wi-Fi opens up the possibility, in the near future, of performing low-cost indoor localization with a simple smartphone by exploiting the existing infrastructure. Looking further ahead, the advent of millimetre-wave technology (5G) will allow even higher performance to be achieved.
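
Of the two algorithm families the thesis analyses, the particle filter is the easier to sketch. Below is a minimal predict-weight-resample cycle for 1D localization against a single range measurement; all numbers are illustrative and this is a toy localization step, not a full SLAM implementation:

```python
import math
import random

def particle_filter_step(particles, control, measurement, sigma=0.5, seed=0):
    """One predict-weight-resample cycle of a 1D particle filter."""
    rng = random.Random(seed)
    # Predict: apply the motion command plus a little motion noise
    moved = [p + control + rng.gauss(0.0, 0.1) for p in particles]
    # Weight: Gaussian likelihood of the range measurement for each particle
    weights = [math.exp(-((m - measurement) ** 2) / (2.0 * sigma ** 2))
               for m in moved]
    total = sum(weights)
    # Resample: draw particles in proportion to their normalised weights
    return rng.choices(moved, weights=[w / total for w in weights],
                       k=len(moved))

particles = [i * 0.1 for i in range(100)]   # uniform prior over [0, 10) m
particles = particle_filter_step(particles, control=1.0, measurement=6.0)
estimate = sum(particles) / len(particles)
```

A single cycle already collapses the uniform prior onto the region consistent with the measurement; repeating the cycle with successive controls and measurements yields the full filter.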

    Behavioural strategy for indoor mobile robot navigation in dynamic environments

    PhD Thesis. The development of behavioural strategies for indoor mobile navigation has become a challenging and practical issue in cluttered indoor environments, such as a hospital or factory, where there are many static and moving objects, including humans and other robots, all trying to complete their own specific tasks; some objects may be moving in a similar direction to the robot, whereas others may be moving in the opposite direction. The key requirement for any mobile robot is to avoid colliding with any object which may prevent it from reaching its goal, or bringing harm to any individual within its workspace. This challenge is further complicated by unobserved objects suddenly appearing in the robot's path, particularly when the robot crosses a corridor or an open doorway. Therefore the mobile robot must be able to anticipate such scenarios and manoeuvre quickly to avoid collisions. In this project, a hybrid control architecture has been designed for navigation within dynamic environments. The control system includes three levels, namely deliberative, intermediate and reactive, which work together to achieve short, fast and safe navigation. The deliberative level creates a short and safe path from the current position of the mobile robot to its goal using the wavefront algorithm, estimates the current location of the mobile robot, and extracts the regions from which unobserved objects may appear. The intermediate level links the deliberative and reactive levels; the latter includes several behaviours for executing the global path in such a way as to avoid any collision. In avoiding dynamic obstacles, the controller has to identify and extract obstacles from the sensor data, estimate their speeds, and then regulate its own speed and direction to minimize the collision risk while maximizing progress towards the goal.
The velocity obstacle (VO) approach is considered an easy and simple method for avoiding dynamic obstacles, whilst the collision cone principle is used to detect a collision situation between two circular-shaped objects. However, the VO approach faces two challenges when applied in indoor environments. The first is the extraction of collision cones of non-circular objects from sensor data, where applying circle-fitting methods generally produces large and inaccurate collision cones, especially for line-shaped obstacles such as walls. The second is that the mobile robot sometimes cannot move towards its goal because all of its velocities towards the goal lie within collision cones. In this project, a method has been demonstrated to extract the collision cones of circular and non-circular objects using a laser sensor, where the obstacle size and the collision time are used to weigh the robot's velocities. In addition, the principle of the virtual obstacle is proposed to minimize the collision risk with unobserved moving obstacles. Simulations and experiments using the proposed control system on a Pioneer mobile robot showed that the robot can successfully avoid static and dynamic obstacles. Furthermore, the mobile robot was able to reach its target within an indoor environment without causing any collision or missing the target.
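
The collision-cone test at the heart of the velocity obstacle approach can be stated compactly for two circular objects: a collision is possible when the relative velocity points inside the cone subtended by the combined radius. A minimal sketch of that standard test (not the thesis' extended method for non-circular obstacles):

```python
import math

def in_collision_cone(p_rob, v_rob, p_obs, v_obs, r_combined):
    """True if the robot's velocity relative to the obstacle lies inside
    the collision cone subtended by the combined radius r_combined."""
    dx, dy = p_obs[0] - p_rob[0], p_obs[1] - p_rob[1]
    rvx, rvy = v_rob[0] - v_obs[0], v_rob[1] - v_obs[1]
    dist = math.hypot(dx, dy)
    if dist <= r_combined:
        return True                            # already overlapping
    half_angle = math.asin(r_combined / dist)  # cone half-aperture
    diff = math.atan2(rvy, rvx) - math.atan2(dy, dx)
    diff = abs((diff + math.pi) % (2.0 * math.pi) - math.pi)  # wrap to [0, pi]
    approaching = (rvx * dx + rvy * dy) > 0.0
    return approaching and diff < half_angle

# Head-on approach: obstacle dead ahead, relative velocity inside the cone
danger = in_collision_cone((0, 0), (1, 0), (5, 0), (0, 0), r_combined=1.0)
# Sideways motion: relative velocity well outside the cone
safe = not in_collision_cone((0, 0), (0, 1), (5, 0), (0, 0), r_combined=1.0)
```

The thesis' first challenge is visible here: `r_combined` assumes circular shapes, so fitting a circle around a wall inflates the cone drastically, which motivates the laser-based extraction method described above.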

    Mobile Robots Navigation

    Mobile robot navigation involves several interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, the construction of a spatial representation from the sensory information perceived; (iv) localization, the strategy for estimating the robot's position within the spatial map; (v) path planning, the strategy for finding a path towards a goal location, whether optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. This book addresses these activities by integrating results from the research work of several authors around the world. The research cases are documented in 32 chapters organized within seven categories, described next.

    3D Cameras: 3D Computer Vision of Wide Scope

    The human visual sense is the one among all the senses that gathers most of the information we receive. Evolution has optimized our visual system to negotiate its way in three dimensions, even through cluttered environments. For perceiving 3D information, the human brain uses three important principles: stereo vision, motion parallax and a priori knowledge about the perspective appearance of objects as a function of their distance. These tasks have posed a challenge to computer vision for decades. Today the most common techniques for 3D sensing are based on CCD or CMOS cameras, laser scanners or 3D time-of-flight cameras. Even though evolution has shown a predominance of passive stereo vision systems, three problems remain for passive 3D perception compared with the two active vision systems mentioned above. First, the computation demands a great deal of performance, since correspondences between two images from different points of view have to be found. Second, distances to structureless surfaces cannot be measured if the perspective projection of the object is larger than the camera's field of view; this is often called the aperture problem. Finally, a passive visual sensor has to cope with shadowing effects and changes in illumination over time. That is why, for mapping purposes, mostly active vision systems like laser scanners are used, e.g. [Thrun et al., 2000], [Wulf & Wagner, 2003], [Surmann et al., 2003]. But these approaches are usually not applicable to tasks that must consider environment dynamics. Due to this restriction, 3D cameras [CSEM SA, 2007], [PMDTec, 2007] have attracted attention since their invention nearly a decade ago. Their distance measurements are also based on a time-of-flight principle, but with an important difference: instead of sampling laser beams serially to acquire distance data point-wise, the entire scene is illuminated with modulated light and measured in parallel.
This principle allows for higher frame rates and thus enables the consideration of environment dynamics. The first part of this chapter discusses the physical principles of 3D sensors commonly used in the robotics community for typical problems like mapping and navigation. The second part concentrates on 3D cameras, their assets, drawbacks and perspectives. Based on these examining parts, some solutions are discussed that handle common problems occurring in dynamic environments with changing lighting conditions. Finally, the last part of this chapter shows how 3D cameras can be applied to mapping, object localization and feature tracking tasks.
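
For the continuous-wave time-of-flight principle underlying such 3D cameras, the distance follows from the measured phase shift of the modulated light, d = c Δφ / (4π f), with an unambiguous range of c / (2f). A small illustration of the generic formula (not a specific camera's interface):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(phase_shift_rad, mod_freq_hz):
    """Distance from the phase shift of the returned modulated light:
    d = c * phi / (4 * pi * f), valid only up to the ambiguity range."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz):
    """Largest distance measurable before the phase wraps around: c / (2 f)."""
    return C / (2.0 * mod_freq_hz)

# At a typical 20 MHz modulation, a phase shift of pi radians corresponds
# to roughly 3.75 m, half of the ~7.5 m unambiguous range
d = tof_distance(math.pi, 20e6)
```

Because every pixel measures this phase simultaneously, the whole depth image is acquired at once, which is what gives these cameras their frame-rate advantage over serially scanning laser rangefinders.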

    Recent Advances in Indoor Localization Systems and Technologies

    Despite the enormous technical progress seen in the past few years, the maturity of indoor localization technologies has not yet reached the level of GNSS solutions. The 23 selected papers in this book present recent advances and new developments in indoor localization systems and technologies, propose novel or improved methods with increased performance, provide insight into various aspects of quality control, and also introduce some unorthodox positioning methods.

    PCA-based line detection from range data for mapping and localization-aiding of UAVs

    This paper presents an original technique for the robust detection of line features from range data, which is also the core element of an algorithm conceived for mapping 2D environments. A new approach is also discussed to improve the accuracy of the position and attitude estimates of the localization by feeding back angular information extracted from the detected edges in the updating map. The innovative aspect of the line detection algorithm is the proposed hierarchical clustering method for segmentation. Line fitting, instead, is carried out by exploiting Principal Component Analysis, unlike traditional techniques relying on least-squares linear regression. Numerical simulations are purposely conceived to compare these approaches to line fitting. Results demonstrate the applicability of the proposed technique, as it provides comparable performance in terms of computational load and accuracy with respect to the least-squares method. Also, the performance of the overall line detection architecture, as well as of the solutions proposed for line-based mapping and localization-aiding, is evaluated by exploiting real range data acquired in indoor environments using a UTM-30LX-EW 2D LIDAR. This work lies within the framework of autonomous navigation of unmanned vehicles moving in complex 2D areas, e.g. unexplored, full of obstacles, or GPS-challenged or GPS-denied.
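
PCA-based line fitting of the kind the paper advocates can be sketched for 2D range points: the dominant eigenvector of the 2x2 sample covariance gives the line direction through the centroid, and, unlike y-on-x least squares, it also handles near-vertical walls. The points below are synthetic and the code only illustrates the idea, not the paper's implementation:

```python
import math

def pca_line_fit(points):
    """Fit a line to 2D points via PCA: the principal axis of the sample
    covariance is the line direction through the centroid."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    sxx = sum((x - cx) ** 2 for x, _ in points) / n
    syy = sum((y - cy) ** 2 for _, y in points) / n
    sxy = sum((x - cx) * (y - cy) for x, y in points) / n
    # Orientation of the dominant eigenvector of [[sxx, sxy], [sxy, syy]]
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    return (cx, cy), theta

# Synthetic range points along a 45-degree wall segment
points = [(i * 0.1, 2.0 + i * 0.1) for i in range(20)]
centroid, theta = pca_line_fit(points)
```

The closed-form angle makes the fit as cheap as least squares while remaining well defined for any segment orientation, which is consistent with the comparable computational load the paper reports.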