1,501 research outputs found

    A review of sensor technology and sensor fusion methods for map-based localization of service robot

    Service robots are currently gaining traction, particularly in the hospitality, geriatric care and healthcare industries. The navigation of service robots requires high adaptability, flexibility and reliability. Map-based navigation is therefore well suited to service robots, because changes in the environment are easy to incorporate into the map and a new optimal path can be determined flexibly. For map-based navigation to be robust, an accurate and precise localization method is necessary. The localization problem can be defined as the robot recognizing its own position in a given environment, and solving it is a crucial step in any navigation process. The major difficulties of localization are the dynamic changes of the real world, uncertainty, and limited sensor information. This paper presents a comparative review of sensor technologies and sensor fusion methods suitable for map-based localization, focusing on service robot applications

    A novel distributed architecture for UAV indoor navigation

    Abstract: In the last decade, various indoor flight navigation systems for small Unmanned Aerial Vehicles (UAVs) have been investigated, with a special focus on different configurations and sensor technologies. The main idea of this paper is to propose a distributed Guidance, Navigation and Control (GNC) system architecture, based on the Robot Operating System (ROS), for lightweight UAV autonomous indoor flight. The proposed framework is shown to be more robust and flexible than common configurations. The architecture also includes a flight controller and a companion computer running ROS for control and navigation. Both hardware and software diagrams are given to show the complete architecture. Future work will address experimental validation of the proposed configuration through indoor flight tests

    Localization and 2D Mapping Using Low-Cost Lidar

    Autonomous vehicles are expected to bring profound change to the auto industry. An autonomous vehicle is a vehicle that can sense its surroundings and travel with little or no human intervention. The four key capabilities of autonomous vehicles are a comprehensive understanding of sensor data, knowledge of their position in the world, building a map of an unknown environment, and following the planned route while avoiding collisions. This thesis aims to build a low-cost autonomous vehicle prototype capable of simultaneous localization and 2D mapping. In addition, the prototype should be able to detect obstacles and avoid collisions. In this thesis, a Redbot is used as the moving vehicle to evaluate the collision avoidance functionality. A mechanical bumper at the front of the Redbot detects obstacles, and a remote user can send commands over a Zigbee network to make the Redbot drive straight, turn right or left, or stop. The Redbot also carries the lidar scanner, which consists of a Lidar Lite V3 rangefinder and a servo motor. Lidar data are sent back over the Zigbee network to a laptop running ROS, where the Hector SLAM metapackage processes the lidar data and realizes simultaneous localization and 2D mapping. After implementing the prototype, a series of tests was conducted to evaluate localization, 2D mapping, obstacle detection, and collision avoidance. The results demonstrate that the prototype can build usable 2D maps of unknown environments, localize itself, and detect obstacles and avoid collisions in time. Because of the limited scan range of the low-cost lidar scanner, map boundaries can be missed; this limitation can be addressed by using a lidar scanner with a larger scan range
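    The core geometry behind the 2D mapping described above can be sketched as follows. This is a minimal, hedged illustration, not the thesis's actual code: in the real system, Hector SLAM consumes ROS LaserScan messages and estimates the pose itself, whereas here the pose is assumed given, and the (angle, distance) pairs stand in for the servo-swept Lidar Lite readings.

    ```python
    import math

    def scan_to_points(scan, pose=(0.0, 0.0, 0.0)):
        """Convert (angle_deg, distance_m) pairs from a servo-swept
        rangefinder into 2D Cartesian points in the world frame.
        `pose` is the vehicle's assumed (x, y, heading_rad)."""
        x0, y0, heading = pose
        points = []
        for angle_deg, dist in scan:
            if dist <= 0.0:  # skip invalid or missing returns
                continue
            a = heading + math.radians(angle_deg)
            points.append((x0 + dist * math.cos(a),
                           y0 + dist * math.sin(a)))
        return points

    def mark_grid(points, resolution=0.05, size=200):
        """Mark hit cells in a size x size occupancy grid centred on the
        origin; `resolution` is metres per cell. A real SLAM back end
        would instead update log-odds and trace free cells along each ray."""
        grid = [[0] * size for _ in range(size)]
        half = size // 2
        for x, y in points:
            i = int(round(x / resolution)) + half
            j = int(round(y / resolution)) + half
            if 0 <= i < size and 0 <= j < size:
                grid[j][i] = 1
        return grid
    ```

    For example, a return of 1 m at 0 degrees from a robot at the origin marks a cell 1 m ahead of it; repeating this over a full servo sweep, with the pose updated by the SLAM estimate, accumulates the 2D map.
    
    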

    Towards a Prototype Platform for ROS Integrations on a Ground Robot

    The intent of this work was to develop, evaluate, and demonstrate a prototype robot platform on which ROS integrations could be explored. Based on observations of the features and requirements of existing industrial and service mobile ground robots, a platform was designed and outfitted with components that enable the most common operation-critical functionalities while accounting for unforeseen components and features. The resulting Arlo Demonstration Robot accommodates basic mapping, localization, and navigation in both two- and three-dimensional space, as well as additional safety and teleoperation features. The control system is centered around the Zybo Z7 FPGA SoC hosting a custom hardware design. The platform is validated through an analysis of feature requirements and limitations, and through evaluations of a series of real-world use cases demonstrating high-level behaviors. To promote further development, this work serves as detailed documentation of the selection, implementation, and testing of the platform, and complements initial binary releases for the Zybo Z7 control system and accompanying source code for the functionalities implemented. This prototype robot stack can be further developed to enable additional capabilities and validate its performance in other real-world scenarios, or used as a reference for porting to alternative robot platforms

    Investigation of 3D Body Shapes and Robot Control Algorithms for a Virtual Fitting Room

    The electronic version of this dissertation does not contain the publications. Virtual fitting constitutes a fundamental element of the developments expected to raise the commercial prosperity of online garment retailers to a new level, as it reduces the manual labor and physical effort required during the fitting phase and makes trying on clothes more convenient for the user. Nevertheless, most previously proposed computer vision and graphics methods have failed to model the human body accurately and realistically, especially when it comes to 3D modeling of the whole body, which requires large amounts of data and computation. The failure is largely due to the inability to properly account for simultaneous variations in the body surface. In addition, most of the foregoing techniques cannot render realistic movement representations in real time. This project intends to overcome the aforementioned shortcomings so as to satisfy the requirements of a virtual fitting room. The proposed methodology consists of scanning and analyzing both the user's body and the prospective garment to be virtually fitted; modeling them, extracting measurements and assigning reference points; segmenting the 3D visual data imported from the mannequins; and finally superimposing, adapting and depicting the resulting garment model on the user's body. In this project, visual data were gathered using a 3D laser scanner and the Kinect optical camera and managed in the form of a usable database, which was used to experimentally test the algorithms devised. These algorithms provide a realistic visual representation of the garment on the body and enhance the size-advisor system in the context of the virtual fitting room under study

    Advances in Human-Robot Interaction

    Rapid advances in the field of robotics have made it possible to use robots not just in industrial automation but also in entertainment, rehabilitation, and home service. Since robots will likely affect many aspects of human existence, fundamental questions of human-robot interaction must be formulated and, if at all possible, resolved. Some of these questions are addressed in this collection of papers by leading HRI researchers