152 research outputs found

    Unlimited-workspace teleoperation

    Thesis (Master) -- Izmir Institute of Technology, Mechanical Engineering, Izmir, 2012. Includes bibliographical references (leaves 100-105). Text in English; abstract in Turkish and English. xiv, 109 leaves.
    Teleoperation is, in brief, operating a vehicle or a manipulator from a distance. It is used to reduce mission cost, to protect humans from accidents that can occur during the mission, and to perform complex missions in areas that are difficult to reach or dangerous for humans. Teleoperation is divided into two main categories, unilateral and bilateral, according to the flow of information, which can be configured in either one direction (only from master to slave) or two directions (from master to slave and from slave to master). In unlimited-workspace teleoperation, a type of bilateral teleoperation, mobile robots are controlled by the operator and environmental information is transferred from the mobile robot back to the operator. Teleoperated vehicles can be used for a variety of missions in air, on the ground, and in water, so different constructional types of robots can be designed for different types of missions. This thesis aims to design and develop an unlimited-workspace teleoperation system with an omnidirectional mobile robot as the slave, to be used in future research. Initially, an omnidirectional mobile robot was manufactured, and robot-operator interaction and efficient data transfer were provided over the established communication line. Wheel velocities were measured in real time by Hall-effect sensors mounted on the robot chassis and integrated into the controllers. A dynamic obstacle detection system suitable for omnidirectional mobility was developed, and two obstacle avoidance algorithms (semi-autonomous and force-reflecting) were created and tested. Distance information between the robot and the obstacles was collected by an array of sensors mounted on the robot. In the semi-autonomous teleoperation scenario, the distance information is used to avoid obstacles autonomously; in the force-reflecting scenario, the user is informed of obstacles by reflecting artificially created forces acting on the slave robot back to the master. The test results indicate that the obstacle avoidance performance of the developed vehicle with both algorithms is acceptable in all test scenarios. In addition, two control models (kinematic and dynamic) were developed for the local controller of the slave robot, and the kinematic controller was supported by a gyroscope.
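    As an illustration of the force-reflecting scenario described above, the following Python sketch turns ring-mounted distance-sensor readings into an artificial repulsive force that could be rendered on the master device. The sensor layout, gains, and function names are assumptions made for illustration, not the thesis implementation.

```python
import math

# Hypothetical parameters: not from the thesis.
SENSOR_ANGLES_DEG = [0, 45, 90, 135, 180, 225, 270, 315]  # assumed ring of 8 range sensors
INFLUENCE_RADIUS = 1.0   # metres beyond which an obstacle exerts no force (assumed)
FORCE_GAIN = 2.0         # scaling of the artificial force (assumed)

def reflected_force(distances):
    """Sum a repulsive force vector from ring-mounted range readings.

    distances: list of ranges (metres), one per sensor angle.
    Returns (fx, fy) in the robot frame; the master device would render
    this vector as resistance felt by the operator.
    """
    fx, fy = 0.0, 0.0
    for angle_deg, d in zip(SENSOR_ANGLES_DEG, distances):
        if d <= 0.0 or d >= INFLUENCE_RADIUS:
            continue  # no obstacle within range of this sensor
        # Magnitude grows as the obstacle gets closer (potential-field style).
        magnitude = FORCE_GAIN * (1.0 / d - 1.0 / INFLUENCE_RADIUS)
        angle = math.radians(angle_deg)
        # The force points away from the obstacle, i.e. opposite the sensor direction.
        fx -= magnitude * math.cos(angle)
        fy -= magnitude * math.sin(angle)
    return fx, fy

# Example: an obstacle 0.4 m straight ahead pushes the robot (and the operator's hand) backwards.
print(reflected_force([0.4] + [5.0] * 7))
```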

    Loop closure for topological mapping and navigation with omnidirectional images

    Over the last three decades, research in mobile robot mapping and localization has seen significant progress. However, most of this research casts the problem in the SLAM framework, mapping and localizing metrically. As metrical mapping techniques are vulnerable to errors caused by drift, their ability to produce consistent maps is limited to small-scale environments. Consequently, topological mapping approaches, which are independent of metrical information, stand as an alternative to metrical approaches in large-scale environments. This thesis mainly deals with the loop closure problem, which is the crux of any topological mapping algorithm. Our main aim is to solve the loop closure problem efficiently and accurately using an omnidirectional imaging sensor. Sparse topological maps can be built by representing groups of visually similar images of a sequence as nodes of a topological graph. We propose a sparse, hierarchical topological mapping framework that uses Image Sequence Partitioning (ISP) to group visually similar images of a sequence into nodes, which are then connected when loop closures occur to form a topological graph. A hierarchical loop closure algorithm first retrieves the similar nodes and then performs an image similarity analysis on the retrieved nodes. An indexing data structure called the Hierarchical Inverted File (HIF) is proposed to store the sparse maps and facilitate efficient hierarchical loop closure. TF-IDF weighting is combined with spatial and frequency constraints on the detected features for improved loop closure robustness. The sparsity, efficiency, and accuracy of the resulting maps are evaluated and compared with those of two existing techniques on publicly available outdoor omnidirectional image sequences. Modest loop closure recall rates have been observed without using the epipolar geometry verification step common in other approaches. Although efficient, the HIF-based approach has certain disadvantages, such as low map sparsity and a low loop closure recall rate. To address these shortcomings, another loop closure technique using a spatial-constraint-based similarity measure on omnidirectional images has been proposed. The low map sparsity caused by over-partitioning of the input sequence has been overcome by using Vectors of Locally Aggregated Descriptors (VLAD) for ISP. The poor resolution of the omnidirectional images yields fewer feature matches in image pairs, resulting in reduced recall rates; a spatial constraint exploiting the omnidirectional image structure is therefore used for feature matching, which gives accurate results even with few matches. Recall rates better than the contemporary FABMAP 2.0 approach have been observed without the additional geometric verification step. The second contribution of this thesis is the formulation of a visual memory management approach suitable for the long-term operability of mobile robots. The formulated approach is suitable for both topological and metrical visual maps, and initial results demonstrating its capabilities are provided. Finally, a detailed description of the acquisition and construction of our multi-sensor dataset is provided.
    The aim of this dataset is to serve researchers in the mobile robotics and vision communities in evaluating applications such as visual SLAM, mapping, and visual odometry. This is the first dataset with omnidirectional images acquired on a car-like vehicle driven along a trajectory with multiple loops. The dataset consists of 6 sequences with data from 11 sensors, including 7 cameras, stretching 18 kilometers in a semi-urban setting with complete and precise ground truth.
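    The hierarchical loop-closure step above relies on bag-of-words style scoring over an inverted file. The sketch below is a simplified illustration rather than the thesis's HIF implementation: it shows how a flat (non-hierarchical) inverted index with TF-IDF weighting can rank previously seen images against a query image. All class and variable names are invented for the example.

```python
import math
from collections import Counter, defaultdict

class InvertedFileIndex:
    """Minimal TF-IDF inverted file over quantised visual words (illustrative only)."""

    def __init__(self):
        self.postings = defaultdict(list)   # word id -> list of (image id, term frequency)
        self.image_lengths = {}             # image id -> number of words in that image
        self.num_images = 0

    def add_image(self, image_id, visual_words):
        counts = Counter(visual_words)
        for word, tf in counts.items():
            self.postings[word].append((image_id, tf))
        self.image_lengths[image_id] = len(visual_words)
        self.num_images += 1

    def query(self, visual_words, top_k=5):
        """Score stored images against a query image by accumulated TF-IDF weight."""
        scores = defaultdict(float)
        counts = Counter(visual_words)
        for word, q_tf in counts.items():
            docs = self.postings.get(word, [])
            if not docs:
                continue
            idf = math.log(self.num_images / len(docs))  # rarer words weigh more
            for image_id, tf in docs:
                scores[image_id] += (q_tf / len(visual_words)) * (tf / self.image_lengths[image_id]) * idf
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# Toy usage: word ids would normally come from quantising local image descriptors.
index = InvertedFileIndex()
index.add_image("img_000", [1, 2, 3, 3, 7])
index.add_image("img_050", [2, 4, 4, 9])
print(index.query([3, 3, 7, 2]))  # img_000 should rank first as a loop-closure candidate
```

    In the thesis's hierarchical scheme, node-level retrieval would precede this image-level scoring, and the spatial and frequency constraints would then re-rank the candidates.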

    Mobile Robots Navigation

    Mobile robot navigation involves several interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, the construction of a spatial representation from the perceived sensory information; (iv) localization, the strategy for estimating the robot's position within the spatial map; (v) path planning, the strategy for finding a path towards a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors around the world. Research cases are documented in 32 chapters organized into 7 categories, described next.
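    The six activities listed above are commonly wired together as a sense-map-localize-plan-act loop. The Python skeleton below is a generic illustration of that structure, not taken from the book; every class and method name is invented, and the stubs exist only so the loop can run end to end.

```python
class NavigationStack:
    """Generic loop tying together the six navigation activities (illustrative only)."""

    def __init__(self, sensors, mapper, localizer, planner, controller):
        self.sensors = sensors        # (i) perception
        self.mapper = mapper          # (iii) mapping
        self.localizer = localizer    # (iv) localization
        self.planner = planner        # (ii) exploration / (v) path planning
        self.controller = controller  # (vi) path execution

    def step(self, goal):
        observation = self.sensors.read()                        # perceive
        world_map = self.mapper.update(observation)              # map
        pose = self.localizer.estimate(observation, world_map)   # localize
        path = self.planner.plan(pose, goal, world_map)          # plan (or explore if no goal)
        command = self.controller.follow(path, pose)             # execute, adapting to changes
        return command

class Stub:
    """Trivial stand-ins so the example runs; real components would replace these."""
    def read(self): return {"ranges": [1.0, 2.0, 0.5]}
    def update(self, obs): return {"occupied": []}
    def estimate(self, obs, world_map): return (0.0, 0.0, 0.0)
    def plan(self, pose, goal, world_map): return [pose, goal]
    def follow(self, path, pose): return {"v": 0.1, "w": 0.0}

stack = NavigationStack(Stub(), Stub(), Stub(), Stub(), Stub())
print(stack.step(goal=(3.0, 4.0, 0.0)))
```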

    CHARMIE: a collaborative healthcare and home service and assistant robot for elderly care

    The global population is ageing at an unprecedented rate. With changes in life expectancy across the world, three major issues arise: an increasing proportion of senior citizens; cognitive and physical problems progressively affecting the elderly; and a growing number of single-person households. The available data demonstrate the ever-increasing need for efficient elderly care solutions such as healthcare service and assistive robots. Additionally, such robotic solutions provide safe healthcare assistance in public health emergencies such as the SARS-CoV-2 virus (COVID-19). CHARMIE is an anthropomorphic collaborative healthcare and domestic assistant robot capable of performing generic service tasks in non-standardised healthcare and domestic environments. The combination of its hardware and software solutions demonstrates map building and self-localisation, safe navigation through dynamic obstacle detection and avoidance, different human-robot interaction systems, speech and hearing, pose/gesture estimation, and household object manipulation. Moreover, CHARMIE performs end-to-end chores in nursing homes, domestic houses, and healthcare facilities. Examples of these chores include helping users transport items, detecting falls, tidying up rooms, following a user, and setting a table. The robot can perform a wide range of chores, either independently or collaboratively. CHARMIE provides a generic robotic solution so that older people can live longer, more independent, and healthier lives.
    This work has been supported by FCT—Fundação para a Ciência e Tecnologia within the R&D Units Project Scope UIDB/00319/2020. The author T.R. received funding through a doctoral scholarship from the Portuguese Foundation for Science and Technology (Fundação para a Ciência e a Tecnologia) [grant number SFRH/BD/06944/2020], with funds from the Portuguese Ministry of Science, Technology and Higher Education and the European Social Fund through the Programa Operacional do Capital Humano (POCH). The author F.G. received funding through a doctoral scholarship from the Portuguese Foundation for Science and Technology (Fundação para a Ciência e a Tecnologia) [grant number SFRH/BD/145993/2019], with funds from the Portuguese Ministry of Science, Technology and Higher Education and the European Social Fund through the Programa Operacional do Capital Humano (POCH).

    Dataset of Panoramic Images for People Tracking in Service Robotics

    In this thesis, we provide a framework for constructing a guide robot for use in hospitals. The omnidirectional camera on the robot allows it to recognize and track the person who is following it. Furthermore, when directing the individual to their preferred location in the hospital, the robot must be aware of its surroundings and avoid collisions with other people or objects. To train and evaluate our robot's performance, we developed an auto-labeling framework for creating a dataset of panoramic videos captured by the robot's omnidirectional camera. We labeled each person in the video together with their real position in the robot's frame, enabling us to evaluate the accuracy of our tracking system and to guide the development of the robot's navigation algorithms. Our research expands on earlier work that established a framework for tracking individuals using omnidirectional cameras. By developing a benchmark dataset, we aim to contribute to ongoing efforts to enhance the precision and dependability of these tracking systems, which is essential for effective guide robots in healthcare facilities. Our research has the potential to improve the patient experience and increase the efficiency of healthcare institutions by reducing staff time spent guiding patients through the facility.
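    The auto-labeling output described above pairs each detection in a panoramic frame with the person's position in the robot frame. The snippet below sketches one plausible record layout and a conversion from a panoramic bearing plus a measured range to robot-frame coordinates; the field names, equirectangular camera model, and numeric values are assumptions for illustration, not the dataset's actual schema.

```python
import math
from dataclasses import dataclass

@dataclass
class PersonAnnotation:
    """One auto-generated label: where a person appears in the panorama
    and where they stand in the robot frame (illustrative schema only)."""
    frame_index: int
    track_id: int
    bbox: tuple          # (u_min, v_min, u_max, v_max) in panorama pixels
    range_m: float       # distance to the person, e.g. from a range sensor
    x_robot: float       # metres, robot frame (x forward)
    y_robot: float       # metres, robot frame (y left)

def bearing_from_panorama(u_center, image_width):
    """Horizontal pixel position in an equirectangular panorama maps linearly to bearing."""
    return (u_center / image_width) * 2.0 * math.pi - math.pi  # radians in (-pi, pi]

def to_robot_frame(u_center, image_width, range_m):
    theta = bearing_from_panorama(u_center, image_width)
    return range_m * math.cos(theta), range_m * math.sin(theta)

# Example: a person centred at pixel column 1440 of a 1920-wide panorama, 2.5 m away.
x, y = to_robot_frame(1440, 1920, 2.5)
ann = PersonAnnotation(frame_index=0, track_id=7, bbox=(1400, 300, 1480, 620),
                       range_m=2.5, x_robot=x, y_robot=y)
print(ann)
```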

    Neural Network based Robot 3D Mapping and Navigation using Depth Image Camera

    Robotics research has developed rapidly over the past decade. However, bringing robots into household or office environments, where they must cooperate well with humans, still requires more research. One of the main problems is robot localization and navigation. To accomplish its missions, a mobile robot needs to localize itself in the environment, find the best path, and navigate to the goal. Navigation methods can be categorized into map-based navigation and map-less navigation. In this research, we propose a method based on neural networks that uses a depth image camera to solve the robot navigation problem. With a depth image camera, the surrounding environment can be recognized regardless of the lighting conditions, and a neural network-based approach is fast enough for real-time robot navigation, which is important for developing fully autonomous robots. In our method, the robot maps and annotates the surrounding environment using a feed-forward neural network and a CNN. The 3D map contains not only the geometric information of the environment but also its semantic content, which is important for robots to accomplish their tasks. For instance, consider the task "Go to the cabinet to take a medicine": the robot needs to know the positions of the cabinet and the medicine, which a purely geometric map does not supply. The feed-forward neural network is trained to convert the depth information from depth images into 3D points in real-world coordinates, and the CNN is trained to segment the image into classes. By combining the two networks, the objects in the environment are segmented and their positions are determined. We implemented the proposed method on a mobile humanoid robot. Initially, the robot moves through the environment and builds the 3D map with objects placed at their positions; it then uses the resulting 3D map for goal-directed navigation. The experimental results show good performance in terms of 3D map accuracy and robot navigation. Most of the objects in the working environments are classified by the trained CNN, and unrecognized objects are classified by the feed-forward neural network. As a result, the generated maps accurately reflect the working environments and can be used by robots to navigate safely within them. The 3D geometric maps can be generated regardless of the lighting conditions, and the proposed localization method is robust even in texture-less environments, which are among the toughest environments for vision-based localization.
    Doctoral thesis, 博士(工学) (Doctor of Engineering), Hosei University.
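    As background for the depth-to-3D conversion described above, the sketch below back-projects depth pixels with a standard pinhole camera model and attaches per-pixel semantic labels. It stands in for the thesis's learned (feed-forward network) conversion; the intrinsics, label values, and function names are assumptions for illustration only.

```python
import numpy as np

# Assumed camera intrinsics for illustration (not the thesis's calibration).
FX, FY = 525.0, 525.0   # focal lengths in pixels
CX, CY = 319.5, 239.5   # principal point

def depth_to_semantic_points(depth_m, labels):
    """Back-project a depth image to 3D camera-frame points and attach class labels.

    depth_m: (H, W) float array of depths in metres (0 where invalid).
    labels:  (H, W) integer array of per-pixel classes, e.g. from a CNN segmenter.
    Returns an (N, 4) array of [x, y, z, label] rows for valid pixels.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    valid = z > 0
    return np.stack([x[valid], y[valid], z[valid], labels[valid].astype(float)], axis=1)

# Toy example: a flat wall 2 m away, every pixel labelled with an assumed class id 3.
depth = np.full((480, 640), 2.0)
seg = np.full((480, 640), 3, dtype=int)
cloud = depth_to_semantic_points(depth, seg)
print(cloud.shape, cloud[0])
```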

    DEVELOPMENT OF AN AUTONOMOUS NAVIGATION SYSTEM FOR THE SHUTTLE CAR IN UNDERGROUND ROOM & PILLAR COAL MINES

    In recent years, autonomous solutions in the multidisciplinary field of mining engineering have been an extremely popular applied research topic. The growing demand for mineral supplies, combined with the steady decline in available surface reserves, has driven the mining industry to mine deeper underground deposits. These deposits are difficult to access, and the environment may be hazardous to mine personnel (e.g., increased heat, difficult ventilation conditions, etc.). Moreover, current mining methods expose miners to numerous occupational hazards such as working in proximity to heavy mining equipment, possible roof falls, as well as noise and dust. As a result, the mining industry, in its efforts to modernize and advance its methods and techniques, is one of the many industries that have turned to autonomous systems. Vehicle automation in such complex working environments can play a critical role in improving worker safety and mine productivity. One of the most time-consuming tasks of the mining cycle is the transportation of the extracted ore from the face to the main haulage facility or to surface processing facilities. Although conveyor belts have long been the autonomous transportation means of choice, there are still many cases where a discrete transportation system is needed to move materials from the face to the main haulage system. This dissertation presents the development of a navigation system for an autonomous shuttle car (ASC) in underground room and pillar coal mines. By introducing autonomous shuttle cars, the operator can be relocated from the dusty, noisy, and potentially dangerous environment of the underground mine to the safer location of a control room. The dissertation focuses on the development and testing of an autonomous navigation system for an underground room and pillar coal mine. A simplified relative localization system, which determines the location of the vehicle relative to salient features derived from on-board 2D LiDAR scans, was developed for a semi-autonomous laboratory-scale shuttle car prototype. This localization system depends heavily on, and at the same time leverages, the room and pillar geometry. Instead of tracking a global position of the vehicle relative to a fixed coordinate frame, the proposed localization technique requires information about only the immediate surroundings. This approach enables the prototype to navigate around the pillars in real time using a deterministic Finite-State Machine that models the behavior of the vehicle in the room and pillar mine with only a few states. In addition, a user-centered GUI has been developed that allows a human user to control and monitor the autonomous vehicle running the proposed navigation system. Experimental tests were conducted in a mock mine to evaluate the performance of the developed system, covering a number of different scenarios that simulate common missions a shuttle car needs to undertake in a room and pillar mine. The results show a minimum success ratio of 70%.
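    The deterministic Finite-State Machine mentioned above models navigation around pillars with only a few states. The Python sketch below illustrates the general pattern with invented states and transition triggers; it is not the dissertation's actual state set, and the events would in practice be derived from the on-board 2D LiDAR features.

```python
# Illustrative finite-state machine for navigating around pillars.
# States and triggers are invented for this example.

TRANSITIONS = {
    # (current state, event observed from LiDAR features) -> next state
    ("FOLLOW_ENTRY",    "intersection_detected"): "TURN_AT_CROSSCUT",
    ("TURN_AT_CROSSCUT", "turn_completed"):       "FOLLOW_CROSSCUT",
    ("FOLLOW_CROSSCUT", "intersection_detected"): "TURN_AT_ENTRY",
    ("TURN_AT_ENTRY",   "turn_completed"):        "FOLLOW_ENTRY",
    ("FOLLOW_ENTRY",    "destination_reached"):   "STOP",
    ("FOLLOW_CROSSCUT", "destination_reached"):   "STOP",
}

def step(state, event):
    """Advance the FSM; unknown (state, event) pairs keep the current state."""
    return TRANSITIONS.get((state, event), state)

# Example run: drive along an entry, turn into a crosscut, then stop.
state = "FOLLOW_ENTRY"
for event in ["clear", "intersection_detected", "turn_completed", "destination_reached"]:
    state = step(state, event)
    print(event, "->", state)
```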