692 research outputs found

    3D measurement systems for robot manipulators


    Actuators and sensors for application in agricultural robots: A review

    In recent years, with the rapid development of science and technology, agricultural robots have gradually begun to replace humans in various agricultural operations, changing traditional agricultural production methods. This reduces labor input and improves production efficiency, which contributes to the development of smart agriculture. This paper reviews the core technologies used by agricultural robots in non-structured environments. In addition, we review the technological progress of drive systems, control strategies, end-effectors, robotic arms, environmental perception, and other related systems. This research shows that in a non-structured agricultural environment, by using cameras, light detection and ranging (LiDAR), ultrasonic sensors, and satellite navigation equipment, and by integrating sensing, transmission, control, and operation, different types of actuators can be innovatively designed and developed to drive the advance of agricultural robots and to meet the delicate and complex handling requirements of agricultural products as operational objects, so that better productivity and standardization of agriculture can be achieved. In summary, agricultural production is developing toward a data-driven, standardized, and unmanned approach, with smart agriculture supported by actuator-driven agricultural robots. This paper concludes with a summary of the main existing technologies and challenges in the development of actuators for agricultural robots, and an outlook on the primary development directions of agricultural robots in the near future.

    Design of an obstacle avoidance system for automated guided vehicles

    Most industrial Automated Guided Vehicles (AGVs) follow fixed guide paths embedded in the floor or bonded to the floor surface. Whilst reliable in their basic operation, these AGV systems fail if unexpected obstacles are placed in the vehicle path. This can be a problem, particularly in semi-automated factories where people and AGVs share the same environment. The performance of line-guided AGVs may therefore be enhanced with a capability to avoid unexpected obstructions in the guide path. The research described in this thesis addresses some fundamental problems associated with obstacle avoidance for automated vehicles. A new obstacle avoidance system has been designed which operates by detecting obstacles as they disturb a light pattern projected onto the floor ahead of the AGV. A CCD camera mounted under the front of the vehicle senses obstacles as they emerge into the projection area and reflect the light pattern. Projected light patterns have been used as an aid to static image analysis in the fields of Computer Aided Design and Engineering; this research extends these ideas in a real-time mobile application. A novel light coding system has been designed which simplifies the image analysis task and allows a low-cost embedded microcontroller to carry out the image processing, code recognition, and obstacle avoidance planning functions. An AGV simulation package has been developed as a design tool for obstacle avoidance algorithms. This enables potential strategies to be developed in a high-level language and tested via a graphical user interface. The algorithms designed using the simulation package were successfully translated to assembly language and implemented on the embedded system. An experimental automated vehicle has been designed and built as a test bed for the research, and the complete obstacle avoidance system was evaluated in the Flexible Manufacturing laboratory at the University of Huddersfield.
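    The light-pattern idea above can be illustrated with a minimal frame-differencing sketch in Python. This is only a rough illustration under assumed conditions: the thesis uses a purpose-built coded light pattern decoded on an embedded microcontroller, whereas the function below simply compares a live grayscale frame of the projection area against a reference frame of the undisturbed pattern, and the threshold and min_pixels parameters are hypothetical.

    # Hedged sketch: flag an obstacle when the observed light pattern deviates
    # from a reference frame of the undisturbed pattern (not the thesis's coded
    # pattern or embedded implementation; parameters are illustrative).
    import numpy as np

    def obstacle_mask(frame, reference, threshold=40, min_pixels=200):
        """Return (obstacle_present, deviation_mask) for one camera frame.

        frame, reference: 8-bit grayscale images of the floor projection area.
        """
        diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
        mask = diff > threshold            # pixels where the pattern is disturbed
        return mask.sum() > min_pixels, mask

    In a deployed system the reference frame would need to be re-learned as lighting and floor conditions change; here it is assumed fixed.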

    Proceedings of the 4th field robot event 2006, Stuttgart/Hohenheim, Germany, 23-24th June 2006

    A very extensive report of the 4th Field Robot Event, held on 23 and 24 June 2006 in Stuttgart/Hohenheim.

    Ability of head-mounted display technology to improve mobility in people with low vision: a systematic review

    Purpose: The purpose of this study was to undertake a systematic literature review on how vision enhancements, implemented using head-mounted displays (HMDs), can improve mobility, orientation, and associated aspects of visual function in people with low vision. Methods: The databases Medline, CINAHL, Scopus, and Web of Science were searched for potentially relevant studies. Publications from all years until November 2018 were identified based on predefined inclusion and exclusion criteria. The data were tabulated and synthesized to produce a systematic review. Results: The search identified 28 relevant papers describing the performance of vision enhancement techniques on mobility and associated visual tasks. Simplifying visual scenes improved obstacle detection and object recognition but decreased walking speed. Minification techniques increased the size of the visual field by 3 to 5 times and improved visual search performance. However, the impact of minification on mobility has not been studied extensively. Clinical trials with commercially available devices recorded poor results relative to conventional aids. Conclusions: The effects of current vision enhancements using HMDs are mixed: they appear to reduce mobility efficiency but improve obstacle detection and object recognition. The review highlights the lack of controlled studies with robust study designs. To support the evidence base, well-designed trials with larger sample sizes that represent different types of impairments and real-life scenarios are required. Future work should focus on identifying the needs of people with different types of vision impairment and providing targeted enhancements. Translational Relevance: This literature review examines the evidence regarding the ability of HMD technology to improve mobility in people with sight loss.

    Vision-based trajectory tracking algorithm with obstacle avoidance for a wheeled mobile robot

    Wheeled mobile robots are becoming increasingly important in industry as a means of transportation, inspection, and operation because of their efficiency and flexibility. The design of efficient algorithms for autonomous or quasi-autonomous mobile robot navigation in dynamic environments is a challenging problem that has been the focus of many researchers during the past few decades. Computer vision is perhaps not the most successful sensing modality used in mobile robotics so far (sonar and infra-red sensors, for example, being preferred), but it is the sensor that most completely answers "what" and "where" for the objects a robot is likely to encounter. In this thesis, we use a vision system to navigate the mobile robot along a reference trajectory and a sensor-based obstacle avoidance method to pass objects located on the trajectory. A tracking control algorithm is also described in this thesis. Finally, experimental results are presented to verify the tracking and control algorithms.
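    The tracking control algorithm is not specified in the abstract, so the following is a hedged sketch of a generic Kanayama-style kinematic tracking law for a differential-drive robot. The gains kx, ky, kth and the pose/velocity interfaces are assumptions for illustration, not the controller derived in the thesis.

    # Generic kinematic trajectory-tracking law (illustrative assumption, not the
    # thesis controller). Computes velocity commands from the tracking error
    # expressed in the robot frame.
    import numpy as np

    def tracking_control(pose, ref_pose, v_ref, w_ref, kx=1.0, ky=4.0, kth=2.0):
        """Return (v, w) commands driving `pose` toward the moving reference.

        pose, ref_pose: (x, y, theta) of the robot and the reference point.
        v_ref, w_ref:   feed-forward linear/angular velocity of the reference.
        """
        x, y, th = pose
        xr, yr, thr = ref_pose
        # Tracking error in the robot frame
        ex = np.cos(th) * (xr - x) + np.sin(th) * (yr - y)
        ey = -np.sin(th) * (xr - x) + np.cos(th) * (yr - y)
        eth = np.arctan2(np.sin(thr - th), np.cos(thr - th))
        # Feed-forward plus feedback on the longitudinal, lateral and heading errors
        v = v_ref * np.cos(eth) + kx * ex
        w = w_ref + v_ref * (ky * ey + kth * np.sin(eth))
        return v, w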

    Estimation of distance headway on two-lane highways using video recording technique

    Distance headway is the physical separation, in meters, between any pair of successive vehicles in a traffic lane, measured from the same common feature of the subject vehicles, either rear-to-rear or front-to-front. It is a significant microscopic traffic flow parameter used in various traffic engineering applications such as level of service, highway capacity analysis, traffic safety, and microscopic traffic simulation. Its values are also essential in the evaluation of congestion levels and overtaking-manoeuvre-related problems. Distance headway, being a spatial parameter, is difficult to measure directly in the field. It is therefore usually estimated from other parameters, particularly traffic density, which is itself difficult to measure directly and is estimated from other parameters based on spot observation. Estimates of distance headway from such approaches may not be a true representation of the desired values, especially where the parameter is to be evaluated at intervals over a roadway segment. This paper presents a novel approach for direct field measurement of distance headway on two-lane highways using a video-recording instrumented vehicle. Data for the study were collected from six segments of two-lane highways in Johor, Malaysia. Findings from the study demonstrate that, although distance headway is a spatial parameter, the approach reported herein can be used to measure it directly in the field, in contrast to the existing practice of estimating it from other variables based on spot observation.
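    The contrast the paper draws between direct measurement and density-based estimation can be made concrete with a small Python sketch. The vehicle positions and the density value below are hypothetical, and the paper's instrumented-vehicle video pipeline is not reproduced.

    # Hedged sketch: direct front-to-front distance headways from vehicle positions
    # in one lane, versus the conventional spot-observation estimate from density.
    import numpy as np

    def distance_headways(front_positions_m):
        """Front-to-front spacings (m) between successive vehicles in a lane."""
        pos = np.sort(np.asarray(front_positions_m))[::-1]   # leader first
        return pos[:-1] - pos[1:]

    def mean_headway_from_density(density_veh_per_km):
        """Average spacing (m) implied by a density estimate: 1000 / density."""
        return 1000.0 / density_veh_per_km

    positions = [412.0, 385.5, 351.2, 330.0]      # hypothetical vehicle fronts (m)
    print(distance_headways(positions))           # direct measurement per pair
    print(mean_headway_from_density(25.0))        # 25 veh/km -> 40 m on average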

    Improvements and Applications of the Image-Based Distance Measuring System


    Autonomous navigation and mapping of mobile robots based on 2D/3D cameras combination

    Presently, intelligent autonomous systems are of great interest due to the growing demand for systems that support everyday living. Autonomous systems have been used in various settings such as houses, offices, museums, and factories. They are able to operate in several kinds of applications, such as cleaning, household assistance, transportation, security, education, and shop assistance, because they can be used to control processing time and to provide precise, reliable output. One research field of autonomous systems is mobile robot navigation and map generation: the mobile robot should work autonomously while generating a map of its environment, which it then uses to navigate.
The main issue is that the mobile robot has to explore an unknown environment and generate a three-dimensional map of it without any further reference information. The mobile robot has to estimate its position and pose within the map, and it must be able to find distinguishable objects; the selected sensors and registration algorithms therefore play a significant role. Sensors that can provide both depth and image data are still deficient. A new 3D sensor, the Photonic Mixer Device (PMD), captures the surrounding scene in real time at a high frame rate and delivers depth and grayscale data. However, a higher-quality three-dimensional exploration of the environment requires details and textures of surfaces, which can only be obtained from a high-resolution CCD camera. This work therefore presents mobile robot exploration using the combination of a CCD and a PMD camera in order to create a three-dimensional map of the environment. In addition, a high-performance algorithm for 3D mapping and pose estimation of the locomotion in real time, using the "Simultaneous Localization and Mapping" (SLAM) technique, is proposed. The autonomous mobile robot should also handle tasks such as the recognition of objects in its environment in order to achieve various practical missions. Visual input from the CCD camera not only delivers high-resolution texture data for the depth data, but is also used for object recognition. The "Iterative Closest Point" (ICP) algorithm uses two point clouds to determine the translation and rotation between two scans. Finally, the evaluation of the correspondences and the reconstruction of the map to resemble the real environment are included in this thesis.
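    The ICP step mentioned above can be illustrated with a minimal point-to-point ICP sketch in Python. This is a generic textbook formulation under stated assumptions (nearest-neighbour correspondences from a k-d tree, SVD-based rigid alignment), not the registration pipeline implemented in the thesis; iteration limits and tolerances are illustrative.

    # Minimal point-to-point ICP (illustrative, not the thesis implementation).
    # source, target: (N, d) NumPy arrays of roughly pre-aligned points.
    import numpy as np
    from scipy.spatial import cKDTree

    def best_fit_transform(src, dst):
        """Least-squares rigid transform (R, t) mapping src onto dst via SVD."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:        # correct an improper (reflected) solution
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = dst_c - R @ src_c
        return R, t

    def icp(source, target, max_iter=50, tol=1e-6):
        """Alternate nearest-neighbour matching and rigid re-alignment."""
        tree = cKDTree(target)
        src = source.copy()
        R_total = np.eye(source.shape[1])
        t_total = np.zeros(source.shape[1])
        prev_err = np.inf
        for _ in range(max_iter):
            dist, idx = tree.query(src)             # current correspondences
            R, t = best_fit_transform(src, target[idx])
            src = src @ R.T + t
            R_total, t_total = R @ R_total, R @ t_total + t
            err = dist.mean()
            if abs(prev_err - err) < tol:           # converged
                break
            prev_err = err
        return R_total, t_total

    The accumulated (R_total, t_total) gives the estimated motion between the two scans, which an SLAM back end could then use as an odometry constraint.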