720 research outputs found

    Real-Time Lane Region Detection Using a Combination of Geometrical and Image Features

    Over the past few decades, pavement markings have played a key role in intelligent vehicle applications such as guidance, navigation, and control. However, there are still serious issues facing the problem of lane marking detection. For example, problems include excessive processing time and false detection due to similarities in color and edges between traffic signs (channeling lines, stop lines, crosswalk, arrows, etc.). This paper proposes a strategy to extract the lane marking information taking into consideration its features such as color, edge, and width, as well as the vehicle speed. Firstly, defining the region of interest is a critical task to achieve real-time performance. In this sense, the region of interest is dependent on vehicle speed. Secondly, the lane markings are detected by using a hybrid color-edge feature method along with a probabilistic method, based on distance-color dependence and a hierarchical fitting model. Thirdly, the following lane marking information is extracted: the number of lane markings to both sides of the vehicle, the respective fitting model, and the centroid information of the lane. Using these parameters, the region is computed by using a road geometric model. To evaluate the proposed method, a set of consecutive frames was used in order to validate the performance.
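    A minimal sketch of two ingredients described above: a region of interest whose look-ahead grows with vehicle speed, and a hybrid colour/edge mask for candidate lane-marking pixels. The function names, HSV ranges, and the speed-to-ROI formula are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def speed_dependent_roi(frame, speed_mps, base_frac=0.55, gain=0.005):
    """Crop a lower-image ROI whose upper edge rises towards the horizon
    as speed increases, so markings are picked up earlier at high speed.
    base_frac and gain are illustrative tuning constants."""
    h, w = frame.shape[:2]
    top = int(h * max(0.35, base_frac - gain * speed_mps))
    return frame[top:h, 0:w], top

def lane_marking_mask(roi_bgr):
    """Hybrid colour + edge mask for white/yellow painted markings."""
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    white = cv2.inRange(hsv, (0, 0, 180), (180, 40, 255))
    yellow = cv2.inRange(hsv, (15, 60, 120), (35, 255, 255))
    color_mask = cv2.bitwise_or(white, yellow)

    gray = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.dilate(cv2.Canny(gray, 60, 150), np.ones((3, 3), np.uint8))

    # Keep only pixels that satisfy both the colour and the edge cue.
    return cv2.bitwise_and(color_mask, edges)

if __name__ == "__main__":
    frame = cv2.imread("dashcam_frame.png")      # any road-scene frame
    roi, top = speed_dependent_roi(frame, speed_mps=22.0)
    mask = lane_marking_mask(roi)
```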

    Perception and intelligent localization for autonomous driving

    Master's dissertation in Computer and Telematics Engineering. Computer vision and sensor fusion are relatively recent subjects, yet they are widely adopted in the development of autonomous robots that must adapt to their surrounding environment. This thesis approaches both in order to achieve perception in the scope of autonomous driving. The use of cameras to achieve this goal is a rather complex task. Unlike classic sensing devices, which deterministically provide the same type of precise information, the successive images acquired by a camera are filled with the most varied information, all of it ambiguous and extremely difficult to extract. Using cameras for robotic sensing is the closest we have come to the system of greatest importance in human perception, the vision system. Computer vision is a scientific discipline that encompasses areas such as signal processing, artificial intelligence, mathematics, control theory, neurobiology, and physics. The platform on which the study within this thesis was developed is ROTA (RObô Triciclo Autónomo), together with all the elements comprising its environment. In this context, the thesis describes the approaches introduced in the platform to address the challenges the robot faces in its environment: detection of lane markings and the resulting lane perception, detection of obstacles, traffic lights, the crosswalk zone, and the road maintenance area. It also describes a calibration system and the removal of the image perspective, developed to map the perceived elements to real-world distances. Building on the perception system, self-localization is also addressed, integrated in a distributed architecture that allows navigation with long-term planning. All the work developed in the course of this thesis is essentially focused on robotic perception in the context of autonomous driving.
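    The perspective-removal step mentioned above is essentially an inverse perspective mapping: a homography from the image to the road plane, estimated from a few calibrated ground-plane points. Below is a hedged sketch using OpenCV; the point correspondences, scale, and patch size are made-up calibration values, not the ones used on ROTA.

```python
import cv2
import numpy as np

# Four image points (pixels) of ground-plane landmarks and their known
# positions on the road plane in metres. These coordinates are
# illustrative placeholders, not calibration data from the thesis.
img_pts = np.float32([[420, 710], [860, 710], [760, 480], [520, 480]])
world_pts = np.float32([[-1.5, 3.0], [1.5, 3.0], [1.5, 12.0], [-1.5, 12.0]])

H = cv2.getPerspectiveTransform(img_pts, world_pts)   # image -> road plane

def pixel_to_ground(u, v):
    """Map one image pixel to (x, y) metres on the road plane."""
    p = np.float32([[[u, v]]])
    return cv2.perspectiveTransform(p, H)[0, 0]

# Bird's-eye view at 50 px per metre over a 10 m x 20 m patch ahead.
scale = 50.0
S = np.float32([[scale, 0, 5.0 * scale],   # shift x by +5 m, then scale
                [0, scale, 0],
                [0, 0, 1]])

def birds_eye(frame):
    """Warp the camera frame into a metric top-down view."""
    return cv2.warpPerspective(frame, S @ H, (int(10 * scale), int(20 * scale)))
```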

    Real-Time Navigation of Unmanned Vehicle based on Neural Networks Classification of Arrow Markings

    Since the 1970s, autonomous robots have been in widespread use, for deep-sea and space exploration as well as in almost all aircraft. Over the last decades, increasing interest has been recorded in the exploitation of unmanned vehicles in fields such as environmental monitoring, commercial air surveillance, domestic policing, geophysical surveys, disaster relief and victim localisation, scientific research, search and rescue operations, archaeology, maritime patrol, seabed mapping, traffic management, etc. Regardless of the domain (i.e., aerial, ground, or surface) to which they belong, the key elements that distinguish them as the leading edge of their technology are the provided degree of autonomy (i.e., the ability to make decisions without human intervention), their endurance, and the payload they can support. A complicated task that is a prerequisite for robotic missions is autonomous navigation. An autonomous mobile robot constructs a robust model of the environment (mapping), locates itself on the map (localization), governs the movement from one location to another (navigation), and accomplishes assigned tasks. Going a step further, autonomous and self-driving operation imposes a new research area in which vehicles can monitor road markings or traffic signs and take the proper decisions for their navigation in space. Image classification for real-time navigation is a latency-sensitive and resource-consuming task. In this thesis, we aim to navigate an unmanned vehicle on an unknown path by recognizing and following arrow markings and signs. For image recognition, a convolutional neural network (CNN) is used, trained on a dataset of arrow markings. For dataset creation, a Turtlebot 2 was used along with a Raspberry Pi camera. A benchmarking of convolutional neural networks, namely VGG-16 and VGG-19, is presented. Finally, real-time navigation experiments were executed with the Turtlebot 2 and the results are discussed.
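    The classification step can be illustrated with a short transfer-learning sketch: VGG-16 pretrained on ImageNet, with its last layer replaced for a small set of arrow-marking classes. The dataset layout, class set, and hyperparameters are assumptions for illustration, not the setup used in the thesis.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Illustrative dataset layout: an ImageFolder with one sub-directory per
# arrow class (e.g. left / right / straight); not the thesis' dataset.
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("arrow_dataset/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, len(train_set.classes))  # new output head

opt = torch.optim.SGD(model.classifier.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                  # short illustrative schedule
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```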

    Automated Extraction of Road Information from Mobile Laser Scanning Data

    Effective planning and management of transportation infrastructure requires adequate geospatial data. Existing geospatial data acquisition techniques based on conventional route surveys are very time-consuming, labor-intensive, and costly. Mobile laser scanning (MLS) technology enables a rapid collection of enormous volumes of highly dense, irregularly distributed, accurately geo-referenced three-dimensional (3D) point cloud data. Today, more and more commercial MLS systems are available for transportation applications. However, many transportation engineers neither have an interest in 3D point cloud data nor know how to transform such data into computer-aided design (CAD) formatted geometric road information. Therefore, automated methods and software tools for rapid and accurate extraction of 2D/3D road information from MLS data are urgently needed. This doctoral dissertation deals with the development and implementation aspects of a novel strategy for the automated extraction of road information from MLS data. The main features of this strategy include: (1) the extraction of road surfaces from large volumes of MLS point clouds, (2) the generation of 2D geo-referenced feature (GRF) images from the road-surface data, (3) the exploration of point density and intensity of MLS data for road-marking extraction, and (4) the extension of tensor voting (TV) for curvilinear pavement crack extraction. In accordance with this strategy, a RoadModeler prototype with three computerized algorithms was developed: (1) road-surface extraction, (2) road-marking extraction, and (3) pavement-crack extraction. Four main contributions of this development can be summarized as follows. Firstly, a curb-based approach to road surface extraction with the assistance of the vehicle’s trajectory is proposed and implemented. The vehicle’s trajectory and the function of curbs that separate road surfaces from sidewalks are used to efficiently separate road-surface points from large volumes of MLS data. The accuracy of the extracted road surfaces is validated with manually selected reference points. Secondly, the extracted road enables accurate detection of road markings and cracks for transportation-related applications in road traffic safety. To further improve computational efficiency, the extracted 3D road data are converted into 2D image data, termed a GRF image. The GRF image of the extracted road supports both an automated road-marking extraction algorithm and an automated crack-detection algorithm. Thirdly, the automated road-marking extraction algorithm applies a point-density-dependent, multi-thresholding segmentation to the GRF image to overcome unevenly distributed intensity caused by the scanning range, the incidence angle, and the surface characteristics of an illuminated object. A morphological operation is then applied to deal with noise and incompleteness in the extracted road markings. Fourthly, the automated crack extraction algorithm applies an iterative tensor voting (ITV) algorithm to the GRF image for crack enhancement. Tensor voting, a perceptual organization method capable of extracting curvilinear structures from a noisy and corrupted background, is explored and extended to the field of crack detection. The successful development of the three algorithms suggests that the RoadModeler strategy offers a solution to the automated extraction of road information from MLS data. Recommendations are given for future research and development to ensure that this progress moves beyond the prototype stage and towards everyday use.
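    As a rough illustration of steps (2) and (3) of this strategy, the sketch below rasterises road-surface points into a 2D geo-referenced intensity image and applies a block-wise, density-aware thresholding with a morphological clean-up. Cell size, block size, and the use of Otsu thresholding are assumptions standing in for the dissertation's point-density-dependent multi-thresholding.

```python
import numpy as np
import cv2

def grf_image(points, cell=0.05):
    """Rasterise road-surface points (x, y, intensity) into a 2D
    geo-referenced feature image; each cell stores the mean intensity.
    Cell size and the averaging rule are illustrative choices."""
    xy = points[:, :2]
    mins = xy.min(axis=0)
    idx = np.floor((xy - mins) / cell).astype(int)
    h, w = idx[:, 1].max() + 1, idx[:, 0].max() + 1
    acc = np.zeros((h, w)); cnt = np.zeros((h, w))
    np.add.at(acc, (idx[:, 1], idx[:, 0]), points[:, 2])
    np.add.at(cnt, (idx[:, 1], idx[:, 0]), 1)
    img = np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
    return (255 * img / max(img.max(), 1e-6)).astype(np.uint8), cnt

def extract_markings(img, cnt, block=256):
    """Block-wise (density-aware) Otsu thresholding plus morphological
    opening, loosely mirroring the multi-thresholding idea above."""
    out = np.zeros_like(img)
    for r in range(0, img.shape[0], block):
        for c in range(0, img.shape[1], block):
            if cnt[r:r+block, c:c+block].mean() < 1:   # too sparse: skip
                continue
            blk = img[r:r+block, c:c+block]
            _, t = cv2.threshold(blk, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            out[r:r+block, c:c+block] = t
    return cv2.morphologyEx(out, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
```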

    Lane and Road Marking Detection with a High Resolution Automotive Radar for Automated Driving

    The automotive industry is currently undergoing an unprecedented transformation, in which driver assistance and automated driving play a decisive role. An automated driving system essentially comprises three steps: perception and modelling of the environment, path planning, and vehicle control. With good perception and modelling of the environment, a vehicle can successfully carry out functions such as adaptive cruise control, emergency braking assistance, lane change assistance, and so on. For driving functions that must recognize the lanes, camera sensors are currently used without exception. However, video cameras can be severely impaired by changing light conditions, insufficient illumination, or obstructed visibility, for example due to fog. To compensate for these disadvantages, this doctoral thesis develops a "radar-detectable" road marking with which the vehicle can recognize the lanes under all lighting conditions; radars already installed in the vehicle can be used for this purpose. Today's road markings can be captured very well with camera sensors, but because of the insufficient backscattering properties of existing road markings for radar waves, they are not detected by radar. To address this, this work investigates the backscattering properties of different reflector types, both through simulations and through practical measurements, and proposes a reflector type that is suitable for incorporation into today's road markings or even for direct installation in the roadway. A further focus of this doctoral thesis is the use of artificial intelligence (AI) to detect and classify the lanes with radar as well. The recorded radar data are analysed by means of semantic segmentation, detecting lane courses as well as free space. At the same time, the potential of AI-based environment understanding with imaging radar data is demonstrated.
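    The semantic segmentation of radar data mentioned above could, in principle, be approached with a small encoder-decoder network over radar grid maps. The sketch below is only a schematic stand-in (channel counts, class set, and grid size are assumptions), not the network used in the thesis.

```python
import torch
import torch.nn as nn

class RadarSegNet(nn.Module):
    """Tiny encoder-decoder for per-cell classification of radar grid
    maps (e.g. background / free space / radar-detectable marking).
    Channel counts and the 3-class setup are illustrative assumptions."""
    def __init__(self, in_ch=1, n_classes=3):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

# One forward pass on a dummy 1-channel 256x256 radar intensity grid.
net = RadarSegNet()
logits = net(torch.zeros(1, 1, 256, 256))     # -> (1, 3, 256, 256)
```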

    LIDAR-Based road signs detection For Vehicle Localization in an HD Map

    Self-vehicle localization is one of the fundamental tasks for autonomous driving. Most current techniques for global positioning are based on the use of GNSS (Global Navigation Satellite Systems). However, these solutions do not provide a localization accuracy that is better than 2-3 m in open sky environments [1]. Alternatively, the use of maps has been widely investigated for localization, since maps can be pre-built very accurately. State-of-the-art approaches often use dense maps or feature maps for localization. In this paper, we propose a road sign perception system for vehicle localization within a third-party map. This is challenging since third-party maps are usually provided with sparse geometric features, which makes localization more difficult than with dense maps. The proposed approach extends the work in [2], where a localization system based on lane markings was developed. Experiments have been conducted on a highway-like test track using GNSS/INS with RTK corrections as ground truth (GT). Error evaluations are given as cross-track and along-track errors defined in curvilinear coordinates [3] relative to the map.
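    Cross-track and along-track errors in curvilinear (map-relative) coordinates can be computed by projecting positions onto a reference path, as in the hedged sketch below. The projection-onto-polyline approach and the function interface are illustrative assumptions, not the formulation of [3].

```python
import numpy as np

def curvilinear_errors(path_xy, p_est, p_true):
    """Cross-track / along-track decomposition of a localization error.

    path_xy : (N, 2) polyline of the reference path (e.g. lane centreline)
    p_est, p_true : 2D positions of the estimate and the RTK ground truth.
    Projection onto the nearest path segment is an illustrative choice.
    """
    def project(p):
        best = (np.inf, 0.0, None)        # (distance, arc length s, tangent)
        s0 = 0.0
        for a, b in zip(path_xy[:-1], path_xy[1:]):
            d = b - a
            L = np.linalg.norm(d)
            if L == 0:
                continue
            t = np.clip(np.dot(p - a, d) / (L * L), 0.0, 1.0)
            q = a + t * d
            dist = np.linalg.norm(p - q)
            if dist < best[0]:
                best = (dist, s0 + t * L, d / L)
            s0 += L
        return best

    _, s_est, _ = project(np.asarray(p_est, float))
    _, s_true, tangent = project(np.asarray(p_true, float))
    normal = np.array([-tangent[1], tangent[0]])          # left-hand normal
    err = np.asarray(p_est, float) - np.asarray(p_true, float)
    return float(np.dot(err, normal)), float(s_est - s_true)  # cross, along
```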

    Recognizing Features in Mobile Laser Scanning Point Clouds Towards 3D High-definition Road Maps for Autonomous Vehicles

    The sensors mounted on a driverless vehicle are not always reliable for precise localization and navigation. By comparing real-time sensory data with an a priori map, the autonomous navigation system can transform the complicated sensor perception mission into a simple map-based localization task. However, the lack of robust solutions and standards for creating such lane-level high-definition road maps is a major challenge in this emerging field. This thesis presents a semi-automated method for extracting meaningful road features from mobile laser scanning (MLS) point clouds and creating 3D high-definition road maps for autonomous vehicles. After pre-processing steps including coordinate system transformation and non-ground point removal, a road edge detection algorithm is performed to distinguish road curbs and extract road surfaces, followed by the extraction of two categories of road markings. On the one hand, textual and directional road markings including arrows, symbols, and words are detected by intensity thresholding and conditional Euclidean clustering. On the other hand, lane markings (lines) are extracted by local intensity analysis and distance thresholding according to road design standards. Afterwards, centerline points in each lane are estimated based on the position of the extracted lane markings. Ultimately, 3D road maps with precise road boundaries, road markings, and the estimated lane centerlines are created. The experimental results demonstrate the feasibility of the proposed method, which can accurately extract most road features from the MLS point clouds. The average recall, precision, and F1-score obtained from four datasets for road marking extraction are 93.87%, 93.76%, and 93.73%, respectively. All of the estimated lane centerlines are validated using the “ground truthing” data manually digitized from 4 cm resolution UAV orthoimages. A comparison study shows that the proposed method outperforms several existing methods.
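    A hedged sketch of the symbol-marking step: keep high-intensity road-surface points and group them into candidate arrows, symbols, or words. DBSCAN is used here as a stand-in for the conditional Euclidean clustering in the thesis, and all thresholds are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def extract_symbol_markings(road_pts, intensity_thresh=0.6, eps=0.10, min_pts=50):
    """Group high-intensity road-surface points into candidate arrow /
    symbol / word markings.

    road_pts : (N, 4) array of x, y, z, normalised intensity in [0, 1].
    Returns a list of (M_i, 4) arrays, one per candidate marking.
    """
    bright = road_pts[road_pts[:, 3] > intensity_thresh]
    if len(bright) == 0:
        return []
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(bright[:, :2])
    return [bright[labels == k] for k in set(labels) if k != -1]

def marking_scores(tp, fp, fn):
    """Recall, precision, and F1 from marking-level match counts,
    as used for the evaluation figures reported above."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return recall, precision, f1
```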

    A magnetometer based payload for a PTOL UAV with application in geophysical surveys

    Applying the principles of physics to studying the Earth has given rise to the field of geophysics, which has been recognised as a separate discipline since the 19th century. The practical implementation of this field has led to a separate branch, aptly named exploration geophysics. Exploration geophysics aims to measure various naturally occurring phenomena associated with the Earth in order to make predictions about what might lie beneath the Earth’s surface. One of the fundamental phenomena associated with the Earth is the magnetic field, or geomagnetic field. By localising magnetic anomalies within the geomagnetic field, one can make predictions or inferences about the localised geophysical makeup and potential ore bodies, hydrocarbon deposits, or archaeological artefacts that might exist below the surface. The fundamental sensor used to perform these surveys is the magnetometer. The concept of an unmanned aerial vehicle (UAV) has been around since 1915, with the first manufactured UAV appearing in 1916. Subsequent to the realisation of the UAV in the 1950s by Ryan Aeronautical for military reconnaissance, the idea of using UAV platforms to perform dull, dirty, and dangerous functions has become commonplace in the military environment. The first practical use of a UAV came in the 1991 Gulf War. The subsequent appearance of UAVs in the civilian realm can largely be attributed to the advent of low-cost, high-power-density, lithium-based batteries in the 1990s and the growth of the radio-controlled (RC) hobbyist market.

    Airborne chemical sensing with mobile robots

    Airborne chemical sensing with mobile robots has been an active research area since the beginning of the 1990s. This article presents a review of research work in this field, including gas distribution mapping, trail guidance, and the different subtasks of gas source localisation. Due to the difficulty of modelling gas distribution in a real world environment with currently available simulation techniques, we focus largely on experimental work and do not consider publications that are purely based on simulations.