21 research outputs found

    Perception de la géométrie de l'environnement pour la navigation autonome

    The goal of mobile-robotics research is to give robots the capability to accomplish missions in an environment that is not perfectly known. To accomplish its mission, the robot needs to execute a given set of elementary actions (movement, manipulation of objects...) which require an accurate localisation of the robot, as well as the construction of a good geometric model of the environment. The robot therefore needs to make the most of its own sensors, of external sensors, of information coming from other robots, and of existing models, for example from a Geographic Information System. The common information is the geometry of the environment. The first part of the manuscript covers the different methods for extracting geometric information. The second part presents the creation of a geometric model using a graph structure, along with a method for retrieving information from the graph to allow the robot to localise itself in the environment.
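The abstract does not detail how the geometric model supports localisation, so as a generic illustration of how known landmark geometry lets a robot fix its own position, here is a minimal least-squares trilateration sketch. The landmark names, coordinates, and the linearisation are illustrative assumptions, not the thesis's method:

```python
# Hypothetical landmark map: node -> (x, y) position in the map frame.
# Names, coordinates, and the trilateration below are illustrative only.
landmarks = {"A": (0.0, 0.0), "B": (4.0, 0.0), "C": (0.0, 3.0)}

def localise(observed):
    """Estimate the robot position from ranges {name: distance} to at
    least three known landmarks, by linearised least-squares trilateration."""
    names = list(observed)
    (x0, y0), r0 = landmarks[names[0]], observed[names[0]]
    # Subtracting the first range equation from the others removes the
    # quadratic terms, leaving a linear system in (x, y).
    rows, rhs = [], []
    for n in names[1:]:
        (xi, yi), ri = landmarks[n], observed[n]
        rows.append((2.0 * (xi - x0), 2.0 * (yi - y0)))
        rhs.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # Solve the 2x2 normal equations A^T A p = A^T b by hand.
    a11 = sum(ax * ax for ax, _ in rows)
    a12 = sum(ax * ay for ax, ay in rows)
    a22 = sum(ay * ay for _, ay in rows)
    b1 = sum(ax * bi for (ax, _), bi in zip(rows, rhs))
    b2 = sum(ay * bi for (_, ay), bi in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

With ranges measured from a robot standing at (1, 1), `localise` recovers that position exactly.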

    LiDAR-Based Object Tracking and Shape Estimation

    Environment perception is a fundamental prerequisite for the safe and comfortable operation of automated vehicles. Moving traffic participants in the immediate vicinity of the vehicle in particular have a large influence on the choice of an appropriate driving strategy. This calls for an object perception system that provides robust and precise estimates of the motion and geometry of surrounding vehicles. In the context of automated driving, the box geometry model has established itself over time as a quasi-standard. However, given the ever-increasing demands on perception systems, the box now often represents an undesirably coarse approximation of the actual geometry of other traffic participants, which motivates a transition to more accurate shape representations. This thesis therefore presents a probabilistic method for the simultaneous estimation of rigid object shape and motion from the measurements of a LiDAR sensor. A comparison of three free-form geometry models with different levels of detail (polyline, triangle mesh, and surfel map) against the simple box model shows that reducing modelling errors in the object geometry enables more robust and precise estimation of object states. Moreover, automated driving functions such as parking or evasion assistants can benefit from more accurate knowledge of the shape of other objects. Two factors should chiefly govern the selection of an appropriate shape representation: observability (what level of detail does the sensor specification theoretically permit?) and model adequacy (how well does the given model explain the actual observations?). Based on these factors, this thesis presents a model-selection strategy that adaptively determines the best-suited shape model at runtime. While the majority of LiDAR-based object tracking algorithms rely exclusively on point measurements, this thesis proposes two additional types of measurements: information about the measured free space is used to reason about regions that cannot be occupied by object geometry, and LiDAR intensities are incorporated to detect and track salient features such as license plates and retroreflectors over time. An extensive evaluation on more than 1.5 hours of recorded trajectories of other vehicles in urban and highway scenarios shows that precise modelling of the object surface can improve motion estimation by up to 30-40%. Furthermore, it is shown that the proposed methods can generate consistent and highly precise reconstructions of object geometries, avoiding the often significant over-approximation incurred by the simple box model.
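The box baseline that the work above compares against can be sketched in a few lines. The following is a generic PCA-based oriented-box fit to a 2-D LiDAR cluster, a common baseline assumed here for illustration, not the thesis's probabilistic estimator:

```python
import numpy as np

def fit_oriented_box(points):
    """Fit a 2-D oriented bounding box to a LiDAR point cluster via PCA.
    Returns (centre, heading, length, width) — a minimal stand-in for the
    box model that free-form shape representations are compared against."""
    pts = np.asarray(points, dtype=float)
    centre = pts.mean(axis=0)
    centred = pts - centre
    # Principal axes of the cluster give the box orientation.
    cov = np.cov(centred.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]          # longest axis
    heading = np.arctan2(major[1], major[0])
    # Rotate points into the box frame and read off the extents.
    rot = np.array([[np.cos(-heading), -np.sin(-heading)],
                    [np.sin(-heading),  np.cos(-heading)]])
    local = centred @ rot.T
    length = local[:, 0].max() - local[:, 0].min()
    width = local[:, 1].max() - local[:, 1].min()
    return centre, heading, length, width
```

Feeding the corners of a rotated 4 m × 2 m rectangle recovers its extents; the over-approximation criticised above appears as soon as the true contour is not rectangular.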

    Contemporary Robotics

    This book is a collection of 18 chapters written by internationally recognized experts and well-known professionals of the field. The chapters contribute to diverse facets of contemporary robotics and autonomous systems, and the volume is organized in four thematic parts according to the main subjects. The first part is devoted to theoretical issues, including the development of algorithms for automatic trajectory generation using a redundancy-resolution scheme, intelligent algorithms for robotic grasping, a modelling approach for reactive mode handling in flexible manufacturing, and the design of an advanced controller for robot manipulators. The second part deals with different aspects of robot calibration and sensing, including geometric and threshold calibration of a multiple-robot line-vision system, robot-based inline 2D/3D quality monitoring using picture-giving and laser triangulation, and a study on prospective polymer composite materials for flexible tactile sensors. The third part addresses mobile robots and multi-agent systems, including SLAM of mobile robots based on fusion of odometry and visual data, configuration of a localization system by a team of mobile robots, development of a generic real-time motion controller for differential mobile robots, control of fuel cells for mobile robots, modelling of omni-directional wheeled robots, building of a hunter-hybrid tracking environment, and the design of cooperative control in a distributed population-based multi-agent approach. The fourth part presents recent approaches and results in humanoid and bio-inspired robotics: adaptive control of anthropomorphic biped gait, dynamics-based simulation of humanoid robot walking, a controller for the perceptual-motor control dynamics of humans, and a biomimetic approach to controlling mechatronic structures using smart materials.

    Visual attention and swarm cognition for off-road robots

    Doctoral thesis, Informatics (Informatics Engineering), Universidade de Lisboa, Faculdade de Ciências, 2011. This thesis addresses the problem of modelling visual attention in the context of autonomous off-road robots. The purpose of using visual attention mechanisms is to focus perception on the aspects of the environment most relevant to the robot's task. The thesis shows that, for obstacle and trail detection, this capability promotes robustness and computational parsimony, key characteristics for the speed and efficiency of off-road robots. One of the major challenges in modelling visual attention stems from the need to manage the speed-accuracy trade-off in the presence of context or task variations. The thesis shows that this trade-off is resolved if the visual attention process is modelled as a self-organised process whose operation is modulated by the action-selection module responsible for controlling the robot. By closing the loop between action selection and perception, the latter is able to operate only where necessary, anticipating the robot's actions. To endow visual attention with self-organising properties, this work draws inspiration from nature. Specifically, the mechanisms underlying army ants' ability to forage in a self-organised way are used as a metaphor for the task of searching, also in a self-organised way, for obstacles and trails in the robot's visual field. The solution proposed in the thesis is to deploy several covert attention foci operating as a swarm through pheromone-based interactions. This work represents the first embodied realisation of swarm cognition, a new field of research that seeks to uncover the basic principles of cognition by inspecting the self-organised properties of the collective intelligence exhibited by social insects. Hence, the thesis contributes to robotics as an engineering discipline and to robotics as a modelling discipline, capable of supporting the study of adaptive behaviour. Fundação para a Ciência e a Tecnologia (FCT, SFRH/BD/27305/2006); Laboratory of Agent Modelling (LabMag)
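As a rough illustration of covert attention foci operating as a pheromone-coupled swarm over the visual field, the following toy simulation moves a few foci on a saliency map, each greedily climbing the combined saliency-plus-pheromone field. The grid size, weights, and deposit rule are invented for this sketch and are not the thesis's model:

```python
import numpy as np

# Toy saliency map with one conspicuous region (e.g. an obstacle).
H = W = 15
ys, xs = np.mgrid[0:H, 0:W]
saliency = np.exp(-((ys - 4) ** 2 + (xs - 11) ** 2) / 8.0)
pheromone = np.zeros((H, W))
foci = [(12, 2), (7, 7), (1, 1)]           # covert attention foci (row, col)

def step(foci, evaporation=0.5, w=0.05):
    """One swarm update: each focus greedily climbs the combined
    saliency + pheromone field, then marks its cell with pheromone."""
    global pheromone
    field = saliency + w * pheromone
    new_foci = []
    for r, c in foci:
        neigh = [(r, c)] + [(r + dr, c + dc)
                            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                            if 0 <= r + dr < H and 0 <= c + dc < W]
        new_foci.append(max(neigh, key=lambda p: field[p]))
    pheromone *= evaporation               # stigmergic decay
    for r, c in new_foci:
        pheromone[r, c] += saliency[r, c]  # deposit where something was found
    return new_foci

for _ in range(40):
    foci = step(foci)                      # foci hill-climb toward the salient region
```

The pheromone term makes cells visited by successful foci more attractive to the others, a crude stand-in for the recruitment dynamics of foraging ants.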

    Condensing a priori data for recognition based augmented reality

    My research proposes novel methods to reduce the cardinality of a priori data used in recognition-based augmented reality, whilst retaining distinctive and persistent features in the sets. This research will help reduce latency and increase accuracy in recognition-based pose estimation systems, thus improving the user experience for augmented reality applications.
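One generic way to reduce the cardinality of a feature set while keeping its most distinctive members is greedy farthest-point selection in descriptor space; this is assumed purely for illustration, since the abstract does not describe the actual condensation method:

```python
import numpy as np

def condense(descriptors, k):
    """Keep k descriptors that are mutually most distinctive (far apart in
    descriptor space), discarding near-duplicates via greedy farthest-point
    selection. A generic stand-in for cardinality reduction, not the thesis method."""
    desc = np.asarray(descriptors, dtype=float)
    chosen = [0]                                  # seed with the first descriptor
    d = np.linalg.norm(desc - desc[0], axis=1)    # distance to the kept set
    while len(chosen) < k:
        nxt = int(np.argmax(d))                   # farthest from everything kept
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(desc - desc[nxt], axis=1))
    return chosen
```

Given four descriptors where the second is a near-duplicate of the first, condensing to three keeps the three well-separated ones, which is exactly the latency/accuracy trade the abstract targets: fewer candidates to match, each more distinctive.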

    Hybrid Marker-less Camera Pose Tracking with Integrated Sensor Fusion

    This thesis presents a framework for a hybrid model-free marker-less inertial-visual camera pose tracking with an integrated sensor fusion mechanism. The proposed solution addresses the fundamental problem of pose recovery in computer vision and robotics and provides an improved solution for wide-area pose tracking that can be used on mobile platforms and in real-time applications. In order to arrive at a suitable pose tracking algorithm, an in-depth investigation was conducted into current methods and sensors used for pose tracking. Preliminary experiments were then carried out on hybrid GPS-Visual as well as wireless micro-location tracking in order to evaluate their suitability for camera tracking in wide-area or GPS-denied environments. As a result of this investigation a combination of an inertial measurement unit and a camera was chosen as the primary sensory inputs for a hybrid camera tracking system. After following a thorough modelling and mathematical formulation process, a novel and improved hybrid tracking framework was designed, developed and evaluated. The resulting system incorporates an inertial system, a vision-based system and a recursive particle filtering-based stochastic data fusion and state estimation algorithm. The core of the algorithm is a state-space model for motion kinematics which, combined with the principles of multi-view camera geometry and the properties of optical flow and focus of expansion, form the main components of the proposed framework. The proposed solution incorporates a monitoring system, which decides on the best method of tracking at any given time based on the reliability of the fresh vision data provided by the vision-based system, and automatically switches between visual and inertial tracking as and when necessary. The system also includes a novel and effective self-adjusting mechanism, which detects when the newly captured sensory data can be reliably used to correct the past pose estimates. 
The corrected state is then propagated through to the current time in order to prevent sudden pose-estimation errors manifesting as a permanent drift in the tracking output. Following the design stage, the complete system was fully developed and then evaluated using both synthetic and real data. The outcome shows improved performance compared to existing techniques such as PTAM and SLAM. The low computational cost of the algorithm enables its application on mobile devices, while the integrated self-monitoring and self-adjusting mechanisms allow for its potential use in wide-area tracking applications.
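The recursive particle-filter fusion at the core of such a framework can be illustrated with a 1-D toy: inertial measurements propagate the particles, and occasional visual position fixes reweight and resample them. The state layout, noise levels, and resampling scheme below are illustrative assumptions, not the thesis's formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000                                  # particle count
# State per particle: [position, velocity]; a 1-D stand-in for the 6-DoF pose.
particles = np.zeros((N, 2))
particles[:, 1] = rng.normal(1.0, 0.5, N)     # uncertain initial velocity

def predict(dt, accel, accel_noise=0.2):
    """Inertial propagation: integrate a (noisy) acceleration measurement."""
    a = accel + rng.normal(0.0, accel_noise, N)
    particles[:, 0] += particles[:, 1] * dt + 0.5 * a * dt**2
    particles[:, 1] += a * dt

def correct(z, meas_noise=0.1):
    """Visual update: weight particles by the position fix, then resample."""
    w = np.exp(-0.5 * ((particles[:, 0] - z) / meas_noise) ** 2)
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)          # multinomial resampling
    particles[:] = particles[idx]

# Ground truth: start at 0, constant velocity 1 m/s, zero acceleration.
for t in range(1, 11):
    predict(dt=1.0, accel=0.0)
    correct(z=float(t) + rng.normal(0.0, 0.1))   # noisy visual fix at pos = t

estimate = particles[:, 0].mean()
```

A monitoring layer like the one described above would simply skip `correct` whenever the vision data is judged unreliable, leaving the inertial `predict` to carry the state forward.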

    Structural Health Monitoring using Unmanned Aerial Systems

    The use of Structural Health Monitoring (SHM) techniques is paramount to the safety and longevity of structures. Many fields use this approach to monitor the performance of a system over time in order to determine the proper timing and funding for repair and replacement. The monitoring of these systems includes nondestructive testing (NDT) techniques and sensors permanently installed on the structure, and can also rely heavily on visual inspection. Visual inspection is widely used due to the level of trust owners place in the inspection personnel; however, it is time consuming, expensive, and relies heavily on the experience of the inspectors. For these reasons, rapid data-acquisition platforms must be developed using remote sensing systems to collect, process, and display data to decision makers quickly, so that they can make well-informed decisions based on quantitative data, or provide information for further targeted inspection with a contact technique. The proposed multirotor Unmanned Aerial System (UAS) platform carries a multispectral imaging payload to collect data and serves as another tool in the SHM cycle. Several demonstrations were performed in a laboratory setting using UAS-acquired imagery for identification and measurement of structures. Outdoor validation was completed using a simulated bridge deck and ground-based setups on in-service structures. Finally, static laboratory measurements were obtained using multispectral patterns to obtain the multiscale deformation measurements that will be required for use on a UAS. The novel multiscale, multispectral image analysis using UAS-acquired imagery demonstrates the value of the remote sensing system as a nondestructive testing platform and tool for SHM. Ph.D., Mechanical Engineering and Mechanics -- Drexel University, 201

    Multi-task near-field perception for autonomous driving using surround-view fisheye cameras

    The formation of eyes led to the big bang of evolution. The dynamics changed from a primitive organism waiting for food to come into contact with it, to an organism that actively seeks food using visual sensors. The human eye is one of the most sophisticated developments of evolution, but it still has defects. Over millions of years, humans have evolved a biological perception algorithm capable of driving cars, operating machinery, piloting aircraft, and navigating ships. Automating these capabilities for computers is critical for various applications, including self-driving cars, augmented reality, and architectural surveying. Near-field visual perception in the context of self-driving cars perceives the environment in a range of 0 - 10 meters with 360° coverage around the vehicle, and is a critical decision-making component in the development of safer automated driving. Recent advances in computer vision and deep learning, in conjunction with high-quality sensors such as cameras and LiDARs, have fueled mature visual perception solutions. Until now, far-field perception has been the primary focus. Another significant issue is the limited processing power available for developing real-time applications; because of this bottleneck, there is frequently a trade-off between performance and run-time efficiency. To address these issues, we concentrate on: 1) developing near-field perception algorithms with high performance and low computational complexity for various visual perception tasks, such as geometric and semantic tasks, using convolutional neural networks; and 2) using multi-task learning to overcome computational bottlenecks by sharing initial convolutional layers between tasks, and developing optimization strategies that balance the tasks.
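The shared-layer idea in point 2 can be sketched with a toy two-head network: one shared feature extractor feeds a geometric (depth) head and a semantic (segmentation) head, so the expensive early computation is paid once per image, with a fixed-weight loss standing in for the balancing strategies mentioned. All shapes, weights, and names are illustrative, not the thesis's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-task network: a single shared "encoder" layer feeds two task heads.
D_IN, D_SHARED = 64, 32
W_shared = rng.normal(0, 0.1, (D_SHARED, D_IN))
W_depth = rng.normal(0, 0.1, (1, D_SHARED))       # geometric task head
W_seg = rng.normal(0, 0.1, (5, D_SHARED))         # semantic task head (5 classes)

def forward(x):
    feat = np.maximum(0.0, W_shared @ x)          # shared features (ReLU)
    depth = W_depth @ feat                        # regression output
    seg_logits = W_seg @ feat                     # classification logits
    return depth, seg_logits

def total_loss(depth, depth_gt, seg_logits, seg_gt, w_depth=1.0, w_seg=0.5):
    """Fixed-weight task balancing; adaptive schemes would tune w_* during training."""
    l_depth = float(np.mean((depth - depth_gt) ** 2))     # MSE for geometry
    p = np.exp(seg_logits - seg_logits.max())             # stable softmax
    p /= p.sum()
    l_seg = float(-np.log(p[seg_gt]))                     # cross-entropy for semantics
    return w_depth * l_depth + w_seg * l_seg
```

Because both heads read the same `feat`, adding a task costs only its (small) head, which is the run-time argument for multi-task learning on embedded automotive hardware.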

    Event-based neuromorphic stereo vision


    Underwater Vehicles

    Over the last twenty to thirty years, a significant number of AUVs have been created to address a wide spectrum of scientific and applied tasks in ocean development and research. In that short period, AUVs have demonstrated their efficiency in complex search and inspection work and have opened a number of important new applications. Initially, information about AUVs was mainly of a review-advertising character, but more attention is now paid to practical achievements, problems, and systems technologies. AUVs are losing their prototype status and have become a fully operational, reliable and effective tool; modern multi-purpose AUVs represent a new class of underwater robotic objects with their own tasks and practical applications, particular features of technology, system structure and functional properties.