108 research outputs found

    Method of on-road vehicle tracking

    Get PDF

    High-performance hardware accelerators for image processing in space applications

    Get PDF
    Mars is a hard place to reach. While there have been many notable successes in getting probes to the Red Planet, the historical record is full of bad news, and the success rate for actually landing on the Martian surface is even worse, roughly 30%. This low success rate is mainly due to the characteristics of the Martian environment. Strong winds frequently blow in the Martian atmosphere, often diverting the lander's descent trajectory from its target. Moreover, the Martian surface offers few safe landing sites: it is pitted with numerous, closely spaced craters and large rocks, and characterized by huge mountains and hills (e.g., Olympus Mons is 648 km in diameter and 27 km tall). For these reasons, a mission failure caused by landing in a large crater, on big rocks, or on steeply sloped terrain is highly probable. In recent years, all space agencies have increased their research efforts to enhance the success rate of Mars missions. In particular, the two hottest research topics are active debris removal and guided landing on Mars. The former aims at finding new methods to remove space debris using unmanned spacecraft, which must be able to autonomously detect a piece of debris, analyze it to extract its characteristics in terms of weight, speed, and dimensions, and eventually rendezvous with it. To perform these tasks, the spacecraft must have strong vision capabilities: it must be able to take pictures and process them with very complex image processing algorithms in order to detect, track, and analyze the debris. The latter aims at increasing the landing-point precision (i.e., shrinking the landing ellipse) on Mars. Future space missions will increasingly adopt video-based navigation systems to assist the entry, descent, and landing (EDL) phase of space modules (e.g., spacecraft), enhancing the precision of automatic EDL navigation systems.
For instance, recent space exploration missions, e.g., Spirit, Opportunity, and Curiosity, used an EDL procedure aimed at following a fixed, precomputed descent trajectory to reach a precise landing point. This approach guarantees a landing-point precision of at most 20 km. Comparing this figure with the characteristics of the Martian environment makes it clear that the mission failure probability remains very high. A very challenging problem is to design an autonomous guided EDL system able to further reduce the landing ellipse, guaranteeing avoidance of dangerous areas of the Martian surface (e.g., large craters or big rocks) that could lead to mission failure. The autonomous behaviour of the system is mandatory, since a manually driven approach is not feasible due to the distance between Earth and Mars. Since this distance varies from roughly 56 to 100 million km owing to orbital eccentricity, even with signal transmission at the speed of light the best-case transmission time would be around 31 minutes, exceeding the overall duration of the EDL phase. In both applications, algorithms must guarantee self-adaptability to the environmental conditions. Since the harsh conditions of Mars (and of space in general) are difficult to predict at design time, these algorithms must be able to automatically tune their internal parameters depending on the current conditions. Moreover, real-time performance is another key factor. Since a software implementation of these computationally intensive tasks cannot reach the required performance, these algorithms must be accelerated in hardware. For these reasons, this thesis presents my research work on advanced image processing algorithms for space applications and the associated hardware accelerators. My research activity has focused on both the algorithms and their hardware implementations.
Concerning the first aspect, I mainly focused my research effort on integrating self-adaptability features into existing algorithms. Concerning the second, I studied and validated a methodology to efficiently develop, verify, and validate hardware components aimed at accelerating video-based applications. This approach allowed me to develop and test high-performance hardware accelerators that strongly outperform the current state-of-the-art implementations. The thesis is organized in four main chapters. Chapter 2 starts with a brief introduction to the history of digital image processing. The main content of this chapter is a description of space missions in which digital image processing plays a key role. A major effort has been spent on the missions in which my research activity has a substantial impact; in particular, for these missions, this chapter deeply analyzes and evaluates the state-of-the-art approaches and algorithms. Chapter 3 analyzes and compares the two technologies used to implement high-performance hardware accelerators, i.e., Application-Specific Integrated Circuits (ASICs) and Field Programmable Gate Arrays (FPGAs). This information helps the reader understand the main reasons behind the decision of space agencies to exploit FPGAs instead of ASICs for high-performance hardware accelerators in space missions, even though FPGAs are more sensitive to Single Event Upsets (SEUs, i.e., transient errors induced in hardware components by alpha particles and solar radiation in space). Moreover, this chapter describes in depth the three available space-grade FPGA technologies (i.e., one-time programmable, flash-based, and SRAM-based) and the main fault-mitigation techniques against SEUs that are mandatory for employing space-grade FPGAs in actual missions. Chapter 4 describes one of the main contributions of my research work: a library of high-performance hardware accelerators for image processing in space applications.
The basic idea behind this library is to offer designers a set of validated hardware components able to strongly speed up the basic image processing operations commonly used in an image processing chain. In other words, these components can be used directly as elementary building blocks to easily create a complex image processing system, without wasting time on debugging and validation. The library groups the proposed hardware accelerators into IP-core families. The components in the same family share the same functionality and input/output interface. This harmonization of the I/O interface makes it possible to substitute components of the same family inside a complex image processing system without modifying the system communication infrastructure. In addition to the analysis of the internal architecture of the proposed components, another important aspect of this chapter is the methodology used to develop, verify, and validate the proposed high-performance image processing hardware accelerators. This methodology involves the use of different programming and hardware description languages in order to support the designer from algorithm modelling up to hardware implementation and validation. Chapter 5 presents the proposed complex image processing systems. In particular, it uses a set of actual case studies, associated with the most recent space agency needs, to show how the hardware accelerator components can be assembled to build a complex image processing system. In addition to the hardware accelerators contained in the library, the described complex systems embed innovative ad-hoc hardware components and software routines able to provide high-performance and self-adaptable image processing functionalities.
To prove the benefits of the proposed methodology, each case study concludes with a comparison against current state-of-the-art implementations, highlighting the benefits in terms of performance and self-adaptability to the environmental conditions.
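The drop-in interchangeability of components within an IP-core family can be illustrated with a minimal software analogy (the class and method names here are hypothetical illustrations, not the thesis's actual hardware interface): any two filters that honour the same windowed input/output contract can be swapped without touching the surrounding pipeline.

```python
from abc import ABC, abstractmethod

class WindowFilter(ABC):
    """Common contract shared by one 'family' of filters:
    same input (a 3x3 pixel window) and same output (one pixel)."""
    @abstractmethod
    def process(self, window):
        ...

class MedianFilter(WindowFilter):
    def process(self, window):
        flat = sorted(p for row in window for p in row)
        return flat[len(flat) // 2]

class AverageFilter(WindowFilter):
    def process(self, window):
        flat = [p for row in window for p in row]
        return sum(flat) // len(flat)

def run_pipeline(filt: WindowFilter, windows):
    # The pipeline depends only on the shared interface, so any
    # member of the family can be substituted without changes here.
    return [filt.process(w) for w in windows]

windows = [[[9, 2, 3], [4, 5, 6], [7, 8, 100]]]
print(run_pipeline(MedianFilter(), windows))   # -> [6]
print(run_pipeline(AverageFilter(), windows))  # -> [16]
```

In hardware terms the same role is played by a fixed port list and streaming protocol; keeping it identical across a family is what allows one accelerator to replace another without rewiring the interconnect.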

    Co-design of architectures and algorithms for model-based mobile robot localization and obstacle detection

    No full text
    This thesis proposes SoPC (System on a Programmable Chip) architectures for efficient embedding of vision-based localization and obstacle detection tasks in the navigation pipeline of autonomous mobile robots. The obtained results are equivalent to or better than the state of the art. For localization, an efficient hardware architecture that supports EKF-SLAM's local map management with seven-dimensional landmarks in real time is developed. For obstacle detection, a novel method of object recognition is proposed: a detection-by-identification framework based on a single detection-window scale. This framework allows adequate algorithmic precision and execution speed on embedded hardware platforms.
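The EKF-SLAM machinery mentioned above centres on a predict/update cycle over a state vector holding the robot pose and landmark estimates. A minimal sketch of the generic EKF measurement update (an illustration of the general technique, not the thesis's actual seven-dimensional landmark parameterization) looks like:

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """Generic EKF measurement update.
    x: state mean, P: state covariance, z: measurement,
    h: predicted measurement h(x), H: Jacobian of h at x,
    R: measurement noise covariance."""
    y = z - h                           # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy example: directly observe the first two state components
# of a hypothetical (x, y, heading) robot pose.
x = np.array([0.0, 0.0, 0.0])
P = np.eye(3)
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
R = 0.1 * np.eye(2)
z = np.array([1.0, 2.0])
x_new, P_new = ekf_update(x, P, z, H @ x, H, R)
print(x_new)  # state pulled toward the measurement
```

In full EKF-SLAM the state vector additionally stacks every landmark, so the covariance grows quadratically with map size; managing a bounded local map in hardware, as the thesis does, keeps this update tractable in real time.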

    Pattern Recognition

    Get PDF
    Pattern recognition is a very wide research field. It involves factors as diverse as sensors, feature extraction, pattern classification, decision fusion, applications, and others. The signals processed are commonly one-, two-, or three-dimensional; the processing is done in real time or takes hours and days; some systems look for one narrow object class, while others search huge databases for entries with at least a small amount of similarity. No single person can claim expertise across the whole field, which develops rapidly, updates its paradigms, and encompasses several philosophical approaches. This book reflects this diversity by presenting a selection of recent developments within the area of pattern recognition and related fields. It covers theoretical advances in classification and feature extraction as well as application-oriented works. The authors of these 25 works present and advocate recent achievements of their research related to the field of pattern recognition.

    Detection of moving objects through spatio-temporal motion analysis

    Get PDF
    Driver assistance systems of the future, which will support the driver in complex driving situations, require a thorough understanding of the car's environment. This includes not only comprehension of the infrastructure, but also the precise detection and measurement of other moving traffic participants. In this thesis, a novel principle that allows the reconstruction of the 3d motion field from the image sequence obtained by a stereo camera system is presented and investigated in detail. Given correspondences of stereo measurements over time, this principle estimates the 3d position and the 3d motion vector of selected points using Kalman filters, resulting in a real-time estimation of the observed motion field. Since the state vector of the Kalman filter consists of six elements, this principle is called 6d-Vision. To estimate the absolute motion field, the ego-motion of the moving observer must be known precisely. Since cars are usually not equipped with high-end inertial sensors, a novel algorithm to estimate the ego-motion from the image sequence is presented. Based on a Kalman filter, it is able to support even complex vehicle models, and it takes advantage of all available data, namely the previously estimated motion field and any available inertial sensors. As the 6d-Vision principle is not restricted to particular algorithms for obtaining the image measurements, various optical flow and stereo algorithms are evaluated. In particular, a novel dense stereo algorithm is presented that gives excellent precision and runs in real time. In addition, two novel scene flow algorithms are introduced that measure the optical flow and stereo information in a combined approach, yielding more precise and robust results than a separate analysis of the two information sources. The application of the 6d-Vision principle to real-world data is illustrated throughout the thesis.
As practical applications usually require an object understanding rather than a raw 3d motion field, a simple yet efficient algorithm to detect and track moving objects is presented. The method has been extensively tested on the road and today serves as an important information basis for various applications. In particular, this algorithm was successfully implemented in a demonstrator vehicle that performs an autonomous braking or steering manoeuvre to avoid collisions with moving pedestrians.
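The six-element state described above (3d position plus 3d velocity) corresponds to a standard constant-velocity Kalman filter per tracked point. A minimal sketch of the prediction step under that motion model (an illustration of the general idea, not the thesis's implementation) is:

```python
import numpy as np

def make_constant_velocity_model(dt):
    """6d state [x, y, z, vx, vy, vz]: position advances by velocity*dt."""
    F = np.eye(6)
    F[0:3, 3:6] = dt * np.eye(3)
    return F

def kf_predict(x, P, F, Q):
    """Standard Kalman prediction: propagate mean and covariance."""
    return F @ x, F @ P @ F.T + Q

dt = 0.1
F = make_constant_velocity_model(dt)
x = np.array([0.0, 0.0, 10.0, 1.0, 0.0, -2.0])  # point 10 m ahead, moving
P = np.eye(6)
Q = 1e-3 * np.eye(6)  # process noise, tolerating deviations from the model

x_pred, P_pred = kf_predict(x, P, F, Q)
print(x_pred[:3])  # position advanced by velocity * dt
```

Each image measurement (stereo depth plus optical flow) would then correct this prediction via the usual Kalman update, which is what lets 6d-Vision report a filtered motion vector per point in real time.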

    Belle II Technical Design Report

    Full text link
    The Belle detector at the KEKB electron-positron collider has collected almost 1 billion Y(4S) events in its decade of operation. SuperKEKB, an upgrade of KEKB, is under construction to increase the luminosity by two orders of magnitude during a three-year shutdown, with an ultimate goal of 8×10^35 /cm^2/s luminosity. To exploit the increased luminosity, an upgrade of the Belle detector has been proposed. A new international collaboration, Belle II, is being formed. The Technical Design Report presents the physics motivation, the basic methods of the accelerator upgrade, and the key improvements of the detector. Comment: Edited by Z. Doležal and S. Un