High-performance hardware accelerators for image processing in space applications
Mars is a hard place to reach. While there have been many notable success stories in getting probes to the Red Planet, the historical record is full of bad news. The success rate for actually landing on the Martian surface is even worse, roughly 30%. This low success rate is mainly due to the characteristics of the Martian environment. Strong winds frequently blow in the Martian atmosphere; this phenomenon typically perturbs the lander's descent trajectory, diverting it from the target one. Moreover, the Martian surface is not an easy place to perform a safe landing: it is pitted with many closely spaced craters and large boulders, and characterized by huge mountains and hills (e.g., Olympus Mons is 648 km in diameter and 27 km tall). For these reasons, a mission failure caused by landing in a large crater, on a boulder, or on a steeply sloped part of the surface is highly probable.
In recent years, all space agencies have increased their research efforts to enhance the success rate of Mars missions. In particular, the two hottest research topics are active debris removal and guided landing on Mars.
The former aims at finding new methods to remove space debris by exploiting unmanned spacecraft. These must be able to autonomously detect a piece of debris, analyze it to extract its characteristics in terms of weight, speed, and size, and, eventually, rendezvous with it. To perform these tasks, the spacecraft must have strong vision capabilities: it must be able to take pictures and process them with very complex image processing algorithms in order to detect, track, and analyze the debris.
The latter aims at increasing the landing-point precision (i.e., shrinking the landing ellipse) on Mars. Future space missions will increasingly adopt video-based navigation systems to assist the entry, descent, and landing (EDL) phase of space modules (e.g., spacecraft), enhancing the precision of automatic EDL navigation systems. For instance, recent space exploration missions such as Spirit, Opportunity, and Curiosity used an EDL procedure that follows a fixed, precomputed descent trajectory to reach a precise landing point. This approach guarantees a landing-point precision no better than 20 km. Comparing this figure with the characteristics of the Martian environment makes it clear that the mission failure probability remains very high.
A very challenging problem is to design an autonomously guided EDL system able to further reduce the landing ellipse, while guaranteeing avoidance of dangerous areas of the Martian surface (e.g., large craters or boulders) that could lead to mission failure. The autonomous behaviour of the system is mandatory, since a manually driven approach is not feasible due to the distance between Earth and Mars. This distance varies from roughly 56 to 400 million km as the two planets move along their orbits, so even a radio signal travelling at the speed of light needs between about 3 and 22 minutes to cover it one way; a ground-in-the-loop command round trip would therefore exceed the overall duration of the EDL phase.
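As a quick back-of-the-envelope check on these figures (my own arithmetic, not part of the cited mission data), the one-way signal delay is simply distance over the speed of light:

\[
t=\frac{d}{c},\qquad
t_{\min}\approx\frac{5.6\times 10^{10}\,\mathrm{m}}{3.0\times 10^{8}\,\mathrm{m/s}}\approx 187\,\mathrm{s}\approx 3.1\,\mathrm{min},\qquad
t_{\max}\approx\frac{4.0\times 10^{11}\,\mathrm{m}}{3.0\times 10^{8}\,\mathrm{m/s}}\approx 1333\,\mathrm{s}\approx 22\,\mathrm{min}.
\]

Since recent rovers complete the EDL phase in roughly seven minutes, even the best-case command round trip of about six minutes leaves no room for ground-in-the-loop control.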
In both applications, the algorithms must guarantee self-adaptability to the environmental conditions. Since the harsh conditions of Mars (and of space in general) are difficult to predict at design time, these algorithms must be able to automatically tune their internal parameters according to the current conditions.
Moreover, real-time performance is another key factor. Since a software implementation of these computationally intensive tasks cannot reach the required performance, these algorithms must be accelerated in hardware.
For these reasons, this thesis presents my research work on advanced image processing algorithms for space applications and the associated hardware accelerators. My research activity focused on both the algorithms and their hardware implementations. Concerning the former, I mainly worked on integrating self-adaptability features into existing algorithms. Concerning the latter, I studied and validated a methodology to efficiently develop, verify, and validate hardware components aimed at accelerating video-based applications. This approach allowed me to develop and test high-performance hardware accelerators that significantly outperform current state-of-the-art implementations.
The thesis is organized into four main chapters.
Chapter 2 starts with a brief introduction to the history of digital image processing. The main content of this chapter is the description of space missions in which digital image processing plays a key role. Particular attention is devoted to the missions on which my research activity has a substantial impact: for these missions, the chapter analyzes and evaluates the state-of-the-art approaches and algorithms in depth.
Chapter 3 analyzes and compares the two technologies used to implement high-performance hardware accelerators, i.e., Application-Specific Integrated Circuits (ASICs) and Field Programmable Gate Arrays (FPGAs). This information helps the reader understand the main reasons behind the decision of space agencies to exploit FPGAs instead of ASICs for high-performance hardware accelerators in space missions, even though FPGAs are more sensitive to Single Event Upsets (SEUs, i.e., transient errors induced in hardware components by alpha particles and solar radiation in space). Moreover, this chapter describes in detail the three available space-grade FPGA technologies (i.e., one-time programmable, flash-based, and SRAM-based) and the main fault-mitigation techniques against SEUs that are mandatory for employing space-grade FPGAs in actual missions.
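To give a concrete flavour of SEU mitigation, the classic technique is Triple Modular Redundancy (TMR): the same logic is instantiated three times and a majority voter masks an upset in any single copy. The following is a minimal software model of the bitwise voter (an illustrative sketch, not code from the thesis):

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority vote over three redundant copies of a value.

    Each output bit takes the value held by at least two of the three
    inputs, so a Single Event Upset corrupting one copy is masked.
    """
    return (a & b) | (b & c) | (a & c)

# One copy suffers a single bit flip; the vote still recovers the value.
golden = 0b1011_0101
upset = golden ^ 0b0000_1000  # simulated particle-induced bit flip
assert tmr_vote(golden, upset, golden) == golden
```

In a real space-grade FPGA design the voter itself is typically replicated as well, since a single voter would otherwise be a single point of failure.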
Chapter 4 describes one of the main contributions of my research work: a library of high-performance hardware accelerators for image processing in space applications. The basic idea behind this library is to offer designers a set of validated hardware components that significantly speed up the basic operations commonly used in an image processing chain. In other words, these components can be used directly as elementary building blocks to easily create a complex image processing system, without wasting time on debugging and validation. The library groups the proposed hardware accelerators into IP-core families. The components in the same family provide the same functionality and share the same input/output interface. This harmonization of the I/O interface makes it possible to substitute components of the same family inside a complex image processing system without modifying the system's communication infrastructure. In addition to the analysis of the internal architecture of the proposed components, another important aspect of this chapter is the methodology used to develop, verify, and validate the proposed high-performance image processing hardware accelerators. This methodology involves the use of different programming and hardware description languages in order to support the designer from algorithm modelling up to hardware implementation and validation.
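The family concept can be illustrated with a short software sketch: every member of a family exposes the same processing interface, so one block can replace another without touching the surrounding pipeline. All names below (ImageFilter, SobelFilter, MedianFilter, pipeline) are hypothetical illustrations, not the actual API of the thesis library:

```python
from abc import ABC, abstractmethod
import numpy as np

class ImageFilter(ABC):
    """Common interface shared by all members of a filter family."""
    @abstractmethod
    def process(self, frame: np.ndarray) -> np.ndarray: ...

class SobelFilter(ImageFilter):
    def process(self, frame: np.ndarray) -> np.ndarray:
        # Horizontal gradient via a 3x3 Sobel kernel (border left unpadded).
        k = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
        out = np.zeros_like(frame, dtype=np.int32)
        for y in range(1, frame.shape[0] - 1):
            for x in range(1, frame.shape[1] - 1):
                out[y, x] = np.sum(k * frame[y - 1:y + 2, x - 1:x + 2])
        return np.abs(out).clip(0, 255).astype(frame.dtype)

class MedianFilter(ImageFilter):
    def process(self, frame: np.ndarray) -> np.ndarray:
        # 3x3 median for salt-and-pepper noise removal.
        out = frame.copy()
        for y in range(1, frame.shape[0] - 1):
            for x in range(1, frame.shape[1] - 1):
                out[y, x] = np.median(frame[y - 1:y + 2, x - 1:x + 2])
        return out

def pipeline(frame: np.ndarray, stages: list[ImageFilter]) -> np.ndarray:
    # Any family member can be swapped in without changing this code.
    for stage in stages:
        frame = stage.process(frame)
    return frame
```

Swapping SobelFilter() for MedianFilter() in the stages list changes the functionality but not a single line of the pipeline code, which mirrors how harmonized I/O interfaces let IP-cores of the same family be exchanged in the hardware system.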
Chapter 5 presents the proposed complex image processing systems. In particular, it exploits a set of real case studies, associated with the most recent needs of space agencies, to show how the hardware accelerator components can be assembled into a complex image processing system. In addition to the hardware accelerators contained in the library, each described system embeds innovative ad-hoc hardware components and software routines that provide high-performance, self-adaptable image processing functionalities. To prove the benefits of the proposed methodology, each case study concludes with a comparison against current state-of-the-art implementations, highlighting the benefits in terms of performance and self-adaptability to the environmental conditions.
ENERGY-EFFICIENT FEATURE EXTRACTION ENGINE AND SECURE CHIP IDENTIFICATION FOR UBIQUITOUS SURVEILLANCE
OPTIMIZED ARCHITECTURE DESIGN AND IMPLEMENTATION OF OBJECT TRACKING ALGORITHM ON FPGA
FPGA-based object tracking is one of the most recent video surveillance applications in embedded systems. In general, an FPGA implementation is more efficient than a general-purpose computer at attaining high throughput, thanks to its parallelism and execution speed. The system needs to be designed around a standard frame rate so as to achieve optimal performance in a real-time environment. An optimal design minimizes cost, area (device utilization), and power while achieving the required speed. Past research on implementing object tracking systems on FPGAs achieved significantly high throughput but showed high device utilization. This research work aims at optimizing device utilization under real-time constraints. The Adaptive Hybrid Difference (AHD) algorithm, which is used to detect moving objects, was chosen for the FPGA implementation because of its computational efficiency and suitability for hardware implementation. AHD copes with varying lighting conditions automatically by re-determining an adaptive threshold at regular intervals.
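The abstract does not spell out the AHD equations, but the general hybrid-difference idea it builds on can be sketched as follows: combine frame-to-frame differencing with background differencing, and re-estimate the threshold from current image statistics so detection tracks lighting changes. Every detail below (function names, the mean-plus-k-sigma threshold rule) is an assumption for illustration:

```python
import numpy as np

def adaptive_threshold(diff: np.ndarray, k: float = 2.0) -> float:
    # Re-estimated from the current difference image, so the detector
    # adapts automatically as lighting conditions change.
    return float(diff.mean() + k * diff.std())

def detect_moving(prev: np.ndarray, curr: np.ndarray,
                  background: np.ndarray) -> np.ndarray:
    """Hybrid difference: a pixel is foreground only if it differs both
    from the previous frame and from the background model."""
    frame_diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    bg_diff = np.abs(curr.astype(np.int16) - background.astype(np.int16))
    return (frame_diff > adaptive_threshold(frame_diff)) & \
           (bg_diff > adaptive_threshold(bg_diff))
```

Both differencing stages map naturally onto an FPGA pixel pipeline, since each output pixel depends only on the corresponding input pixels and two scalar thresholds.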
A Novel Hardware Architecture for Real Time Extraction of Local Features
Thesis (Ph.D.) -- Graduate School of Seoul National University, Dept. of Electrical and Computer Engineering, 2016. 8. Advisor: Hyuk-Jae Lee.

The dramatic improvement and diffusion of computing performance has broadened the application domain of computer technology from desktop machines to smartphones, smart TVs, and passenger cars. In this changing environment, the public became ready to embrace ever more innovative functionality, and computer vision technology has gradually moved toward commercialization to meet this demand. Widely applicable computer vision techniques such as object recognition, object tracking, and 3D reconstruction require image matching, i.e., finding the same pixel across different images. Among the related work, the Scale-Invariant Feature Transform (SIFT) algorithm was proposed to match stably even when an image is scaled or rotated, and the Affine Invariant Extension of SIFT (ASIFT) algorithm was subsequently proposed to be robust against camera viewpoint changes as well. While SIFT and ASIFT provide highly stable image matching, they demand a large amount of computation, and research on accelerating them with specifically designed hardware is therefore ongoing.

This thesis proposes an ASIFT hardware architecture that operates in real time (30 frames/sec). To this end, a SIFT hardware architecture capable of computing SIFT features in real time is proposed first. Because the SIFT algorithm is so widely used, many hardware architectures for it have been studied. Most existing SIFT hardware meets the real-time requirement, but does so by using an excessive amount of internal memory, which greatly increases hardware cost. To overcome this drawback, SIFT architectures mixing internal and external memory have been proposed; in that case, however, frequent external memory accesses limit the operating speed because of external memory latency. To solve this problem, this thesis proposes a scheme that reuses data read from external memory, together with data-volume reduction schemes that down-sample the data stored in external memory and remove its less significant bits. The proposed SIFT hardware stores the Gaussian images in external memory, so many external memory accesses occur when local patches are read for descriptor generation. To reduce them, a method that reuses the data overlapping between different local patches, and a hardware architecture supporting it, are proposed. In addition, down-sampling and removal of less significant bits are used to reduce the data volume of the Gaussian images themselves, while minimizing the accuracy degradation of the SIFT algorithm. As a result, the proposed design uses only 10.93% of the internal memory of state-of-the-art SIFT hardware and runs at 30 frames/sec (fps) for 3,300 key-points.

To execute the ASIFT algorithm at high speed, the affine transform hardware that supplies affine-transformed images to the SIFT hardware must deliver data without delay. With the conventional affine transform computation, however, the affine transform hardware accesses discontinuous addresses when reading the original image from external memory. This incurs external memory latency and prevents the affine transform module from supplying enough data to the SIFT hardware. To solve this problem, this thesis modifies the affine transform computation by exploiting the rotation-invariant property of SIFT features. The modified scheme lets all affine transforms performed by the ASIFT algorithm access the input image at consecutive external memory addresses, which greatly reduces unnecessary external memory latency. The proposed affine transform first scales the original image and then skews it; this thesis further proposes reusing the scaled image data across different affine transforms, which reduces both the scaling computation and the external memory accesses. The resulting speed-up of the affine transform hardware allows it to feed the SIFT hardware without stalls, and ultimately improves the operating speed of the ASIFT hardware through higher utilization. As a result, the ASIFT hardware proposed in this thesis operates at high utilization and can execute the ASIFT algorithm at 30 fps on images in which 2,500 key-points are detected.
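For background, the viewpoint simulation that ASIFT sweeps over is conventionally written, following Morel and Yu's original ASIFT formulation (standard material, not a contribution of this thesis), as a zoom, a rotation, a tilt, and a second rotation:

\[
A=\lambda\,R_1(\psi)\,T_t\,R_2(\phi)
=\lambda
\begin{pmatrix}\cos\psi & -\sin\psi\\ \sin\psi & \cos\psi\end{pmatrix}
\begin{pmatrix}t & 0\\ 0 & 1\end{pmatrix}
\begin{pmatrix}\cos\phi & -\sin\phi\\ \sin\phi & \cos\phi\end{pmatrix},
\qquad t=\frac{1}{\cos\theta}\ge 1,
\]

where \(\theta\) is the camera viewpoint angle and \(\lambda>0\) the zoom. Since SIFT is itself rotation invariant, the leading rotation \(R_1(\psi)\) need not be simulated; this is precisely the rotation-invariance property the thesis exploits to recast each simulated transform as a scaling followed by a skew, so that the input image can be read at consecutive external memory addresses.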
Table of contents:

Chapter 1: Introduction
1.1 Research Background
1.2 Research Content
1.3 Thesis Organization
Chapter 2: Previous Work and Problem Statement
2.1 The SIFT Algorithm and Its Acceleration
2.1.1 Scale-Invariant Feature Transform (SIFT)
2.1.2 Existing SIFT Acceleration Research and Its Limitations
2.2 The ASIFT Algorithm and Its Acceleration
2.2.1 Affine Invariant Extension of SIFT (ASIFT)
2.2.2 Existing ASIFT Acceleration Research
2.3 Research Direction for a Real-Time ASIFT Hardware Implementation
Chapter 3: SIFT Hardware Architecture with Reduced External Memory Bandwidth
3.1 Analysis of Intermediate SIFT Data Stored in External Memory
3.2 Schemes for Reducing External Memory Bandwidth
3.2.1 Local-Patch Reuse
3.2.2 Local-Patch Down-Sampling
3.2.3 Removal of Less Significant Bits of the Gaussian Images
3.2.4 SIFT Hardware Architecture with Bandwidth Optimization
3.3 Experimental Results for the SIFT Hardware
3.3.1 SIFT Hardware Implementation
3.3.2 Analysis of External Memory Bandwidth Requirements
3.3.3 Operating Speed
3.3.4 Feature Matching Accuracy
Chapter 4: ASIFT Hardware Architecture
4.1 An Affine Transform Scheme Suited to ASIFT Hardware
4.1.1 The New Affine Transform Scheme
4.1.2 Memory Space Optimization of the Internal Image Buffer
4.2 Architecture of the ASIFT Hardware
4.2.1 Basic Hardware Architecture and Scaling Reuse
4.2.2 Configuration of the Affine Transform Parameters
4.2.3 Description of the ASIFT Hardware Architecture
4.3 Experimental Results for the ASIFT Hardware
4.3.1 Memory Latency Reduction from the New Affine Transform Scheme
4.3.2 Output Bandwidth Improvement of the Affine Transform Module
4.3.3 ASIFT Hardware Implementation and Operating Speed
4.3.4 Feature Matching Accuracy
Chapter 5: Conclusion
References
Abstract
Towards a Common Software/Hardware Methodology for Future Advanced Driver Assistance Systems
The European research project DESERVE (DEvelopment platform for Safe and Efficient dRiVE, 2012-2015) had the aim of designing and developing a platform tool to cope with the continuously increasing complexity and the simultaneous need to reduce cost for future embedded Advanced Driver Assistance Systems (ADAS). For this purpose, the DESERVE platform profits from cross-domain software reuse, standardization of automotive software component interfaces, and easy but safety-compliant integration of heterogeneous modules. This enables the development of a new generation of ADAS applications, which challengingly combine different functions, sensors, actuators, hardware platforms, and Human Machine Interfaces (HMI). This book presents the different results of the DESERVE project concerning the ADAS development platform, test case functions, and validation and evaluation of different approaches. The reader is invited to substantiate the content of this book with the deliverables published during the DESERVE project. Technical topics discussed in this book include: modern ADAS development platforms; design space exploration; driving modelling; video-based and radar-based ADAS functions; HMI for ADAS; and vehicle-hardware-in-the-loop validation systems.
Smart environment monitoring through micro unmanned aerial vehicles
In recent years, the improvements of small-scale Unmanned Aerial Vehicles (UAVs) in terms of flight time, automatic control, and remote transmission are promoting the development of a wide range of practical applications. In aerial video surveillance, the monitoring of broad areas still poses many challenges, owing to the need to perform different tasks in real time, including mosaicking, change detection, and object detection. In this thesis work, a small-scale UAV-based vision system to maintain regular surveillance over target areas is proposed. The system works in two modes. The first mode monitors an area of interest over several flights. During the first flight, it creates an incremental geo-referenced mosaic of the area of interest and classifies all known elements (e.g., persons) found on the ground using a previously trained, improved Faster R-CNN architecture. In subsequent reconnaissance flights, the system searches for any changes (e.g., the disappearance of persons) in the mosaic using an algorithm based on histogram equalization and RGB Local Binary Patterns (RGB-LBP); if changes are found, the mosaic is updated. The second mode performs real-time classification, again using our improved Faster R-CNN model, which is useful for time-critical operations. Thanks to several design features, the system works in real time and performs the mosaicking and change detection tasks at low altitude, thus allowing the classification even of small objects. The proposed system was tested using the whole set of challenging video sequences contained in the UAV Mosaicking and Change Detection (UMCD) dataset and other public datasets. The evaluation of the system with well-known performance metrics has shown remarkable results in terms of mosaic creation and updating, as well as in terms of change detection and object detection.
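As a rough illustration of the descriptor used for change detection, a basic LBP code compares each pixel with its eight neighbours and packs the eight comparison bits into a byte; RGB-LBP applies the same operator to each colour channel. A minimal sketch follows (the exact variant used in the thesis may differ):

```python
import numpy as np

def lbp(channel: np.ndarray) -> np.ndarray:
    """Basic 8-neighbour Local Binary Pattern for a single channel."""
    center = channel[1:-1, 1:-1]
    # Neighbour offsets, clockwise from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(center.shape, dtype=np.uint8)
    h, w = channel.shape
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = channel[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # Set this bit wherever the neighbour is at least as bright.
        codes |= ((neighbour >= center) << bit).astype(np.uint8)
    return codes

def rgb_lbp(image: np.ndarray) -> np.ndarray:
    """Per-channel LBP codes for an H x W x 3 RGB image."""
    return np.stack([lbp(image[..., ch]) for ch in range(3)], axis=-1)
```

Comparing histograms of these codes between the stored mosaic and a new frame gives a lighting-robust change score, which fits naturally with the histogram-equalization preprocessing mentioned above.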
Dynamically reconfigurable architecture for embedded computer vision systems
The objective of this research work is to design, develop, and implement a new architecture that integrates on the same chip all the processing levels of a complete computer vision system, so that execution is efficient without compromising power consumption while keeping cost low. For this purpose, an analysis and classification of the mathematical operations and algorithms commonly used in computer vision is carried out, together with an in-depth review of the image processing capabilities of current-generation hardware devices. This makes it possible to determine the requirements and key aspects of an efficient architecture. A representative set of algorithms is employed as a benchmark to evaluate the proposed architecture, which is implemented on an FPGA-based system-on-chip. Finally, the prototype is compared with other related approaches in order to determine its advantages and weaknesses.