147 research outputs found

    Software for Embedded Module for Image Processing

    Department of Cybernetics

    Design and implementation of a real-time miniaturized embedded stereo-vision system

    The main motivation of the thesis is to develop a fully integrated, modular, small-baseline (≤3 cm), low-cost (≤CAD$600), real-time miniaturized embedded stereo-vision system which fits within 5x5 cm and consumes very low power. The system consists of two small-profile cameras and a dual-core embedded media processor running at 600 MHz per core. The stereo-matching engine performs sub-sampling, rectification, pre-processing using the census transform, correlation-based Sum of Hamming Distance matching with three levels of recursion, a left-right consistency (LRC) check, and post-processing. The novel post-processing algorithm removes outliers caused by low-texture regions and depth discontinuities. A quantitative evaluation of the post-processing algorithm is presented, showing an average improvement of 13.61% across all regions (based on the 2006 Middlebury dataset). To further enhance the performance of the system, optimization steps are employed to achieve a speed of around 10 fps for disparity maps in MESVS-I and 20 fps in the MESVS-II system.
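    As a rough illustration of the census-plus-Hamming matching core described above, the NumPy sketch below computes a census transform and a per-pixel Hamming-distance cost volume. It is a minimal software sketch only: the window size, disparity range and function names are illustrative assumptions, and the thesis' sub-sampling, recursion levels, LRC check and post-processing are omitted.

```python
import numpy as np

def census_transform(img, win=5):
    """Census transform: encode each pixel as a bit string of comparisons
    between its neighbors in a win x win window and the center pixel."""
    h, w = img.shape
    r = win // 2
    padded = np.pad(img, r, mode="edge")
    codes = np.zeros((h, w), dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neighbor = padded[r + dy:r + dy + h, r + dx:r + dx + w]
            codes = (codes << np.uint64(1)) | (neighbor < img).astype(np.uint64)
    return codes

def hamming_cost(left_codes, right_codes, max_disp=32):
    """Hamming-distance cost volume for disparities 0..max_disp-1,
    with the left image as reference."""
    h, w = left_codes.shape
    cost = np.full((max_disp, h, w), 64, dtype=np.uint8)  # 64 > any valid cost
    for d in range(max_disp):
        xor = left_codes[:, d:] ^ right_codes[:, :w - d]
        # popcount by unpacking the bytes of each 64-bit census code
        bits = np.unpackbits(xor.view(np.uint8).reshape(h, w - d, 8), axis=-1)
        cost[d, :, d:] = bits.sum(axis=-1)
    return cost
```

    A winner-takes-all disparity map is then cost.argmin(axis=0); the embedded pipeline would additionally aggregate the per-pixel costs over a support window (the "Sum" in Sum of Hamming Distance) and apply the LRC check and post-processing described in the abstract.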

    Fast and robust stereo matching algorithm for obstacle detection in robotic vision systems

    In this paper, we propose a new area-based stereo matching method that improves on the classical Census transform. Matching corresponding points in two images taken by stereo cameras is a difficult task, especially under varying illumination and non-ideal conditions. The classic Census non-parametric transform improves the accuracy of the disparity map under these conditions, but it also has disadvantages: because of the complexity of the algorithm, its performance is not suitable for real-time robotic systems. To solve this problem, this paper presents a differential transform that uses the maximum intensity difference between the pixel at the center of a defined window and the pixels in its neighborhood, reducing complexity and obtaining better performance than the Census transform. Experimental results show that the proposed method achieves better efficiency in terms of speed and memory consumption. Moreover, we have added a new feature to widen the depth detection range. With the proposed method, robots can detect obstacles between 25 cm and 400 cm from the robot's cameras. The results show that the method works under a wide variety of lighting conditions, while the stereo matching performs the depth computation at a speed of 30 fps.
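    The quoted 25 cm to 400 cm working range is governed by the standard rectified-stereo relation Z = f*B/d (depth Z, focal length f in pixels, baseline B, disparity d in pixels). The short sketch below only illustrates that relation; the focal length and baseline values are assumptions for the example, not parameters reported in the paper.

```python
import numpy as np

# Z = f * B / d for rectified stereo. The constants below are illustrative
# assumptions, not values taken from the paper.
FOCAL_PX = 700.0    # assumed focal length in pixels
BASELINE_M = 0.06   # assumed baseline of 6 cm

def depth_from_disparity(disparity_px):
    """Convert disparity (pixels) to metric depth; infinite where disparity <= 0."""
    d = np.asarray(disparity_px, dtype=np.float64)
    return np.where(d > 0, FOCAL_PX * BASELINE_M / d, np.inf)

def disparity_bounds(z_near_m, z_far_m):
    """Disparities that bracket a desired working range [z_near, z_far]."""
    return FOCAL_PX * BASELINE_M / z_near_m, FOCAL_PX * BASELINE_M / z_far_m

# A 0.25 m .. 4.0 m range (as quoted in the abstract) maps to disparities of
# 168 .. 10.5 pixels under the assumed focal length and baseline.
print(disparity_bounds(0.25, 4.0))
```

    Near obstacles map to large disparities and far obstacles to small ones, so the detectable depth range is bounded by the disparity search range and the sub-pixel resolution of the matcher.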

    Design of a Real-time Image-based Distance Sensing System by Stereo Vision on FPGA

    A stereo vision system is a robust method to sense distance information in a scene. This research explores the stereo vision system from the fundamentals of stereo vision and computer stereo vision algorithms to the final implementation of the system on an FPGA chip. In a stereo vision system, images are captured by a pair of stereo image sensors. The distance information can be derived from the disparities between the stereo image pair, based on the theory of binocular geometry. With the increasing focus on 3D vision, stereo vision is becoming a hot topic in computer games, robot vision and medical applications. In particular, most stereo vision systems are expected to be used in real-time applications. In this thesis, several stereo correspondence algorithms that determine the disparities between a stereo image pair are examined. The algorithms can be categorized into global and local stereo algorithms depending on the optimization techniques. The global algorithms examined are the Dynamic Time Warp (DTW) algorithm and the DTW with quantization algorithm, while the local algorithms examined are the window-based Sum of Squared Differences (SSD), Sum of Absolute Differences (SAD) and Census transform correlation algorithms. Based on an analysis of these algorithms, the window-based SAD correlation algorithm is proposed for implementation on an FPGA platform. The proposed algorithm is implemented on an Altera DE2 board featuring an Altera Cyclone II 2C35 FPGA. The implemented module is simulated using ModelSim-Altera to verify the correctness of its functionality. Together with a pair of stereo image sensors and an LCD monitor, a complete stereo vision system is built. The entire system achieves a real-time video frame rate of 16.83 frames per second at an image resolution of 640 by 480 and produces disparity maps in which objects are clearly distinguished by their relative distance information.
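    The window-based SAD correlation selected for the FPGA is straightforward to prototype in software. The NumPy sketch below is a minimal reference version (window size and disparity range are illustrative assumptions); the FPGA implementation that reaches 16.83 frames per second at 640 by 480 is necessarily structured quite differently.

```python
import numpy as np

def sad_disparity(left, right, max_disp=64, win=9):
    """Winner-takes-all disparity map using window-based Sum of Absolute
    Differences, with the left image as reference."""
    h, w = left.shape
    r = win // 2
    left_i = left.astype(np.int32)
    right_i = right.astype(np.int32)
    cost = np.full((max_disp, h, w), np.iinfo(np.int32).max, dtype=np.int32)
    for d in range(max_disp):
        # per-pixel absolute difference for this disparity hypothesis
        ad = np.abs(left_i[:, d:] - right_i[:, :w - d])
        # aggregate over a win x win support window (simple box filter)
        padded = np.pad(ad, r, mode="edge")
        agg = np.zeros_like(ad)
        for dy in range(win):
            for dx in range(win):
                agg += padded[dy:dy + ad.shape[0], dx:dx + ad.shape[1]]
        cost[d, :, d:] = agg
    return cost.argmin(axis=0).astype(np.uint8)
```

    In hardware, the same per-disparity costs are typically computed incrementally with line buffers and running window sums rather than full-frame arrays, which is what makes the real-time frame rate attainable.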

    Real-Time High-Resolution Multiple-Camera Depth Map Estimation Hardware and Its Applications

    Depth information is used in a variety of 3D signal processing applications such as autonomous navigation of robots and driving systems, object detection and tracking, computer games, 3D television, and free-viewpoint synthesis. These applications require high accuracy and speed for depth estimation. Depth maps can be generated using disparity estimation methods, where disparities are obtained from stereo matching between multiple images. The computational complexity of disparity estimation algorithms and the need for large external and internal memory size and bandwidth make real-time processing of disparity estimation challenging, especially for high-resolution images. This thesis proposes high-resolution, high-quality multiple-camera depth map estimation hardware. The proposed hardware is verified in real-time with a complete system, from the initial image capture to the display and applications, and the details of the complete system are presented. The proposed binocular and trinocular adaptive-window-size disparity estimation algorithms are carefully designed to be suitable for real-time hardware implementation, allowing efficient parallel and local processing while providing high-quality results. The proposed binocular and trinocular disparity estimation hardware implementations can process 55 frames per second on a Virtex-7 FPGA at a 1024 x 768 XGA video resolution for a 128-pixel disparity range. The proposed binocular disparity estimation hardware provides the best quality compared to existing real-time high-resolution disparity estimation hardware implementations. A novel compressed look-up table based rectification algorithm and its real-time hardware implementation are presented. The low-complexity decompression process of the rectification hardware utilizes a negligible amount of the FPGA's LUT and DFF resources and does not require external memory. The first real-time high-resolution free-viewpoint synthesis hardware utilizing three-camera disparity estimation is presented. The proposed hardware generates high-quality free-viewpoint video in real time for any horizontally aligned arbitrary camera position between the leftmost and rightmost physical cameras. The full embedded system for depth estimation is explained. The presented embedded system transfers disparity results together with synchronized RGB pixels to a PC for application development. Several real-time applications are developed on the PC using the obtained RGB+D results. The implemented real-time, depth-based software applications are: depth-based image thresholding, speed and distance measurement, head-hands-shoulders tracking, a virtual mouse using hand tracking, and face tracking integrated with free-viewpoint synthesis. The proposed binocular disparity estimation hardware is also implemented as an ASIC. The ASIC implementation of the disparity estimation imposes additional constraints with respect to the FPGA implementation; these restrictions, their efficient solutions and the ASIC implementation results are presented. In addition, a very high-resolution (82.3 MP) 360°x90° omnidirectional multiple-camera system is proposed. The hemispherical camera system is able to view target locations close to the horizontal plane with more than two cameras; therefore, it can be used in high-resolution 360° depth map estimation and its applications in the future.
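    Rectification through a precomputed look-up table is the generic idea behind the compressed-LUT scheme mentioned above: for every destination pixel, the table stores the source coordinate to sample, typically derived offline from stereo calibration (e.g. with OpenCV's initUndistortRectifyMap). The sketch below shows only this uncompressed, nearest-neighbor baseline; the fixed-point format, the compression scheme and all names here are illustrative assumptions, not the thesis' hardware design.

```python
import numpy as np

def build_rectification_lut(map_x, map_y):
    """Pack precomputed rectification maps (the source coordinate for every
    destination pixel) into one integer LUT with 4 fractional bits.
    The fixed-point format is an illustrative assumption."""
    lut = np.empty(map_x.shape + (2,), dtype=np.int32)
    lut[..., 0] = np.round(map_x * 16).astype(np.int32)  # fixed-point source x
    lut[..., 1] = np.round(map_y * 16).astype(np.int32)  # fixed-point source y
    return lut

def rectify(image, lut):
    """Rectify an image by nearest-neighbor remapping through the LUT."""
    xs = np.clip(lut[..., 0] >> 4, 0, image.shape[1] - 1)
    ys = np.clip(lut[..., 1] >> 4, 0, image.shape[0] - 1)
    return image[ys, xs]
```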

    Miniaturized embedded stereo vision system (MESVS)

    Stereo vision is one of the fundamental problems of computer vision and one of the oldest and most heavily investigated areas of 3D vision. Recent advances in stereo matching methodologies, the availability of high-performance and efficient algorithms, and fast and affordable hardware technology have allowed researchers to develop several stereo vision systems capable of operating in real time. Although a multitude of such systems exist in the literature, the majority of them concentrate only on raw performance and quality rather than on factors such as dimensions and power requirements, which are of significant importance in embedded settings. In this thesis a new miniaturized embedded stereo vision system (MESVS) is presented, which is miniaturized to fit within a 5x5 cm package, is power efficient, and is cost-effective. Furthermore, through the application of embedded programming techniques and careful optimization, MESVS achieves real-time performance of 20 frames per second. This work discusses the various challenges involved in the design and implementation of this system and the measures taken to tackle them.