5,533 research outputs found

    Integrated sorting, noise estimation, object detection and contour analysis on one FPGA for video object segmentation

    Although robust video processing methods, such as compression and segmentation, have been investigated extensively on general-purpose processors (GPPs), these software implementations are too slow for real-time performance because of the computational complexity and memory bandwidth demanded by modern video processing methods. Efficient hardware acceleration is therefore indispensable for fast video systems. State-of-the-art field-programmable gate arrays (FPGAs) fill the gap between inflexible but high-performance ASICs and flexible yet performance-constrained GPPs, and are consequently employed in many signal and video processing applications. This thesis proposes an FPGA-based architecture that integrates four video processing methods (sorting, noise estimation, object detection, and contour analysis) on a single FPGA; it takes a video signal and outputs a contour-filled video sequence along with the corresponding contour chain codes. The proposed architecture targets segmentation of moving objects in video signals. Video object segmentation consists of several steps: pre-processing (e.g., noise estimation), object detection (i.e., separation of objects from background), and contour analysis. The proposed architecture is simulated, synthesized, and verified for functionality, accuracy, and performance on an actual hardware platform built around a Xilinx Virtex-4 SX35 FPGA. Compared to related work, our architecture achieves orders-of-magnitude performance improvements while using minimal hardware resources and power, and it provides key algorithmic features that are inherently required in many video processing applications.
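
    To make the data flow of such a pipeline concrete, the following is a minimal software sketch in Python/NumPy, assuming a frame-difference noise estimate, a fixed k-sigma detection threshold, and Moore-style boundary tracing for the Freeman chain codes; the function names, thresholds, and the tracer itself are illustrative choices, not the thesis's sorting-based estimator or its hardware datapath.

    import numpy as np

    # Freeman chain-code directions (image rows grow downwards):
    # 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
    OFFS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

    def estimate_noise(frame_a, frame_b):
        """Robust noise sigma from two consecutive frames (assumes little motion)."""
        d = frame_a.astype(np.int32) - frame_b.astype(np.int32)
        return 1.4826 * np.median(np.abs(d - np.median(d)))   # median absolute deviation

    def detect_objects(frame, background, sigma, k=3.0):
        """Binary object mask: pixels deviating from the background by more than k*sigma."""
        return np.abs(frame.astype(np.int32) - background.astype(np.int32)) > k * sigma

    def _next_boundary_point(mask, b, c):
        """Scan b's 8-neighbours clockwise, starting at background pixel c."""
        h, w = mask.shape
        d = OFFS.index((c[0] - b[0], c[1] - b[1]))
        for step in range(1, 9):
            nd = (d - step) % 8                               # clockwise on screen
            nr, nc = b[0] + OFFS[nd][0], b[1] + OFFS[nd][1]
            if 0 <= nr < h and 0 <= nc < w and mask[nr, nc]:
                pd = (nd + 1) % 8                             # neighbour examined just before the hit
                return (nr, nc), (b[0] + OFFS[pd][0], b[1] + OFFS[pd][1])
        return None, None                                     # isolated pixel

    def chain_code(mask):
        """Freeman chain code of the outer contour of the first blob in raster order."""
        rows, cols = np.nonzero(mask)
        if rows.size == 0:
            return []
        b = (int(rows[0]), int(cols[0]))                      # uppermost-leftmost foreground pixel
        c = (b[0], b[1] - 1)                                  # its western neighbour is background
        first, code = None, []
        while True:
            nb, nc = _next_boundary_point(mask, b, c)
            if nb is None or (nb, nc) == first:               # lone pixel, or contour closed
                return code
            if first is None:
                first = (nb, nc)
            code.append(OFFS.index((nb[0] - b[0], nb[1] - b[1])))
            b, c = nb, nc

    In hardware, each of these stages would occupy its own pipeline block and stream pixels rather than whole frames; the sketch only clarifies what each stage consumes and produces.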

    Block-Matching Optical Flow for Dynamic Vision Sensor: Algorithm and FPGA Implementation

    Rapid, low-power computation of optical flow (OF) is potentially useful in robotics. The dynamic vision sensor (DVS) event camera produces a quick, sparse output and has high dynamic range, but conventional OF algorithms are frame-based and cannot be used directly with event-based cameras. Previous DVS OF methods do not work well with dense textured input and are designed for implementation in logic circuits. This paper proposes a new block-matching-based DVS OF algorithm inspired by the motion estimation methods used for MPEG video compression. The algorithm was implemented both in software and on an FPGA. For each event, it computes the motion direction as one of 9 directions; the speed of the motion is set by the sample interval. Results show that the Average Angular Error can be improved by 30% compared with previous methods. The OF can be calculated on an FPGA with a 50 MHz clock in 0.2 µs per event (11 clock cycles), 20 times faster than a Java software implementation running on a desktop PC. Sample data show that the method works on scenes dominated by edges, sparse features, and dense texture. Comment: Published in ISCAS 201
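
    As a rough software model of the event-driven block matching described above, the sketch below accumulates events into rotating time-slice count images and, for each event, compares a small block in the previous slice against nine shifted blocks in the slice before it, picking the shift with minimum SAD; the zero shift plus the eight unit shifts give the nine motion directions, and the speed follows from the slice interval. The block radius, slice duration, and 240x180 sensor resolution are assumptions for illustration, not the paper's exact parameters.

    import numpy as np

    # 9 candidate displacements: zero plus the 8 unit directions (one pixel per slice).
    SHIFTS = [(0, 0), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]

    def block_match(prev_slice, older_slice, x, y, r=3):
        """Displacement (dy, dx) among the 9 candidates with minimum SAD for the
        block of radius r centred on (y, x)."""
        h, w = prev_slice.shape
        if not (r + 1 <= y < h - r - 1 and r + 1 <= x < w - r - 1):
            return (0, 0)                                     # skip events too close to the border
        ref = prev_slice[y - r:y + r + 1, x - r:x + r + 1].astype(np.int32)
        best, best_sad = (0, 0), None
        for dy, dx in SHIFTS:
            cand = older_slice[y - r + dy:y + r + 1 + dy,
                               x - r + dx:x + r + 1 + dx].astype(np.int32)
            sad = int(np.abs(ref - cand).sum())
            if best_sad is None or sad < best_sad:
                best, best_sad = (dy, dx), sad
        return best

    def process_events(events, shape=(180, 240), slice_us=10_000):
        """events: iterable of (timestamp_us, x, y, polarity); yields (x, y, (dy, dx))."""
        cur = np.zeros(shape, np.int32)
        prev = np.zeros(shape, np.int32)
        older = np.zeros(shape, np.int32)
        slice_start = None
        for t, x, y, _pol in events:
            if slice_start is None:
                slice_start = t
            if t - slice_start >= slice_us:                   # rotate the three time slices
                older, prev, cur = prev, cur, np.zeros(shape, np.int32)
                slice_start = t
            cur[y, x] += 1
            yield x, y, block_match(prev, older, x, y)

    With only nine small-block SAD comparisons per event, the search maps naturally onto parallel logic, which is consistent with the few-clock-cycle per-event latency reported for the FPGA version.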

    A reconfigurable frame interpolation hardware architecture for high definition video

    Since Frame Rate Up-Conversion (FRC) has started to be used in recent consumer electronics products such as high-definition TVs, real-time, low-cost implementation of FRC algorithms has become very important. Therefore, in this paper, we propose a low-cost hardware architecture for real-time implementation of frame interpolation algorithms. The proposed hardware architecture is reconfigurable and allows adaptive selection of a frame interpolation algorithm for each macroblock. The proposed hardware architecture is implemented in VHDL and mapped to a low-cost Xilinx XC3SD1800A-4 FPGA device. The implementation results show that the proposed hardware can run at 101 MHz on this FPGA and consumes 32 BRAMs and 15384 slices.
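
    The following is a rough software analogue of per-macroblock adaptive interpolation, assuming 16x16 macroblocks, grayscale frames whose dimensions are multiples of 16, integer motion vectors supplied by an external motion estimator, and a simple SAD test to choose between motion-compensated averaging and plain frame averaging; the threshold and the selection rule are placeholders, not the paper's algorithm set.

    import numpy as np

    MB = 16  # macroblock size

    def interpolate_frame(prev_f, next_f, mvs, sad_thresh=4 * MB * MB):
        """Halfway frame between prev_f and next_f.

        mvs[i, j] is an integer motion vector (dy, dx) from prev_f to next_f for
        macroblock (i, j).  Each macroblock picks either motion-compensated
        averaging (if the vector aligns the two frames well) or plain averaging."""
        h, w = prev_f.shape
        # Default: simple frame averaging everywhere (also fills uncovered areas).
        out = ((prev_f.astype(np.int32) + next_f.astype(np.int32)) // 2).astype(prev_f.dtype)
        for i in range(0, h, MB):
            for j in range(0, w, MB):
                dy, dx = mvs[i // MB, j // MB]
                a = prev_f[i:i + MB, j:j + MB].astype(np.int32)
                yy, xx = np.clip(i + dy, 0, h - MB), np.clip(j + dx, 0, w - MB)
                b = next_f[yy:yy + MB, xx:xx + MB].astype(np.int32)
                if np.abs(a - b).sum() < sad_thresh:
                    # Motion-compensated: place the averaged block halfway along the vector.
                    ty, tx = np.clip(i + dy // 2, 0, h - MB), np.clip(j + dx // 2, 0, w - MB)
                    out[ty:ty + MB, tx:tx + MB] = ((a + b) // 2).astype(prev_f.dtype)
        return out

    A reconfigurable hardware implementation would make the same per-macroblock choice by switching the datapath rather than by branching in software, but the decision structure is the same.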

    Implementation of JPEG compression and motion estimation on FPGA hardware

    A hardware implementation of JPEG allows for real-time compression in data-intensive applications such as high-speed scanning, medical imaging, and satellite image transmission. Implementation options include dedicated DSP or media processors, FPGA boards, and ASICs. Factors that affect the choice of platform include cost, speed, memory, size, power consumption, and ease of reconfiguration. The proposed hardware solution is based on a Very High Speed Integrated Circuit Hardware Description Language (VHDL) implementation of the codec, with the preferred realization being an FPGA board because of speed, cost, and flexibility. The VHDL language is commonly used to model hardware implementations from a top-down perspective. The VHDL code may be simulated to correct mistakes and subsequently synthesized into hardware using a synthesis tool such as the Xilinx ISE suite. The same VHDL code may be synthesized into a number of different hardware architectures depending on the constraints given. For example, speed was the major constraint when synthesizing the JPEG encoding and decoding pipeline, while chip area and power consumption were the primary constraints when synthesizing the on-die memory because of its large area. Thus, there is a trade-off between area and speed in logic synthesis.
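
    For orientation, the transform-and-quantize stage at the heart of a baseline JPEG encoder can be modelled in a few lines of Python/NumPy; the orthonormal 8x8 DCT and the standard luminance quantization table below are textbook JPEG, but this is a software reference only and does not reflect the VHDL pipelining or the area/speed choices discussed above.

    import numpy as np

    # Standard JPEG luminance quantization table (quality ~50).
    Q50 = np.array([
        [16, 11, 10, 16,  24,  40,  51,  61],
        [12, 12, 14, 19,  26,  58,  60,  55],
        [14, 13, 16, 24,  40,  57,  69,  56],
        [14, 17, 22, 29,  51,  87,  80,  62],
        [18, 22, 37, 56,  68, 109, 103,  77],
        [24, 35, 55, 64,  81, 104, 113,  92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103,  99],
    ])

    def dct_matrix(n=8):
        """Orthonormal type-II DCT matrix; the 2-D DCT is two separable 1-D passes."""
        k = np.arange(n)
        c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        c[0, :] = np.sqrt(1.0 / n)
        return c

    def encode_block(block, q=Q50):
        """Level shift, 2-D DCT, and quantization of one 8x8 block -- the stages
        that map naturally onto a hardware pipeline."""
        c = dct_matrix()
        coeffs = c @ (block.astype(np.float64) - 128.0) @ c.T   # separable 2-D DCT
        return np.round(coeffs / q).astype(np.int32)

    In logic synthesis, the matrix multiplications of the DCT are typically where the area/speed trade-off is decided: unrolling them raises throughput at the cost of additional multipliers.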

    A toolset for the analysis and optimization of motion estimation algorithms and processors
