11,462 research outputs found

    A high performance hardware architecture for an SAD reuse based hierarchical motion estimation algorithm for H.264 video coding

    In this paper, we present a high performance and low cost hardware architecture for real-time implementation of an SAD reuse based hierarchical motion estimation algorithm for H.264 / MPEG4 Part 10 video coding. This hardware is designed to be used as part of a complete H.264 video coding system for portable applications. The proposed architecture is implemented in Verilog HDL. The Verilog RTL code is verified to work at 68 MHz in a Xilinx Virtex II FPGA. The FPGA implementation can process 27 VGA frames (640x480) or 82 CIF frames (352x288) per second.
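
    The SAD reuse idea can be sketched in plain C: the sixteen 4x4 SADs of a macroblock are computed against the reference once and then summed to form the SADs of the larger H.264 partitions, so no pixel difference is ever recomputed. The function names and the restriction to 8x8/16x16 partitions below are illustrative simplifications, not the paper's actual design.

    ```c
    #include <stdlib.h>

    /* One 4x4 sum of absolute differences between current and reference. */
    static int sad4x4(const unsigned char *cur, const unsigned char *ref,
                      int stride)
    {
        int sum = 0;
        for (int y = 0; y < 4; y++)
            for (int x = 0; x < 4; x++)
                sum += abs(cur[y * stride + x] - ref[y * stride + x]);
        return sum;
    }

    /* SAD reuse: the 4x4 SADs are computed once and summed into the larger
     * partition SADs (8x8 and 16x16 shown; 4x8, 8x4, 8x16 and 16x8 follow
     * the same pattern). */
    void sad_reuse_16x16(const unsigned char *cur, const unsigned char *ref,
                         int stride, int sad8x8[4], int *sad16x16)
    {
        int s4[4][4];
        for (int by = 0; by < 4; by++)
            for (int bx = 0; bx < 4; bx++)
                s4[by][bx] = sad4x4(cur + 4 * (by * stride + bx),
                                    ref + 4 * (by * stride + bx), stride);

        *sad16x16 = 0;
        for (int q = 0; q < 4; q++) {
            int by = (q / 2) * 2, bx = (q % 2) * 2;
            sad8x8[q] = s4[by][bx] + s4[by][bx + 1]
                      + s4[by + 1][bx] + s4[by + 1][bx + 1];
            *sad16x16 += sad8x8[q];
        }
    }
    ```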

    An efficient hardware architecture for H.264 intra prediction algorithm

    In this paper, we present an efficient hardware architecture for real-time implementation of the intra prediction algorithm used in the H.264 / MPEG4 Part 10 video coding standard. The hardware design is based on a novel organization of the intra prediction equations. This hardware is designed to be used as part of a complete H.264 video coding system for portable applications. The proposed architecture is implemented in Verilog HDL. The Verilog RTL code is verified to work at 90 MHz in a Xilinx Virtex II FPGA. The FPGA implementation can process 27 VGA frames (640x480) per second.
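
    As background, two of the nine H.264 4x4 luma intra prediction modes can be written in a few lines of C; the paper's contribution is a hardware-friendly reorganization of such equations, which this plain software form does not capture.

    ```c
    /* Vertical mode: each column repeats the reconstructed pixel above it. */
    void intra4x4_vertical(unsigned char pred[4][4], const unsigned char top[4])
    {
        for (int y = 0; y < 4; y++)
            for (int x = 0; x < 4; x++)
                pred[y][x] = top[x];
    }

    /* DC mode (both neighbours available): the rounded mean of the four
     * pixels above and the four pixels to the left fills the block. */
    void intra4x4_dc(unsigned char pred[4][4],
                     const unsigned char top[4], const unsigned char left[4])
    {
        int sum = 0;
        for (int i = 0; i < 4; i++)
            sum += top[i] + left[i];
        unsigned char dc = (unsigned char)((sum + 4) >> 3);
        for (int y = 0; y < 4; y++)
            for (int x = 0; x < 4; x++)
                pred[y][x] = dc;
    }
    ```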

    H.264 motion estimator design

    Recently, a new international standard for video compression, H.264 / MPEG-4 Part 10, has been developed. This new standard offers significantly better video compression efficiency than previous international standards. Variable block size motion estimation is the most compute-intensive part of an H.264 video encoder. The full search method is impractical for real-time implementations because of its high computational complexity; therefore, many fast motion estimation algorithms have been developed for real-time use. In this thesis, we used an SAD reuse based hierarchical motion estimation algorithm for real-time H.264 / MPEG-4 Part 10 video coding. This algorithm uses the Lagrangian cost (SAD + λR) for selecting the best motion vector. We designed a high performance and low cost hardware architecture for real-time implementation of this algorithm. We considered several alternative designs and decided on this architecture based on a cost/performance analysis. The architecture uses a novel data flow that results in low cost and high performance hardware. This hardware is designed to be used as part of a complete H.264 video coding system for portable applications. The proposed architecture is implemented in Verilog HDL. The Verilog RTL code is verified to work at 63 MHz in a Xilinx Virtex II FPGA. The FPGA implementation can process 25 VGA frames (640x480) or 76 CIF frames (352x288) per second.
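
    The Lagrangian selection rule can be sketched as follows: each candidate motion vector is charged its SAD plus λ times an estimate of the bits needed to code the vector, and the cheapest candidate wins. The rate model below (distance from the predicted vector) is a deliberate simplification for illustration, not the thesis's exact implementation.

    ```c
    #include <stdlib.h>

    typedef struct { int x, y; } MV;

    /* Crude rate proxy: coding cost grows with the distance between the
     * candidate and the predicted motion vector. */
    static int mv_rate(MV mv, MV pred)
    {
        return abs(mv.x - pred.x) + abs(mv.y - pred.y);
    }

    /* Pick the candidate minimizing the Lagrangian cost SAD + lambda * R. */
    MV select_best_mv(const MV *cand, const int *sad, int n, MV pred,
                      int lambda)
    {
        MV best = cand[0];
        int best_cost = sad[0] + lambda * mv_rate(cand[0], pred);
        for (int i = 1; i < n; i++) {
            int cost = sad[i] + lambda * mv_rate(cand[i], pred);
            if (cost < best_cost) {
                best_cost = cost;
                best = cand[i];
            }
        }
        return best;
    }
    ```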

    Hardware Acceleration of Video analytics on FPGA using OpenCL

    With the exponential growth in video content over the last few years, analysis of videos is becoming crucial for many applications such as self-driving cars, healthcare, and traffic management. Most of these video analysis applications use deep learning algorithms such as convolutional neural networks (CNNs) because of their high accuracy in object detection, so enhancing the performance of CNN models becomes crucial for video analysis. CNN inference is computationally expensive and often requires high-end graphics processing units (GPUs) for acceleration. However, for real-time applications in energy- and thermally-constrained environments such as traffic management, GPUs are less preferred because of their high power consumption, limited energy efficiency, and the difficulty of fitting them into small spaces. To enable real-time video analytics in emerging large-scale Internet of Things (IoT) applications, the computation must happen at the network edge (near the cameras) in a distributed fashion; thus, edge computing must be adopted. Recent studies have shown that field-programmable gate arrays (FPGAs) are highly suitable for edge computing due to their architectural adaptiveness, high computational throughput for stream processing, and high energy efficiency. This thesis presents a generic OpenCL-defined CNN accelerator architecture optimized for FPGA-based real-time video analytics on the edge. The proposed CNN OpenCL kernel adopts a highly pipelined and parallelized 1-D systolic array architecture, which exploits both spatial and temporal parallelism for energy-efficient CNN acceleration on FPGAs. The large fan-in and fan-out of the computational units to the memory interface is identified as the limiting factor causing scalability issues in existing designs, and solutions are proposed to resolve the issue with compiler automation. The proposed CNN kernel is highly scalable and is parameterized by three architecture parameters, namely pe_num, reuse_fac, and vec_fac, which can be adapted to achieve 100% utilization of the coarse-grained computation resources (e.g., DSP blocks) of a given FPGA. The kernel is generic and can accelerate a wide range of CNN models without recompiling the FPGA kernel hardware. The performance of AlexNet, ResNet-50, RetinaNet, and Light-weight RetinaNet has been measured with the proposed CNN kernel on an Intel Arria 10 GX1150 FPGA. The measurements show that the proposed kernel, when mapped with 100% utilization of computation resources, achieves latencies of 11 ms, 84 ms, 1614.9 ms, and 990.34 ms for AlexNet, ResNet-50, RetinaNet, and Light-weight RetinaNet respectively, when the input feature maps and weights are represented using a 32-bit floating-point data type.
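
    The 1-D systolic idea the thesis builds on can be illustrated with a toy software model of a weight-stationary FIR chain: each processing element holds one coefficient, inputs are broadcast, and partial sums shift through the chain one register per cycle. This is only an analogue of the pipelined structure; the actual OpenCL kernel additionally tiles work across pe_num, reuse_fac and vec_fac to fill a given FPGA's DSP blocks.

    ```c
    #define TAPS 3 /* number of chained processing elements (illustrative) */

    /* Transposed-form systolic FIR: y[t] = w[0]x[t] + ... + w[T-1]x[t-T+1].
     * reg[] models the pipeline registers between neighbouring PEs. */
    void systolic_fir(const float *x, int n, const float w[TAPS], float *y)
    {
        float reg[TAPS - 1] = {0};
        for (int t = 0; t < n; t++) {
            float in = x[t];                    /* broadcast to every PE */
            y[t] = w[0] * in + reg[0];          /* output-side PE        */
            for (int k = 0; k < TAPS - 2; k++)  /* interior PEs          */
                reg[k] = w[k + 1] * in + reg[k + 1];
            reg[TAPS - 2] = w[TAPS - 1] * in;   /* input-side PE         */
        }
    }
    ```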

    Embedded video stabilization system on field programmable gate array for unmanned aerial vehicle

    Unmanned Aerial Vehicles (UAVs) equipped with lightweight, low-cost cameras have grown in popularity and enable new applications of UAV technology. However, the video retrieved from small UAVs is normally of low quality due to high-frequency jitter. This thesis presents the development of a video stabilization algorithm implemented on a Field Programmable Gate Array (FPGA). The algorithm consists of three main processes, motion estimation, motion stabilization and motion compensation, which together minimize the jitter. Motion estimation involves block matching and Random Sample Consensus (RANSAC) to estimate the affine matrix that describes the motion between two consecutive frames. Then, parameter extraction, motion smoothing and motion vector correction, which are parts of motion stabilization, remove unwanted camera movement. Finally, motion compensation stabilizes two consecutive frames based on the filtered motion vectors. To preserve ground station mobility, this algorithm needs to be processed onboard the UAV in real time. The inherently parallel nature of video stabilization processing makes it well suited to FPGA implementation for real-time operation. The system is implemented on an Altera DE2-115 FPGA board. Fully dedicated hardware cores, without a Nios II processor, are designed in a stream-oriented architecture to accelerate the computation. A parallelized architecture consisting of block matching and a highly parameterizable RANSAC processor module shows that the proposed system can process up to 30 frames per second and achieve a stabilization improvement of up to 1.78 in Interframe Transformation Fidelity. Hence, the proposed system is suitable for real-time video stabilization in UAV applications.
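
    A minimal sketch of the motion smoothing step, assuming a simple first-order low-pass filter (the smoothing constant and the restriction to the translation part of the affine estimate are assumptions, not taken from the thesis): the raw inter-frame motion is integrated into a trajectory, the trajectory is smoothed, and the difference between the two becomes the correction applied in motion compensation.

    ```c
    typedef struct { float dx, dy; } Motion;

    #define ALPHA 0.9f /* assumed smoothing factor: higher = smoother, more lag */

    Motion smooth_and_correct(Motion raw_step, Motion *acc, Motion *smoothed)
    {
        /* integrate per-frame motion into an absolute camera trajectory */
        acc->dx += raw_step.dx;
        acc->dy += raw_step.dy;

        /* first-order low-pass filter of the trajectory */
        smoothed->dx = ALPHA * smoothed->dx + (1.0f - ALPHA) * acc->dx;
        smoothed->dy = ALPHA * smoothed->dy + (1.0f - ALPHA) * acc->dy;

        /* correction vector: warping the frame by this cancels the jitter */
        Motion corr = { smoothed->dx - acc->dx, smoothed->dy - acc->dy };
        return corr;
    }
    ```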

    FPGA Implementation of Blob Recognition

    Real-time embedded vision systems are used in a wide range of applications, and demand for them has been increasing. In this thesis, an FPGA-based embedded vision system capable of recognizing objects in real time is presented. The proposed system architecture consists of multiple Intellectual Properties (IPs), which are used as a set of complex instructions by an integrated 32-bit MicroBlaze CPU. Each IP is tailored specifically to meet the needs of the application while consuming minimal FPGA logic resources. Integrating both hardware and software on a single FPGA chip, the system achieves real-time performance, processing full VGA video at 32 frames per second (fps). In addition, this work introduces a new method called Dual Connected Component Labelling (DCCL) suitable for FPGA implementation.
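
    For reference, the textbook two-pass connected component labelling that DCCL improves upon looks like this in C (4-connectivity with union-find equivalence resolution); the thesis's DCCL is an FPGA-oriented variant and is not reproduced here.

    ```c
    #include <stdlib.h>

    /* Union-find root lookup with path halving. */
    static int find(int *parent, int v)
    {
        while (parent[v] != v)
            v = parent[v] = parent[parent[v]];
        return v;
    }

    /* Pass 1 assigns provisional labels and records merges; pass 2 rewrites
     * every pixel with the root label of its equivalence class. */
    void label_components(const unsigned char *bin, int w, int h, int *labels)
    {
        int *parent = malloc(sizeof(int) * (w * h + 1));
        int next = 1;
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                int i = y * w + x;
                if (!bin[i]) { labels[i] = 0; continue; }
                int up   = (y > 0) ? labels[i - w] : 0;
                int left = (x > 0) ? labels[i - 1] : 0;
                if (!up && !left) {            /* start a new component */
                    parent[next] = next;
                    labels[i] = next++;
                } else if (up && left) {       /* merge the two classes */
                    int ru = find(parent, up), rl = find(parent, left);
                    labels[i] = ru < rl ? ru : rl;
                    parent[ru > rl ? ru : rl] = labels[i];
                } else {
                    labels[i] = up ? up : left;
                }
            }
        for (int i = 0; i < w * h; i++)        /* pass 2 */
            if (labels[i])
                labels[i] = find(parent, labels[i]);
        free(parent);
    }
    ```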

    Video Sensor Architecture for Surveillance Applications

    This paper introduces a flexible hardware and software architecture for a smart video sensor. The sensor has been applied in a video surveillance application in which several of these video sensors are deployed as the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally to extract objects of interest and classify them, and reports the processing results to other nodes in the cloud (a user or higher-level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system achieves up to 7.5 frames per second in the worst case, and true positive rates in the classification of objects better than 80%.
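
    The pluggable-component idea can be sketched with a minimal C interface, assuming each stage (acquisition, segmentation, labeling, tracking, classification, feature extraction) exposes one processing entry point that the run-time chains over a shared frame context; the names and interface are illustrative assumptions, not the paper's actual component API.

    ```c
    typedef struct FrameCtx FrameCtx; /* frame data plus per-stage results */

    typedef struct {
        const char *name;              /* e.g. "segmentation", "tracking" */
        int (*process)(FrameCtx *ctx); /* returns 0 on success            */
    } Component;

    /* Run the configured stages in order; a failing stage aborts the frame. */
    int run_pipeline(const Component *stages, int n, FrameCtx *ctx)
    {
        for (int i = 0; i < n; i++)
            if (stages[i].process(ctx) != 0)
                return -1;
        return 0;
    }
    ```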

    P2IP: A novel low-latency Programmable Pipeline Image Processor

    This paper presents P2IP, a novel systolic Coarse-Grained Reconfigurable Architecture for real-time image and video processing. P2IP is a scalable architecture that combines the low-latency characteristic of systolic array architectures with a runtime-reconfigurable datapath. The reconfigurability of P2IP enables it to perform a wide range of image pre-processing tasks directly on a pixel stream. The versatility of P2IP is demonstrated through three image processing algorithms mapped onto the architecture and implemented on an FPGA-based platform. The results show that P2IP can achieve up to 129 fps in Full HD 1080p and 32 fps in 4K 2160p, which makes it suitable for modern high-definition applications.
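
    The low-latency stream processing that P2IP exploits can be illustrated in C with the standard line-buffer pattern: two buffered scan lines plus a 3x3 shift-register window let every arriving pixel immediately yield one output, with no frame buffering. A 3x3 mean filter stands in here for one reconfigurable PE stage; the fixed line width is an assumption for the sketch.

    ```c
    #define W 640 /* illustrative line width */

    /* Streaming 3x3 mean filter: 'in' is consumed pixel by pixel in raster
     * order, as a hardware pixel stream would be. */
    void stream_3x3_mean(const unsigned char *in, unsigned char *out, int h)
    {
        unsigned char line[2][W] = {{0}}; /* the two previous scan lines */
        unsigned char win[3][3] = {{0}};  /* 3x3 shift-register window   */

        for (int y = 0; y < h; y++)
            for (int x = 0; x < W; x++) {
                unsigned char px = in[y * W + x];
                for (int r = 0; r < 3; r++) { /* shift the window left */
                    win[r][0] = win[r][1];
                    win[r][1] = win[r][2];
                }
                win[0][2] = line[0][x];       /* pixel from row y-2 */
                win[1][2] = line[1][x];       /* pixel from row y-1 */
                win[2][2] = px;               /* current pixel      */
                line[0][x] = line[1][x];      /* rotate line buffers */
                line[1][x] = px;

                if (y >= 2 && x >= 2) {       /* full window available */
                    int s = 0;
                    for (int r = 0; r < 3; r++)
                        for (int c = 0; c < 3; c++)
                            s += win[r][c];
                    out[(y - 1) * W + (x - 1)] = (unsigned char)(s / 9);
                }
            }
    }
    ```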

    Integrated sorting, noise estimation, object detection and contour analysis on one FPGA for video object segmentation

    Although robust video processing methods, such as compression or segmentation, have been extensively investigated on general-purpose processors (GPPs), these software implementations are too slow to achieve real-time performance due to the computational complexity and memory bandwidth demands of modern video processing methods. As such, efficient hardware acceleration is indispensable for fast video systems. State-of-the-art field programmable gate arrays (FPGAs) fill the gap between inflexible but high-performance ASICs and flexible yet performance-constrained GPPs. Thus, FPGAs are increasingly employed in hardware platforms for many signal and video processing applications. This thesis proposes an FPGA-based architecture that integrates four video processing methods (sorting, noise estimation, object detection, and contour analysis) on one FPGA; it takes a video signal and outputs a contour-filled video sequence along with the corresponding contour chain codes. The proposed architecture aims at segmenting moving objects in video signals. Video object segmentation consists of several steps: pre-processing (e.g., noise estimation), object detection (i.e., separation of objects and background), and contour analysis. The proposed architecture is simulated, synthesized and verified for its functionality, accuracy and performance on an actual hardware platform built around a Xilinx Virtex-4 SX35 FPGA. Compared to related work, our architecture obtains orders-of-magnitude performance improvements while utilizing minimal hardware resources and power, and possesses key algorithmic features that are inherently required in many video processing applications.
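
    The contour analysis output described here (chain codes per object) can be sketched with classic Moore-neighbour tracing, which walks an object's boundary and emits one 8-direction Freeman code per step; the start pixel, the simple return-to-start stopping rule and all names are simplifications for illustration, not the hardware design.

    ```c
    /* The eight Freeman directions, indexed 0..7 from +x. */
    static const int DX[8] = { 1, 1, 0, -1, -1, -1, 0, 1 };
    static const int DY[8] = { 0, 1, 1, 1, 0, -1, -1, -1 };

    /* Trace the contour of the object containing boundary pixel (sx, sy) in
     * the binary image 'bin'; writes up to maxlen chain codes and returns
     * how many were produced. */
    int trace_contour(const unsigned char *bin, int w, int h,
                      int sx, int sy, unsigned char *code, int maxlen)
    {
        int x = sx, y = sy, dir = 7, n = 0;
        do {
            int found = 0;
            for (int d = 0; d < 8; d++) {
                int k = (dir + 6 + d) % 8;  /* resume left of the last move */
                int nx = x + DX[k], ny = y + DY[k];
                if (nx >= 0 && nx < w && ny >= 0 && ny < h && bin[ny * w + nx]) {
                    x = nx; y = ny; dir = k; found = 1;
                    break;
                }
            }
            if (!found) break;              /* isolated single pixel */
            if (n < maxlen) code[n++] = (unsigned char)dir;
        } while ((x != sx || y != sy) && n < maxlen);
        return n;
    }
    ```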