15 research outputs found

    Address generation for FPGA RAMs for efficient implementation of real-time video processing systems

    Get PDF
    FPGAs offer the potential of a reliable, high-performance reconfigurable platform for the implementation of real-time video processing systems. To utilize the full processing power of an FPGA for video processing applications, the optimization of memory accesses and the implementation of the memory architecture are important issues. This paper presents two approaches, the base pointer approach and the distributed pointer approach, to implementing accesses to on-chip FPGA Block RAMs. A comparison of the experimental results obtained using the two approaches on realistic image processing system design cases is presented. The results show that, compared to the base pointer approach, the distributed pointer approach increases the potential processing power of the FPGA as a reconfigurable platform for video processing systems.
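    The abstract gives no implementation details, so the following is only a behavioural sketch of the two addressing styles it names, with a Block RAM modelled as a Python list; the buffer names, depths and row length are hypothetical.

```python
# Behavioural sketch (not the paper's code): two ways of addressing logical
# row buffers packed into one on-chip Block RAM, modelled as a Python list.

BRAM_DEPTH = 1024            # hypothetical Block RAM depth (words)
LINE_LEN = 256               # hypothetical row length in pixels
bram = [0] * BRAM_DEPTH      # the physical memory

# Base pointer approach: one shared address generator reaches every logical
# buffer as base + offset, computed centrally.
bases = {"row0": 0, "row1": LINE_LEN, "row2": 2 * LINE_LEN}

def base_write(buf, offset, value):
    bram[bases[buf] + offset] = value

def base_read(buf, offset):
    return bram[bases[buf] + offset]

# Distributed pointer approach: each logical buffer keeps its own circular
# write pointer, so buffers are addressed independently of a shared adder.
class CircularBuffer:
    def __init__(self, base, length):
        self.base, self.length = base, length
        self.wr = 0

    def push(self, value):
        bram[self.base + self.wr] = value
        self.wr = (self.wr + 1) % self.length

    def peek(self, age):
        # read the sample written `age` positions ago
        return bram[self.base + (self.wr - 1 - age) % self.length]

base_write("row1", 3, 42)
assert base_read("row1", 3) == 42

row0 = CircularBuffer(0, LINE_LEN)
for px in range(LINE_LEN):
    row0.push(px)
print(row0.peek(0), row0.peek(5))    # 255 250
```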

    Or maybe it will turn out as it always does

    Get PDF
    MEDICAL EDUCATION; MEDICAL EDUCATIONAL INSTITUTIONS; INTEGRATION OF EDUCATION; EUROPEAN EDUCATION; HIGHER EDUCATION REFORMS; REFORMS; INTERNATIONALIZATION OF HIGHER EDUCATION

    Memory Synthesis for FPGA Implementation of Real-Time Video Processing Systems

    No full text
    In this thesis, both a method and a tool to enable efficient memory synthesis for real-time video processing systems on field programmable gate arrays are presented. In a real-time video processing system (RTVPS), a set of operations is repetitively performed on every image frame in a video stream. These operations are usually computationally intensive and, depending on the video resolution, can also be very data-transfer dominated. These operations, which often require data from several consecutive frames and many rows of data within each frame, must be performed accurately and under real-time constraints, as the results greatly affect the accuracy of the application. Application domains of these systems include object recognition, object tracking and surveillance. Developments in field programmable gate arrays (FPGAs) have been the motivation for choosing them as the platform for implementing RTVPSs. Essential logic resources required in RTVPS operation are currently available, optimized and embedded in modern FPGAs. One such resource is the embedded memory used for data buffering during real-time video processing. Each data buffer corresponds to a row of pixels in a video frame and is allocated using a synthesis tool that performs the mapping of buffers to embedded memories. This approach has been investigated and proven to be inefficient. An efficient alternative employing resource sharing and allocation with pipelining is discussed in this thesis. A method for the optimal use of these embedded memories and, additionally, a tool supporting automatic generation of hardware description language (HDL) code for the synthesis of the memories according to the developed method are the main focus of this thesis. This method consists of the memory architecture, allocation and addressing. The central objective of this method is the optimal use of embedded memories in the process of buffering data on-chip for an RTVPS operation. The developed software tool is an environment for generating HDL code implementing the memory sub-components. The tool integrates with the Interface and Memory Modelling (IMEM) tools in such a way that IMEM's output, the memory requirements of an RTVPS, is imported and processed in order to generate the HDL code. IMEM is based on the philosophy that the memory requirements of an RTVPS can be modelled and synthesized separately from the development of the core RTVPS algorithm, thus freeing the designer to focus on the development of the algorithm while relying on IMEM for the implementation of the memory sub-components. (Sensible Things That Communicate)
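    As a concrete illustration of the row buffering the thesis describes, the sketch below forms 3x3 pixel neighbourhoods from a raster-scanned stream using two row buffers, which is the role the embedded Block RAMs play; the frame size and window size are assumptions, not values from the thesis.

```python
# Behavioural model of on-chip row buffering (illustrative, not the thesis
# tool): two line buffers plus 3-pixel shift registers form a 3x3 window.

def stream_windows_3x3(pixels, width):
    line1 = [0] * width                   # row y-1 (one Block RAM in hardware)
    line2 = [0] * width                   # row y-2 (a second Block RAM)
    sr = [[0, 0, 0] for _ in range(3)]    # per-row 3-pixel shift registers
    windows = []
    for i, p in enumerate(pixels):
        x = i % width
        col = (line2[x], line1[x], p)     # vertical taps at column x
        line2[x] = line1[x]               # shift the column history down
        line1[x] = p
        for r in range(3):                # shift the window one pixel right
            sr[r] = [sr[r][1], sr[r][2], col[r]]
        if i >= 2 * width and x >= 2:     # window valid after 2 rows + 2 pixels
            windows.append([row[:] for row in sr])
    return windows

width = 5
frame = list(range(width * 4))            # synthetic 5x4 frame in raster order
print(stream_windows_3x3(frame, width)[0])
# [[0, 1, 2], [5, 6, 7], [10, 11, 12]]
```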

    Memory Synthesis for FPGA Implementation of Real-Time Video Processing Systems

    No full text
    In this thesis, both a method and a tool to enable efficient memory synthesis for real-time video processing systems on field programmable gate arrays are presented. In a real-time video processing system (RTVPS), a set of operations is repetitively performed on every image frame in a video stream. These operations are usually computationally intensive and, depending on the video resolution, can also be very data-transfer dominated. These operations, which often require data from several consecutive frames and many rows of data within each frame, must be performed accurately and under real-time constraints, as the results greatly affect the accuracy of the application. Application domains of these systems include machine vision, object recognition and tracking, visual enhancement and surveillance. Developments in field programmable gate arrays (FPGAs) have been the motivation for choosing them as the platform for implementing RTVPSs. Essential logic resources required in RTVPS operations are currently available, optimized and embedded in modern FPGAs. One such resource is the embedded memory used for data buffering during real-time video processing. Each data buffer corresponds to a row of pixels in a video frame and is allocated using a synthesis tool that performs the mapping of buffers to embedded memories. This approach has been investigated and proven to be inefficient. An efficient alternative employing resource sharing and allocation with pipelining is discussed in this thesis. A method for the optimised use of these embedded memories and, additionally, a tool supporting automatic generation of hardware description language (HDL) modules for the synthesis of the memories according to the developed method are the main focus of this thesis. This method consists of the memory architecture, allocation and addressing. The central objective of this method is the optimised use of embedded memories in the process of buffering data on-chip for an RTVPS operation. The developed software tool is an environment for generating HDL code implementing the memory sub-components. The tool integrates with the Interface and Memory Modelling (IMEM) tools in such a way that IMEM's output, the memory requirements of an RTVPS, is imported and processed in order to generate the HDL code. IMEM is based on the philosophy that the memory requirements of an RTVPS can be modelled and synthesized separately from the development of the core RTVPS algorithm, thus freeing the designer to focus on the development of the algorithm while relying on IMEM for the implementation of the memory sub-components. (Electronics Design Division; Sensible Things That Communicate)
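    The contrast the thesis draws, between dedicating one embedded memory to each row buffer and sharing memories among buffers, can be shown with a toy allocation sketch; the 18 kbit Block RAM capacity and the buffer sizes are assumptions for illustration, and a real allocator must also respect port and data-width constraints.

```python
# Toy allocation sketch (not the thesis method): pack row buffers into Block
# RAMs by capacity instead of dedicating one Block RAM to each buffer.

BRAM_BITS = 18 * 1024            # assumed capacity of one embedded Block RAM

def naive_allocation(buffers):
    """One Block RAM per buffer, however little of it is actually used."""
    return len(buffers)

def shared_allocation(buffers):
    """Greedy first-fit packing of (bits_per_word, depth) buffers into RAMs."""
    rams = []                    # remaining free bits in each allocated RAM
    for bits, depth in sorted(buffers, key=lambda b: -b[0] * b[1]):
        need = bits * depth
        for i, free in enumerate(rams):
            if free >= need:
                rams[i] -= need
                break
        else:
            rams.append(BRAM_BITS - need)
    return len(rams)

rows = [(8, 640)] * 3            # three 640-pixel row buffers of 8-bit pixels
print(naive_allocation(rows), shared_allocation(rows))   # 3 1
```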

    A taxonomy of visual surveillance systems

    No full text
    The increased security risk in society and the availability of low-cost sensors and processors have expedited research in surveillance systems. Visual surveillance systems provide real-time monitoring of the environment. Designing an optimized surveillance system for a given application is a challenging task; moreover, the choice of components for a given surveillance application out of a wide spectrum of available products is not an easy job. In this report, we formulate a taxonomy to ease the design and classification of surveillance systems by combining their main features. The taxonomy is based on three main models: the behavioral model, the implementation model, and the actuation model. The behavioral model helps to understand the behavior of a surveillance problem. It is a set of functions such as detection, positioning, identification, tracking, and content handling, and it can be used to pinpoint the functions that are necessary for a particular situation. The implementation model structures the decisions that are necessary to implement the surveillance functions recognized by the behavioral model; it is a set of constructs such as sensor type, node connectivity and node fixture. The actuation model is responsible for taking precautionary measures when a surveillance system detects an abnormal situation. A number of surveillance systems are investigated and analyzed on the basis of the developed taxonomy. The taxonomy is general enough to handle a vast range of surveillance systems, and it organizes the core features of surveillance systems in one place. It may be considered an important tool when designing surveillance systems: designers can use it to design surveillance systems with reduced effort, cost, and time.
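    One way to read the three models is as a simple classification record; the sketch below encodes the functions and constructs named in the abstract as fields, with the example values being purely illustrative.

```python
# Sketch of the taxonomy as a data structure; field values are illustrative.

from dataclasses import dataclass, field

@dataclass
class BehavioralModel:             # what the surveillance system must do
    functions: set = field(default_factory=set)   # e.g. detection, tracking

@dataclass
class ImplementationModel:         # how the functions are realised
    sensor_type: str = ""          # e.g. "visual"
    node_connectivity: str = ""    # e.g. "wireless"
    node_fixture: str = ""         # e.g. "fixed"

@dataclass
class ActuationModel:              # what happens on an abnormal situation
    actions: set = field(default_factory=set)     # e.g. alarm, notify operator

@dataclass
class SurveillanceSystem:
    behavior: BehavioralModel
    implementation: ImplementationModel
    actuation: ActuationModel

perimeter_monitor = SurveillanceSystem(
    BehavioralModel({"detection", "positioning", "tracking"}),
    ImplementationModel("visual", "wireless", "fixed"),
    ActuationModel({"alarm"}),
)
print(sorted(perimeter_monitor.behavior.functions))
```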

    Automatic Generation of Spatial and Temporal Memory Architectures for Embedded Video Processing Systems

    Get PDF
    This paper presents a tool for the automatic generation of the memory management implementation for spatial and temporal real-time video processing systems targeting field programmable gate arrays (FPGAs). The generator creates all the necessary memory and control functionality for a functional spatio-temporal video processing system. The required memory architecture is automatically optimized and mapped to the FPGAs' memory resources, thus producing an implementation that is efficient in terms of internal resource usage. The results in this paper show that the tool is able to efficiently and automatically generate all required memory management modules for both spatial and temporal real-time video processing systems. (STC - Sensible Things That Communicate)
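    The abstract does not give the sizing rules, but the flavour of a spatio-temporal memory estimate can be shown with a small calculation; the formulas and figures below are assumptions for illustration, not the tool's actual model.

```python
# Rough buffering estimate for a spatio-temporal filter (assumed formulas):
# spatial windows need row buffers, temporal depth needs whole earlier frames.

def buffer_requirements(width, height, bits_per_pixel, window, frames):
    """window: NxN spatial neighbourhood; frames: temporal depth in frames."""
    row_buffers = (window - 1) * frames          # rows buffered per used frame
    spatial_bits = row_buffers * width * bits_per_pixel
    frame_buffers = max(frames - 1, 0)           # whole earlier frames kept
    temporal_bits = frame_buffers * width * height * bits_per_pixel
    return spatial_bits, temporal_bits

# 640x480 8-bit video, 3x3 spatial window over 2 consecutive frames.
spatial, temporal = buffer_requirements(640, 480, 8, 3, 2)
print(spatial // 1024, "kbit of row buffering")                        # 20
print(round(temporal / (1024 * 1024), 1), "Mbit of frame buffering")   # 2.3
```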

    Exploration of Local and Central Processing for a Wireless Camera Based Sensor Node

    No full text
    Wireless vision sensor networks are an emerging field which combines image sensors, on-board computation and communication links. Compared to traditional wireless sensor networks, which operate on one-dimensional data, wireless vision sensor networks operate on two-dimensional data, which requires both higher processing power and higher communication bandwidth. Research within the field of wireless vision sensor networks has been based on two different assumptions: either sending data to the central base station without local processing, or conducting all processing locally at the sensor node and transmitting only the final results. In this paper we focus on determining an optimal point for intelligence partitioning between the sensor node and the central base station, and on exploring compression methods. The lifetime of the visual sensor node is predicted by evaluating the energy consumption for different levels of intelligence partitioning at the sensor node. Our results show that sending compressed images after segmentation results in a longer lifetime for the sensor node.
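    The trade-off the paper evaluates can be illustrated with a toy lifetime calculation; every number below (battery capacity, per-frame processing energy, bytes transmitted, radio energy per byte) is a hypothetical value chosen only to show the shape of the comparison, not a result from the paper.

```python
# Toy lifetime model for a camera node: energy per frame = local processing
# energy + transmission energy; all figures are hypothetical.

BATTERY_J = 2 * 3.7 * 2.5 * 3600     # two 2500 mAh 3.7 V cells, in joules

def node_lifetime_days(process_j, tx_bytes, frame_rate=1.0, tx_j_per_byte=2e-6):
    """Lifetime if each frame costs process_j locally and sends tx_bytes."""
    per_frame_j = process_j + tx_bytes * tx_j_per_byte
    return BATTERY_J / (per_frame_j * frame_rate) / 86400

# Partitioning options: (local processing energy per frame, bytes transmitted)
options = {
    "raw image to base station":     (0.001, 640 * 480),
    "compressed after segmentation": (0.030, 8 * 1024),
    "all processing on the node":    (0.120, 64),
}
for name, (e_proc, n_bytes) in options.items():
    print(f"{name:30s} {node_lifetime_days(e_proc, n_bytes):6.1f} days")
```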

    Exploration of Target Architecture for a Wireless Camera Based Sensor Node

    No full text
    The challenges associated with wireless vision sensor networks are low energy consumption, limited bandwidth and limited processing capabilities, and different approaches have been proposed to meet them. Research in wireless vision sensor networks has been focused on two different assumptions: the first is sending all data to the central base station without local processing; the second is conducting all processing locally at the sensor node and transmitting only the final results. Our research is focused on partitioning the vision processing tasks between the sensor node and the central base station. In this paper we add the exploration dimension of performing some of the vision tasks, such as image capturing, background subtraction, segmentation and TIFF Group 4 compression, on an FPGA, while communication is handled on a microcontroller. The remaining vision processing tasks, i.e. morphology, labeling, bubble removal and classification, are processed at the central base station. Our results show that introducing an FPGA for some of the visual tasks results in a longer lifetime for the visual sensor node while the architecture remains programmable.
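    A behavioural sketch of the node-side steps named above (background subtraction followed by segmentation) is given below; the threshold, pixel values and the foreground-pixel count standing in for the base-station work are all made up for illustration, and Group 4 compression is not modelled.

```python
# Node-side steps as plain Python (the FPGA part of the partition); the base
# station stand-in just counts foreground pixels. All values are illustrative.

def node_side(frame, background, threshold=20):
    """Per-frame work on the node: background subtraction, then segmentation."""
    diff = [abs(p - b) for p, b in zip(frame, background)]
    return [1 if d > threshold else 0 for d in diff]    # segmented bitmap

def base_station_side(binary_mask):
    """Stand-in for morphology/labeling/classification done centrally."""
    return sum(binary_mask)          # e.g. foreground pixel count per frame

background = [50] * 16
frame = [50, 52, 90, 95, 50, 50, 91, 50, 50, 49, 50, 88, 87, 86, 50, 50]
mask = node_side(frame, background)
print(mask)                     # [0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0]
print(base_station_side(mask))  # 6
```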