
    ENERGY-EFFICIENT LIGHTWEIGHT ALGORITHMS FOR EMBEDDED SMART CAMERAS: DESIGN, IMPLEMENTATION AND PERFORMANCE ANALYSIS

    An embedded smart camera is a stand-alone unit that not only captures images, but also includes a processor, memory and a communication interface. Battery-powered embedded smart cameras introduce many additional challenges since they have very limited resources, such as energy, processing power and memory. When camera sensors are added to an embedded system, the problem of limited resources becomes even more pronounced. Hence, computer vision algorithms running on these camera boards should be lightweight and efficient. This thesis is about designing and developing computer vision algorithms that are aware of, and successfully overcome, the limitations of embedded platforms in terms of power consumption and memory usage. In particular, we are interested in object detection and tracking methodologies and their impact on the performance and battery life of the CITRIC camera (the embedded smart camera employed in this research). This thesis aims to prolong the lifetime of the embedded smart camera platform without affecting the reliability of the system during surveillance tasks. Therefore, the reader is walked through the whole design process, from development and simulation, through implementation and optimization, to testing and performance analysis. The work presented in this thesis carries out not only software optimization, but also hardware-level operations during the stages of object detection and tracking. The performance of the algorithms introduced in this thesis is comparable to state-of-the-art object detection and tracking methods, such as Mixture of Gaussians, Eigen segmentation, and color and coordinate tracking. Unlike the traditional methods, the newly designed algorithms notably reduce memory requirements as well as the number of memory accesses per pixel. To accomplish the proposed goals, this work attempts to interconnect different levels of the embedded system architecture to make the platform more efficient in terms of energy and resource savings. Thus, the proposed algorithms are optimized at the API, middleware, and hardware levels to access the pixel information of the CMOS sensor directly. Only the required pixels are acquired in order to reduce unnecessary communication overhead. Experimental results show that, when exploiting the architectural capabilities of an embedded platform, a 41.24% decrease in energy consumption and a 107.2% increase in battery life can be accomplished. Compared to traditional object detection and tracking methods, the proposed work provides an additional 8 hours of continuous processing on 4 AA batteries, increasing the lifetime of the camera to 15.5 hours.
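    A central optimization described above is acquiring only the pixels the current task needs instead of full frames. The CITRIC-specific sensor interface is not reproduced here; the following Python sketch only illustrates the idea under stated assumptions: the previous bounding box is expanded into a region of interest, only that window is read and differenced against the stored background patch, and an updated box is returned. The function names (`expand_box`, `read_roi`, `detect_in_roi`), the margin and the threshold are illustrative, not the thesis's API.

```python
import numpy as np

def expand_box(box, margin, height, width):
    """Grow last frame's bounding box so the target stays inside the ROI."""
    x0, y0, x1, y1 = box
    return (max(x0 - margin, 0), max(y0 - margin, 0),
            min(x1 + margin, width), min(y1 + margin, height))

def read_roi(sensor_frame, box):
    """Stand-in for a windowed sensor readout; here we just slice a full frame."""
    x0, y0, x1, y1 = box
    return sensor_frame[y0:y1, x0:x1]

def detect_in_roi(frame, background, box, diff_threshold=25):
    """Difference only the ROI against the stored background patch."""
    roi = read_roi(frame, box).astype(np.int16)
    bg = read_roi(background, box).astype(np.int16)
    mask = np.abs(roi - bg) > diff_threshold
    if not mask.any():
        return None                      # target lost; caller falls back to a full frame
    ys, xs = np.nonzero(mask)
    x0, y0 = box[0], box[1]
    return (x0 + xs.min(), y0 + ys.min(), x0 + xs.max() + 1, y0 + ys.max() + 1)

# Toy usage: a bright 8x8 square over a dark background.
h, w = 120, 160
background = np.zeros((h, w), dtype=np.uint8)
frame = background.copy()
frame[40:48, 60:68] = 200
box = expand_box((58, 38, 70, 50), margin=8, height=h, width=w)
print(detect_in_roi(frame, background, box))   # -> (60, 40, 68, 48)
```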

    FPGA Implementation for Real-Time Background Subtraction Based on Horprasert Model

    Background subtraction is considered the first processing stage in video surveillance systems, and consists of detecting moving objects in a scene captured by a static camera. It is an intensive task with a high computational cost. This work proposes a novel embedded FPGA architecture that is able to extract the background in resource-limited environments with low accuracy degradation (caused by the hardware-friendly modification of the model). In addition, the original model is extended in order to detect shadows and improve the quality of the segmentation of moving objects. We have analyzed the resource consumption and performance on Xilinx Spartan-3 FPGAs and compared the design to other works available in the literature, showing that the current architecture offers a good trade-off in terms of accuracy, performance and resource utilization. Using less than 65% of the resources of an XC3SD3400 FPGA from the low-cost Spartan-3A family, the system achieves a clock frequency of 66.5 MHz, reaching 32.8 fps at a resolution of 1,024 × 1,024 pixels with an estimated power consumption of 5.76 W.
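    For context, the Horprasert model that the architecture implements separates each pixel's deviation from a trained background into a brightness distortion (scaling along the expected colour) and a chromaticity distortion (deviation off that colour direction), which is what makes the shadow/highlight extension natural. A minimal NumPy sketch of that classification follows; the thresholds and the use of raw per-channel standard deviations are simplifying assumptions, and the fixed-point, hardware-friendly reformulation of the paper is not reproduced.

```python
import numpy as np

def horprasert_classify(frame, mean, std, tau_cd=6.0, a_lo=0.6, a_hi=1.2):
    """Classify pixels in the Horprasert style (illustrative thresholds).

    frame, mean, std: float arrays of shape (H, W, 3); mean and std describe the
    trained background. Returns labels: 0 background, 1 foreground, 2 shadow, 3 highlight.
    """
    eps = 1e-6
    s2 = std ** 2 + eps
    # Brightness distortion: least-squares scale of the background colour
    # direction that best explains the observed pixel.
    alpha = (frame * mean / s2).sum(axis=2) / ((mean ** 2 / s2).sum(axis=2) + eps)
    # Chromaticity distortion: residual off that direction, normalised per channel.
    cd = np.sqrt((((frame - alpha[..., None] * mean) / std.clip(min=eps)) ** 2).sum(axis=2))

    labels = np.zeros(frame.shape[:2], dtype=np.uint8)
    labels[cd > tau_cd] = 1                               # chromatic change -> foreground
    labels[(cd <= tau_cd) & (alpha < a_lo)] = 2           # darker, same colour -> shadow
    labels[(cd <= tau_cd) & (alpha > a_hi)] = 3           # brighter, same colour -> highlight
    return labels
```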

    SYSTEM-ON-A-CHIP (SOC)-BASED HARDWARE ACCELERATION FOR HUMAN ACTION RECOGNITION WITH CORE COMPONENTS

    Today, the implementation of machine vision algorithms on embedded platforms or in portable systems is growing rapidly due to the demand for machine vision in daily human life. Among the applications of machine vision, human action and activity recognition has become an active research area, and market demand for integrated smart security systems is growing rapidly. Among the available approaches, embedded vision is in the top tier; however, current embedded platforms may not be able to fully exploit the potential performance of machine vision algorithms, especially in terms of low power consumption. Complex algorithms can impose immense computation and communication demands, especially action recognition algorithms, which require various stages of preprocessing, processing and machine learning blocks that need to operate concurrently. The market demands embedded platforms that operate with a power consumption of only a few watts. Attempts have been made to improve the performance of traditional embedded approaches by adding more powerful processors; this solution may solve the computation problem but increases the power consumption. System-on-a-chip field-programmable gate arrays (SoC-FPGAs) have emerged as a major architectural approach for improving power efficiency while increasing computational performance. In a SoC-FPGA, an embedded processor and an FPGA serving as an accelerator are fabricated on the same die to simultaneously improve power consumption and performance. Still, current SoC-FPGA-based vision implementations either shy away from supporting complex and adaptive vision algorithms or operate at very limited resolutions due to the immense communication and computation demands. The aim of this research is to develop a SoC-based hardware acceleration workflow for the realization of advanced vision algorithms. Hardware acceleration can improve performance for highly complex mathematical calculations or repeated functions. The performance of a SoC system can thus be improved by using hardware acceleration to speed up the element that incurs the highest performance overhead. The outcome of this research could be used for the implementation of various vision algorithms, such as face recognition, object detection or object tracking, on embedded platforms. The contributions of SoC-based hardware acceleration for hardware-software codesign platforms include the following: (1) development of frameworks for complex human action recognition in both 2D and 3D; (2) realization of a framework with four main implemented IPs, namely, foreground and background subtraction (foreground probability), human detection, 2D/3D point-of-interest detection and feature extraction, and OS-ELM as a machine learning algorithm for action identification; (3) use of an FPGA-based hardware acceleration method to resolve system bottlenecks and improve system performance; and (4) measurement and analysis of system specifications, such as the acceleration factor, power consumption and resource utilization. Experimental results show that the proposed SoC-based hardware acceleration approach provides the best performance in terms of the acceleration factor, resource utilization and power consumption among all recent works. In addition, a comparison of the accuracy of the framework running on the proposed embedded platform (SoC-FPGA) with that of other PC-based frameworks shows that the proposed approach outperforms most other approaches.
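    The machine learning stage named above, OS-ELM (Online Sequential Extreme Learning Machine), is attractive for this pipeline because the output weights can be updated chunk by chunk without retraining from scratch. The hardware IP is not reproduced here; below is a minimal NumPy sketch of the standard OS-ELM equations (batch initialisation followed by Woodbury-style sequential updates), with the sigmoid activation and the hidden-layer size chosen arbitrarily for illustration.

```python
import numpy as np

class OSELM:
    """Minimal Online Sequential ELM: fixed random hidden layer, sequential output solve."""

    def __init__(self, n_inputs, n_hidden, n_outputs, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_inputs, n_hidden))  # random input weights (never trained)
        self.b = rng.standard_normal(n_hidden)               # random hidden biases
        self.P = None                                         # running (H^T H)^-1
        self.beta = np.zeros((n_hidden, n_outputs))           # output weights

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid activations

    def init_batch(self, X0, T0):
        """Initial batch solve; needs at least n_hidden samples so H^T H is invertible."""
        H = self._hidden(X0)
        self.P = np.linalg.inv(H.T @ H)
        self.beta = self.P @ H.T @ T0

    def update(self, X, T):
        """Sequential update with a new chunk (Woodbury identity, no full re-solve)."""
        H = self._hidden(X)
        K = np.linalg.inv(np.eye(H.shape[0]) + H @ self.P @ H.T)
        self.P -= self.P @ H.T @ K @ H @ self.P
        self.beta += self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta
```

    In an action recognition setting, X would hold the extracted feature vectors per sample and T the one-hot action labels; the sequential `update` is what allows new samples to be absorbed on the embedded platform without a full re-solve.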

    ADDRESSING THREE PROBLEMS IN EMBEDDED SYSTEMS VIA COMPRESSIVE SENSING BASED METHODS

    Compressive sensing is a mathematical theory concerning the exact or approximate recovery of sparse or compressible vectors from a minimal number of measurements, called projections. Its theory covers topics such as l1 optimisation, dimensionality reduction, information-preserving projection matrices, random projection matrices and others. In this thesis we extend and use the theory of compressive sensing to address the challenges of limited computation power and energy supply in embedded systems. Three different problems are addressed. The first problem is to improve the efficiency of data gathering in wireless sensor networks. Many wireless sensor networks exhibit heterogeneity because of the environment. We leverage this heterogeneity and extend the theory of compressive sensing to cover non-uniform sampling, deriving a new data collection protocol. We show that this protocol can realise a more accurate temporal-spatial profile for a given level of energy consumption. The second problem is to realise real-time background subtraction on embedded cameras. Background subtraction algorithms are normally computationally expensive because they use complex models to deal with subtle changes in the background; therefore, existing background subtraction algorithms cannot provide real-time performance on embedded cameras, which have limited processing power. By leveraging information-preserving projection matrices, we derive a new background subtraction algorithm which is 4.6 times faster and more accurate than existing methods. We demonstrate that our background subtraction algorithm can realise real-time background subtraction and tracking in an embedded camera network. The third problem is to enable efficient and accurate face recognition on smartphones. The state-of-the-art face recognition algorithm is inspired by compressive sensing and is based on l1 optimisation. It also uses random projection matrices for dimensionality reduction. A key problem with random projection matrices is that they give highly variable recognition accuracy. We propose an algorithm to optimise the projection matrix and remove this performance variability. This means we can use fewer projections to achieve the same accuracy, which translates to a smaller l1 optimisation problem and reduces the computation time needed on smartphones, which have limited computation power. We demonstrate the performance of our proposed method on smartphones.
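    The primitive shared by the three problems is l1-based sparse recovery: given measurements y = Φx of a sparse vector x through a projection matrix Φ, recover x by minimising ||x||_1 subject to Φx = y. The sketch below casts this basis-pursuit step as a linear program with SciPy; it illustrates only the recovery principle and does not reproduce the thesis's non-uniform sampling protocol, its background subtraction algorithm, or its optimised projection matrices.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """Solve min ||x||_1 s.t. Phi @ x = y as an LP over (x, t) with |x_i| <= t_i."""
    m, n = Phi.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])            # minimise sum of t
    A_eq = np.hstack([Phi, np.zeros((m, n))])                 # Phi x = y
    I = np.eye(n)
    A_ub = np.vstack([np.hstack([I, -I]),                     #  x - t <= 0
                      np.hstack([-I, -I])])                   # -x - t <= 0
    b_ub = np.zeros(2 * n)
    bounds = [(None, None)] * n + [(0, None)] * n             # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y, bounds=bounds)
    return res.x[:n]

# Toy example: recover a 3-sparse vector from 40 random projections.
rng = np.random.default_rng(1)
n, m = 100, 40
x_true = np.zeros(n)
x_true[[5, 37, 80]] = [1.5, -2.0, 0.7]
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = basis_pursuit(Phi, Phi @ x_true)
print(np.allclose(x_hat, x_true, atol=1e-4))   # typically True at this sparsity level
```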

    Evaluation of MoG Video Segmentation on GPU-based HPC System

    Automated and intelligent video surveillance systems play an important role in the modern world. As the number of video streams that must be analyzed concurrently grows, such systems can assist humans in performing this tiresome task. In order to be effective, video surveillance systems have to meet several requirements: they must be accurate and able to process the received video stream in real time. A robust system should not depend on lighting conditions, illumination changes and other sources of scene variation. A common component of surveillance systems is a module that performs background estimation and foreground segmentation. The MoG (Mixture of Gaussians) algorithm is a widely used statistical technique for video segmentation. The estimation process is time-consuming, especially for complex mixture models containing many components. The work presented here focuses on the performance evaluation of the MoG algorithm, aiming to assess the feasibility of OpenCL-based processing of high-resolution video on GPU-accelerated platforms.
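    For reference, the per-pixel recurrence that the evaluated MoG segmentation is built on (in the Stauffer-Grimson style) is sketched below for a single grayscale pixel in plain Python; the OpenCL implementation assessed in the paper parallelises this same update over every pixel of the frame. The learning rate, the number of components and the background weight threshold are illustrative choices, and the matched-component selection and update gain are simplified.

```python
import numpy as np

class PixelMoG:
    """Per-pixel mixture of K Gaussians for grayscale background modelling."""

    def __init__(self, K=3, alpha=0.01, var_init=225.0, bg_ratio=0.7):
        self.w = np.full(K, 1.0 / K)           # component weights
        self.mu = np.linspace(0.0, 255.0, K)   # component means
        self.var = np.full(K, var_init)        # component variances
        self.alpha = alpha                     # learning rate
        self.bg_ratio = bg_ratio               # weight mass treated as background

    def update(self, x):
        """Update the mixture with the new value x; return True if x is background."""
        d2 = (x - self.mu) ** 2
        match = d2 < 6.25 * self.var           # within 2.5 standard deviations
        if match.any():
            k = int(np.argmax(match / np.sqrt(self.var)))  # matched component, lowest variance
            rho = self.alpha / max(self.w[k], 1e-6)        # simplified update gain
            self.mu[k] += rho * (x - self.mu[k])
            self.var[k] += rho * (d2[k] - self.var[k])
            self.w = (1.0 - self.alpha) * self.w
            self.w[k] += self.alpha
        else:
            k = int(np.argmin(self.w))         # replace the weakest component
            self.mu[k], self.var[k], self.w[k] = x, 225.0, 0.05
        self.w /= self.w.sum()

        # Components with high weight and low variance form the background model.
        order = np.argsort(-self.w / np.sqrt(self.var))
        n_bg = int(np.searchsorted(np.cumsum(self.w[order]), self.bg_ratio)) + 1
        return bool(match[order[:n_bg]].any())

# Toy usage: a static pixel with a brief foreground spike.
pix = PixelMoG()
for t in range(200):
    is_bg = pix.update(100 if t < 150 or t > 155 else 220)
print(is_bg)   # expected: True once the pixel has settled back to its background value
```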

    Efficiently mapping high-performance early vision algorithms onto multicore embedded platforms

    The combination of low-cost imaging chips and high-performance, multicore, embedded processors heralds a new era in portable vision systems. Early vision algorithms have the potential for highly data-parallel, integer execution. However, an implementation must operate within the constraints of embedded systems, including a low clock rate, low-power operation and limited memory. This dissertation explores new approaches to adapting novel pixel-based vision algorithms to tomorrow's multicore embedded processors. It presents:
    - An adaptive, multimodal background modeling technique called Multimodal Mean that achieves high accuracy and frame-rate performance with limited memory on a slow-clock, energy-efficient, integer processing core.
    - A new workload partitioning technique to optimize the execution of early vision algorithms on multicore systems.
    - A novel data transfer technique called cat-tail DMA that provides globally ordered, non-blocking data transfers on a multicore system.
    By using efficient data representations, Multimodal Mean provides accuracy comparable to the widely used Mixture of Gaussians (MoG) multimodal method, yet achieves a 6.2x improvement in performance while using 18% less storage than MoG when executing on a representative embedded platform. When this algorithm is adapted to a multicore execution environment, the new workload partitioning technique demonstrates a 25% improvement in execution time with only a 125 ms system reaction time; it also reduces the overall number of data transfers by 50%. Finally, the cat-tail buffering technique reduces the data-transfer latency between execution cores and main memory by 32.8% over the baseline technique when executing Multimodal Mean. This technique performs data transfers concurrently with code execution on individual cores, while maintaining global ordering through low-overhead scheduling to prevent collisions.
    Ph.D. Committee Chair: Wills, Scott; Committee Co-Chair: Wills, Linda; Committee Members: Bader, David; Davis, Jeff; Hamblen, James; Lanterman, Aaro
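    The exact Multimodal Mean algorithm is defined in the dissertation; the sketch below is only a simplified reconstruction in the same spirit, illustrating the integer-only, low-storage idea: each pixel keeps a few modes consisting of a running sum and a count (so a mode mean is an integer division), incoming values are matched against mode means with an integer threshold, and the least-observed mode is recycled when nothing matches. All constants are illustrative and the details differ from the thesis.

```python
import numpy as np

K = 4              # modes kept per pixel
MATCH_T = 12       # integer match threshold on grayscale values
BG_COUNT = 30      # observations needed before a mode counts as background

def make_model(h, w):
    """Per-pixel modes: running sums and counts, all integer arrays."""
    return {"sum": np.zeros((h, w, K), dtype=np.uint32),
            "cnt": np.zeros((h, w, K), dtype=np.uint16)}

def update(model, frame):
    """Update the modes with a grayscale frame; return a boolean foreground mask."""
    s, c = model["sum"], model["cnt"]
    means = s // np.maximum(c, 1)                          # integer mode means
    diff = np.abs(means.astype(np.int32) - frame[..., None].astype(np.int32))
    matched = (diff <= MATCH_T) & (c > 0)

    best = np.argmax(matched, axis=2)                      # first matching mode per pixel
    any_match = matched.any(axis=2)
    idx = np.indices(frame.shape)

    # Matched pixels: accumulate into the matched mode.
    yi, xi = idx[0][any_match], idx[1][any_match]
    ki = best[any_match]
    s[yi, xi, ki] += frame[any_match]
    c[yi, xi, ki] += 1

    # Unmatched pixels: recycle the least-observed mode.
    yn, xn = idx[0][~any_match], idx[1][~any_match]
    kn = np.argmin(c[~any_match], axis=1)
    s[yn, xn, kn] = frame[~any_match]
    c[yn, xn, kn] = 1

    # Foreground = pixel did not match any well-established mode.
    established = matched & (c > BG_COUNT)
    return ~established.any(axis=2)
```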

    On-board three-dimensional object tracking: Software and hardware solutions

    We describe a real-time system for recognizing and tracking 3D objects such as UAVs, airplanes and fighters with an optical sensor. Given a 2D image, the system has to perform background subtraction and recognize the relative rotation, scale and translation of the object in order to sustain a prescribed topology of the fleet. In the thesis, a comparative study and performance evaluation of different algorithms is carried out based on time and accuracy constraints. For the background subtraction task we evaluate frame differencing, the approximate median filter and the mixture of Gaussians, and propose a classification approach based on neural network methods. For object detection we analyze the performance of invariant moments, the scale-invariant feature transform and the affine scale-invariant feature transform. Various tracking algorithms are evaluated, including mean shift with variable- and fixed-size windows, the scale-invariant feature transform, Harris, and fast full search based on the fast Fourier transform. We develop an algorithm for computing relative rotation and scale change based on Zernike moments. Based on the design criteria, a selection is made for on-board implementation. The candidate techniques have been implemented on the Texas Instruments TMS320DM642 EVM board. It is shown in the thesis that 14 frames per second can be processed, which supports a real-time implementation of the tracking system within reasonable accuracy limits.
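    Among the background subtraction candidates listed above, the approximate median filter is the cheapest and shows why it suits a fixed-point DSP such as the DM642: the background estimate is nudged up or down by one count per frame toward the current value, requiring no multiplications and only a single stored frame. A minimal grayscale NumPy sketch follows; the foreground threshold is an illustrative choice.

```python
import numpy as np

def approx_median_step(background, frame, fg_threshold=30):
    """One update of the approximate median background; returns (background, mask)."""
    bg = background.astype(np.int16)
    f = frame.astype(np.int16)
    bg += (f > bg).astype(np.int16)       # drift the estimate up by 1 where the frame is brighter
    bg -= (f < bg).astype(np.int16)       # and down by 1 where it is darker
    mask = np.abs(f - bg) > fg_threshold  # large residuals are foreground
    return bg.clip(0, 255).astype(np.uint8), mask

# Toy usage: after enough frames, a constant scene is absorbed into the background.
bg = np.zeros((4, 4), dtype=np.uint8)
frame = np.full((4, 4), 40, dtype=np.uint8)
for _ in range(60):
    bg, mask = approx_median_step(bg, frame)
print(mask.any())   # False: the estimate has drifted up to the scene value
```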

    Dedicated hardware for embedded vision systems

    The continuous evolution of Information and Communication Technologies (ICT) has not only allowed more than half of the global population to be interconnected through the Internet, but has also been the breeding ground for new paradigms such as the Internet of Things (IoT) and Ambient Intelligence (AmI), which pose the need to interconnect objects with different functionalities in order to achieve a digital, sensitive and adaptive environment that provides services of very different kinds to its users. Achieving this environment requires the development of low-cost electronic devices of small size and weight that are capable of interacting with their surroundings, operating with maximum autonomy and providing a high level of intelligence. The functionality of many of these devices will include the capacity to acquire, process and transmit images, extracting, interpreting or modifying the visual information of interest for a given application. This PhD Thesis, whose central theme is the development of dedicated hardware for the implementation of image and video processing algorithms used in embedded vision systems, arises in the context of this challenge. The work has a two-fold purpose. On the one hand, the search for solutions that, owing to their performance, can be incorporated into systems that satisfy the strict requirements of functionality, size, power consumption and operating speed demanded by new applications. On the other hand, the design of a set of functional blocks, implemented as intellectual-property modules, that alleviate the computational load of the processing units of the systems into which they are integrated. The Thesis proposes specific solutions for the implementation of two operations commonly present in many computer vision systems: background subtraction and connected component labelling. The different alternatives arise from applying a suitable trade-off between functionality and cost, the latter understood in terms of computing resources, operating speed and power consumption, which allows a wide range of applications to be covered. In addition, inference techniques based on Fuzzy Logic have been applied to some of the proposed solutions in order to improve the quality of the resulting vision systems. The functional blocks have been developed following a model-based design methodology that allows the whole development cycle to be carried out in a single work environment. This environment combines tools that facilitate algorithm coding, circuit design, physical implementation, and functional and timing verification of the different alternatives, thereby accelerating all phases of the design flow and enabling a more efficient exploration of the space of possible solutions. Finally, in order to demonstrate the functionality of the contributions of this Thesis, some of the proposed solutions have been integrated into real video systems that employ common standard buses. The devices selected for these demonstrators are Xilinx FPGAs and SoPCs, since their excellent properties for prototyping and for building systems that combine software and hardware components make them ideal candidates to support the implementation of this kind of system.
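    As a software reference for the second operation addressed in the Thesis, the sketch below shows the classical two-pass connected component labelling algorithm with union-find (4-connectivity), which groups the foreground pixels produced by background subtraction into distinct blobs. It is a plain Python illustration of the operation itself, not the hardware architecture proposed in the Thesis.

```python
import numpy as np

def label_components(mask):
    """Two-pass connected component labelling (4-connectivity) with union-find."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=np.int32)
    parent = [0]                                   # parent[0] reserved for background

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]          # path halving
            a = parent[a]
        return a

    # First pass: assign provisional labels and record equivalences.
    for y in range(h):
        for x in range(w):
            if not mask[y, x]:
                continue
            up = labels[y - 1, x] if y > 0 else 0
            left = labels[y, x - 1] if x > 0 else 0
            if up == 0 and left == 0:
                parent.append(len(parent))         # new provisional label
                labels[y, x] = len(parent) - 1
            elif up == 0 or left == 0:
                labels[y, x] = max(up, left)
            else:
                labels[y, x] = min(up, left)
                ra, rb = find(up), find(left)
                parent[max(ra, rb)] = min(ra, rb)  # merge the two equivalence classes

    # Second pass: resolve equivalences to compact, final labels.
    remap = {}
    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                root = find(labels[y, x])
                labels[y, x] = remap.setdefault(root, len(remap) + 1)
    return labels, len(remap)

# Toy usage: two separate blobs.
m = np.zeros((5, 8), dtype=bool)
m[1:3, 1:3] = True
m[3:5, 5:8] = True
lab, n = label_components(m)
print(n)   # -> 2
```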