    Parallel processing applied to object detection with a Jetson TX2 embedded system.

    Get PDF
    Video streams from panoramic cameras are a powerful tool for automated surveillance systems, but naïve implementations typically impose very heavy computational loads when applying deep learning models for automated detection and tracking of objects of interest, since these models require relatively high-resolution input to perform object detection reliably. In this paper, we report a host of improvements to our previous state-of-the-art software system for reliably detecting and tracking objects in video streams from panoramic cameras, resulting in an increased processing framerate on a Jetson TX2 board with respect to our previous results. Depending on the number of processes and the load profile, we observe up to a five-fold increase in the framerate.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
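
    As a rough illustration of the kind of multi-process pipeline this abstract describes, the sketch below fans tiles of a panoramic frame out to several detector processes. All names here (run_detector, detect_parallel, the queue layout) are hypothetical constructions, not taken from the paper.

```python
import multiprocessing as mp

def run_detector(tile):
    # Stub standing in for a DNN detector invocation (hypothetical).
    return []

def detect_worker(in_q, out_q):
    # Each process pulls frame tiles until it receives a poison pill.
    while True:
        item = in_q.get()
        if item is None:
            break
        idx, tile = item
        out_q.put((idx, run_detector(tile)))

def detect_parallel(tiles, n_procs=4):
    # Fan tiles of a panoramic frame out to n_procs detector processes;
    # throughput scales with n_procs until the board saturates.
    in_q, out_q = mp.Queue(), mp.Queue()
    procs = [mp.Process(target=detect_worker, args=(in_q, out_q))
             for _ in range(n_procs)]
    for p in procs:
        p.start()
    for i, tile in enumerate(tiles):
        in_q.put((i, tile))
    results = [out_q.get() for _ in tiles]
    for _ in procs:
        in_q.put(None)
    for p in procs:
        p.join()
    return [dets for _, dets in sorted(results)]

if __name__ == "__main__":
    print(detect_parallel(["tile%d" % i for i in range(8)]))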

    Dedicated hardware for embedded vision systems (Hardware dedicado para sistemas empotrados de visión)

    Get PDF
    The continuous evolution of Information and Communication Technologies (ICT) has not only allowed more than half of the global population to be interconnected through the Internet, but has also been the breeding ground for new paradigms such as the Internet of Things (IoT) and Ambient Intelligence (AmI). These paradigms expose the need to interconnect elements with different functionalities in order to achieve a digital, sensitive, adaptive and responsive environment that provides services of very different kinds to its users. Obtaining this environment requires the development of low-cost electronic devices that are small, lightweight and highly autonomous, with substantial processing power and the ability to interact with their surroundings. Many of these devices will include the capacity to acquire, process and transmit images, extracting, interpreting or modifying the visual information of interest for a given application. This PhD Thesis, whose central axis is the development of dedicated hardware for the implementation of image and video processing algorithms used in embedded vision systems, responds to this challenge. The work has a two-fold purpose: on one hand, the search for solutions that, given their performance and properties, can be integrated into systems with strict requirements on functionality, size, power consumption and speed of operation; on the other, the design of a set of functional blocks, packaged and implemented as IP modules, that alleviate the computational load of the processing units of the systems into which they are integrated. The Thesis provides specific solutions for implementing two operations common to many computer vision systems: background subtraction and connected component labelling. The different alternatives result from applying an appropriate performance/cost trade-off (approaching the latter criterion in terms of compute resources, speed of operation and consumed power), which allows a wide range of applications to be covered. Inference techniques based on Fuzzy Logic have also been applied to some of the proposed solutions in order to improve the quality of the resulting vision systems. The functional blocks were developed following a model-based design methodology, which allowed the whole development cycle to be carried out within a single work environment. That environment combines CAD tools that facilitate the stages of algorithm coding, circuit design, physical implementation, and functional and timing verification of the different alternatives, thus accelerating every phase of the design flow and enabling a more efficient exploration of the solution space. Moreover, to demonstrate the functionality of the contributions of this PhD Thesis, some of the proposed solutions have been integrated into real video systems that employ common standard buses. The devices selected for these demonstrators were Xilinx FPGAs and SoPCs, since their excellent properties for prototyping and for building systems that combine software and hardware components make them ideal candidates for implementing this kind of system.
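
    Of the two operations the thesis targets, background subtraction is the easier to sketch in software. Below is a minimal running-average model, purely illustrative: the thesis implements its algorithms as FPGA hardware blocks, and its exact formulations are not given here.

```python
import numpy as np

def update_background(frame, background, alpha=0.02):
    # Exponential running average: the background slowly absorbs change.
    return (1.0 - alpha) * background + alpha * frame

def foreground_mask(frame, background, threshold=25.0):
    # Pixels that deviate strongly from the model are labelled foreground.
    return np.abs(frame.astype(np.float64) - background) > threshold

# Usage on a synthetic stream of 8-bit grayscale frames:
rng = np.random.default_rng(0)
background = np.full((120, 160), 128.0)
for _ in range(10):
    frame = rng.integers(120, 137, size=(120, 160))
    mask = foreground_mask(frame, background)
    background = update_background(frame, background)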

    ISCR Annual Report: Fiscal Year 2004

    Full text link

    Pseudo-random number generators for Monte Carlo simulations on Graphics Processing Units

    Full text link
    Basic uniform pseudo-random number generators are implemented on ATI Graphics Processing Units (GPUs). The performance of the implemented generators (multiplicative linear congruential (GGL), XOR-shift (XOR128), RANECU, RANMAR, RANLUX and Mersenne Twister (MT19937)) on CPU and GPU is discussed. The obtained speed-up factor is hundreds of times in comparison with the CPU. The RANLUX generator is found to be the most appropriate for use on GPUs in Monte Carlo simulations. A brief review of the pseudo-random number generators used in modern software packages for Monte Carlo simulations in high-energy physics is also presented.
    Comment: 31 pages, 9 figures, 3 tables
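
    Of the generators listed, XOR128 is the simplest to show. Below is a plain-Python transcription of Marsaglia's xorshift128 recurrence, as a reference sketch only; the paper's implementations run on ATI GPUs, not in Python.

```python
def xorshift128():
    # Marsaglia's xorshift128; the state is four 32-bit words.
    x, y, z, w = 123456789, 362436069, 521288629, 88675123
    while True:
        t = (x ^ (x << 11)) & 0xFFFFFFFF  # keep t within 32 bits
        x, y, z = y, z, w
        w = (w ^ (w >> 19)) ^ (t ^ (t >> 8))
        yield w

gen = xorshift128()
print([next(gen) for _ in range(5)])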

    Fish4Knowledge: Collecting and Analyzing Massive Coral Reef Fish Video Data

    Get PDF
    This book gives a start-to-finish overview of the whole Fish4Knowledge project, in 18 short chapters, each describing one aspect of the project. The Fish4Knowledge project explored the possibilities of big video data, in this case from undersea video. Recording and analyzing 90 thousand hours of video from ten camera locations, the project gives a 3-year view of fish abundance in several tropical coral reefs off the coast of Taiwan. The research system built a remote recording network, over 100 TB of storage, supercomputer processing, video target detection and …

    Towards extending the SWITCH platform for time-critical, cloud-based CUDA applications: Job scheduling parameters influencing performance

    Get PDF
    SWITCH (Software Workbench for Interactive, Time Critical and Highly self-adaptive cloud applications) allows for the development and deployment of real-time applications in the cloud, but it does not yet support instances backed by Graphics Processing Units (GPUs). Wanting to explore how SWITCH might support CUDA (a GPU architecture) in the future, we have undertaken a review of time-critical CUDA applications, discovering that run-time requirements (which we call ‘wall time’) are in many cases regarded as the most important. We have performed experiments to investigate which parameters have the greatest impact on wall time when running multiple Amazon Web Services GPU-backed instances. Although a maximum of 8 single-GPU instances can be launched in a single Amazon Region, launching just 2 instances rather than 1 gives a 42% decrease in wall time. Also, instances often sit idle, and there is a moderately strong relationship between how problems are distributed across instances and wall time. These findings can be used to enhance the SWITCH provision for specifying Non-Functional Requirements (NFRs); in the future, GPU-backed instances could be supported. They can also be used more generally, to optimise the balance between the computational resources needed and the resulting wall time.
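
    The ‘wall time’ the paper measures is set by the slowest instance, which is why the distribution of work across instances matters. The toy sketch below (hypothetical names and simulated compute, not the paper's experimental harness) makes that relationship visible.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_on_instance(jobs):
    # Stand-in for one GPU-backed cloud instance; compute time is
    # simulated as proportional to the number of jobs it receives.
    time.sleep(0.005 * len(jobs))

def wall_time(distribution):
    # Wall time is the span until the slowest instance finishes.
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=len(distribution)) as pool:
        list(pool.map(run_on_instance, distribution))
    return time.perf_counter() - start

jobs = list(range(100))
balanced = [jobs[i::4] for i in range(4)]                  # 25 jobs each
skewed = [jobs[:70], jobs[70:80], jobs[80:90], jobs[90:]]  # one overloaded
print(wall_time(balanced), wall_time(skewed))  # the skewed split is slower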

    Grid computing for the numerical reconstruction of digital holograms

    Get PDF
    Digital holography has the potential to greatly extend holography's applications and move it from the lab into the field: a single CCD or other solid-state sensor can capture any number of holograms, while numerical reconstruction within a computer eliminates the need for chemical processing and readily allows further processing and visualisation of the holographic image. The steady increase in sensor pixel count and resolution leads to the possibility of larger sample volumes and of higher-resolution spatial sampling, enabling the practical use of digital off-axis holography. However, this increase in pixel count also drives a corresponding expansion of the computational effort needed to numerically reconstruct such holograms, to the extent that the reconstruction process for a single depth slice takes significantly longer than the capture process for each single hologram. Grid computing, a recent innovation in large-scale distributed processing, provides a convenient means of harnessing significant computing resources in an ad-hoc fashion that might match the field deployment of a holographic instrument. In this paper we consider the computational needs of digital holography and discuss the deployment of numerical reconstruction software over an existing Grid testbed. The analysis of marine organisms is used as an exemplar for the workflow and job execution of in-line digital holography.
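
    To make the per-slice cost concrete: one standard way to reconstruct a depth slice numerically is the angular-spectrum method, which needs two large 2-D FFTs per slice. The sketch below is a generic illustration of that method; the parameter names are mine, and the paper does not specify which propagation kernel its software uses.

```python
import numpy as np

def reconstruct_slice(hologram, wavelength, pixel_pitch, z):
    # Two FFTs per depth slice: forward transform, multiply by the
    # free-space propagation kernel, inverse transform.
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # Evanescent components (arg < 0) are suppressed.
    kernel = np.exp(2j * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(hologram) * kernel)

# Even one 8-megapixel slice costs two large 2-D FFTs:
holo = np.random.rand(2048, 4096)
slice_field = reconstruct_slice(holo, 532e-9, 3.45e-6, 0.05)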

    Institute for Scientific Computing Research Annual Report: Fiscal Year 2004

    Full text link

    Profile-directed specialisation of custom floating-point hardware

    No full text
    We present a methodology for generating floating-point arithmetic hardware designs which are, for suitable applications, much reduced in size, while still retaining performance and IEEE-754 compliance. Our system uses three key parts: a profiling tool, a set of customisable floating-point units and a selection of system integration methods. We use a profiling tool for floating-point behaviour to identify arithmetic operations where fundamental elements of IEEE-754 floating-point may be compromised without generating erroneous results in the common case. In the uncommon case, we use simple detection logic to determine when operands lie outside the range of capabilities of the optimised hardware. Out-of-range operations are handled by a separate, fully capable, floating-point implementation, either on-chip or by returning calculations to a host processor, and we present methods of system integration to achieve this error correction. Thus the system suffers no compromise in IEEE-754 compliance, even when the synthesised hardware would generate erroneous results. In particular, we identify from the input operands the shift amounts required for input operand alignment and post-operation normalisation. For operations where these are small, we synthesise hardware with reduced-size barrel shifters. We also propose optimisations to take advantage of other profile-exposed behaviours, including removing the hardware required to swap operands in a floating-point adder or subtractor, and reducing the exponent range to fit observed values. We present profiling results for a range of applications, including a selection of computational science programs, SPECfp95 benchmarks and the FFmpeg media processing tool, indicating which would be amenable to our method. Selected applications which demonstrate potential for optimisation are then taken through to a hardware implementation. We show up to a 45% decrease in hardware size for a floating-point datapath, with a correctable error rate of less than 3%, even with non-profiled datasets.
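
    The alignment shift that such a profiler inspects is just the exponent difference between an adder's operands. A toy software profiler in this spirit (my own construction, not the paper's tool) could gather the histogram like so:

```python
import math
from collections import Counter

def alignment_shift(a, b):
    # A floating-point adder shifts the smaller operand right by the
    # exponent difference before the mantissas can be added.
    if a == 0.0 or b == 0.0:
        return 0
    return abs(math.frexp(a)[1] - math.frexp(b)[1])

# Profile the additions performed by a running sum:
histogram = Counter()
acc = 0.0
for x in (0.001 * i for i in range(1, 10000)):
    histogram[alignment_shift(acc, x)] += 1
    acc += x
print(sorted(histogram.items()))
# A histogram concentrated at small shifts suggests a reduced-size
# barrel shifter would cover the common case.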