
    Real-time implementation of SURF algorithm on FPGA platform

    Too many traffic accidents are caused by drivers failing to notice buildings, traffic signs, and other objects. Video-based scene and object detection can enhance a driver's judgment by automatically detecting scenes and signs. Two popular recent video detection approaches are background differentiation and feature-based object detection. Background differentiation is an efficient and fast way of observing a moving object against a relatively stationary background, which makes it easy to implement on a mobile platform with swift processing speed. Feature-based scene detection, such as the Speeded Up Robust Features (SURF) algorithm, detects specific scenes accurately and with rotation and illumination invariance. By comparison, SURF's computational expense is much higher, which has kept the algorithm off real-time mobile platforms. In this thesis, I present two real-time tracking algorithms, a differentiation-based and a SURF-based scene detection system, on an FPGA platform. The proposed hardware designs are able to process 800×600 video at 60 frames per second with a video clock rate of 40 MHz.
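
    As a software illustration of the background-differentiation idea (the thesis implements it in FPGA hardware; this NumPy/OpenCV sketch, with an assumed threshold and kernel size, is only a functional analogue):

```python
import cv2
import numpy as np

def detect_motion(background_gray, frame_gray, thresh=30):
    """Background differentiation: flag pixels that differ from a
    (relatively stationary) reference background by more than `thresh`."""
    # Absolute per-pixel difference between current frame and background
    diff = cv2.absdiff(frame_gray, background_gray)
    # Binarize: pixels above the threshold are treated as moving objects
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # Morphological opening removes isolated noise pixels
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return mask

# Usage: compare each incoming grayscale frame against a fixed background
# background = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)
# frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
# motion_mask = detect_motion(background, frame)
```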

    FPGA-based module for SURF extraction

    We present a complete hardware and software solution for an FPGA-based embedded computer vision module capable of carrying out the SURF image feature extraction algorithm. Aside from image analysis, the module embeds a Linux distribution that allows running programs tailored to particular applications. The module is based on a Virtex-5 FXT FPGA, which features powerful configurable logic and an embedded PowerPC processor. We describe the module hardware as well as the custom FPGA image processing cores that implement the algorithm's most computationally expensive process, interest point detection. The module's overall performance is evaluated and compared to CPU- and GPU-based solutions. Results show that the embedded module achieves distinctiveness comparable to the SURF software implementation running on a standard CPU while being faster and consuming significantly less power and space. It therefore makes the SURF algorithm usable in applications with power and space constraints, such as autonomous navigation of small mobile robots.
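
    The interest point detection stage that the module accelerates rests on integral images, which let SURF's box-filter approximations of Gaussian second derivatives be evaluated in constant time per pixel. A minimal NumPy sketch of that primitive follows; it illustrates the principle, not the module's actual FPGA cores:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y, :x]."""
    # A leading row/column of zeros makes box sums index cleanly.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.astype(np.int64).cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] in O(1) using four integral-image reads.
    SURF's Hessian box filters combine a handful of such sums."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

# The constant-time box sum is what makes multi-scale Hessian responses
# cheap enough for real-time (and hardware) implementation.
img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
assert box_sum(ii, 1, 1, 3, 3) == img[1:3, 1:3].sum()
```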

    Automatic Pipeline Surveillance Air-Vehicle

    This thesis presents the development of a vision-based system for aerial pipeline right-of-way surveillance using optical/infrared sensors mounted on Unmanned Aerial Vehicles (UAVs). The aim of the research is to develop a highly automated, on-board system for detecting and following pipelines while simultaneously detecting any third-party interference. The proposed approach of using a UAV platform could potentially reduce the cost of monitoring and surveying pipelines compared to manned aircraft. The main contributions of this thesis are the development of the image-analysis algorithms, the overall system architecture, and validation in hardware on a scaled-down test environment. To evaluate the performance of the system, the algorithms were coded in the Python programming language. A small-scale test rig of the pipeline structure, with representative third-party interference, was set up to simulate the operational environment and to capture and record data for algorithm testing and validation. The pipeline endpoints are identified by transforming the 16-bit depth data of the explored environment into 3D point cloud world coordinates. Then, using the Random Sample Consensus (RANSAC) approach, the foreground and background are separated in the transformed 3D point cloud to extract the plane that corresponds to the ground. Simultaneously, the boundaries of the explored environment are detected in the 16-bit depth data using a Canny detector. These boundaries, once transformed into a 3D point cloud, are then filtered by the known height of the pipeline, using the Euclidean distance of each boundary point relative to the previously extracted ground plane, for fast and accurate measurements. The filtered boundaries, transformed back into 16-bit depth data, are used to detect the straight lines of the object boundary with a Hough transform. The pipeline is verified by estimating a centre-line segment from the 3D point cloud of each pair of Hough line segments. The corresponding pipeline point cloud is then filtered for linearity within the width of the pipeline, using Euclidean distance in the foreground point cloud, and the detected centre-line segment is extended along the filtered pipeline point cloud to match the exact pipeline segment. Third-party interference is detected based on four parameters, namely: foreground depth data; pipeline depth data; pipeline endpoint locations in the 3D point cloud; and right-of-way distance. The techniques include detection, classification, and localization algorithms. Finally, a waypoint-based navigation system was implemented so that the air vehicle flies over course waypoints generated online from a heading-angle demand, following the pipeline structure in real time based on the online identification of the pipeline endpoints relative to the camera frame.
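
    The RANSAC ground-plane extraction step can be illustrated with a minimal NumPy sketch; the iteration count and distance threshold below are assumed values, not the thesis's parameters:

```python
import numpy as np

def ransac_plane(points, n_iters=200, dist_thresh=0.02, seed=None):
    """Fit a dominant plane to an (N, 3) point cloud with RANSAC.
    Returns ((normal, d), inlier_mask) with normal . p + d = 0."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # Sample 3 distinct points and form the plane through them
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample, retry
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Euclidean point-to-plane distance for every point
        dist = np.abs(points @ normal + d)
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

# The inlier set is the ground; the complement is the foreground, and
# point-to-plane distance gives each boundary point's height above ground.
```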

    Utilising the grid for augmented reality


    Wavelet-Based Volume Rendering

    Various biomedical technologies like CT, MRI, and PET scanners provide detailed cross-sectional views of the human anatomy. The image information obtained from these scanning devices is typically represented as large data sets whose sizes vary from several hundred megabytes to about one hundred gigabytes. As these data sets cannot be stored on one's local hard drive, SDSC provides a large data repository to store them. These data sets need to be accessed by researchers around the world collaborating in their research, but their size makes them difficult to transmit over current networks. This thesis presents a 3-D Haar wavelet algorithm which transforms these data sets into smaller hierarchical representations. The transformed data sets are transmitted over the network and reconstructed into a 3-D volume on the client's side through progressive refinement of the images and 3-D texture mapping techniques.
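
    One level of the 3-D Haar transform pairwise-averages and pairwise-differences the volume along each axis in turn, producing a half-resolution approximation band plus detail bands for progressive refinement. A minimal NumPy sketch (illustrative, not the thesis implementation):

```python
import numpy as np

def haar_1d(data, axis):
    """One Haar level along `axis`: pairwise averages + pairwise details."""
    # Split the axis into even/odd-indexed samples (length must be even)
    even = np.take(data, np.arange(0, data.shape[axis], 2), axis=axis)
    odd = np.take(data, np.arange(1, data.shape[axis], 2), axis=axis)
    avg = (even + odd) / 2.0      # low-pass (approximation) coefficients
    det = (even - odd) / 2.0      # high-pass (detail) coefficients
    return avg, det

def haar_3d_level(volume):
    """One 3-D Haar level: transform along x, then y, then z.
    Returns 8 sub-bands; the first (all low-pass) is a half-resolution
    volume that can be transmitted first for progressive refinement."""
    bands = [volume]
    for axis in (0, 1, 2):
        bands = [part for b in bands for part in haar_1d(b, axis)]
    return bands   # [LLL, LLH, LHL, LHH, HLL, HLH, HHL, HHH]

vol = np.random.rand(64, 64, 64)
bands = haar_3d_level(vol)
assert bands[0].shape == (32, 32, 32)
```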

    Object Detection in Omnidirectional Images

    Nowadays, computer vision (CV) is widely used to solve real-world problems, which pose increasingly difficult challenges. In this context, the use of omnidirectional video in a growing number of applications, along with the fast development of Deep Learning (DL) algorithms for object detection, drives the need for further research to improve existing methods originally developed for conventional 2D planar images. However, the geometric distortion that common sphere-to-plane projections produce, mostly visible in objects near the poles, together with the lack of open-source labeled omnidirectional image datasets, has made accurate object detection in spherical images a hard goal to achieve. This work is a contribution to developing datasets and machine learning models particularly suited for omnidirectional images represented in planar format through the well-known Equirectangular Projection (ERP). To this aim, DL methods are explored to improve the detection of visual objects in omnidirectional images by considering the inherent distortions of ERP. An experimental study was first carried out to find out whether the error rate and the type of detection errors are related to the characteristics of ERP images. The study revealed that the error rate of object detection using existing DL models on ERP images actually depends on the spherical location of the object in the image. Based on these findings, a new object detection framework is proposed to obtain a uniform error rate across all spherical image regions. The results show that the pre- and post-processing stages of the implemented framework effectively reduce the dependency of performance on the image region, evaluated by the above-mentioned error-rate metric.
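
    The location dependence of ERP distortion can be made concrete: each pixel row maps to a latitude, and the projection's horizontal over-sampling grows as 1/cos(latitude), so objects near the poles are increasingly deformed. A small illustrative sketch (not the paper's framework):

```python
import numpy as np

def erp_pixel_to_sphere(u, v, width, height):
    """Map ERP pixel (u, v) to spherical (longitude, latitude) in radians.
    Longitude spans [-pi, pi); latitude runs pi/2 (top) to -pi/2 (bottom)."""
    lon = (u + 0.5) / width * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi
    return lon, lat

def erp_horizontal_stretch(v, height):
    """Horizontal over-sampling factor at pixel row v: an ERP pixel near
    the poles covers 1/cos(lat) less spherical surface than one at the
    equator, one source of the location-dependent detection error."""
    _, lat = erp_pixel_to_sphere(0, v, 1, height)
    return 1.0 / max(np.cos(lat), 1e-6)

# The stretch factor is ~1 at the equator and grows toward the poles.
height = 960
print(erp_horizontal_stretch(height // 2, height))  # ~1.0 (equator)
print(erp_horizontal_stretch(10, height))           # large (near pole)
```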

    Gesture tracking and neural activity segmentation in head-fixed behaving mice by deep learning methods

    The typical approach used by neuroscientists is to study the response of laboratory animals to a stimulus while recording their neural activity at the same time. With the advent of calcium imaging technology, researchers can now study neural activity at sub-cellular resolution in vivo. Similarly, recording the behaviour of laboratory animals is also becoming more affordable. Although it is now easier to record behavioural and neural data, these data come with their own set of challenges, the biggest of which, given the sheer volume of the data, is annotation. The traditional approach is to annotate the data manually, frame by frame: with behavioural data, by looking at each frame and tracing the animals; with neural data, by a trained neuroscientist. In this research, we propose automated tools based on deep learning that can aid in the processing of behavioural and neural data. These tools will help neuroscientists annotate and analyse the data they acquire in an automated and reliable way.

    Efficient Object Detection in Mobile and Embedded Devices with Deep Neural Networks

    Neural networks have become the standard for high-accuracy computer vision. These algorithms can be built with arbitrarily large architectures to handle the ever-growing complexity of the data they process. State-of-the-art neural network architectures are primarily concerned with increasing recognition accuracy when performing inference on an image, which creates an insatiable demand for energy and compute power, and these models are primarily targeted to run on dense compute units such as GPUs. In recent years, demand has grown for these models to execute in limited-capacity environments such as smartphones, yet even the most compact variations of state-of-the-art networks constantly push the boundaries of the power envelope under which they run. With the emergence of the Internet of Things, it is becoming a priority to enable mobile systems to perform image recognition at the edge, but with small energy requirements. This thesis focuses on the design and implementation of an object detection neural network that attempts to solve this problem, providing reasonable accuracy with extremely low compute power requirements. This is achieved by re-imagining the meta-architecture of traditional object detection models and devising a mechanism to classify and localize objects through a set of neural network based algorithms better suited to mobile and embedded devices. The main contributions of this thesis are: (i) an image processing algorithm better suited to preparing data for consumption, which takes advantage of the characteristics of the ISP available in these devices; (ii) a neural network architecture that maintains acceptable accuracy targets with minimal computational requirements by making efficient use of basic neural algorithms; and (iii) a programming framework for implementing these systems most efficiently, optimized for the underlying hardware units available in these devices by taking memory and computation restrictions into account.
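
    The abstract does not name the specific operators used, but a standard primitive for this kind of efficiency budget is the depthwise-separable convolution, which factorizes a dense k×k convolution into a per-channel spatial filter plus a 1×1 channel mixer. A minimal PyTorch sketch, offered as an assumption about the style of "basic neural algorithms" involved:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """k x k depthwise conv (one spatial filter per channel) followed by a
    1x1 pointwise conv (channel mixing): ~k^2 fewer MACs than a dense conv."""
    def __init__(self, in_ch, out_ch, k=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, stride=stride,
                                   padding=k // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU6(inplace=True)  # bounded activation, quantizes well

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# A dense 3x3 conv from 64 to 128 channels costs 64*128*9 MACs per pixel;
# the separable version costs 64*9 + 64*128, roughly 8.5x cheaper.
block = DepthwiseSeparableConv(64, 128)
y = block(torch.randn(1, 64, 32, 32))
assert y.shape == (1, 128, 32, 32)
```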

    Real-time Cinematic Design of Visual Aspects in Computer-generated Images

    The creation of visually pleasing images has always been one of the main goals of computer graphics. Two components are necessary to achieve this goal: artists who design the visual aspects of an image (such as materials or lighting) and sophisticated algorithms that render it. Traditionally, rendering has been of greater interest to researchers, while the design part has been deemed secondary. This has led to many inefficiencies, as artists, in order to create a stunning image, are often forced to resort to traditional, creativity-stifling pipelines of repeated rendering and parameter tweaking. Our work shifts the attention away from the rendering problem and focuses on design. We propose to combine non-physical editing with real-time feedback, giving artists efficient ways of designing complex visual aspects such as global illumination or all-frequency shadows. We conform to existing pipelines by inserting our editing components into existing stages, thereby making the editing of visual aspects an inherent part of the design process. Many of the examples shown in this work have been, until now, extremely hard to achieve. The non-physical aspect of our work enables artists to express themselves in more creative ways, not limited by the physical parameters of current renderers. Real-time feedback allows artists to immediately see the effects of applied modifications, and compatibility with existing workflows enables easy integration of our algorithms into production pipelines.