
    Precise motion descriptors extraction from stereoscopic footage using DaVinci DM6446

    A novel approach to extracting target motion descriptors in multi-camera video surveillance systems is presented. Using two static surveillance cameras with partially overlapping fields of view (FOV), control points (unique points from each camera) are identified in regions of interest (ROI) in both cameras' footage. The control points within the ROI are matched for correspondence, and a meshed Euclidean-distance-based signature is computed. A depth map is estimated from the disparity of each control pair, and the ROI is graded into a number of regions using the relative depth information of the control points. The graded regions of different depths allow the pace of the moving target, and also its 3D location, to be calculated accurately. The advantage of estimating a depth map for background static control points, rather than for the target itself, is its accuracy and robustness to outliers. The performance of the algorithm is evaluated using several test sequences, and issues in implementing the algorithm on the TI DaVinci DM6446 platform are considered.
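
    The core geometric step described above is the standard triangulation of depth from stereo disparity. The minimal numpy sketch below assumes a rectified camera pair with known focal length and baseline; the function name and sample values are illustrative, not taken from the paper.

```python
import numpy as np

def depth_from_disparity(disparity_px, f_px, baseline_m):
    """Depth of each matched control-point pair: Z = f * B / d,
    valid for a rectified stereo pair (illustrative parameters)."""
    d = np.asarray(disparity_px, dtype=float)
    z = np.full_like(d, np.inf)  # zero disparity -> point at infinity
    np.divide(f_px * baseline_m, d, out=z, where=d > 0)
    return z

# Disparities (pixels) of three matched control points.
print(depth_from_disparity([12.0, 8.5, 30.2], f_px=800.0, baseline_m=0.6))
```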

    Challenges in using GPUs for the real-time reconstruction of digital hologram images

    In-line holography has recently made the transition from silver-halide based recording media, with laser reconstruction, to recording with large-area pixel detectors and computer-based reconstruction. This form of holographic imaging is an established technique for the study of fine particulates, such as cloud or fuel droplets, marine plankton and alluvial sediments, and enables a true 3D object field to be recorded at high resolution over a considerable depth. The move to digital holography promises rapid, if not instantaneous, feedback, as it avoids the need for time-consuming chemical development of plates or film and a dedicated replay system; but with the growing use of video-rate holographic recording, and the desire to fully reconstruct every frame, the computational challenge becomes considerable. To replay a digital hologram, a 2D FFT must be calculated for every depth slice desired in the replayed image volume. A typical hologram of ~100 μm particles over a depth of a few hundred millimetres will require O(10^3) 2D FFT operations to be performed on a hologram of typically a few million pixels. In this paper we discuss the technical challenges in converting our existing reconstruction code to make efficient use of NVIDIA CUDA-based GPU cards, and show how near real-time video slice reconstruction can be obtained with holograms as large as 4096 by 4096 pixels. Our performance to date for a number of different NVIDIA GPUs running under both Linux and Microsoft Windows is presented. The recent availability of GPUs in portable computers is discussed, and a new code for interactive replay of digital holograms is presented.
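
    For concreteness, the per-slice reconstruction the authors describe (one 2D FFT pair per depth) can be sketched with the generic angular-spectrum method. This is not the authors' code: parameter names are illustrative, and swapping numpy for CuPy is one common route to running the FFTs on a CUDA GPU.

```python
import numpy as np  # "import cupy as np" moves the FFTs onto a CUDA GPU

def reconstruct_slice(hologram, z, wavelength, dx):
    """Propagate an in-line hologram to depth z via the angular spectrum:
    one forward and one inverse 2D FFT per reconstructed slice."""
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=dx)  # spatial frequencies (cycles per metre)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)  # suppress evanescent components
    return np.fft.ifft2(np.fft.fft2(hologram) * H)
```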

    Digital implementation of the cellular sensor-computers

    Two different kinds of cellular sensor-processor architectures are used nowadays in various applications. The first is the traditional sensor-processor architecture, where the sensor and processor arrays are mapped onto each other. The second is the foveal architecture, in which a small active fovea navigates within a large sensor array. This second architecture is introduced and compared here. Both of these architectures can be implemented with analog or digital processor arrays. The efficiency of the different implementation types, depending on the CMOS technology used, is analyzed. It turns out that the finer the technology, the stronger the case for a digital implementation over an analog one.

    CMOS Vision Sensors: Embedding Computer Vision at Imaging Front-Ends

    CMOS Image Sensors (CIS) are key for imaging technologies. These chips are conceived for capturing optical scenes focused on their surface, and for delivering electrical images, commonly in digital format. CISs may incorporate intelligence; however, their smartness basically concerns calibration, error correction and other similar tasks. The term CVIS (CMOS VIsion Sensor) defines another class of sensor front-ends which are aimed at performing vision tasks right at the focal plane. They have been running under names such as computational image sensors, vision sensors and silicon retinas, among others. CVISs and CISs are similar regarding physical implementation. However, while the inputs of both CISs and CVISs are images captured by photo-sensors placed at the focal plane, the primary outputs of CVISs may not be images but either image features or even decisions based on the spatio-temporal analysis of the scenes. We may hence state that CVISs are more “intelligent” than CISs, as they focus on information instead of on raw data. Actually, CVIS architectures capable of extracting and interpreting the information contained in images, and prompting reaction commands thereof, have been explored for years in academia, and industrial applications are recently ramping up. One of the challenges for CVIS architects is incorporating computer vision concepts into the design flow. The endeavor is ambitious because the imaging and computer vision communities are rather disjoint groups talking different languages. The Cellular Nonlinear Network Universal Machine (CNNUM) paradigm, proposed by Profs. Chua and Roska, defined an adequate framework for such conciliation, as it is particularly well suited for hardware-software co-design [1]-[4]. This paper overviews CVIS chips that were conceived and prototyped at the IMSE Vision Lab over the past twenty years. Some of them fit the CNNUM paradigm while others are tangential to it. All of them employ per-pixel mixed-signal processing circuitry to achieve sensor-processing concurrency in the quest for fast operation with a reduced energy budget.
    Funding: Junta de Andalucía TIC 2012-2338; Ministerio de Economía y Competitividad TEC 2015-66878-C3-1-R and TEC 2015-66878-C3-3-

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those demanding low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
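
    As a concrete illustration of the event stream described above, the sketch below accumulates events, each a (timestamp, x, y, polarity) tuple, into a signed frame. The sensor resolution and the sample events are assumptions made for the example.

```python
import numpy as np

def events_to_frame(events, width=346, height=260):
    """Sum event polarities per pixel over a time window to form a
    signed frame (the resolution here is typical of an event sensor)."""
    frame = np.zeros((height, width), dtype=np.int32)
    for t, x, y, polarity in events:
        frame[y, x] += 1 if polarity > 0 else -1
    return frame

# Two microsecond-timestamped events: one brightness increase, one decrease.
events = [(0.000012, 10, 20, +1), (0.000015, 11, 20, -1)]
print(events_to_frame(events)[20, 10:12])  # -> [ 1 -1]
```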

    Four-dimensional dynamic flow measurement by holographic particle image velocimetry

    The ultimate goal of holographic particle image velocimetry (HPIV) is to provide space- and time-resolved measurement of complex flows. Recent new understanding of the holographic imaging of small particles, pertaining to intrinsic aberration and noise in particular, has enabled us to elucidate fundamental issues in HPIV and implement a new HPIV system. This system is based on our previously reported off-axis HPIV setup, but the design is optimized by incorporating our new insights into holographic particle imaging characteristics. Furthermore, the new system benefits from advanced data processing algorithms and distributed parallel computing technology. Because of its robustness and efficiency, the goal of both temporally and spatially resolved flow measurement becomes, for the first time to our knowledge, tangible. We demonstrate its temporal measurement capability by a series of phase-locked dynamic measurements of instantaneous three-dimensional, three-component velocity fields in a highly three-dimensional vortical flow: the flow past a tab.
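
    The displacement estimation at the heart of PIV-style processing is typically an FFT-based cross-correlation of interrogation windows between two exposures. The sketch below shows that generic step, not the authors' HPIV pipeline; the window contents are synthetic.

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Integer-pixel shift of win_b relative to win_a, taken from the
    peak of their circular cross-correlation computed via FFTs."""
    A = np.fft.fft2(win_a - win_a.mean())
    B = np.fft.fft2(win_b - win_b.mean())
    corr = np.fft.ifft2(np.conj(A) * B).real
    peak_y, peak_x = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = corr.shape
    dy = peak_y if peak_y <= ny // 2 else peak_y - ny  # unwrap to signed shifts
    dx = peak_x if peak_x <= nx // 2 else peak_x - nx
    return dx, dy

rng = np.random.default_rng(0)
win_a = rng.random((32, 32))
win_b = np.roll(win_a, (3, -2), axis=(0, 1))  # known displacement
print(window_displacement(win_a, win_b))       # -> (-2, 3)
```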

    An information adaptive system study report and development plan

    The purpose of the information adaptive system (IAS) study was to determine how selected Earth resource applications may be processed onboard a spacecraft and to provide a detailed preliminary IAS design for these applications. Detailed investigations of a number of applications were conducted with regard to the IAS, and three were selected for further analysis. Areas of future research and development include algorithmic specifications, system design specifications, and recommended IAS timelines.

    Vision-Based Road Detection in Automotive Systems: A Real-Time Expectation-Driven Approach

    The main aim of this work is the development of a vision-based road detection system fast enough to cope with the difficult real-time constraints imposed by moving-vehicle applications. The hardware platform, a special-purpose massively parallel system, has been chosen to minimize system production and operational costs. This paper presents a novel approach to expectation-driven low-level image segmentation, which can be mapped naturally onto mesh-connected massively parallel SIMD architectures capable of handling hierarchical data structures. The input image is assumed to contain a distorted version of a given template; a multiresolution stretching process reshapes the original template in accordance with the acquired image content, minimizing a potential function. The distorted template is the process output.
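
    As a toy illustration of the stretching idea, the sketch below fits a 1D template to a signal by brute-force search over a coarse scale/shift grid, scoring candidates with a simple sum-of-squares potential. The paper's hierarchical, mesh-connected SIMD formulation is far richer; everything named here is illustrative.

```python
import numpy as np

def stretch(template, scale, shift, length):
    """Resample the template onto a length-sized axis after scaling
    and shifting it; zero outside its support."""
    x = (np.arange(length) - shift) / scale
    return np.interp(x, np.arange(len(template)), template, left=0.0, right=0.0)

def fit_template(signal, template):
    """Search a coarse scale/shift grid for the stretched template
    minimizing the potential; return the best-fitting distortion."""
    best_potential, best_candidate = np.inf, None
    for scale in np.linspace(0.5, 2.0, 16):
        for shift in range(len(signal)):
            candidate = stretch(template, scale, shift, len(signal))
            potential = np.sum((signal - candidate) ** 2)
            if potential < best_potential:
                best_potential, best_candidate = potential, candidate
    return best_candidate  # the "distorted template" output

signal = np.concatenate([np.zeros(8), np.ones(12), np.zeros(4)])
fitted = fit_template(signal, np.ones(6))  # road-like bar template
```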

    Numerical Methods for Obtaining Multimedia Graphical Effects

    This paper is an explanatory document about how several animation effects can be obtained using different numerical methods, and it investigates the possibility of implementing them on very simple yet powerful massively parallel machines. The methods are clearly described, with graphical examples of the effects as well as workflows for the algorithms. All of the methods presented in this paper use only numerical matrix manipulations, which are usually fast and do not require the use of any other graphical software application.
    Keywords: raster graphics, numerical matrix manipulation, animation effects
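
    In the spirit of effects obtained purely through matrix manipulation, a cross-dissolve between two frames is one of the simplest examples: each intermediate frame is a convex combination of the endpoints. The frames and step count below are made up for illustration.

```python
import numpy as np

def dissolve(frame_a, frame_b, steps=10):
    """Yield intermediate frames (1 - t) * A + t * B: a fade implemented
    purely as elementwise matrix arithmetic."""
    a, b = frame_a.astype(float), frame_b.astype(float)
    for t in np.linspace(0.0, 1.0, steps):
        yield ((1.0 - t) * a + t * b).astype(np.uint8)

black = np.zeros((4, 4), dtype=np.uint8)
white = np.full((4, 4), 255, dtype=np.uint8)
print([frame[0, 0] for frame in dissolve(black, white, steps=3)])  # [0, 127, 255]
```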

    Quantum Image Processing and Its Application to Edge Detection: Theory and Experiment

    Processing of digital images is continuously gaining in volume and relevance, with concomitant demands on data storage, transmission and processing power. Encoding the image information in quantum-mechanical systems instead of classical ones, and replacing classical with quantum information processing, may alleviate some of these challenges. Here we demonstrate the framework of quantum image processing, in which a pure quantum state encodes the image information: we encode the pixel values in the probability amplitudes and the pixel positions in the computational basis states. Our quantum image representation reduces the required number of qubits compared to existing implementations, and we present image processing algorithms that provide exponential speed-up over their classical counterparts. For the commonly used task of detecting the edge of an image, we propose and implement a quantum algorithm that completes the task with only one single-qubit operation, independent of the size of the image. This demonstrates the potential of quantum image processing for highly efficient image and video processing in the big data era.
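
    The encoding and the single-Hadamard edge-detection idea can be simulated classically. Below is a small numpy sketch under that interpretation: pixel values become amplitudes, and one Hadamard on the least-significant qubit turns odd-indexed amplitudes into differences of adjacent pixels. The test image is chosen so its edges fall inside (even, odd) pixel pairs.

```python
import numpy as np

def amplitude_encode(img):
    """Flatten and L2-normalize the image so each pixel value becomes
    the probability amplitude of one computational basis state."""
    v = img.astype(float).ravel()
    return v / np.linalg.norm(v)

def hadamard_on_lsb(state):
    """Apply a Hadamard to the least-significant qubit: each amplitude
    pair (a, b) maps to ((a + b), (a - b)) / sqrt(2)."""
    pairs = state.reshape(-1, 2)
    out = np.empty_like(pairs)
    out[:, 0] = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
    out[:, 1] = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
    return out.ravel()

# A 4x4 image whose vertical edges sit inside (even, odd) pixel pairs.
img = np.array([[0, 1, 1, 0]] * 4)
state = hadamard_on_lsb(amplitude_encode(img))
# Odd-indexed amplitudes now hold differences of horizontally adjacent
# pixels; large magnitudes mark edges. The full algorithm adds an
# amplitude permutation so odd-aligned boundaries are caught as well.
print(np.abs(state[1::2]).reshape(4, 2))
```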