
    Bit-plane stack filter algorithm for focal plane processors

    This work presents a novel parallel technique for implementing stack morphological filters for image processing. The method applies bitwise decomposition to manipulate the grayscale image at the bit-plane level, while simple logical operations and Positive Boolean Functions (PBFs) are executed in parallel to derive the transformed bit-planes. The relationship between the bitwise and threshold decompositions is closely investigated and analysed, which led us to derive an algorithm whose control flow is fully binary-encoded. Furthermore, thanks to its hierarchical processing and the study of the relationship among binary decompositions, the algorithm's running time depends on the image histogram.
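
    To make the decomposition concrete, here is a minimal NumPy sketch of the classical threshold-decomposition stack filter that this bit-plane method reworks: each binary slice is filtered with a majority function (a simple PBF equivalent to the binary median) and the filtered slices are summed back. The 3x3 window and the explicit loop over all 255 levels are illustrative only; avoiding that per-level cost is precisely what the paper's bit-plane algorithm is about.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def stack_median(img, size=3):
    """Median filtering via threshold decomposition (a stack filter).

    Each binary slice is filtered with the majority function, a
    Positive Boolean Function equivalent to the binary median, and
    the filtered slices are summed ("stacked") to rebuild gray levels.
    """
    img = np.asarray(img, dtype=np.uint8)
    out = np.zeros(img.shape, dtype=np.int32)
    for t in range(1, 256):                      # threshold decomposition
        slice_t = (img >= t).astype(np.float32)  # binary slice at level t
        majority = uniform_filter(slice_t, size=size) > 0.5  # PBF per slice
        out += majority                          # stack the filtered slice
    return out.astype(np.uint8)
```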

    Hierarchical stack filtering: a bitplane-based algorithm for massively parallel processors

    With the development of novel parallel architectures for image processing, the implementation of well-known image operators must be reformulated to take advantage of so-called massive parallelism. In this work, we propose a general algorithm that implements a large class of nonlinear filters, called stack filters, on a 2-D array processor. The proposed method consists of decomposing an image into bitplanes with the bitwise decomposition and then processing every bitplane hierarchically. The filtered image is reconstructed by simply stacking the filtered bitplanes according to their order of significance. Owing to its hierarchical structure, our algorithm allows us to trade off between image quality and processing time, and to significantly reduce the computation time for low-entropy images. Also, experimental tests show that the processing time of our method is substantially lower than that of classical methods when large structuring elements are used. All these features are of interest to a variety of real-time applications based on morphological operations, such as video segmentation and video enhancement.
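
    As a reference point, the bitwise decomposition and stacking steps named in the abstract can be sketched in a few lines of NumPy. The hierarchical per-bitplane filtering itself is the paper's contribution and is omitted here; the `truncate` helper only loosely illustrates the quality/processing-time trade-off.

```python
import numpy as np

def to_bitplanes(img):
    """Bitwise decomposition: split an 8-bit image into 8 binary
    planes, most significant bit first."""
    img = np.asarray(img, dtype=np.uint8)
    return [((img >> b) & 1) for b in range(7, -1, -1)]

def from_bitplanes(planes):
    """Stack the (filtered) bitplanes back according to their order
    of significance."""
    out = np.zeros(planes[0].shape, dtype=np.uint8)
    for b, plane in zip(range(7, -1, -1), planes):
        out |= plane.astype(np.uint8) << b
    return out

def truncate(planes, k):
    """Keep only the k most significant planes: a crude stand-in for
    the quality/time trade-off described above."""
    return planes[:k] + [np.zeros_like(p) for p in planes[k:]]
```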

    Image Feature Extraction Acceleration

    Image feature extraction is instrumental to most of the best-performing algorithms in computer vision. However, it is also expensive in terms of computational and memory resources for embedded systems, because it must deal with individual pixels at the earliest processing levels. In this regard, conventional system architectures do not exploit the potential parallelism and distributed memory available from the very beginning of the processing chain. Raw pixel values provided by the front-end image sensor are squeezed into a high-speed interface with the rest of the system components. Only then, after deserializing this massive dataflow, is parallelism, if any, exploited. This chapter introduces a rather different approach from an architectural point of view. We present two Application-Specific Integrated Circuits (ASICs) in which the 2-D array of photo-sensitive devices featured by regular imagers is combined with distributed memory supporting concurrent processing. Custom circuitry is added per pixel in order to accelerate image feature extraction right at the focal plane. Specifically, the proposed sensing-processing chips aim at accelerating two flagship algorithms within the computer vision community: the Viola-Jones face detection algorithm and the Scale Invariant Feature Transform (SIFT). Experimental results prove the feasibility and benefits of this architectural solution.
    Funding: Ministerio de Economía y Competitividad TEC2012-38921-C02, IPT-2011-1625-430000, IPC-20111009; Junta de Andalucía TIC 2338-2013; Xunta de Galicia EM2013/038; Office of Naval Research (USA) N00014141035
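
    As an illustration of the kind of early-vision kernel such chips accelerate, the sketch below computes the integral image that underlies Viola-Jones Haar features in plain NumPy. It shows the computation only, not the per-pixel circuitry described in the chapter.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: the core precomputation behind Viola-Jones
    Haar features. On the chips above, this kind of regular, per-pixel
    work is what moves from the CPU to the focal plane."""
    return np.cumsum(np.cumsum(img.astype(np.int64), axis=0), axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1+1, c0:c1+1] in O(1) via the integral image."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total
```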

    Optical Flow in a Smart Sensor Based on Hybrid Analog-Digital Architecture

    The purpose of this study is to develop a motion sensor (delivering optical flow estimates) using a platform that includes the sensor itself, focal-plane processing resources, and co-processing resources on a general-purpose embedded processor, all implemented on a single device as a SoC (System-on-a-Chip). Optical flow is the 2-D projection onto the camera plane of the 3-D motion present in the world scene. This motion representation is well known and widely applied in the scientific community to solve a wide variety of problems. Most applications based on motion estimation must run in real time, and this restriction has to be taken into account. In this paper, we show an efficient approach to estimating the motion velocity vectors with an architecture based on a focal-plane processor combined on-chip with a 32-bit NIOS II processor. Our approach relies on a simplification of the original optical flow model and its efficient implementation on a platform that combines an analog (focal-plane) and a digital (NIOS II) processor. The system is fully functional and is organized in different stages, where the early-processing (focal-plane) stage mainly pre-processes the input image stream to reduce the computational cost of the post-processing (NIOS II) stage. We present the co-design techniques employed and analyze this novel architecture. We evaluate the system's performance and accuracy with respect to the different approaches described in the literature. We also discuss the advantages of the proposed approach, as well as the degree of efficiency that can be obtained from the focal-plane processing capabilities of the system. The final outcome is a low-cost smart sensor for optical flow computation with real-time performance and reduced power consumption that can be used in very diverse application domains.
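
    For readers unfamiliar with gradient-based motion estimation, the following is a minimal dense Lucas-Kanade sketch of the kind of simplified optical flow model the paper refers to. The window size and this specific formulation are assumptions for illustration, not the authors' focal-plane/NIOS II implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lucas_kanade(prev, curr, win=7, eps=1e-6):
    """Dense Lucas-Kanade flow under brightness constancy: solve, per
    pixel, the 2x2 least-squares system built from spatial gradients
    (Ix, Iy) and the temporal derivative It over a local window."""
    prev = prev.astype(np.float32)
    curr = curr.astype(np.float32)
    Iy, Ix = np.gradient(prev)        # np.gradient returns (d/drow, d/dcol)
    It = curr - prev
    S = lambda a: uniform_filter(a, size=win)     # windowed averages
    Ixx, Iyy, Ixy = S(Ix * Ix), S(Iy * Iy), S(Ix * Iy)
    Ixt, Iyt = S(Ix * It), S(Iy * It)
    det = Ixx * Iyy - Ixy * Ixy + eps  # eps guards against singularities
    u = (-Iyy * Ixt + Ixy * Iyt) / det  # horizontal velocity
    v = (Ixy * Ixt - Ixx * Iyt) / det   # vertical velocity
    return u, v
```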

    The Visible Imaging System (VIS) for the Polar Spacecraft

    The Visible Imaging System (VIS) is a set of three low-light-level cameras to be flown on the POLAR spacecraft of the Global Geospace Science (GGS) program, which is an element of the International Solar-Terrestrial Physics (ISTP) campaign. Two of these cameras share primary and some secondary optics and are designed to provide images of the nighttime auroral oval at visible wavelengths. A third camera is used to monitor the directions of the fields-of-view of these sensitive auroral cameras with respect to the sunlit Earth. The auroral emissions of interest include those from N2+ at 391.4 nm, O I at 557.7 and 630.0 nm, H I at 656.3 nm, and O II at 732.0 nm. The two auroral cameras have different spatial resolutions, about 10 and 20 km from a spacecraft altitude of 8 Earth radii (Re). The time to acquire and telemeter a 256 x 256-pixel image is about 12 s. The primary scientific objectives of this imaging instrumentation, together with the in-situ observations from the ensemble of ISTP spacecraft, are (1) quantitative assessment of the dissipation of magnetospheric energy into the auroral ionosphere, (2) an instantaneous reference system for the in-situ measurements, (3) development of a substantial model for energy flow within the magnetosphere, (4) investigation of the topology of the magnetosphere, and (5) delineation of the responses of the magnetosphere to substorms and variable solar wind conditions.

    CMOS-3D smart imager architectures for feature detection

    This paper reports a multi-layered smart image sensor architecture for feature extraction based on the detection of interest points. The architecture is conceived for 3-D integrated circuit technologies consisting of two layers (tiers) plus memory. The top tier includes sensing and processing circuitry aimed at performing Gaussian filtering and generating Gaussian pyramids in a fully concurrent way. The circuitry in this tier operates in the mixed-signal domain. It embeds in-pixel correlated double sampling, a switched-capacitor network for Gaussian pyramid generation, analog memories, and a comparator for in-pixel analog-to-digital conversion. This tier can be further split into two for improved resolution: one containing the sensors and another containing a capacitor per sensor plus the mixed-signal processing circuitry. The bottom tier embeds digital circuitry dedicated to the calculation of the Harris, Hessian, and difference-of-Gaussian detectors. The overall system can hence be configured by the user to detect interest points using whichever of these three algorithms is best suited to the application at hand. The paper describes the different kinds of algorithms featured and the circuitry employed in the top and bottom tiers. The Gaussian pyramid is implemented with a switched-capacitor network in less than 50 μs, outperforming more conventional solutions.
    Funding: Xunta de Galicia 10PXIB206037PR; Ministerio de Ciencia e Innovación TEC2009-12686, IPT-2011-1625-430000; Office of Naval Research N00014111031
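
    A software sketch of one of the three detectors may help fix ideas: the snippet below finds difference-of-Gaussian interest points on a small Gaussian scale stack, computed here with scipy rather than the switched-capacitor network. All parameter values are illustrative, not the chip's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_keypoints(img, sigmas=(1.0, 1.6, 2.56, 4.1), thresh=0.02):
    """Difference-of-Gaussian detection: blur at successive scales,
    take adjacent differences, keep local maxima above a threshold.
    Returns (row, col, sigma) triples."""
    img = img.astype(np.float32) / 255.0
    blurred = [gaussian_filter(img, s) for s in sigmas]   # scale stack
    dogs = [b1 - b0 for b0, b1 in zip(blurred, blurred[1:])]
    keypoints = []
    for i, d in enumerate(dogs):
        # local maxima within a 3x3 neighbourhood, per scale
        peaks = (d == maximum_filter(d, size=3)) & (d > thresh)
        for r, c in zip(*np.nonzero(peaks)):
            keypoints.append((r, c, sigmas[i]))
    return keypoints
```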

    SoDaCam: Software-defined Cameras via Single-Photon Imaging

    Reinterpretable cameras are defined by post-processing capabilities that exceed traditional imaging. We present "SoDaCam", which provides reinterpretable cameras at the granularity of photons, from photon-cubes acquired by single-photon devices. Photon-cubes represent the spatio-temporal detections of photons as a sequence of binary frames, at frame rates as high as 100 kHz. We show that simple transformations of the photon-cube, or photon-cube projections, provide the functionality of numerous imaging systems, including exposure bracketing, flutter-shutter cameras, video compressive systems, event cameras, and even cameras that move during exposure. Our photon-cube projections offer the flexibility of software-defined constructs that are limited only by what is computable and by shot noise. We exploit this flexibility to provide new capabilities for the emulated cameras. As an added benefit, our projections provide camera-dependent compression of photon-cubes, which we demonstrate using an implementation of our projections on a novel compute architecture designed for single-photon imaging.
    Comment: Accepted at ICCV 2023 (oral). Project webpage: https://wisionlab.com/project/sodacam
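
    Three of the named projections are easy to sketch in NumPy on a binary photon-cube of shape (T, H, W). The window lengths, shutter code, and event threshold below are illustrative assumptions, not the paper's settings or its compute architecture.

```python
import numpy as np

def exposure_bracket(cube, exposures=(256, 1024, 4096)):
    """Summing binary frames over windows of different lengths
    emulates an exposure bracket; shot noise is the only limit."""
    return [cube[:n].astype(np.uint32).sum(axis=0) for n in exposures]

def flutter_shutter(cube, code):
    """Weight each binary frame by a 0/1 shutter code before summing,
    emulating a coded (flutter-shutter) exposure."""
    code = np.asarray(code, dtype=np.uint32).reshape(-1, 1, 1)
    return (cube[:code.shape[0]] * code).sum(axis=0)

def events(cube, window=64, thresh=8):
    """Emulate an event camera: flag pixels whose photon count changes
    by more than `thresh` between consecutive windows."""
    T = (cube.shape[0] // window) * window
    counts = cube[:T].reshape(-1, window, *cube.shape[1:]).sum(axis=1)
    return np.abs(np.diff(counts.astype(np.int32), axis=0)) > thresh
```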

    Computer vision

    The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of describing two- and three-dimensional worlds are discussed. The representation of features such as texture, edges, curves, and corners is detailed. Recognition methods are described in which cross-correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.

    A Computational Framework for the Structural Change Analysis of 3D Volumes of Microscopic Specimens

    Glaucoma, commonly observed with an elevation in the intraocular pressure (IOP) level, is one of the leading causes of blindness. The lamina cribrosa is a mesh-like structure that provides axonal support for the optic nerves leaving the eye. Changes in the laminar structure under IOP elevation may result in the death of retinal ganglion cells, leading to vision degradation and loss. We have developed a comprehensive computational framework that can assist the study of structural changes in microscopic structures such as the lamina cribrosa. The optical sectioning property of a confocal microscope facilitates imaging a thick microscopic specimen at various depths without physical sectioning; the resulting images are referred to as optical sections. The computational framework includes: 1) a multi-threaded system architecture for tracking a volume-of-interest within a microscopic specimen in a parallel computation environment, using reliable multicast for collective-communication operations; 2) a Karhunen-Loève (KL) expansion-based adaptive noise prefilter for the restoration of the optical sections with an inverse restoration method; 3) a morphological-operator-based ringing metric to quantify the ringing artifacts introduced during iterative restoration of optical sections; 4) an l2-norm-based error metric to evaluate the performance of optical flow algorithms without a priori knowledge of the true motion field; and 5) a Compute-and-Propagate (CNP) framework for iterative optical flow algorithms. The real-time tracking architecture can convert a 2-D confocal microscope into a 4-D confocal microscope with tracking. The adaptive KL filter is suitable for real-time restoration of optical sections. The CNP framework significantly improves the speed and convergence of iterative optical flow algorithms and can reduce errors in the motion-field estimates due to the aperture problem. The performance of the proposed framework is demonstrated on real-life image sequences and on z-stack datasets of random cotton fibers and the lamina cribrosa of a cow retina with experimentally induced glaucoma. The proposed framework can be used for routine laboratory and clinical investigation of microstructures such as cells and tissues and for the evaluation of complex structures such as the cornea, and it has potential use as a surgical guidance tool.
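
    The abstract does not spell out the l2-norm error metric of item 4. One plausible form of a ground-truth-free score, offered as an assumption rather than the paper's definition, is the warping residual sketched below: a flow field that explains the frame pair well should produce a small residual.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_error(frame0, frame1, u, v):
    """Warp frame1 backwards along the estimated field (u, v) and
    return the l2 norm of the residual against frame0. Needs no
    knowledge of the true motion field."""
    h, w = frame0.shape
    rows, cols = np.mgrid[0:h, 0:w].astype(np.float32)
    warped = map_coordinates(frame1.astype(np.float32),
                             [rows + v, cols + u],   # sample along the flow
                             order=1, mode='nearest')
    return float(np.linalg.norm(warped - frame0.astype(np.float32)))
```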

    PyZebrascope: An Open-Source Platform for Brain-Wide Neural Activity Imaging in Zebrafish

    Understanding how neurons interact across the brain to control animal behaviors is one of the central goals of neuroscience. Recent developments in fluorescence microscopy and genetically encoded calcium indicators led to the establishment of whole-brain imaging methods in zebrafish, which record neural activity across a brain-wide volume with single-cell resolution. Pioneering studies of whole-brain imaging used custom light-sheet microscopes whose operation relied on commercially developed and maintained software that is not available globally; hence it has been challenging to disseminate and develop the technology in the research community. Here, we present PyZebrascope, an open-source Python platform designed for neural activity imaging in zebrafish using light-sheet microscopy. PyZebrascope has intuitive user interfaces and supports essential features for whole-brain imaging, such as two orthogonal excitation beams and eye-damage prevention. Its camera module can handle an image data throughput of up to 800 MB/s from camera acquisition to file writing while maintaining stable CPU and memory usage. Its modular architecture allows the inclusion of advanced algorithms for microscope control and image processing. As a proof of concept, we implemented a novel automatic algorithm for maximizing the image resolution in the brain by precisely aligning the excitation beams to the imaging focal plane. PyZebrascope enables whole-brain neural activity imaging in fish behaving in a virtual reality environment. Thus, PyZebrascope will help disseminate and develop light-sheet microscopy techniques in the neuroscience community and advance our understanding of whole-brain neural dynamics during animal behaviors.
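
    The beam-alignment idea can be sketched as a sharpness-maximizing sweep. `set_beam_offset` and `grab_frame` below are hypothetical stand-ins for PyZebrascope's hardware-control calls, and the package's actual alignment algorithm may well differ; this only illustrates the principle.

```python
import numpy as np

def sharpness(img):
    """Tenengrad-style focus score: mean squared gradient magnitude."""
    gy, gx = np.gradient(img.astype(np.float32))
    return float(np.mean(gx**2 + gy**2))

def align_beam(set_beam_offset, grab_frame, offsets_um):
    """Sweep the excitation-beam offset and keep the position that
    maximizes image sharpness, i.e., the beam best aligned with the
    imaging focal plane. Both callables are hypothetical stand-ins."""
    scores = []
    for off in offsets_um:
        set_beam_offset(off)            # move beam relative to focal plane
        scores.append(sharpness(grab_frame()))
    return offsets_um[int(np.argmax(scores))]
```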