    Optical Flow in a Smart Sensor Based on Hybrid Analog-Digital Architecture

    The purpose of this study is to develop a motion sensor (delivering optical flow estimations) using a platform that includes the sensor itself, focal-plane processing resources, and co-processing resources on a general-purpose embedded processor, all implemented on a single device as a SoC (System-on-a-Chip). Optical flow is the 2-D projection onto the camera plane of the 3-D motion present in the world scene. This motion representation is widely known and applied in the scientific community to solve a wide variety of problems. Most applications based on motion estimation must run in real time, so this restriction must be taken into account. In this paper, we show an efficient approach to estimating motion velocity vectors with an architecture based on a focal-plane processor combined on-chip with a 32-bit NIOS II processor. Our approach relies on a simplification of the original optical flow model and its efficient implementation on a platform that combines an analog (focal-plane) and a digital (NIOS II) processor. The system is fully functional and is organized in stages, where the early (focal-plane) stage mainly pre-processes the input image stream to reduce the computational cost of the post-processing (NIOS II) stage. We present the co-design techniques employed and analyze this novel architecture. We evaluate the system's performance and accuracy with respect to the different approaches described in the literature. We also discuss the advantages of the proposed approach, as well as the degree of efficiency that can be obtained from the focal-plane processing capabilities of the system. The final outcome is a low-cost smart sensor for optical flow computation with real-time performance and reduced power consumption that can be used in very diverse application domains.
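
    The paper's simplified model is not reproduced in this abstract, but the NumPy sketch below illustrates the kind of two-stage split it describes: a smoothing pre-processing step (standing in for the analog focal-plane stage) followed by a standard gradient-based, Lucas-Kanade-style velocity solve (standing in for the NIOS II stage). All function names and parameter values are illustrative, not from the paper.

        import numpy as np
        from scipy.ndimage import gaussian_filter, sobel, uniform_filter

        def focal_plane_stage(frame, sigma=1.5):
            # Stand-in for the analog focal-plane pre-processing: smooth
            # the raw frame to reduce noise before the digital stage.
            return gaussian_filter(frame.astype(np.float64), sigma)

        def nios_stage(f0, f1, win=7):
            # Stand-in for the digital (NIOS II) post-processing: classic
            # gradient-based flow, solving Ix*u + Iy*v + It = 0 by least
            # squares over a local window.
            Ix = sobel(f0, axis=1) / 8.0   # normalize Sobel to unit gradient
            Iy = sobel(f0, axis=0) / 8.0
            It = f1 - f0
            Ixx = uniform_filter(Ix * Ix, win)
            Ixy = uniform_filter(Ix * Iy, win)
            Iyy = uniform_filter(Iy * Iy, win)
            Ixt = uniform_filter(Ix * It, win)
            Iyt = uniform_filter(Iy * It, win)
            det = Ixx * Iyy - Ixy ** 2
            det = np.where(np.abs(det) < 1e-6, np.inf, det)  # mask flat regions
            u = (-Iyy * Ixt + Ixy * Iyt) / det
            v = (Ixy * Ixt - Ixx * Iyt) / det
            return u, v   # per-pixel velocity vectors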

    GPU accelerated real-time multi-functional spectral-domain optical coherence tomography system at 1300 nm.

    We present a GPU-accelerated multi-functional spectral-domain optical coherence tomography system at 1300 nm. The system is capable of real-time processing and display of every intensity image, comprised of 512 pixels by 2048 A-lines acquired at 20 frames per second. The update rate for all four images of 512 pixels by 2048 A-lines displayed simultaneously (intensity, phase retardation, flow, and en face view) is approximately 10 frames per second. Additionally, we report for the first time the characterization of phase retardation and diattenuation by a sample comprised of a stacked set of polarizing film and a wave plate. The calculated optic-axis orientation, phase retardation, and diattenuation match well with expected values. The speed of each facet of the multi-functional OCT CPU-GPU hybrid acquisition system (intensity, phase retardation, and flow) was demonstrated separately by imaging a horseshoe crab lateral compound eye, a non-uniformly heated chicken muscle, and a microfluidic device. A mouse brain with a thin-skull preparation was imaged in vivo, demonstrating the capability of the system for live multi-functional OCT visualization.
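
    As an illustration of the per-A-line workload a GPU takes over in such a system, the sketch below implements the standard SD-OCT intensity chain (background subtraction, spectral windowing, FFT, log magnitude) in NumPy; the paper's CUDA kernels, wavelength-to-wavenumber resampling, and polarization processing are not reproduced, and all names here are illustrative.

        import numpy as np

        def oct_intensity(spectra):
            # spectra: (n_alines, n_pixels) raw spectrometer frames,
            # assumed already sampled linearly in wavenumber k.
            x = spectra.astype(np.float64)
            x -= x.mean(axis=0)                  # remove reference/DC term
            x *= np.hanning(x.shape[1])          # spectral shaping window
            alines = np.fft.rfft(x, axis=1)      # depth profile per A-line
            return 20 * np.log10(np.abs(alines) + 1e-12)  # intensity in dB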

    Real-time Visual Flow Algorithms for Robotic Applications

    Vision offers important sensor cues to modern robotic platforms. Applications such as control of aerial vehicles, visual servoing, simultaneous localization and mapping, navigation and, more recently, learning are examples where visual information is fundamental to accomplishing tasks. However, the use of computer vision algorithms carries the computational cost of extracting useful information from the stream of raw pixel data. The most sophisticated algorithms use complex mathematical formulations, leading typically to computationally expensive and, consequently, slow implementations. Even with modern computing resources, high-speed and high-resolution video feeds can only be used for basic image processing operations. For a vision algorithm to be integrated on a robotic system, the output of the algorithm should be provided in real time, that is, at least at the same frequency as the control logic of the robot. With robotic vehicles becoming more dynamic and ubiquitous, this places higher requirements on the vision processing pipeline. This thesis addresses the problem of estimating dense visual flow information in real time. The contributions of this work are threefold. First, it introduces a new filtering algorithm for the estimation of dense optical flow at frame rates as fast as 800 Hz for 640x480 image resolution. The algorithm follows an update-prediction architecture to estimate dense optical flow fields incrementally over time. A fundamental component of the algorithm is the modeling of the spatio-temporal evolution of the optical flow field by means of partial differential equations. Numerical predictors can implement such PDEs to propagate the current flow estimate forward in time. Experimental validation of the algorithm is provided using a high-speed ground-truth image dataset as well as real-life video data at 300 Hz. The second contribution is a new type of visual flow named structure flow. Mathematically, structure flow is the three-dimensional scene flow scaled by the inverse depth at each pixel in the image. Intuitively, it is the complete velocity field associated with image motion, including both optical flow and the scale change, or apparent divergence, of the image. Analogously to optical flow, structure flow provides a robotic vehicle with perception of the motion of the environment as seen by the camera. However, structure flow encodes the full 3D image motion of the scene, whereas optical flow only encodes the component on the image plane. An algorithm to estimate structure flow from image and depth measurements is proposed based on the same filtering idea used to estimate optical flow. The final contribution is the spherepix data structure for processing spherical images. This data structure is the numerical back-end used for the real-time implementation of the structure flow filter. It consists of a set of overlapping patches covering the surface of the sphere. Each individual patch approximately preserves properties such as orthogonality and equidistance of points, thus allowing efficient implementations of low-level, classical 2D convolution-based image processing routines such as Gaussian filters and numerical derivatives. These algorithms are implemented on GPU hardware and can be integrated into future Robotic Embedded Vision systems to provide fast visual information to robotic vehicles.
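
    The thesis develops its exact PDEs and numerical predictors in full; as a minimal sketch of the prediction half of such an update-prediction filter, the code below propagates a dense flow field forward in time by advecting it along itself with a semi-Lagrangian step. The choice of transport model and all names are illustrative assumptions, not the thesis's formulation.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def predict_flow(u, v, dt=1.0):
            # Semi-Lagrangian step for the transport PDE
            #   d(flow)/dt + (flow . grad)(flow) = 0:
            # trace each pixel back along its own velocity and resample.
            h, w = u.shape
            yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
            src = [yy - dt * v, xx - dt * u]     # departure points
            u_pred = map_coordinates(u, src, order=1, mode='nearest')
            v_pred = map_coordinates(v, src, order=1, mode='nearest')
            return u_pred, v_pred                # prior for the next update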

    Intelligent Motion Detection and Tracking System

    The rapid development in the field of digital image processing has made motion detection and tracking an attractive research topic. Until recent years, real-time video applications were impractical due to the expense of the computation involved. An intelligent method to analyze the motion in a video stream using background subtraction, temporal differencing, and optical flow is proposed. The new method solves the computational-time problem by using a reliable technique called Fast Pixels Selection. A low-cost tracking system is also proposed. This tracking system consists of a camera, a PC, a motor, and a data acquisition card, and is designed to detect and track any moving target automatically.
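
    The Fast Pixels Selection technique itself is not detailed in the abstract; the sketch below shows only the temporal-differencing building block of such a pipeline, thresholding the frame difference and returning a bounding box that could steer the motorized camera. Names and the threshold value are illustrative.

        import numpy as np

        def detect_motion(prev, curr, thresh=25):
            # Temporal differencing: pixels whose absolute change between
            # consecutive frames exceeds the threshold are marked as moving.
            diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
            ys, xs = np.nonzero(diff > thresh)
            if ys.size == 0:
                return None                      # no motion detected
            return xs.min(), ys.min(), xs.max(), ys.max()  # target box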

    Ultra-Fast Displaying Spectral Domain Optical Doppler Tomography System Using a Graphics Processing Unit

    We demonstrate an ultrafast-displaying spectral-domain optical Doppler tomography system using graphics processing unit (GPU) computing. The calculation of the FFT and the Doppler frequency shift is accelerated by the GPU. Our system can display processed OCT and ODT images simultaneously in real time at 120 fps for 1,024 pixels x 512 lateral A-scans. The computing time for the Doppler information depends on the size of the moving-average window, but with a window size of 32 pixels the ODT computation time is only 8.3 ms, which is comparable to the data acquisition time. The phase noise also decreases significantly with the window size. Real-time display performance for OCT/ODT is very important for clinical applications that need immediate diagnosis for screening or biopsy, and intraoperative surgery can benefit greatly from the real-time display of flow-rate information provided by this technology. Moreover, the GPU is an attractive tool for clinical and commercial systems with functional OCT features.
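
    The Doppler computation the GPU accelerates is, in essence, a windowed phase difference between adjacent A-lines. The sketch below shows a common Kasai-style autocorrelation formulation using the 32-pixel moving-average window mentioned above; the paper's exact kernel is not reproduced, and names are illustrative.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def doppler_phase(alines, window=32):
            # alines: complex (depth x A-scan) array after the FFT stage.
            # Autocorrelate neighbouring A-lines, average over the moving
            # window, and take the angle; the result is proportional to the
            # axial flow velocity. Larger windows lower phase noise at the
            # cost of computation time.
            corr = alines[:, 1:] * np.conj(alines[:, :-1])
            corr = (uniform_filter(corr.real, window)
                    + 1j * uniform_filter(corr.imag, window))
            return np.angle(corr)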

    Performance Analysis for Visual Planetary Landing Navigation Using Optical Flow and DEM Matching

    Visual navigation for planetary landing vehicles poses many scientific and technical challenges due to inclined and rather high-velocity approach trajectories, a complex 3D environment, and high computational requirements for real-time image processing. High relative navigation accuracy at the landing site is required for obstacle avoidance and operational constraints. The current paper discusses detailed performance analysis results for a recently published concept of a visual navigation system, based on a mono camera as the vision sensor and on matching of the recovered and reference 3D models of the landing site. The recovered 3D models are produced by real-time, instantaneous optical flow processing of the navigation camera images. An embedded optical correlator is introduced, which allows robust and ultra-high-speed optical flow processing under different and even unfavorable illumination conditions. The performance analysis is based on a detailed software simulation model of the visual navigation system, including the optical correlator as the key component for ultra-high-speed image processing. The paper recalls the general structure of the navigation system and presents detailed end-to-end visual navigation performance results for a Mercury landing reference mission in terms of different visual navigation entry conditions, reference DEM resolution, navigation camera configuration, and auxiliary sensor information.
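
    The abstract does not spell out the DEM-matching algorithm; purely as an illustration of how such a matching step can be posed, the code below scores translational alignments of a recovered DEM against a reference DEM with zero-normalized cross-correlation. All names and the exhaustive-search strategy are assumptions, not details from the paper.

        import numpy as np

        def zncc(a, b):
            # Zero-normalized cross-correlation between two DEM patches.
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return (a * b).sum() / denom if denom > 0 else 0.0

        def match_dem(recovered, reference):
            # Exhaustive search over integer offsets of the recovered DEM
            # within the larger reference DEM; a real system would refine
            # the result to sub-pixel accuracy and fuse attitude data.
            rh, rw = recovered.shape
            best_score, best_offset = -np.inf, (0, 0)
            for dy in range(reference.shape[0] - rh + 1):
                for dx in range(reference.shape[1] - rw + 1):
                    s = zncc(recovered, reference[dy:dy + rh, dx:dx + rw])
                    if s > best_score:
                        best_score, best_offset = s, (dy, dx)
            return best_offset, best_score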

    Vision System Measures Motions of Robot and External Objects

    A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, the visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates; the overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images, and, for each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to utilize the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. To meet this challenge, the visual-odometry subsystem (the subsystem that estimates the motion of the camera pair relative to the stationary background) utilizes the 3D information from stereoscopy and the 2D information from optical flow: it computes the relationship between the 3D and 2D motions and uses a least-mean-squares technique to estimate the motion parameters. The least-mean-squares technique is suitable for real-time implementation when the number of external-moving-object pixels is smaller than the number of stationary-background pixels.
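
    The article does not give its equations, but the textbook instantaneous motion-field model shows how such a least-squares step can combine per-pixel stereo depth with optical flow: each pixel contributes two linear equations in the six unknown camera velocities. The sketch below (normalized image coordinates, focal length 1; all names illustrative) solves them with ordinary least squares.

        import numpy as np

        def egomotion_lstsq(x, y, Z, u, v):
            # x, y: normalized image coordinates of sampled pixels (1-D)
            # Z: stereo depth at those pixels; u, v: measured optical flow.
            # Motion-field model: flow is linear in translational velocity
            # (vx, vy, vz) and angular velocity (wx, wy, wz).
            n = x.size
            invZ = 1.0 / Z
            A = np.zeros((2 * n, 6))
            b = np.empty(2 * n)
            A[0::2] = np.column_stack([-invZ, np.zeros(n), x * invZ,
                                       x * y, -(1 + x * x), y])
            A[1::2] = np.column_stack([np.zeros(n), -invZ, y * invZ,
                                       1 + y * y, -x * y, -x])
            b[0::2], b[1::2] = u, v
            motion, *_ = np.linalg.lstsq(A, b, rcond=None)
            return motion   # (vx, vy, vz, wx, wy, wz)

    Pixels whose measured flow disagrees strongly with the fitted model can then be flagged as candidates for independently moving external objects.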

    Dual-display laparoscopic laser speckle contrast imaging for real-time surgical assistance

    Laser speckle contrast imaging (LSCI) utilizes the speckle pattern of a laser to determine the blood flow in tissues. Current approaches for its use in a clinical setting require a camera system with a laser source on a separate optical axis, making them unsuitable for minimally invasive surgery (MIS). With blood-flow visualization, bowel viability, for example, can be determined; thus, LSCI can be a valuable tool in gastrointestinal surgery. In this work, we develop a first-of-its-kind dual-display laparoscopic vision system integrating LSCI with a commercially available 10 mm rigid laparoscope, in which the laser shares the optical axis of the laparoscope. Designed for MIS, our system permits standard color RGB, label-free vasculature imaging, and fused display modes. A graphics-processing-unit-accelerated algorithm enables the real-time display of the three modes at the surgical site. We demonstrate the capability of our system for imaging relative flow rates in a microfluidic phantom with channels as small as 200 μm at a working distance of 1–5 cm from the laparoscope tip to the phantom surface. Using our system and a rat bowel occlusion model, we reveal early changes in bowel perfusion that are invisible to standard color imaging. Furthermore, we apply our system for the first time to imaging intestinal vasculature during MIS in a swine.
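
    The statistic behind LSCI is simple to state: the spatial speckle contrast K = sigma/mean over a small sliding window, which drops where moving red blood cells blur the speckle. The NumPy sketch below computes this per pixel; the paper's real-time version runs this kind of computation on the GPU, and the window size here is an assumed typical value.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def speckle_contrast(raw, window=7):
            # K = sigma / mean over a sliding window; lower K indicates
            # more motion blur of the speckle, i.e. higher flow.
            img = raw.astype(np.float64)
            mean = uniform_filter(img, window)
            mean_sq = uniform_filter(img * img, window)
            var = np.maximum(mean_sq - mean * mean, 0.0)
            return np.sqrt(var) / (mean + 1e-12)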

    Real time mass flow rate measurement using multiple fan beam optical tomography

    This paper presents the implementation of a multiple fan-beam projection technique using optical fibre sensors for a tomography system. In dynamic experiments on solid/gas flow using plastic beads in a gravity flow rig, the designed optical fibre sensors proved reliable in measuring the mass flow rate below 40% of flow. Another important matter discussed is the image processing rate (IPR). Overall, the applied image reconstruction algorithms, the construction of the sensor, and the designed software are considered reliable and suitable for performing real-time image reconstruction and mass flow rate measurements.
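
    The abstract does not name the reconstruction algorithm; a common, minimal choice for optical-fibre tomography is linear back-projection over precomputed sensitivity maps, sketched below purely for illustration. The sensitivity array, its normalization, and all names are assumptions rather than details from the paper.

        import numpy as np

        def linear_back_projection(measurements, sensitivity):
            # measurements: (n_pairs,) attenuation readings, one per
            # emitter-detector pair; sensitivity: (n_pairs, H, W) maps
            # giving each pair's contribution to every image pixel.
            img = np.tensordot(measurements, sensitivity, axes=1)
            norm = sensitivity.sum(axis=0)
            return img / np.maximum(norm, 1e-12)   # concentration image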