
    Smart cmos image sensor for 3d measurement

    3D measurement is concerned with extracting visual information from the geometry of visible surfaces and interpreting the 3D coordinate data thus obtained to detect or track the position, or reconstruct the profile, of an object, often in real time. These systems require image sensors with high position-estimation accuracy and a high data-processing frame rate for handling large volumes of data. A standard imager cannot meet the requirements of fast image acquisition and processing, the two figures of merit for 3D measurement, so dedicated VLSI imager architectures are indispensable for these high-performance sensors. CMOS imaging technology makes it possible to integrate image-processing algorithms on the focal plane of the device, resulting in smart image sensors capable of handling massive image data efficiently. The objective of this thesis is to present a new smart CMOS image sensor architecture for real-time 3D measurement using sheet-beam projection based on active triangulation. By organizing the vision sensor as an ensemble of linear sensor arrays, all working in parallel and processing the image in slices, the complexity of the image-processing task drops from O(N²) to O(N). Inherent in the design is the high degree of parallelism needed for massively parallel processing at the high frame rates required in 3D computation. This work demonstrates a prototype of the smart linear sensor incorporating full testability features for test and debug at both device and system levels. The salient features of this work are asynchronous position-to-pulse-stream conversion, multiple-image binarization, high parallelism and a modular architecture, yielding a frame rate and sub-pixel resolution suitable for real-time 3D measurement.
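The sheet-beam active triangulation underlying such a sensor can be sketched in a few lines. The geometry, parameter names and numbers below are illustrative assumptions, not taken from the thesis: a camera at the origin looks along the z-axis, and a light sheet is projected from a source offset by a baseline, tilted at an angle toward the camera.

```python
import math

def triangulate_depth(x_px, f_mm, pixel_mm, baseline_mm, theta_rad):
    """Depth of a point lit by a sheet beam, via active triangulation.

    x_px: detected peak position on a sensor row (pixels from the optical axis)
    f_mm: lens focal length; pixel_mm: pixel pitch
    baseline_mm: camera-projector separation along x
    theta_rad: tilt of the light sheet relative to the optical axis

    The viewing ray x = z * x_img / f intersects the light plane
    x = b - z * tan(theta), giving z = b * f / (x_img + f * tan(theta)).
    """
    x_img = x_px * pixel_mm
    return baseline_mm * f_mm / (x_img + f_mm * math.tan(theta_rad))
```

Because each row of the sensor carries at most one sheet-beam intersection, each linear array only has to locate one peak in its own row, which is what reduces the per-frame processing from O(N²) pixels to N independent O(N) row searches.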

    A Survey on FPGA-Based Sensor Systems: Towards Intelligent and Reconfigurable Low-Power Sensors for Computer Vision, Control and Signal Processing

    The current trend in the evolution of sensor systems seeks more accuracy and resolution while decreasing size and power consumption. Field Programmable Gate Arrays (FPGAs) provide reprogrammable hardware that can be exploited to obtain a reconfigurable sensor system. This adaptation capability enables the implementation of complex applications using partial reconfiguration at very low power consumption. For highly demanding tasks, FPGAs are favored for the efficiency afforded by their architectural flexibility (parallelism, on-chip memory, etc.), their reconfigurability and their strong performance in algorithm implementation. FPGAs have improved the performance of sensor systems and triggered a clear increase in their use in new fields of application. A new generation of smarter, reconfigurable and lower-power sensors is being developed in Spain based on FPGAs. This paper reviews these developments, describes the FPGA technologies employed by the different research groups and provides an overview of future research in the field. The research leading to these results received funding from the Spanish Government and European FEDER funds (DPI2012-32390), the Valencia Regional Government (PROMETEO/2013/085) and the University of Alicante (GRE12-17).

    Ros-Drill Automation: Visual Feedback Control And Rotational Motion Tracking

    ICSI (intra-cytoplasmic sperm injection) has attracted research interest from both biological and engineering groups, and the technology is constantly evolving to perform the procedure with precision and speed. One such development is the contribution of this thesis. We focus on a relatively recent device called Ros-Drill© (rotationally oscillating drill), early versions of which have already been used effectively on mice. In the first part, we present a procedure to automate a critical part of the operation: initiation of the rotational oscillation. Visual feedback is used to track the pipette tip, and a predetermined species-specific penetration depth triggers the rotational oscillation command. Penetration-depth-based decisions concur with a curvature-based approach. In the second part of the automation, we improve the performance of the rotational motion tracking. Ros-Drill© is an inexpensive set-up that creates high-frequency rotational oscillations at the tip of an injection pipette tracking a harmonic motion profile. These oscillations enable the pipette to drill into cell membranes with minimal biological damage. Such motion control presents no particular difficulty when sufficiently precise motion sensors are available; however, the size, cost and accessibility of hardware components may severely constrain the sensing capability, and trajectory tracking is then adversely affected. This thesis handles such a practical case and presents hardware and software improvements using a commonly available microcontroller and extremely low-resolution position measurements. Biological tests confirm that the mechanical structure plays a crucial role in success.

    Object distance measurement using a single camera for robotic applications

    Visual servoing is defined as controlling robots using data extracted from a vision system, such as the distance of an object with respect to a reference frame, or the length and width of the object. There are three image-based object distance measurement techniques: i) using two cameras, i.e., stereovision; ii) using a single camera, i.e., monovision; and iii) using a time-of-flight camera. The stereovision method uses two cameras to find the object's depth and is highly accurate, but it is costly compared to the monovision technique because of the higher computational burden and the cost of two cameras (rather than one) and related accessories. In addition, in stereovision a larger number of images of the object must be processed in real time, and measurement accuracy decreases as the distance of the object from the cameras increases. In the time-of-flight technique, distance information is obtained by measuring the total time for light to travel to and reflect from the object. Its shortcoming is that the incoming signal is difficult to separate, since it depends on many parameters such as the intensity of the reflected light, the intensity of the background light, and the dynamic range of the sensor. However, for applications such as rescue robots or object manipulation by a robot in a home or office environment, the high-accuracy distance measurement provided by stereovision is not required. Instead, the monovision approach is attractive for some applications due to: i) lower cost and computational burden; and ii) lower complexity from the use of only one camera. Using a single camera for distance measurement, object detection and feature extraction (i.e., finding the length and width of an object) is not yet well researched, and there are very few published works on the topic.
Therefore, using this technique for real-world robotics applications requires further research and improvement. This thesis focuses on the development of object distance measurement and feature extraction algorithms using a single fixed camera, and a single camera with variable pitch angle, based on image-processing techniques. Two improved object distance measurement algorithms are proposed: one for a camera fixed at a given angle in the vertical plane, and one for a camera rotating in a vertical plane. In the proposed algorithms, the object distance and dimensions (length and width) are first obtained using existing image-processing techniques. Because these results are inaccurate due to lens distortion, noise, variable light intensity and other uncertainties such as deviation of the object from the optical axis of the camera, in a second step the distance and dimensions are corrected in the X- and Y-directions, and for the orientation of the object about the Z-axis in the object plane, using experimental data and identification techniques such as the least-squares method. Extensive experiments confirmed that the correction reduced the error in measured distance from 9.4 mm to 2.95 mm, in length from 11.6 mm to 2.2 mm, and in width from 18.6 mm to 10.8 mm, a significant improvement over existing methods. Furthermore, the improved distance measurement method is computationally efficient and can be used for real-time robotic tasks such as pick-and-place and object manipulation in a home or office environment. Master's Thesis.
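The least-squares correction step described above can be sketched as an affine fit between raw and ground-truth measurements. The calibration numbers below are invented for illustration; the thesis fits separate corrections per axis and orientation.

```python
import numpy as np

# Hypothetical calibration data: raw distances (mm) from the image-processing
# pipeline vs. ground-truth distances measured independently.
raw  = np.array([250.0, 400.0, 550.0, 700.0, 850.0])
true = np.array([245.1, 396.8, 548.0, 701.5, 855.2])

# Fit an affine correction d_true ≈ a * d_raw + b by least squares.
A = np.vstack([raw, np.ones_like(raw)]).T
(a, b), *_ = np.linalg.lstsq(A, true, rcond=None)

def correct(d_raw):
    """Apply the identified correction to a raw distance measurement."""
    return a * d_raw + b
```

After identification, the corrected measurements track the ground truth much more closely than the raw pipeline output, which is the effect the thesis quantifies experimentally.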

    Affine multi-view modelling for close range object measurement

    In photogrammetry, sensor modelling with 3D point estimation is a fundamental research topic. Perspective frame cameras offer the mathematical basis for close range modelling approaches; the norm is to employ robust bundle adjustment for simultaneous parameter estimation and 3D object measurement. In 2D-to-3D modelling strategies, image resolution, scale, sampling and geometric distortion are prior factors. Non-conventional image geometries using uncalibrated cameras are established in computer vision approaches; these aim for fast solutions at the expense of precision, defining the projective camera in homogeneous terms and employing linear algorithms. An attractive sensor model free of projective distortion is the affine camera. Affine modelling has been studied in the contexts of geometry recovery, feature detection and texturing in vision, but multi-view approaches for precise object measurement are not yet widely available. This project investigates affine multi-view modelling from a photogrammetric standpoint. A new affine bundle adjustment system has been developed for point-based data observed in close range image networks, allowing calibration, orientation and 3D point estimation. It is processed as a least-squares solution with high redundancy, providing statistical analysis. Starting values are recovered from a combination of implicit perspective and explicit affine approaches. System development focuses on retrieval of orientation parameters, 3D point coordinates and internal calibration, with definition of the system datum, sensor scale and radial lens distortion. Algorithm development is supported by method description through simulation. Initialization and implementation are evaluated with statistical indicators, algorithm convergence and parameter correlations. Object space is assessed by evaluating the 3D point correlation coefficients and error ellipsoids.
Sensor scale is checked by comparing camera systems using quality and accuracy metrics. For independent evaluation, testing is carried out against a perspective bundle adjustment tool with similar indicators. Test datasets are initialized from precise reference image networks. Real affine image networks are acquired with an optical system (~1M pixel CCD cameras with a 0.16x telecentric lens). Analysis of the tests shows that the affine method achieves RMS image misclosure at the sub-pixel level and precision of a few tenths of a micron in object space.
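For reference, the affine camera at the heart of such an adjustment maps 3D points to the image linearly, with no perspective division. The matrix and offset values below are illustrative assumptions only (the 0.16x scale echoes the telecentric lens mentioned above).

```python
import numpy as np

# Affine camera: a 2x3 linear map plus an image offset -- no perspective division.
M = np.array([[0.16, 0.00,  0.01],
              [0.00, 0.16, -0.02]])   # absorbs sensor scale, e.g. a 0.16x telecentric lens
t = np.array([320.0, 240.0])          # principal-point offset in pixels (illustrative)

def project_affine(X):
    """Project an Nx3 array of world points to Nx2 image points."""
    return X @ M.T + t
```

Because the model is linear in the point coordinates, the bundle adjustment normal equations are better conditioned than in the projective case, at the cost of ignoring perspective effects, which is acceptable for the narrow-angle, telecentric geometry studied here.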

    Real-time multispectral fluorescence and reflectance imaging for intraoperative applications

    Fluorescence guided surgery supports doctors by making otherwise unrecognizable anatomical or pathological structures recognizable. For instance, cancer cells can be targeted with one fluorescent dye while muscular tissue, nerves or blood vessels are targeted by other dyes, allowing distinctions beyond conventional color vision. Consequently, intraoperative imaging devices should combine multispectral fluorescence with conventional reflectance color imaging over the entire visible and near-infrared spectral range at video rate, which remains a challenge. In this work, the requirements for such a fluorescence imaging device are analyzed in detail, a concept based on temporal and spectral multiplexing is developed, and a prototype system is built. Experiments and numerical simulations show that the prototype fulfills the design requirements and suggest future improvements. The multispectral fluorescence image stream is processed with linear unmixing to present fluorescent dye images to the surgeon. However, artifacts in the unmixed images may not be noticed by the surgeon, so a tool is developed in this work to indicate unmixing inconsistencies on a per-pixel and per-frame basis. In-silico optimization and a critical review suggest further improvements and provide insight for clinical translation.
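Per-pixel linear unmixing with a residual-based inconsistency measure, as described above, can be sketched as follows. The spectral signatures and channel count are invented for illustration; the actual system measures many more spectral channels.

```python
import numpy as np

# Hypothetical signatures of two dyes across three spectral channels (columns = dyes).
S = np.array([[0.9, 0.1],
              [0.4, 0.5],
              [0.1, 0.8]])

def unmix(pixels):
    """Least-squares unmixing: recover dye abundances from measured spectra.

    pixels: (..., 3) measured channel intensities.
    Returns (..., 2) abundances and a per-pixel residual norm; a large residual
    flags pixels where the linear mixing model is inconsistent with the data.
    """
    flat = pixels.reshape(-1, S.shape[0]).T              # channels x N
    coeffs, *_ = np.linalg.lstsq(S, flat, rcond=None)    # dyes x N
    residual = np.linalg.norm(flat - S @ coeffs, axis=0) # per-pixel inconsistency
    return coeffs.T.reshape(pixels.shape[:-1] + (S.shape[1],)), residual
```

The per-pixel residual is exactly the kind of quantity the inconsistency-indication tool can threshold and overlay, since a pixel whose spectrum is well explained by the dye signatures has a residual near zero.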

    Fast widefield techniques for fluorescence and phase endomicroscopy

    Thesis (Ph.D.)--Boston University. Endomicroscopy is a recent development in biomedical optics which gives researchers and physicians microscope-resolution views of intact tissue to complement macroscopic visualization during endoscopy screening. This thesis presents HiLo endomicroscopy and oblique back-illumination endomicroscopy, fast widefield imaging techniques with fluorescence and phase contrast, respectively. Fluorescence imaging in thick tissue is often hampered by strong out-of-focus background signal. Laser scanning confocal endomicroscopy has been developed for optically sectioned imaging free from background, but its reliance on mechanical scanning fundamentally limits the frame rate and adds significant complexity and expense. HiLo is a fast, simple, widefield fluorescence imaging technique which rejects out-of-focus background signal without the need for scanning. It works by acquiring two images of the sample, under uniform and structured illumination, and synthesizing an optically sectioned result with real-time image processing. Oblique back-illumination microscopy (OBM) is a label-free technique which allows, for the first time, phase gradient imaging of sub-surface morphology in thick scattering tissue with a reflection geometry. OBM back-illuminates the sample with the oblique diffuse reflectance from light delivered via off-axis optical fibers. The use of two diametrically opposed illumination fibers allows simultaneous and independent measurement of phase gradients and absorption contrast. Video-rate single-exposure operation using wavelength multiplexing is demonstrated.
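A heavily simplified sketch of the HiLo combination of the two images: low spatial frequencies are weighted by the local contrast of the structured image (which only in-focus regions retain), and high spatial frequencies are taken from the uniform image. Box filters stand in for the Gaussian filters used in practice, and the filter size and weighting are illustrative assumptions.

```python
import numpy as np

def box_blur(img, k):
    """Simple k x k mean filter (edge-padded) -- a stand-in for a Gaussian low-pass."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def hilo(uniform, structured, k=7, eta=1.0):
    """Sketch of HiLo sectioning from a uniform- and a structured-illumination image."""
    ratio = structured / (uniform + 1e-9)
    # Local contrast of the modulation: high in focus, washed out by defocus.
    contrast = np.sqrt(np.maximum(box_blur(ratio**2, k) - box_blur(ratio, k)**2, 0.0))
    lo = box_blur(contrast * uniform, k)   # in-focus low-frequency estimate
    hi = uniform - box_blur(uniform, k)    # high frequencies from the uniform image
    return hi + eta * lo                   # eta balances the two bands
```

This is only the skeleton of the method; the published algorithm uses matched Gaussian band-pass filters and a calibrated eta so the two bands fuse seamlessly.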

    Remote refocusing light-sheet fluorescence microscopy for high-speed 2D and 3D imaging of calcium dynamics in cardiomyocytes

    The high prevalence and poor prognosis of heart failure are two key drivers for research into cardiac electrophysiology and regeneration. Dyssynchrony in calcium release and loss of structural organization within individual cardiomyocytes (CM) have been linked to reduced contractile strength and arrhythmia. Correlating calcium dynamics with cell microstructure requires multidimensional imaging with high spatiotemporal resolution. In light-sheet fluorescence microscopy (LSFM), selective plane illumination enables fast optically sectioned imaging with low phototoxicity, making it suitable for imaging subcellular dynamics. In this work, a custom remote refocusing LSFM system is applied to studying calcium dynamics in isolated CM, cardiac cell cultures and tissue slices. The spatial resolution of the LSFM system was modelled and experimentally characterized. Simulation of the illumination path in Zemax was used to estimate the light-sheet beam waist and confocal parameter, and automated MATLAB-based image analysis was used to quantify the optical sectioning and the 3D point spread function by Gaussian fitting of bead image intensity distributions. The results demonstrated improved and more uniform axial resolution and optical sectioning with the tighter focused beam used for axially swept light-sheet microscopy. High-speed dual-channel LSFM was used for 2D imaging of calcium dynamics in correlation with the t-tubule structure in left and right ventricle cardiomyocytes at 395 fps. The high spatiotemporal resolution enabled the characterization of calcium sparks. The use of para-nitro-blebbistatin (NBleb), a non-phototoxic, low-fluorescence contraction uncoupler, allowed 2D mapping of the spatial dyssynchrony of calcium transient development across the cell.
Finally, aberration-free remote refocusing was used for high-speed volumetric imaging of calcium dynamics in human induced pluripotent stem-cell derived cardiomyocytes (hiPSC-CM) and their co-culture with adult CM. 3D imaging at up to 8 Hz demonstrated the synchronization of calcium transients in co-culture, with coupling increasing with longer co-culture duration and uninhibited by motion uncoupling with NBleb.
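The Gaussian fitting of bead intensity profiles used to quantify resolution can be approximated with a moment-based estimate, sketched below. This is a simplification of a full nonlinear fit, with a flat-background subtraction assumed; axis values and the test profile are illustrative.

```python
import numpy as np

def gaussian_fwhm(z, intensity):
    """Estimate the FWHM of a bead intensity profile along one axis.

    z: sample positions (e.g. axial positions in microns)
    intensity: measured intensities at those positions
    Uses background-subtracted intensity moments to estimate the Gaussian
    sigma, then converts to FWHM via the 2*sqrt(2*ln 2) factor.
    """
    i = intensity - intensity.min()       # crude flat-background removal
    w = i / i.sum()                       # normalized weights
    mu = np.sum(w * z)                    # profile centroid
    sigma = np.sqrt(np.sum(w * (z - mu) ** 2))
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma
```

Applied to bead images at each position in the field, such an estimate yields the axial-resolution maps used to compare the standard and axially swept light-sheet configurations.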

    Volumetric measurements of the transitional backward facing step flow

    The thesis describes state-of-the-art volumetric measurement techniques and applies a 3D measurement technique, 3D Scanning Particle Tracking Velocimetry, to the transitional backward facing step flow. The measurement technique allows spatial and temporal analysis of the coherent structures present at the backward facing step. The thesis focuses on the extraction and interaction of coherent flow structures such as shear layers and vortical structures.