Smart CMOS image sensor for 3D measurement
3D measurement is concerned with extracting visual information from the geometry of visible surfaces and interpreting the 3D coordinate data thus obtained, to detect or track the position or reconstruct the profile of an object, often in real time. These systems require image sensors with high accuracy of position estimation and a high frame rate of data processing to handle large volumes of data. A standard imager cannot meet the requirements of fast image acquisition and processing, which are the two figures of merit for 3D measurement; hence, dedicated VLSI imager architectures are indispensable for designing these high-performance sensors. CMOS imaging technology makes it possible to integrate image-processing algorithms on the focal plane of the device, resulting in smart image sensors capable of handling massive image data. The objective of this thesis is to present a new smart CMOS image sensor architecture for real-time 3D measurement using sheet-beam projection methods based on active triangulation. By organizing the vision sensor as an ensemble of linear sensor arrays, all working in parallel and processing the image in slices, the complexity of the image-processing task shifts from O(N²) to O(N). Also inherent in the design is the high degree of parallelism needed for the massively parallel processing, at high frame rate, required in 3D computation problems. This work demonstrates a prototype of the smart linear sensor incorporating full testability features for test and debug at both device and system levels. The salient features of this work are asynchronous position-to-pulse-stream conversion, multiple-image binarization, high parallelism, and a modular architecture, resulting in a frame rate and sub-pixel resolution suitable for real-time 3D measurement.
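The O(N²) to O(N) claim can be illustrated with a minimal sketch (not the thesis's actual circuit): in sheet-beam triangulation, each image row contains one bright stripe crossing, so with one processing element per row the N row searches run concurrently and the sequential depth of the per-frame task is O(N) rather than O(N²). The 3-pixel centroid window below is a hypothetical choice to show how sub-pixel resolution arises.

```python
import numpy as np

def row_peak_positions(frame):
    """Locate the projected laser stripe in every row independently.

    Each row search is independent, mirroring an ensemble of linear
    sensor arrays working in parallel; sub-pixel position comes from a
    centroid over a 3-pixel window around the per-row peak (assumption).
    """
    n_rows, n_cols = frame.shape
    cols = np.arange(n_cols)
    peaks = frame.argmax(axis=1)          # coarse peak column per row
    positions = np.empty(n_rows)
    for r in range(n_rows):
        lo, hi = max(peaks[r] - 1, 0), min(peaks[r] + 2, n_cols)
        w = frame[r, lo:hi].astype(float)
        positions[r] = (cols[lo:hi] * w).sum() / w.sum()
    return positions

# synthetic frame: a stripe whose intensity straddles columns 40 and 41
frame = np.zeros((8, 64))
frame[:, 40] = 0.75
frame[:, 41] = 0.25
print(row_peak_positions(frame))  # each row -> 40.25 (sub-pixel)
```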
LUVMI: an innovative payload for the sampling of volatiles at the Lunar poles
The ISECG identifies in situ investigation of the Moon or asteroids as one of the first exploration steps. Europe is developing payload concepts for drilling and sample analysis, a contribution to a 250 kg rover, as well as for sample return. To achieve these missions, ESA depends on international partnerships.
Such missions will be rare and expensive, and the drill/sample sites will be selected from orbital observations that are not calibrated with ground-truth data. Many of the international science community's objectives can be met at lower cost, the chances of mission success improved, and the quality of the science increased by using an innovative, low-mass, mobile robotic payload following the LEAG recommendations.
LUVMI provides a smart, low-mass, innovative, modular mobile payload comprising surface and subsurface sensing with an in-situ sampling technology capable of depth-resolved extraction of volatiles, combined with a volatile analyser (mass spectrometer) capable of identifying the chemical composition of the most important volatiles. This will allow LUVMI to traverse the lunar surface prospecting for volatiles; sample the subsurface to a depth of 10 cm (with a goal of 20 cm); extract water and other loosely bound volatiles; identify the chemical species extracted; and access and sample permanently shadowed regions (PSRs).
The main innovation of LUVMI is an in-situ sampling technology capable of depth-resolved extraction of volatiles, with the analyser itself packaged within the sampling tool so as to maximise transfer efficiency and minimise sample handling, with its attendant mass requirements and risk of sample alteration. By building on nationally, EC- and ESA-funded research and development, the project will develop to TRL 6 instruments that together form a smart modular mobile payload that could be flight-ready in 2020.
The LUVMI sampling instrument will be tested in a highly representative environment, including thermal and vacuum conditions and regolith simulant, and the integrated payload will be demonstrated in a representative environment.
Discovering user mobility and activity in smart lighting environments
"Smart lighting" environments seek to improve energy efficiency, human productivity and health by combining sensors, controls, and Internet-enabled lights with emerging “Internet-of-Things” technology. Interesting and potentially impactful applications involve adaptive lighting that responds to individual occupants' location, mobility and activity. In this dissertation, we focus on the recognition of user mobility and activity using sensing modalities and analytical techniques. This dissertation encompasses prior work using body-worn inertial sensors in one study, followed by smart-lighting inspired infrastructure sensors deployed with lights.
The first approach employs wearable inertial sensors and body area networks that monitor human activities through a user's smart devices. Real-time algorithms are developed to (1) estimate angles of excess forward lean to reduce the risk of falls, (2) identify functional activities, including postures, locomotion, and transitions, and (3) capture gait parameters. Two human activity datasets are collected, from 10 healthy young adults and 297 elderly subjects respectively, for laboratory validation and real-world evaluation. Results show that these algorithms identify all functional activities accurately, with a sensitivity of 98.96% on the 10-subject dataset, and detect walking activities and gait parameters consistently, with high test-retest reliability (p-value < 0.001), on the 297-subject dataset.
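The forward-lean estimate in step (1) can be sketched from first principles: for a quasi-static posture, a torso-worn accelerometer measures mostly gravity, so the trunk's tilt from vertical follows from the ratio of the forward and vertical acceleration components. This is a minimal illustration under that quasi-static assumption, not the dissertation's actual algorithm; the axis convention (z along the spine, x forward) is also an assumption.

```python
import math

def forward_lean_deg(ax, az):
    """Trunk forward-lean angle from a torso-worn accelerometer.

    Sketch only: assumes a quasi-static posture so gravity dominates
    the measurement. Axes (assumed): z up along the spine, x forward.
    """
    return math.degrees(math.atan2(ax, az))

g = 9.81
# upright: gravity lies entirely along the spine axis
print(forward_lean_deg(0.0, g))  # 0.0
# leaning forward 30 degrees tips part of gravity into the x axis
ax = g * math.sin(math.radians(30))
az = g * math.cos(math.radians(30))
print(round(forward_lean_deg(ax, az), 1))  # 30.0
```

A real-time system would additionally fuse gyroscope data to reject the dynamic accelerations that break the quasi-static assumption during locomotion.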
The second approach leverages pervasive "smart lighting" infrastructure to track human location and predict activities. A use-case-oriented design methodology guides the choice of sensor operating parameters against localization performance metrics from a system perspective. Integrating a network of low-resolution time-of-flight sensors in ceiling fixtures, a recursive 3D location estimation formulation is established that links a physical indoor space to an analytical simulation framework. Based on indoor location information, a label-free, clustering-based method is developed to learn user behaviors and activity patterns. Location datasets are collected while users perform unconstrained, uninstructed activities in the smart lighting testbed under different layout configurations. Results show that activity recognition performance, measured in terms of CCR, ranges from approximately 90% to 100% across a wide range of spatio-temporal resolutions on these location datasets, insensitive to reconfiguration of the environment layout and to the presence of multiple users.
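The label-free clustering idea can be sketched with a toy example: occupancy fixes from the ceiling sensors form dense clouds around recurring activity zones, which an unsupervised clusterer can discover without annotated labels. The tiny k-means below (with a farthest-point initialisation, a simplifying choice) stands in for whatever clustering method the dissertation actually uses; the two "zones" are synthetic.

```python
import numpy as np

def kmeans(points, k, iters=50):
    """Tiny label-free k-means over (x, y) location fixes.

    Farthest-point initialisation: start from the first point, then
    repeatedly add the point farthest from the chosen centers.
    """
    centers = [points[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(points - c, axis=1) for c in centers],
                   axis=0)
        centers.append(points[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)          # assign each fix to a zone
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

# synthetic location fixes around two hypothetical activity zones (metres)
rng = np.random.default_rng(1)
desk = rng.normal([1.0, 2.0], 0.1, (50, 2))
door = rng.normal([4.0, 0.5], 0.1, (50, 2))
centers, labels = kmeans(np.vstack([desk, door]), k=2)
print(np.round(centers, 1))
```

In the actual system, dwell time and temporal ordering of the discovered zones would further distinguish activities that share a location.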
Compact Real-Time Inter-Frame Histogram Builder for 15-Bits High-Speed ToF-Imagers Based on Single-Photon Detection
Time-of-flight (ToF) image sensors based on single-photon detection, i.e., SPADs, require some filtering of pixel readings. Accurate depth measurements are only possible if the jitter of the detector is mitigated. Moreover, the time stamp needs to be effectively separated from uncorrelated noise, such as dark counts and background illumination. A powerful tool for this is building a histogram of a number of pixel readings. Future generations of ToF imagers seek to increase spatial and temporal resolution along with the dynamic range and frame rate. Under these circumstances, storing the complete histogram for every pixel becomes practically impossible. Considering that most of the information contained in the histogram represents noise, we propose a highly efficient method to store just the relevant data required for the ToF computation. This method makes use of the shifted inter-frame histogram. It requires a memory up to 128 times smaller than storing the complete histogram when pixel values are coded on up to 15 bits. Moreover, a fixed 2⁸-word memory is enough to process histograms containing up to 2¹⁵ bins. In exchange, the overall frame rate only decreases to one half. The hardware implementation of this algorithm is presented. Its remarkable robustness at low SNR of the ToF estimation is demonstrated by Matlab simulations and by an FPGA implementation using input data from a SPAD camera prototype.
Funding: Office of Naval Research (USA) N000141410355; Ministerio de Economía y Competitividad TEC2015-66878-C3-1-R; Junta de Andalucía TIC 2338-2013; European Union H2020 76586
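The numbers in the abstract (2⁸ words of memory for 2¹⁵-bin histograms, at half the frame rate) are consistent with a coarse-then-fine, two-frame histogramming scheme. The sketch below illustrates that general idea, not the paper's exact shifted inter-frame algorithm: one frame histograms the upper bits to find the coarse peak region, a second frame histograms only readings falling in that region at full resolution, and both histograms fit in the small fixed memory.

```python
import numpy as np

def two_pass_tof_peak(readings_pass1, readings_pass2,
                      total_bits=15, mem_bits=8):
    """Coarse/fine inter-frame histogram sketch.

    Pass 1: histogram the upper bits (<= 2^mem_bits coarse bins) and
    pick the bin with the most correlated photons. Pass 2: histogram
    only readings inside that coarse bin, at full resolution. Memory
    stays at 2^mem_bits words instead of 2^total_bits; frame rate
    halves because two acquisitions are needed per estimate.
    """
    shift = total_bits - mem_bits                       # 7 bits
    coarse = np.bincount(readings_pass1 >> shift,
                         minlength=1 << mem_bits)
    lo = int(coarse.argmax()) << shift                  # window start
    in_win = (readings_pass2 >= lo) & (readings_pass2 < lo + (1 << shift))
    fine = np.bincount(readings_pass2[in_win] - lo,
                       minlength=1 << shift)
    return lo + int(fine.argmax())                      # 15-bit ToF code

# synthetic SPAD data: true ToF code 20000 over uniform background counts
rng = np.random.default_rng(0)
signal = rng.normal(20000, 3, 2000).astype(int)
noise = rng.integers(0, 1 << 15, 2000)
p1 = np.concatenate([signal, noise])
p2 = np.concatenate([signal, noise])
print(two_pass_tof_peak(p1, p2))
```

With these parameters the memory reduction is 2¹⁵ / 2⁸ = 128, matching the factor quoted in the abstract.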
Advanced photon counting techniques for long-range depth imaging
The Time-Correlated Single-Photon Counting (TCSPC) technique has emerged as a candidate approach for Light Detection and Ranging (LiDAR) and active depth imaging applications. The work of this Thesis concentrates on the development and investigation of functional TCSPC-based long-range scanning time-of-flight (TOF) depth imaging systems. Although these systems have several different configurations and functions, all can facilitate depth profiling of remote targets at low light levels and with good surface-to-surface depth resolution. Firstly, a Superconducting Nanowire Single-Photon Detector (SNSPD) and an InGaAs/InP Single-Photon Avalanche Diode (SPAD) module were employed to develop kilometre-range TOF depth imaging systems at wavelengths of ~1550 nm. Secondly, a TOF depth imaging system at a wavelength of 817 nm incorporating a Complementary Metal-Oxide-Semiconductor (CMOS) 32×32 Si-SPAD detector array was developed. This system was used with structured illumination to examine the potential for covert, eye-safe and high-speed depth imaging. To improve the light-coupling efficiency onto the detectors, the arrayed CMOS Si-SPAD detector chips were integrated with microlens arrays using flip-chip bonding technology, improving the fill factor by up to a factor of 15. Thirdly, a multispectral TCSPC-based full-waveform LiDAR system was developed using a tunable broadband pulsed supercontinuum laser source providing simultaneous multispectral illumination at wavelengths of 531, 570, 670 and ~780 nm. The multispectral reflectance data acquired on a tree were used to determine physiological parameters relating to biomass and foliage photosynthetic efficiency as a function of depth through the tree profile. Fourthly, depth images were estimated using spatial correlation techniques to reduce the aggregate number of photons required for depth reconstruction with low error. A depth imaging system was characterised and re-configured to reduce the effects of scintillation due to atmospheric turbulence. In addition, depth images were analysed in terms of spatial and depth resolution.
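The core TCSPC depth estimate common to all four systems can be sketched simply: photon arrival times are histogrammed, the histogram is cross-correlated with the instrumental response so the return peak stands out against detector jitter and uncorrelated background, and the peak time maps to range via c·t/2. The window length, bin width and Gaussian instrumental response below are illustrative assumptions, not the thesis's parameters.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tcspc_depth(timestamps_ns, irf, bin_ns=0.05, window_ns=100.0):
    """Single-pixel TCSPC range estimate (sketch).

    Histogram photon arrival times, cross-correlate with the
    instrumental response function (IRF) to suppress jitter and
    background, then convert the peak time to range = c * t / 2.
    """
    nbins = int(window_ns / bin_ns)
    hist, edges = np.histogram(timestamps_ns, bins=nbins,
                               range=(0.0, window_ns))
    xc = np.correlate(hist.astype(float), irf, mode="same")
    t_peak = edges[xc.argmax()] + bin_ns / 2            # ns
    return 0.5 * C * t_peak * 1e-9                      # metres

# synthetic return: target at 6.0 m (round trip ~40.03 ns) over
# uniform background counts; Gaussian IRF is an assumption
rng = np.random.default_rng(0)
t0 = 2 * 6.0 / C * 1e9
sig = rng.normal(t0, 0.1, 5000)
bkg = rng.uniform(0.0, 100.0, 5000)
irf = np.exp(-0.5 * np.linspace(-3, 3, 13) ** 2)
print(round(tcspc_depth(np.concatenate([sig, bkg]), irf), 2))
```

The spatial-correlation work in the thesis goes further, sharing photon information between neighbouring pixels so that far fewer photons per pixel suffice.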
A review of snapshot multidimensional optical imaging: Measuring photon tags in parallel
Multidimensional optical imaging has seen remarkable growth in the past decade. Rather than measuring only the two-dimensional spatial distribution of light, as in conventional photography, multidimensional optical imaging captures light in up to nine dimensions, providing unprecedented information about incident photons' spatial coordinates, emittance angles, wavelength, time, and polarization. Multidimensional optical imaging can be accomplished either by scanning or by parallel acquisition. Compared with scanning-based imagers, parallel acquisition, also dubbed snapshot imaging, has a prominent advantage in maximizing optical throughput, particularly when measuring a datacube of high dimensions. Here, we first categorize snapshot multidimensional imagers based on their acquisition and image reconstruction strategies, then highlight the snapshot advantage in the context of optical throughput, and finally discuss their state-of-the-art implementations and applications.