
    Count-Free Single-Photon 3D Imaging with Race Logic

    Single-photon cameras (SPCs) have emerged as a promising technology for high-resolution 3D imaging. A single-photon 3D camera determines the round-trip time of a laser pulse by capturing the arrival of individual photons at each camera pixel. Constructing photon-timestamp histograms is a fundamental operation for a single-photon 3D camera. However, in-pixel histogram processing is computationally expensive and requires a large amount of memory per pixel. Digitizing and transferring photon timestamps to an off-sensor histogramming module is bandwidth- and power-hungry. Here we present an online approach for distance estimation without explicitly storing photon counts. The two key ingredients of our approach are (a) processing photon streams using race logic, which maintains photon data in the time-delay domain, and (b) constructing count-free equi-depth histograms. Equi-depth histograms are a succinct representation for "peaky" distributions, such as those obtained by an SPC pixel from a laser pulse reflected by a surface. Our approach uses a binner element that converges to the median (or, more generally, to another quantile) of a distribution. We cascade multiple binners to form an equi-depth histogrammer that produces multi-bin histograms. Our evaluation shows that this method can provide an order-of-magnitude reduction in bandwidth and power consumption while maintaining distance reconstruction accuracy similar to conventional processing methods. Comment: Accepted for presentation at the 2023 International Conference on Computational Photography.
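    As a rough illustration of the count-free idea, the sketch below uses a Frugal-style streaming quantile update in Python: each binner nudges a bin boundary up or down by one step per photon, so it converges to its target quantile without ever storing counts. The update rule, the parallel (rather than hierarchically cascaded) arrangement of binners, and all parameter values are illustrative assumptions and not the paper's race-logic hardware design.

```python
import numpy as np

class StreamingBinner:
    """Count-free streaming estimator that drifts toward a chosen quantile.

    Software sketch only (Frugal-style update): the boundary moves up or down
    by one step per photon, so no photon counts are stored. The paper's
    race-logic hardware realizes a similar convergence in the time-delay domain.
    """

    def __init__(self, quantile=0.5, init=0.0, step=1.0, rng=None):
        self.q = quantile
        self.boundary = init
        self.step = step
        self.rng = rng or np.random.default_rng(0)

    def update(self, timestamp):
        # Nudge the boundary so that, at equilibrium, a fraction q of
        # timestamps falls below it.
        if timestamp > self.boundary and self.rng.random() < self.q:
            self.boundary += self.step
        elif timestamp < self.boundary and self.rng.random() < 1.0 - self.q:
            self.boundary -= self.step
        return self.boundary


def equi_depth_edges(timestamps, n_bins=8):
    """Run binners targeting the k/n_bins quantiles to obtain equi-depth
    bin edges (no counts are ever accumulated)."""
    start = float(np.median(timestamps[:16]))
    binners = [StreamingBinner(quantile=k / n_bins, init=start)
               for k in range(1, n_bins)]
    for t in timestamps:
        for b in binners:
            b.update(t)
    return [b.boundary for b in binners]


# Toy usage: a "peaky" photon-arrival distribution (laser return + background).
rng = np.random.default_rng(1)
stamps = np.concatenate([rng.normal(500, 5, 5000), rng.uniform(0, 1000, 500)])
rng.shuffle(stamps)
print(equi_depth_edges(stamps, n_bins=8))  # edges crowd around the laser return
```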

    Occluder-aided non-line-of-sight imaging

    Non-line-of-sight (NLOS) imaging is the inference of the properties of objects or scenes outside of the direct line-of-sight of the observer. Such inferences can range from a 2D photograph-like image of a hidden area, to determining the position, motion or number of hidden objects, to 3D reconstructions of a hidden volume. NLOS imaging has many enticing potential applications, such as leveraging the existing hardware in many automobiles to identify hidden pedestrians, vehicles or other hazards and hence plan safer trajectories. Other potential application areas include improving navigation for robots or drones by anticipating occluded hazards, peering past obstructions in medical settings, or surveying unreachable areas in search-and-rescue operations. Most modern NLOS imaging methods fall into one of two categories: active imaging methods that have some control of the illumination of the hidden area, and passive methods that simply measure light that already exists. This thesis introduces two NLOS imaging methods, one of each category, along with modeling and data processing techniques that are more broadly applicable. The methods are linked by their use of objects (‘occluders’) that reside somewhere between the observer and the hidden scene and block some possible light paths. Computational periscopy, a passive method, can recover the unknown position of an occluding object in the hidden area and then recover an image of the hidden scene behind it. It does so using only a single photograph of a blank relay wall taken by an ordinary digital camera. We also develop a framework using an optimized preconditioning matrix that improves the speed at which these reconstructions can be made and greatly improves robustness to ambient light. We further develop the tools necessary to demonstrate recovery of scenes at multiple unknown depths, paving the way towards three-dimensional reconstructions. Edge-resolved transient imaging, an active method, enables the formation of 2.5D representations – a plan view plus heights – of large-scale scenes. A pulsed laser illuminates spots along a small semi-circle on the floor, centered on the edge of a vertical wall such as in a doorway. The wall edge occludes some light paths, allowing laser light reflecting off the floor to illuminate only certain portions of the hidden area beyond the wall, depending on where along the semi-circle the illuminated spot lies. The times at which photons return following each laser pulse are recorded. The occluding wall edge provides angular resolution, and time-resolved sensing provides radial resolution. This novel acquisition strategy, along with a scene response model and reconstruction algorithm, allows 180° field-of-view reconstructions of large-scale scenes, unlike other active imaging methods. Lastly, we introduce a sparsity penalty named mutually exclusive group sparsity (MEGS), which can be used as a constraint or regularization in optimization problems to promote solutions in which certain components are mutually exclusive. We explore how this penalty relates to other similar penalties, develop fast algorithms to solve MEGS-regularized problems, and demonstrate how enforcing a mutual-exclusivity structure can provide great utility in NLOS imaging problems.
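    The following sketch illustrates only the acquisition intuition stated above: photon return times give radial (range) bins, and differencing measurements from adjacent laser spots along the semi-circle isolates the angular wedge newly exposed by the wall edge. The single-bounce range approximation, bin sizes, and function names are hypothetical; the thesis's full scene response model and reconstruction algorithm are considerably more complete.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def radial_bins(timestamps_s, bin_width_m=0.1, max_range_m=10.0):
    """Histogram photon return times into radial (range) bins.

    Assumes, for illustration only, that the round-trip path is dominated by
    the out-and-back distance to the hidden scene, so range ~ c * t / 2.
    """
    ranges = C * np.asarray(timestamps_s) / 2.0
    edges = np.arange(0.0, max_range_m + bin_width_m, bin_width_m)
    hist, _ = np.histogram(ranges, bins=edges)
    return hist

def plan_view(histograms_by_spot):
    """Form a crude plan view (angle x range) from measurements taken at
    successive laser spots along the semi-circle.

    The wall edge lets spot k illuminate wedges 0..k of the hidden area, so
    differencing the histograms of adjacent spots isolates the newly
    illuminated wedge k. This is only the intuition behind the acquisition
    strategy, not the thesis's reconstruction algorithm.
    """
    H = np.stack(histograms_by_spot)          # shape: (n_spots, n_range_bins)
    wedges = np.diff(H, axis=0, prepend=0)    # per-wedge transient responses
    return np.clip(wedges, 0, None)           # negative differences are noise
```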

    The SURE-LET approach to image denoising

    Denoising is an essential step prior to any higher-level image-processing tasks such as segmentation or object tracking, because the undesirable corruption by noise is inherent to any physical acquisition device. When the measurements are performed by photosensors, one usually distinguishes between two main regimes: in the first scenario, the measured intensities are sufficiently high and the noise is assumed to be signal-independent. In the second scenario, only a few photons are detected, which leads to a strong signal-dependent degradation. When the noise is considered as signal-independent, it is often modeled as an additive independent (typically Gaussian) random variable, whereas, otherwise, the measurements are commonly assumed to follow independent Poisson laws, whose underlying intensities are the unknown noise-free measurements. We first consider the reduction of additive white Gaussian noise (AWGN). Contrary to most existing denoising algorithms, our approach does not require an explicit prior statistical modeling of the unknown data. Our driving principle is the minimization of a purely data-adaptive unbiased estimate of the mean-squared error (MSE) between the processed and the noise-free data. In the AWGN case, such an MSE estimate was first proposed by Stein, and is known as "Stein's unbiased risk estimate" (SURE). We further develop the original SURE theory and propose a general methodology for fast and efficient multidimensional image denoising, which we call the SURE-LET approach. While SURE allows the quantitative monitoring of the denoising quality, the flexibility and the low computational complexity of our approach are ensured by a linear parameterization of the denoising process, expressed as a linear expansion of thresholds (LET). We propose several pointwise, multivariate, and multichannel thresholding functions applied to arbitrary (in particular, redundant) linear transformations of the input data, with a special focus on multiscale signal representations. We then transpose the SURE-LET approach to the estimation of Poisson intensities degraded by AWGN. The signal-dependent specificity of the Poisson statistics leads to the derivation of a new unbiased MSE estimate that we call "Poisson's unbiased risk estimate" (PURE) and requires more adaptive transform-domain thresholding rules. In a general PURE-LET framework, we first devise a fast interscale thresholding method restricted to the use of the (unnormalized) Haar wavelet transform. We then lift this restriction and show how the PURE-LET strategy can be used to design and optimize a wide class of nonlinear processing applied in an arbitrary (in particular, redundant) transform domain. We finally apply some of the proposed denoising algorithms to real multidimensional fluorescence microscopy images. This in vivo imaging modality often operates under low-illumination conditions and short exposure times; consequently, the random fluctuations of the measured fluorophore radiations are well described by a Poisson process degraded (or not) by AWGN. We experimentally validate this statistical measurement model, and we assess the performance of the PURE-LET algorithms in comparison with some state-of-the-art denoising methods. Our solution turns out to be very competitive both qualitatively and computationally, allowing for a fast and efficient denoising of the huge volumes of data that are nowadays routinely produced in biomedical imaging.
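    Because the estimate is linear in the LET coefficients and SURE is quadratic in them, minimizing SURE reduces to solving a small linear system. The sketch below illustrates this mechanism for AWGN with a pointwise soft-threshold LET in the signal domain; the basis functions and threshold values are illustrative choices, whereas the thesis applies the idea to multiscale and redundant transforms with richer thresholding functions.

```python
import numpy as np

def sure_let_denoise(y, sigma, thresholds=(1.0, 2.0, 3.0)):
    """Minimal SURE-LET sketch for AWGN in the signal domain.

    The estimate is a linear combination sum_k a_k f_k(y) of basis
    processings f_k (here: the identity and a few soft-thresholds).
    The weights a_k minimize Stein's unbiased risk estimate
        SURE(a) = ||sum_k a_k f_k(y) - y||^2
                  + 2 sigma^2 sum_k a_k div f_k(y) - N sigma^2,
    which is quadratic in a, so the minimizer solves M a = c with
        M[k, l] = <f_k, f_l>   and   c[k] = <f_k, y> - sigma^2 div f_k(y).
    """
    shape = np.shape(y)
    y = np.asarray(y, dtype=float).ravel()
    n = y.size

    def soft(x, t):  # pointwise soft-threshold
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    # Basis processings and their analytic divergences.
    F, div = [y.copy()], [float(n)]                          # identity: div = N
    for t in sigma * np.asarray(thresholds, dtype=float):
        F.append(soft(y, t))
        div.append(float(np.count_nonzero(np.abs(y) > t)))   # div of soft-threshold
    F = np.stack(F)                                          # shape (K, N)

    M = F @ F.T                                              # Gram matrix <f_k, f_l>
    c = F @ y - sigma**2 * np.asarray(div)
    a = np.linalg.lstsq(M, c, rcond=None)[0]                 # SURE-optimal LET weights
    return (a[:, None] * F).sum(axis=0).reshape(shape)
```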

    Bayesian methods for inverse problems with point clouds : applications to single-photon lidar

    Single-photon light detection and ranging (lidar) has emerged as a prime candidate technology for depth imaging through challenging environments. This modality relies on constructing, for each pixel, a histogram of time delays between emitted light pulses and detected photon arrivals. The problem of estimating the number of imaged surfaces, their reflectivity and position becomes very challenging in the low-photon regime (which equates to short acquisition times) or in the presence of relatively high background levels (i.e., strong ambient illumination). In a general setting, a variable number of surfaces can be observed per imaged pixel. The majority of existing methods assume exactly one surface per pixel, simplifying the reconstruction problem so that standard image processing techniques can be easily applied. However, this assumption restricts practical three-dimensional (3D) imaging applications to controlled indoor scenarios. Moreover, other existing methods that relax this assumption achieve worse reconstructions, suffering from long execution times and large memory requirements. This thesis presents novel approaches to 3D reconstruction from single-photon lidar data, which are capable of identifying multiple surfaces in each pixel. The resulting algorithms obtain new state-of-the-art reconstructions without strong assumptions about the sensed scene. The models proposed here differ from standard image processing tools, being designed to capture correlations of manifold-like structures. Until now, a major limitation has been the significant amount of time required for the analysis of the recorded data. By combining statistical models with highly scalable computational tools from the computer graphics community, we demonstrate 3D reconstruction of complex outdoor scenes with processing times of the order of 20 ms, where the lidar data was acquired in broad daylight from distances up to 320 m. This has enabled robust, real-time target reconstruction of complex moving scenes, paving the way for single-photon lidar at video rates for practical 3D imaging applications.
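    For concreteness, the sketch below builds the per-pixel timing histogram described above and applies a naive matched-filter peak picker to flag multiple candidate surfaces. It is a simplified stand-in, with assumed bin widths, instrument-response handling, and thresholds, and is not the Bayesian point-cloud models developed in the thesis.

```python
import numpy as np
from scipy.signal import find_peaks

def pixel_histogram(delays_s, bin_width_s=100e-12, n_bins=4096):
    """Per-pixel timing histogram of photon arrival delays: the basic
    single-photon lidar data structure described in the abstract."""
    edges = np.arange(n_bins + 1) * bin_width_s
    hist, _ = np.histogram(delays_s, bins=edges)
    return hist

def detect_surfaces(hist, irf, min_photons=5):
    """Naive multi-surface detection: correlate the histogram with the
    instrument response function (irf) and pick peaks. A stand-in for the
    thesis's Bayesian models, which also estimate reflectivity,
    uncertainty, and spatial correlations across the point cloud."""
    score = np.convolve(hist, irf[::-1], mode="same")
    peaks, _ = find_peaks(score, height=min_photons, distance=len(irf))
    return peaks  # bin indices of candidate surfaces
```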

    Fractional Calculus and the Future of Science

    Newton foresaw the limitations of geometry’s description of planetary behavior and developed fluxions (differentials) as the new language for celestial mechanics and as the way to implement his laws of mechanics. Two hundred years later, Mandelbrot introduced the notion of fractals into the scientific lexicon of geometry, dynamics, and statistics, and in so doing suggested ways to see beyond the limitations of Newton’s laws. Mandelbrot’s mathematical essays suggest how fractals may lead to an understanding of turbulence and viscoelasticity, and ultimately to the end of the dominance of Newton’s macroscopic world view. Fractional Calculus and the Future of Science examines the nexus of these two game-changing contributions to our scientific understanding of the world. It addresses how non-integer differential equations replace Newton’s laws to describe the many guises of complexity, most of which lie beyond Newton’s experience and many of which had eluded even Mandelbrot’s powerful intuition. The book’s authors look behind the mathematics and examine what must be true about a phenomenon’s behavior to justify the replacement of an integer-order derivative with a noninteger-order (fractional) one. This window into the future of specific scientific disciplines, viewed through the lens of the fractional calculus, suggests how what is seen entails a difference in scientific thinking and understanding.
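    For readers wondering what a noninteger-order derivative looks like in practice, one standard discretization is the Grünwald-Letnikov form; the short sketch below is purely illustrative and is not drawn from the book.

```python
import numpy as np

def gl_fractional_derivative(f, alpha, h):
    """Grünwald-Letnikov approximation of the order-alpha derivative of the
    samples f (uniform spacing h). For alpha = 1 this reduces to a backward
    difference; non-integer alpha gives a fractional derivative.
    Illustrative sketch only."""
    f = np.asarray(f, dtype=float)
    n = f.size
    # Recursively build the binomial weights w_k = (-1)^k * C(alpha, k).
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    # D^alpha f(t_j) ~ h^(-alpha) * sum_{k<=j} w_k * f(t_{j-k})
    out = np.array([np.dot(w[:j + 1], f[j::-1]) for j in range(n)])
    return out / h**alpha

# Example: half-derivative of f(t) = t on [0, 1]; exact value at t = 1 is
# 2 / sqrt(pi) ~ 1.128.
t = np.linspace(0, 1, 101)
print(gl_fractional_derivative(t, alpha=0.5, h=t[1] - t[0])[-1])
```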

    Exploring long duration gravitational-wave transients with second generation detectors

    Minute-long gravitational-wave (GW) transients are currently a little-explored regime, mainly due to a lack of robust models. As searches for long-duration GW transients must rely on minimal assumptions about the signal properties, they are also sensitive to GWs emitted from unpredicted sources. The detection of such sources offers exciting and strong potential for new science. Because of the large parameter space covered, all-sky long-duration transient searches require model-independent processing and fast analysis techniques. For my PhD thesis, I integrated a set of fast cross-correlation routines in the spherical harmonic domain (SphRad) [50] into X-pipeline [95], a targeted GW search pipeline commonly used to search for GW counterparts of short- and long-duration GRBs and core-collapse supernovae. Spherical harmonic decomposition allows the sky-position dependency of the coherent analysis to be isolated from the data [40] and cached for re-use, saving both time and processing resources. Moreover, the spherical harmonic approach offers a fundamentally different view of the data, allowing for new possibilities for rejecting non-Gaussian background noise that could be mistaken for a GW signal. The combined search pipeline, X-SphRad, underwent a thorough internal review within the LIGO collaboration, which I led. The pipeline's correct functioning was assessed through rigorous tests, including the comparison of a test data set with a standard sky-grid-based analysis. I have developed a novel pixel clustering method that does not depend on the amplitude of potential signals. Using an edge detection algorithm, I quantify each pixel in the spectrogram by its similarity to its neighbours and then extract features of sharply changing intensity (or ‘edges’). The method has shown promising results in preliminary tests. A simplified version of the algorithm was implemented in X-SphRad, and large-scale testing is currently in progress.
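    A minimal sketch of that amplitude-independent clustering idea is given below: a gradient magnitude stands in for the pixel-to-neighbour similarity measure, and connected-component labelling stands in for the clustering step. The specific edge operator, percentile threshold, and minimum cluster size are assumptions for illustration, not the thesis's algorithm.

```python
import numpy as np
from scipy import ndimage

def edge_based_clusters(spectrogram, percentile=95, min_pixels=10):
    """Cluster spectrogram pixels by how sharply they differ from their
    neighbours (an 'edge' measure), rather than by raw amplitude.

    Sketch only: a Sobel gradient magnitude stands in for the
    similarity-based edge detection described in the abstract, and
    connected-component labelling stands in for the clustering step."""
    s = np.asarray(spectrogram, dtype=float)
    gx = ndimage.sobel(s, axis=0)
    gy = ndimage.sobel(s, axis=1)
    edge = np.hypot(gx, gy)                       # neighbourhood dissimilarity
    mask = edge > np.percentile(edge, percentile)
    labels, n = ndimage.label(mask)               # group touching edge pixels
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = 1 + np.flatnonzero(sizes >= min_pixels)
    return np.where(np.isin(labels, keep), labels, 0)
```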

    Background-Source separation in astronomical images with Bayesian Probability Theory

    In this work, a new method for the detection of faint astronomical objects, both point-like and extended, based on the integrated treatment of source and background signals is described. This technique is applied to public data obtained by imaging methods of high-energy observational astronomy in the X-ray spectral regime. These data are usually employed to address current astrophysical problems, e.g. in the fields of stellar and galaxy evolution and the large-scale structure of the universe. The typical problems encountered during the analysis of these data are a spatially varying cosmic background, a large variety of source morphologies and intensities, data incompleteness, steep gradients in the data, and few photon counts per pixel. These problems are addressed with the developed technique. Previous methods extensively employed for the analysis of these data are, e.g., the sliding-window and wavelet-based techniques. Both methods are known to have difficulties in describing large variations in the background and in detecting faint and extended sources or sources with complex morphologies. Large systematic errors in object photometry and loss of faint sources may occur with these techniques. The developed algorithm is based on Bayesian probability theory, which is a consistent probabilistic tool to solve an inverse problem for a given state of information. The information is given by a parameterized model for the background and prior information about source intensity distributions quantified by probability distributions. For the background estimation, the image data are not censored. The background rate is described by a two-dimensional thin-plate spline function. The background model is given by the product of the background rate and the exposure time, which accounts for variations of the integration time. Therefore, the background as well as effects like vignetting, variations of detector quantum efficiency and strong gradients in the exposure time are handled properly, which results in improved detections with respect to previous methods. Source probabilities are provided for individual pixels as well as for correlations of neighboring pixels in a multi-resolution analysis. Consequently, the technique is able to detect point-like and extended sources and their complex morphologies. Furthermore, images of different spectral bands can be combined probabilistically to further increase the resolution in crowded regions. The developed method characterizes all detected sources in terms of position, number of source counts, and shape, including uncertainties. The comparison with previous techniques shows that the developed method allows for an improved determination of background and source parameters. The method is applied to data obtained by the ROSAT and Chandra X-ray observatories, where in particular the detection of faint and extended sources is improved with respect to previous analyses. This led to the discovery of new galaxy clusters and quasars in the X-ray band, which are confirmed in the optical regime using additional observational data. The new technique developed in this work is particularly suited to the identification of objects featuring extended emission, like clusters of galaxies.
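    The sketch below illustrates the kind of per-pixel Bayesian model comparison that underlies such source probabilities: observed counts are compared under a background-only Poisson model and a background-plus-source model with the source intensity marginalized over a prior. The exponential prior, its scale, and the prior source probability are illustrative assumptions; the actual method additionally uses the thin-plate-spline background and multi-resolution correlations between neighbouring pixels described above.

```python
import numpy as np
from scipy.stats import poisson

def source_probability(counts, background, prior_p=0.1, s_grid=None):
    """Per-pixel Bayesian source-probability sketch.

    For each pixel with observed photon counts d and expected background b
    (background rate times exposure), compare two hypotheses:
      B   : d ~ Poisson(b)                       (background only)
      B+S : d ~ Poisson(b + s), s marginalized over a prior
    and return P(B+S | d). The exponential source prior, its scale, and
    prior_p are illustrative assumptions only.
    """
    counts = np.asarray(counts)
    background = np.asarray(background, dtype=float)
    if s_grid is None:
        s_grid = np.linspace(0.0, 50.0, 501)        # source-intensity grid
    prior_s = np.exp(-s_grid / 5.0)
    prior_s /= np.trapz(prior_s, s_grid)            # normalized source prior

    like_bg = poisson.pmf(counts, background)
    # Marginal likelihood under background + source (numerical integration).
    lam = background[..., None] + s_grid
    like_src = np.trapz(poisson.pmf(counts[..., None], lam) * prior_s,
                        s_grid, axis=-1)

    post = prior_p * like_src
    return post / (post + (1.0 - prior_p) * like_bg)
```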