Skin perfusion photography
The separation of global and direct light components of a scene is highly useful for scene analysis, as each component offers different information about illumination-scene-detector interactions. Relying on ray optics, the technique is important in computational photography, but it is often underappreciated in the biomedical imaging community, where wave interference effects are utilized. Nevertheless, such coherent optical systems lend themselves naturally to global-direct separation methods because of the high spatial frequency nature of speckle interference patterns. Here, we extend global-direct separation to a laser speckle contrast imaging (LSCI) system to reconstruct speed maps of blood flow in skin. We compare experimental results with a speckle formation model of moving objects and show that the reconstructed map of skin perfusion is improved over the conventional case.
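As a rough illustration of the LSCI processing described above (the function names and the non-overlapping-block simplification are assumptions of this sketch, not the paper's code), speckle contrast is conventionally computed as K = sigma/mu over small windows, and a relative speed index follows from the common 1/K^2 approximation:

```python
import numpy as np

def speckle_contrast_map(raw, block=7):
    """Speckle contrast K = sigma/mu over non-overlapping blocks.

    A sliding window is more common in LSCI practice; non-overlapping
    blocks keep this sketch NumPy-only."""
    h = (raw.shape[0] // block) * block
    w = (raw.shape[1] // block) * block
    tiles = raw[:h, :w].astype(float)
    tiles = tiles.reshape(h // block, block, w // block, block)
    tiles = tiles.transpose(0, 2, 1, 3).reshape(h // block, w // block, -1)
    mu = tiles.mean(axis=-1)
    sigma = tiles.std(axis=-1)
    return sigma / np.maximum(mu, 1e-12)

def relative_speed_map(K):
    # Simplified LSCI model: flow speed is roughly proportional to
    # 1 / (K^2 * T); with the exposure time T fixed, 1/K^2 serves as
    # a relative speed index.
    return 1.0 / np.maximum(K, 1e-6) ** 2
```

For fully developed static speckle, intensity is exponentially distributed and K is close to 1; motion blurs the pattern within the exposure, lowering K and raising the speed index.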
Resolving Multi-path Interference in Time-of-Flight Imaging via Modulation Frequency Diversity and Sparse Regularization
Time-of-flight (ToF) cameras calculate depth maps by reconstructing phase shifts of amplitude-modulated signals. For broad illumination or transparent objects, reflections from multiple scene points can illuminate a given pixel, giving rise to an erroneous depth map. We report here a sparsity-regularized solution that separates K interfering components using multiple modulation-frequency measurements. The method maps ToF imaging to the general framework of spectral estimation theory and has applications in improving depth profiles and exploiting multiple scattering.
Comment: 11 pages, 4 figures, appeared with minor changes in Optics Letters
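To make the spectral-estimation view concrete, here is a minimal sketch (a construction of this note, not the authors' algorithm): each multipath return at delay t_k contributes a complex tone exp(-j2*pi*f*t_k) across the modulation frequencies, so a small number of returns can be pulled out with a greedy sparse solver such as orthogonal matching pursuit over a grid of candidate delays.

```python
import numpy as np

def omp_multipath(y, freqs, delays, K=2):
    """Recover K multipath returns from multi-frequency ToF samples.

    Assumed model: y[m] = sum_k a_k * exp(-2j*pi*freqs[m]*t_k), i.e. a
    line-spectrum estimation problem; `delays` is a grid of candidate
    path delays (a hypothetical discretization for this sketch)."""
    A = np.exp(-2j * np.pi * np.outer(freqs, delays))  # dictionary of delay atoms
    support, residual = [], y.astype(complex)
    for _ in range(K):
        # Pick the candidate delay most correlated with the residual.
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        # Re-fit amplitudes over the current support, update residual.
        amps, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ amps
    return delays[support], amps
```

With delays well separated relative to the modulation bandwidth, two synthetic returns are recovered exactly on the grid; off-grid delays would need a finer grid or a continuous refinement step.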
Summary of the 2017 Blockage Test in the 10- by 10-Foot Supersonic Wind Tunnel
A limited blockage study was performed in December 2017 to explore exceeding the current published blockage curve for the NASA Glenn 10- by 10-Foot (10x10) Supersonic Wind Tunnel (SWT) at two discrete operating conditions. For the two points tested, the tunnel was found to start outside of the published starting limitations curve, above a certain threshold of Mach number and stagnation pressure. Blockage theory was reviewed to further understand these results. In order to gain a firm understanding of the aerodynamic effects, a more detailed follow-up blockage study is recommended.
Time-resolved reconstruction of scene reflectance hidden by a diffuser
We use time-of-flight information in an iterative non-linear optimization algorithm to recover reflectance properties of a three-dimensional scene hidden behind a diffuser. We demonstrate reconstruction of wide-field images without relying on diffuser correlation properties.
Relativistic Effects for Time-Resolved Light Transport
We present a real-time framework which allows interactive visualization of relativistic effects for time-resolved light transport. We leverage data from two different sources: real-world data acquired with an effective exposure time of less than 2 picoseconds, using an ultra-fast imaging technique termed femto-photography, and a transient renderer based on ray-tracing. We explore the effects of time dilation, light aberration, frequency shift, and radiance accumulation by modifying existing models of these relativistic effects to take into account the time-resolved nature of light propagation. Unlike previous works, we do not impose limiting constraints in the visualization, allowing the virtual camera to freely explore a reconstructed 3D scene depicting dynamic illumination. Moreover, we consider not only linear motion, but also acceleration and rotation of the camera. We further introduce, for the first time, a pinhole camera model into our relativistic rendering framework, and account for subsequent changes in focal length and field of view as the camera moves through the scene.
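Two of the effects named above, light aberration and frequency shift, reduce to standard special-relativity formulas that a renderer like this would evaluate per ray. A minimal sketch (sign conventions are an assumption of this note: cos_theta = 1 means a head-on ray, and beta is the camera speed in units of c along the optical axis):

```python
import math

def aberration_cos(cos_theta, beta):
    """Relativistic aberration: cosine of a ray's incoming angle as
    seen by the moving camera. Rays bunch toward the direction of
    motion as beta grows."""
    return (cos_theta + beta) / (1.0 + beta * cos_theta)

def doppler_factor(cos_theta, beta):
    """Frequency-shift factor D = f_observed / f_emitted for the same
    geometry; D > 1 is a blueshift."""
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return gamma * (1.0 + beta * cos_theta)
```

At beta = 0 both reduce to the identity; head-on at beta = 0.5 the blueshift factor is sqrt(3), and a ray arriving at 90 degrees (cos_theta = 0) is dragged forward to cos_theta' = beta.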
Ultra-fast Lensless Computational Imaging through 5D Frequency Analysis of Time-resolved Light Transport
Light transport has been analyzed extensively, in both the primal domain and the frequency domain. Frequency analyses often provide intuition regarding effects introduced by light propagation and interaction with optical elements; such analyses encourage optimal designs of computational cameras that efficiently capture tailored visual information. However, previous analyses have relied on instantaneous propagation of light, so that the measurement of the time dynamics of light–scene interaction, and any resulting information transfer, is precluded. In this paper, we relax the common assumption that the speed of light is infinite. We analyze free space light propagation in the frequency domain considering spatial, temporal, and angular light variation. Using this analysis, we derive analytic expressions for information transfer between these dimensions and show how this transfer can be exploited for designing a new lensless imaging system. With our frequency analysis, we also derive performance bounds for the proposed computational camera architecture and provide a mathematical framework that will also be useful for future ultra-fast computational imaging systems.
MIT Media Lab Consortium; Natural Sciences and Engineering Research Council of Canada
Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles
Time of flight cameras produce real-time range maps at a relatively low cost using continuous wave amplitude modulation and demodulation. However, they are geared to measure range (or phase) for a single reflected bounce of light and suffer from systematic errors due to multipath interference.
We re-purpose the conventional time of flight device for a new goal: to recover per-pixel sparse time profiles expressed as a sequence of impulses. With this modification, we show that we can not only address multipath interference but also enable new applications such as recovering depth of near-transparent surfaces, looking through diffusers and creating time-profile movies of sweeping light.
Our key idea is to formulate the forward amplitude modulated light propagation as a convolution with custom codes, record samples by introducing a simple sequence of electronic time delays, and perform sparse deconvolution to recover sequences of Diracs that correspond to multipath returns. Applications to computer vision include ranging of near-transparent objects and subsurface imaging through diffusers. Our low cost prototype may lead to new insights regarding forward and inverse problems in light transport.
United States. Defense Advanced Research Projects Agency (DARPA Young Faculty Award); Alfred P. Sloan Foundation (Fellowship); Massachusetts Institute of Technology. Media Laboratory. Camera Culture Group
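The convolutional forward model and the sparse recovery step can be sketched in a few lines (an illustrative simplification, not the authors' pipeline: a circular-convolution model, a pseudo-random binary code, and a regularized Fourier inverse followed by keeping the largest entries as a stand-in for sparse deconvolution):

```python
import numpy as np

def measure(code, profile):
    # Assumed forward model: sensor samples are the circular convolution
    # of the emitted custom code with the scene's sparse time profile
    # (a train of Diracs, one per light path).
    return np.real(np.fft.ifft(np.fft.fft(code) * np.fft.fft(profile)))

def sparse_deconvolve(y, code, k, reg=1e-6):
    # Regularized inverse filter in the Fourier domain, then keep the
    # k largest entries. The small Tikhonov term `reg` guards against
    # near-zero bins in the code's spectrum.
    C = np.fft.fft(code)
    est = np.real(np.fft.ifft(np.fft.fft(y) * C.conj() / (np.abs(C) ** 2 + reg)))
    out = np.zeros_like(est)
    top = np.argsort(np.abs(est))[-k:]
    out[top] = est[top]
    return out
```

On a noiseless synthetic profile with two returns, the two Dirac positions and amplitudes come back from the coded measurements; a proper sparse solver (e.g. LASSO or OMP) would replace the top-k step in a noisy setting.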
Sub-pixel Layout for Super-Resolution with Images in the Octic Group
13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I.
This paper presents a novel super-resolution framework by exploring the properties of non-conventional pixel layouts and shapes. We show that recording multiple images, transformed in the octic group, with a sensor of asymmetric sub-pixel layout increases the spatial sampling compared to a conventional sensor with a rectilinear grid of pixels and hence increases the image resolution. We further prove a theoretical bound for achieving well-posed super-resolution with a designated magnification factor w.r.t. the number and distribution of sub-pixels. We also propose strategies for selecting good sub-pixel layouts and effective super-resolution algorithms for our setup. The experimental results validate the proposed theory and solution, which have the potential to guide the future CCD layout design with super-resolution functionality.
United States. Air Force (Assistant Secretary of Defense for Research & Engineering Contract #FA8721-05-C-0002); SUTD-MIT International Design Centre (Joint Postdoctoral Programme); Singapore University of Technology and Design (SUTD StartUp Grant ISTD 2011 016); Singapore. Ministry of Education (MOE Academic Research Fund MOE2013-T2-1-159)
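The octic group referenced above is the dihedral symmetry group of the square: four rotations and their mirror images, eight transforms in total. A minimal sketch enumerating the orbit of an image under this group (the helper name is an assumption of this note):

```python
import numpy as np

def octic_orbit(img):
    """The eight transforms of the octic (dihedral D4) group applied to
    a square image: rotations by 0, 90, 180, 270 degrees, each with and
    without a horizontal flip."""
    orbit = []
    for k in range(4):
        rotated = np.rot90(img, k)
        orbit.append(rotated)
        orbit.append(np.fliplr(rotated))
    return orbit
```

For a fully asymmetric image all eight transforms are distinct, which is what lets the asymmetric sub-pixel layout see eight genuinely different samplings of the scene.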
Institutional determinants of construction safety management strategies of contractors in Hong Kong
Femto-photography: capturing and visualizing the propagation of light
We present femto-photography, a novel imaging technique to capture and visualize the propagation of light. With an effective exposure time of 1.85 picoseconds (ps) per frame, we reconstruct movies of ultrafast events at an equivalent resolution of about one half trillion frames per second. Because cameras with this shutter speed do not exist, we re-purpose modern imaging hardware to record an ensemble average of repeatable events that are synchronized to a streak sensor, in which the time of arrival of light from the scene is coded in one of the sensor's spatial dimensions. We introduce reconstruction methods that allow us to visualize the propagation of femtosecond light pulses through macroscopic scenes; at such fast resolution, we must consider the notion of time-unwarping between the camera's and the world's space-time coordinate systems to take into account effects associated with the finite speed of light. We apply our femto-photography technique to visualizations of very different scenes, which allow us to observe the rich dynamics of time-resolved light transport effects, including scattering, specular reflections, diffuse interreflections, diffraction, caustics, and subsurface scattering. Our work has potential applications in artistic, educational, and scientific visualizations; industrial imaging to analyze material properties; and medical imaging to reconstruct subsurface elements. In addition, our time-resolved technique may motivate new forms of computational photography.
MIT Media Lab Consortium; Lincoln Laboratory; Massachusetts Institute of Technology. Institute for Soldier Nanotechnologies; Alfred P. Sloan Foundation (Research Fellowship); United States. Defense Advanced Research Projects Agency (Young Faculty Award)
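The time-unwarping idea mentioned above can be reduced, per pixel, to removing the light's travel delay from the recorded arrival time. A deliberately simplified sketch (the paper's formulation handles the full camera-scene geometry; this assumes the path length to each scene point is known):

```python
C_LIGHT = 299_792_458.0  # speed of light in vacuum, m/s

def time_unwarp(t_camera_s, path_length_m):
    """Map a camera-frame arrival time (seconds) to world time by
    subtracting the propagation delay from the scene point back to
    the sensor. Simplified per-pixel form."""
    return t_camera_s - path_length_m / C_LIGHT

# At these time scales the correction matters: light crossing 30 cm
# adds about one nanosecond of apparent delay.
delay_ns = (0.3 / C_LIGHT) * 1e9
```

Without this correction, events at different depths that happened simultaneously in world time appear staggered in the raw camera movie.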