
    Computational periscopy with an ordinary digital camera

    Computing the amounts of light arriving from different directions enables a diffusely reflecting surface to play the part of a mirror in a periscope—that is, to perform non-line-of-sight imaging around an obstruction. Because computational periscopy has so far depended on light-travel distances being proportional to times of flight, it has mostly been performed with expensive, specialized ultrafast optical systems [1-12]. Here we introduce a two-dimensional computational periscopy technique that requires only a single photograph captured with an ordinary digital camera. Our technique recovers the position of an opaque object and the scene behind (but not completely obscured by) the object, when both the object and scene are outside the line of sight of the camera, without requiring controlled or time-varying illumination. The recovery exploits the fact that the visible penumbra of the opaque object depends linearly on the hidden scene, a dependence that can be modelled through ray optics. Non-line-of-sight imaging using inexpensive, ubiquitous equipment may have considerable value in monitoring hazardous environments, navigation and detecting hidden adversaries. We thank F. Durand, W. T. Freeman, Y. Ma, J. Rapp, J. H. Shapiro, A. Torralba, F. N. C. Wong and G. W. Wornell for discussions. This work was supported by the Defense Advanced Research Projects Agency (DARPA) REVEAL Program, contract number HR0011-16-C-0030. Accepted manuscript.
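    Because the penumbra depends linearly on the hidden scene, recovery reduces to a linear inverse problem: the photograph is y = A x + noise, where x is the hidden scene and A is a light-transport matrix derived from ray optics. The sketch below illustrates only that generic structure with a hypothetical random matrix standing in for the paper's physically derived A, solved by Tikhonov-regularised least squares (not the authors' actual reconstruction algorithm).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: hidden scene with n facets, penumbra photograph with m pixels.
n, m = 64, 256

# Stand-in for the ray-optics light-transport matrix A; the paper derives this
# from the occluder position and scene geometry, here it is random for illustration.
A = rng.random((m, n))

# Sparse hidden scene: a few bright facets.
x_true = np.zeros(n)
x_true[[5, 20, 40]] = [1.0, 0.5, 0.8]

# Observed penumbra = linear transport of the hidden scene plus sensor noise.
y = A @ x_true + 0.01 * rng.standard_normal(m)

# Tikhonov-regularised least squares: x_hat = argmin ||A x - y||^2 + lam ||x||^2.
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

print(np.round(x_hat[[5, 20, 40]], 2))
```

    With many more pixels than scene facets (m >> n) the system is overdetermined, so even this simple regularised solve recovers the bright facets; the real difficulty the paper addresses is estimating A itself from the occluder's unknown position.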

    The Rogue Alpha and Beta Mission: Operations, Infrared Remote Sensing, LEO Data Processing, and Lessons Learned From Three Years on Orbit With Two Laser Communication-Equipped 3U CubeSats

    The Aerospace Corporation's Rogue-alpha, beta program was a rapid prototyping demonstration aimed at building and deploying an infrared remote sensing capability into low Earth orbit within 18 months. The two satellites and their data were then used for three years as an experimental testbed for future proliferated low Earth orbit (pLEO) constellations. Their launch took place on November 2, 2019, followed by boost and deployment of two identical spacecraft (Rogue-alpha and beta) by the Cygnus ISS cargo vessel into circular 460-km, 52° inclined orbits on January 31, 2020. The primary sensors were 1.4-micron band, InGaAs short wavelength infrared (SWIR) cameras with 640×512 pixels and a 28° field-of-view. The IR sensors were accompanied by 10-megapixel visible context cameras with a 37° field-of-view. Star sensors were also tested as nighttime imaging sensors. Three years of spacecraft and sensor operations were achieved, allowing a variety of experiments to be conducted. The first year focused on alignment and checkout of the laser communication systems, sensor calibration, and priority IR remote sensing objectives, including the study of Earth backgrounds, observation of natural gas flares, and detection of rocket launches. The second year of operations added study of environmental remote sensing targets, including severe storms, wildfires, and volcanic eruptions, while continuing to gather Earth backgrounds and rocket launch observations. The final year emphasized advanced data processing and exploitation techniques applied to collected data, using machine learning and artificial intelligence for tasks such as target tracking, frame co-registration, and stereo data exploitation. Mission operations continued in the final year, with an emphasis on collecting additional rocket launch data and higher frame rate backgrounds data.
    This report summarizes the Rogue-alpha, beta mission's outcomes and presents processed IR data, including the detection and tracking of rocket launches against dynamic Earth backgrounds, embedded moving targets in background scenes, and the use of pointing-based registration to create fire line videos of severe wildfires and 3D scenes of pyrocumulonimbus clouds. Lessons learned from the experimental ConOps, data exploitation, and database curation are also summarized for application to future pLEO constellation missions.

    Computational multi-depth single-photon imaging

    We present an imaging framework that is able to accurately reconstruct multiple depths at individual pixels from single-photon observations. Our active imaging method models the single-photon detection statistics from multiple reflectors within a pixel, and it also exploits the fact that a multi-depth profile at each pixel can be expressed as a sparse signal. We interpret the multi-depth reconstruction problem as a sparse deconvolution problem using single-photon observations, create a convex problem through discretization and relaxation, and use a modified iterative shrinkage-thresholding algorithm to efficiently solve for the optimal multi-depth solution. We experimentally demonstrate that the proposed framework is able to accurately reconstruct the depth features of an object that is behind a partially-reflecting scatterer and 4 m away from the imager with a root-mean-square error of 11 cm, using only 19 signal photon detections per pixel in the presence of moderate background light. In terms of root-mean-square error, this is a factor of 4.2 improvement over the conventional method of Gaussian-mixture fitting for multi-depth recovery. This material is based upon work supported in part by a Samsung Scholarship, the US National Science Foundation under Grant No. 1422034, and the MIT Lincoln Laboratory Advanced Concepts Committee. We thank Dheera Venkatraman for his assistance with the experiments. Accepted manuscript.
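    The core idea—expressing a per-pixel multi-depth profile as a sparse spike train blurred by the system's temporal response, then solving an l1-regularised deconvolution by iterative shrinkage-thresholding—can be sketched with plain (unmodified) ISTA on synthetic data. The Gaussian pulse shape, depths, and all parameters below are illustrative assumptions, not the paper's calibrated instrument response.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discretised time-of-flight axis; two reflectors at different depths within one pixel.
n = 200
x_true = np.zeros(n)
x_true[60], x_true[140] = 1.0, 0.6

# Hypothetical Gaussian pulse as the system's temporal response.
t = np.arange(-25, 26)
pulse = np.exp(-t**2 / (2 * 4.0**2))

# Convolution matrix: column i is the pulse centred at bin i.
A = np.array([np.convolve(np.eye(n)[i], pulse, mode="same") for i in range(n)]).T

# Noisy photon-count histogram observed at this pixel.
y = A @ x_true + 0.02 * rng.standard_normal(n)

# Plain ISTA for (1/2)||A x - y||^2 + lam ||x||_1:
# a gradient step on the quadratic term, then soft-thresholding (the l1 prox).
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    x = x - step * (A.T @ (A @ x - y))
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

print(int(np.argmax(x)))  # index of the strongest recovered reflector
```

    The soft-thresholding step is what enforces sparsity, letting the two overlapping pulse returns resolve into distinct depth spikes rather than one smeared estimate.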

    Automated identification of river hydromorphological features using UAV high resolution aerial imagery

    European legislation is driving the development of methods for river ecosystem protection in light of concerns over water quality and ecology. Key to their success is the accurate and rapid characterisation of physical features (i.e., hydromorphology) along the river. Image pattern recognition techniques have been successfully used for this purpose. The reliability of the methodology depends on both the quality of the aerial imagery and the pattern recognition technique used. Recent studies have proved the potential of Unmanned Aerial Vehicles (UAVs) to increase the quality of the imagery by capturing high resolution photography. Similarly, Artificial Neural Networks (ANN) have been shown to be a high precision tool for automated recognition of environmental patterns. This paper presents a UAV-based framework for the identification of hydromorphological features from high resolution RGB aerial imagery using a novel classification technique based on ANNs. The framework is developed for a 1.4 km river reach along the River Dee in Wales, United Kingdom. For this purpose, a Falcon 8 octocopter was used to gather 2.5 cm resolution imagery. The results show that the accuracy of the framework is above 81%, performing particularly well at recognising vegetation. These results support the use of UAVs for environmental policy implementation and demonstrate the potential of ANNs and RGB imagery for high precision river monitoring and river management.
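    The classification step amounts to training a small neural network to map per-pixel RGB values to feature classes. The toy sketch below trains a one-hidden-layer network on synthetic "water" and "vegetation" colour samples; the class colours, network size, and training setup are all illustrative assumptions and not the paper's ANN architecture or labelled dataset.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for labelled RGB pixels: class 0 = water (bluish),
# class 1 = vegetation (greenish). Real labels would come from the UAV imagery.
n = 400
water = rng.normal([0.2, 0.3, 0.6], 0.05, (n, 3))
veg = rng.normal([0.2, 0.6, 0.2], 0.05, (n, 3))
X = np.vstack([water, veg])
y = np.array([0] * n + [1] * n)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 tanh units, trained by full-batch gradient descent
# on the mean logistic loss.
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8); b2 = 0.0

for _ in range(300):
    H = np.tanh(X @ W1 + b1)           # hidden activations
    p = sigmoid(H @ W2 + b2)           # predicted probability of vegetation
    g = (p - y) / len(y)               # gradient of the loss w.r.t. the logits
    gH = np.outer(g, W2) * (1 - H**2)  # backprop through tanh (pre-update W2)
    W2 -= H.T @ g; b2 -= g.sum()
    W1 -= X.T @ gH; b1 -= gH.sum(0)

acc = ((p > 0.5) == y).mean()
print(round(float(acc), 2))
```

    Per-pixel colour alone separates these synthetic classes easily; the paper's harder problem is distinguishing hydromorphological classes with overlapping colour signatures, which is where the 2.5 cm resolution and the ANN's learned decision boundaries matter.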

    A Compact, High Resolution Hyperspectral Imager for Remote Sensing of Soil Moisture

    Measurement of soil moisture content is a key challenge across a variety of fields, ranging from civil engineering through to defence and agriculture. While dedicated satellite platforms like SMAP and SMOS provide high spatial coverage, their low spatial resolution limits their application to larger regional studies. The advent of compact, high lift capacity UAVs has enabled small scale surveys of specific farmland sites. This thesis presents work on the development of a compact, high spatial and spectral resolution hyperspectral imager, designed for remote measurement of soil moisture content. The optical design of the system incorporates a bespoke freeform blazed diffraction grating, providing higher optical performance at a similar aperture to conventional Offner-Chrisp designs. The key challenges of UAV-borne hyperspectral imaging relate to using only solar illumination, with both intermittent cloud cover and atmospheric water absorption creating challenges in obtaining accurate reflectance measurements. A hardware-based calibration channel for mitigating cloud cover effects is introduced, along with a comparison of methods for recovering soil moisture content from reflectance data under varying illumination conditions. The data processing pipeline required to process the raw pushbroom data into georectified images is also discussed. Finally, preliminary work on applying soil moisture techniques to leaf imaging is presented.