
    Self-coherent camera as a focal plane wavefront sensor: simulations

    Direct detection of exoplanets requires high dynamic range imaging. Coronagraphs could be the solution, but their performance in space is limited by wavefront errors (manufacturing errors on optics, temperature variations, etc.), which create quasi-static stellar speckles in the final image. Several solutions have been suggested for tackling this speckle noise. Differential imaging techniques subtract a reference image from the coronagraphic residual during post-processing. Other techniques attempt to actively correct wavefront errors using a deformable mirror. In that case, wavefront aberrations have to be measured from the science image with extremely high accuracy. We propose using the self-coherent camera sequentially as a focal-plane wavefront sensor for active correction and for differential imaging. For both uses, stellar speckles are spatially encoded in the science image so that differential aberrations are strongly minimized. The encoding is based on the principle of light incoherence between the host star and its environment. In this paper, we first discuss one intrinsic limitation of deformable mirrors. Then, several parameters of the self-coherent camera are studied in detail. We also propose a simple and robust design for combining the self-coherent camera with a coronagraph that uses a Lyot stop. Finally, we discuss its combination with a four-quadrant phase mask and numerically demonstrate that such a device enables the detection of Earth-like planets under realistic conditions. The parametric study of the technique leads us to believe it can be implemented quite easily in future instruments dedicated to direct imaging of exoplanets. Comment: 15 pages, 14 figures, accepted in A&A (here is the final version)
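
    To make the incoherence-based encoding concrete, here is a minimal numerical sketch of the principle: a small off-axis reference aperture is added to the pupil plane, so stellar speckles (coherent with the reference beam) become fringed, while an incoherent companion image does not. Grid size, aberration level, and geometry below are illustrative assumptions, not the paper's parameters.

```python
# A minimal sketch (not the paper's code) of the self-coherent camera
# principle. All numerical values are illustrative assumptions.
import numpy as np

N = 256
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)

pupil = (np.hypot(X, Y) < N // 8).astype(float)          # main (Lyot-plane) pupil
phase = 0.05 * np.random.randn(N, N)                     # quasi-static aberration [rad]
ref = (np.hypot(X - N // 3, Y) < N // 64).astype(float)  # small off-axis reference hole

# Focal-plane fields are Fourier transforms of the pupil-plane fields.
E_star = np.fft.fftshift(np.fft.fft2(pupil * np.exp(1j * phase)))
E_ref = np.fft.fftshift(np.fft.fft2(ref))

# Star and reference are mutually coherent: their fields add, so the
# speckle halo is spatially modulated by fringes (the encoding).
I_star = np.abs(E_star + E_ref) ** 2

# A companion is incoherent with its host star: intensities add, and its
# off-axis (tilted) image carries no fringes.
tilt = np.exp(2j * np.pi * 10 * X / N)                   # arbitrary companion position
I_comp = 1e-4 * np.abs(np.fft.fftshift(np.fft.fft2(pupil * tilt))) ** 2

image = I_star + I_comp                                  # what the science camera records
```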

    High-speed Video from Asynchronous Camera Array

    This paper presents a method for capturing high-speed video using an asynchronous camera array. Our method sequentially fires each sensor in a camera array with a small time offset and assembles the captured frames into a high-speed video according to their time stamps. The resulting video, however, suffers from parallax jitter caused by viewpoint differences among the sensors in the array. To address this problem, we develop a dedicated novel-view synthesis algorithm that transforms the video frames so that they appear to have been captured by a single reference sensor. Specifically, for any frame from a non-reference sensor, we find the two temporally neighboring frames captured by the reference sensor. Using these three frames, we render a new frame with the same time stamp as the non-reference frame but from the viewpoint of the reference sensor. To do so, we segment these frames into super-pixels and apply local content-preserving warping to form the new frame. We then employ a multi-label Markov random field method to blend the warped frames. Our experiments show that our method can produce high-quality, high-speed video for a wide variety of scenes with large parallax, scene dynamics, and camera motion, and that it outperforms several baseline and state-of-the-art approaches. Comment: 10 pages, 82 figures, Published at IEEE WACV 201
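
    The staggered-firing idea translates directly into a capture schedule: n sensors at a base frame rate, each offset by 1/n of the frame period, yield a merged stream with n times the temporal sampling. The sketch below illustrates that schedule; the function name and the 30 fps example are assumptions for illustration, not the paper's parameters.

```python
# Sketch of the staggered capture schedule: sensors fire with small time
# offsets, and frames are assembled into one stream by time stamp.

def interleaved_timestamps(n_cameras: int, base_fps: float, n_frames: int):
    """Return (time stamp, camera id) pairs merged into one high-speed stream."""
    period = 1.0 / base_fps
    offset = period / n_cameras               # small time offset between sensors
    stream = [(f * period + c * offset, c)
              for c in range(n_cameras)
              for f in range(n_frames)]
    return sorted(stream)                     # assemble frames by time stamp

# Example: 4 cameras at 30 fps give an effective 120 fps stream.
for t, cam in interleaved_timestamps(4, 30.0, 2):
    print(f"t = {t * 1000:6.2f} ms  <- camera {cam}")
```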

    Monitoring wild animal communities with arrays of motion sensitive camera traps

    Studying animal movement and distribution is of critical importance to addressing environmental challenges including invasive species, infectious diseases, and climate and land-use change. Motion-sensitive camera traps offer a visual sensor for recording the presence of a broad range of species, providing location-specific information on movement and behavior. Modern digital camera traps that record video present new analytical opportunities, but also new data management challenges. This paper describes our experience with a terrestrial animal monitoring system at Barro Colorado Island, Panama. Our camera network captured the spatio-temporal dynamics of terrestrial bird and mammal activity at the site, data relevant to immediate science questions and to long-term conservation issues. We believe that the experience gained and lessons learned during our year-long deployment and testing of the camera traps, as well as the solutions we developed, are applicable to broader sensor network applications and are valuable for the advancement of sensor network research. We suggest that the continued development of these hardware, software, and analytical tools, in concert, offers an exciting sensor-network solution for monitoring animal populations that could realistically scale over larger areas and time spans.

    Rank-based camera spectral sensitivity estimation

    In order to accurately predict a digital camera's response to spectral stimuli, the spectral sensitivity functions of its sensor need to be known. These functions can be determined by direct measurement in the lab, a difficult and lengthy procedure, or through simple statistical inference. Statistical inference methods are based on the observation that when a camera responds linearly to spectral stimuli, the device spectral sensitivities are linearly related to the camera RGB response values, and so can be found through regression. However, for rendered images, such as the JPEG images taken by a mobile phone, this assumption of linearity is violated. Even small departures from linearity can negatively impact the accuracy of the recovered spectral sensitivities when a regression method is used. In our work, we develop a novel camera spectral sensitivity estimation technique that can recover the linear device spectral sensitivities from linear images and the effective linear sensitivities from rendered images. According to our method, the rank order of a pair of responses imposes a constraint on the shape of the underlying spectral sensitivity curve of the sensor. Technically, each rank pair splits the space in which the underlying sensor might lie into two parts (a feasible region and an infeasible region). By intersecting the feasible regions from all the ranked pairs, we can find the feasible region of sensor space. Experiments demonstrate that using rank orders delivers estimation accuracy equal to the prior art. However, the rank-based method delivers a step change in estimation performance when the data are not linear and, for the first time, allows for the estimation of the effective sensitivities of devices that may not even have a “raw mode.” Experiments validate our method.
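
    The geometric intuition behind the rank constraint can be shown in a few lines: if stimulus i produces a larger response than stimulus j, any consistent sensor s must satisfy (stimulus_i - stimulus_j) · s > 0, a half-space constraint. The toy sketch below probes the intersection of these half-spaces by sampling; the Gaussian ground-truth sensor, random stimuli, and sampling scheme are illustrative assumptions, not the paper's data or solver.

```python
# Toy sketch of rank-order constraints on a spectral sensitivity curve.
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(400, 700, 31)                        # wavelengths [nm]

true_sensor = np.exp(-0.5 * ((wl - 550) / 40) ** 2)   # hypothetical ground truth
stimuli = rng.uniform(size=(30, 31))                  # random spectral stimuli
responses = stimuli @ true_sensor                     # linear camera model

# Rank constraint: response_i > response_j  =>  (stimulus_i - stimulus_j) @ s > 0.
order = np.argsort(responses)
diffs = stimuli[order[1:]] - stimuli[order[:-1]]      # consecutive ranked pairs

# Crude probe of the feasible region: sample smooth candidate sensors and
# keep those satisfying every rank constraint (a real implementation would
# intersect the half-spaces directly).
centers = rng.uniform(420, 680, size=5000)
widths = rng.uniform(20, 80, size=5000)
candidates = np.exp(-0.5 * ((wl - centers[:, None]) / widths[:, None]) ** 2)
feasible = candidates[(candidates @ diffs.T > 0).all(axis=1)]
print(f"{len(feasible)} of 5000 candidates consistent with all rank pairs")
```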

    A multi-modal event detection system for river and coastal marine monitoring applications

    This work investigates the use of a multi-modal sensor network in which visual sensors, such as cameras and satellite imagers, along with context information, are used to complement and enhance the usefulness of a traditional in-situ sensor network in measuring and tracking features of a river or coastal location. This paper focuses on our work on the use of an off-the-shelf camera as part of a multi-modal sensor network for monitoring a river environment. It outlines our results on the estimation of water level using a visual sensor. It also outlines the benefits of a multi-modal sensor network for marine environmental monitoring and how this can lead to a smarter, more efficient sensing network.
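
    The abstract does not specify how water level is extracted from the camera, so the following is only a hedged sketch of one common approach: treat the waterline as the strongest horizontal edge within a calibrated region of interest (e.g. over a staff gauge) and map its pixel row linearly to a level. The ROI, calibration constants, and file name are all hypothetical.

```python
# Hypothetical sketch of camera-based water level estimation; not the
# paper's algorithm. ROI and calibration values are placeholders.
import cv2
import numpy as np

def estimate_water_level(frame_path: str,
                         roi=(slice(100, 400), slice(200, 260)),
                         row_at_zero=380.0,
                         metres_per_pixel=0.005):
    """Estimate water level (metres above a reference mark) from one frame."""
    gray = cv2.cvtColor(cv2.imread(frame_path), cv2.COLOR_BGR2GRAY)
    band = cv2.GaussianBlur(gray[roi], (5, 5), 0)   # vertical strip over a gauge
    # The waterline shows up as the row with the largest mean vertical
    # intensity gradient |dI/dy|.
    grad = np.abs(cv2.Sobel(band, cv2.CV_64F, 0, 1, ksize=5)).mean(axis=1)
    waterline_row = int(np.argmax(grad)) + roi[0].start
    return (row_at_zero - waterline_row) * metres_per_pixel
```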
