
    Encoding of arbitrary micrometric complex illumination patterns with reduced speckle

    In nonlinear microscopy, phase-only spatial light modulators (SLMs) enable simultaneous two-photon excitation and fluorescence emission from specific regions of interest (ROIs). However, as iterative Fourier transform algorithms (IFTAs) can only approximate the illumination of selected ROIs, both image formation and signal acquisition can be strongly affected by spatial irregularities of the illumination patterns and by speckle noise. To overcome these limitations, we propose an alternative complex illumination method (CIM) able to generate simultaneous excitation of large-area ROIs with full control over the amplitude and phase of light and with reduced speckle. As a proof of concept, we experimentally demonstrate single-photon excitation and second harmonic generation (SHG) with structured illumination over large-area ROIs.
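The IFTAs that this abstract contrasts against are typically variants of the Gerchberg-Saxton loop: alternate between the SLM plane and the image plane, enforcing the phase-only constraint in one and the target amplitude in the other. A minimal sketch of that baseline (not the authors' CIM; function and parameter names here are illustrative):

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=50, seed=0):
    """Gerchberg-Saxton IFTA: find an SLM phase whose far-field
    intensity approximates target_amp**2 (up to speckle)."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amp.shape)  # random start
    for _ in range(n_iter):
        # Propagate the phase-only SLM field (unit amplitude) to the image plane
        field = np.fft.fft2(np.exp(1j * phase))
        # Keep the propagated phase, impose the target amplitude
        field = target_amp * np.exp(1j * np.angle(field))
        # Back-propagate and keep only the phase (phase-only SLM constraint)
        phase = np.angle(np.fft.ifft2(field))
    return phase
```

The residual amplitude error of this loop is exactly the spatial irregularity and speckle the abstract refers to: the algorithm can steer energy into the ROI but cannot fully control the field inside it.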

    Tracking icebergs with time-lapse photography and sparse optical flow, LeConte Bay, Alaska, 2016–2017

    We present a workflow to track icebergs in proglacial fjords using oblique time-lapse photos and the Lucas-Kanade optical flow algorithm. We employ the workflow at LeConte Bay, Alaska, where we ran five time-lapse cameras between April 2016 and September 2017, capturing more than 400 000 photos at frame rates of 0.5–4.0 min⁻¹. Hourly to daily average velocity fields in map coordinates illustrate dynamic currents in the bay, with dominant downfjord velocities (exceeding 0.5 m s⁻¹ intermittently) and several eddies. Comparisons with simultaneous Acoustic Doppler Current Profiler (ADCP) measurements yield best agreement for the uppermost ADCP levels (∼ 12 m and above), in line with prevalent small icebergs that trace near-surface currents. Tracking results from multiple cameras compare favorably, although cameras with lower frame rates (0.5 min⁻¹) tend to underestimate high flow speeds. Tests to determine requisite temporal and spatial image resolution confirm the importance of high image frame rates, while spatial resolution is of secondary importance. Application of our procedure to other fjords will be successful if iceberg concentrations are high enough and if the camera frame rates are sufficiently rapid (at least 1 min⁻¹ for conditions similar to LeConte Bay). This work was funded by the U.S. National Science Foundation (OPP-1503910, OPP-1504288, OPP-1504521 and OPP-1504191).
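At the core of Lucas-Kanade sparse optical flow is a small linear least-squares problem solved per tracked feature: the displacement of a patch between frames is recovered from its spatial and temporal intensity gradients. A single-window, single-level sketch (the workflow above presumably uses a pyramidal implementation; all names here are illustrative):

```python
import numpy as np

def lucas_kanade_step(I0, I1, x, y, win=7):
    """One Lucas-Kanade step: least-squares displacement (dx, dy) of the
    win x win patch centred at pixel (x, y) between frames I0 and I1."""
    h = win // 2
    # Spatial gradients of the first frame (np.gradient: rows first, then cols)
    Iy, Ix = np.gradient(I0)
    sl = np.s_[y - h:y + h + 1, x - h:x + h + 1]
    # Brightness-constancy linearization: [Ix Iy] . d = -(I1 - I0)
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -(I1[sl] - I0[sl]).ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d  # (dx, dy) in pixels
```

The linearization only holds for sub-pixel to few-pixel motion, which is why the abstract's finding matters: at low frame rates the inter-frame displacement of fast icebergs exceeds the tracker's valid range and speeds are underestimated.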

    Three-dimensional scanless holographic optogenetics with temporal focusing (3D-SHOT).

    Optical methods capable of manipulating neural activity with cellular resolution and millisecond precision in three dimensions will accelerate the pace of neuroscience research. Existing approaches for targeting individual neurons, however, fall short of these requirements. Here we present a new multiphoton photo-excitation method, termed three-dimensional scanless holographic optogenetics with temporal focusing (3D-SHOT), which allows precise, simultaneous photo-activation of arbitrary sets of neurons anywhere within the addressable volume of a microscope. This technique uses point-cloud holography to place multiple copies of a temporally focused disc matching the dimensions of a neuron's cell body. Experiments in cultured cells, brain slices, and in living mice demonstrate single-neuron spatial resolution even when optically targeting randomly distributed groups of neurons in 3D. This approach opens new avenues for mapping and manipulating neural circuits, allowing a real-time, cellular resolution interface to the brain.

    Astrometry with the Wide-Field InfraRed Space Telescope

    The Wide-Field InfraRed Space Telescope (WFIRST) will be capable of delivering precise astrometry for faint sources over the enormous field of view of its main camera, the Wide-Field Imager (WFI). This unprecedented combination will be transformative for the many scientific questions that require precise positions, distances, and velocities of stars. We describe the expectations for the astrometric precision of the WFIRST WFI in different scenarios, illustrate how a broad range of science cases will see significant advances with such data, and identify aspects of WFIRST's design where small adjustments could greatly improve its power as an astrometric instrument.
    Comment: version accepted to JATI

    Reflectance Intensity Assisted Automatic and Accurate Extrinsic Calibration of 3D LiDAR and Panoramic Camera Using a Printed Chessboard

    This paper presents a novel method for fully automatic and convenient extrinsic calibration of a 3D LiDAR and a panoramic camera with a normally printed chessboard. The proposed method is based on the 3D corner estimation of the chessboard from the sparse point cloud generated by one frame scan of the LiDAR. To estimate the corners, we formulate a full-scale model of the chessboard and fit it to the segmented 3D points of the chessboard. The model is fitted by optimizing the cost function under constraints of correlation between the reflectance intensity of laser and the color of the chessboard's patterns. Powell's method is introduced for resolving the discontinuity problem in optimization. The corners of the fitted model are considered as the 3D corners of the chessboard. Once the corners of the chessboard in the 3D point cloud are estimated, the extrinsic calibration of the two sensors is converted to a 3D-2D matching problem. The corresponding 3D-2D points are used to calculate the absolute pose of the two sensors with Unified Perspective-n-Point (UPnP). Further, the calculated parameters are regarded as initial values and are refined using the Levenberg-Marquardt method. The performance of the proposed corner detection method from the 3D point cloud is evaluated using simulations. The results of experiments, conducted on a Velodyne HDL-32e LiDAR and a Ladybug3 camera under the proposed re-projection error metric, qualitatively and quantitatively demonstrate the accuracy and stability of the final extrinsic calibration parameters.
    Comment: 20 pages, submitted to the journal of Remote Sensin
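The final refinement stage described above is, in essence, damped Gauss-Newton (Levenberg-Marquardt) over a 6-DoF pose. A sketch of that idea (not the authors' code: it minimises the misalignment between LiDAR points projected onto the unit sphere and panoramic-camera bearing vectors, and uses a forward-difference numeric Jacobian for brevity; all names are illustrative):

```python
import numpy as np

def rodrigues(r):
    """Rotation matrix from an axis-angle vector (Rodrigues' formula)."""
    th = np.linalg.norm(r)
    if th < 1e-12:
        return np.eye(3)
    k = r / th
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def residuals(p, X, bearings):
    """Misalignment between transformed LiDAR points (on the unit sphere)
    and the camera bearing vectors; p = (axis-angle, translation)."""
    R, t = rodrigues(p[:3]), p[3:]
    Y = (R @ X.T).T + t
    Y = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    return (Y - bearings).ravel()

def refine_pose(p0, X, bearings, n_iter=20, lm_lambda=1e-3):
    """Levenberg-Marquardt refinement of an initial 6-DoF pose estimate."""
    p = p0.astype(float).copy()
    for _ in range(n_iter):
        r = residuals(p, X, bearings)
        J = np.empty((r.size, 6))
        for j in range(6):  # forward-difference Jacobian, column by column
            dp = np.zeros(6)
            dp[j] = 1e-6
            J[:, j] = (residuals(p + dp, X, bearings) - r) / 1e-6
        H = J.T @ J + lm_lambda * np.eye(6)  # damped normal equations
        p -= np.linalg.solve(H, J.T @ r)
    return p
```

The damping term `lm_lambda * np.eye(6)` is what distinguishes Levenberg-Marquardt from plain Gauss-Newton: it keeps the step well-behaved when the initial UPnP estimate is far from the optimum.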

    Sensor node localisation using a stereo camera rig

    In this paper, we use stereo vision processing techniques to detect and localise sensors used for monitoring simulated environmental events within an experimental sensor network testbed. Our sensor nodes communicate to the camera through patterns emitted by light emitting diodes (LEDs). Ultimately, we envisage the use of very low-cost, low-power, compact microcontroller-based sensing nodes that employ LED communication, rather than power-hungry RF, to transmit data that is gathered via existing CCTV infrastructure. To facilitate our research, we have constructed a controlled environment where nodes and cameras can be deployed and potentially hazardous chemical or physical plumes can be introduced to simulate environmental pollution events in a controlled manner. In this paper we show how 3D spatial localisation of sensors becomes a straightforward task when a stereo camera rig is used rather than the more usual 2D CCTV camera.
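The reason 3D localisation becomes straightforward with a stereo rig is that, once an LED is detected in both views, its position follows from triangulation on the disparity. A minimal parallel-axis sketch (illustrative, not the authors' pipeline; image coordinates are assumed measured in pixels relative to the principal point, with focal length `f` in pixels and baseline in metres):

```python
def triangulate(xl, yl, xr, f, baseline):
    """Parallel-axis stereo triangulation: 3D position (metres) of a point
    seen at column xl (left image), xr (right image) and row yl,
    expressed in the left camera's frame."""
    disparity = xl - xr            # pixels; larger for nearer points
    Z = f * baseline / disparity   # depth from disparity
    X = xl * Z / f                 # back-project through the left camera
    Y = yl * Z / f
    return X, Y, Z
```

Depth precision degrades quadratically with distance (a fixed disparity error maps to an error proportional to Z²/f·B), which is one design reason to prefer a wide-baseline rig over a single 2D CCTV camera for this task.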
