MegaPipe: the MegaCam image stacking pipeline at the Canadian Astronomical Data Centre
This paper describes the MegaPipe image processing pipeline at the Canadian
Astronomical Data Centre. The pipeline combines multiple images from the
MegaCam mosaic camera on CFHT into a single output image.
MegaPipe takes as input detrended MegaCam images and does a careful astrometric
and photometric calibration on them. The calibrated images are then resampled
and combined into image stacks. The astrometric calibration of the output
images is accurate to within 0.15 arcseconds relative to external reference
frames and 0.04 arcseconds internally. The photometric calibration is good to
within 0.03 magnitudes. The stacked images and catalogues derived from these
images are available through the CADC website. Data available at
http://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/megapipe/index.htm
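The resample-and-combine step described above can be illustrated with a minimal sketch. The `stack_images` helper below is hypothetical and uses toy data with a plain median combine; it ignores the per-pixel weighting, artifact masking, and astrometric resampling a real pipeline like MegaPipe performs:

```python
import numpy as np

def stack_images(exposures, zero_points, target_zp=30.0):
    """Scale each aligned exposure to a common photometric zero point,
    then median-combine to reject outliers (cosmic rays, satellites)."""
    scaled = [img * 10.0 ** (0.4 * (target_zp - zp))
              for img, zp in zip(exposures, zero_points)]
    return np.median(np.stack(scaled), axis=0)

# Three simulated, already-resampled exposures of the same 8x8 field
rng = np.random.default_rng(0)
truth = rng.uniform(100.0, 200.0, size=(8, 8))
exposures = [truth + rng.normal(0.0, 5.0, truth.shape) for _ in range(3)]
stack = stack_images(exposures, zero_points=[30.0, 30.0, 30.0])
```

The median combine is what lets the stack reach a depth and cleanliness no single exposure has, since transient artifacts appear in only one input frame.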
A Novel Framework for Highlight Reflectance Transformation Imaging
We propose a novel pipeline and related software tools for processing the multi-light image collections (MLICs) acquired in different application contexts to obtain shape and appearance information of captured surfaces, as well as to derive compact relightable representations of them. Our pipeline extends the popular Highlight Reflectance Transformation Imaging (H-RTI) framework, which is widely used in the Cultural Heritage domain. We support, in particular, perspective camera modeling, per-pixel interpolated light direction estimation, and light normalization that corrects vignetting and uneven non-directional illumination. Furthermore, we propose two novel easy-to-use software tools to simplify all processing steps. The tools, in addition to supporting easy processing and encoding of pixel data, implement a variety of visualizations, as well as multiple reflectance-model-fitting options. Experimental tests on synthetic and real-world MLICs demonstrate the usefulness of the novel algorithmic framework and the potential benefits of the proposed tools for end-user applications.
Funding: European Union (EU) Horizon 2020, action H2020-EU.3.6.3 (Reflective societies - cultural heritage and European identity), project Scan4Reco, grant number 665091; DSURF project (PRIN 2015) funded by the Italian Ministry of University and Research; Sardinian Regional Authorities under projects VIGEC and Vis&VideoLa
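One classic reflectance-model-fitting option in the RTI family is the biquadratic Polynomial Texture Map, fitted per pixel by least squares. The sketch below is illustrative, not the authors' implementation: the `fit_ptm` and `relight` names are hypothetical, and it assumes one global light direction per image rather than the per-pixel interpolated directions the paper supports:

```python
import numpy as np

def fit_ptm(intensities, light_dirs):
    """Least-squares fit of a biquadratic PTM per pixel:
    I ~ a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5.
    intensities: (n_lights, n_pixels); light_dirs: (n_lights, 2)."""
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    A = np.column_stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)])
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return coeffs  # shape (6, n_pixels)

def relight(coeffs, lu, lv):
    """Evaluate the fitted model for a new light direction."""
    basis = np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
    return basis @ coeffs
```

Six coefficients per pixel form the compact relightable representation: any novel light direction can then be rendered by a single dot product per pixel.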
Surface-Enhanced Raman Scattering Holography
Nanometric probes based on surface-enhanced Raman scattering (SERS) are promising candidates for all-optical environmental, biological and technological sensing applications with intrinsic quantitative molecular specificity. However, the effectiveness of SERS probes depends on a delicate trade-off between particle size, stability and brightness that has so far hindered their wide application in SERS imaging methodologies. In this Article, we introduce holographic Raman microscopy, which allows single-shot three-dimensional single-particle localization. We validate our approach by simultaneously performing Fourier transform Raman spectroscopy of individual SERS nanoparticles and Raman holography, using shearing interferometry to extract both the phase and the amplitude of wide-field Raman images and ultimately localize and track single SERS nanoparticles inside living cells in three dimensions. Our results represent a step towards multiplexed single-shot three-dimensional concentration mapping in many different scenarios, including live cell and tissue interrogation and complex anti-counterfeiting applications.
Peer reviewed. Postprint (author's final draft).
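The phase-and-amplitude recovery at the heart of holographic microscopy can be sketched with a generic off-axis demodulation; note the paper itself uses shearing interferometry, and the carrier frequency, mask size, and `demodulate_hologram` helper below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def demodulate_hologram(hologram, carrier, keep=4):
    """Recover the complex object field from an off-axis hologram:
    shift the interference sideband to DC, then low-pass filter."""
    ny, nx = hologram.shape
    fy, fx = carrier
    y, x = np.mgrid[0:ny, 0:nx]
    # Multiply by the reference wave to bring the sideband to baseband
    demod = hologram * np.exp(2j * np.pi * (fy * y / ny + fx * x / nx))
    spec = np.fft.fft2(demod)
    # Keep only a small neighbourhood of DC (frequencies |f| < keep)
    mask = np.zeros_like(spec)
    mask[:keep, :keep] = mask[:keep, -keep:] = 1
    mask[-keep:, :keep] = mask[-keep:, -keep:] = 1
    field = np.fft.ifft2(spec * mask)
    return np.abs(field), np.angle(field)
```

Having the complex field, rather than intensity alone, is what enables the single-shot three-dimensional localization: the phase encodes each particle's axial position.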
Reproducing reality with a high-dynamic-range multi-focal stereo display
With well-established methods for producing photo-realistic results, the next big challenge of graphics and display technologies is to achieve perceptual realism: producing imagery indistinguishable from real-world 3D scenes. To deliver all necessary visual cues for perceptual realism, we built a High-Dynamic-Range Multi-Focal Stereo Display that achieves high resolution, accurate color, a wide dynamic range, and most depth cues, including binocular presentation and a range of focal depths. The display and associated imaging system have been designed to capture and reproduce a small near-eye three-dimensional object and to allow for a direct comparison between virtual and real scenes. To assess our reproduction of realism and demonstrate the capability of the display and imaging system, we conducted an experiment in which the participants were asked to discriminate between a virtual object and its physical counterpart. Our results indicate that the participants can only detect the discrepancy with a probability of 0.44. With such a level of perceptual realism, our display apparatus can facilitate a range of visual experiments that require the highest fidelity of reproduction while allowing for the full control of the displayed stimuli.
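If the discrimination task's chance level is 0.5, the reported detection probability of 0.44 sits at or below chance. An exact binomial check shows how such a rate would be compared against chance; the trial count below is hypothetical, since the abstract does not state one:

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test: total probability of all outcomes
    no more likely than observing k successes in n trials under p."""
    probs = [comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(n + 1)]
    obs = probs[k]
    return min(1.0, sum(q for q in probs if q <= obs * (1 + 1e-9)))

# Hypothetical: 44 correct "real vs. virtual" judgments out of 100 trials
pval = binom_two_sided_p(44, 100)
```

With these assumed numbers the p-value is well above conventional significance thresholds, i.e. a 0.44 rate over 100 trials would be statistically indistinguishable from guessing.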