
    Deep Appearance Maps

    We propose a deep representation of appearance, i.e., the relation between color, surface orientation, viewer position, material, and illumination. Previous approaches have used deep learning to extract classic appearance representations relating to reflectance-model parameters (e.g., Phong) or illumination (e.g., HDR environment maps). We suggest directly representing appearance itself as a network we call a deep appearance map (DAM). This is a 4D generalization of 2D reflectance maps, which hold the view direction fixed. First, we show how a DAM can be learned from images or video frames and later used to synthesize appearance given new surface orientations and viewer positions. Second, we demonstrate how another network can map from an image or video frames to a DAM network that reproduces this appearance, without a lengthy optimization such as stochastic gradient descent (learning-to-learn). Finally, we show an example appearance estimation-and-segmentation task, mapping from an image showing multiple materials to multiple deep appearance maps.
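
    A minimal sketch of what fitting a DAM per material could look like, assuming a small fully connected network in PyTorch; the class name, layer sizes, and training loop are illustrative placeholders, not the authors' published architecture:

        # Sketch of a deep appearance map (DAM): a small MLP mapping a surface
        # orientation and a viewer direction (the 4D generalization of a 2D
        # reflectance map) to an RGB color. Sizes are illustrative assumptions.
        import torch
        import torch.nn as nn

        class DeepAppearanceMap(nn.Module):
            def __init__(self, hidden: int = 128):
                super().__init__()
                # Input: surface normal (3) + view direction (3), unit vectors.
                self.net = nn.Sequential(
                    nn.Linear(6, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
                )

            def forward(self, normal: torch.Tensor, view: torch.Tensor) -> torch.Tensor:
                return self.net(torch.cat([normal, view], dim=-1))

        # Fitting the map to observed colors by gradient descent (the slow
        # per-material optimization the learning-to-learn variant avoids):
        dam = DeepAppearanceMap()
        opt = torch.optim.Adam(dam.parameters(), lr=1e-3)
        normals = nn.functional.normalize(torch.randn(1024, 3), dim=-1)
        views = nn.functional.normalize(torch.randn(1024, 3), dim=-1)
        colors = torch.rand(1024, 3)  # placeholder for colors observed in images
        for _ in range(100):
            opt.zero_grad()
            loss = nn.functional.mse_loss(dam(normals, views), colors)
            loss.backward()
            opt.step()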

    HOLISMOKES -- II. Identifying galaxy-scale strong gravitational lenses in Pan-STARRS using convolutional neural networks

    We present a systematic search for wide-separation (Einstein radius > 1.5″), galaxy-scale strong lenses in the 30 000 sq. deg of the Pan-STARRS 3π survey on the Northern sky. With long time delays of a few days to weeks, such systems are particularly well suited for catching strongly lensed supernovae with spatially resolved multiple images, and they open new perspectives on early-phase supernova spectroscopy and cosmography. We produce a set of realistic simulations by painting lensed COSMOS sources on Pan-STARRS image cutouts of lens luminous red galaxies (LRGs) with known redshift and velocity dispersion from SDSS. First, we compute the photometry of mock lenses in gri bands and apply a simple catalog-level neural network to identify a sample of 1 050 207 galaxies with colors and magnitudes similar to the mocks. Second, we train a convolutional neural network (CNN) on Pan-STARRS gri image cutouts to classify this sample, obtaining sets of 105 760 and 12 382 lens candidates with scores pCNN > 0.5 and > 0.9, respectively. Extensive tests show that CNN performance relies heavily on the design of lens simulations and the choice of negative examples for training, but little on the network architecture. Finally, we visually inspect all galaxies with pCNN > 0.9 to assemble a final set of 330 high-quality, newly discovered lens candidates, while recovering 23 published systems. For a subset, SDSS spectroscopy of the lens central regions confirms that our method correctly identifies lens LRGs at z ~ 0.1-0.7. Five spectra also show robust signatures of high-redshift background sources, and Pan-STARRS imaging confirms one of them as a quadruply imaged red source at z_s = 1.185 strongly lensed by a foreground LRG at z_d = 0.3155. In the future, we expect that the efficient and automated two-step classification method presented in this paper will be applicable to the deeper gri stacks from the LSST with minor adjustments.
    Comment: 18 pages and 11 figures (plus appendix), submitted to A&A
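
    A minimal sketch of the second classification step, assuming a small PyTorch CNN scoring 3-band (gri) cutouts; the architecture and cutout size are placeholders, since the paper reports that performance depends far more on the simulations and negative examples than on the network design:

        # Sketch of a CNN scoring gri cutouts, with candidates kept at
        # pCNN > 0.5 (loose sample) and pCNN > 0.9 (visual inspection).
        import torch
        import torch.nn as nn

        class LensCNN(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                )
                self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 1), nn.Sigmoid())

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                return self.head(self.features(x)).squeeze(-1)

        cnn = LensCNN()
        cutouts = torch.randn(8, 3, 64, 64)   # placeholder gri image cutouts
        p_cnn = cnn(cutouts)                  # scores in [0, 1]
        loose_sample = cutouts[p_cnn > 0.5]   # candidates kept for further cuts
        high_grade = cutouts[p_cnn > 0.9]     # candidates sent to visual inspection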

    A Quantitative 3D Motility Analysis of Trypanosoma brucei by Use of Digital In-line Holographic Microscopy

    We present a quantitative 3D analysis of the motility of the blood parasite Trypanosoma brucei. Digital in-line holographic microscopy has been used to track single cells with high temporal and spatial accuracy and to obtain quantitative data on their behavior. Comparing bloodstream-form and insect-form trypanosomes, as well as mutant and wild-type cells, under varying external conditions, we were able to derive a general two-state run-and-tumble model for trypanosome motility. Differences in the motility of distinct strains indicate that adaptation of the trypanosomes to their natural environments involves a change in their mode of swimming.
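
    A minimal sketch of a two-state run-and-tumble model of the kind derived here, with Poisson switching between a persistent run and an in-place reorienting tumble; the speed and switching rates below are placeholders, not fitted trypanosome parameters:

        # 3D run-and-tumble trajectory with two states and Poisson switching.
        import numpy as np

        rng = np.random.default_rng(0)
        dt, steps = 0.01, 5000          # time step [s], number of steps
        speed = 10.0                    # run speed [um/s], placeholder
        k_run_to_tumble = 0.5           # switching rates [1/s], placeholders
        k_tumble_to_run = 2.0

        pos = np.zeros((steps, 3))
        direction = rng.standard_normal(3)
        direction /= np.linalg.norm(direction)
        running = True

        for t in range(1, steps):
            if running:
                # Run: persistent swimming along the current direction.
                pos[t] = pos[t - 1] + speed * direction * dt
                if rng.random() < k_run_to_tumble * dt:
                    running = False
            else:
                # Tumble: stay in place and pick a new random direction.
                pos[t] = pos[t - 1]
                direction = rng.standard_normal(3)
                direction /= np.linalg.norm(direction)
                if rng.random() < k_tumble_to_run * dt:
                    running = True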

    HOLISMOKES - VI. New galaxy-scale strong lens candidates from the HSC-SSP imaging survey

    We have carried out a systematic search for galaxy-scale strong lenses in multiband imaging from the Hyper Suprime-Cam (HSC) survey. Our automated pipeline, based on realistic strong-lens simulations, deep neural network classification, and visual inspection, is aimed at efficiently selecting systems with wide image separations (Einstein radii θE ∼ 1.0–3.0″), intermediate-redshift lenses (z ∼ 0.4–0.7), and bright arcs for galaxy evolution and cosmology. We classified gri images of all 62.5 million galaxies in HSC Wide with i-band Kron radius ≥ 0.8″ to avoid strict preselections and to prepare for the upcoming era of deep, wide-scale imaging surveys with Euclid and the Rubin Observatory. We obtained 206 newly discovered candidates classified as definite or probable lenses, with either spatially resolved multiple images or extended, distorted arcs. In addition, we found 88 high-quality candidates that were assigned lower confidence in previous HSC searches, and we recovered 173 known systems from the literature. These results demonstrate that, aided by limited human input, deep learning pipelines with false-positive rates as low as ≃0.01% can be powerful tools for identifying rare strong lenses in large catalogs, and can substantially extend the samples found by traditional algorithms. We provide a ranked list of candidates for future spectroscopic confirmation.
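
    A back-of-the-envelope sketch of the selection arithmetic behind such a pipeline, using the catalog size and false-positive rate quoted above; the score array is a placeholder for real network outputs:

        # Why a low false-positive rate still requires visual inspection:
        # contamination scales with catalog size.
        import numpy as np

        n_galaxies = 62_500_000        # HSC Wide galaxies with Kron radius >= 0.8"
        fpr = 1e-4                     # false-positive rate of ~0.01%
        print(f"expected false positives: {fpr * n_galaxies:,.0f}")  # ~6,250

        # Ranking a (placeholder) candidate score table for human grading
        # and spectroscopic follow-up:
        scores = np.random.default_rng(42).random(1000)  # stand-in network scores
        order = np.argsort(scores)[::-1]                 # highest confidence first
        shortlist = order[scores[order] > 0.9]           # indices above a high threshold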

    A survey for the applications of content-based microscopic image analysis in microorganism classification domains
