18 research outputs found

    HOLISMOKES -- IV. Efficient mass modeling of strong lenses through deep learning

    Modelling the mass distributions of strong gravitational lenses is often necessary to use them as astrophysical and cosmological probes. With the high number of lens systems ($>10^5$) expected from upcoming surveys, it is timely to explore efficient modeling approaches beyond traditional, time-consuming MCMC techniques. We train a CNN on images of galaxy-scale lenses to predict the parameters of the SIE mass model ($x$, $y$, $e_x$, $e_y$, and $\theta_E$). To train the network, we simulate images based on real observations from the HSC Survey for the lens galaxies and from the HUDF as lensed galaxies. We tested different network architectures, the effect of different data sets, and different input distributions of $\theta_E$. We find that the CNN performs well, and with the network trained on a uniform distribution of $\theta_E > 0.5"$ we obtain the following median values with $1\sigma$ scatter: $\Delta x = (0.00^{+0.30}_{-0.30})"$, $\Delta y = (0.00^{+0.30}_{-0.29})"$, $\Delta\theta_E = (0.07^{+0.29}_{-0.12})"$, $\Delta e_x = -0.01^{+0.08}_{-0.09}$, and $\Delta e_y = 0.00^{+0.08}_{-0.09}$. The bias in $\theta_E$ is driven by systems with small $\theta_E$. Therefore, when we further predict the multiple lensed image positions and time delays based on the network output, we apply the network to the sample limited to $\theta_E > 0.8"$. In this case, the offset between the predicted and input lensed image positions is $(0.00^{+0.29}_{-0.29})"$ and $(0.00^{+0.32}_{-0.31})"$ for $x$ and $y$, respectively. For the fractional difference between the predicted and true time delays, we obtain $0.04^{+0.27}_{-0.05}$. Our CNN is able to predict the SIE parameters in fractions of a second on a single CPU, and with its output we can predict the image positions and time delays in an automated way, such that we are able to process efficiently the huge number of expected lens detections in the near future. Comment: 17 pages, 14 figures
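
    As a rough illustration of the approach described in this abstract, the following PyTorch sketch (not the authors' code) shows a small CNN regressing the five SIE parameters from lens cutouts. The architecture, band count, cutout size, and loss are assumptions for illustration only.

```python
# Minimal sketch, assuming a small CNN suffices to illustrate the idea:
# regress the five SIE parameters (x, y, e_x, e_y, theta_E) from cutouts.
import torch
import torch.nn as nn

class SIERegressor(nn.Module):
    def __init__(self, n_bands: int = 4, n_params: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, n_params)  # (x, y, e_x, e_y, theta_E)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(img).flatten(1))

model = SIERegressor()
cutouts = torch.randn(8, 4, 64, 64)   # batch of hypothetical 4-band 64x64 cutouts
params = model(cutouts)               # -> (8, 5) predicted SIE parameters
loss = nn.functional.mse_loss(params, torch.zeros_like(params))  # placeholder target
```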

    HOLISMOKES -- IX. Neural network inference of strong-lens parameters and uncertainties from ground-based images

    Modeling of strong gravitational lenses is a necessity for further applications in astrophysics and cosmology. Especially with the large number of detections in current and upcoming surveys such as the Rubin Observatory Legacy Survey of Space and Time (LSST), it is timely to investigate automated and fast analysis techniques beyond the traditional and time-consuming Markov chain Monte Carlo sampling methods. Building upon our convolutional neural network (CNN) presented in Schuldt et al. (2021b), we present here another CNN, specifically a residual neural network (ResNet), that predicts the five mass parameters of a singular isothermal ellipsoid (SIE) profile (lens center $x$ and $y$, ellipticity $e_x$ and $e_y$, Einstein radius $\theta_E$) and the external shear ($\gamma_{ext,1}$, $\gamma_{ext,2}$) from ground-based imaging data. In contrast to our CNN, this ResNet further predicts a $1\sigma$ uncertainty for each parameter. To train our network, we use our improved pipeline from Schuldt et al. (2021b) to simulate lens images using real images of galaxies from the Hyper Suprime-Cam (HSC) Survey and from the Hubble Ultra Deep Field as lens galaxies and background sources, respectively. We find overall very good recoveries for the SIE parameters, while differences remain in predicting the external shear. From our tests, the low image resolution is most likely the limiting factor for predicting the external shear. Given the run time of milliseconds per system, our network is perfectly suited to predict, in time, the next appearing image and the time delays of lensed transients. Therefore, we also present the performance of the network on these quantities in comparison to our simulations. Our ResNet is able to predict the SIE and shear parameter values in fractions of a second on a single CPU, such that we are able to process efficiently the huge number of expected galaxy-scale lenses in the near future. Comment: 16 pages, including 11 figures, accepted for publication by A&A
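
    A common way to obtain per-parameter $1\sigma$ uncertainties from a network is to predict a mean and log-variance per parameter and train with a Gaussian negative log-likelihood. The sketch below illustrates this generic technique; the paper's exact loss and architecture may differ, and `resnet18`, the input size, and the loss form are illustrative assumptions.

```python
# Hedged sketch: heteroscedastic regression with a ResNet backbone, predicting
# a mean and a log-variance for each of the 7 SIE + shear parameters.
import torch
import torch.nn as nn
import torchvision.models as models

n_params = 7  # x, y, e_x, e_y, theta_E, gamma_ext1, gamma_ext2

backbone = models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 2 * n_params)  # mean + log-variance

def gaussian_nll(output: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    mean, log_var = output.chunk(2, dim=1)
    return (0.5 * (log_var + (target - mean) ** 2 / log_var.exp())).mean()

imgs = torch.randn(4, 3, 64, 64)   # hypothetical ground-based cutouts
truth = torch.randn(4, n_params)   # simulated SIE + shear parameters
loss = gaussian_nll(backbone(imgs), truth)
sigma = (backbone(imgs).chunk(2, dim=1)[1] / 2).exp()  # predicted 1-sigma per parameter
```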

    Three dimensional tracking of exploratory behavior of barnacle cyprids using stereoscopy

    Surface exploration is a key step in the colonization of surfaces by sessile marine biofoulers. As many biofouling organisms can delay settlement until a suitable surface is encountered, colonization can comprise surface exploration and intermittent swimming. As such, the process is best followed in three dimensions. Here we present a low-cost, transportable stereoscopic system consisting of two consumer camcorders. We apply this novel apparatus to behavioral analysis of barnacle larvae (∼800 µm length) during surface exploration and extract and analyze the three-dimensional patterns of movement. The resolution of the system and the accuracy of position determination are characterized. As a first practical result, three-dimensional swimming trajectories of the cypris larva of the barnacle Semibalanus balanoides are recorded in the vicinity of a glass surface and close to PEG2000-OH and C11NMe3+Cl− terminated self-assembled monolayers. Although less frequently used in biofouling experiments due to its short reproductive season, the selected model species [Marechal and Hellio (2011), Int Biodeterior Biodegrad, 65(1):92-101] has been used following a number of recent investigations on the settlement behavior on chemically different surfaces [Aldred et al. (2011), ACS Appl Mater Interfaces, 3(6):2085-2091]. Experiments were scheduled to match the availability of cyprids off the north-east coast of England so that natural material could be used. In order to demonstrate the biological applicability of the system, analyses of parameters such as swimming direction, swimming velocity, and swimming angle are performed. Funding: DFG/Ro 2524/2-2; DFG/Ro 2497/7-2; ONR/N00014-08-1-1116; ONR/N00014-12-1-0498; EC/FP7/2007-2013/23799
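
    The core geometric step of such a stereoscopic system, recovering a 3-D larval position from matched pixel coordinates in the two calibrated views, can be sketched with OpenCV as below. The projection matrices and point coordinates are placeholders; in practice they come from a stereo calibration of the two camcorders.

```python
# Minimal sketch of stereo triangulation under assumed calibration.
import numpy as np
import cv2

# 3x4 projection matrices P = K [R | t] from stereo calibration (assumed here:
# identity intrinsics, left camera at the origin, 10 cm horizontal baseline)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

# Matched image points of the cyprid in both views (pixel coordinates), 2xN
pts1 = np.array([[320.0], [240.0]])
pts2 = np.array([[300.0], [240.0]])

X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4xN homogeneous coordinates
X = (X_h[:3] / X_h[3]).T                         # Nx3 3-D positions
print(X)  # one 3-D point per frame; chaining frames yields the swimming trajectory
```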

    Joint tracking and segmentation of multiple targets

    Tracking-by-detection has proven to be the most successful strategy to address the task of tracking multiple targets in unconstrained scenarios [e.g. 40, 53, 55]. Traditionally, a set of sparse detections, generated in a preprocessing step, serves as input to a high-level tracker whose goal is to correctly associate these "dots" over time. An obvious shortcoming of this approach is that most information available in image sequences is simply ignored by thresholding weak detection responses and applying non-maximum suppression. We propose a multi-target tracker that exploits low-level image information and associates every (super)pixel to a specific target or classifies it as background. As a result, we obtain a video segmentation in addition to the classical bounding-box representation in unconstrained, real-world videos. Our method shows encouraging results on many standard benchmark sequences and significantly outperforms state-of-the-art tracking-by-detection approaches in crowded scenes with long-term partial occlusions. Anton Milan, Laura Leal-Taixé, Konrad Schindler, Ian Reid
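
    As a toy illustration of the superpixel-to-target association idea (far simpler than the joint energy the paper optimizes), one can label each superpixel with the nearest predicted target and fall back to background beyond a distance threshold. All positions and thresholds below are hypothetical.

```python
# Illustrative nearest-target labelling of superpixels; not the paper's method.
import numpy as np

sp_centers = np.random.rand(200, 2) * [640, 480]      # superpixel centroids (toy)
targets = np.array([[100.0, 200.0], [400.0, 300.0]])  # predicted target positions

dists = np.linalg.norm(sp_centers[:, None, :] - targets[None, :, :], axis=2)
labels = dists.argmin(axis=1)             # nearest target id per superpixel
labels[dists.min(axis=1) > 60.0] = -1     # -1 = background beyond 60 px
```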

    Outdoor Human Motion Capture using Inverse Kinematics and von Mises-Fisher Sampling


    HOTA: A higher order metric for evaluating multi-object tracking

    Multi-object tracking (MOT) has been notoriously difficult to evaluate. Previous metrics overemphasize the importance of either detection or association. To address this, we present a novel MOT evaluation metric, higher order tracking accuracy (HOTA), which explicitly balances the effect of performing accurate detection, association, and localization into a single unified metric for comparing trackers. HOTA decomposes into a family of sub-metrics which are able to evaluate each of five basic error types separately, which enables clear analysis of tracking performance. We evaluate the effectiveness of HOTA on the MOTChallenge benchmark, and show that it is able to capture important aspects of MOT performance not previously taken into account by established metrics. Furthermore, we show that HOTA scores better align with human visual evaluation of tracking performance.
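
    The decomposition mentioned above can be made concrete: at a single localization threshold α, HOTA_α is the geometric mean of a detection accuracy and an association accuracy, HOTA_α = sqrt(DetA_α · AssA_α), and the final score averages HOTA_α over a range of α. A minimal sketch of this published formula:

```python
# HOTA at one localization threshold alpha, from the published definition.
# A(c) is the association accuracy TPA/(TPA+FNA+FPA) of each TP match c.

def hota_alpha(ass_acc_per_tp, num_fn: int, num_fp: int) -> float:
    """ass_acc_per_tp: association accuracy A(c) for each TP match at this alpha."""
    tp = len(ass_acc_per_tp)
    if tp == 0:
        return 0.0
    det_a = tp / (tp + num_fn + num_fp)   # detection accuracy DetA
    ass_a = sum(ass_acc_per_tp) / tp      # association accuracy AssA
    return (det_a * ass_a) ** 0.5

# Toy example: 3 TP matches with per-match association accuracies, 1 FN, 1 FP
print(hota_alpha([0.9, 0.8, 1.0], num_fn=1, num_fp=1))
```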

    Photometric redshift estimation with a convolutional neural network: NetZ

    Galaxy redshifts are a key characteristic for nearly all extragalactic studies. Since spectroscopic redshifts require additional telescope and human resources, millions of galaxies are known without spectroscopic redshifts. Therefore, it is crucial to have methods for estimating the redshift of a galaxy based on its photometric properties, the so-called photo-z. We have developed NetZ, a new method using a convolutional neural network (CNN) to predict the photo-z based on galaxy images, in contrast to previous methods that often used only the integrated photometry of galaxies without their images. We use data from the Hyper Suprime-Cam Subaru Strategic Program (HSC SSP) in five different filters as the training data. The network performs well overall over the whole redshift range between 0 and 4, and especially in the high-z range, where it fares better than other methods on the same data. We obtained a precision of σ = 0.12 in |z_pred − z_ref| (68% confidence interval) with a CNN working for all galaxy types, averaged over all galaxies in the redshift range of 0 to ∼4. We carried out a comparison with a network trained on point-like sources, highlighting the importance of morphological information for our redshift estimation. By limiting the scope to smaller redshift ranges or to luminous red galaxies, we find a further notable improvement. We have published more than 34 million new photo-z values predicted with NetZ. This shows that the new method is very simple and swift in application and, importantly, covers a wide redshift range that is limited only by the available training data. It is broadly applicable, particularly with regard to upcoming surveys such as the Rubin Observatory Legacy Survey of Space and Time, which will provide images of billions of galaxies with image quality similar to HSC. Our HSC photo-z estimates are also beneficial to the Euclid survey, given the overlap in the footprints of HSC and Euclid.
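
    For illustration, here is a minimal sketch, not the NetZ architecture, of a CNN that maps 5-band cutouts to a single photo-z by regression; layer sizes, cutout size, and the loss are assumptions.

```python
# Hedged sketch: image-based photo-z regression from 5-filter cutouts.
import torch
import torch.nn as nn

photoz_net = nn.Sequential(
    nn.Conv2d(5, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 1),   # predicted photo-z
)

cutouts = torch.randn(16, 5, 64, 64)     # 5 filters, 64x64 pixels (illustrative)
z_pred = photoz_net(cutouts).squeeze(1)  # one redshift estimate per galaxy
loss = nn.functional.l1_loss(z_pred, torch.rand(16) * 4)  # reference z as target
```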

    Learn to Predict Sets Using Feed-Forward Neural Networks

    This paper addresses the task of set prediction using deep feed-forward neural networks. A set is a collection of elements which is invariant under permutation, and the size of a set is not fixed in advance. Many real-world problems, such as image tagging and object detection, have outputs that are naturally expressed as sets of entities. This creates a challenge for traditional deep neural networks, which naturally deal with structured outputs such as vectors, matrices, or tensors. We present a novel approach for learning to predict sets with unknown permutation and cardinality using deep neural networks. In our formulation we define a likelihood for a set distribution represented by a) two discrete distributions defining the set cardinality and permutation variables, and b) a joint distribution over set elements with a fixed cardinality. Depending on the problem under consideration, we define different training models for set prediction using deep neural networks. We demonstrate the validity of our set formulations on relevant vision problems such as: 1) multi-label image classification, where we outperform the other competing methods on the PASCAL VOC and MS COCO datasets, 2) object detection, for which our formulation outperforms popular state-of-the-art detectors, and 3) a complex CAPTCHA test, where we observe that, surprisingly, our set-based network acquired the ability to mimic arithmetic without any rules being coded. Hamid Rezatofighi, Tianyu Zhu, Roman Kaskman, Farbod T. Motlagh, Javen Qinfeng Shi, Anton Milan, Daniel Cremers, Laura Leal-Taixe, and Ian Reid
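
    A common concrete instance of permutation-invariant set prediction, simpler than the likelihood defined in the paper, matches predicted and ground-truth elements with the Hungarian algorithm so that the loss does not depend on output ordering. A minimal sketch of that generic technique:

```python
# Illustrative permutation-invariant set loss via Hungarian matching.
import numpy as np
from scipy.optimize import linear_sum_assignment

def set_loss(pred: np.ndarray, truth: np.ndarray) -> float:
    """pred, truth: (n, d) arrays of set elements with equal cardinality n."""
    cost = np.linalg.norm(pred[:, None, :] - truth[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)   # optimal permutation
    return float(cost[rows, cols].mean())

pred = np.array([[0.0, 0.0], [1.0, 1.0]])
truth = np.array([[1.0, 1.0], [0.0, 0.0]])  # same set, different order
print(set_loss(pred, truth))                # -> 0.0, order does not matter
```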