461 research outputs found

    Non-Line-of-Sight Imaging from iToF data

    The master's thesis will be about recovering information regarding an object out of the line of sight. In the target set-up, multi-frequency images acquired by an iToF camera looking at an intermediate wall are used in combination with a neural network for direct-global light separation. The first part consists of using analytical approaches, similar to "Fermat Flow", to estimate the location of the object around the corner, while the second part consists of using a deep learning model to perform the same task. The method will be tested on synthetic scenes, simulated with Mitsuba-v2, and on real scenes.
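
    The set-up above relies on the standard iToF phase-to-distance relation. The following Python sketch shows only that relation (not the thesis pipeline); the modulation frequencies and target distance are made-up illustrative values, chosen to show why several frequencies are used to handle phase wrapping.

        # Minimal sketch: distances implied by single-frequency iToF phase
        # measurements. All values are illustrative placeholders.
        import numpy as np

        C = 299_792_458.0  # speed of light, m/s

        def phase_to_distance(phase_rad, mod_freq_hz):
            # Target distance implied by one phase measurement (round-trip time
            # of flight); ambiguous modulo the unambiguous range c / (2 f).
            return C * phase_rad / (4.0 * np.pi * mod_freq_hz)

        freqs = np.array([20e6, 50e6, 60e6])   # modulation frequencies, Hz (illustrative)
        true_dist = 3.2                         # metres (illustrative)
        phases = (4.0 * np.pi * freqs * true_dist / C) % (2.0 * np.pi)
        # The 20 MHz channel recovers ~3.2 m; the higher frequencies wrap and
        # need multi-frequency unwrapping to recover the same distance.
        print([phase_to_distance(p, f) for p, f in zip(phases, freqs)])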

    Dissecting the Gravitational Lens B1608+656. II. Precision Measurements of the Hubble Constant, Spatial Curvature, and the Dark Energy Equation of State

    Strong gravitational lens systems with measured time delays between the multiple images provide a method for measuring the "time-delay distance" to the lens, and thus the Hubble constant. We present a Bayesian analysis of the strong gravitational lens system B1608+656, incorporating (i) new, deep Hubble Space Telescope (HST) observations, (ii) a new velocity dispersion measurement of 260+/-15 km/s for the primary lens galaxy, and (iii) an updated study of the lens' environment. When modeling the stellar dynamics of the primary lens galaxy, the lensing effect, and the environment of the lens, we explicitly include the total mass distribution profile logarithmic slope gamma' and the external convergence kappa_ext; we marginalize over these parameters, assigning well-motivated priors for them, and so turn the major systematic errors into statistical ones. The HST images provide one such prior, constraining the lens mass density profile logarithmic slope to be gamma'=2.08+/-0.03; a combination of numerical simulations and photometric observations of the B1608+656 field provides an estimate of the prior for kappa_ext: 0.10 +0.08/-0.05. This latter distribution dominates the final uncertainty on H_0. Compared with previous work on this system, the new data provide an increase in precision of more than a factor of two. In combination with the WMAP 5-year data set, we find that the B1608+656 data set constrains the curvature parameter to be -0.031 < Omega_k < 0.009 (95% CL), a level of precision comparable to that afforded by the current Type Ia SNe sample. Asserting a flat spatial geometry, we find that, in combination with WMAP, H_0 = 69.7 +4.9/-5.0 km/s/Mpc and w=-0.94 +0.17/-0.19 (68% CL), suggesting that the observations of B1608+656 constrain w as tightly as do the current Baryon Acoustic Oscillation data. (abridged)
    Comment: 24 pages, 8 figures, revisions based on referee's comments, accepted for publication in Ap
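
    For reference, the "time-delay distance" named above is, in standard notation (deflector redshift z_d, angular diameter distances D_d, D_s, D_ds, and Fermat potential phi):

        D_{\Delta t} \equiv (1 + z_d)\,\frac{D_d D_s}{D_{ds}}, \qquad
        \Delta t_{ij} = \frac{D_{\Delta t}}{c}\,\Delta\phi_{ij}, \qquad
        \phi(\theta,\beta) = \frac{(\theta-\beta)^2}{2} - \psi(\theta),

    and D_{\Delta t} \propto 1/H_0, which is how the measured delays between the images translate into a constraint on the Hubble constant.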

    Structure-aware parametric representations for time-resolved light transport

    Time-resolved illumination provides rich spatiotemporal information for applications such as accurate depth sensing or hidden geometry reconstruction, becoming a useful asset for prototyping and as input for data-driven approaches. However, time-resolved illumination measurements are high-dimensional and have a low signal-to-noise ratio, hampering their applicability in real scenarios. We propose a novel method to compactly represent time-resolved illumination using mixtures of exponentially modified Gaussians that are robust to noise and preserve structural information. Our method yields representations two orders of magnitude smaller than discretized data, providing consistent results in such applications as hidden-scene reconstruction and depth estimation, and quantitative improvements over previous approaches.
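
    As a hedged illustration of the representation described above (the paper's exact parameterization may differ), this Python sketch evaluates a small mixture of exponentially modified Gaussians, i.e. Gaussian pulses convolved with exponential decays, as a stand-in for a discretized time-resolved response; all parameter values are illustrative.

        import numpy as np
        from scipy.special import erfc

        def emg(t, mu, sigma, lam):
            # Exponentially modified Gaussian: N(mu, sigma^2) convolved with Exp(lam).
            arg = (mu + lam * sigma**2 - t) / (np.sqrt(2.0) * sigma)
            return 0.5 * lam * np.exp(0.5 * lam * (2.0 * mu + lam * sigma**2 - 2.0 * t)) * erfc(arg)

        def emg_mixture(t, weights, mus, sigmas, lams):
            # A handful of (weight, mu, sigma, lambda) tuples stands in for a full histogram.
            return sum(w * emg(t, m, s, l) for w, m, s, l in zip(weights, mus, sigmas, lams))

        # Illustrative transient: a sharp direct peak plus a broader, delayed indirect lobe.
        t = np.linspace(0.0, 20.0, 2000)   # time axis, arbitrary units
        signal = emg_mixture(t, [1.0, 0.4], [3.0, 7.0], [0.2, 1.0], [2.0, 0.3])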

    A model-independent characterisation of strong gravitational lensing by observables

    When light from a distant source object, like a galaxy or a supernova, travels towards us, it is deflected by massive objects that lie on its path. When the mass density of the deflecting object exceeds a certain threshold, multiple, highly distorted images of the source are observed. This strong gravitational lensing effect has so far been treated as a model-fitting problem. Using the observed multiple images as constraints yields a self-consistent model of the deflecting mass density and the source object. As several models meet the constraints equally well, we develop a lens characterisation that separates data-based information from model assumptions. The observed multiple images allow us to determine local properties of the deflecting mass distribution on any mass scale from one simple set of equations. Their solution is unique and free of model-dependent degeneracies. The reconstruction of source objects can be performed completely model-independently, enabling us to study galaxy evolution without a lens-model bias. Our approach reduces the lens and source description to its data-based evidence that all models agree upon, simplifies an automated treatment of large datasets, and allows for an extrapolation to a global description resembling model-based descriptions.
    Comment: Invited review paper submitted to "Observing Gravitational Lenses: Present and Future" in Universe, comments very welcome
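
    The local properties referred to above are those of the standard lens mapping; in common notation, with deflection angle alpha, lensing potential psi, convergence kappa and shear components (gamma_1, gamma_2):

        \beta = \theta - \alpha(\theta), \qquad \alpha = \nabla\psi, \qquad
        A(\theta) = \frac{\partial\beta}{\partial\theta} =
        \begin{pmatrix} 1-\kappa-\gamma_1 & -\gamma_2 \\ -\gamma_2 & 1-\kappa+\gamma_1 \end{pmatrix},

    and the observed relative positions and distortions of the multiple images constrain combinations of these local quantities at the image locations.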

    The Hubble Constant

    I review the current state of determinations of the Hubble constant, which gives the length scale of the Universe by relating the expansion velocity of objects to their distance. There are two broad categories of measurements. The first uses individual astrophysical objects which have some property that allows their intrinsic luminosity or size to be determined, or allows the determination of their distance by geometric means. The second category comprises the use of all-sky cosmic microwave background, or correlations between large samples of galaxies, to determine information about the geometry of the Universe and hence the Hubble constant, typically in combination with other cosmological parameters. Many, but not all, object-based measurements give H_0 values of around 72-74 km/s/Mpc, with typical errors of 2-3 km/s/Mpc. This is in mild discrepancy with CMB-based measurements, in particular those from the Planck satellite, which give values of 67-68 km/s/Mpc and typical errors of 1-2 km/s/Mpc. The size of the remaining systematics indicates that accuracy rather than precision is the remaining problem in a good determination of the Hubble constant. Whether a discrepancy exists, and whether new physics is needed to resolve it, depends on details of the systematics of the object-based methods, and also on the assumptions about other cosmological parameters and which datasets are combined in the case of the all-sky methods.
    Comment: Extensively revised and updated since the 2007 version: accepted by Living Reviews in Relativity as a major (2014) update of LRR 10, 4, 200
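
    The relation being calibrated is the Hubble law; at low redshift, with recession velocity v and distance D,

        v = H_0 D, \qquad t_H \equiv \frac{1}{H_0} \approx 14\,\mathrm{Gyr} \quad \text{for } H_0 \approx 70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}.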