Inferring the photometric and size evolution of galaxies from image simulations
Current constraints on models of galaxy evolution rely on morphometric
catalogs extracted from multi-band photometric surveys. However, these catalogs
are altered by selection effects that are difficult to model, that correlate in
non-trivial ways, and that can lead to contradictory predictions if not taken
into account carefully. To address this issue, we have developed a new approach
combining parametric Bayesian indirect likelihood (pBIL) techniques and
empirical modeling with realistic image simulations that reproduce a large
fraction of these selection effects. This allows us to perform a direct
comparison between observed and simulated images and to infer robust
constraints on model parameters. We use a semi-empirical forward model to
generate a distribution of mock galaxies from a set of physical parameters.
These galaxies are passed through an image simulator reproducing the
instrumental characteristics of any survey and are then extracted in the same
way as the observed data. The discrepancy between the simulated and observed
data is quantified and minimized with a custom sampling process based on
adaptive Markov chain Monte Carlo (MCMC) methods. Using synthetic data matching most
of the properties of a CFHTLS Deep field, we demonstrate the robustness and
internal consistency of our approach by inferring the parameters governing the
size and luminosity functions and their evolutions for different realistic
populations of galaxies. We also compare the results of our approach with those
obtained from the classical spectral energy distribution fitting and
photometric redshift approach. Our pipeline efficiently infers the luminosity
and size distribution and evolution parameters from a very limited number of
observables (3 photometric bands). When compared to SED fitting based on the
same set of observables, our method yields results that are more accurate and
free from systematic biases.
Comment: 24 pages, 12 figures, accepted for publication in A&
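The core loop of such a simulation-based (pBIL-style) inference scheme can be sketched in a few lines. The toy generative model, the binned summary statistic, and the discrepancy function below are illustrative stand-ins for the paper's image simulator and source-extraction pipeline, not its actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for the image simulator + source extraction:
# a toy model whose single parameter theta shapes a binned "observable" summary.
def simulate_summary(theta, n=2000):
    fluxes = rng.exponential(theta, size=n)
    hist, _ = np.histogram(fluxes, bins=10, range=(0, 8))
    return hist / n

observed = simulate_summary(1.5)     # synthetic "observed" data, true theta = 1.5

def discrepancy(a, b):
    """Quantify the mismatch between simulated and observed summaries."""
    return np.sum((a - b) ** 2)

# Metropolis steps on theta, accepting moves via an exponential pseudo-likelihood
# rule exp(-beta * delta_discrepancy), so the chain concentrates where the
# simulated and observed summaries agree.
def pbil_mcmc(observed, n_steps=400, step=0.3, beta=200.0):
    theta = 1.0
    d = discrepancy(simulate_summary(theta), observed)
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.normal(0, step)
        if prop <= 0:                # parameter must stay positive
            continue
        d_prop = discrepancy(simulate_summary(prop), observed)
        if np.log(rng.uniform()) < -beta * (d_prop - d):
            theta, d = prop, d_prop
        chain.append(theta)
    return np.array(chain)

chain = pbil_mcmc(observed)
print(round(chain[len(chain) // 2:].mean(), 2))
```

In an adaptive variant the proposal scale `step` would be tuned on the fly from the running acceptance rate; here it is fixed for brevity.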
Occlusion-Robust MVO: Multimotion Estimation Through Occlusion Via Motion Closure
Visual motion estimation is an integral and well-studied challenge in
autonomous navigation. Recent work has focused on addressing multimotion
estimation, which is especially challenging in highly dynamic environments.
Such environments not only comprise multiple, complex motions but also tend to
exhibit significant occlusion.
Previous work in object tracking focuses on maintaining the integrity of
object tracks but usually relies on specific appearance-based descriptors or
constrained motion models. These approaches are very effective in specific
applications but do not generalize to the full multimotion estimation problem.
This paper presents a pipeline for estimating multiple motions, including the
camera egomotion, in the presence of occlusions. This approach uses an
expressive motion prior to estimate the SE(3) trajectory of every motion in
the scene, even during temporary occlusions, and to identify the reappearance
of motions through motion closure. The performance of this occlusion-robust
multimotion visual odometry (MVO) pipeline is evaluated on real-world data and
the Oxford Multimotion Dataset.
Comment: To appear at the 2020 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS). An earlier version of this work first
appeared at the Long-term Human Motion Planning Workshop (ICRA 2019). 8
pages, 5 figures. Video available at
https://www.youtube.com/watch?v=o_N71AA6FR
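The SE(3) bookkeeping behind such a pipeline can be sketched with plain homogeneous matrices: extrapolate an occluded motion with a constant-velocity prior, then test closure by the relative transform between the extrapolation and a reappeared estimate. The thresholds and poses below are illustrative, not the paper's:

```python
import numpy as np

def se3(theta, t):
    """Homogeneous SE(3) transform: rotation about z by theta plus translation t."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    T[:3, 3] = t
    return T

# Constant-velocity motion prior: the same incremental transform each frame.
step = se3(0.1, [1.0, 0.0, 0.0])

pose = np.eye(4)
trajectory = [pose]
for _ in range(5):                       # frames 1-5: motion directly observed
    pose = pose @ step
    trajectory.append(pose)

# Frames 6-8: the motion is occluded, so extrapolate with the prior.
extrapolated = trajectory[-1] @ np.linalg.matrix_power(step, 3)

# At frame 8 a motion reappears; motion closure checks whether its estimated
# pose is consistent with the extrapolation (small relative transform).
reappeared = extrapolated @ se3(0.01, [0.05, 0.0, 0.0])   # nearly consistent
rel = np.linalg.inv(extrapolated) @ reappeared
trans_err = np.linalg.norm(rel[:3, 3])
rot_err = np.arccos((np.trace(rel[:3, :3]) - 1) / 2)      # angle of relative rotation
closed = trans_err < 0.2 and rot_err < 0.05
print(closed)
```

A full system would compare whole trajectory segments under the motion prior rather than a single pose, but the closure decision reduces to the same relative-transform test.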
The Luminosity Function at z~8 from 97 Y-band dropouts: Inferences About Reionization
[Abbreviated] We present the largest search to date for Lyman break
galaxies (LBGs) based on 350 arcmin of HST observations in the V-, Y-, J-
and H-bands from the Brightest of Reionizing Galaxies (BoRG) survey. The BoRG
dataset includes 50 arcmin of new data and deeper observations of two
previous BoRG pointings, from which we present 9 new LBG candidates,
bringing the total number of BoRG LBGs to 38 with (AB system). We introduce a new Bayesian formalism for
estimating the galaxy luminosity function (LF), which does not require binning
(and thus smearing) of the data and includes a likelihood based on the formally
correct binomial distribution as opposed to the often used approximate Poisson
distribution. We demonstrate the utility of the new method on a sample of
LBGs that combines the bright BoRG galaxies with the fainter sources published
in Bouwens et al. (2012) from the HUDF and ERS programs. We show that the
LF is well described by a Schechter function with a characteristic
magnitude , a faint-end slope of , and a number density of . Integrated down to this
LF yields a luminosity density, . Our LF analysis
is consistent with previously published determinations within 1σ. We
discuss the implication of our study for the physics of reionization. By
assuming theoretically motivated priors on the clumping factor and the photon
escape fraction we show that the UV LF from galaxy samples down to
can ionize only 10-50% of the neutral hydrogen at . Full reionization
would require extending the LF down to .
Comment: Accepted for publication in ApJ, 22 pages, 15 figures
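The binomial-versus-Poisson distinction made above is easy to demonstrate numerically. The counts below are made-up illustration, not the BoRG sample:

```python
from math import comb, factorial, log

def log_binomial(k, n, p):
    """Exact binomial log-likelihood: k detections in n effective trials."""
    return log(comb(n, k)) + k * log(p) + (n - k) * log(1 - p)

def log_poisson(k, lam):
    """Poisson approximation with rate lam = n * p."""
    return k * log(lam) - lam - log(factorial(k))

# Illustrative numbers: 100 effective trials, detection probability 0.3,
# 38 observed sources. The Poisson form is only valid for small p.
n, p, k = 100, 0.3, 38
lb = log_binomial(k, n, p)
lp = log_poisson(k, n * p)
print(round(lb - lp, 3))
```

As `p` grows, the binomial variance `n*p*(1-p)` shrinks relative to the Poisson variance `n*p`, so the two likelihoods diverge; with `p → 0` at fixed `n*p` the difference vanishes.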
Continuous Modeling of 3D Building Rooftops From Airborne LIDAR and Imagery
In recent years, a number of mega-cities have provided 3D photorealistic virtual models to support decision-making for maintaining the cities' infrastructure and environment more effectively. 3D virtual city models are static snapshots of the environment and represent the status quo at the time of their data acquisition. However, cities are dynamic systems that continuously change over time, so their virtual representations need to be regularly updated in a timely manner to allow for accurate analysis and the simulated results that decisions are based upon. The concept of "continuous city modeling" is to progressively reconstruct city models by accommodating changes recognized in the spatio-temporal domain, while preserving unchanged structures. However, developing a universal intelligent machine enabling continuous modeling remains a challenging task. Therefore, this thesis proposes a novel research framework for continuously reconstructing 3D building rooftops using multi-sensor data. To achieve this goal, we first propose a 3D building rooftop modeling method using airborne LiDAR data. The main focus is the implementation of an implicit regularization method that imposes data-driven building regularity on the noisy boundaries of roof planes when reconstructing 3D building rooftop models. The implicit regularization process is implemented in the framework of Minimum Description Length (MDL) combined with Hypothesize and Test (HAT). Secondly, we propose a context-based geometric hashing method to align newly acquired image data with existing building models. The novelty is the use of context features to achieve robust and accurate matching results. Thirdly, the existing building models are refined by a newly proposed sequential fusion method. The main advantage of the proposed method is its ability to progressively refine modeling errors frequently observed in LiDAR-driven building models.
The refinement process is conducted in the framework of MDL combined with HAT. Markov chain Monte Carlo (MCMC) coupled with Simulated Annealing (SA) is employed to perform a global optimization. The results demonstrate that the proposed continuous rooftop modeling methods show promise for supporting various critical decisions, not only by reconstructing 3D rooftop models accurately, but also by updating the models using multi-sensor data.
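The MDL-plus-Hypothesize-and-Test optimization pattern described above can be sketched on a toy problem: choosing breakpoints for a noisy 1-D "boundary" signal, with add/remove/jitter hypotheses accepted under a simulated-annealing rule. The signal, cost weighting, and move set are illustrative assumptions, not the thesis's actual roof-plane formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-in for a noisy roof-plane boundary: a step signal whose
# true breakpoint is at index 100.
n = 200
y = np.concatenate([np.zeros(100), np.ones(100)]) + rng.normal(0, 0.1, n)

def mdl_cost(breaks):
    """MDL-style score: data-fit term plus a complexity penalty per segment."""
    edges = [0, *sorted(breaks), n]
    rss = sum(((y[a:b] - y[a:b].mean()) ** 2).sum()
              for a, b in zip(edges, edges[1:]))
    return n * np.log(rss / n) + 2 * (len(breaks) + 1) * np.log(n)

# Hypothesize-and-Test moves (add / remove / jitter a breakpoint), accepted
# under a simulated-annealing rule so early iterations can escape bad hypotheses.
def anneal(n_iters=2000, t0=5.0):
    state = {50}                         # start from a deliberately wrong hypothesis
    cost = mdl_cost(state)
    for i in range(n_iters):
        temp = t0 * (1 - i / n_iters) + 1e-3
        prop = set(state)
        move = rng.integers(3)
        if move == 0 and len(prop) < 5:          # hypothesize a new breakpoint
            prop.add(int(rng.integers(1, n)))
        elif move == 1 and prop:                 # drop an existing one
            prop.discard(int(rng.choice(sorted(prop))))
        elif prop:                               # jitter one locally
            b = int(rng.choice(sorted(prop)))
            prop.discard(b)
            prop.add(int(np.clip(b + rng.integers(-10, 11), 1, n - 1)))
        if not prop:
            continue
        c = mdl_cost(prop)
        if c < cost or rng.uniform() < np.exp((cost - c) / temp):
            state, cost = prop, c
    return sorted(state)

best = anneal()
print(best)
```

The MDL term trades residual error against model complexity, so the annealer is rewarded for recovering the single true breakpoint rather than overfitting the noise.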
Inferring Latent States and Refining Force Estimates via Hierarchical Dirichlet Process Modeling in Single Particle Tracking Experiments
Optical microscopy provides rich spatio-temporal information characterizing
in vivo molecular motion. However, effective forces and other parameters used
to summarize molecular motion change over time in live cells due to latent
state changes, e.g., changes induced by dynamic micro-environments,
photobleaching, and other heterogeneity inherent in biological processes. This
study focuses on techniques for analyzing Single Particle Tracking (SPT) data
experiencing abrupt state changes. We demonstrate the approach on GFP tagged
chromatids experiencing metaphase in yeast cells and probe the effective forces
resulting from dynamic interactions that reflect the sum of a number of
physical phenomena. State changes are induced by factors such as microtubule
dynamics exerting force through the centromere, thermal polymer fluctuations,
etc. Simulations are used to demonstrate the relevance of the approach in more
general SPT data analyses. Refined force estimates are obtained by adopting and
modifying a nonparametric Bayesian modeling technique, the Hierarchical
Dirichlet Process Switching Linear Dynamical System (HDP-SLDS), for SPT
applications. The HDP-SLDS method shows promise in systematically identifying
dynamical regime changes induced by unobserved state changes when the number of
underlying states is unknown in advance (a common problem in SPT applications).
We expand on the relevance of the HDP-SLDS approach, review the relevant
background of Hierarchical Dirichlet Processes, show how to map discrete time
HDP-SLDS models to classic SPT models, and discuss limitations of the approach.
In addition, we demonstrate new computational techniques for tuning
hyperparameters and for checking the statistical consistency of model
assumptions directly against individual experimental trajectories; the
techniques circumvent the need for "ground-truth" and subjective information.
Comment: 25 pages, 6 figures. Differs only typographically from the PLoS One
publication available freely as an open-access article at
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.013763
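The model class involved can be illustrated generatively: a switching linear dynamical system in which a latent discrete state selects the dynamics (the effective force) at each time step. The regime parameters and the crude windowed read-out below are illustrative assumptions, not an HDP-SLDS fit:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two dynamical regimes for a tracked particle: x_{t+1} = a*x_t + b + noise,
# where the latent discrete state picks (a, b); b acts as an effective force.
regimes = {0: (0.9, 0.0), 1: (0.9, 0.5)}
trans = np.array([[0.99, 0.01],          # "sticky" transitions: state changes
                  [0.01, 0.99]])         # are abrupt but infrequent

T = 500
states = np.empty(T, dtype=int)
x = np.empty(T)
states[0], x[0] = 0, 0.0
for t in range(1, T):
    states[t] = rng.choice(2, p=trans[states[t - 1]])
    a, b = regimes[states[t]]
    x[t] = a * x[t - 1] + b + rng.normal(0, 0.05)

# Crude regime read-out: the regimes relax to attractors 0 and b/(1-a) = 5,
# so a windowed mean separates them. An (HDP-)SLDS instead infers the number
# of regimes, their dynamics, and the switch times jointly.
window = np.convolve(x, np.ones(25) / 25, mode="same")
guessed = (window > 2.5).astype(int)
accuracy = (guessed == states).mean()
print(round(accuracy, 2))
```

The hand-set threshold is exactly what the nonparametric approach avoids: with an HDP prior over the discrete states, the number of regimes need not be known in advance.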
Improved image analysis by maximised statistical use of geometry-shape constraints
Identifying the underlying models in a set of data points contaminated by noise and outliers leads to a highly complex multi-model fitting problem. This problem can be posed as a clustering problem by constructing higher-order affinities between data points into a hypergraph, which can then be partitioned using spectral clustering. Calculating the weights of all hyperedges is computationally expensive, hence an approximation is required. In this thesis, the aim is to find an efficient and effective approximation that produces an excellent segmentation outcome. Firstly, the effect of hyperedge size on the speed and accuracy of the clustering is investigated. Almost all previous work on hypergraph clustering in computer vision has considered the smallest possible hyperedge size, due to the lack of research into the potential benefits of large hyperedges and of effective algorithms to generate them. In this thesis, it is shown that large hyperedges are better from both theoretical and empirical standpoints. The efficiency of this technique on various higher-order grouping problems is investigated. In particular, we show that our approach improves the accuracy and efficiency of motion segmentation from dense, long-term trajectories. A shortcoming of the above approach is that the probability of a generated sample being impure increases with the size of the sample. To address this issue, a novel guided sampling strategy for large hyperedges, based on the concept of minimizing the largest residual, is also included. Each sample is guided by optimizing an order-statistics-based cost function. Samples are generated using a greedy algorithm coupled with a data sub-sampling strategy. The experimental analysis shows that this proposed step is both accurate and computationally efficient compared to state-of-the-art robust multi-model fitting techniques.
However, the optimization method for guiding samples involves hard-to-tune parameters. Thus a sampling method is eventually developed that significantly facilitates solving the segmentation problem, using a new form of the Markov chain Monte Carlo (MCMC) method to sample efficiently from the hyperedge distribution. To sample from this distribution effectively, the proposed Markov chain includes new types of long and short jumps to balance exploration and exploitation across all structures. Unlike common sampling methods, this method does not require any specific prior knowledge about the distribution of models. The output set of samples leads to a clustering solution from which the final model parameters for each segment are obtained. The overall method competes favorably with the state of the art in terms of both computational cost and segmentation accuracy.
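The hypergraph-clustering formulation can be sketched end to end on a toy two-line dataset: sample hyperedges, weight each by how well a single model fits it, accumulate a clique-expanded pairwise affinity, and bipartition spectrally. The data, hyperedge size, and weighting kernel are illustrative choices, not the thesis's tuned pipeline:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two line structures in 2D plus noise: a minimal multi-model fitting setup.
n = 40
t = rng.uniform(-1, 1, n)
pts = np.vstack([
    np.column_stack([t[:20], 0.5 * t[:20]]),          # line y = 0.5x
    np.column_stack([t[20:], -0.5 * t[20:] + 1.0]),   # line y = -0.5x + 1
]) + rng.normal(0, 0.01, (n, 2))

def line_residual(sample):
    """Max orthogonal residual of a total-least-squares line fit."""
    P = pts[sample] - pts[sample].mean(axis=0)
    _, _, vt = np.linalg.svd(P)
    return np.abs(P @ vt[-1]).max()       # vt[-1] is the line's normal direction

# Sample hyperedges (size 4, i.e. larger than the 2-point minimum) and
# accumulate a clique-expanded pairwise affinity weighted by model fit.
A = np.zeros((n, n))
for _ in range(3000):
    e = rng.choice(n, size=4, replace=False)
    w = np.exp(-(line_residual(e) / 0.05) ** 2)   # near 1 only for pure samples
    for i in e:
        for j in e:
            if i != j:
                A[i, j] += w

# Spectral bipartition: sign of the Fiedler vector of the graph Laplacian.
d = A.sum(axis=1)
L = np.diag(d) - A
_, vecs = np.linalg.eigh(L)
labels = (vecs[:, 1] > 0).astype(int)

truth = np.array([0] * 20 + [1] * 20)
acc = max((labels == truth).mean(), (labels != truth).mean())
print(acc)
```

Impure hyperedges (mixing the two lines) receive near-zero weight, so the affinity matrix is close to block-diagonal and the spectral cut recovers the two structures; this is the mechanism the guided-sampling and MCMC contributions aim to make efficient at larger hyperedge sizes.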