Statistical library characterization using belief propagation across multiple technology nodes
In this paper, we propose a novel flow to enable computationally efficient statistical characterization of delay and slew in standard cell libraries. The distinguishing feature of the proposed method is the usage of a limited combination of output capacitance, input slew rate, and supply voltage for the extraction of statistical timing metrics of an individual logic gate. The efficiency of the proposed flow stems from the introduction of a novel, ultra-compact, nonlinear, analytical timing model having only four universal regression parameters. This novel model facilitates the use of maximum-a-posteriori belief propagation to learn the prior distribution for the parameters of the target technology from past characterizations of library cells belonging to various other technologies, including older ones. The framework then utilises Bayesian inference to extract the new timing model parameters using an ultra-small set of additional timing measurements from the target technology. The proposed method is validated and benchmarked on several production-level cell libraries, including a state-of-the-art 14-nm technology node and a variation-aware, compact transistor model. For the same accuracy as the conventional lookup-table approach, the new method achieves at least a 15x reduction in simulation runs. Masdar Institute of Science and Technology (Massachusetts Institute of Technology Cooperative Agreement)
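The Bayesian transfer step described above can be sketched in miniature: with a Gaussian prior on the model parameters learned from older technology nodes, MAP estimation from a handful of new measurements reduces to regularized least squares. The four-parameter model below is a hypothetical linear-in-parameters stand-in, not the paper's actual nonlinear analytical timing model, and all numbers are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical four-parameter, linear-in-parameters timing model (a stand-in
# for the paper's nonlinear analytical model): delay regressed on features of
# load capacitance, input slew, and supply voltage.
def features(cap, slew, vdd):
    return np.stack([np.ones_like(cap), cap, slew, cap * slew / vdd], axis=1)

true_theta = np.array([5.0, 2.0, 1.5, 0.8])   # target-technology parameters
prior_mean = np.array([4.5, 2.2, 1.3, 1.0])   # prior learned from older nodes
prior_var = 0.25                              # prior spread per parameter
noise_sd = 0.02                               # measurement noise (assumed)

# Ultra-small set of new measurements from the target technology.
cap = rng.uniform(1, 3, 12)
slew = rng.uniform(0.1, 1, 12)
vdd = np.full(12, 0.9)
X = features(cap, slew, vdd)
y = X @ true_theta + rng.normal(0, noise_sd, 12)

# MAP estimate under a Gaussian prior = regularized least squares:
#   theta* = argmin ||y - X theta||^2 / noise_sd^2 + ||theta - mu||^2 / prior_var
A = X.T @ X / noise_sd**2 + np.eye(4) / prior_var
b = X.T @ y / noise_sd**2 + prior_mean / prior_var
theta_map = np.linalg.solve(A, b)
print(np.round(theta_map, 2))
```

The prior term keeps the fit well-posed even when the new-technology measurement set is far smaller than the parameter count would normally require.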
Powellsnakes II: a fast Bayesian approach to discrete object detection in multi-frequency astronomical data sets
Powellsnakes is a Bayesian algorithm for detecting compact objects embedded
in a diffuse background, and was selected and successfully employed by the
Planck consortium in the production of its first public deliverable: the Early
Release Compact Source Catalogue (ERCSC). We present the critical foundations
and main directions of further development of PwS, which extend it in terms of
formal correctness and the optimal use of all the available information in a
consistent unified framework, where no distinction is made between point
sources (unresolved objects), SZ clusters, single or multi-channel detection.
An emphasis is placed on the necessity of a multi-frequency, multi-model
detection algorithm in order to achieve optimality.
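As a toy single-channel illustration of compact-object detection (a much-simplified stand-in for the multi-frequency Bayesian machinery of PwS), a matched filter recovers the position and amplitude of a known template buried in white noise; the Gaussian beam profile and all numbers below are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(4)

# A known compact-source template (assumed Gaussian beam profile) of
# amplitude `amp` injected into unit-variance white noise at position `pos`.
n = 256
template = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
signal = np.zeros(n)
pos, amp = 120, 5.0
signal[pos - 8: pos + 9] += amp * template
data = signal + rng.normal(0, 1, n)

# Matched filter: correlate with the template and normalize by its energy;
# the output is the maximum-likelihood amplitude at each trial position.
energy = np.sum(template ** 2)
mf = np.correlate(data, template, mode="same") / energy
peak = int(np.argmax(mf))
print(peak, round(float(mf[peak]), 1))
```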
Inferring the photometric and size evolution of galaxies from image simulations
Current constraints on models of galaxy evolution rely on morphometric
catalogs extracted from multi-band photometric surveys. However, these catalogs
are altered by selection effects that are difficult to model, that correlate in
non-trivial ways, and that can lead to contradictory predictions if not taken
into account carefully. To address this issue, we have developed a new approach
combining parametric Bayesian indirect likelihood (pBIL) techniques and
empirical modeling with realistic image simulations that reproduce a large
fraction of these selection effects. This allows us to perform a direct
comparison between observed and simulated images and to infer robust
constraints on model parameters. We use a semi-empirical forward model to
generate a distribution of mock galaxies from a set of physical parameters.
These galaxies are passed through an image simulator reproducing the
instrumental characteristics of any survey and are then extracted in the same
way as the observed data. The discrepancy between the simulated and observed
data is quantified, and minimized with a custom sampling process based on
adaptive Markov chain Monte Carlo methods. Using synthetic data matching most
of the properties of a CFHTLS Deep field, we demonstrate the robustness and
internal consistency of our approach by inferring the parameters governing the
size and luminosity functions and their evolutions for different realistic
populations of galaxies. We also compare the results of our approach with those
obtained from the classical spectral energy distribution fitting and
photometric redshift approach. Our pipeline efficiently infers the luminosity
and size distribution and evolution parameters with a very limited number of
observables (3 photometric bands). When compared to SED fitting based on the
same set of observables, our method yields results that are more accurate and
free from systematic biases. Comment: 24 pages, 12 figures, accepted for publication in A&A.
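The inference loop described above, comparing forward-simulated mocks against an observed summary inside an adaptive MCMC sampler, can be sketched with a one-parameter toy model. The log-normal size distribution, the summary statistic, and the tolerance below are illustrative assumptions, not the paper's actual image-simulation pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# "Observed" summary statistic: mean log-size of a mock catalogue drawn with
# the (unknown to the sampler) true parameter mu_true.
mu_true = 1.0
obs_stat = np.log(rng.lognormal(mu_true, 0.3, 500)).mean()

def discrepancy(mu):
    # Forward model: simulate a mock catalogue, extract the same summary,
    # and return the squared mismatch (a stand-in for the paper's full
    # image-simulation and source-extraction pipeline).
    mock = rng.lognormal(mu, 0.3, 500)
    return (np.log(mock).mean() - obs_stat) ** 2

# Metropolis sampler on exp(-discrepancy / T) with crude step adaptation.
T = 1e-3
mu, step, accepted, chain = 0.0, 0.5, 0, []
d_cur = discrepancy(mu)
for i in range(3000):
    prop = mu + rng.normal(0, step)
    d_prop = discrepancy(prop)
    if np.log(rng.random()) < (d_cur - d_prop) / T:
        mu, d_cur, accepted = prop, d_prop, accepted + 1
    chain.append(mu)
    if (i + 1) % 100 == 0:                  # steer acceptance rate
        step *= 1.1 if accepted > 30 else 0.9
        accepted = 0

estimate = float(np.mean(chain[1000:]))
print(round(estimate, 2))
```

Because the discrepancy is itself stochastic (the forward model is re-simulated at every step), the chain samples a pseudo-posterior rather than an exact likelihood, which is the essence of the indirect-likelihood approach.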
Machine-learning nonstationary noise out of gravitational-wave detectors
Signal extraction out of background noise is a common challenge in high-precision physics experiments, where the measurement output is often a continuous data stream. To improve the signal-to-noise ratio of the detection, witness sensors are often used to independently measure background noises and subtract them from the main signal. If the noise coupling is linear and stationary, optimal techniques already exist and are routinely implemented in many experiments. However, when the noise coupling is nonstationary, linear techniques often fail or are suboptimal. Inspired by the properties of the background noise in gravitational-wave detectors, this work develops a novel algorithm to efficiently characterize and remove nonstationary noise couplings, provided there exist witnesses of the noise source and of the modulation. The algorithm is described in its most general formulation, and its efficiency is demonstrated on data from the Advanced LIGO gravitational-wave observatory, where we obtained an improvement in the detector's gravitational-wave reach without introducing any bias in the source parameter estimation.
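A minimal sketch of the witness-based subtraction idea, assuming a bilinear coupling (a slow modulation m times a fast witness noise w) into the main channel; the signals, coupling gain, and frequencies are invented for the illustration. Regressing on the product m*w removes the nonstationary noise where a fit on w alone would be suboptimal.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
t = np.linspace(0, 10, n)

h = 0.1 * np.sin(2 * np.pi * 3 * t)          # target signal in the main channel
w = rng.normal(0, 1, n)                      # witness of the fast noise source
m = 1 + 0.5 * np.sin(2 * np.pi * 0.2 * t)    # witness of the slow modulation
d = h + 0.8 * m * w                          # nonstationary coupling, gain 0.8

# Stationary linear subtraction would fit d on w alone and leave the
# modulated part behind; regressing on the bilinear term m*w captures it.
reg = m * w
c = np.dot(d, reg) / np.dot(reg, reg)        # least-squares coupling estimate
cleaned = d - c * reg

print(round(float(c), 3))
```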
‘Powellsnakes’, a fast Bayesian approach to discrete object detection in multi-frequency astronomical data sets
In this work we introduce a fast Bayesian algorithm designed for detecting compact objects immersed in a diffuse background.
A general methodology is presented in terms of formal correctness and optimal use of all the available information in a consistent unified framework, where no distinction is made between point sources (unresolved objects), SZ clusters, single or multi-channel detection. An emphasis is placed on the necessity of a multi-frequency, multi-model detection algorithm in order to achieve optimality.
We have chosen to use the Bayes/Laplace probability theory as it grants a fully consistent extension of formal deductive logic to a more general inferential system with optimal inclusion of all ancillary information [Jaynes, 2004].
Nonetheless, probability theory only informs us about the plausibility, a ‘degree of belief’, of a proposition given the data, the model that describes it, and all ancillary (prior) information. However, detection or classification is mostly about making educated choices, and a wrong decision always carries a cost/loss. Only by resorting to ‘Decision Theory’, supported by probability theory, can one make the best decisions in terms of maximum yield at minimal cost.
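The decision-theoretic step can be made concrete with a two-class example: given a posterior probability that a candidate is a real source and an illustrative loss matrix (the costs below are invented, not PwS's actual losses), the optimal decision minimizes the posterior expected loss.

```python
import numpy as np

# Posterior probability that a candidate is a real source vs background,
# and an illustrative loss matrix (rows: decisions, columns: truths).
posterior = np.array([0.7, 0.3])   # P(source), P(background)
loss = np.array([
    [0.0, 1.0],                    # "accept": false-alarm cost 1
    [5.0, 0.0],                    # "reject": missed-detection cost 5
])

expected_loss = loss @ posterior   # expected loss of each decision
decision = int(np.argmin(expected_loss))
print(decision)                    # 0 = accept, 1 = reject
```

Note how the asymmetric loss shifts the acceptance threshold away from the naive posterior-probability cut of 0.5.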
Despite the rigorous and formal approach employed, practical efficiency and applicability have always been kept as primary design goals. We have attempted to select and employ the relevant tools to explore a likelihood form and its manifold symmetries to achieve the very high computational performance required not only by our ‘decision machine’ but mostly to tackle large realistic contemporary cosmological data sets.
As an illustration, we successfully applied the methodology to ESA’s (European Space Agency) Planck satellite data [Planck Collaboration et al., 2011d]. This data set is large, complex and typical of the contemporary precision observational cosmology state-of-the-art.
Two catalogue products have already been released: (i) a point sources catalogue [Planck Collaboration et al., 2011e]; (ii) a catalogue of galaxy clusters [Planck Collaboration et al., 2011f]. Many other contributions, in science products and as an estimation device, have recently been issued [Planck et al., 2012; Planck Collaboration et al., 2011g,i, 2012a,b,c]. This new method is called ‘PowellSnakes’ (PwS).
Approaches for Analyzing Multivariate Mixed Endpoints With High-Dimensional Covariates
Multiple Chemodynamic Stellar Populations of the Ursa Minor Dwarf Spheroidal Galaxy
We present a Bayesian method to identify multiple (chemodynamic) stellar
populations in dwarf spheroidal galaxies (dSphs) using velocity, metallicity,
and positional stellar data without the assumption of spherical symmetry. We
apply this method to a new Keck/DEIMOS spectroscopic survey of the Ursa Minor
(UMi) dSph. We identify 892 likely members, making this the largest UMi sample
with line-of-sight velocity and metallicity measurements. Our Bayesian method
detects two distinct chemodynamic populations with high significance. The metal-rich population is kinematically colder, with a smaller radial velocity dispersion, and more centrally concentrated than the metal-poor, kinematically hotter population. Furthermore, we apply the same analysis to an independent MMT/Hectochelle data set and confirm the existence of two chemodynamic populations in UMi. In both data sets, the metal-rich population is significantly flattened and the metal-poor population is closer to spherical. Despite the presence of two populations, we are unable to robustly estimate the slope of the dynamical mass profile. We found hints of prolate rotation in the MMT data set, but further observations are required to verify this. The flattened metal-rich population invalidates assumptions built into simple dynamical mass estimators, so we computed new astrophysical dark matter annihilation (J) and decay profiles based on the rounder, hotter metal-poor population for the Keck data set. Our results paint a more complex picture of the evolution of Ursa Minor than previously discussed. Comment: 20 pages, 11 figures, data included. Comments welcome. Accepted to MNRAS.
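The two-population decomposition described above can be illustrated with a small EM fit of a two-component Gaussian mixture to synthetic velocity and metallicity data. The population parameters below are invented for the sketch and are not the UMi measurements; the paper's actual model also uses positional data and does not assume spherical symmetry.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic two-population sample mimicking the abstract's structure:
# metal-rich stars are kinematically colder than metal-poor stars.
n1, n2 = 500, 400
vel = np.concatenate([rng.normal(0, 5, n1), rng.normal(0, 12, n2)])
feh = np.concatenate([rng.normal(-2.0, 0.1, n1), rng.normal(-2.4, 0.1, n2)])
data = np.stack([vel, feh], axis=1)

def gauss(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

# EM for a two-component mixture with diagonal covariances.
w = np.array([0.5, 0.5])
means = np.array([[0.0, -1.9], [0.0, -2.5]])
var = np.array([[16.0, 0.04], [64.0, 0.04]])
for _ in range(200):
    # E-step: membership probability of each star in each population.
    like = np.stack([
        w[k] * gauss(data[:, 0], means[k, 0], var[k, 0])
             * gauss(data[:, 1], means[k, 1], var[k, 1])
        for k in range(2)
    ], axis=1)
    resp = like / like.sum(axis=1, keepdims=True)
    # M-step: update weights, means, and variances.
    nk = resp.sum(axis=0)
    w = nk / len(data)
    for k in range(2):
        means[k] = (resp[:, k, None] * data).sum(axis=0) / nk[k]
        var[k] = (resp[:, k, None] * (data - means[k]) ** 2).sum(axis=0) / nk[k]

sigmas = np.sqrt(var[:, 0])          # velocity dispersion of each component
print(np.round(sigmas, 1), np.round(means[:, 1], 2))
```

The membership probabilities (resp) are what a full chemodynamic analysis would propagate into the downstream dynamical modelling.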