Ultrafast effective multi-level atom method for primordial hydrogen recombination
Cosmological hydrogen recombination has recently been the subject of renewed
attention because of its importance for predicting the power spectrum of cosmic
microwave background anisotropies. It has become clear that it is necessary to
account for a large number n >~ 100 of energy shells of the hydrogen atom,
separately following the angular momentum substates in order to obtain
sufficiently accurate recombination histories. However, the multi-level atom
codes that follow the populations of all these levels are computationally
expensive, limiting recent analyses to only a few points in parameter space. In
this paper, we present a new method for solving the multi-level atom
recombination problem, which splits the problem into a computationally
expensive atomic physics component that is independent of the cosmology, and an
ultrafast cosmological evolution component. The atomic physics component
follows the network of bound-bound and bound-free transitions among excited
states and computes the resulting effective transition rates for the small set
of "interface" states radiatively connected to the ground state. The
cosmological evolution component only follows the populations of the interface
states. By pre-tabulating the effective rates, we can reduce the recurring cost
of multi-level atom calculations by more than 5 orders of magnitude. The
resulting code is fast enough for inclusion in Markov Chain Monte Carlo
parameter estimation algorithms. It does not yet include the radiative transfer
or high-n two-photon processes considered in some recent papers. Further work
on analytic treatments for these effects will be required in order to produce a
recombination code usable for Planck data analysis.
Comment: Version accepted by Phys. Rev. D. Proof of equivalence of effective and standard MLA methods moved to the main text. Some rewording.
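The split described in this abstract can be sketched in miniature: an expensive, cosmology-independent computation is tabulated once on a grid, and the recurring calculation only interpolates into the table. The rate function, grid, and variable names below are illustrative placeholders under that assumption, not the paper's actual atomic physics.

```python
import numpy as np

# Expensive, cosmology-independent "atomic physics" step (placeholder
# standing in for summing a large network of bound-bound and bound-free
# transitions among excited states).
def effective_rate(tm):
    return np.exp(-1.0 / tm)

# Pre-tabulate once on a grid of (here, one) input variable...
tm_grid = np.linspace(0.1, 2.0, 200)
table = effective_rate(tm_grid)

# ...so the recurring "cosmological evolution" step only interpolates,
# which is far cheaper than re-deriving the rates each time.
def fast_rate(tm):
    return np.interp(tm, tm_grid, table)
```

The recurring cost then scales with the table lookup, not with the size of the underlying transition network, which is the source of the speed-up the abstract describes.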
A reverse predictive model towards design automation of microfluidic droplet generators
This work has been presented in the 10th IWBDA workshop. Droplet-based microfluidic devices can reduce reaction volumes 10^9 times or more compared to test tubes, thanks to the encapsulation of reactions in micro-scale droplets [4]. This volume reduction, alongside higher accuracy, higher sensitivity, and faster reaction times, has made droplet microfluidics a superior platform, particularly in biology, biomedical science, and chemical engineering. However, a high barrier of entry prevents most life science laboratories from exploiting the advantages of microfluidics. There are two main obstacles to the widespread adoption of microfluidics: high fabrication costs and a lack of design automation tools. Recently, low-cost fabrication methods have reduced the cost of fabrication significantly [7]. Still, even with a low-cost fabrication method, the lack of automation tools leaves life science research groups reliant on a microfluidic expert to develop any new microfluidic device [3, 5]. In this work, we report a framework to develop reverse predictive models that can accurately automate the design process of microfluidic droplet generators. This model takes prescribed performance metrics of droplet generators as the input and provides the geometry of the microfluidic device and the fluid and flow settings that result in the desired performance. We hope this automation tool makes droplet-based microfluidics more accessible by reducing the time, cost, and knowledge needed to develop a microfluidic droplet generator that meets certain performance requirements.
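The reverse-model idea, performance metrics in, design parameters out, can be illustrated with a toy example. The forward relation, parameter names, and lookup-based inversion below are all made up for illustration; they are not the paper's physics or its actual predictive framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward relation: droplet diameter as a function of
# orifice width w and flow-rate ratio q (illustrative only).
def droplet_diameter(w, q):
    return w * (1.0 + 0.5 * q)

# Sample the design space to build a forward dataset...
w = rng.uniform(10, 100, 5000)   # orifice width (um)
q = rng.uniform(0.1, 2.0, 5000)  # flow-rate ratio
d = droplet_diameter(w, q)

# ...then invert it by nearest-neighbour lookup: given a target
# diameter, return the sampled design whose performance is closest.
def reverse_design(target_d):
    i = np.argmin(np.abs(d - target_d))
    return w[i], q[i]

w_star, q_star = reverse_design(60.0)
```

A practical reverse model would replace the lookup with a learned regressor, but the interface is the same: desired performance in, geometry and flow settings out.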
High-dimensional Sparse Inverse Covariance Estimation using Greedy Methods
In this paper we consider the task of estimating the non-zero pattern of the
sparse inverse covariance matrix of a zero-mean Gaussian random vector from a
set of iid samples. Note that this is also equivalent to recovering the
underlying graph structure of a sparse Gaussian Markov Random Field (GMRF). We
present two novel greedy approaches to solving this problem. The first
estimates the non-zero covariates of the overall inverse covariance matrix
using a series of global forward and backward greedy steps. The second
estimates the neighborhood of each node in the graph separately, again using
greedy forward and backward steps, and combines the intermediate neighborhoods
to form an overall estimate. The principal contribution of this paper is a
rigorous analysis of the sparsistency, or consistency in recovering the
sparsity pattern of the inverse covariance matrix. Surprisingly, we show that
both the local and global greedy methods learn the full structure of the model
with high probability given just O(d log p) samples, which is a significant improvement over the state-of-the-art ℓ1-regularized Gaussian MLE (Graphical Lasso), which requires O(d^2 log p) samples. Moreover, the restricted eigenvalue and smoothness conditions imposed by our greedy methods are much weaker than the strong irrepresentable conditions required by the ℓ1-regularization based methods. We corroborate our results with extensive simulations and examples, comparing our local and global greedy methods to the ℓ1-regularized Gaussian MLE, as well as the Neighborhood Greedy method to nodewise ℓ1-regularized linear regression (Neighborhood Lasso).
Comment: Accepted to AISTATS 2012 for oral presentation.
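The neighborhood approach can be sketched for a single node: regress that node on the others, greedily adding the variable that most reduces the residual. The synthetic data, stopping threshold, and omission of the backward pruning steps below are simplifications for illustration, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data mimicking one node of a sparse GMRF: the response y depends
# only on columns 1 and 2 of X.
n, p = 500, 6
X = rng.normal(size=(n, p))
y = 0.8 * X[:, 1] - 0.6 * X[:, 2] + 0.1 * rng.normal(size=n)

def resid_norm(y, X, support):
    # Residual norm after least-squares regression of y on `support`.
    if not support:
        return float(np.linalg.norm(y))
    A = X[:, sorted(support)]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.linalg.norm(y - A @ coef))

# Forward steps: add the variable with the largest residual decrease,
# stopping when no addition improves the fit by a fixed fraction.
# (Backward steps, which prune nearly-redundant variables, are omitted
# here for brevity.)
support = set()
tol = 0.05 * float(np.linalg.norm(y))
while True:
    base = resid_norm(y, X, support)
    gains = {j: base - resid_norm(y, X, support | {j})
             for j in set(range(p)) - support}
    j_best = max(gains, key=gains.get)
    if gains[j_best] < tol:
        break
    support.add(j_best)
# `support` now estimates the node's neighborhood.
```

Repeating this per node and combining the estimated neighborhoods yields the Neighborhood Greedy variant described above; the global variant instead runs the greedy steps directly on entries of the inverse covariance matrix.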
A Generative Model for Parts-based Object Segmentation
The Shape Boltzmann Machine (SBM) [1] has recently been introduced as a state-of-the-art model of foreground/background object shape. We extend the SBM to account for the foreground object's parts. Our new model, the Multinomial SBM (MSBM), can capture both local and global statistics of part shapes accurately. We combine the MSBM with an appearance model to form a fully generative model of images of objects. Parts-based object segmentations are obtained simply by performing probabilistic inference in the model. We apply the model to two challenging datasets which exhibit significant shape and appearance variability, and find that it obtains results that are comparable to the state-of-the-art. There has been significant focus in computer vision on object recognition and detection, e.g. [2], but a strong desire remains to obtain richer descriptions of objects than just their bounding boxes. One such description is a parts-based object segmentation, in which an image is partitioned into multiple sets of pixels, each belonging to either a part of the object of interest or its background. The significance of parts in computer vision has been recognized since the earliest days of the field.
Metals at the surface of last scatter
Standard big-bang nucleosynthesis (BBN) predicts only a trace abundance of lithium and no heavier elements, but some alternatives predict a nonzero primordial metallicity. Here we explore whether CMB measurements may set useful constraints to the primordial metallicity and/or whether the standard CMB calculations are robust, within the tolerance of forthcoming CMB maps, to the possibility of primordial metals. Metals would affect the recombination history (and thus CMB power spectra) in three ways: (1) Lyα photons can be removed (and recombination thus accelerated) by photoionizing metals; (2) The Bowen resonance-fluorescence mechanism may degrade Lyβ photons and thus enhance the Lyβ escape probability and speed up recombination; (3) Metals could affect the low-redshift tail of the CMB visibility function by providing additional free electrons. The last two of these provide the strongest CMB signal. However, the effects are detectable by the Planck satellite only if the primordial metal abundance is at least a few hundredths of solar for (2) and a few tenths of solar for (3). We thus conclude that Planck will not be able to improve upon current constraints to primordial metallicity, at the level of a thousandth of solar, from the Lyman-α forest and ultra-metal-poor halo stars, and that the CMB power-spectrum predictions for Planck suffer no uncertainty arising from the possibility that there may be primordial metals.
Divina Frau-Meigs, Media Matters in the Cultural Contradictions of the ‘Information Society’ – Towards a Human Rights-Based Governance
Divina Frau-Meigs' Media Matters in the Cultural Contradictions of the 'Information Society' – Towards a Human Rights-Based Governance is one of a number of recent monographs to grapple with the changing nature of communication regulation, policy and legislation at the national, regional and supranational levels. In doing so, Frau-Meigs does not just comment on emerging regimes of global communication governance, but rather attempts to reinsert a human element into a discourse that has become…
Virginias Connected Future: A guide for funders and philanthropists to address digital divides in the Commonwealth
In the last three years, Virginia has made significant strides to curtail the many facets of the digital divide that exist throughout the Commonwealth. Examples include a $65 billion commitment to broadband deployment, access, and equity, the largest public investment in telecommunications in the nation's history. Virginia funders can help the Commonwealth prepare for an influx of capital funds and lay the groundwork for crucial connectivity work in the next five years. As part of the Virginia Funders Network's (VFN) efforts to support the Commonwealth's commitment to achieve universal connectivity by 2024, this memo serves three functions. First, it provides funders with a high-level, plain-language overview of broadband developments in the Commonwealth of Virginia, with a focus on the pandemic years of 2020-2022. Second, it highlights the efforts of a few Virginia funders who have supported broadband deployment. Third, it serves as an invitation and welcome to funders of all shapes and sizes who are considering investments in broadband or who are just starting to think about the critical role broadband plays in areas such as education, health care, economic development, workforce, and more.