A New Estimate of the Hubble Time with Improved Modeling of Gravitational Lenses
This paper examines free-form modeling of gravitational lenses using Bayesian
ensembles of pixelated mass maps. The priors and algorithms from previous work
are clarified and significant technical improvements are made. Lens
reconstruction and Hubble Time recovery are tested using mock data from simple
analytic models and recent galaxy-formation simulations. Finally, using
published data, the Hubble Time is inferred through the simultaneous
reconstruction of eleven time-delay lenses. The result is
H_0^{-1}=13.7^{+1.8}_{-1.0} Gyr.
Comment: 24 pages, 9 figures. Accepted to Ap
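The quoted Hubble time of 13.7 Gyr corresponds to roughly H_0 ≈ 71 km/s/Mpc. A minimal unit-conversion sketch (the unit constants are standard values; the conversion itself is not part of the paper):

```python
# Convert a Hubble time in Gyr to a Hubble constant in km/s/Mpc.
GYR_IN_S = 3.1557e16    # seconds per gigayear (Julian year convention)
MPC_IN_KM = 3.0857e19   # kilometres per megaparsec

def hubble_constant(t_hubble_gyr):
    """Return H0 in km/s/Mpc given the Hubble time H0^{-1} in Gyr."""
    h0_per_second = 1.0 / (t_hubble_gyr * GYR_IN_S)   # H0 in 1/s
    return h0_per_second * MPC_IN_KM                  # km/s/Mpc

h0 = hubble_constant(13.7)   # central value from the abstract
```

With the quoted errors, 13.7^{+1.8}_{-1.0} Gyr maps to roughly 63-77 km/s/Mpc.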
Updated Constraints on Large Field Hybrid Inflation
We revisit the status of hybrid inflation in the light of Planck and recent
BICEP2 results, taking care of possible transient violations of the slow-roll
conditions as the field passes from the large field to the vacuum dominated
phase. The usual regime where observable scales exit the Hubble radius in the
vacuum dominated phase predicts a blue scalar spectrum, which is ruled out. But
whereas assuming slow-roll one expects this regime to be generic, by solving
the exact dynamics we identify the parameter space for which the small field
phase is naturally avoided due to slow-roll violations at the end of the large
field phase. When the number of e-folds generated at small field is negligible,
the model predictions are degenerate with those of a quadratic potential.
There also exists a transitory case for which the small field phase is
sufficiently long to significantly affect the observable predictions.
Interestingly, in this case the spectral index and the tensor-to-scalar ratio
agree, respectively, with the best fits of Planck and BICEP2. This results in a
\Delta \chi^2 \simeq 5.0 in favor of hybrid inflation for Planck+BICEP2 (\Delta
\chi^2 \simeq 0.9 for Planck only). The last considered regime is when the
critical point at which inflation ends is located in the large field phase. It
is constrained to be lower than about ten times the reduced Planck mass. The
analysis has been conducted using a Bayesian Markov Chain Monte Carlo
method, in a reheating-consistent way, and we present the posterior probability
distributions for all the model parameters.
Comment: 13 pages, 9 figures, comments welcome
Constraining Inflation
Slow roll reconstruction is derived from the Hamilton-Jacobi formulation of
inflationary dynamics. It automatically includes information from sub-leading
terms in slow roll, and facilitates the inclusion of priors based on the
duration of inflation. We show that at low inflationary scales the
Hamilton-Jacobi equations simplify considerably. We provide a new
classification scheme for inflationary models, based solely on the number of
parameters needed to specify the potential, and provide forecasts for likely
bounds on the slow roll parameters from future datasets. A minimal running of
the spectral index, induced solely by the first two slow roll parameters
(\epsilon and \eta), appears to be effectively undetectable by realistic Cosmic
Microwave Background experiments. However, we show that the ability to detect
this signal increases with the lever arm in comoving wavenumber, and we
conjecture that high redshift 21 cm data may allow tests of second order
consistency conditions on inflation. Finally, we point out that the second
order corrections to the spectral index are correlated with the inflationary
scale, and thus the amplitude of the CMB B-mode.
Comment: 32 pages. v
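At first order in slow roll, the two parameters \epsilon and \eta map onto the observables via n_s - 1 ≈ 2\eta - 4\epsilon and r ≈ 16\epsilon. A minimal sketch of that map (these are the common first-order Hubble slow-roll expressions; conventions vary, so treat the signs and factors as a textbook choice rather than this paper's exact definitions):

```python
def spectral_index(eps, eta):
    """First-order Hubble slow-roll result: n_s - 1 ~ 2*eta - 4*eps."""
    return 1.0 + 2.0 * eta - 4.0 * eps

def tensor_to_scalar(eps):
    """First-order result: r ~ 16*eps."""
    return 16.0 * eps

# Example: eps = 0.01, eta = 0.005 gives a slightly red spectrum.
ns = spectral_index(0.01, 0.005)   # 0.97
r = tensor_to_scalar(0.01)         # 0.16
```

The running discussed in the abstract enters only at second order in these parameters, which is why it is so small when induced by \epsilon and \eta alone.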
Linear response strength functions with iterative Arnoldi diagonalization
We report on an implementation of a new method to calculate RPA strength
functions using an iterative non-Hermitian Arnoldi diagonalization method,
which does not explicitly calculate and store the RPA matrix. We discuss the
treatment of spurious modes, numerical stability, and how the method scales as
the model space is enlarged. We perform particle-hole RPA benchmark
calculations for the doubly magic nucleus 132Sn and compare the resulting
electromagnetic strength functions against those obtained within the standard
RPA.
Comment: 9 RevTeX pages, 11 figures, submitted to Physical Review
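The matrix-free idea, building a Krylov basis from matrix-vector products so the large matrix is never stored, can be sketched with a generic Arnoldi iteration. This is a minimal illustration on a random symmetric test matrix, not the authors' RPA implementation:

```python
import numpy as np

def arnoldi(matvec, v0, m):
    """Matrix-free Arnoldi: build an orthonormal Krylov basis V and the
    (m+1) x m upper Hessenberg matrix H satisfying A @ V[:, :m] = V @ H.
    Only the matrix-vector product `matvec` is needed; A is never stored."""
    n = v0.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = matvec(V[:, j])
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # lucky breakdown: invariant subspace
            break
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

# Ritz values of the small Hessenberg block approximate extreme eigenvalues.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
A = A + A.T                                 # symmetric test matrix
V, H = arnoldi(lambda v: A @ v, rng.standard_normal(200), 40)
ritz = np.linalg.eigvals(H[:40, :40])
```

For a response calculation, `matvec` would apply the RPA matrix to a vector on the fly, which is what makes very large model spaces tractable.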
Index to 1981 NASA Tech Briefs, volume 6, numbers 1-4
Short announcements of new technology derived from the R&D activities of NASA are presented. These briefs emphasize information considered likely to be transferable across industrial, regional, or disciplinary lines and are issued to encourage commercial application. This index for 1981 Tech Briefs contains abstracts and four indexes: subject, personal author, originating center, and Tech Brief Number. The following areas are covered: electronic components and circuits, electronic systems, physical sciences, materials, life sciences, mechanics, machinery, fabrication technology, and mathematics and information sciences.
Data Structures & Algorithm Analysis in C++
This is the textbook for CSIS 215 at Liberty University.
Design of large scale applications of secure multiparty computation : secure linear programming
Secure multiparty computation is a basic concept of growing interest in modern cryptography. It allows a set of mutually distrusting parties to perform a computation on their private information in such a way that as little as possible is revealed about each private input. The early results of multiparty computation have only theoretical significance since they are not able to solve computationally complex functions in a reasonable amount of time. Nowadays, efficiency of secure multiparty computation is an important topic of cryptographic research. As a case study we apply multiparty computation to solve the problem of secure linear programming. The results enable, for example in the context of the EU-FP7 project SecureSCM, collaborative supply chain management. Collaborative supply chain management is about the optimization of the supply and demand configuration of a supply chain. In order to optimize the total benefit of the entire chain, parties should collaborate by pooling their sensitive data. With the focus on efficiency we design protocols that securely solve any linear program using the simplex algorithm. The simplex algorithm is well studied and there are many variants of the simplex algorithm providing a simple and efficient solution to solving linear programs in practice. However, the cryptographic layer on top of any variant of the simplex algorithm imposes restrictions and new complexity measures. For example, hiding the number of iterations of the simplex algorithm has the consequence that the secure implementations have a worst case number of iterations. Then, since the simplex algorithm has exponentially many iterations in the worst case, the secure implementations have exponentially many iterations in all cases.
To give a basis for understanding the restrictions, we review the basic theory behind the simplex algorithm and we provide a set of cryptographic building blocks used to implement secure protocols evaluating basic variants of the simplex algorithm. We show how to balance between privacy and efficiency; some protocols reveal data about the internal state of the simplex algorithm, such as the number of iterations, in order to improve the expected running times. For the sake of simplicity and efficiency, the protocols are based on Shamir's secret sharing scheme. We combine and use the results from the literature on secure random number generation, secure circuit evaluation, secure comparison, and secret indexing to construct efficient building blocks for secure simplex. The solutions for secure linear programming in this thesis can be split into two categories. On the one hand, some protocols evaluate the classical variants of the simplex algorithm in which numbers are truncated, while the other protocols evaluate the variants of the simplex algorithms in which truncation is avoided. On the other hand, the protocols can be separated by the size of the tableaus. Theoretically there is no clear winner that has both the best security properties and the best performance.
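Since the protocols are built on Shamir's secret sharing scheme, a minimal sketch of sharing and reconstruction over a prime field may help. The prime, threshold, and function names here are illustrative choices, not taken from the thesis:

```python
import random

# Minimal Shamir secret sharing over a prime field (illustrative parameters).
P = 2**61 - 1   # Mersenne prime used as the field modulus

def share(secret, n, t, rng=random):
    """Split `secret` into n shares, any t of which suffice to reconstruct:
    evaluate a random degree-(t-1) polynomial with constant term `secret`."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Recover the secret by Lagrange interpolation at x = 0."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # Fermat inversion: den^(P-2) mod P is the modular inverse of den.
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

random.seed(0)
shares = share(271828, n=5, t=3)
recovered = reconstruct(shares[:3])   # any 3 of the 5 shares suffice
```

Building blocks such as secure comparison and secret indexing then operate on such shares without ever reconstructing the underlying values.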
Entropic lattice Boltzmann methods
We present a general methodology for constructing lattice Boltzmann models of
hydrodynamics with certain desired features of statistical physics and kinetic
theory. We show how a technique from linear programming theory, known as
Fourier-Motzkin elimination, provides an important tool for visualizing the
state space of lattice Boltzmann algorithms that conserve a given set of
moments of the distribution function. We show how such models can be endowed
with a Lyapunov functional, analogous to Boltzmann's H, resulting in
unconditional numerical stability. Using the Chapman-Enskog analysis and
numerical simulation, we demonstrate that such entropically stabilized lattice
Boltzmann algorithms, while fully explicit and perfectly conservative, may
achieve remarkably low values for transport coefficients, such as viscosity.
Indeed, the lowest such attainable values are limited only by considerations of
accuracy, rather than stability. The method thus holds promise for
high-Reynolds-number simulations of the Navier-Stokes equations.
Comment: 54 pages, 16 figures. Proc. R. Soc. London A (in press)
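The H-like Lyapunov functional can be evaluated on any lattice Boltzmann state. The sketch below runs a plain D1Q3 BGK model (not the entropically stabilized scheme of the paper) and computes a discrete H-functional; all parameters are chosen for illustration:

```python
import numpy as np

# Plain D1Q3 lattice BGK model with a discrete H-functional (illustrative;
# not the entropically stabilized scheme described in the paper).
c = np.array([-1, 0, 1])          # lattice velocities
w = np.array([1/6, 2/3, 1/6])     # lattice weights, c_s^2 = 1/3
tau, N = 1.0, 64                  # tau = 1 relaxes straight to equilibrium

def equilibrium(rho, u):
    """Second-order polynomial equilibrium for D1Q3."""
    cu = np.outer(c, u)
    return w[:, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*u**2)

def H(f):
    """Discrete Boltzmann-like functional H = sum_i f_i ln(f_i / w_i)."""
    return np.sum(f * np.log(f / w[:, None]))

rho0 = 1.0 + 0.1 * np.exp(-0.05 * (np.arange(N) - N / 2) ** 2)
f = equilibrium(rho0, np.zeros(N))

for _ in range(100):
    rho = f.sum(axis=0)                    # conserved density
    u = (c @ f) / rho                      # conserved velocity
    f += (equilibrium(rho, u) - f) / tau   # BGK collision
    for i in range(3):                     # streaming (periodic boundaries)
        f[i] = np.roll(f[i], c[i])

h_final = H(f)
```

In the entropic approach, the collision step is instead adjusted each time step so that H never increases, which is what delivers the unconditional stability claimed in the abstract.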