
    Counting Black Holes: The Cosmic Stellar Remnant Population and Implications for LIGO

    We present an empirical approach for interpreting gravitational wave signals of binary black hole mergers under the assumption that the underlying black hole population is sourced by remnants of stellar evolution. Using the observed relationship between galaxy mass and stellar metallicity, we predict the black hole count as a function of galaxy stellar mass. We show, for example, that a galaxy like the Milky Way should host millions of $\sim 30~M_\odot$ black holes and dwarf satellite galaxies like Draco should host $\sim 100$ such remnants, with weak dependence on the assumed IMF and stellar evolution model. Most low-mass black holes ($\sim 10~M_\odot$) typically reside within massive galaxies ($M_\star \simeq 10^{11}~M_\odot$) while massive black holes ($\sim 50~M_\odot$) typically reside within dwarf galaxies ($M_\star \simeq 10^9~M_\odot$) today. If roughly $1\%$ of black holes are involved in a binary black hole merger, then the reported merger rate densities from Advanced LIGO can be accommodated for a range of merger timescales, and the detection of mergers with $> 50~M_\odot$ black holes should be expected within the next decade. Identifying the host galaxy population of the mergers provides a way to constrain both the binary neutron star or black hole formation efficiencies and the merger timescale distributions; these events would be primarily localized in dwarf galaxies if the merger timescale is short compared to the age of the universe and in massive galaxies otherwise. As more mergers are detected, the prospect of identifying the host galaxy population, either directly through the detection of electromagnetic counterparts of binary neutron star mergers or indirectly through the anisotropy of the events, will become a realistic possibility. Comment: 10 pages, 8 figures. Accepted by MNRAS.
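    The counting logic in the abstract can be illustrated with a back-of-the-envelope remnant census. The sketch below assumes a Salpeter IMF and a single initial-mass cut of 25 M_sun for massive-remnant progenitors; both numbers, and the Milky Way stellar mass used, are illustrative placeholders rather than the paper's stellar-evolution and mass-metallicity models, and the calculation ignores the metallicity dependence, so it should be read only as an upper-end order-of-magnitude figure.

```python
import numpy as np
from scipy import integrate

def salpeter_imf(m, alpha=2.35):
    """Unnormalized Salpeter IMF, dN/dm ~ m^-alpha (assumed slope)."""
    return m ** (-alpha)

# Assumed initial-mass range for the stellar population, in solar masses.
m_lo, m_hi = 0.1, 100.0

# Total stellar mass formed per unit IMF normalization.
total_mass, _ = integrate.quad(lambda m: m * salpeter_imf(m), m_lo, m_hi)

# Stars above an assumed ~25 Msun initial-mass cut are treated as progenitors
# of massive (~30 Msun) black holes; the real cut depends on metallicity.
n_progenitors, _ = integrate.quad(salpeter_imf, 25.0, m_hi)

# Massive-remnant progenitors per solar mass of stars formed.
n_per_msun = n_progenitors / total_mass

# Milky Way-like galaxy: M_star ~ 6e10 Msun (illustrative value).
m_star_mw = 6e10
print(f"progenitors per Msun formed: {n_per_msun:.2e}")
print(f"Milky Way-like estimate:     {n_per_msun * m_star_mw:.2e}")
```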

    Asymptotically Optimal Quantum Circuits for d-level Systems

    As a qubit is a two-level quantum system whose state space is spanned by |0>, |1>, so a qudit is a d-level quantum system whose state space is spanned by |0>, ..., |d-1>. Quantum computation has stimulated much recent interest in algorithms that factor unitary evolutions of an n-qubit state space into component two-particle unitary evolutions. In the absence of symmetry, Shende, Markov and Bullock use Sard's theorem to prove that at least C 4^n two-qubit unitary evolutions are required, while Vartiainen, Moettoenen, and Salomaa (VMS) use the QR matrix factorization and Gray codes in an optimal-order construction involving two-particle evolutions. In this work, we note that Sard's theorem demands C d^{2n} two-qudit unitary evolutions to construct a generic (symmetry-less) n-qudit evolution. However, the VMS result applied to virtual qubits only recovers optimal order in the case that d is a power of two. We further construct a QR decomposition for d-multi-level quantum logics, proving a sharp asymptotic of Theta(d^{2n}) two-qudit gates and thus closing the complexity question for all d-level systems (d finite). Gray codes are not required, and the optimal Theta(d^{2n}) asymptotic also applies to gate libraries where two-qudit interactions are restricted by a choice of certain architectures. Comment: 18 pages, 5 figures (very detailed). MATLAB files for factoring a qudit unitary into gates are in the MATLAB directory of the arXiv source. v2: minor changes.
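    The Theta(d^{2n}) count can be made concrete with a small numerical experiment: reduce a random unitary on n qudits to a diagonal phase matrix with two-level Givens rotations, the elementary step underlying a QR-style factorization, and count the rotations. The sketch below is not the paper's construction (which compiles such rotations into two-qudit gates for specific architectures); it only verifies the generic N(N-1)/2 = Theta(d^{2n}) rotation count for N = d^n.

```python
import numpy as np

def random_unitary(N, seed=0):
    """Random unitary from the QR decomposition of a complex Gaussian matrix."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix column phases

def givens_reduce(U):
    """Reduce U to a diagonal phase matrix with two-level Givens rotations,
    zeroing each column below the diagonal; return the rotation count."""
    U = U.astype(complex).copy()
    N = U.shape[0]
    count = 0
    for col in range(N - 1):
        for row in range(N - 1, col, -1):
            a, b = U[row - 1, col], U[row, col]
            if abs(b) < 1e-12:
                continue
            r = np.hypot(abs(a), abs(b))
            G = np.eye(N, dtype=complex)
            G[row - 1, row - 1] = np.conj(a) / r
            G[row - 1, row] = np.conj(b) / r
            G[row, row - 1] = -b / r
            G[row, row] = a / r
            U = G @ U            # unitary two-level rotation on rows (row-1, row)
            count += 1
    return count

d, n = 3, 2                      # two qutrits -> N = d**n = 9
N = d ** n
U = random_unitary(N)
print(givens_reduce(U), "rotations; generic bound N(N-1)/2 =", N * (N - 1) // 2)
```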

    Dark Matter from Early Decays

    Two leading dark matter candidates from supersymmetry and other theories of physics beyond the standard model are WIMPs and weak-scale gravitinos. If the lightest stable particle is a gravitino, then a WIMP will decay into it with a natural lifetime of order a month, ~ M_{pl}^2/M_{weak}^3. We show that if the bulk of dark matter today came from decays of neutral particles with lifetimes of order a year or smaller, then it could lead to a reduction in the amount of small-scale substructure, less concentrated halos, and constant-density cores in the smallest-mass halos. Such beneficial effects may therefore be realized naturally, as discussed by Cembranos, Feng, Rajaraman, and Takayama, in the case of supersymmetry. Comment: Matches version accepted for publication in PRD. Added a paragraph to Sec. V. 9 pages, 3 figures.
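    The quoted "order a month" scale follows from dimensional analysis of tau ~ M_pl^2 / M_weak^3 once hbar is restored. The snippet below checks this, assuming the reduced Planck mass and a representative 100 GeV weak scale; both inputs are illustrative choices, and other conventions (the unreduced Planck mass, or a heavier weak scale) move the estimate toward the "order a year" end also mentioned in the abstract.

```python
# Order-of-magnitude check of tau ~ M_pl^2 / M_weak^3 in natural units,
# converted to seconds via hbar. The mass choices below are assumptions.
HBAR_GEV_S = 6.582e-25      # hbar in GeV * s
M_PL = 2.4e18               # reduced Planck mass in GeV (assumed convention)
M_WEAK = 100.0              # representative weak scale in GeV (assumed)

tau_seconds = HBAR_GEV_S * M_PL**2 / M_WEAK**3
print(f"tau ~ {tau_seconds:.1e} s ~ {tau_seconds / 86400:.0f} days")
```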

    An ion probe study of the sulphur isotopic composition of Fe-Ni sulphides in CM carbonaceous chondrites

    From the Introduction: The CM chondrites have endured variable degrees of aqueous alteration [1], which has changed their original mineralogy. A detailed study of the petrology and mineralogy of the sulphides in a suite of increasingly aqueously altered CMs, combined with sulphur isotope data measured in situ, can provide clues as to whether differences within the CM group are a result of different degrees of aqueous alteration, or whether they are the result of nebular heterogeneity.

    The Implications of Galaxy Formation Models for the TeV Observations of Current Detectors

    This paper represents a step toward constraining galaxy formation models via TeV gamma-ray observations. We use semi-analytic models of galaxy formation to predict a spectral distribution for the intergalactic infrared photon field, which in turn yields information about the absorption of TeV gamma rays from extragalactic sources. By making predictions for integral flux observations at >200 GeV for several known EGRET sources, we directly compare our models with current observational upper limits obtained by Whipple. In addition, our predictions may offer a guide to the observing programs of the current population of TeV gamma-ray observatories. Comment: 6 pages, 11 figures, to appear in the proceedings of the 6th TeV Workshop at Snowbird, Utah.
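    The basic pipeline step, folding an absorption optical depth into an intrinsic spectrum and integrating above threshold, can be sketched as follows. The power-law parameters and the toy tau(E) form below are placeholders chosen for illustration; in the paper the optical depth is derived from the semi-analytic predictions of the intergalactic infrared photon field.

```python
import numpy as np
from scipy import integrate

def intrinsic_dnde(E_tev, norm=1e-11, index=2.5):
    """Intrinsic differential spectrum dN/dE (per cm^2 s TeV), E in TeV (assumed)."""
    return norm * E_tev ** (-index)

def tau_toy(E_tev, z=0.1):
    """Toy optical depth rising with energy and source redshift (assumed form)."""
    return 2.0 * z * (E_tev / 1.0) ** 0.8

def integral_flux(E_min_tev=0.2, E_max_tev=50.0, z=0.1):
    """Observed integral flux above E_min after exp(-tau) absorption."""
    absorbed = lambda E: intrinsic_dnde(E) * np.exp(-tau_toy(E, z))
    flux, _ = integrate.quad(absorbed, E_min_tev, E_max_tev)
    return flux

print(f"F(>200 GeV) ~ {integral_flux():.2e} photons cm^-2 s^-1")
```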

    YIELD PREDICTION IN 60-FT² GRIDS

    Large, detailed yield databases incorporating GPS make it possible to predict yield on a small scale. The objective of this study was to determine how closely yield could be predicted in grids of 60-ft² units. Corn and soybean yields were averaged to the 60-ft² grid. The yields were modeled on previous yields, soil fertility, soil type, and terrain variables. Soil fertility variables were kriged from a 1-acre grid to the 60-ft² grid. Terrain data and soil type data were available at the same scale. Multiple regression models and models with spatial correlation determined from yield semivariograms differed somewhat. Previous yields and wetness were the most significant variables. Soil variables alone were not good predictors.
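    A minimal sketch of the multiple-regression step described above, using synthetic data and hypothetical column names (prev_yield, soil_p, wetness) in place of the study's actual 60-ft² grid variables:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Synthetic stand-in for the 60-ft^2 grid data set; values and coefficients
# are illustrative only.
rng = np.random.default_rng(42)
n = 500
grid = pd.DataFrame({
    "prev_yield": rng.normal(150, 20, n),   # previous-season yield (bu/ac)
    "soil_p": rng.normal(25, 8, n),         # kriged soil fertility variable
    "wetness": rng.normal(0, 1, n),         # terrain wetness index
})
# Response dominated by previous yield and wetness, mirroring the study's finding.
grid["yield"] = (0.6 * grid["prev_yield"] + 4.0 * grid["wetness"]
                 + 0.2 * grid["soil_p"] + rng.normal(0, 10, n))

features = ["prev_yield", "soil_p", "wetness"]
model = LinearRegression().fit(grid[features], grid["yield"])
print("R^2:", round(model.score(grid[features], grid["yield"]), 3))
print("coefficients:", dict(zip(features, model.coef_.round(2))))
```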

    Parallelism for Quantum Computation with Qudits

    Robust quantum computation with d-level quantum systems (qudits) poses two requirements: fast, parallel quantum gates and high-fidelity two-qudit gates. We first describe how to implement parallel single-qudit operations. It is by now well known that any single-qudit unitary can be decomposed into a sequence of Givens rotations on two-dimensional subspaces of the qudit state space. Using a coupling graph to represent physically allowed couplings between pairs of qudit states, we then show that the logical depth of the parallel gate sequence is equal to the height of an associated tree. The implementation of a given unitary can then optimize the tradeoff between gate time and resources used. These ideas are illustrated for qudits encoded in the ground hyperfine states of the atomic alkalis $^{87}$Rb and $^{133}$Cs. Second, we provide a protocol for implementing parallelized non-local two-qudit gates using the assistance of entangled qubit pairs. Because the entangled qubits can be prepared non-deterministically, this offers the possibility of high-fidelity two-qudit gates. Comment: 9 pages, 3 figures.
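    The depth argument can be illustrated with a toy coupling-graph calculation: the height of a spanning tree of the coupling graph sets how many rounds of non-overlapping Givens rotations are needed. The star and chain graphs below are illustrative connectivities for a d = 8 qudit, not the actual Rb or Cs hyperfine level structure, and the BFS spanning-tree height is used only as a stand-in for the tree in the paper's construction.

```python
from collections import deque

def tree_height(adjacency, root=0):
    """Height of the BFS spanning tree of the coupling graph from `root`."""
    seen, frontier, height = {root}, deque([(root, 0)]), 0
    while frontier:
        node, depth = frontier.popleft()
        height = max(height, depth)
        for nbr in adjacency[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return height

d = 8  # illustrative qudit dimension, levels |0> .. |7>
star = {0: list(range(1, d)), **{i: [0] for i in range(1, d)}}
chain = {i: [j for j in (i - 1, i + 1) if 0 <= j < d] for i in range(d)}

print("star-coupled qudit, tree height:", tree_height(star))    # shallow: 1
print("chain-coupled qudit, tree height:", tree_height(chain))  # deep: d - 1
```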

    System analysis approach to deriving design criteria (loads) for Space Shuttle and its payloads. Volume 1: General statement of approach

    The Space Shuttle, the most complex transportation system designed to date, illustrates the requirement for an analysis approach that considers all major disciplines simultaneously. Its unique cross coupling, high sensitivity to aerodynamic uncertainties, and high performance requirements dictated a less conservative approach than those taken in previous programs. Analyses performed for the Space Shuttle and certain payloads, the Space Telescope and Spacelab, are used as examples. These illustrate the requirements for system analysis approaches and criteria, including dynamic modeling requirements, test requirements, control requirements, and the resulting design verification approaches. A survey of the problem, potential approaches available as solutions, implications for future systems, and projected technology development areas is also presented.