
    Relativistic Cholesky-decomposed density matrix MP2

    In the present article, we introduce relativistic Cholesky-decomposed density (CDD) matrix second-order Møller-Plesset perturbation theory (MP2) energies. The working equations are formulated in terms of the usual intermediates of MP2 when employing the resolution-of-the-identity (RI) approximation for two-electron integrals. Those intermediates are obtained by replacing the occupied and virtual quaternion pseudo-density matrices of our previously proposed two-component atomic-orbital-based MP2 (J. Chem. Phys. 145, 014107 (2016)) with the corresponding pivoted quaternion Cholesky factors. Working within the Kramers-restricted formalism, we obtain a formal spin-orbit overhead of 16 and 28 for the Coulomb and exchange contributions to the 2C MP2 correlation energy, respectively, compared to a non-relativistic (NR) spin-free CDD-MP2 implementation. This compact quaternion formulation could also easily be explored in any other algorithm for computing the 2C MP2 energy. The quaternion Cholesky factors become sparse for large molecules, and with block-wise screening and a block sparse-matrix multiplication algorithm we observed an effective quadratic scaling of the total wall time for heavy-element-containing linear molecules with increasing system size. The total run time for both 1C and 2C calculations was dominated by the contraction to the exchange energy. We have also investigated a bulky Te-containing supramolecular complex. For such bulky, three-dimensionally extended molecules, the present screening scheme has a much larger prefactor and is less effective.
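
    The block-wise screening mentioned above can be illustrated with a minimal sketch: partition two matrices into blocks, precompute block norms, and skip any block product whose norm bound falls below a threshold. This is an assumed, generic NumPy illustration of the screening idea, not the authors' quaternion CDD-MP2 implementation; the function name, block size, and threshold are hypothetical.

        import numpy as np

        def blocked_screened_matmul(A, B, block=64, tau=1e-10):
            """Multiply A @ B block by block, skipping block products whose
            Frobenius-norm bound is below tau (illustrative sketch only)."""
            m, k = A.shape
            k2, n = B.shape
            assert k == k2
            C = np.zeros((m, n))

            def block_norms(M, rows, cols):
                r, c = M.shape
                return np.array([[np.linalg.norm(M[i:i+rows, j:j+cols])
                                  for j in range(0, c, cols)]
                                 for i in range(0, r, rows)])

            nA = block_norms(A, block, block)
            nB = block_norms(B, block, block)
            for bi, i in enumerate(range(0, m, block)):
                for bj, j in enumerate(range(0, n, block)):
                    acc = np.zeros((min(block, m - i), min(block, n - j)))
                    for bl, l in enumerate(range(0, k, block)):
                        # ||A_il B_lj|| <= ||A_il|| * ||B_lj||: skip negligible products
                        if nA[bi, bl] * nB[bl, bj] < tau:
                            continue
                        acc += A[i:i+block, l:l+block] @ B[l:l+block, j:j+block]
                    C[i:i+block, j:j+block] = acc
            return C

    For block-sparse factors most of the norm products fall below the threshold, which is how screening of this kind can turn the nominal cubic cost into the effective quadratic scaling reported for the linear molecules.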

    The effects of size on local banks´ funding costs

    Motivated by the recent discussion of the declining importance of deposits as banks' major source of funding, we investigate which factors determine funding costs at local banks. Using a panel data set of more than 800 German local savings and cooperative banks for the period from 1998 to 2004, we show that funding costs are driven not only by the relative share of comparatively cheap deposits in banks' liabilities but, among other factors, especially by the size of the bank. In our empirical analysis we find strong and robust evidence that, ceteris paribus, smaller banks exhibit lower funding costs than larger banks, suggesting that small banks are able to attract deposits more cheaply than their larger counterparts. We argue that this is the case because smaller banks interact more personally with customers, operate in customers' geographic proximity, and have longer and stronger relationships than larger banks and, hence, are able to charge higher prices for their services. Our finding of a strong influence of bank size on funding costs is also of great interest in an international context, as mergers among small local banks - the key driver of bank growth - are a recent phenomenon, not only in European banking, that is expected to continue in the future. At the same time, net interest income remains by far the most important source of revenue for most local banks, accounting for approximately 70% of total operating revenues in the case of German local banks. The influence of size on funding costs is of strong economic relevance: our results suggest that an increase in size by 50%, for example from EUR 500 million in total assets to EUR 750 million (exemplary for M&A transactions among local banks), increases funding costs, ceteris paribus, by approximately 18 basis points, which corresponds to approximately 7% of banks' average net interest margin.
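
    For orientation, the two quoted magnitudes can be cross-checked with a one-line calculation (not part of the study): if 18 basis points correspond to about 7% of the average net interest margin, the implied margin is roughly 18 bp / 0.07 ≈ 257 bp, i.e. about 2.6%.

        # Back-of-the-envelope check of the quoted magnitudes (illustrative only).
        funding_cost_increase_bp = 18   # reported effect of a 50% increase in size
        share_of_margin = 0.07          # reported share of the average net interest margin
        implied_margin_bp = funding_cost_increase_bp / share_of_margin
        print(f"implied average net interest margin ~ {implied_margin_bp:.0f} bp")  # ~257 bp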

    Setting Annual Catch Limits for U.S. Fisheries: An Expert Working Group Report

    Provides guidance on the application of annual catch limits for U.S. fisheries, based on the recommendations of a working group of national and international fisheries experts.

    Optimal static and dynamic recycling of defective binary devices

    The binary Defect Combination Problem consists in finding a fully working subset from a given ensemble of imperfect binary components. We determine the typical properties of the model using methods of statistical mechanics, in particular the region in the parameter space where there is almost surely at least one fully working subset. Dynamic recycling of a flux of imperfect binary components leads to zero wastage. Comment: 14 pages, 15 figures.
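
    The abstract does not spell out the combinatorial definition, but under one plausible toy reading - each component is a bit mask of defective positions, and a subset is fully working if at every position at least one chosen component is defect-free - a brute-force feasibility check looks like the sketch below. The encoding and the predicate are assumptions made for illustration, not the paper's model.

        from itertools import combinations

        # Toy reading (assumption, see above): a subset "fully works" when the
        # bitwise AND of its defect masks is zero, i.e. no position is defective
        # in every chosen component.
        def fully_working_subsets(defect_masks, size):
            for subset in combinations(range(len(defect_masks)), size):
                combined = ~0
                for idx in subset:
                    combined &= defect_masks[idx]
                if combined == 0:
                    yield subset

        # Four components with defects on positions 0..3 (one bit per position).
        masks = [0b0011, 0b1100, 0b0101, 0b1010]
        print(list(fully_working_subsets(masks, 2)))  # -> [(0, 1), (2, 3)]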

    Computing with and without arbitrary large numbers

    In the study of random access machines (RAMs) it has been shown that the availability of an extra input integer, having no special properties other than being sufficiently large, is enough to reduce the computational complexity of some problems. However, this has only been shown so far for specific problems. We provide a characterization of the power of such extra inputs for general problems. To do so, we first correct a classical result by Simon and Szegedy (1992) as well as one by Simon (1981). In the former we show mistakes in the proof and correct these by an entirely new construction, with no great change to the results. In the latter, the original proof direction stands with only minor modifications, but the new results are far stronger than those of Simon (1981). In both cases, the new constructions provide the theoretical tools required to characterize the power of arbitrary large numbers. Comment: 12 pages (main text) + 30 pages (appendices), 1 figure. Extended abstract. The full paper was presented at TAMC 2013. (Reference given is for the paper version, as it appears in the proceedings.)

    Massively parallel approximate Gaussian process regression

    We explore how the big-three computing paradigms -- symmetric multi-processor (SMP), graphical processing units (GPUs), and cluster computing -- can together be brought to bear on large-data Gaussian process (GP) regression problems via a careful implementation of a newly developed local approximation scheme. Our methodological contribution focuses primarily on GPU computation, as this requires the most care and also provides the largest performance boost. However, in our empirical work we study the relative merits of all three paradigms to determine how best to combine them. The paper concludes with two case studies. One is a real-data fluid-dynamics computer experiment which benefits from the local nature of our approximation; the second is a synthetic-data example designed to find the largest design for which (accurate) GP emulation can be performed on a commensurate predictive set in under an hour. Comment: 24 pages, 6 figures, 1 table.
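
    The local approximation idea can be sketched in plain NumPy as a nearest-neighbour sub-design GP: each prediction uses only a small neighbourhood of the training design, so the cubic GP cost applies to n_local points rather than the full data set. This is an assumed, simplified illustration; the paper's scheme is more sophisticated and targets SMP, GPU, and cluster back ends. Kernel, hyperparameters, and neighbourhood size below are illustrative choices.

        import numpy as np

        def local_gp_predict(X, y, Xstar, n_local=50, lengthscale=1.0, nugget=1e-6):
            """Zero-mean GP prediction at each row of Xstar using only the
            n_local nearest training points (illustrative sketch only)."""
            def kernel(A, B):
                d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
                return np.exp(-d2 / (2.0 * lengthscale ** 2))

            preds = np.empty(len(Xstar))
            for i, x in enumerate(Xstar):
                # local sub-design: the n_local nearest neighbours of x
                idx = np.argsort(((X - x) ** 2).sum(axis=1))[:n_local]
                Xl, yl = X[idx], y[idx]
                K = kernel(Xl, Xl) + nugget * np.eye(len(idx))
                k = kernel(x[None, :], Xl).ravel()
                preds[i] = k @ np.linalg.solve(K, yl)
            return preds

        # Toy usage: emulate f(x) = sin(||x||) from 2000 scattered design points.
        rng = np.random.default_rng(0)
        X = rng.uniform(-3, 3, size=(2000, 2))
        y = np.sin(np.linalg.norm(X, axis=1))
        print(local_gp_predict(X, y, rng.uniform(-3, 3, size=(5, 2))))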

    Electroweak corrections to Higgs-strahlung off W/Z bosons at the Tevatron and the LHC with HAWK

    The associated production of Higgs bosons with W or Z bosons, known as Higgs-strahlung, is an important search channel for Higgs bosons at the hadron colliders Tevatron and LHC for low Higgs-boson masses. We refine a previous calculation of the next-to-leading-order electroweak corrections (and recalculate the QCD corrections) by including the leptonic decay of the W/Z bosons, thereby keeping the fully differential information of the 2-lepton + Higgs final state. The gauge invariance of the W/Z-resonance treatment is ensured by the use of the complex-mass scheme. The electroweak corrections, which are at the level of -(5-10)% for total cross sections, further increase in size with increasing transverse momenta p_T in differential cross sections. For instance, for p_T,H >~ 200 GeV, which is the interesting range at the LHC, the electroweak corrections to WH production reach about -14% for M_H = 120 GeV. The described corrections are implemented in the HAWK Monte Carlo program, which was initially designed for the vector-boson-fusion channel, and are discussed for various distributions in the production channels pp / p \bar p -> H + l nu_l / l^- l^+ / nu_l \bar nu_l + X. Comment: 22 pages.
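
    As a reminder of how such relative corrections are usually quoted (a generic convention, not necessarily the exact combination implemented in HAWK), a correction of -14% rescales the corresponding lowest-order prediction as in the sketch below.

        % Generic (assumed) combination of relative corrections with the LO prediction:
        \frac{d\sigma}{dp_T}
          \simeq \frac{d\sigma_{\mathrm{LO}}}{dp_T}
                 \left(1 + \delta_{\mathrm{EW}} + \delta_{\mathrm{QCD}}\right),
        \qquad
        \delta_{\mathrm{EW}} \approx -0.14
        \quad \text{for } p_{T,H} \gtrsim 200~\mathrm{GeV},\; M_H = 120~\mathrm{GeV}.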

    The RAM equivalent of P vs. RP

    One of the fundamental open questions in computational complexity is whether the class of problems solvable by use of stochasticity under the Random Polynomial time (RP) model is larger than the class of those solvable in deterministic polynomial time (P). However, this question is only open for Turing Machines, not for Random Access Machines (RAMs). Simon (1981) was able to show that for a sufficiently equipped Random Access Machine, the ability to switch states nondeterministically does not entail any computational advantage. However, in the same paper, Simon describes a different (and arguably more natural) scenario for stochasticity under the RAM model. According to Simon's proposal, instead of receiving a new random bit at each execution step, the RAM program is able to execute the pseudofunction RAND(y), which returns a uniformly distributed random integer in the range [0,y). Whether the ability to allot a random integer in this fashion is more powerful than the ability to allot a random bit remained an open question for the last 30 years. In this paper, we close Simon's open problem by fully characterising the class of languages recognisable in polynomial time by each of the RAMs regarding which the question was posed. We show that for some of these, stochasticity entails no advantage, but, more interestingly, we show that for others it does. Comment: 23 pages.
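
    To make the two primitives concrete, the sketch below (not from the paper) simulates RAND(y) using only single random bits via rejection sampling: the bit model needs about log2(y) coin flips per call in expectation, whereas a RAM equipped with RAND(y) obtains a y-range random integer in a single step - one intuition for why the comparison between the two models is non-trivial. Function names are illustrative.

        import random

        def rand_bit():
            """The 'one random bit per step' primitive."""
            return random.getrandbits(1)

        def RAND(y):
            """Simon's RAND(y): a uniform integer in [0, y), simulated here
            from single random bits by rejection sampling (illustrative).
            Expected number of bits per call is O(log y); a RAM with RAND(y)
            as a primitive gets the same integer in one step."""
            if y <= 0:
                raise ValueError("y must be positive")
            nbits = max(1, (y - 1).bit_length())
            while True:
                r = 0
                for _ in range(nbits):
                    r = (r << 1) | rand_bit()
                if r < y:        # accept; rejection probability is at most 1/2
                    return r

        print([RAND(10) for _ in range(5)])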