    Entanglement monogamy and entanglement evolution in multipartite systems

    We analyze the entanglement distribution and the two-qubit residual entanglement in multipartite systems. For a composite system consisting of two cavities interacting with independent reservoirs, it is shown that the entanglement evolution is restricted by an entanglement monogamy relation derived here. Moreover, the initial cavity-cavity entanglement is found to evolve completely into genuine four-partite cavities-reservoirs entanglement during the time interval between the sudden death of the cavity-cavity entanglement and the birth of the reservoir-reservoir entanglement. In addition, we address the relationship between the genuine block-block and qubit-block forms of entanglement in this interval. © 2009 The American Physical Society.
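    For context, the canonical monogamy constraint for three qubits is the Coffman-Kundu-Wootters (CKW) inequality; the expressions below give that standard form and its residual term (the three-tangle). They are background only, not the specific multipartite relation derived in the paper.

```latex
% Coffman-Kundu-Wootters (CKW) monogamy inequality for three qubits A, B, C:
% the squared concurrence of A with the pair BC bounds the pairwise terms.
\begin{equation}
  C^{2}_{A|BC} \;\geq\; C^{2}_{AB} + C^{2}_{AC}
\end{equation}
% The residual entanglement (three-tangle) is the non-negative difference:
\begin{equation}
  \tau_{ABC} \;=\; C^{2}_{A|BC} - C^{2}_{AB} - C^{2}_{AC} \;\geq\; 0
\end{equation}
```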

    Functional limit theorems for random regular graphs

    Consider d uniformly random permutation matrices on n labels, and take the sum of these matrices along with their transposes. The total can be interpreted as the adjacency matrix of a random regular graph of degree 2d on n vertices. We consider limit theorems for various combinatorial and analytical properties of this graph (or of the matrix) as n grows to infinity, either with d kept fixed or growing slowly with n. In a suitable weak convergence framework, we prove that the (finite but growing in length) sequences of the numbers of short cycles and of cyclically non-backtracking walks converge to distributional limits. We estimate the total variation distance from the limit using Stein's method. As an application of these results we derive limits of linear functionals of the eigenvalues of the adjacency matrix. A key step in this latter derivation is an extension of the Kahn-Szemerédi argument for estimating the second largest eigenvalue for all values of d and n. (Comment: Added Remark 27. 39 pages. To appear in Probability Theory and Related Fields.)
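    As an illustrative sketch (mine, not the authors' code), the permutation model described above can be generated directly: sum d uniform permutation matrices with their transposes and read off spectral and closed-walk statistics. The sizes and walk length below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_model_adjacency(n, d, rng):
    """Adjacency matrix of the permutation model: the sum of d uniform
    n x n permutation matrices and their transposes, i.e. a random
    2d-regular multigraph on n vertices (loops and multi-edges allowed)."""
    A = np.zeros((n, n))
    for _ in range(d):
        P = np.eye(n)[rng.permutation(n)]   # uniform permutation matrix
        A += P + P.T
    return A

n, d = 1000, 3
A = permutation_model_adjacency(n, d, rng)

# Linear eigenvalue statistics: eigenvalues of the adjacency matrix.
eigs = np.linalg.eigvalsh(A)

# Closed walks of length k are tr(A^k); counts of short cycles are
# linear combinations of such traces (after removing degenerate walks).
k = 3
closed_walks_k = np.trace(np.linalg.matrix_power(A, k))
print(eigs.min(), eigs.max(), closed_walks_k)
```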

    A simulation study for comparing testing statistics in response-adaptive randomization

    Background: Response-adaptive randomization can assign more patients in a comparative clinical trial to the tentatively better treatment. However, because of the adaptation in patient allocation, the samples to be compared are no longer independent. At large sample sizes, many asymptotic properties of test statistics derived for independent-sample comparisons still apply in adaptive randomization, provided the patient allocation ratio converges asymptotically to an appropriate target. The small-sample properties of commonly used test statistics in response-adaptive randomization, however, have not been fully studied.

    Methods: Simulations are conducted systematically to characterize the statistical properties of eight test statistics in six response-adaptive randomization methods at six allocation targets, with sample sizes ranging from 20 to 200. Since adaptive randomization is usually not recommended for sample sizes below 30, the present paper focuses on the case of a sample of 30 to give general recommendations regarding test statistics for contingency tables in response-adaptive randomization at small sample sizes.

    Results: Among all asymptotic test statistics, Cook's correction to the chi-square test (T_MC) is the best at attaining the nominal size of the hypothesis test. Williams' correction to the log-likelihood-ratio test (T_ML) gives a slightly inflated type I error and higher power than T_MC, but it is more robust against imbalance in patient allocation. T_MC and T_ML are usually the two test statistics with the highest power across simulation scenarios. Focusing on T_MC and T_ML, the generalized drop-the-loser urn (GDL) and the sequential estimation-adjusted urn (SEU) respectively have the best ability to attain the correct size of the hypothesis test. Among all sequential methods that can target different allocation ratios, GDL has the lowest variation and the highest overall power at all allocation ratios. The performance of the different adaptive randomization methods and test statistics also depends on the allocation target. At the limiting allocation ratio of the drop-the-loser (DL) and randomized play-the-winner (RPW) urns, DL outperforms all other methods, including GDL. When the power of a test statistic is compared within the same randomization method but across allocation targets, the powers of the log-likelihood-ratio, log-relative-risk, log-odds-ratio, Wald-type Z, and chi-square test statistics are maximized at their corresponding power-optimal allocation ratios. Except for the optimal target for the log relative risk, the other four optimal targets can assign more patients to the worse arm in some simulation scenarios. Another allocation target, R_RSIHR, proposed by Rosenberger and Sriram (Journal of Statistical Planning and Inference, 1997), aims to minimize the number of failures at fixed power using the Wald-type Z test statistic. Among allocation ratios that always assign more patients to the better treatment, R_RSIHR usually has less variation in patient allocation, and the values of the variation are consistent across all simulation scenarios. Additionally, the patient allocation at R_RSIHR is not too extreme. Therefore, R_RSIHR provides a good balance between assigning more patients to the better treatment and maintaining the overall power.

    Conclusion: Cook's correction to the chi-square test and Williams' correction to the log-likelihood-ratio test are generally recommended for hypothesis testing in response-adaptive randomization, especially when sample sizes are small. The generalized drop-the-loser urn design is recommended for its good overall properties, as is the use of the R_RSIHR allocation target.
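    A minimal sketch (my own illustration, not the paper's simulation code) of one of the designs above, the randomized play-the-winner urn, followed by a Pearson chi-square test on the resulting 2x2 contingency table; the success probabilities, urn parameters, and replication counts are assumed for illustration.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)

def rpw_trial(n_patients, p_success, initial_balls=1.0, reward=1.0):
    """Simulate one two-arm trial under the randomized play-the-winner
    (RPW) urn: each patient is assigned by drawing a ball; a success on
    arm k adds `reward` balls of type k, a failure adds balls of the
    other type.  Returns the 2x2 table [[s0, f0], [s1, f1]]."""
    balls = np.array([initial_balls, initial_balls], dtype=float)
    table = np.zeros((2, 2), dtype=int)
    for _ in range(n_patients):
        arm = rng.choice(2, p=balls / balls.sum())
        success = rng.random() < p_success[arm]
        table[arm, 0 if success else 1] += 1
        balls[arm if success else 1 - arm] += reward
    return table

def chi_square_2x2(table):
    """Pearson chi-square statistic and p-value for a 2x2 table
    (no continuity correction)."""
    expected = (table.sum(axis=1, keepdims=True)
                * table.sum(axis=0, keepdims=True) / table.sum())
    stat = ((table - expected) ** 2 / expected).sum()
    return stat, chi2.sf(stat, df=1)

# Empirical type I error of the chi-square test at n = 30 under the null
# (equal success probabilities), the small-sample setting discussed above.
n_rep, alpha, rejections, used = 2000, 0.05, 0, 0
for _ in range(n_rep):
    t = rpw_trial(30, p_success=(0.5, 0.5))
    if t.sum(axis=0).min() > 0 and t.sum(axis=1).min() > 0:  # non-degenerate
        used += 1
        rejections += chi_square_2x2(t)[1] < alpha
print("empirical type I error:", rejections / used)
```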

    The Wasteland of Random Supergravities

    We show that in a general \cal{N} = 1 supergravity with N \gg 1 scalar fields, an exponentially small fraction of the de Sitter critical points are metastable vacua. Taking the superpotential and KĂ€hler potential to be random functions, we construct a random matrix model for the Hessian matrix, which is well approximated by the sum of a Wigner matrix and two Wishart matrices. We compute the eigenvalue spectrum analytically from the free convolution of the constituent spectra and find that in typical configurations a significant fraction of the eigenvalues are negative. Building on the Tracy-Widom law governing fluctuations of extreme eigenvalues, we determine the probability P of a large fluctuation in which all the eigenvalues become positive. Strong eigenvalue repulsion makes this extremely unlikely: we find P \propto exp(-c N^p), with c and p constants. For generic critical points we find p \approx 1.5, while for approximately supersymmetric critical points p \approx 1.3. Our results have significant implications for the counting of de Sitter vacua in string theory, but the number of vacua remains vast. (Comment: 39 pages, 9 figures; v2: fixed typos, added refs and clarifications.)
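    A rough numerical illustration (my own; not the paper's Hessian model, and the signs, scales, and relative weights are arbitrary assumptions) of sampling a matrix built from Wigner and Wishart blocks and estimating by Monte Carlo how rarely all of its eigenvalues come out positive.

```python
import numpy as np

rng = np.random.default_rng(2)

def wigner(n):
    """Real symmetric Wigner matrix, normalized so the bulk is O(1)."""
    g = rng.normal(size=(n, n))
    return (g + g.T) / np.sqrt(2 * n)

def wishart(n):
    """Wishart matrix A A^T / n from an n x n Gaussian matrix A."""
    a = rng.normal(size=(n, n))
    return a @ a.T / n

n, trials = 60, 500
negative_fractions = []
all_positive = 0
for _ in range(trials):
    # Toy combination of one Wigner and two Wishart terms; the minus sign
    # is an illustrative choice, not the structure used in the paper.
    h = wigner(n) + wishart(n) - wishart(n)
    eigs = np.linalg.eigvalsh(h)
    negative_fractions.append(np.mean(eigs < 0))
    all_positive += eigs.min() > 0

print("mean fraction of negative eigenvalues:", np.mean(negative_fractions))
print("fraction of draws with all eigenvalues positive:", all_positive / trials)
```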

    Stochastic Dominance Analysis of CTA Funds

    In this paper, we employ the stochastic dominance approach to rank the performance of commodity trading advisor (CTA) funds. An advantage of this approach is that it alleviates the problems that can arise when CTA returns are not normally distributed, by utilizing the entire returns distribution. We find both first-order and higher-order stochastic dominance relationships among the CTA funds and conclude that investors would be better off investing in the first-order dominant funds to maximize both their expected utility and their expected wealth. For higher-order dominant CTA funds, however, risk-averse investors can maximize their expected utility but not their expected wealth. We conclude that the stochastic dominance approach is more appropriate than traditional approaches as a filter in the CTA selection process, since it utilizes the entire return distribution when returns are non-normal and therefore admits a meaningful economic interpretation of the results.
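    As an illustrative sketch (not the authors' code), first-order stochastic dominance between two return series can be checked by comparing empirical CDFs on a common grid; the return series below are placeholders.

```python
import numpy as np

def ecdf(returns, grid):
    """Empirical CDF of a return series evaluated at the grid points."""
    return np.searchsorted(np.sort(returns), grid, side="right") / len(returns)

def first_order_dominates(a, b):
    """True if series `a` first-order stochastically dominates series `b`:
    F_a(x) <= F_b(x) for every x, with strict inequality somewhere."""
    grid = np.union1d(a, b)
    fa, fb = ecdf(a, grid), ecdf(b, grid)
    return bool(np.all(fa <= fb) and np.any(fa < fb))

# Placeholder monthly return series for two hypothetical CTA funds:
# fund_a is fund_b shifted up by 80 bp, so it dominates by construction.
rng = np.random.default_rng(3)
fund_b = rng.normal(0.004, 0.04, size=120)
fund_a = fund_b + 0.008

print("A FSD B:", first_order_dominates(fund_a, fund_b))   # True
print("B FSD A:", first_order_dominates(fund_b, fund_a))   # False
```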

    The projection score - an evaluation criterion for variable subset selection in PCA visualization

    Background: In many scientific domains it is becoming increasingly common to collect high-dimensional data sets, often with an exploratory aim, to generate new and relevant hypotheses. The exploratory perspective often makes statistically guided visualization methods, such as Principal Component Analysis (PCA), the methods of choice. However, the clarity of the obtained visualizations, and thereby the potential to use them to formulate relevant hypotheses, may be confounded by the presence of many non-informative variables. For microarray data, more easily interpretable visualizations are often obtained by filtering the variable set, for example by removing the variables with the smallest variances or by including only the variables most highly related to a specific response. The resulting visualization may depend heavily on the inclusion criterion, that is, effectively on the number of retained variables. To our knowledge, there is no objective method for determining the optimal inclusion criterion in the context of visualization.

    Results: We present the projection score, a straightforward, intuitively appealing measure of the informativeness of a variable subset with respect to PCA visualization. The measure can be applied universally to find suitable inclusion criteria for any type of variable filtering. We apply it to find optimal variable subsets for different filtering methods in both microarray and synthetic data sets. We also note that the projection score can be applied in more general contexts, to compare the informativeness of any variable subsets with respect to visualization by PCA.

    Conclusions: The projection score provides an easily interpretable and universally applicable measure of the informativeness of a variable subset with respect to visualization by PCA, and it can be used to systematically find the most interpretable PCA visualization in practical exploratory analysis.
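    As an illustration of the variable-filtering workflow discussed above (this is not the projection score itself, whose definition is given in the paper), the sketch below filters variables by variance and reports how much of the total variance the two leading principal components capture for each inclusion criterion; the data and thresholds are invented.

```python
import numpy as np

def pca_informativeness(X, n_components=2):
    """Fraction of total variance captured by the leading principal
    components of a centred data matrix (samples x variables)."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    var = s ** 2
    return var[:n_components].sum() / var.sum()

def variance_filter(X, n_keep):
    """Keep the n_keep variables with the largest variances."""
    idx = np.argsort(X.var(axis=0))[::-1][:n_keep]
    return X[:, idx]

# Placeholder data: 40 samples, 1000 variables, with group structure
# planted in the first 20 variables for half of the samples.
rng = np.random.default_rng(4)
X = rng.normal(size=(40, 1000))
X[:20, :20] += 3.0

for n_keep in (10, 50, 200, 1000):
    score = pca_informativeness(variance_filter(X, n_keep))
    print(f"{n_keep:5d} variables kept -> variance explained {score:.2f}")
```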

    Jet energy measurement with the ATLAS detector in proton-proton collisions at √s = 7 TeV

    The jet energy scale and its systematic uncertainty are determined for jets measured with the ATLAS detector at the LHC in proton-proton collision data at a centre-of-mass energy of √s = 7 TeV, corresponding to an integrated luminosity of 38 pb^-1. Jets are reconstructed with the anti-kt algorithm with distance parameters R = 0.4 or R = 0.6. Jet energy and angle corrections are determined from Monte Carlo simulations to calibrate jets with transverse momenta pT ≄ 20 GeV and pseudorapidities |η| < 4.5. The jet energy systematic uncertainty is estimated using the single isolated hadron response measured in situ and in test-beams, exploiting the transverse momentum balance between central and forward jets in events with dijet topologies, and studying systematic variations in Monte Carlo simulations. The jet energy uncertainty is less than 2.5% in the central calorimeter region (|η| < 0.8) for jets with 60 ≀ pT < 800 GeV, and is at most 14% for pT < 30 GeV in the most forward region 3.2 ≀ |η| < 4.5. The jet energy is validated for jet transverse momenta up to 1 TeV to the level of a few percent using several in situ techniques, by comparing with a well-known reference such as the recoiling photon pT, the sum of the transverse momenta of tracks associated to the jet, or a system of low-pT jets recoiling against a high-pT jet. More sophisticated jet calibration schemes are presented based on calorimeter cell energy density weighting or hadronic properties of jets, aiming for an improved jet energy resolution and a reduced flavour dependence of the jet response. The systematic uncertainty of the jet energy determined from a combination of in situ techniques is consistent with the one derived from single hadron response measurements over a wide kinematic range. The nominal corrections and uncertainties are derived for isolated jets in an inclusive sample of high-pT jets. Special cases, such as event topologies with close-by jets or selections of samples with an enhanced content of jets originating from light quarks, heavy quarks or gluons, are also discussed and the corresponding uncertainties are determined. © 2013 CERN for the benefit of the ATLAS collaboration.
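    A toy sketch (not ATLAS code) of the idea behind the photon-jet pT-balance technique mentioned above: the mean jet response relative to a well-measured reference defines a residual correction per pT bin. All numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy photon + jet events: the true jet pT balances the photon pT, and the
# detector response (here an assumed 5% underestimate with 10% smearing)
# is what an in-situ balance study would measure and correct for.
pt_photon = rng.uniform(60.0, 400.0, size=5000)
pt_jet = 0.95 * pt_photon * rng.normal(1.0, 0.1, size=pt_photon.size)

bins = np.array([60, 100, 160, 250, 400], dtype=float)
balance = pt_jet / pt_photon
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (pt_photon >= lo) & (pt_photon < hi)
    r = balance[sel].mean()
    print(f"{lo:4.0f}-{hi:4.0f} GeV: mean response {r:.3f}, "
          f"correction factor {1.0 / r:.3f}")
```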

    Measurement of the cross-section of high transverse momentum vector bosons reconstructed as single jets and studies of jet substructure in pp collisions at √s = 7 TeV with the ATLAS detector

    This paper presents a measurement of the cross-section for high transverse momentum W and Z bosons produced in pp collisions and decaying to all-hadronic final states. The data used in the analysis were recorded by the ATLAS detector at the CERN Large Hadron Collider at a centre-of-mass energy of √s = 7 TeV and correspond to an integrated luminosity of 4.6 fb^-1. The measurement is performed by reconstructing the boosted W or Z bosons in single jets. The reconstructed jet mass is used to identify the W and Z bosons, and a jet substructure method based on energy cluster information in the jet centre-of-mass frame is used to suppress the large multi-jet background. The cross-section for events with a hadronically decaying W or Z boson, with transverse momentum pT > 320 GeV and pseudorapidity |η| < 1.9, is measured to be σ_{W+Z} = 8.5 ± 1.7 pb and is compared to next-to-leading-order calculations. The selected events are further used to study jet grooming techniques.
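    As a schematic illustration (mine, not the analysis code), the jet mass referred to above is simply the invariant mass of the summed constituent four-momenta; the two-prong toy jet below is an invented example.

```python
import numpy as np

def four_vector(pt, eta, phi, m=0.0):
    """(E, px, py, pz) for a particle of given pT, eta, phi, and mass."""
    px, py = pt * np.cos(phi), pt * np.sin(phi)
    pz = pt * np.sinh(eta)
    e = np.sqrt(px**2 + py**2 + pz**2 + m**2)
    return np.array([e, px, py, pz])

def invariant_mass(constituents):
    """Invariant mass of the summed constituent four-vectors,
    i.e. the jet mass used to separate W/Z jets from multi-jet background."""
    p = np.sum(constituents, axis=0)
    m2 = p[0]**2 - np.dot(p[1:], p[1:])
    return np.sqrt(max(m2, 0.0))

# Toy example: two hard prongs inside one boosted jet (values invented).
prongs = [four_vector(200.0, 0.1, 0.00), four_vector(150.0, 0.2, 0.45)]
print("jet mass [GeV]:", invariant_mass(prongs))
```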

    Measurement of the inclusive and dijet cross-sections of b-jets in pp collisions at sqrt(s) = 7 TeV with the ATLAS detector

    The inclusive and dijet production cross-sections have been measured for jets containing b-hadrons (b-jets) in proton-proton collisions at a centre-of-mass energy of sqrt(s) = 7 TeV, using the ATLAS detector at the LHC. The measurements use data corresponding to an integrated luminosity of 34 pb^-1. The b-jets are identified using either a lifetime-based method, in which secondary decay vertices of b-hadrons in jets are reconstructed using information from the tracking detectors, or a muon-based method, in which the presence of a muon is used to identify semileptonic decays of b-hadrons inside jets. The inclusive b-jet cross-section is measured as a function of transverse momentum in the range 20 < pT < 400 GeV and rapidity in the range |y| < 2.1. The bbbar-dijet cross-section is measured as a function of the dijet invariant mass in the range 110 < m_jj < 760 GeV, of the azimuthal angle difference between the two jets, and of the angular variable chi in two dijet mass regions. The results are compared with next-to-leading-order QCD predictions. Good agreement is observed between the measured cross-sections and the predictions obtained using POWHEG + Pythia. MC@NLO + Herwig shows good agreement with the measured bbbar-dijet cross-section; however, it does not reproduce the measured inclusive cross-section well, particularly for central b-jets with large transverse momenta. (Comment: 10 pages plus author list (21 pages total), 8 figures, 1 table; final version published in the European Physical Journal.)

    Observation of associated near-side and away-side long-range correlations in √sNN=5.02  TeV proton-lead collisions with the ATLAS detector

    Two-particle correlations in relative azimuthal angle (Δϕ) and pseudorapidity (Δη) are measured in √sNN = 5.02 TeV p+Pb collisions using the ATLAS detector at the LHC. The measurements are performed using approximately 1 ÎŒb^-1 of data, as a function of transverse momentum (pT) and of the transverse energy (ÎŁET^Pb) summed over 3.1 < η < 4.9 in the direction of the Pb beam. The correlation function, constructed from charged particles, exhibits a long-range (2 < |Δη| < 5) “near-side” (Δϕ ∌ 0) correlation that grows rapidly with increasing ÎŁET^Pb. A long-range “away-side” (Δϕ ∌ π) correlation, obtained by subtracting the expected contributions from recoiling dijets and other sources estimated using events with small ÎŁET^Pb, is found to match the near-side correlation in magnitude, shape (in Δη and Δϕ) and ÎŁET^Pb dependence. The resultant Δϕ correlation is approximately symmetric about π/2 and is consistent with a dominant cos 2Δϕ modulation for all ÎŁET^Pb ranges and particle pT.
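    A schematic sketch (my own, not the ATLAS analysis code) of how a two-particle correlation function in Δη and Δϕ is commonly constructed: a same-event pair distribution divided by a mixed-event background. The toy particles below are uniformly distributed, so the result should be flat up to statistical fluctuations.

```python
import numpy as np

def delta_hist(eta_a, phi_a, eta_b, phi_b, eta_bins, phi_bins, same_event):
    """2D histogram of (delta-eta, delta-phi) over all pairs between the two
    particle lists; for same-event pairs the self-pairs are removed.
    delta-phi is folded into [-pi/2, 3*pi/2)."""
    deta = eta_a[:, None] - eta_b[None, :]
    dphi = phi_a[:, None] - phi_b[None, :]
    dphi = (dphi + np.pi / 2) % (2 * np.pi) - np.pi / 2
    keep = ~np.eye(len(eta_a), dtype=bool) if same_event else np.ones(deta.shape, bool)
    h, _, _ = np.histogram2d(deta[keep], dphi[keep], bins=(eta_bins, phi_bins))
    return h

rng = np.random.default_rng(5)
eta_bins = np.linspace(-5.0, 5.0, 51)
phi_bins = np.linspace(-np.pi / 2, 3 * np.pi / 2, 41)

# Toy events: uniformly distributed charged particles (no physics signal).
events = [(rng.uniform(-2.5, 2.5, 60), rng.uniform(0.0, 2 * np.pi, 60))
          for _ in range(200)]

same = sum(delta_hist(e, p, e, p, eta_bins, phi_bins, True) for e, p in events)
mixed = sum(delta_hist(events[i][0], events[i][1],
                       events[(i + 1) % len(events)][0],
                       events[(i + 1) % len(events)][1],
                       eta_bins, phi_bins, False)
            for i in range(len(events)))

# Correlation function: same-event pairs normalized by mixed-event pairs.
correlation = same / np.where(mixed > 0, mixed, 1.0)
print(correlation.shape, correlation.mean())
```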
    • 

    corecore