
    Local robustness of Bayesian parametric inference and observed likelihoods

    Here a new class of local separation measures over prior densities is studied, and their usefulness for examining prior-to-posterior robustness under a sequence of observed likelihoods, possibly erroneous, is illustrated. It is shown that, provided an approximation to a prior distribution satisfies certain mild smoothness and tail conditions, prior-to-posterior inference for large samples is robust, irrespective of whether the priors are grossly misspecified with respect to variation distance and irrespective of the form or the validity of the observed likelihood. Furthermore, it is usually possible to specify error bounds explicitly in terms of statistics associated with the posterior under the approximating prior and assumed prior error bounds. These results apply in a general multivariate setting and are especially easy to interpret when prior densities are approximated using standard families or when multivariate prior densities factorise.
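    A minimal numerical sketch (not from the paper) of the phenomenon the abstract describes: two rather different priors, here a standard normal and a Cauchy density on a grid, yield posteriors whose total-variation distance shrinks as the sample size grows. The grid, the priors and the Gaussian likelihood are illustrative assumptions.

```python
# Illustrative sketch: posteriors under two quite different priors converge in
# total variation as the sample size grows, provided both priors are smooth
# and positive near the truth.
import numpy as np

rng = np.random.default_rng(0)
theta = np.linspace(-5, 5, 4001)           # parameter grid
dtheta = theta[1] - theta[0]

prior_a = np.exp(-0.5 * theta**2)          # N(0, 1) prior (unnormalised)
prior_b = 1.0 / (1.0 + theta**2)           # Cauchy prior (unnormalised)

def posterior(prior, data):
    # Gaussian likelihood with known unit variance, evaluated on the grid
    loglik = -0.5 * ((data[:, None] - theta[None, :]) ** 2).sum(axis=0)
    post = prior * np.exp(loglik - loglik.max())
    return post / (post.sum() * dtheta)

for n in (5, 50, 500):
    data = rng.normal(loc=1.0, scale=1.0, size=n)
    pa, pb = posterior(prior_a, data), posterior(prior_b, data)
    tv = 0.5 * np.abs(pa - pb).sum() * dtheta   # total variation distance
    print(f"n={n:4d}  TV(posterior_a, posterior_b) = {tv:.4f}")
```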

    Electroweak production of hybrid mesons in a Flux-Tube simulation of Lattice QCD

    We make the first calculation of the electroweak couplings of hybrid mesons to conventional mesons appropriate to photoproduction and to the decays of $B$ or $D$ mesons. $E1$ amplitudes are found to be large and may contribute in charge exchange $\gamma p \to n H^+$, allowing production of (amongst others) the charged $1^{-+}$ exotic hybrid off $a_2$ exchange. Axial hybrid meson photoproduction is predicted to be large courtesy of $\pi$ exchange, and its strange hybrid counterpart is predicted in $B \to \psi K_H(1^+)$ with b.r. $\sim 10^{-4}$. Higher multipoles, and some implications for hybrid charmonium, are briefly discussed. Comment: 4 pages

    Constructing a data-driven receptor model for organic and inorganic aerosol : a synthesis analysis of eight mass spectrometric data sets from a boreal forest site

    The interactions between organic and inorganic aerosol chemical components are integral to understanding and modelling climate- and health-relevant aerosol physicochemical properties, such as volatility, hygroscopicity, light scattering and toxicity. This study presents a synthesis analysis of eight data sets of non-refractory aerosol composition measured at a boreal forest site. The measurements, performed with an aerosol mass spectrometer, cover in total around 9 months over the course of 3 years. In our statistical analysis, we use the complete organic and inorganic unit-resolution mass spectra, as opposed to the more common approach of only including the organic fraction. The analysis is based on iterative, combined use of (1) data reduction, (2) classification and (3) scaling tools, producing a data-driven chemical mass balance type of model capable of describing site-specific aerosol composition. The receptor model we constructed was able to explain 83 +/- 8% of the variation in the data, which increased to 96 +/- 3% when signals from low signal-to-noise variables were not considered. The resulting interpretation of an extensive set of aerosol mass spectrometric data infers seven distinct aerosol chemical components for a rural boreal forest site: ammonium sulfate (35 +/- 7% of mass), low- and semi-volatile oxidised organic aerosols (27 +/- 8% and 12 +/- 7%), biomass burning organic aerosol (11 +/- 7%), a nitrate-containing organic aerosol type (7 +/- 2%), ammonium nitrate (5 +/- 2%), and hydrocarbon-like organic aerosol (3 +/- 1%). Some of the additionally observed, rare outlier aerosol types likely emerge due to surface ionisation effects and likely represent amine compounds from an unknown source and alkaline metals from emissions of a nearby district heating plant. Compared to traditional, ion-balance-based inorganics apportionment schemes for aerosol mass spectrometer data, our statistics-based method provides an improved, more robust approach, yielding readily useful information for the modelling of the physical and chemical properties of submicron atmospheric aerosols. The results also shed light on the division between organic and inorganic aerosol types and the dynamics of salt formation in aerosol. Equally importantly, the combined methodology exemplifies an iterative analysis, using consecutive analysis steps that combine statistical methods. Such an approach offers new ways to home in on physicochemically sensible solutions with minimal need for a priori information or analyst interference. We therefore suggest that similar statistics-based approaches offer significant potential for un- or semi-supervised machine-learning applications in future analyses of aerosol mass spectrometric data.
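    A hedged sketch of the kind of matrix-factorisation receptor model described above, with scikit-learn's NMF standing in for the paper's combination of data-reduction, classification and scaling tools; the synthetic time-by-m/z matrix, the choice of seven components and the explained-variation measure are illustrative assumptions, not the study's actual pipeline.

```python
# Illustrative sketch: factorise a time-by-m/z matrix into a few non-negative
# "aerosol components" and report the fraction of variation the
# reconstruction explains.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
n_times, n_mz, n_true = 500, 120, 4

# Synthetic data: non-negative time series mixing a few spectral profiles
profiles = rng.gamma(1.0, 1.0, size=(n_true, n_mz))
timeseries = rng.gamma(2.0, 1.0, size=(n_times, n_true))
X = timeseries @ profiles + rng.normal(0, 0.05, size=(n_times, n_mz)).clip(min=0)

model = NMF(n_components=7, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(X)        # component time series (mass contributions)
H = model.components_             # component mass spectra
X_hat = W @ H

explained = 1.0 - np.sum((X - X_hat) ** 2) / np.sum((X - X.mean()) ** 2)
print(f"explained variation: {100 * explained:.1f}%")
```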

    Generic dijet soft functions at two-loop order: correlated emissions

    We present a systematic algorithm for the perturbative computation of soft functions that are defined in terms of two light-like Wilson lines. Our method is based on a universal parametrisation of the phase-space integrals, which we use to isolate the singularities in Laplace space. The observable-dependent integrations can then be performed numerically, and they are implemented in the new, publicly available package SoftSERVE that we use to derive all of our numerical results. Our algorithm applies to both SCET-1 and SCET-2 soft functions, and in the current version it can be used to compute two out of three NNLO colour structures associated with the so-called correlated-emission contribution. We confirm existing two-loop results for about a dozen $e^+e^-$ and hadron-collider soft functions, and we obtain new predictions for the C-parameter as well as thrust-axis and broadening-axis angularities. Comment: 58 pages, 8 figures, associated package can be found at https://softserve.hepforge.org/. Minor revision
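    SoftSERVE itself is not reproduced here; the following toy example is an assumption-laden stand-in for the standard subtraction trick such codes rely on when isolating singularities before the observable-dependent integration is done numerically: a single $1/\epsilon$ endpoint pole is extracted analytically and the remainder is integrated with an off-the-shelf quadrature routine.

```python
# Toy illustration (not SoftSERVE): isolate the pole of
#   I(eps) = int_0^1 dx x^(-1-eps) f(x)
# by subtracting f(0), which leaves a finite numerical integral:
#   I(eps) = -f(0)/eps + int_0^1 dx [f(x) - f(0)]/x + O(eps)
import numpy as np
from scipy.integrate import quad

def f(x):
    # smooth stand-in for the observable-dependent measurement function
    return np.exp(-x)

def I_direct(eps):
    # brute-force integral; only convergent for eps < 0
    return quad(lambda x: x**(-1.0 - eps) * f(x), 0.0, 1.0)[0]

def I_subtracted(eps):
    pole = -f(0.0) / eps                                    # analytic pole term
    finite = quad(lambda x: (f(x) - f(0.0)) / x, 0.0, 1.0)[0]
    return pole + finite

for eps in (-0.2, -0.1, -0.05):
    print(f"eps={eps:5.2f}  direct={I_direct(eps):10.6f}  "
          f"pole+finite (up to O(eps))={I_subtracted(eps):10.6f}")
```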

    Evaluation of novel data-driven metrics of amyloid β deposition for longitudinal PET studies

    PURPOSE: Positron emission tomography (PET) provides in vivo quantification of amyloid-β (Aβ) pathology. Established methods for assessing Aβ burden can be affected by physiological and technical factors. Novel, data-driven metrics have been developed to account for these sources of variability. We aimed to evaluate the performance of four data-driven amyloid PET metrics against conventional techniques, using a common set of criteria. METHODS: Three cohorts were used for evaluation: Insight 46 (N=464, [18F]florbetapir), AIBL (N=277, [18F]flutemetamol), and an independent test-retest data set (N=10, [18F]flutemetamol). Established metrics of amyloid tracer uptake included the Centiloid (CL) and, where dynamic data were available, the non-displaceable binding potential (BPND). The four data-driven metrics computed were the amyloid load (Aβ load), the Aβ PET pathology accumulation index (Aβ index), the Centiloid derived from non-negative matrix factorisation (CLNMF), and the amyloid pattern similarity score (AMPSS). These metrics were evaluated using reliability and repeatability in test-retest data, associations with BPND and CL, and sample size estimates to detect a 25% slowing in Aβ accumulation. RESULTS: All metrics showed good reliability. Aβ load, Aβ index and CLNMF were strongly associated with BPND. The associations with CL suggest that cross-sectional measures of CLNMF, Aβ index and Aβ load are robust across studies. Sample size estimates for secondary prevention trial scenarios were lowest for CLNMF and Aβ load compared to the CL. CONCLUSION: Among the novel data-driven metrics evaluated, the Aβ load, the Aβ index and the CLNMF can provide performance comparable to more established quantification methods of Aβ PET tracer uptake. The CLNMF and Aβ load could offer a more precise alternative to CL, although further studies in larger cohorts should be conducted.
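    A hedged sketch of how a sample-size estimate for detecting a 25% slowing of Aβ accumulation might be computed with a standard two-arm, two-sided z-test formula; the accumulation rate, its between-subject standard deviation, the power and the significance level below are made-up illustrative numbers, not values from Insight 46, AIBL or the test-retest cohort.

```python
# Illustrative sketch (made-up numbers): per-arm sample size for a two-arm
# trial powered to detect a 25% slowing of the annual amyloid accumulation
# rate with a two-sided z test.
from scipy.stats import norm

rate_mean = 2.0          # assumed mean annual accumulation (metric units/yr)
rate_sd = 3.0            # assumed between-subject SD of the annual rate
slowing = 0.25           # treatment slows accumulation by 25%
alpha, power = 0.05, 0.80

delta = slowing * rate_mean                       # detectable difference
z_a = norm.ppf(1 - alpha / 2)
z_b = norm.ppf(power)
n_per_arm = 2 * (z_a + z_b) ** 2 * (rate_sd / delta) ** 2

print(f"required sample size per arm: {n_per_arm:.0f}")
```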

    Robust Conditional Independence maps of single-voxel Magnetic Resonance Spectra to elucidate associations between brain tumours and metabolites.

    The aim of the paper is two-fold. First, we show that structure finding with the PC algorithm can be inherently unstable and requires further operational constraints in order to consistently obtain models that are faithful to the data. We propose a methodology to stabilise the structure-finding process, minimising both false positive and false negative error rates; this is demonstrated with synthetic data. Second, we apply the proposed structure-finding methodology to a data set comprising single-voxel Magnetic Resonance Spectra of normal brain and three classes of brain tumours, to elucidate the associations between brain tumour types and a range of observed metabolites that are known to be relevant for their characterisation. The data set is bootstrapped in order to maximise the robustness of feature selection for nominated target variables. Specifically, Conditional Independence maps (CI-maps) built from the data and their derived Bayesian networks have been used. A Directed Acyclic Graph (DAG) is built from the CI-maps, with the minimisation of errors in the graph structure being a major challenge. This work presents empirical evidence on how to reduce false positive errors via the False Discovery Rate, and how to identify appropriate parameter settings to reduce false negative errors. In addition, several node ordering policies are investigated that transform the graph into a DAG. The obtained results show that ordering nodes by strength of mutual information can recover a representative DAG in a reasonable time, although a more accurate graph can be recovered using a random order of samples, at the expense of increased computation time.
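    A simplified sketch (not the paper's exact pipeline) of the stabilisation idea: bootstrap the data, test each pairwise conditional independence with a partial-correlation test, control false positives per resample with a Benjamini-Hochberg FDR correction, and keep edges that survive in most resamples. The synthetic chain structure and the simple CI test used in place of the full PC algorithm are illustrative assumptions.

```python
# Illustrative sketch: bootstrapped edge selection with an FDR correction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# synthetic data with a known sparse structure: X0 -> X1 -> X2, X3 independent
n = 400
x0 = rng.normal(size=n)
x1 = 0.8 * x0 + rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)
x3 = rng.normal(size=n)
X = np.column_stack([x0, x1, x2, x3])
p = X.shape[1]

def edge_pvalues(data):
    # partial correlations from the precision matrix, then Fisher-z p-values
    prec = np.linalg.inv(np.corrcoef(data, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)
    z = np.arctanh(np.clip(pcorr, -0.9999, 0.9999)) * np.sqrt(len(data) - p - 1)
    return 2 * stats.norm.sf(np.abs(z))

def bh_reject(pvals, q=0.05):
    # Benjamini-Hochberg step-up procedure on a flat array of p-values
    order = np.argsort(pvals)
    thresh = q * np.arange(1, len(pvals) + 1) / len(pvals)
    passed = pvals[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros_like(pvals, dtype=bool)
    reject[order[:k]] = True
    return reject

edges = [(i, j) for i in range(p) for j in range(i + 1, p)]
counts = np.zeros(len(edges))
n_boot = 200
for _ in range(n_boot):
    sample = X[rng.integers(0, n, size=n)]
    pv = edge_pvalues(sample)
    counts += bh_reject(np.array([pv[i, j] for i, j in edges]))

for (i, j), c in zip(edges, counts):
    print(f"edge X{i}-X{j}: selected in {100 * c / n_boot:.0f}% of resamples")
```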

    Quantum machine learning: a classical perspective

    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning techniques to impressive results in regression, classification, data-generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical machine learning algorithms. Here we review the literature in quantum machine learning and discuss perspectives for a mixed readership of classical machine learning and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in machine learning are identified as promising directions for the field. Practical questions, like how to upload classical data into quantum form, will also be addressed. Comment: v3 33 pages; typos corrected and references added
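    As an aside on the data-uploading question mentioned above, here is a minimal sketch of one standard strategy, amplitude encoding, in which a length-2^n classical vector is rescaled to unit norm and read as the amplitudes of an n-qubit state; the example vector is an arbitrary illustrative choice.

```python
# Illustrative sketch of amplitude encoding: a classical vector of length 2^n
# becomes the amplitude vector of an n-qubit state after normalisation.
import numpy as np

x = np.array([3.0, 0.0, 4.0, 0.0, 0.0, 0.0, 0.0, 5.0])   # classical data, length 8 = 2^3
amplitudes = x / np.linalg.norm(x)                        # |psi> = sum_i a_i |i>

n_qubits = int(np.log2(len(x)))
probs = amplitudes ** 2                                   # Born-rule measurement probabilities
for i, (a, pr) in enumerate(zip(amplitudes, probs)):
    print(f"|{i:0{n_qubits}b}>  amplitude={a:+.3f}  probability={pr:.3f}")
```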