
    A Multi-variate Discrimination Technique Based on Range-Searching

    We present a fast and transparent multi-variate event classification technique, called PDE-RS, which is based on sampling the signal and background densities in a multi-dimensional phase space using range-searching. The employed algorithm is presented in detail and its behaviour is studied with simple toy examples representing basic patterns of problems often encountered in High Energy Physics data analyses. In addition, an example relevant for the search for instanton-induced processes in deep-inelastic scattering at HERA is discussed. For all studied examples, the newly presented method performs as well as artificial Neural Networks and has the further advantage of requiring less computation time. This makes it possible to carefully select the combination of observables that best separates signal from background and for which the simulations describe the data best. Moreover, the systematic and statistical uncertainties can be easily evaluated. The method is therefore a powerful tool for finding a small number of signal events in the large data samples expected at future particle colliders. Comment: Submitted to NIM, 18 pages, 8 figures
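    The abstract does not spell out the estimator, but the idea of sampling densities by range-searching can be illustrated with a small sketch. The hypothetical Python example below counts simulated signal and background events inside a hyper-box around each test point (a Chebyshev-ball query on a k-d tree) and forms a discriminant from the counts; the box width, toy distributions, and the D = n_s / (n_s + c n_b) form are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
# Toy 2-D samples standing in for simulated signal and background events.
signal = rng.normal(loc=+1.0, scale=0.8, size=(5000, 2))
background = rng.normal(loc=-1.0, scale=1.2, size=(5000, 2))

sig_tree = cKDTree(signal)
bkg_tree = cKDTree(background)

def pde_rs_discriminant(points, box_half_width=0.3):
    """Estimate D = n_s / (n_s + c * n_b) by counting training events
    inside a hyper-box (infinity-norm ball) around each test point."""
    n_s = np.array([len(ix) for ix in
                    sig_tree.query_ball_point(points, box_half_width, p=np.inf)])
    n_b = np.array([len(ix) for ix in
                    bkg_tree.query_ball_point(points, box_half_width, p=np.inf)])
    c = len(signal) / len(background)  # normalisation for relative sample sizes
    return n_s / np.maximum(n_s + c * n_b, 1)

test_points = rng.normal(size=(10, 2))
print(pde_rs_discriminant(test_points))  # values near 1 are signal-like
```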

    Application of The Method of Elastic Maps In Analysis of Genetic Texts

    The method of elastic maps ( http://cogprints.ecs.soton.ac.uk/archive/00003088/ and http://cogprints.ecs.soton.ac.uk/archive/00003919/ ) allows us to efficiently construct 1D, 2D and 3D non-linear approximations to principal manifolds of different topology (a piece of a plane, a sphere, a torus, etc.) and to project data onto them. We describe the idea of the method and demonstrate its applications in the analysis of genetic sequences. The animated 3D scatter plots are available on our web-site: http://www.ihes.fr/~zinovyev/7clusters/ We found a universal cluster structure of genetic sequences, and demonstrated the fine structure of these clusters for coding regions. This fine structure is related to differences in translational efficiency.
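    A minimal sketch of the data-preparation step, under stated assumptions: the cluster structure in such studies emerges from codon-usage frequencies of windows sliding along a genetic text, viewed in a low-dimensional projection. The example below uses a linear PCA projection as a simple stand-in for the non-linear elastic map, applied to a random stand-in sequence; the window and step sizes are illustrative.

```python
import numpy as np
from itertools import product
from sklearn.decomposition import PCA

CODONS = [''.join(c) for c in product('ACGT', repeat=3)]
INDEX = {c: i for i, c in enumerate(CODONS)}

def codon_frequencies(text, window=300, step=100):
    """64-dimensional codon-usage vectors from non-overlapping triplets
    in sliding windows of a genetic text (one fixed reading frame)."""
    vecs = []
    for start in range(0, len(text) - window + 1, step):
        w = text[start:start + window]
        v = np.zeros(len(CODONS))
        for i in range(0, window - 2, 3):
            codon = w[i:i + 3]
            if codon in INDEX:
                v[INDEX[codon]] += 1
        vecs.append(v / max(v.sum(), 1))
    return np.array(vecs)

rng = np.random.default_rng(1)
sequence = ''.join(rng.choice(list('ACGT'), size=20000))  # stand-in text
X = codon_frequencies(sequence)
projection = PCA(n_components=3).fit_transform(X)  # 3-D scatter to inspect clusters
print(projection.shape)
```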

    Space-by-time non-negative matrix factorization for single-trial decoding of M/EEG activity

    We develop a novel methodology for the single-trial analysis of multichannel time-varying neuroimaging signals. We introduce the space-by-time M/EEG decomposition, based on Non-negative Matrix Factorization (NMF), which describes single-trial M/EEG signals using a set of non-negative spatial and temporal components that are linearly combined with signed scalar activation coefficients. We illustrate the effectiveness of the proposed approach on an EEG dataset recorded during the performance of a visual categorization task. Our method extracts three temporal and two spatial functional components, achieving a compact yet full representation of the underlying structure, which validates and succinctly summarizes results from previous studies. Furthermore, we introduce a decoding analysis that allows us to determine the distinct functional role of each component and to relate it to experimental conditions and task parameters. In particular, we demonstrate that the presented stimulus and the task difficulty of each trial can be reliably decoded using specific combinations of components from the identified space-by-time representation. When compared with a sliding-window linear discriminant algorithm, our approach yields more robust decoding performance across participants. Overall, our findings suggest that the proposed space-by-time decomposition is a meaningful low-dimensional representation that carries the relevant information of single-trial M/EEG signals.
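    As a rough sketch of the decomposition's shape (not the authors' algorithm, which fits all factors jointly), the example below factorizes stand-in trial data as X_trial ≈ T A S^T: non-negative temporal and spatial modules obtained with scikit-learn's NMF on rectified data, and signed per-trial activation coefficients recovered by least squares. All dimensions and preprocessing choices here are assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
n_trials, n_times, n_chans = 40, 50, 32
data = rng.standard_normal((n_trials, n_times, n_chans))  # stand-in for M/EEG epochs

# NMF needs non-negative input; work with magnitudes of the concatenated trials.
stacked = np.abs(data).reshape(n_trials * n_times, n_chans)

# Spatial modules: channels x 2
spatial = NMF(n_components=2, init='nndsvda', max_iter=500).fit(stacked).components_.T

# Temporal modules: times x 3, from the trial-averaged magnitude
temporal = NMF(n_components=3, init='nndsvda', max_iter=500).fit(
    np.abs(data).mean(axis=0).T).components_.T

# Signed activation coefficients per trial: solve X ~ T @ A @ S.T in least squares.
T_pinv, S_pinv = np.linalg.pinv(temporal), np.linalg.pinv(spatial.T)
coeffs = np.array([T_pinv @ X @ S_pinv for X in data])  # shape (trials, 3, 2)
print(coeffs.shape)
```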

    A new analysis strategy for detection of faint gamma-ray sources with Imaging Atmospheric Cherenkov Telescopes

    A new background rejection strategy for gamma-ray astrophysics with stereoscopic Imaging Atmospheric Cherenkov Telescopes (IACT), based on Monte Carlo (MC) simulations and real background data from the H.E.S.S. (High Energy Stereoscopic System [1]) experiment, is described. The analysis is based on a multivariate combination of both previously known and newly derived discriminant variables built from the physical properties of the shower as well as its multiple images, for a total of eight variables. Two of these new variables are derived from a new energy evaluation procedure, which is also presented here. The method achieves enhanced sensitivity with the current generation of ground-based Cherenkov telescopes, and at the same time its speed and flexibility allow an easy generalization to any type of IACT. The robustness of this approach against Night Sky Background (NSB) variations is tested with MC-simulated events. The overall consistency of the analysis chain has been checked by comparing the real gamma-ray signal obtained from H.E.S.S. observations with MC simulations and by reconstructing known source spectra. Finally, the performance has been evaluated by application to faint H.E.S.S. sources. The gain in sensitivity compared to the best standard Hillas analysis ranges approximately from 1.2 to 1.8 depending on the source characteristics, which corresponds to a saving in observation time by a factor of 1.4 to 3.2. Comment: 26 pages, 13 figures
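    The abstract names eight discriminant variables combined multivariately but not the combiner itself. As an illustrative stand-in, the sketch below trains a boosted decision tree on toy gamma-like and hadron-like samples with eight variables each and scores events for a selection cut; the distributions, classifier choice, and hyperparameters are all assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 5000
# Stand-ins for eight shower/image discriminant variables per event.
gamma_like = rng.normal(0.0, 1.0, size=(n, 8))
hadron_like = rng.normal(0.7, 1.3, size=(n, 8))

X = np.vstack([gamma_like, hadron_like])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = gamma, 0 = background
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = GradientBoostingClassifier(n_estimators=200, max_depth=3)
clf.fit(X_tr, y_tr)

# Events above a cut on the score are kept as gamma-ray candidates.
scores = clf.predict_proba(X_te)[:, 1]
print('mean score for true gammas:', scores[y_te == 1].mean())
print('mean score for background :', scores[y_te == 0].mean())
```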

    Discrimination and synthesis of recursive quantum states in high-dimensional Hilbert spaces

    We propose an interferometric method for statistically discriminating between nonorthogonal states in high-dimensional Hilbert spaces for use in quantum information processing. The method is illustrated for the case of photon orbital angular momentum (OAM) states. These states belong to pairs of bases that are mutually unbiased on a sequence of two-dimensional subspaces of the full Hilbert space, but the vectors within the same basis are not necessarily orthogonal to each other. Over multiple trials, this method allows OAM eigenstates to be distinguished from superpositions of multiple such eigenstates. Variations of the same method are then shown to be capable of preparing and detecting arbitrary linear combinations of states in Hilbert space. One further variation allows the construction of chains of states obeying recurrence relations on the Hilbert space itself, opening a new range of possibilities for more abstract information-coding algorithms to be carried out experimentally in a simple manner. Among other applications, we show that this approach provides a simplified means of switching between pairs of high-dimensional mutually unbiased OAM bases.
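    A toy numerical illustration of the statistical, multi-trial character of such discrimination (not the interferometric scheme itself): repeatedly projecting onto a single OAM eigenstate yields different click statistics for that eigenstate and for a nonorthogonal superposition. The dimension and the specific states below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 8  # dimension of the truncated OAM Hilbert space (illustrative)

def normalize(v):
    return v / np.linalg.norm(v)

# Two nonorthogonal states: an OAM eigenstate and a two-mode superposition.
psi = np.zeros(d, dtype=complex); psi[2] = 1.0                       # |l=2>
phi = normalize(np.array([0, 0, 1, 1] + [0] * (d - 4), dtype=complex))  # (|2>+|3>)/sqrt(2)

def click_outcomes(state, n_trials=2000):
    """Projective measurement onto |2> repeated over many trials; the
    click statistics separate the eigenstate from the superposition."""
    p_click = abs(state[2]) ** 2
    return rng.random(n_trials) < p_click

print('eigenstate click rate   :', click_outcomes(psi).mean())  # ~1.0
print('superposition click rate:', click_outcomes(phi).mean())  # ~0.5
print('overlap |<psi|phi>|^2   :', abs(np.vdot(psi, phi)) ** 2)
```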

    Seismic Ray Impedance Inversion

    This thesis investigates a prestack seismic inversion scheme implemented in the ray-parameter domain. Conventionally, most prestack seismic inversion methods are performed in the incidence-angle domain. However, inversion using the concept of ray impedance, which honours ray-path variation as the elastic parameters vary according to Snell's law, shows a greater capacity to discriminate between lithologies than conventional elastic impedance inversion. The procedure starts with data transformation into the ray-parameter domain and then implements the ray impedance inversion along constant-ray-parameter profiles. For the different constant-ray-parameter profiles, mixed-phase wavelets are initially estimated based on the high-order statistics of the data and further refined after a proper well-to-seismic tie. With the wavelets estimated, a Cauchy inversion method is used to invert for seismic reflectivity sequences, aiming at recovering reflectivity sequences suitable for blocky impedance inversion. The impedance inversion from reflectivity sequences adopts a standard generalised linear inversion scheme, whose results are utilised to identify rock properties and facilitate quantitative interpretation. It is also demonstrated that elastic parameters can be further inverted from ray impedance values, without eliminating an extra density term or introducing Gardner's relation to absorb this term. Ray impedance inversion is extended to P-S converted waves by introducing the definition of converted-wave ray impedance. This quantity shows some advantages in connecting prestack converted-wave data with well logs, compared with the shear-wave elastic impedance derived from the Aki and Richards approximation to the Zoeppritz equations. An analysis of P-P and P-S wave data under the framework of ray impedance is conducted on a real multicomponent dataset, which reduces the uncertainty in lithology identification.

    Inversion is the key method in generating the examples throughout the thesis, as we believe it can render robust solutions to geophysical problems. Apart from the reflectivity-sequence, ray impedance and elastic-parameter inversions mentioned above, inversion methods are also adopted in transforming the prestack data from the offset domain to the ray-parameter domain, in mixed-phase wavelet estimation, and in the registration of P-P and P-S waves for joint analysis. The ray impedance inversion methods are successfully applied to different types of datasets. For each individual step towards the ray impedance inversion, the advantages, disadvantages and limitations of the algorithms adopted are detailed. In conclusion, the ray-impedance analyses demonstrated in this thesis are highly competitive with the classical elastic impedance methods, and the author recommends them for wider application.
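    One concrete, standard piece of such a chain is recovering a blocky impedance profile from an inverted reflectivity sequence: with r_i = (Z_{i+1} - Z_i) / (Z_{i+1} + Z_i), the impedance follows the recursion Z_{i+1} = Z_i (1 + r_i) / (1 - r_i). The sketch below round-trips a synthetic three-interface model; it illustrates only this textbook relation, not the thesis' Cauchy or generalised linear inversion.

```python
import numpy as np

def impedance_from_reflectivity(z0, reflectivity):
    """Recover a blocky impedance profile from a reflectivity sequence
    using the standard recursion Z_{i+1} = Z_i * (1 + r_i) / (1 - r_i)."""
    z = [z0]
    for r in reflectivity:
        z.append(z[-1] * (1.0 + r) / (1.0 - r))
    return np.array(z)

# Round-trip check on a synthetic four-layer model.
z_true = np.array([2.0e6, 2.6e6, 2.3e6, 3.1e6])  # acoustic impedance, kg/(m^2 s)
r = (z_true[1:] - z_true[:-1]) / (z_true[1:] + z_true[:-1])
print(impedance_from_reflectivity(z_true[0], r))  # reproduces z_true
```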

    JUNIPR: a Framework for Unsupervised Machine Learning in Particle Physics

    In applications of machine learning to particle physics, a persistent challenge is how to go beyond discrimination to learn about the underlying physics. To this end, a powerful tool would be a framework for unsupervised learning, where the machine learns the intricate high-dimensional contours of the data upon which it is trained, without reference to pre-established labels. In order to approach such a complex task, an unsupervised network must be structured intelligently, based on a qualitative understanding of the data. In this paper, we scaffold the neural network's architecture around a leading-order model of the physics underlying the data. In addition to making unsupervised learning tractable, this design actually alleviates existing tensions between performance and interpretability. We call the framework JUNIPR: "Jets from UNsupervised Interpretable PRobabilistic models". In this approach, the set of particle momenta composing a jet is clustered into a binary tree that the neural network examines sequentially. Training is unsupervised and unrestricted: the network could decide that the data bears little correspondence to the chosen tree structure. However, when there is a correspondence, the network's output along the tree has a direct physical interpretation. JUNIPR models can perform discrimination tasks through the statistically optimal likelihood-ratio test, and they permit visualizations of discrimination power at each branching in a jet's tree. Additionally, JUNIPR models provide a probability distribution from which events can be drawn, providing a data-driven Monte Carlo generator. As a third application, JUNIPR models can reweight events from one (e.g. simulated) data set to agree with distributions from another (e.g. experimental) data set. Comment: 37 pages, 24 figures
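    The discrimination application follows from the factorized likelihood: a jet's probability is a product of per-branching probabilities evaluated sequentially along its clustering tree, and two such models can be compared via the likelihood ratio. The sketch below mimics that shape with dummy branching models in place of trained networks; every name here is a stand-in, not the JUNIPR code.

```python
import numpy as np

rng = np.random.default_rng(5)

def jet_log_probability(branchings, branching_prob):
    """Factorized likelihood in the JUNIPR spirit: p(jet) is a product of
    per-branching probabilities evaluated sequentially along the tree."""
    log_p, state = 0.0, None
    for b in branchings:
        log_p += np.log(branching_prob(b, state))
        state = b  # the real model carries the full history in an RNN state
    return log_p

# Two dummy branching models standing in for trained networks.
model_a = lambda b, s: 0.6 if b < 0.5 else 0.4
model_b = lambda b, s: 0.4 if b < 0.5 else 0.6

jet = rng.random(12)  # stand-in for a sequence of branching observables
llr = jet_log_probability(jet, model_a) - jet_log_probability(jet, model_b)
print('tagged as class A' if llr > 0 else 'tagged as class B')
```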

    Algebraic and algorithmic frameworks for optimized quantum measurements

    Von Neumann projections are the main operations by which information can be extracted from the quantum to the classical realm. They are, however, static processes that do not adapt to the states they measure. Advances in the field of adaptive measurement have shown that this limitation can be overcome by "wrapping" the von Neumann projectors in a higher-dimensional circuit which exploits the interplay between measurement outcomes and measurement settings. Unfortunately, the design of adaptive measurements has often been ad hoc and setup-specific. We here develop a unified framework for designing optimized measurements. Our approach is twofold: the first part is algebraic and formulates the problem of measurement as a simple matrix diagonalization problem. The second is algorithmic and models the optimal interaction between measurement outcomes and measurement settings as a cascaded network of conditional probabilities. Finally, we demonstrate that several figures of merit, such as Bell factors, can be improved by optimized measurements. This leads us to the promising observation that measurement detectors which, taken individually, have a low quantum efficiency can be arranged into circuits where, collectively, the limitations of inefficiency are compensated for.
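    To make the "cascaded network of conditional probabilities" concrete, here is a toy two-step adaptive measurement in which the second setting is a conditional function of the first outcome. The qubit state, the outcome-to-setting map, and the crude outcome-to-estimate table are all illustrative assumptions, not the paper's framework.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(6)

def measure(phi, theta):
    """Binary measurement of the qubit (|0> + e^{i phi}|1>)/sqrt(2) with a
    controllable reference phase theta: p(0) = cos^2((phi - theta)/2)."""
    return int(rng.random() > np.cos((phi - theta) / 2) ** 2)

def adaptive_estimate(phi):
    """Two-step cascade: the second setting is conditioned on the first
    outcome, and the two-bit record maps to a crude phase estimate."""
    first = measure(phi, theta=0.0)
    theta2 = np.pi / 2 if first == 0 else 3 * np.pi / 2  # outcome-dependent setting
    second = measure(phi, theta2)
    return {(0, 0): np.pi / 4, (0, 1): 7 * np.pi / 4,
            (1, 0): 5 * np.pi / 4, (1, 1): 3 * np.pi / 4}[(first, second)]

estimates = [adaptive_estimate(np.pi / 4) for _ in range(2000)]
print(Counter(estimates).most_common(1))  # pi/4 should dominate
```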