
    Decoherence in a system of many two-level atoms

    I show that the decoherence in a system of N degenerate two-level atoms interacting with a bosonic heat bath is, for any number of atoms N, governed by a generalized Hamming distance (called the "decoherence metric") between the superposed quantum states, with a time-dependent metric tensor that is specific to the heat bath. The decoherence metric allows for the complete characterization of the decoherence of all possible superpositions of many-particle states, and can be applied to minimize the overall decoherence in a quantum memory. For qubits which are far apart, the decoherence is given by a function describing single-qubit decoherence times the standard Hamming distance. I apply the theory to cold atoms in an optical lattice interacting with black body radiation. Comment: replaced with published version
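
    A minimal numerical sketch of the idea, not the paper's heat-bath-specific tensor: writing the superposed many-atom basis states as 0/1 strings and taking a time-dependent metric tensor G(t), the decoherence metric is a weighted Hamming distance; the identity-proportional G below is an assumed stand-in that recovers a single-qubit decoherence factor times the standard Hamming distance.

        import numpy as np

        def decoherence_metric(state_a, state_b, G):
            """Generalized Hamming distance between two many-atom basis
            states (0/1 arrays) under a metric tensor G (n x n matrix)."""
            diff = (np.asarray(state_a) != np.asarray(state_b)).astype(float)
            return float(diff @ G @ diff)

        # Illustration: far-apart qubits, G(t) = gamma * t * identity, so the
        # metric reduces to gamma*t times the standard Hamming distance.
        gamma, t, n = 0.1, 1.0, 6
        G_t = gamma * t * np.eye(n)
        d = decoherence_metric([0, 0, 1, 1, 0, 1], [1, 0, 1, 0, 0, 1], G_t)
        print(d)  # 0.2 = gamma * t * (Hamming distance 2)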

    Writer Identification Using Inexpensive Signal Processing Techniques

    We propose to use novel and classical audio, text, and other signal-processing techniques for "inexpensive", fast writer-identification tasks on scanned handwritten documents treated "visually". "Inexpensive" refers to the efficiency of the identification process in terms of CPU cycles while preserving decent accuracy for preliminary identification. This is a comparative study of multiple algorithm combinations in a pattern-recognition pipeline implemented in Java around the open-source Modular Audio Recognition Framework (MARF), whose scope extends well beyond audio. We present our preliminary experimental findings for this identification task. We simulate "visual" identification by "looking" at the handwritten document as a whole rather than trying to extract fine-grained features from it prior to classification. Comment: 9 pages; 1 figure; presented at CISSE'09 at http://conference.cisse2009.org/proceedings.aspx ; includes the application source code; based on MARF described in arXiv:0905.123
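
    The actual pipeline is in Java around MARF; the sketch below is only a generic Python analogue of the stated idea (treat the whole scanned page as a 1-D signal, extract cheap spectral features, classify by nearest neighbour), with hypothetical writer labels and random arrays standing in for scanned pages.

        import numpy as np

        def doc_to_signal(image):
            """Treat a scanned page as a whole: flatten the grayscale
            image row by row into a 1-D, audio-like signal."""
            return np.asarray(image, dtype=float).ravel()

        def spectral_features(signal, n_coeffs=64):
            """Cheap spectral envelope: magnitudes of the lowest FFT bins."""
            spec = np.abs(np.fft.rfft(signal - signal.mean()))
            return spec[:n_coeffs] / (np.linalg.norm(spec) + 1e-12)

        def identify(query_image, training_set):
            """Nearest-neighbour writer ID over (features, writer_id) pairs."""
            q = spectral_features(doc_to_signal(query_image))
            return min(training_set, key=lambda fw: np.linalg.norm(q - fw[0]))[1]

        rng = np.random.default_rng(0)  # random stand-ins for scanned pages
        train = [(spectral_features(doc_to_signal(rng.random((64, 64)))), w)
                 for w in ("writer_A", "writer_B")]
        print(identify(rng.random((64, 64)), train))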

    Detection of trend changes in time series using Bayesian inference

    Change points in time series are perceived as isolated singularities where two regular trends of a given signal do not match. The detection of such transitions is of fundamental interest for the understanding of the system's internal dynamics. In practice, observational noise makes it difficult to detect such change points in time series. In this work we elaborate a Bayesian method to estimate the location of the singularities and to produce confidence intervals. We validate the ability and sensitivity of our inference method by estimating change points of synthetic data sets. As an application we use our algorithm to analyze the annual flow volume of the Nile River at Aswan from 1871 to 1970, where we confirm a well-established significant transition point within the time series. Comment: 9 pages, 12 figures, submitted
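
    A minimal sketch of the mechanics under simplifying assumptions (a single change point in a piecewise-constant mean with known Gaussian noise and a uniform prior over locations; the paper's trend model is richer): segment likelihoods are turned into a posterior over the change-point location, from which a MAP estimate and a credible interval follow. The mean shift below is synthetic.

        import numpy as np

        def changepoint_posterior(y, sigma=1.0):
            """Posterior over a single change-point location for a
            piecewise-constant signal in Gaussian noise (uniform prior)."""
            n = len(y)
            logp = np.full(n, -np.inf)
            for k in range(2, n - 1):           # candidate split indices
                left, right = y[:k], y[k:]
                resid = np.concatenate([left - left.mean(), right - right.mean()])
                logp[k] = -0.5 * np.sum(resid ** 2) / sigma ** 2
            logp -= logp.max()
            post = np.exp(logp)
            return post / post.sum()

        # Synthetic example: the mean shifts at index 60.
        rng = np.random.default_rng(1)
        y = np.concatenate([rng.normal(0, 1, 60), rng.normal(1.5, 1, 40)])
        post = changepoint_posterior(y)
        k_hat = int(post.argmax())
        cdf = post.cumsum()
        ci = (int(np.searchsorted(cdf, 0.025)), int(np.searchsorted(cdf, 0.975)))
        print(k_hat, ci)  # MAP location and ~95% credible interval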

    A New Approach to Time Domain Classification of Broadband Noise in Gravitational Wave Data

    Broadband noise in gravitational wave (GW) detectors, also known as triggers, can often be a deterrent to the efficiency with which astrophysical search pipelines detect sources. It is important to understand their instrumental or environmental origin so that they can be eliminated or accounted for in the data. Since the number of triggers is large, data mining approaches such as clustering and classification are useful tools for this task. Classification of triggers based on a handful of discrete properties has been done in the past. Rich information is available in the waveform, or 'shape', of the triggers, but it has had a rather restricted exploration so far. This paper presents a new way to classify triggers that derives information from both the trigger waveforms and their discrete physical properties, using a sequential combination of the Longest Common Sub-Sequence (LCSS) and LCSS coupled with Fast Time Series Evaluation (FTSE) for waveform classification, and multidimensional hierarchical classification (MHC) analysis for grouping based on physical properties. A generalized k-means algorithm is used with the LCSS (and LCSS+FTSE) for clustering the triggers, with a validity measure to determine the correct number of clusters in the absence of any prior knowledge. The results are demonstrated by simulations and by application to a segment of real LIGO data from the sixth science run. Comment: 16 pages, 16 figures
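
    A sketch of the waveform-similarity core (a plain O(nm) dynamic-programming LCSS with a matching tolerance eps; the FTSE acceleration and the generalized k-means wrapper are not reproduced): two samples "match" when they differ by less than eps, and the normalized LCSS length gives a similarity in [0, 1]. The burst shapes below are toy data.

        import numpy as np

        def lcss_similarity(x, y, eps=0.5):
            """Longest Common Sub-Sequence similarity between two real-valued
            trigger waveforms: samples match if they differ by less than eps."""
            n, m = len(x), len(y)
            L = np.zeros((n + 1, m + 1), dtype=int)
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    if abs(x[i - 1] - y[j - 1]) < eps:
                        L[i, j] = L[i - 1, j - 1] + 1
                    else:
                        L[i, j] = max(L[i - 1, j], L[i, j - 1])
            return L[n, m] / min(n, m)   # in [0, 1]; 1 minus this is a distance

        # Toy use: two noisy copies of the same burst shape vs. pure noise.
        t = np.linspace(0, 1, 100)
        rng = np.random.default_rng(2)
        burst = np.exp(-((t - 0.5) / 0.05) ** 2)
        a = burst + 0.1 * rng.normal(size=100)
        b = burst + 0.1 * rng.normal(size=100)
        print(lcss_similarity(a, b), lcss_similarity(a, rng.normal(size=100)))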

    Interest Rates and Information Geometry

    The space of probability distributions on a given sample space possesses natural geometric properties. For example, in the case of a smooth parametric family of probability distributions on the real line, the parameter space has a Riemannian structure induced by the embedding of the family into the Hilbert space of square-integrable functions, and is characterised by the Fisher-Rao metric. In the nonparametric case the relevant geometry is determined by the spherical distance function of Bhattacharyya. In the context of term structure modelling, we show that minus the derivative of the discount function with respect to the maturity date gives rise to a probability density. This follows as a consequence of the positivity of interest rates. Therefore, by mapping the density functions associated with a given family of term structures to Hilbert space, the resulting metrical geometry can be used to analyse the relationship of yield curves to one another. We show that the general arbitrage-free yield curve dynamics can be represented as a process taking values in the convex space of smooth density functions on the positive real line. It follows that the theory of interest rate dynamics can be represented by a class of processes in Hilbert space. We also derive the dynamics for the central moments associated with the distribution determined by the yield curve. Comment: 20 pages, 3 figures
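
    A toy numerical sketch of the construction described above (flat-rate discount curves P(T) = exp(-rT) stand in for real term structures; the finite maturity grid and renormalisation are numerical conveniences): minus the maturity-derivative of the discount function gives a density, and yield curves are compared through the Bhattacharyya spherical distance between their densities.

        import numpy as np

        def density_from_discount(P, T):
            """rho(T) = -dP/dT: with positive interest rates, minus the
            maturity-derivative of the discount function is a density."""
            dT = T[1] - T[0]                      # uniform maturity grid
            rho = -np.gradient(P, T)
            return rho / (rho.sum() * dT)         # renormalise numerically

        def bhattacharyya_distance(rho1, rho2, T):
            """Spherical (Bhattacharyya) distance between two densities."""
            dT = T[1] - T[0]
            overlap = np.sum(np.sqrt(rho1 * rho2)) * dT
            return np.arccos(np.clip(overlap, -1.0, 1.0))

        # Toy comparison: two flat-rate discount curves P(T) = exp(-r T).
        T = np.linspace(0.0, 30.0, 3001)
        P1, P2 = np.exp(-0.02 * T), np.exp(-0.05 * T)
        d = bhattacharyya_distance(density_from_discount(P1, T),
                                   density_from_discount(P2, T), T)
        print(d)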

    A distinct peak-flux distribution of the third class of gamma-ray bursts: A possible signature of X-ray flashes?

    Gamma-ray bursts are the most luminous events in the Universe. Going beyond the short-long classification scheme, we work in the context of three burst populations, with the third group having intermediate durations and the softest spectra. We look for physical properties which discriminate the intermediate-duration bursts from the other two classes. We use maximum likelihood fits to establish group memberships in the duration-hardness plane. To confirm these results we also use k-means and hierarchical clustering. We use Monte-Carlo simulations to test the significance of the existence of the intermediate group, which we find with 99.8% probability. The intermediate-duration population has a significantly lower peak flux (with 99.94% significance). Also, long bursts with measured redshift have higher peak fluxes (with 98.6% significance) than long bursts without measured redshifts. As the third group is the softest, we argue that it is related to X-ray flashes among the gamma-ray bursts. We give a new, probabilistic definition for this class of events. Comment: accepted for publication in Ap
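
    A sketch of the classification step under stated assumptions: Gaussian mixtures in the (log duration, log hardness) plane fitted by maximum likelihood, with BIC standing in for the paper's Monte-Carlo significance test; scikit-learn is an assumed dependency and the three groups below are synthetic.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def best_component_count(log_T90, log_hardness, k_max=4):
            """Maximum-likelihood mixture fits in the duration-hardness
            plane; choose the number of burst groups by BIC."""
            X = np.column_stack([log_T90, log_hardness])
            bics = [GaussianMixture(k, n_init=5, random_state=0).fit(X).bic(X)
                    for k in range(1, k_max + 1)]
            return int(np.argmin(bics)) + 1, bics

        # Toy data: short, long and an intermediate/soft group.
        rng = np.random.default_rng(3)
        logT = np.concatenate([rng.normal(-0.5, 0.4, 200),   # short
                               rng.normal(1.5, 0.4, 400),    # long
                               rng.normal(0.7, 0.3, 100)])   # intermediate
        logH = np.concatenate([rng.normal(0.5, 0.2, 200),
                               rng.normal(0.1, 0.2, 400),
                               rng.normal(-0.3, 0.2, 100)])
        print(best_component_count(logT, logH))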

    Statistical mechanics of transcription-factor binding site discovery using Hidden Markov Models

    Hidden Markov Models (HMMs) are a commonly used tool for inference of transcription factor (TF) binding sites from DNA sequence data. We exploit the mathematical equivalence between HMMs for TF binding and the "inverse" statistical mechanics of hard rods in a one-dimensional disordered potential to investigate learning in HMMs. We derive analytic expressions for the Fisher information, a commonly employed measure of confidence in learned parameters, in the biologically relevant limit where the density of binding sites is low. We then use techniques from statistical mechanics to derive a scaling principle relating the specificity (binding energy) of a TF to the minimum amount of training data necessary to learn it. Comment: 25 pages, 2 figures, 1 table. V2: typos fixed and new references added
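
    A toy illustration of the final scaling idea, not the paper's hard-rod HMM calculation: assume each candidate site is an independent bound/unbound Bernoulli trial with a logistic dependence on the binding energy (an assumed stand-in model). Its Fisher information is p(1-p), so in the low-density limit (rare binding) a Cramér-Rao argument makes the required training data in this toy model grow roughly like 1/p, i.e. exponentially in the binding energy.

        import numpy as np

        def fisher_info_bernoulli(eps, mu=0.0):
            """Fisher information about a binding energy eps for one site
            modelled as Bernoulli bound/unbound with
            p(bound) = 1 / (1 + exp(eps - mu)); here I(eps) = p(1-p)."""
            p = 1.0 / (1.0 + np.exp(eps - mu))
            return p * (1.0 - p)

        def min_training_sites(eps, tol=0.1, mu=0.0):
            """Cramer-Rao style estimate: sites needed to pin eps to +/- tol."""
            return int(np.ceil(1.0 / (fisher_info_bernoulli(eps, mu) * tol ** 2)))

        for eps in (1.0, 3.0, 5.0):
            print(eps, min_training_sites(eps))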

    On finite p-groups whose automorphisms are all central

    An automorphism α of a group G is said to be central if α commutes with every inner automorphism of G. We construct a family of non-special finite p-groups having abelian automorphism groups. These groups provide counterexamples to a conjecture of A. Mahalanobis [Israel J. Math., 165 (2008), 161-187]. We also construct a family of finite p-groups whose automorphism groups are non-abelian and all of whose automorphisms are central. This solves a problem of I. Malinowska [Advances in group theory, Aracne Editrice, Rome 2002, 111-127]. Comment: 11 pages; counterexamples to a conjecture from [Israel J. Math., 165 (2008), 161-187]; this paper will appear in Israel J. Math. in 201

    Fast parameter inference in a biomechanical model of the left ventricle by using statistical emulation

    A central problem in biomechanical studies of personalized human left ventricular modelling is estimating the material properties and biophysical parameters from in vivo clinical measurements in a timeframe that is suitable for use within a clinic. Understanding these properties can provide insight into heart function or dysfunction and help to inform personalized medicine. However, finding a solution to the differential equations which mathematically describe the kinematics and dynamics of the myocardium through numerical integration can be computationally expensive. To circumvent this issue, we use the concept of emulation to infer the myocardial properties of a healthy volunteer in a viable clinical timeframe by using in vivo magnetic resonance image data. Emulation methods avoid computationally expensive simulations from the left ventricular model by replacing the biomechanical model, which is defined in terms of explicit partial differential equations, with a surrogate model inferred from simulations generated before the arrival of a patient, vastly improving computational efficiency at the clinic. We compare and contrast two emulation strategies: emulation of the computational model outputs and emulation of the loss between the observed patient data and the computational model outputs. These strategies are tested with two interpolation methods, as well as two loss functions. The best combination of methods is found by comparing the accuracy of parameter inference on simulated data for each combination. This combination, using the output emulation method, with local Gaussian process interpolation and the Euclidean loss function, provides accurate parameter inference in both simulated and clinical data, with a reduction in the computational cost of about three orders of magnitude compared with numerical integration of the differential equations by using finite element discretization techniques.
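
    A sketch of the output-emulation strategy under stated assumptions (a hypothetical cheap two-parameter "simulate" function stands in for the finite-element left-ventricle model; scikit-learn's global Gaussian-process regressor and a Nelder-Mead search replace the authors' local GP interpolation): emulators are trained offline on pre-computed simulations, and at the clinic the Euclidean loss between emulated outputs and patient measurements is minimised over the parameters.

        import numpy as np
        from scipy.optimize import minimize
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        # Offline stage (before the patient arrives).  'simulate' is a
        # hypothetical cheap stand-in for the finite-element LV model.
        def simulate(theta):
            a, b = theta
            return np.array([a * np.exp(-b), a + b ** 2, np.sin(a) + b])

        rng = np.random.default_rng(4)
        thetas = rng.uniform(0.1, 2.0, size=(200, 2))       # design points
        outputs = np.array([simulate(t) for t in thetas])   # simulator runs

        kernel = ConstantKernel(1.0) * RBF(length_scale=[0.5, 0.5])
        emulators = [GaussianProcessRegressor(kernel=kernel, normalize_y=True)
                         .fit(thetas, outputs[:, j])
                     for j in range(outputs.shape[1])]      # one GP per output

        def emulated_output(theta):
            return np.array([gp.predict(np.atleast_2d(theta))[0]
                             for gp in emulators])

        # Online stage (at the clinic): minimise the Euclidean loss between
        # emulated outputs and the patient's (here simulated) measurements.
        theta_true = np.array([1.3, 0.7])
        y_obs = simulate(theta_true)
        loss = lambda th: np.sum((emulated_output(th) - y_obs) ** 2)
        res = minimize(loss, x0=np.array([1.0, 1.0]), method="Nelder-Mead")
        print(res.x, theta_true)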