102 research outputs found

    Teleportation of atomic states via position measurements

    We present a scheme for conditionally teleporting an unknown atomic state in cavity QED, which requires two atoms and one cavity mode. The translational degrees of freedom of the atoms are taken into account using the optical Stern-Gerlach model. We show that successful teleportation with probability 1/2 can be achieved through local measurements of the cavity photon number and atomic positions. Neither direct projection onto highly entangled states nor holonomous interaction-time constraints are required. Comment: 9 pages, 3 figures, 3 new appendices included.

    Structure and evolution of a European Parliament via a network and correlation analysis

    We present a study of the network of relationships among elected members of the Finnish parliament, based on a quantitative analysis of initiative co-signatures, and its evolution over 16 years. To understand the structure of the parliament, we constructed a statistically validated network of members, based on the similarity between the patterns of initiatives they signed. We looked for communities within the network and characterized them in terms of members' attributes, such as electoral district and party. To gain insight into the nested structure of communities, we constructed a hierarchical tree of members from the correlation matrix. Afterwards, we studied parliament dynamics yearly, with a focus on correlations within and between parties, also distinguishing between government and opposition. Finally, we investigated the role played by specific individuals at a local level; in particular, whether they act as proponents who gather consensus, or as signers. Our results provide a quantitative background to current theories in political science. From a methodological point of view, our network approach has proven able to highlight both local and global features of a complex social system. Comment: 15 pages, 10 figures.

    When do improved covariance matrix estimators enhance portfolio optimization? An empirical comparative study of nine estimators

    The use of improved covariance matrix estimators as an alternative to the sample estimator is considered an important approach for enhancing portfolio optimization. Here we empirically compare the performance of nine improved covariance estimation procedures by using daily returns of 90 highly capitalized US stocks for the period 1997-2007. We find that the usefulness of covariance matrix estimators strongly depends on the ratio between estimation period T and number of stocks N, on the presence or absence of short selling, and on the performance metric considered. When short selling is allowed, several estimation methods achieve a realized risk that is significantly smaller than the one obtained with the sample covariance method. This is particularly true when T/N is close to one. Moreover, many estimators reduce the fraction of negative portfolio weights, while little improvement is achieved in the degree of diversification. On the contrary, when short selling is not allowed and T>N, the considered methods are unable to outperform the sample covariance in terms of realized risk, but can give much more diversified portfolios than the one obtained with the sample covariance. When T<N, the use of the sample covariance matrix and of the pseudoinverse gives portfolios with very poor performance. Comment: 30 pages.
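    The flavour of an "improved" estimator can be sketched with one common family, linear shrinkage of the sample covariance toward a diagonal target (in the spirit of Ledoit-Wolf). This is a hedged two-asset illustration, not necessarily one of the nine procedures compared in the paper; the covariance numbers and the shrinkage intensity are hypothetical.

```python
# Sketch: shrink a 2x2 sample covariance toward its diagonal, then compute
# global-minimum-variance weights w proportional to inv(Sigma) * 1.
# Not one of the paper's nine estimators verbatim; numbers are hypothetical.

def shrink(S, delta):
    """Linear shrinkage of covariance S toward its diagonal, intensity delta."""
    target = [[S[0][0], 0.0], [0.0, S[1][1]]]
    return [[(1 - delta) * S[i][j] + delta * target[i][j] for j in range(2)]
            for i in range(2)]

def min_var_weights(S):
    """Global minimum-variance weights for two assets: w ~ inv(S) * 1."""
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    inv = [[S[1][1] / det, -S[0][1] / det],
           [-S[1][0] / det, S[0][0] / det]]
    raw = [inv[0][0] + inv[0][1], inv[1][0] + inv[1][1]]
    total = raw[0] + raw[1]
    return [r / total for r in raw]

S = [[0.04, 0.018], [0.018, 0.09]]        # hypothetical sample covariance
w_sample = min_var_weights(S)             # weights from raw sample estimate
w_shrunk = min_var_weights(shrink(S, 0.5))  # weights from shrunk estimate
```

Shrinking the off-diagonal entries pulls the optimizer away from extreme positions, which is one mechanism behind the reduction of negative weights reported above.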

    Designing and pricing guarantee options in defined contribution pension plans

    The shift from defined benefit (DB) to defined contribution (DC) is pervasive among pension funds, due to demographic changes and macroeconomic pressures. In DB all risks are borne by the provider, while in plain vanilla DC all risks are borne by the beneficiary. However, for DC to provide income security, some kind of guarantee is required. A minimum guarantee clause can be modeled as a put option written on some underlying reference portfolio, and we develop a discrete model that selects the reference portfolio to minimise the cost of a guarantee. While the relation between DB and DC is typically viewed as a binary one, the model shows how to price a wide range of guarantees, creating a continuum between DB and DC. Integrating guarantee pricing with the asset allocation decision is useful to both pension fund managers and regulators. The former are given a yardstick to assess whether a given asset portfolio is fit for purpose; the latter can assess differences of specific reference funds with respect to the optimal one, signalling possible cases of moral hazard. We develop the model and report numerical results to illustrate its uses.
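    The key identity above, that a minimum guarantee is a put option on the reference portfolio, can be illustrated with the continuous-time Black-Scholes put price, even though the paper itself develops a discrete model. All parameter values below are hypothetical.

```python
# Hedged illustration: pricing the guarantee "you get back at least K at
# retirement" as a European put on the reference portfolio, via Black-Scholes.
# The paper uses a discrete model; this is the standard closed-form analogue.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put(S0, K, r, sigma, T):
    """Black-Scholes price of a European put = cost of guaranteeing K at T."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r * T) * norm_cdf(-d2) - S0 * norm_cdf(-d1)

# Hypothetical: cost of guaranteeing the contributed capital (K = S0 = 100)
# after 10 years, for a low-risk and a high-risk reference portfolio.
cost_low_vol = bs_put(100.0, 100.0, 0.02, 0.10, 10.0)
cost_high_vol = bs_put(100.0, 100.0, 0.02, 0.25, 10.0)
```

The comparison makes the paper's design problem concrete: a riskier reference portfolio makes the same guarantee more expensive, so the choice of reference portfolio and the cost of the guarantee cannot be separated.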

    Gene-based and semantic structure of the Gene Ontology as a complex network

    The last decade has seen the advent and consolidation of ontology-based tools for the identification and biological interpretation of classes of genes, such as the Gene Ontology. The Gene Ontology (GO) is constantly evolving over time. The information accumulated over time and included in the GO is encoded in the definition of terms and in the setting up of semantic relations amongst terms. Here we investigate the Gene Ontology from a complex network perspective. We consider the semantic network of terms naturally associated with the semantic relationships provided by the Gene Ontology consortium. Moreover, the GO is a natural example of a bipartite network of terms and genes. Here we are interested in studying the properties of the projected network of terms, i.e. a gene-based weighted network of GO terms, in which a link between any two terms is set if at least one gene is annotated in both terms. One aim of the present paper is to compare the structural properties of the semantic and the gene-based network. The relative importance of terms is very similar in the two networks, but the community structure changes. We show that in some cases GO terms that appear to be distinct from a semantic point of view are instead connected, and appear in the same community when considering their gene content. The identification of such gene-based communities of terms might therefore be the basis of a simple protocol aiming at improving the semantic structure of GO. Information about terms that share large gene content might also be important from a biomedical point of view, as it might reveal how genes over-expressed in a certain term also affect other biological processes, molecular functions and cellular components not directly linked according to GO semantics.
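    The projection step described above is mechanically simple: from term-to-gene annotations, link two terms whenever their gene sets overlap, with the overlap size as the link weight. A minimal sketch on hypothetical toy annotations (the term IDs and gene names are placeholders, not real GO data):

```python
# Sketch of the gene-based projection of the bipartite term-gene network:
# two GO terms are linked if they share at least one annotated gene, and the
# link weight is the number of shared genes. Annotation data is hypothetical.
from itertools import combinations

annotations = {                      # hypothetical term -> gene-set map
    "GO:A": {"g1", "g2", "g3"},
    "GO:B": {"g2", "g3"},
    "GO:C": {"g4"},
}

edges = {}
for t1, t2 in combinations(sorted(annotations), 2):
    shared = annotations[t1] & annotations[t2]
    if shared:                       # link only when gene sets overlap
        edges[(t1, t2)] = len(shared)
```

Here "GO:A" and "GO:B" end up connected with weight 2, while "GO:C" stays isolated; on real GO data the same construction yields the weighted term network whose communities are compared with the semantic ones.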

    Kullback-Leibler distance as a measure of information filtered from multivariate data

    We show that the Kullback-Leibler distance is a good measure of the statistical uncertainty of correlation matrices estimated by using a finite set of data. For correlation matrices of multivariate Gaussian variables we analytically determine the expected values of the Kullback-Leibler distance of a sample correlation matrix from a reference model, and we show that the expected values are known even when the specific model is unknown. We propose to make use of the Kullback-Leibler distance to estimate the information extracted from a correlation matrix by correlation filtering procedures. We also show how to use this distance to measure the stability of filtering procedures with respect to statistical uncertainty. We demonstrate the effectiveness of our method by comparing four filtering procedures, two of them based on spectral analysis and the other two on hierarchical clustering. We compare these techniques as applied both to simulations of factor models and to empirical data. We investigate the ability of these filtering procedures to recover the correlation matrix of models from simulations. We discuss this ability in terms of both the heterogeneity of model parameters and the length of data series. We also show that the two spectral techniques are typically more informative about the sample correlation matrix than techniques based on hierarchical clustering, whereas the latter are more stable with respect to statistical uncertainty.
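    For zero-mean Gaussian models, the Kullback-Leibler distance between a model with correlation matrix C1 and one with correlation matrix C2 has the closed form K(C1, C2) = (1/2)[ln(det C2 / det C1) + tr(C2^{-1} C1) - n]. A minimal two-variable sketch, with hypothetical correlation values:

```python
# Sketch of the KL distance between two zero-mean bivariate Gaussian models,
# e.g. a sample correlation matrix C1 and a filtered one C2:
#   K(C1, C2) = 0.5 * [ln(det C2 / det C1) + tr(inv(C2) @ C1) - n],  n = 2.
from math import log

def kl_gaussian_2x2(C1, C2):
    det1 = C1[0][0] * C1[1][1] - C1[0][1] * C1[1][0]
    det2 = C2[0][0] * C2[1][1] - C2[0][1] * C2[1][0]
    inv2 = [[ C2[1][1] / det2, -C2[0][1] / det2],
            [-C2[1][0] / det2,  C2[0][0] / det2]]
    # trace of inv(C2) @ C1
    tr = sum(inv2[i][k] * C1[k][i] for i in range(2) for k in range(2))
    return 0.5 * (log(det2 / det1) + tr - 2)

C_sample = [[1.0, 0.6], [0.6, 1.0]]    # hypothetical sample correlation
C_filtered = [[1.0, 0.5], [0.5, 1.0]]  # hypothetical filtered correlation
d = kl_gaussian_2x2(C_sample, C_filtered)
```

The distance is zero when the filtered matrix equals the sample matrix and grows as filtering discards more of the measured correlation, which is exactly the quantity the paper uses to compare filtering procedures.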

    Ranking coherence in Topic Models using Statistically Validated Networks

    Probabilistic topic models have become one of the most widespread machine learning techniques in textual analysis. Topic discovery is an unsupervised process that does not guarantee the interpretability of its output. Hence, the automatic evaluation of topic coherence has attracted the interest of many researchers over the last decade, and it is an open research area. The present article offers a new quality evaluation method based on Statistically Validated Networks (SVNs). The proposed probabilistic approach consists of representing each topic as a weighted network of its most probable words. The presence of a link between each pair of words is assessed by statistically validating their co-occurrence in sentences against the null hypothesis of random co-occurrence. The proposed method allows one to distinguish between high-quality and low-quality topics by making use of a battery of statistical tests. The statistically significant pairwise associations of words represented by the links in the SVN might reasonably be expected to be strictly related to the semantic coherence and interpretability of a topic. Therefore, the more connected the network, the more coherent the topic in question. We demonstrate the effectiveness of the method through an analysis of a real text corpus, which shows that the proposed measure is more correlated with human judgement than the state-of-the-art coherence measures.
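    The validation step described above can be sketched as a one-sided hypergeometric test: under the null of random co-occurrence, the number of sentences containing both words follows a hypergeometric distribution, and a link is kept only if its p-value survives a multiple-testing correction (a Bonferroni correction is used here for illustration). All counts below are hypothetical.

```python
# Hedged sketch of validating one word-pair link in an SVN. With N sentences,
# word a in n_a of them and word b in n_b, the co-occurrence count under the
# null is Hypergeometric(N, n_a, n_b); we compute the right tail P(X >= n_ab).
from math import comb

def cooccurrence_pvalue(N, n_a, n_b, n_ab):
    """One-sided p-value P(X >= n_ab) for X ~ Hypergeometric(N, n_a, n_b)."""
    return sum(comb(n_a, k) * comb(N - n_a, n_b - k)
               for k in range(n_ab, min(n_a, n_b) + 1)) / comb(N, n_b)

# Hypothetical counts: 1000 sentences, words in 50 and 40 of them,
# observed together in 12 (expected under the null: 50*40/1000 = 2).
N, n_a, n_b, n_ab = 1000, 50, 40, 12
n_tests = 300                        # hypothetical number of word pairs tested
p = cooccurrence_pvalue(N, n_a, n_b, n_ab)
validated = p < 0.01 / n_tests       # Bonferroni-corrected threshold
```

Repeating this test over all pairs of a topic's most probable words and keeping only the validated links yields the network whose connectedness is used as the coherence score.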

    Insurance Fraud Detection: A Statistically-Validated Network Approach

    Fraud is a social phenomenon, and fraudsters often collaborate with other fraudsters, taking on different roles. The challenge for insurance companies is to implement claim assessment and improve fraud detection accuracy. We developed an investigative system based on bipartite networks, highlighting the relationships between subjects and accidents or vehicles and accidents. We formalize filtering rules through probability models, test specific methods to assess the existence of communities in extensive networks, and propose new alert metrics for suspicious structures. We apply the methodology to a real database, the Italian Antifraud Integrated Archive, and compare the results to out-of-sample fraud scams under investigation by the judicial authorities.