
    Power sum polynomials and the ghosts behind them

    The power sum polynomial associated to a multi-subset of the projective plane PG(2, q) is the sum of the (q-1)-th powers of the Rédei factors of the points in the multi-subset. The classification of multi-subsets having the same power sum polynomial passes through the determination of those multi-subsets associated to the zero polynomial, called ghosts. In this paper we provide new classes of ghosts and compute the dimension of the ghost subspace by exploiting the linear code generated by the lines of PG(2, q) and its dual. Moreover, we explicitly enumerate and classify ghosts for planes of order 2, 3, and 4.
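
    For concreteness, a minimal rendering of the definition above, using the standard convention that a point P = (a_P : b_P : c_P) of PG(2, q) has Rédei factor a_P X + b_P Y + c_P Z (the multiplicity notation m_P is ours):

```latex
% Rédei factor of a point P = (a_P : b_P : c_P) of PG(2, q):
%   R_P(X, Y, Z) = a_P X + b_P Y + c_P Z
% Power sum polynomial of a multi-subset S, with m_P the multiplicity of P in S:
\[
  G_S(X, Y, Z) \;=\; \sum_{P \in S} m_P \, R_P(X, Y, Z)^{\,q-1}
\]
% S is a ghost exactly when G_S is the zero polynomial.
```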

    Euler angles for G2

    We provide a simple parametrization for the group G2, which is analogous to the Euler parametrization for SU(2). We show how to obtain the general element of the group in a form emphasizing the structure of the fibration of G2 with fiber SO(4) and base H, the variety of quaternionic subalgebras of the octonions. In particular, this allows us to obtain a simple expression for the Haar measure on G2. Moreover, as a by-product it yields a concrete realization of, and an Einstein metric for, H. Comment: 21 pages, 2 figures, some misprints corrected.
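
    For reference, the SU(2) construction the abstract appeals to is the classical Euler-angle decomposition (a standard fact, stated here with Pauli matrices):

```latex
% Classical Euler parametrization of SU(2), the model the paper generalizes,
% written with Pauli matrices \sigma_2, \sigma_3:
\[
  U(\phi, \theta, \psi) \;=\; e^{i\phi\sigma_3/2}\, e^{i\theta\sigma_2/2}\, e^{i\psi\sigma_3/2},
\]
% with ranges \phi \in [0, 2\pi), \theta \in [0, \pi], \psi \in [0, 4\pi),
% and normalized Haar measure
\[
  d\mu(U) \;=\; \frac{1}{16\pi^2} \sin\theta \; d\phi \, d\theta \, d\psi .
\]
```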

    Approximating Clustering of Fingerprint Vectors with Missing Values

    The problem of clustering fingerprint vectors is an interesting problem in Computational Biology that was proposed in (Figueroa et al. 2004). In this paper we narrow the gaps between the known lower and upper bounds on the approximability of some variants of the biological problem. Namely, we prove that the problem is APX-hard even when each fingerprint contains only two unknown positions. Moreover, we study some variants of the original problem, and we give two 2-approximation algorithms for the IECMV and OECMV problems when the number of unknown entries in each vector is at most a constant. Comment: 13 pages, 4 figures.
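
    To fix ideas, a minimal sketch of the underlying combinatorial object (an illustration of the compatibility relation only, not the paper's approximation algorithms):

```python
# A fingerprint vector has entries in {0, 1, None}, where None marks a
# missing value. Two fingerprints are compatible (i.e., may be resolved to
# the same vector and so share a cluster) iff they agree on every known
# position.

def compatible(u, v):
    """Return True if fingerprints u and v agree on all known positions."""
    return all(a is None or b is None or a == b for a, b in zip(u, v))

# Example in the regime covered by the APX-hardness result: at most two
# unknown positions per vector.
u = (1, 0, None, 1)
v = (1, None, 0, 1)
w = (0, 0, 0, 1)
assert compatible(u, v)      # every known position agrees
assert not compatible(u, w)  # they disagree on position 0
```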

    A methodological framework for cloud resource provisioning and scheduling of data parallel applications under uncertainty

    Data parallel applications are being extensively deployed in cloud environments because of the possibility of dynamically provisioning storage and computation resources. To identify cost-effective solutions that satisfy the desired service levels, resource provisioning and scheduling play a critical role. Nevertheless, the unpredictable behavior of cloud performance makes the estimation of the resources actually needed quite complex. In this paper we propose a provisioning and scheduling framework that explicitly tackles uncertainties and performance variability of the cloud infrastructure and of the workload. This framework allows cloud users to estimate in advance, i.e., prior to the actual execution of the applications, the resource settings that cope with uncertainty. We formulate an optimization problem where the characteristics not perfectly known or affected by uncertain phenomena are represented as random variables modeled by the corresponding probability distributions. Provisioning and scheduling decisions, while optimizing various metrics such as monetary leasing costs of cloud resources and application execution time, take full account of the uncertainties encountered in cloud environments. To test our framework, we consider data parallel applications characterized by a deadline constraint and we investigate the impact of their characteristics and of the variability of the cloud infrastructure. The experiments show that the resource provisioning and scheduling plans identified by our approach cope well with uncertainties and ensure that the application deadline is satisfied.
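
    As a toy illustration of the general idea (not the paper's optimization model: the lognormal task-time distribution, the greedy list scheduler, and all parameters below are assumptions), one can estimate by Monte Carlo sampling whether a candidate provisioning plan meets a deadline with high probability:

```python
# Treat per-task running times as random variables and estimate, by
# sampling, the probability that a plan with a given number of VMs
# finishes all tasks before the deadline.
import random

def deadline_satisfaction(n_vms, n_tasks, deadline, samples=10_000):
    """Estimate P(makespan <= deadline) for n_tasks spread over n_vms,
    with task times drawn from an assumed lognormal variability model."""
    hits = 0
    for _ in range(samples):
        loads = [0.0] * n_vms
        for _ in range(n_tasks):
            t = random.lognormvariate(mu=1.0, sigma=0.4)  # assumed model
            loads[loads.index(min(loads))] += t  # greedy list scheduling
        hits += max(loads) <= deadline
    return hits / samples

# Sweep plans and keep the cheapest one meeting the deadline with prob >= 0.95.
for n_vms in range(2, 12):
    p = deadline_satisfaction(n_vms, n_tasks=40, deadline=30.0)
    if p >= 0.95:
        print(f"{n_vms} VMs suffice: P(meet deadline) ~ {p:.3f}")
        break
```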

    Parameterized Complexity of the k-anonymity Problem

    The problem of publishing personal data without giving up privacy is becoming increasingly important. An interesting formalization that has been recently proposed is k-anonymity. This approach requires that the rows of a table are partitioned into clusters of size at least k and that all the rows in a cluster become the same tuple after the suppression of some entries. The natural optimization problem, where the goal is to minimize the number of suppressed entries, is known to be APX-hard even when the record values are over a binary alphabet and k = 3, and when the records have length at most 8 and k = 4. In this paper we study how the complexity of the problem is influenced by different parameters, first showing that the problem is W[1]-hard when parameterized by the size of the solution (and the value k). Then we exhibit a fixed-parameter algorithm when the problem is parameterized by the size of the alphabet and the number of columns. Finally, we investigate the computational (and approximation) complexity of the k-anonymity problem when the instance is restricted to records of length at most 3 and k = 3, and we show that this restriction is APX-hard. Comment: 22 pages, 2 figures.
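
    For intuition, a minimal sketch of the cost model described above (an illustration of the objective, not an algorithm from the paper): rows grouped in one cluster must become identical after suppression, so every column on which the cluster disagrees is suppressed in all of its rows.

```python
# Suppression cost of a cluster in the k-anonymity optimization problem:
# each column where the rows disagree must be suppressed in every row.

def suppression_cost(cluster):
    """Number of suppressed entries needed to make all rows identical."""
    disagreeing = sum(1 for col in zip(*cluster) if len(set(col)) > 1)
    return disagreeing * len(cluster)

# Example over a binary alphabet with k = 3 (the APX-hard regime):
cluster = [(0, 1, 0, 1),
           (0, 1, 1, 1),
           (0, 0, 1, 1)]
print(suppression_cost(cluster))  # columns 1 and 2 differ -> 2 * 3 = 6 entries
```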

    Some like it Hoax: Automated fake news detection in social networks

    In recent years, the reliability of information on the Internet has emerged as a crucial issue of modern society. Social network sites (SNSs) have revolutionized the way in which information is spread by allowing users to freely share content. As a consequence, SNSs are also increasingly used as vectors for the diffusion of misinformation and hoaxes. The amount of disseminated information and the rapidity of its diffusion make it practically impossible to assess reliability in a timely manner, highlighting the need for automatic online hoax detection systems. As a contribution towards this objective, we show that Facebook posts can be classified with high accuracy as hoaxes or non-hoaxes on the basis of the users who "liked" them. We present two classification techniques, one based on logistic regression, the other on a novel adaptation of boolean crowdsourcing algorithms. On a dataset consisting of 15,500 Facebook posts and 909,236 users, we obtain classification accuracies exceeding 99% even when the training set contains less than 1% of the posts. We further show that our techniques are robust: they work even when we restrict our attention to the users who like both hoax and non-hoax posts. These results suggest that mapping the diffusion pattern of information can be a useful component of automatic hoax detection systems.
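
    A hedged sketch of the logistic-regression variant (the crowdsourcing technique is not shown; the data below is synthetic and all parameters are assumptions): each post is encoded as a sparse binary vector over users, with entry 1 when that user liked the post.

```python
# Classify posts as hoax / non-hoax from their "like" footprint.
# Synthetic stand-in data; the paper's dataset has 15,500 posts and
# 909,236 users, and trains on as little as 1% of the posts.
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_posts, n_users = 200, 1000
X = csr_matrix(rng.random((n_posts, n_users)) < 0.05, dtype=np.float64)
y = rng.integers(0, 2, n_posts)  # 1 = hoax, 0 = non-hoax (synthetic labels)

# Train on a small fraction of the posts, mimicking the low-label regime.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.1, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```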

    Triplet-based similarity score for fully multilabeled trees with poly-occurring labels

    Motivation: The latest advances in cancer sequencing, and the availability of a wide range of methods to infer the evolutionary history of tumors, have made it important to evaluate, reconcile and cluster different tumor phylogenies. Recently, several notions of distance or similarity have been proposed in the literature, but none of them has emerged as the gold standard. Moreover, none of the known similarity measures is able to manage mutations occurring multiple times in the tree, a circumstance that often occurs in real cases. Results: To overcome these limitations, in this article we propose MP3, the first similarity measure for tumor phylogenies able to effectively manage cases where multiple mutations can occur at the same time and mutations can occur multiple times. Moreover, a comparison of MP3 with other measures shows that it correctly classifies similar and dissimilar trees, on both simulated and real data.
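
    As a simplified illustration of the triplet idea (not MP3 itself, which additionally handles nodes carrying several labels and labels occurring several times): for each triplet of labels, record which pair joins first below the third, and score two trees by the fraction of triplets on which they agree.

```python
# Triplet-based similarity between two single-labeled trees, given as
# child -> parent maps (root maps to None).
from itertools import combinations

def ancestors(parent, v):
    chain = [v]
    while parent[v] is not None:
        v = parent[v]
        chain.append(v)
    return chain

def lca(parent, u, v):
    au, av = ancestors(parent, u), set(ancestors(parent, v))
    return next(x for x in au if x in av)  # first common ancestor

def depth(parent, v):
    return len(ancestors(parent, v)) - 1

def triplet_topology(parent, a, b, c):
    """Return the pair whose LCA is deepest, or None for a star triplet."""
    pairs = {(a, b): lca(parent, a, b),
             (a, c): lca(parent, a, c),
             (b, c): lca(parent, b, c)}
    best = max(pairs, key=lambda p: depth(parent, pairs[p]))
    depths = sorted((depth(parent, x) for x in pairs.values()), reverse=True)
    return best if depths[0] > depths[1] else None

def triplet_similarity(parent1, parent2, labels):
    """Fraction of label triplets with the same topology in both trees."""
    agree = total = 0
    for a, b, c in combinations(labels, 3):
        agree += triplet_topology(parent1, a, b, c) == triplet_topology(parent2, a, b, c)
        total += 1
    return agree / total

t1 = {"r": None, "a": "r", "x": "r", "b": "x", "c": "x"}  # (b, c) join first
t2 = {"r": None, "b": "r", "x": "r", "a": "x", "c": "x"}  # (a, c) join first
print(triplet_similarity(t1, t2, ["a", "b", "c"]))  # 0.0: the trees disagree
```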