12,369 research outputs found

    LOBBYISTS, PRIVATE INTERESTS AND THE 1985 FARM BILL

    Agricultural and Food Policy

    Finding Optimal Flows Efficiently

    Among the models of quantum computation, the one-way quantum computer is one of the most promising proposals for physical realization, and it opens new perspectives for parallelization by taking advantage of quantum entanglement. Since a one-way quantum computation is based on quantum measurement, which is a fundamentally nondeterministic evolution, a sufficient condition for global determinism has been introduced: the existence of a causal flow in the graph that underlies the computation. An O(n^3) algorithm was known for finding such a causal flow when the numbers of input and output vertices in the graph are equal; otherwise, no polynomial-time algorithm was known for deciding whether a graph has a causal flow. Our main contribution is an O(n^2) algorithm that finds a causal flow, if one exists, regardless of the numbers of input and output vertices. This answers the open question stated by Danos and Kashefi and by de Beaudrap. Moreover, we prove that our algorithm produces an optimal flow (a flow of minimal depth). Whereas the existence of a causal flow is a sufficient condition for determinism, it is not a necessary one. A weaker version of the causal flow, called gflow (generalized flow), has been introduced and proved to be a necessary and sufficient condition for a family of deterministic computations; moreover, the depth of the quantum computation is upper-bounded by the depth of the gflow. However, the existence of a polynomial-time algorithm that finds a gflow had been stated as an open question. In this paper we answer it positively, with a polynomial-time algorithm that outputs an optimal gflow of a given graph and thus finds an optimal correction strategy for the nondeterministic evolution due to measurements.
    Comment: 10 pages, 3 figures
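
    The abstract's layered construction can be made concrete. Below is a minimal Python sketch of the simple round-by-round causal-flow search that the paper's O(n^2) algorithm refines: in each round, any candidate corrector with exactly one unprocessed neighbour is forced to correct that neighbour. The function name, the adjacency-dict graph representation, and the depth convention (layers counted from the outputs) are illustrative assumptions, not the paper's notation, and this naive version does not achieve the stated O(n^2) bound.

    ```python
    def find_causal_flow(adj, inputs, outputs):
        """Search for a causal flow on the open graph (adj, inputs, outputs).

        adj: dict mapping each vertex to the set of its neighbours.
        Returns (f, depth) with f[u] the vertex correcting u and depth[u]
        u's layer (0 at the outputs), or None if no causal flow exists.
        """
        vertices = set(adj)
        processed = set(outputs)                  # vertices already corrected
        candidates = set(outputs) - set(inputs)   # vertices free to act as correctors
        f, depth = {}, {v: 0 for v in outputs}
        k = 1
        while processed != vertices:
            corrected = set()
            for w in list(candidates):
                unprocessed = [u for u in adj[w] if u not in processed]
                if len(unprocessed) == 1:         # w's role is forced: f(u) = w
                    u = unprocessed[0]
                    f[u], depth[u] = w, k
                    corrected.add(u)
            if not corrected:                     # no progress: no causal flow
                return None
            processed |= corrected
            used = {f[u] for u in corrected}      # correctors consumed this round
            candidates = (candidates - used) | (corrected - set(inputs))
            k += 1
        return f, depth

    # Path graph 1-2-3 with input {1} and output {3}: f(1) = 2, f(2) = 3.
    print(find_causal_flow({1: {2}, 2: {1, 3}, 3: {2}}, {1}, {3}))
    ```

    Each round corresponds to one layer of measurements that can be performed in parallel, so the number of rounds is exactly the depth that the paper's optimality result minimizes.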

    Constrained Optimization for a Subset of the Gaussian Parsimonious Clustering Models

    The expectation-maximization (EM) algorithm is an iterative method for finding maximum likelihood estimates when data are incomplete or are treated as incomplete. The EM algorithm and its variants are commonly used for parameter estimation in applications of mixture models to clustering and classification, despite the fact that even the Gaussian mixture model likelihood surface contains many local maxima and is riddled with singularities. Previous work has focused on circumventing this problem by constraining the smallest eigenvalue of the component covariance matrices. In this paper, we consider constraining the smallest eigenvalue, the largest eigenvalue, and both together, within the family setting. Specifically, a subset of the Gaussian parsimonious clustering models (GPCM) family is considered for model-based clustering, where we use a re-parameterized version of the well-known eigenvalue decomposition of the component covariance matrices. Our approach is illustrated using various experiments with simulated and real data.
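
    As a concrete illustration of the eigenvalue constraints described above, the sketch below clamps the spectrum of a component covariance estimate between a lower and an upper bound, as one might do after an M-step update. The function name, the clamp-via-eigendecomposition approach, and the example bounds are assumptions for illustration; they stand in for, and are not, the paper's re-parameterized GPCM procedure.

    ```python
    import numpy as np

    def constrain_eigenvalues(sigma, lam_min=None, lam_max=None):
        """Clamp the eigenvalues of a symmetric covariance estimate to
        [lam_min, lam_max], keeping the eigenvectors unchanged."""
        vals, vecs = np.linalg.eigh(sigma)    # ascending eigenvalues, orthonormal columns
        if lam_min is not None:
            vals = np.maximum(vals, lam_min)  # smallest-eigenvalue constraint
        if lam_max is not None:
            vals = np.minimum(vals, lam_max)  # largest-eigenvalue constraint
        return (vecs * vals) @ vecs.T         # rebuild V diag(vals) V^T

    # Hypothetical use inside an EM M-step, after the weighted covariance update:
    rng = np.random.default_rng(0)
    x = rng.standard_normal((50, 3))
    sigma_hat = np.cov(x, rowvar=False)
    sigma_hat = constrain_eigenvalues(sigma_hat, lam_min=1e-3, lam_max=10.0)
    ```

    Bounding the smallest eigenvalue away from zero is what removes the likelihood singularities mentioned in the abstract, while the upper bound additionally discourages spurious local maxima with one highly inflated component.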