
    Network Lasso: Clustering and Optimization in Large Graphs

    Full text link
    Convex optimization is an essential tool for modern data analysis, as it provides a framework to formulate and solve many problems in machine learning and data mining. However, general convex optimization solvers do not scale well, and scalable solvers are often specialized to a narrow class of problems. There is therefore a need for simple, scalable algorithms that can solve many common optimization problems. In this paper, we introduce the network lasso, a generalization of the group lasso to a network setting that allows for simultaneous clustering and optimization on graphs. We develop an algorithm based on the Alternating Direction Method of Multipliers (ADMM) to solve this problem in a distributed and scalable manner, with guaranteed global convergence even on large graphs. We also examine a non-convex extension of this approach. We then demonstrate that many types of problems can be expressed in our framework. We focus on three in particular - binary classification, predicting housing prices, and event detection in time series data - comparing the network lasso to baseline approaches and showing that it is both a fast and accurate method for solving large optimization problems.
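    To make the formulation concrete, below is a minimal sketch of the network lasso objective on a toy graph, solved with cvxpy. The graph, data, and least-squares choice of local loss are invented for illustration, and a generic convex solver stands in for the paper's distributed ADMM.

        import numpy as np
        import cvxpy as cp

        # Network lasso: minimize sum_i f_i(x_i) + lam * sum_{(j,k) in E} ||x_j - x_k||_2
        # Toy instance: a 3-node path graph with local least-squares losses (illustrative).
        np.random.seed(0)
        d = 2                                     # dimension of each node's variable
        A = [np.random.randn(5, d) for _ in range(3)]
        b = [np.random.randn(5) for _ in range(3)]
        edges = [(0, 1), (1, 2)]
        lam = 1.0

        x = [cp.Variable(d) for _ in range(3)]
        local = sum(cp.sum_squares(A[i] @ x[i] - b[i]) for i in range(3))
        coupling = sum(cp.norm(x[j] - x[k], 2) for j, k in edges)
        cp.Problem(cp.Minimize(local + lam * coupling)).solve()
        print([xi.value for xi in x])

    The sum-of-norms penalty on the edges is what induces clustering: for large enough lam, neighboring variables are driven to exactly equal values, partitioning the graph into regions that share a common model.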

    Feed-forward and its role in conditional linear optical quantum dynamics

    Full text link
    Nonlinear optical quantum gates can be created probabilistically using only single-photon sources, linear optical elements and photon-number-resolving detectors. These gates are heralded but operate with probabilities much less than one. There is currently a large gap between the performance of the known circuits and the established upper bounds on their success probabilities. One possibility for increasing the probability of success of such gates is feed-forward, where one attempts to correct certain failure events that occurred in the gate's operation. In this brief report we examine the role of feed-forward in improving the success probability. In particular, for the nonlinear sign-shift gate, we find that in a three-mode implementation with a single round of feed-forward the optimal average probability of success is approximately p = 0.272. This value is only slightly larger than the general optimal success probability without feed-forward, p = 0.25. Comment: 4 pages, 3 eps figures, typeset using RevTex4; problems with figures resolved
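    For reference, the nonlinear sign-shift gate discussed above flips the sign of the two-photon amplitude of a single mode while leaving the vacuum and one-photon amplitudes untouched (this is the standard definition, not specific to this paper):

        \alpha\,|0\rangle + \beta\,|1\rangle + \gamma\,|2\rangle
        \;\longmapsto\;
        \alpha\,|0\rangle + \beta\,|1\rangle - \gamma\,|2\rangle .

    Implemented with linear optics and heralded postselection, the best known schemes without feed-forward achieve this with probability 1/4, which is the baseline against which the feed-forward value p = 0.272 is compared.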

    Optimal design of nonuniform FIR transmultiplexer using semi-infinite programming

    Get PDF
    This paper considers an optimum nonuniform FIR transmultiplexer design problem subject to specifications in the frequency domain. Our objective is to minimize the sum of the ripple energies of all the individual filters, subject to constraints on amplitude and aliasing distortions, and to the passband and stopband specifications for the individual filters. This optimum nonuniform transmultiplexer design problem can be formulated as a quadratic semi-infinite programming problem. The dual parametrization algorithm is extended to this nonuniform transmultiplexer design problem. If the lengths of the filters are sufficiently long and the set of decimation integers is compatible, then a solution exists. Since the problem is formulated as a convex problem, if a solution exists it is unique, and any local solution is a global minimum.
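    As a rough illustration of this problem class, the sketch below designs a single linear-phase FIR filter by minimizing stopband ripple energy subject to frequency-domain magnitude constraints, with the semi-infinite constraints (one per frequency) relaxed to a dense grid; the paper's dual parametrization method handles the frequency continuum directly. The filter length, band edges, and tolerances are illustrative, not taken from the paper.

        import numpy as np
        import cvxpy as cp

        M = 16                                    # half-length; symmetric filter of length 2M+1
        wp, ws, delta = 0.2 * np.pi, 0.3 * np.pi, 0.05
        grid = np.linspace(0.0, np.pi, 512)       # discretized frequency constraint set
        pass_idx = np.where(grid <= wp)[0]
        stop_idx = np.where(grid >= ws)[0]

        # Zero-phase amplitude response A(w) = g[0] + 2 * sum_n g[n] cos(n w), linear in g.
        C = np.cos(np.outer(grid, np.arange(M + 1)))
        C[:, 1:] *= 2.0
        g = cp.Variable(M + 1)
        A = C @ g

        prob = cp.Problem(
            cp.Minimize(cp.sum_squares(A[stop_idx])),           # stopband ripple energy
            [A[pass_idx] >= 1 - delta, A[pass_idx] <= 1 + delta,
             cp.abs(A[stop_idx]) <= delta],                     # band specifications
        )
        prob.solve()
        print(prob.status, prob.value)

    Because the objective is a convex quadratic and the sampled constraints are linear in the coefficients, the discretized problem inherits the convexity the abstract relies on: any local solution is global.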

    The interferometric baselines and GRAVITY astrometric error budget

    Full text link
    GRAVITY is a new-generation beam-combination instrument for the VLTI. Its goal is to achieve microarcsecond astrometric accuracy between objects separated by a few arcsec. This 10^-6 level of accuracy on astrometric measurements is the most important challenge of the instrument, and a careful error budget has been paramount during its technical design. In this poster, we focus on baseline-induced errors, which are part of a larger error budget. Comment: SPIE Meeting 2014 -- Montreal
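    A minimal worked example of the relation behind a baseline error term: in dual-beam narrow-angle astrometry, the differential optical path difference for two objects separated on sky by ds is dOPD = B * ds, where B is the projected baseline, so an OPD error maps to an astrometric error sigma_s = sigma_OPD / B. The numbers below are illustrative, not taken from the paper's budget.

        import math

        B = 100.0           # projected baseline in metres (VLTI-scale, illustrative)
        sigma_opd = 5e-9    # 5 nm differential OPD error (illustrative)

        sigma_s = sigma_opd / B                        # astrometric error in radians
        RAD_TO_MUAS = 180.0 / math.pi * 3600.0 * 1e6   # radians -> microarcseconds
        print(sigma_s * RAD_TO_MUAS)                   # ~10.3 microarcseconds

    The same ratio explains the 10^-6 figure: microarcsecond accuracy over a few-arcsecond separation is a relative measurement at roughly the one-in-a-million level.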

    Validation of the performance of a GMO multiplex screening assay based on microarray detection

    Get PDF
    A new screening method for the detection and identification of GMOs, based on multiplex PCR followed by microarray detection, has been developed and is presented. The technology is based on the identification of near-ubiquitous GMO genetic target elements, first amplified by PCR, followed by direct hybridisation of the amplicons on a predefined microarray (DualChip® GMO, Eppendorf, Germany). The validation was performed within the framework of a European project (Co-Extra, contract no. 007158) and in collaboration with 12 laboratories specialised in GMO detection. The present study reports the strategy and the results of an ISO-complying validation of the method carried out through an inter-laboratory study. Sets of blind samples were provided, consisting of DNA reference materials covering all the elements detectable by the specific probes present on the array. The GMO concentrations varied from 1% down to 0.045%. In addition, a mixture of two GMO events (0.1% RRS diluted in 100% TOPAS19/2) was incorporated in the study to test the robustness of the assay in extreme conditions. Data were processed according to the ISO 5725 standard. The method was evaluated against predefined performance criteria with respect to the EC CRL method-acceptance criteria. The overall method performance met the acceptance criteria; in particular, the results showed that the method is suitable for the detection of the different target elements at a 0.1% GMO concentration with a 95% accuracy rate. This collaborative trial showed that the method can be considered fit for the purpose of screening with respect to its intra- and inter-laboratory accuracy. The results demonstrated the validity of combining multiplex PCR with array detection as provided by the DualChip® GMO (Eppendorf, Germany) for the screening of GMOs. The results showed that the technology is robust, practical and suitable as a screening tool.
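    The 95% figure above is an accuracy rate: the fraction of correct presence/absence calls over blind samples across laboratories. A minimal sketch of how such a rate is tallied from inter-laboratory results follows; the column names, target labels, and the four example rows are invented for illustration.

        import pandas as pd

        # Hypothetical inter-laboratory results: one row per (lab, target element),
        # recording the expected call for the blind sample and the reported call.
        df = pd.DataFrame({
            "lab":      ["L01", "L01", "L02", "L02"],
            "target":   ["p35S", "tNOS", "p35S", "tNOS"],
            "expected": [True,  False, True,  True],
            "reported": [True,  False, True,  False],
        })

        df["correct"] = df["expected"] == df["reported"]
        print(f"overall accuracy: {df['correct'].mean():.1%}")
        print(df.groupby("target")["correct"].mean())   # per-target accuracy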

    From Linear Optical Quantum Computing to Heisenberg-Limited Interferometry

    Get PDF
    The working principles of linear optical quantum computing are based on photodetection, namely, projective measurements. The use of photodetection can provide efficient nonlinear interactions between photons at the single-photon level, which is technically problematic otherwise. We report an application of such a technique to prepare quantum correlations as an important resource for Heisenberg-limited optical interferometry, where the sensitivity of phase measurements can be improved beyond the usual shot-noise limit. Furthermore, using such nonlinearities, optical quantum nondemolition measurements can now be carried out at the single-photon level. Comment: 10 pages, 5 figures; submitted to a Special Issue of J. Opt. B on "Fluctuations and Noise in Photonics and Quantum Optics" (Herman Haus Memorial Issue); v2: minor changes
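    The improvement referred to above is the gap between the two standard phase-sensitivity limits for an interferometer fed with N photons:

        \Delta\phi_{\mathrm{SNL}} = \frac{1}{\sqrt{N}},
        \qquad
        \Delta\phi_{\mathrm{HL}} = \frac{1}{N} .

    Shot-noise-limited operation corresponds to uncorrelated photons; reaching the Heisenberg limit requires the kind of quantum correlations that the projective photodetection technique is used to prepare.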

    Efficient optical quantum information processing

    Full text link
    Quantum information offers the promise of being able to perform certain communication and computation tasks that cannot be done with conventional information technology (IT). Optical Quantum Information Processing (QIP) holds particular appeal, since it offers the prospect of communicating and computing with the same type of qubit. Linear optical techniques have been shown to be scalable, but the corresponding quantum computing circuits need many auxiliary resources. Here we present an alternative approach to optical QIP, based on the use of weak cross-Kerr nonlinearities and homodyne measurements. We show how this approach provides the fundamental building blocks for highly efficient non-absorbing single-photon-number-resolving detectors, two-qubit parity detectors, Bell-state measurements and, finally, near-deterministic controlled-NOT (CNOT) gates. These are essential QIP devices. Comment: Accepted to the Journal of Optics B special issue on optical quantum computation; references updated
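    The weak cross-Kerr building block works by coupling the signal mode to a coherent probe beam so that each signal photon rotates the probe's phase by a small angle theta; a homodyne measurement of the probe then reads out the photon number without absorbing the signal. In the standard form of the interaction (not specific to this paper):

        |n\rangle_s \, |\alpha\rangle_p \;\longmapsto\; |n\rangle_s \, |\alpha\, e^{i n \theta}\rangle_p .

    Because the probe phase shift n*theta distinguishes photon numbers while leaving the signal state intact, this yields the non-absorbing number-resolving and parity detections listed above.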