    Bootstrapping Free-Space Optical Networks

    We consider a challenging problem in establishing a Free Space Optical (FSO) network. In our model, each node is a base station with a limited number of transceivers. Such a network can be abstracted as a graph in which each node represents a base station and each edge represents a link connecting two base stations. The problem is to form a connected topology, which is NP-complete because of the transceiver limitation. What makes this problem even more challenging is the need for a "distributed" solution, because a node has knowledge only of its neighbors. We have developed a fully distributed approximation algorithm that constructs a spanning tree whose maximum node degree is at most one larger than that of the optimal solution. Due to its distributed nature, this algorithm outperforms serial algorithms.
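    The degree constraint above (each base station has a limited number of transceivers) can be illustrated with a small sketch. This is a minimal *centralized* greedy heuristic written for illustration only: it is not the paper's distributed algorithm and does not carry the "at most one above optimal" degree guarantee, but it shows how tree construction can favor nodes with spare transceivers.

    ```python
    def low_degree_spanning_tree(n, edges):
        """Grow a spanning tree from node 0, always attaching a new node
        via the tree endpoint that currently uses the fewest transceivers
        (i.e., has the smallest tree degree). Illustrative heuristic only."""
        adj = {u: set() for u in range(n)}
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)

        degree = [0] * n          # transceivers used so far per node
        in_tree = {0}
        tree = []
        while len(in_tree) < n:
            # among edges leaving the tree, pick one whose tree endpoint
            # has the fewest transceivers already in use
            best = None
            for u in in_tree:
                for v in adj[u]:
                    if v not in in_tree:
                        if best is None or degree[u] < degree[best[0]]:
                            best = (u, v)
            if best is None:
                raise ValueError("graph is disconnected")
            u, v = best
            tree.append((u, v))
            degree[u] += 1
            degree[v] += 1
            in_tree.add(v)
        return tree
    ```

    On a complete graph, for example, this heuristic spreads tree edges across nodes instead of turning one station into a high-degree hub.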

    Demonstration of Einstein-Podolsky-Rosen Steering Using Hybrid Continuous- and Discrete-Variable Entanglement of Light

    Einstein-Podolsky-Rosen steering is known to be a key resource for one-sided device-independent quantum information protocols. Here we demonstrate steering using hybrid entanglement between continuous- and discrete-variable optical qubits. To this end, we report on suitable steering inequalities and detail the implementation and requirements for this demonstration. Steering is experimentally certified by observing a violation by more than 5 standard deviations. Our results illustrate the potential of optical hybrid entanglement for applications in heterogeneous quantum networks that would interconnect disparate physical platforms and encodings.

    Pruning training sets for learning of object categories

    Training datasets for learning of object categories are often contaminated or imperfect. We explore an approach that automatically identifies examples that are noisy or troublesome for learning and excludes them from the training set. The problem is relevant to learning in semi-supervised or unsupervised settings, as well as to learning when the training data is contaminated with wrongly labeled examples or when correctly labeled but hard-to-learn examples are present. We propose a fully automatic mechanism for noise cleaning, called 'data pruning', and demonstrate its success on learning of human faces. We do not assume that the data or the noise can be modeled or that additional training examples are available. Our experiments show that data pruning can improve generalization performance for algorithms with varying robustness to noise. It outperforms methods with regularization properties and is superior to commonly applied aggregation methods, such as bagging.
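    The pruning idea can be sketched in a few lines. The abstract does not specify the mechanism, so the leave-one-out nearest-centroid rule below is purely an assumed stand-in: each example is classified by a model fit on the remaining data, and examples the rest of the data misclassifies are treated as noisy and dropped.

    ```python
    def nearest_centroid_label(x, points, labels):
        """Assign x to the class whose centroid (here: 1-D mean) is closest."""
        sums, counts = {}, {}
        for p, l in zip(points, labels):
            sums[l] = sums.get(l, 0.0) + p
            counts[l] = counts.get(l, 0) + 1
        return min(sums, key=lambda l: abs(x - sums[l] / counts[l]))

    def data_prune(points, labels):
        """Leave-one-out pruning sketch: keep the indices of examples that
        the rest of the training set classifies correctly; drop the rest
        as likely label noise. Illustration only, not the paper's method."""
        keep = []
        for i, (x, y) in enumerate(zip(points, labels)):
            rest_p = points[:i] + points[i + 1:]
            rest_l = labels[:i] + labels[i + 1:]
            if nearest_centroid_label(x, rest_p, rest_l) == y:
                keep.append(i)
        return keep
    ```

    For example, a point near one cluster but carrying the other cluster's label is flagged and removed, while cleanly labeled examples are kept.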