
    Bounding Embeddings of VC Classes into Maximum Classes

    One of the earliest conjectures in computational learning theory, the Sample Compression conjecture, asserts that concept classes (equivalently, set systems) admit compression schemes of size linear in their VC dimension. To date this statement is known to be true for maximum classes, those that possess maximum cardinality for their VC dimension. The most promising approach to positively resolving the conjecture is to embed general VC classes into maximum classes without a super-linear increase to their VC dimensions, as such embeddings would extend the known compression schemes to all VC classes. We show that maximum classes can be characterised by a local-connectivity property of the graph obtained by viewing the class as a cubical complex. This geometric characterisation of maximum VC classes is applied to prove a negative embedding result: there exist VC-d classes that cannot be embedded in any maximum class of VC dimension lower than 2d. On the other hand, we show that every VC-d class C embeds in a VC-(d+D) maximum class, where D is the deficiency of C, i.e., the difference between the cardinalities of a maximum VC-d class and of C. For VC-2 classes in binary n-cubes for 4 <= n <= 6, we give best-possible results on embedding into maximum classes. For some special classes of Boolean functions, relationships with maximum classes are investigated. Finally, we give a general recursive procedure for embedding VC-d classes into VC-(d+k) maximum classes for the smallest k.
    Comment: 22 pages, 2 figures
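
    For reference, the "maximum cardinality for their VC dimension" mentioned above is the Sauer-Shelah bound; writing it out makes the deficiency D used in the embedding result explicit:

```latex
% Sauer--Shelah: any class C \subseteq \{0,1\}^n of VC dimension d satisfies
\[
  |C| \;\le\; \Phi_d(n) := \sum_{i=0}^{d} \binom{n}{i},
\]
% a class is maximum when equality holds, and the deficiency of a VC-d class C is
\[
  D = \Phi_d(n) - |C|.
\]
```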

    Manifold Optimization Over the Set of Doubly Stochastic Matrices: A Second-Order Geometry

    Convex optimization is a well-established research area with applications in almost all fields. Over the decades, multiple approaches have been proposed to solve convex programs. The development of interior-point methods allowed solving a more general set of convex programs known as semi-definite programs and second-order cone programs. However, it has been established that these methods are excessively slow in high dimensions, i.e., they suffer from the curse of dimensionality. On the other hand, optimization algorithms on manifolds have shown great ability in finding solutions to nonconvex problems in reasonable time. This paper is interested in solving a subset of convex optimization problems using a different approach. The main idea behind Riemannian optimization is to view the constrained optimization problem as an unconstrained one over a restricted search space. The paper introduces three manifolds to solve convex programs under particular box constraints. The manifolds, called the doubly stochastic, the symmetric, and the definite multinomial manifolds, generalize the simplex, also known as the multinomial manifold. The proposed manifolds and algorithms are well adapted to solving convex programs in which the variable of interest is a multidimensional probability distribution function. Theoretical analysis and simulation results attest to the efficiency of the proposed method over state-of-the-art methods. In particular, they reveal that the proposed framework outperforms conventional generic and specialized solvers, especially in high dimensions.
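
    The abstract does not spell out the optimization scheme; as a rough, hypothetical illustration of first-order optimization over the doubly stochastic set, the Python sketch below takes a Euclidean gradient step and then retracts with Sinkhorn-Knopp normalization (the retraction choice, step size, and function names are assumptions, not the paper's method):

```python
import numpy as np

def sinkhorn(X, n_iter=50):
    """Push a positive matrix toward the doubly stochastic set by
    alternately normalizing rows and columns (Sinkhorn-Knopp)."""
    for _ in range(n_iter):
        X = X / X.sum(axis=1, keepdims=True)  # rows sum to 1
        X = X / X.sum(axis=0, keepdims=True)  # columns sum to 1
    return X

def minimize_over_doubly_stochastic(grad_f, X0, lr=0.1, steps=200):
    """Toy first-order loop: gradient step, then Sinkhorn retraction."""
    X = sinkhorn(np.abs(X0) + 1e-9)
    for _ in range(steps):
        X = X - lr * grad_f(X)
        X = sinkhorn(np.clip(X, 1e-9, None))  # stay positive, renormalize
    return X

# Example: project A onto the doubly stochastic set, i.e. minimize ||X - A||_F^2
A = np.random.rand(5, 5)
X = minimize_over_doubly_stochastic(lambda X: 2 * (X - A), np.random.rand(5, 5))
print(X.sum(axis=1).round(6), X.sum(axis=0).round(6))  # all close to 1
```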

    Bounded Rationality: Static Versus Dynamic Approaches

    Two kinds of theories of boundedly rational behavior are possible. Static theories focus on stationary behavior and do not include any explicit mechanism for temporal change. Dynamic theories, on the other hand, explicitly model the fine-grained adjustments made by subjects in response to their recent experiences. The main contribution of this paper is to argue that the restrictions usually imposed on the distribution of choices in the static approach are generically not supported by a dynamic adjustment mechanism. Genericity here is understood both in the measure-theoretic and in the topological sense.
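
    As a minimal, hypothetical illustration of the contrast (not the paper's model), the sketch below compares a static logit choice distribution over two actions with the long-run choice frequencies of a dynamic adjustment process that learns action values from noisy feedback:

```python
import numpy as np

rng = np.random.default_rng(0)
payoffs = np.array([1.0, 0.8])  # illustrative payoffs of two actions
beta = 5.0                      # choice sensitivity

def logit(v):
    e = np.exp(beta * (v - v.max()))
    return e / e.sum()

# Static theory: a stationary logit (quantal-response) choice distribution
static = logit(payoffs)

# Dynamic theory: fine-grained adjustment of value estimates from
# recent noisy experience, with choices made by logit response
values, counts = np.zeros(2), np.zeros(2)
for t in range(1, 100_001):
    a = rng.choice(2, p=logit(values))
    counts[a] += 1
    feedback = payoffs[a] + 0.1 * rng.standard_normal()
    values[a] += (feedback - values[a]) / t  # decreasing-step averaging

print("static prediction  :", static.round(3))
print("dynamic frequencies:", (counts / counts.sum()).round(3))
```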

    End-to-End Cross-Modality Retrieval with CCA Projections and Pairwise Ranking Loss

    Cross-modality retrieval encompasses retrieval tasks where the fetched items are of a different type than the search query, e.g., retrieving pictures relevant to a given text query. The state-of-the-art approach to cross-modality retrieval relies on learning a joint embedding space of the two modalities, where items from either modality are retrieved using nearest-neighbor search. In this work, we introduce a neural network layer based on Canonical Correlation Analysis (CCA) that learns better embedding spaces by analytically computing projections that maximize correlation. In contrast to previous approaches, the CCA Layer (CCAL) allows us to combine existing objectives for embedding space learning, such as pairwise ranking losses, with the optimal projections of CCA. We show the effectiveness of our approach for cross-modality retrieval in three different scenarios (text-to-image, audio-sheet-music, and zero-shot retrieval), surpassing both Deep CCA and a multi-view network using freely learned projections optimized by a pairwise ranking loss, especially when little training data is available (the code for all three methods is released at: https://github.com/CPJKU/cca_layer).
    Comment: Preliminary version of a paper published in the International Journal of Multimedia Information Retrieval
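
    The layer's equations are not given in the abstract; the sketch below shows the standard closed-form CCA computation that such a layer performs on a mini-batch of paired embeddings (the regularization constant and function names here are assumptions; the authors' actual implementation is in the linked repository):

```python
import numpy as np

def cca_projections(X, Y, k, reg=1e-4):
    """Top-k canonical projections for paired views X (n x dx), Y (n x dy)."""
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = Xc.T @ Xc / (n - 1) + reg * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / (n - 1) + reg * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / (n - 1)

    def inv_sqrt(S):  # S^{-1/2} via eigendecomposition of a symmetric matrix
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    # SVD of the whitened cross-covariance yields the canonical directions
    U, s, Vt = np.linalg.svd(inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy))
    A = inv_sqrt(Sxx) @ U[:, :k]   # projection for modality X
    B = inv_sqrt(Syy) @ Vt[:k].T   # projection for modality Y
    return A, B, s[:k]             # s holds the canonical correlations

# Paired items land nearby in the shared space learned from correlated structure
X, Y = np.random.randn(256, 64), np.random.randn(256, 32)
Y[:, :16] += X[:, :16]
A, B, corr = cca_projections(X, Y, k=8)
print(corr.round(3))
```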

    Tree Parity Machine Rekeying Architectures

    The need to secure the communication between hardware components in embedded systems becomes increasingly important with regard to the secrecy of data and particularly its commercial use. We suggest a low-cost (i.e., small logic-area) solution for flexible security levels and short key lifetimes. The basis is an approach for symmetric key exchange using the synchronisation of Tree Parity Machines. Fast successive key generation enables a key exchange within a few milliseconds, given realistic communication channels with limited bandwidth. As a demonstration, we evaluate the characteristics of a standard-cell ASIC design realised as an IP core in 0.18-micrometer CMOS technology.
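
    A minimal Python sketch of the underlying protocol, assuming the standard Hebbian variant of Tree Parity Machine synchronisation (the parameters K, N, L below are illustrative, not the paper's hardware configuration): two parties update only on agreeing outputs until their weights, which form the shared key, coincide.

```python
import numpy as np

K, N, L = 3, 10, 4              # hidden units, inputs per unit, weight bound
rng = np.random.default_rng(1)

def tpm_output(W, X):
    """Hidden-unit signs and parity output of a Tree Parity Machine."""
    sigma = np.sign((W * X).sum(axis=1))
    sigma[sigma == 0] = -1
    return sigma, int(sigma.prod())

def hebbian_update(W, X, sigma, tau):
    """Update only hidden units agreeing with the output; clip to [-L, L]."""
    for k in range(K):
        if sigma[k] == tau:
            W[k] = np.clip(W[k] + tau * X[k], -L, L)

WA = rng.integers(-L, L + 1, size=(K, N))
WB = rng.integers(-L, L + 1, size=(K, N))
steps = 0
while not np.array_equal(WA, WB):
    X = rng.choice([-1, 1], size=(K, N))  # common public input
    sA, tauA = tpm_output(WA, X)
    sB, tauB = tpm_output(WB, X)
    if tauA == tauB:                      # only the output bits are exchanged
        hebbian_update(WA, X, sA, tauA)
        hebbian_update(WB, X, sB, tauB)
    steps += 1

print(f"synchronised after {steps} exchanges; key prefix: {WA.ravel()[:8]}")
```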

    Multiplicative versus additive noise in multi-state neural networks

    The effects of a variable amount of random dilution of the synaptic couplings in Q-Ising multi-state neural networks with Hebbian learning are examined. A fraction of the couplings is explicitly allowed to be anti-Hebbian. Random dilution represents the dying or pruning of synapses and, hence, a static disruption of the learning process, which can be considered a form of multiplicative noise in the learning rule. Both parallel and sequential updating of the neurons can be treated. Symmetric dilution in the statics of the network is studied using the mean-field theory approach of statistical mechanics. General dilution, including asymmetric pruning of the couplings, is examined using the generating functional (path-integral) approach of disordered systems. It is shown that random dilution acts as additive Gaussian noise in the Hebbian learning rule, with zero mean and a variance depending on the connectivity of the network and on the symmetry. Furthermore, a scaling factor appears that essentially measures the average amount of anti-Hebbian couplings.
    Comment: 15 pages, 5 figures, to appear in the proceedings of the Conference on Noise in Complex Systems and Stochastic Dynamics II (SPIE International)
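
    A toy numerical check of the claim about dilution (the pattern load, connectivity c, and anti-Hebbian fraction a below are illustrative, not the paper's settings): the diluted couplings decompose into a scaled Hebbian matrix plus a zero-mean residual whose variance follows from the mask statistics.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 500, 50        # neurons, stored patterns
c, a = 0.6, 0.1       # connection probability, anti-Hebbian fraction

xi = rng.choice([-1, 1], size=(P, N))
J = xi.T @ xi / N     # Hebbian couplings (off-diagonal entries used below)

# Symmetric dilution mask: +1 Hebbian, -1 anti-Hebbian, 0 absent
iu = np.triu_indices(N, k=1)
r = rng.random(len(iu[0]))
vals = np.where(r < c * (1 - a), 1.0, np.where(r < c, -1.0, 0.0))
mask = np.zeros((N, N))
mask[iu] = vals
mask += mask.T

Jdil = mask * J
g = c * (1 - 2 * a)   # mean mask entry: the anti-Hebbian scaling factor

# Residual noise: zero mean, with variance Var(mask) * E[J^2]
R = (Jdil - g * J)[iu]
print("residual mean    :", R.mean().round(5))
print("residual variance:", R.var().round(6),
      "expected:", ((c - g**2) * (J[iu] ** 2).mean()).round(6))
```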

    The Hidden Convexity of Spectral Clustering

    In recent years, spectral clustering has become a standard method for data analysis used in a broad range of applications. In this paper we propose a new class of algorithms for multiway spectral clustering based on optimization of a certain "contrast function" over the unit sphere. These algorithms, partly inspired by certain Independent Component Analysis techniques, are simple, easy to implement, and efficient. Geometrically, the proposed algorithms can be interpreted as hidden basis recovery by means of function optimization. We give a complete characterization of the contrast functions admissible for provable basis recovery. We show how these conditions can be interpreted as a "hidden convexity" of our optimization problem on the sphere; interestingly, we use efficient convex maximization rather than the more common convex minimization. We also show encouraging experimental results on real and simulated data.
    Comment: 22 pages
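
    A toy version of the idea, with all parameter and contrast choices hypothetical rather than the paper's: embed the data via the top eigenvectors of a normalized affinity matrix, then recover one direction per cluster by maximizing a fourth-power contrast over the unit sphere with projected gradient ascent and deflation.

```python
import numpy as np

def cluster_by_contrast(A, k, n_iter=100, seed=0):
    """Multiway spectral clustering via contrast maximization on the sphere."""
    d = A.sum(axis=1)
    M = A / np.sqrt(np.outer(d, d))       # symmetrically normalized affinity
    _, V = np.linalg.eigh(M)
    Y = V[:, -k:]                         # rows: spectral embedding of points
    rng = np.random.default_rng(seed)
    B = []
    for _ in range(k):
        u = rng.standard_normal(k)
        for _ in range(n_iter):
            u = Y.T @ (Y @ u) ** 3        # ascent direction for sum <u,y_i>^4
            for b in B:                   # deflate previously found directions
                u -= (u @ b) * b
            u /= np.linalg.norm(u)        # project back onto the unit sphere
        B.append(u)
    scores = np.abs(Y @ np.column_stack(B))
    return scores.argmax(axis=1)          # cluster labels

# Two well-separated blobs give a block-structured affinity matrix
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
A = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1) / 2)
print(cluster_by_contrast(A, k=2))
```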