
    Replicated Bethe Free Energy: A Variational Principle behind Survey Propagation

    A scheme for deriving various mean-field-type approximation algorithms is presented by applying the Bethe free energy formalism to a family of replicated systems, in conjunction with analytical continuation with respect to the number of replicas. In this scheme, survey propagation (SP), an efficient algorithm developed recently for analyzing the microscopic properties of glassy states in a fixed sample of a disordered system, is reproduced by assuming the simplest replica symmetry at stationary points of the replicated Bethe free energy. Belief propagation and generalized SP are obtained within the same framework under assumptions of the highest and of broken replica symmetry, respectively. Comment: appeared in Journal of the Physical Society of Japan 74, 2133-2136 (2005)
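    Since plain belief propagation is the replica-symmetric limit recovered by this scheme, a minimal sketch of sum-product belief propagation on a generic pairwise binary model may help fix ideas. The model, damping, and update schedule below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Minimal sum-product belief propagation on a pairwise binary model (illustrative only).
# `unary[i]` is a length-2 potential for node i; `pair[(i, j)]` is a 2x2 pairwise potential.

def belief_propagation(unary, pair, n_iter=50, damping=0.5):
    nodes = list(unary)
    neighbors = {i: [] for i in nodes}
    for (i, j) in pair:
        neighbors[i].append(j)
        neighbors[j].append(i)

    # messages m[(i, j)]: message from node i to node j, initialised uniform
    msg = {(i, j): np.ones(2) / 2 for (i, j) in pair}
    msg.update({(j, i): np.ones(2) / 2 for (i, j) in pair})

    def psi(i, j):
        # pairwise potential indexed as [x_i, x_j]
        return pair[(i, j)] if (i, j) in pair else pair[(j, i)].T

    for _ in range(n_iter):
        new = {}
        for (i, j) in msg:
            # product of the unary potential and incoming messages from all neighbours except j
            prod = unary[i].copy()
            for k in neighbors[i]:
                if k != j:
                    prod *= msg[(k, i)]
            m = psi(i, j).T @ prod            # sum over x_i
            m /= m.sum()
            new[(i, j)] = damping * msg[(i, j)] + (1 - damping) * m
        msg = new

    beliefs = {}
    for i in nodes:
        b = unary[i].copy()
        for k in neighbors[i]:
            b *= msg[(k, i)]
        beliefs[i] = b / b.sum()
    return beliefs

# toy usage: two coupled binary variables
unary = {0: np.array([0.9, 0.1]), 1: np.array([0.5, 0.5])}
pair = {(0, 1): np.array([[1.0, 0.2], [0.2, 1.0]])}
print(belief_propagation(unary, pair))
```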

    Multimodal experiments in the design of living archive

    Designing a ‘living archive’ that will enable new forms of circus performance to be realised is a complex and dynamic challenge. This paper discusses the methods and approaches used by the research team in the design of the Circus Oz Living Archive. Essential to this project has been the design of a responsive methodology that could embrace the diverse areas of knowledge and practice involved, leading to a design outcome that integrates the affordances of the circus with those of digital technologies. The term ‘living archive’ has been adopted to articulate the dynamic nature of the archive. This is an archive that will always be evolving, not only because of the ongoing collection of content, but more importantly because the performances of the archive's users will themselves become part of the archive collection.

    ForestHash: Semantic Hashing With Shallow Random Forests and Tiny Convolutional Networks

    Hash codes are efficient data representations for coping with the ever-growing amounts of data. In this paper, we introduce a random forest semantic hashing scheme that embeds tiny convolutional neural networks (CNN) into shallow random forests, with near-optimal information-theoretic code aggregation among trees. We start with a simple hashing scheme, where random trees in a forest act as hashing functions by setting `1' for the visited tree leaf, and `0' for the rest. We show that traditional random forests fail to generate hashes that preserve the underlying similarity between the trees, rendering the random forests approach to hashing challenging. To address this, we propose to first randomly group arriving classes at each tree split node into two groups, obtaining a significantly simplified two-class classification problem, which can be handled using a light-weight CNN weak learner. Such a random class grouping scheme enables code uniqueness by enforcing each class to share its code with different classes in different trees. A non-conventional low-rank loss is further adopted for the CNN weak learners to encourage code consistency by minimizing intra-class variations and maximizing inter-class distance for the two random class groups. Finally, we introduce an information-theoretic approach for aggregating codes of individual trees into a single hash code, producing a near-optimal unique hash for each class. The proposed approach significantly outperforms state-of-the-art hashing methods for image retrieval tasks on large-scale public datasets, while performing at the level of other state-of-the-art image classification techniques, with a more compact, efficient, and scalable representation. This work proposes a principled and robust procedure to train and deploy in parallel an ensemble of light-weight CNNs, instead of simply going deeper. Comment: Accepted to ECCV 201
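    As a rough illustration of the starting point described above (each tree hashing a sample to a one-hot indicator over its leaves, concatenated across trees), here is a minimal sketch built on scikit-learn's RandomForestClassifier; the random class grouping, low-rank CNN weak learners, and information-theoretic aggregation of the paper are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def forest_leaf_hash(forest, X):
    """Binary codes: `1` for the leaf a sample visits in each tree, `0` elsewhere.

    This is only the naive baseline scheme mentioned in the abstract, not ForestHash itself.
    """
    # apply() returns, for each sample, the leaf index reached in every tree
    leaf_ids = forest.apply(X)                       # shape (n_samples, n_trees)
    codes = []
    for t, tree in enumerate(forest.estimators_):
        n_nodes = tree.tree_.node_count              # one bit per node (leaves are a subset)
        block = np.zeros((X.shape[0], n_nodes), dtype=np.uint8)
        block[np.arange(X.shape[0]), leaf_ids[:, t]] = 1
        codes.append(block)
    return np.hstack(codes)                          # concatenated per-tree one-hot blocks

# toy usage with random stand-in data
X = np.random.randn(100, 16)
y = np.random.randint(0, 4, size=100)
forest = RandomForestClassifier(n_estimators=4, max_depth=3).fit(X, y)
H = forest_leaf_hash(forest, X)
```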

    Elastic net model of ocular dominance - overall stripe pattern and monocular deprivation

    The elastic net (Durbin and Willshaw 1987) can account for the development of both topography and ocular dominance in the mapping from the lateral geniculate nucleus to primary visual cortex (Goodhill and Willshaw 1990). Here it is further shown for this model that (1) the overall pattern of stripes produced is strongly influenced by the shape of the cortex: in particular, stripes with a global order similar to that seen biologically can be produced under appropriate conditions, and (2) the observed changes in stripe width associated with monocular deprivation are reproduced in the model.
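    For readers unfamiliar with the underlying algorithm, the sketch below implements the generic Durbin-Willshaw elastic net update for a ring of cortical points; the ocular dominance model of the paper instead uses a 2D cortical sheet and a feature space augmented with an ocularity coordinate, and all parameter values here are illustrative.

```python
import numpy as np

def elastic_net_step(y, x, kappa, alpha=0.2, beta=2.0):
    """One Durbin-Willshaw elastic net update for a ring of cortical points `y`
    matched to feature points `x`. `kappa` is the annealed length scale."""
    # responsibilities: how strongly each feature point x_i pulls each cortical point y_j
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)      # (n_x, n_y) squared distances
    w = np.exp(-d2 / (2 * kappa ** 2))
    w /= w.sum(axis=1, keepdims=True)                        # normalise over cortical points

    pull = alpha * (w[:, :, None] * (x[:, None, :] - y[None, :, :])).sum(axis=0)
    # elasticity: discrete Laplacian along the ring keeps neighbouring points close
    tension = beta * kappa * (np.roll(y, 1, axis=0) - 2 * y + np.roll(y, -1, axis=0))
    return y + pull + tension

# toy annealing loop
rng = np.random.default_rng(0)
x = rng.uniform(size=(50, 2))                    # feature points
y = 0.5 + 0.05 * rng.standard_normal((100, 2))   # cortical ring, initialised near the centre
for kappa in np.geomspace(0.2, 0.01, 60):
    for _ in range(5):
        y = elastic_net_step(y, x, kappa)
```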

    Einstein-Maxwell gravitational instantons and five dimensional solitonic strings

    We study various aspects of four dimensional Einstein-Maxwell multicentred gravitational instantons. These are half-BPS Riemannian backgrounds of minimal N=2 supergravity, asymptotic to R^4, R^3 x S^1 or AdS_2 x S^2. Unlike for the Gibbons-Hawking solutions, the topology is not restricted by boundary conditions. We discuss the classical metric on the instanton moduli space. One class of these solutions may be lifted to causal and regular multi `solitonic strings', without horizons, of 4+1 dimensional N=2 supergravity, carrying null momentum. Comment: 1+30 pages
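    For comparison, the Gibbons-Hawking multi-centred instantons referred to above take the standard form below; the Einstein-Maxwell generalisation studied in the paper is not reproduced here.

```latex
% Gibbons-Hawking multi-centred gravitational instanton (standard form, for reference)
\begin{align}
  ds^2 &= V^{-1}\,\bigl(d\tau + \boldsymbol{\omega}\cdot d\mathbf{x}\bigr)^2
          + V\, d\mathbf{x}\cdot d\mathbf{x}, \\
  V &= \epsilon + \sum_{i=1}^{N} \frac{2 m_i}{|\mathbf{x}-\mathbf{x}_i|},
  \qquad \nabla \times \boldsymbol{\omega} = \nabla V .
\end{align}
```

    Here \epsilon = 0 gives the ALE series (asymptotic to R^4 for a single centre) and \epsilon = 1 the ALF multi-Taub-NUT series asymptotic to R^3 x S^1.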

    Beyond Hebb: Exclusive-OR and Biological Learning

    A learning algorithm for multilayer neural networks based on biologically plausible mechanisms is studied. Motivated by findings in experimental neurobiology, we consider synaptic averaging in the induction of plasticity changes, which happen on a slower time scale than the firing dynamics. This mechanism is shown to enable learning of the exclusive-OR (XOR) problem without the aid of error back-propagation, as well as to increase the robustness of learning in the presence of noise. Comment: 4 pages RevTeX, 2 figures PostScript, revised version
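    The precise learning rule is given in the paper; purely as an illustration of the synaptic averaging idea (plasticity driven by a running average of pre/post activity on a slow time scale, rather than by instantaneous coincidences), a toy update for a single synapse might look like the following. The rule and time constants are assumptions, not the authors' algorithm.

```python
import numpy as np

# Toy illustration of synaptic averaging: the Hebbian drive pre*post is low-pass
# filtered on a slow time scale before it changes the weight. This is NOT the
# paper's learning rule, only a sketch of the averaging mechanism it builds on.

def run(pre, post, eta=0.01, tau_avg=50.0):
    """`pre`, `post`: arrays of firing rates over time for one synapse."""
    w = 0.0
    trace = 0.0
    for x, y in zip(pre, post):
        hebb = x * y                      # fast variable: instantaneous Hebbian coincidence
        trace += (hebb - trace) / tau_avg # slow variable: running average of the coincidence
        w += eta * trace                  # plasticity follows the averaged signal
    return w

rng = np.random.default_rng(1)
T = 1000
print(run(rng.random(T), rng.random(T)))
```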

    On the Link between Gaussian Homotopy Continuation and Convex Envelopes

    The continuation method is a popular heuristic in computer vision for nonconvex optimization. The idea is to start from a simplified problem and gradually deform it to the actual task while tracking the solution. It was first used in computer vision under the name of graduated nonconvexity. Since then, it has been utilized explicitly or implicitly in various applications. In fact, state-of-the-art optical flow and shape estimation rely on a form of continuation. Despite its empirical success, there is little theoretical understanding of this method. This work provides some novel insights into this technique. Specifically, there are many ways to choose the initial problem and many ways to progressively deform it to the original task. However, here we show that when this process is constructed by Gaussian smoothing, it is optimal in a specific sense. In fact, we prove that Gaussian smoothing emerges from the best affine approximation to Vese’s nonlinear PDE. The latter PDE evolves any function to its convex envelope, hence providing the optimal convexification.
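    As a concrete picture of the procedure being analysed (graduated nonconvexity via Gaussian smoothing), a minimal sketch follows: the objective is smoothed by Gaussians of decreasing width and each stage is minimised starting from the previous solution. The test function, Monte Carlo smoothing, and schedule are made up for illustration.

```python
import numpy as np

def smoothed_grad(f, x, sigma, n_samples=512, h=1e-3):
    """Finite-difference gradient of the Gaussian-smoothed objective E_e[f(x + sigma*e)],
    using common random numbers for the two evaluations."""
    eps = np.random.randn(n_samples)
    fp = np.mean(f(x + h + sigma * eps))
    fm = np.mean(f(x - h + sigma * eps))
    return (fp - fm) / (2 * h)

def continuation(f, x0, sigmas, steps=300, lr=0.01):
    """Gaussian homotopy continuation: minimise heavily smoothed versions first,
    then track the minimiser as the smoothing width shrinks."""
    x = x0
    for sigma in sigmas:                  # coarse-to-fine schedule
        for _ in range(steps):
            x -= lr * smoothed_grad(f, x, sigma)
    return x

# toy nonconvex objective: many local minima, global minimum near the origin.
# Plain gradient descent from x0 = 3.0 gets trapped; the continuation tends to
# track the smoothed minimiser into the global basin.
f = lambda x: x ** 2 + 2.0 * np.sin(8.0 * x)
print(continuation(f, x0=3.0, sigmas=np.geomspace(2.0, 1e-2, 10)))
```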

    Gravitational Entropy and Global Structure

    The underlying reason for the existence of gravitational entropy is traced to the impossibility of foliating topologically non-trivial Euclidean spacetimes with a time function to give a unitary Hamiltonian evolution. In d dimensions the entropy can be expressed in terms of the d-2 obstructions to foliation, bolts and Misner strings, by a universal formula. We illustrate with a number of examples including spaces with nut charge. In these cases, the entropy is not just a quarter of the area of the bolt, as it is for black holes. Comment: 18 pages. References added

    Robust 3D face capture using example-based photometric stereo

    We show that using example-based photometric stereo, it is possible to achieve realistic reconstructions of the human face. The method can handle non-Lambertian reflectance and attached shadows after a simple calibration step. We use spherical harmonics to model and de-noise the illumination functions from images of a reference object with known shape, and a fast grid technique to invert those functions and recover the surface normal for each point of the target object. The depth coordinate is obtained by weighted multi-scale integration of these normals, using an integration weight mask obtained automatically from the images themselves. We have applied these techniques to improve the PHOTOFACE system of Hansen et al. (2010). © 2013 Elsevier B.V. All rights reserved.
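    To make the pipeline concrete, here is a stripped-down sketch of the example-based lookup step, in which each target pixel's vector of intensities under several lights is matched against a reference object of known normals. The spherical-harmonic de-noising, fast grid inversion, and weighted multi-scale integration of the paper are omitted, and the array shapes are assumptions.

```python
import numpy as np

def recover_normals(target_obs, ref_obs, ref_normals):
    """Example-based photometric stereo, nearest-neighbour version.

    target_obs  : (P, L) intensities of P target pixels under L lights
    ref_obs     : (R, L) intensities of R reference-object pixels under the same lights
    ref_normals : (R, 3) known surface normals of the reference pixels
    Returns (P, 3) estimated unit normals for the target pixels.
    """
    # normalise intensity vectors so albedo differences between the objects matter less
    t = target_obs / (np.linalg.norm(target_obs, axis=1, keepdims=True) + 1e-8)
    r = ref_obs / (np.linalg.norm(ref_obs, axis=1, keepdims=True) + 1e-8)

    # for each target pixel, find the reference pixel with the most similar observations
    best = np.argmax(t @ r.T, axis=1)          # cosine similarity, (P,) indices into reference
    n = ref_normals[best]
    return n / np.linalg.norm(n, axis=1, keepdims=True)

# toy usage with random stand-in data
P, R, L = 1000, 5000, 8
normals = recover_normals(np.random.rand(P, L), np.random.rand(R, L), np.random.randn(R, 3))
```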

    Emergence of memory

    We propose a new self-organizing mechanism behind the emergence of memory, in which temporal sequences of stimuli are transformed into spatial activity patterns. In particular, the memory emerges despite the absence of temporal correlations in the stimuli. This suggests that neural systems may prepare a spatial structure for processing information before the information itself is available. A simple model illustrating the mechanism is presented, based on three principles: (1) competition between neural units, (2) Hebbian plasticity, and (3) recurrent connections. Comment: 7 pages, 4 figures, EPL style
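    The actual model is specified in the paper; purely to illustrate how the three ingredients can turn a temporal sequence into a spatial pattern, here is a toy network with winner-take-all competition, Hebbian feed-forward plasticity, and fixed sparse recurrent connections. All sizes, rules, and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_units, T = 20, 30, 2000

W = rng.random((n_units, n_in)) * 0.1               # plastic feed-forward weights
R = 0.2 * (rng.random((n_units, n_units)) < 0.1)    # sparse fixed recurrent connections
np.fill_diagonal(R, 0.0)
prev_act = np.zeros(n_units)
eta = 0.05

for t in range(T):
    stimulus = np.zeros(n_in)
    stimulus[rng.integers(n_in)] = 1.0               # uncorrelated random stimuli
    # input drive plus recurrent drive from the previously active units
    drive = W @ stimulus + R @ prev_act
    winner = np.argmax(drive)                        # competition: winner-take-all
    act = np.zeros(n_units)
    act[winner] = 1.0
    # Hebbian plasticity with a simple normalisation to keep weights bounded
    W[winner] += eta * stimulus
    W[winner] /= np.linalg.norm(W[winner])
    prev_act = act

# after training, successively presented stimuli tend to map to recurrently coupled
# units, i.e. the temporal sequence is laid out as a spatial activity pattern
```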