    Critical exponents in stochastic sandpile models

    We present large-scale simulations of a stochastic sandpile model in two dimensions. We use moment analysis to evaluate the critical exponents, and a finite-size scaling method to test the obtained results for consistency. The general picture emerging from our analysis allows us to characterize the large-scale behavior of the model with great accuracy.
    Comment: 6 pages, 4 figures. Invited talk presented at CCP9
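
    As a concrete illustration of the moment-analysis technique the abstract refers to, the sketch below simulates a Manna-type stochastic sandpile and fits the scaling of avalanche-size moments with system size, <s^q> ~ L^{sigma(q)}. All parameters (lattice sizes, avalanche counts, toppling rule) are illustrative choices, not the paper's actual settings.

```python
import numpy as np

def run_manna(L, n_avalanches, rng):
    """Drive a 2D stochastic (Manna-type) sandpile; return avalanche sizes."""
    z = np.zeros((L, L), dtype=int)  # grain heights
    sizes = []
    for _ in range(n_avalanches):
        i, j = rng.integers(L, size=2)   # drop a grain at a random site
        z[i, j] += 1
        size, stack = 0, [(i, j)]
        # relax: a site with >= 2 grains topples, sending each grain to an
        # independently chosen random neighbor (the stochastic rule);
        # grains falling off the open boundary are lost
        while stack:
            x, y = stack.pop()
            if z[x, y] < 2:
                continue
            n, z[x, y] = z[x, y], 0
            size += 1
            for _ in range(n):
                dx, dy = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(4)]
                nx, ny = x + dx, y + dy
                if 0 <= nx < L and 0 <= ny < L:
                    z[nx, ny] += 1
                    if z[nx, ny] >= 2:
                        stack.append((nx, ny))
        if size:
            sizes.append(size)
    return np.array(sizes, dtype=float)

rng = np.random.default_rng(0)
Ls, qs = [16, 32, 64], np.arange(1.0, 3.5, 0.5)
log_moments = {q: [] for q in qs}
for L in Ls:
    s = run_manna(L, 20000, rng)
    s = s[len(s) // 4:]              # crude cut of the initial transient
    for q in qs:
        log_moments[q].append(np.log(np.mean(s ** q)))

# moment analysis: the slope of log<s^q> vs log L estimates sigma(q);
# for large q, sigma(q) grows linearly in q with slope D, the avalanche
# dimension, via sigma(q) = D * (q + 1 - tau_s)
for q in qs:
    sigma = np.polyfit(np.log(Ls), log_moments[q], 1)[0]
    print(f"q = {q:.1f}   sigma(q) ~ {sigma:.2f}")
```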

    Non-extremal superdescendants of the D1D5 CFT

    We construct solutions of IIB supergravity dual to non-supersymmetric states of the D1D5 system. These solutions are constructed, at linear order, as perturbations carrying both left- and right-moving momentum around the maximally rotating D1D5 ground state. They are found by extending to the asymptotically flat region the geometry generated, in the decoupling limit, by the action of the left and right R-currents on a known D1D5 microstate. The perturbations are regular everywhere and do not carry any global charge. We also study the near-extremal limit of the solutions and derive the first non-trivial correction to the extremal geometry.
    Comment: 25 pages
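
    For orientation, the distinction between extremal and non-extremal momentum-carrying states can be summarized by the standard left/right bookkeeping along the common D1-D5 circle; the block below uses textbook conventions (the symbols n_p, \bar{n}_p, R are illustrative and not taken from the paper).

```latex
% Energy above the ground state and momentum along the circle of radius R,
% split into left- and right-moving contributions (standard conventions):
E - E_{\mathrm{ground}} = \frac{n_p + \bar{n}_p}{R},
\qquad
P = \frac{n_p - \bar{n}_p}{R}.
% The BPS (extremal) case has \bar{n}_p = 0; turning on both n_p and
% \bar{n}_p, as the perturbations above do, gives a non-extremal state,
% and small \bar{n}_p / n_p parametrizes the near-extremal limit.
```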

    Emergence of Invariance and Disentanglement in Deep Representations

    Using established principles from Statistics and Information Theory, we show that invariance to nuisance factors in a deep neural network is equivalent to information minimality of the learned representation, and that stacking layers and injecting noise during training naturally bias the network towards learning invariant representations. We then decompose the cross-entropy loss used during training and highlight the presence of an inherent overfitting term. We propose regularizing the loss by bounding such a term in two equivalent ways: one with a Kullback-Leibler term, which relates to a PAC-Bayes perspective; the other using the information in the weights as a measure of complexity of a learned model, yielding a novel Information Bottleneck for the weights. Finally, we show that invariance and independence of the components of the representation learned by the network are bounded above and below by the information in the weights, and therefore are implicitly optimized during training. The theory enables us to quantify and predict sharp phase transitions between underfitting and overfitting of random labels when using our regularized loss, which we verify in experiments, and sheds light on the relation between the geometry of the loss function, invariance properties of the learned representation, and generalization error.
    Comment: Deep learning, neural network, representation, flat minima, information bottleneck, overfitting, generalization, sufficiency, minimality, sensitivity, information complexity, stochastic gradient descent, regularization, total correlation, PAC-Bayes
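
    A minimal sketch of the KL-bounded loss described above, using a factorized Gaussian posterior over the weights of a single layer as the PAC-Bayes-style surrogate. The class name, the unit-variance prior, and the value of beta are illustrative assumptions, not the authors' implementation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianLinear(nn.Module):
    """Linear layer with a learned factorized Gaussian over its weights
    (illustrative sketch; not the authors' code)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(0.01 * torch.randn(d_out, d_in))
        self.log_var = nn.Parameter(torch.full((d_out, d_in), -6.0))

    def forward(self, x):
        # reparameterized weight sample: noise is injected into the weights
        w = self.mu + torch.exp(0.5 * self.log_var) * torch.randn_like(self.mu)
        return F.linear(x, w)

    def kl_to_prior(self, prior_var=1.0):
        # KL( N(mu, sigma^2) || N(0, prior_var) ), summed over all weights:
        # a surrogate for the "information in the weights" / PAC-Bayes term
        var = torch.exp(self.log_var)
        return 0.5 * torch.sum(var / prior_var + self.mu ** 2 / prior_var
                               - 1.0 - self.log_var + math.log(prior_var))

layer = GaussianLinear(784, 10)
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
beta = 1e-3  # trade-off weight; the theory predicts a sharp transition as it grows
loss = F.cross_entropy(layer(x), y) + beta * layer.kl_to_prior()
loss.backward()
```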

    Centralized and Distributed Sparsification for Low-Complexity Message Passing Algorithm in C-RAN Architectures

    Cloud radio access network (C-RAN) is a promising technology for fifth-generation (5G) cellular systems. However, the burden imposed by the huge amount of data to be collected (in the uplink) from the remote radio heads (RRHs) and processed at the baseband unit (BBU) poses serious challenges. In order to reduce the computational effort of the minimum mean square error (MMSE) receiver at the BBU, Gaussian message passing (MP) can be used together with a suitable sparsification of the channel matrix. In this paper we propose two sets of solutions, centralized and distributed. In the centralized solutions, we propose different approaches to sparsify the channel matrix in order to reduce the complexity of MP. However, these approaches still require that all signals reaching the RRHs be conveyed to the BBU, so the communication requirements among the backbone network devices are unaltered. In the distributed solutions, instead, we aim at reducing both the complexity of MP at the BBU and the requirements on the RRH-BBU communication links by pre-processing the signals at the RRHs and conveying a reduced set of signals to the BBU.
    Comment: Accepted for publication in IEEE VTC 201
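
    To make the centralized idea concrete, the sketch below sparsifies a channel matrix by keeping only the strongest links per RRH and compares the resulting estimation error (a plain MMSE solve stands in for the Gaussian MP iterations; sizes, gains, and the links-kept values are illustrative, not the paper's design).

```python
import numpy as np

rng = np.random.default_rng(1)
n_rrh, n_users, snr = 64, 32, 10.0

# distance-dependent channel: most RRH-user links are weak, which is what
# makes sparsification viable in the first place
gains = np.exp(-rng.uniform(0, 5, size=(n_rrh, n_users)))
H = gains * rng.standard_normal((n_rrh, n_users))

def sparsify(H, keep):
    """Keep only the `keep` largest-magnitude entries in each RRH's row."""
    Hs = np.zeros_like(H)
    for r in range(H.shape[0]):
        idx = np.argsort(np.abs(H[r]))[-keep:]
        Hs[r, idx] = H[r, idx]
    return Hs

def mmse(H, y, snr):
    # x_hat = (H^T H + I/snr)^{-1} H^T y  (unit-power symbols, noise var 1/snr);
    # a sparse H makes the message-passing equivalent of this solve cheap
    A = H.T @ H + np.eye(H.shape[1]) / snr
    return np.linalg.solve(A, H.T @ y)

x = rng.standard_normal(n_users)
y = H @ x + rng.standard_normal(n_rrh) / np.sqrt(snr)

for keep in (n_users, 8, 4):   # keep = n_users is the unsparsified baseline
    err = np.mean((mmse(sparsify(H, keep), y, snr) - x) ** 2)
    print(f"links kept per RRH = {keep:3d}   MSE = {err:.3f}")
```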

    Visual Representations: Defining Properties and Deep Approximations

    Visual representations are defined in terms of minimal sufficient statistics of visual data, for a class of tasks, that are also invariant to nuisance variability. Minimal sufficiency guarantees that we can store a representation in lieu of raw data with smallest complexity and no performance loss on the task at hand. Invariance guarantees that the statistic is constant with respect to uninformative transformations of the data. We derive analytical expressions for such representations and show they are related to feature descriptors commonly used in computer vision, as well as to convolutional neural networks. This link highlights the assumptions and approximations tacitly made by these methods and explains empirical practices such as clamping, pooling and joint normalization.
    Comment: UCLA CSD TR140023, Nov. 12, 2014, revised April 13, 2015, November 13, 2015, February 28, 201
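
    As a toy instance of the invariance property described above, joint contrast normalization yields a statistic that is constant under affine contrast nuisances I -> a*I + b (an illustrative example, not the paper's derivation):

```python
import numpy as np

rng = np.random.default_rng(2)
patch = rng.random((8, 8))

def normalized(p, eps=1e-8):
    """Jointly normalize a patch: subtract the mean, divide by the std."""
    return (p - p.mean()) / (p.std() + eps)

# an affine contrast nuisance: the normalized statistic does not change,
# so downstream tasks see the same representation for both inputs
a, b = 3.7, 0.4
transformed = a * patch + b
print(np.allclose(normalized(patch), normalized(transformed), atol=1e-6))  # True
```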