
    Discretizing Gravity in Warped Spacetime

    We investigate the discretized version of the compact Randall-Sundrum model. By studying the mass eigenstates of the lattice theory, we demonstrate that for warped space, unlike for flat space, the strong coupling scale does not depend on the IR scale and lattice size. Strong coupling does, however, prevent us from taking the continuum limit of the lattice theory. Nonetheless, the lattice theory works in the manifestly holographic regime and successfully reproduces the most significant features of the warped theory. In some respects it even improves on the KK theory, which must be carefully regulated to obtain the correct physical results. Because it is easier to construct lattice theories than to find exact solutions to GR, we expect lattice gravity to be a useful tool for exploring field theory in curved space.
    Comment: 17 pages, 4 figures; references added
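
    As a rough illustration of the idea (a toy sketch, not the authors' construction), the snippet below diagonalizes the mass-squared matrix of a field on an N-site lattice of a warped interval. The warp profile exp(-2ky) on the links, the boundary conditions, and all parameter values are illustrative assumptions; the point is only that the spectrum exhibits a near-zero mode plus a tower whose scale is set by the warped (IR) end, as in warped KK theories.

```python
# Toy spectrum of an N-site lattice of a warped interval (illustrative only).
# Nearest-neighbour couplings carry an assumed warp factor exp(-2*k*y) on links.
import numpy as np

N, L, k = 200, 1.0, 10.0                     # sites, interval length, curvature (assumed)
a = L / N                                    # lattice spacing
y = np.arange(N + 1) * a                     # site positions
wl = np.exp(-2.0 * k * (y[:-1] + 0.5 * a))   # warp factor evaluated on links

M2 = np.zeros((N + 1, N + 1))                # mass-squared matrix (warped discrete Laplacian)
for j in range(N):
    c = wl[j] / a**2
    M2[j, j] += c
    M2[j + 1, j + 1] += c
    M2[j, j + 1] -= c
    M2[j + 1, j] -= c

masses = np.sqrt(np.clip(np.linalg.eigvalsh(M2), 0.0, None))
print(masses[:5])   # one near-zero mode, then a tower set by the warped end
```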

    Reservoir Topology in Deep Echo State Networks

    Deep Echo State Networks (DeepESNs) have recently extended the applicability of Reservoir Computing (RC) methods towards the field of deep learning. In this paper we study the impact of constrained reservoir topologies in the architectural design of deep reservoirs, through numerical experiments on several RC benchmarks. The major outcome of our investigation is the remarkable predictive performance gain achieved by the synergy between a deep reservoir construction and a structured organization of the recurrent units in each layer. Our results also indicate that a particularly advantageous architectural setting is obtained for DeepESNs whose reservoir units are structured according to a permutation recurrent matrix.
    Comment: Preprint of the paper published in the proceedings of ICANN 2019
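
    A minimal sketch of the kind of constrained topology the paper studies: each layer's recurrent matrix is a scaled random permutation, so its spectral radius is set exactly by the scaling factor. The layer sizes, scalings, and input signal below are illustrative choices, not the paper's experimental setup.

```python
# Two-layer DeepESN sketch with permutation recurrent matrices (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def permutation_reservoir(n, rho=0.9):
    # All eigenvalues of a permutation matrix lie on the unit circle,
    # so scaling by rho fixes the spectral radius to exactly rho.
    return rho * np.eye(n)[rng.permutation(n)]

def run_layer(W, W_in, inputs, leak=0.5):
    x, states = np.zeros(W.shape[0]), []
    for u in inputs:                         # leaky-integrator ESN update
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in @ u)
        states.append(x.copy())
    return np.array(states)

n, T = 100, 500
u = rng.uniform(-0.8, 0.8, size=(T, 1))      # driving input sequence

S1 = run_layer(permutation_reservoir(n), rng.uniform(-0.1, 0.1, (n, 1)), u)
S2 = run_layer(permutation_reservoir(n), rng.uniform(-0.1, 0.1, (n, n)), S1)

print(S1.shape, S2.shape)  # a linear readout would be trained on [S1, S2]
```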

    Richness of Deep Echo State Network Dynamics

    Reservoir Computing (RC) is a popular methodology for the efficient design of Recurrent Neural Networks (RNNs). Recently, the advantages of the RC approach have been extended to the context of multi-layered RNNs with the introduction of the Deep Echo State Network (DeepESN) model. In this paper, we study the quality of state dynamics in progressively higher layers of DeepESNs, using tools from information theory and numerical analysis. Our experimental results on RC benchmark datasets reveal the fundamental role played by the strength of inter-reservoir connections in progressively enriching the representations developed in higher layers. Our analysis also offers insights into the possibility of effectively exploiting training algorithms based on stochastic gradient descent in the RC field.
    Comment: Preprint of the paper accepted at IWANN 2019
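
    One simple proxy for the richness of the state dynamics in a layer (chosen here for brevity; the paper's own analysis relies on its own information-theoretic and numerical-analysis tools) is the effective rank of the state matrix: the exponentiated Shannon entropy of its normalized singular values.

```python
# Effective rank of a (timesteps x units) state matrix: exp of the Shannon
# entropy of normalized singular values, i.e. the "effective number" of
# independent directions the states explore. The data below is synthetic.
import numpy as np

def effective_rank(states):
    s = np.linalg.svd(states - states.mean(axis=0), compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))

rng = np.random.default_rng(1)
poor = rng.standard_normal((500, 100)) @ np.diag(np.exp(-np.arange(100) / 5.0))
rich = rng.standard_normal((500, 100))
print(effective_rank(poor), effective_rank(rich))  # richer dynamics -> larger value
```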

    Electron transfer rates for asymmetric reactions

    We use a numerically exact real-time path integral Monte Carlo scheme to compute electron transfer dynamics between two redox sites within a spin-boson approach. The case of asymmetric reactions is studied in detail in the least understood crossover region between nonadiabatic and adiabatic electron transfer. At intermediate-to-high temperature, we find good agreement with standard Marcus theory, provided dynamical recrossing effects are captured. The agreement with our data is practically perfect when temperature renormalization is allowed. At low temperature we find peculiar electron transfer kinetics in strongly asymmetric systems, characterized by rapid transient dynamics and backflow to the donor.
    Comment: 13 pages, 4 figures, submitted to the Chemical Physics Special Issue on the Spin-Boson Problem, edited by H. Grabert and A. Nitzan
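
    For reference, the standard (nonadiabatic) Marcus rate against which the simulations are compared can be evaluated in closed form; the parameter values in the example are illustrative, not taken from the paper.

```python
# Classical nonadiabatic Marcus rate:
#   k_ET = (2*pi/hbar) * V^2 / sqrt(4*pi*lam*kB*T) * exp(-(dG + lam)^2 / (4*lam*kB*T))
import numpy as np

HBAR = 6.582119569e-16   # eV*s
KB   = 8.617333262e-5    # eV/K

def marcus_rate(V, lam, dG, T):
    """V: electronic coupling (eV); lam: reorganization energy (eV);
    dG: reaction free energy (eV, negative for downhill); T: temperature (K)."""
    kT = KB * T
    prefac = (2.0 * np.pi / HBAR) * V**2 / np.sqrt(4.0 * np.pi * lam * kT)
    return prefac * np.exp(-(dG + lam) ** 2 / (4.0 * lam * kT))

# An asymmetric (dG != 0) reaction at room temperature, illustrative parameters:
print(f"{marcus_rate(V=0.01, lam=0.5, dG=-0.2, T=300.0):.3e} 1/s")
```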

    Deep Randomized Neural Networks

    Randomized Neural Networks explore the behavior of neural systems where the majority of connections are fixed, either in a stochastic or a deterministic fashion. Typical examples of such systems are multi-layered neural network architectures where the connections to the hidden layer(s) are left untrained after initialization. Restricting the training algorithms to operate on a reduced set of weights endows the class of Randomized Neural Networks with a number of intriguing features. Among them, the extreme efficiency of the resulting learning processes is a striking advantage with respect to fully trained architectures. Moreover, despite these simplifications, randomized neural systems possess remarkable properties both in practice, achieving state-of-the-art results in multiple domains, and in theory, allowing one to analyze intrinsic properties of neural architectures (e.g. before training of the hidden layers' connections). In recent years, the study of Randomized Neural Networks has been extended towards deep architectures, opening new research directions for the design of effective yet extremely efficient deep learning models in vectorial as well as in more complex data domains. This chapter surveys the major aspects of the design and analysis of Randomized Neural Networks, together with some of the key results on their approximation capabilities. In particular, we first introduce the fundamentals of randomized neural models in the context of feed-forward networks (i.e., Random Vector Functional Link and equivalent models) and convolutional filters, before moving to the case of recurrent systems (i.e., Reservoir Computing networks). For both, we focus specifically on recent results in the domain of deep randomized systems and, for recurrent models, their application to structured domains.
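
    As a concrete illustration of the feed-forward case the chapter opens with (an RVFL-style network, sketched here under assumed sizes and a toy task, not code from the chapter): hidden weights are fixed at random initialization, and only a linear readout is trained, in closed form by ridge regression.

```python
# RVFL-style randomized network: fixed random hidden layer + direct input
# links; only the linear readout is trained (ridge regression, closed form).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (1000, 3))          # toy inputs
y = np.sin(X).sum(axis=1)                      # toy regression target

W = rng.standard_normal((3, 200))              # fixed at initialization, never trained
b = rng.standard_normal(200)                   # fixed at initialization, never trained
H = np.tanh(X @ W + b)                         # random hidden features
D = np.hstack([H, X])                          # RVFL: direct input-output links

lam = 1e-6                                     # ridge regularization strength
beta = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
print(np.mean((D @ beta - y) ** 2))            # training MSE of the readout
```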

    Genetic Polymorphisms of Peroxisome Proliferator-Activated Receptors and the Risk of Cardiovascular Morbidity and Mortality in a Community-Based Cohort in Washington County, Maryland

    The primary aim of this study was to examine prospectively the associations between 5 peroxisome proliferator-activated receptor (PPAR) single nucleotide polymorphisms (SNPs) and cardiovascular morbidity and mortality in a community-based cohort study in Washington County, Maryland. Data were analyzed from 9,364 Caucasian men and women participating in CLUE-II. Genotyping of 5 PPAR polymorphisms was conducted using peripheral DNA samples collected in 1989. The follow-up period was from 1989 to 2003. The results showed no statistically significant associations between the PPAR SNPs and cardiovascular deaths or events. In contrast, statistically significant age-adjusted associations were observed for PPARG rs4684847 with both baseline body mass and blood pressure, and for PPARG rs709158, PPARG rs1175543, and PPARD rs2016520 with baseline cholesterol levels. Future studies should be conducted to confirm these findings and to explore the associations in populations with greater racial and ethnic diversity.

    Tools for Deconstructing Gauge Theories in AdS5

    We employ analytical methods to study deconstruction of 5D gauge theories in the AdS5 background. We demonstrate that using the so-called q-Bessel functions allows a quantitative analysis of the deconstructed setup. Our study clarifies the relation of deconstruction with 5D warped theories.
    Comment: 30 pages; v2: several refinements, references added