4,761 research outputs found

    Riemann Hypothesis: a GGC factorisation

    A GGC (Generalized Gamma Convolution) representation of Riemann's Xi-function is constructed.

    Free energy of a folded semiflexible polymer confined to a nanochannel of various geometries

    Monte Carlo simulations are used to study the conformational properties of a folded semiflexible polymer confined to a long channel. We measure the variation in the conformational free energy with respect to the end-to-end distance of the polymer, and from these functions we extract the free energy of the hairpin fold, as well as the entropic force arising from interactions between the portions of the polymer that overlap along the channel. We consider the scaling of the free energies with respect to the persistence length of the polymer, as well as the channel dimensions for confinement in cylindrical, rectangular and triangular channels. We focus on polymer behaviour in both the classic Odijk and backfolded Odijk regimes. We find the scaling of the entropic force to be close to that predicted from a scaling argument that treats interactions between deflection segments at the second virial level. In addition, the measured hairpin fold free energy is consistent with that obtained directly from a recent theoretical calculation for cylindrical channels. It is also consistent with values determined from measurements of the global persistence length of a polymer in the backfolded Odijk regime in recent simulation studies.
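In the Odijk regime referred to above, a stiff chain in a narrow channel deflects off the walls with a characteristic deflection length that scales as (P D^2)^(1/3) for persistence length P and channel width D. A one-line sketch of this standard scaling estimate (the order-one prefactor is omitted):

```python
def odijk_deflection_length(P, D):
    """Odijk deflection length for a stiff polymer of persistence length P
    confined to a channel of width D: lambda ~ (P * D**2)**(1/3).
    Scaling estimate only; the order-one prefactor is omitted."""
    return (P * D ** 2) ** (1.0 / 3.0)
```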

    On Hilbert's 8th Problem

    A Hadamard factorization of the Riemann Xi-function is constructed to characterize the zeros of the zeta function.
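For context, the classical Hadamard product for the xi-function, which any such factorization refines, expresses xi in terms of the nontrivial zeros ρ of the zeta function:

```latex
\xi(s) = \xi(0) \prod_{\rho} \left(1 - \frac{s}{\rho}\right), \qquad \xi(0) = \tfrac{1}{2},
```

with the product over the nontrivial zeros taken in symmetric pairs (ρ together with 1 − ρ) so that it converges.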

    Polymer translocation into and out of an ellipsoidal cavity

    Monte Carlo simulations are used to study the translocation of a polymer into and out of an ellipsoidal cavity through a narrow pore. We measure the polymer free energy F as a function of a translocation coordinate, s, defined to be the number of bonds that have entered the cavity. To study polymer insertion, we consider the case of a driving force acting on monomers inside the pore, as well as monomer attraction to the cavity wall. We examine the changes to F(s) upon variation in the shape anisometry and volume of the cavity, the polymer length, and the strength of the interactions driving the insertion. For athermal systems, the free energy functions are analyzed using a scaling approach, where we treat the confined portion of the polymer to be in the semi-dilute regime. The free energy functions are used with the Fokker-Planck (FP) equation to measure mean translocation times, as well as translocation time distributions. We find that both polymer ejection and insertion are faster for ellipsoidal cavities than for spherical cavities. The results are in qualitative agreement with those of a Langevin dynamics study in the case of ejection but not for insertion. The discrepancy is likely due to out-of-equilibrium conformational behaviour that is not accounted for in the FP approach. Comment: 11 pages, 11 figures
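The Fokker-Planck step described here can be sketched concretely: given a free energy profile F(s) and a constant diffusion coefficient D along the translocation coordinate (an assumption; the paper may use a coordinate-dependent mobility), the mean first-passage time from a reflecting boundary at s = 0 to an absorbing boundary at s = N follows from the standard double-integral formula τ = (1/D) ∫ e^{βF(s)} [∫_0^s e^{−βF(s′)} ds′] ds:

```python
import numpy as np

def mean_translocation_time(F, s, D=1.0, kT=1.0):
    """Mean first-passage time for diffusion over a free energy profile F(s),
    reflecting boundary at s[0] and absorbing boundary at s[-1].
    Implements tau = (1/D) * int e^{+bF(s)} [int_0^s e^{-bF(s')} ds'] ds
    by trapezoidal quadrature; assumes a constant diffusion coefficient D."""
    b = 1.0 / kT
    w, inv_w = np.exp(b * F), np.exp(-b * F)
    ds = np.diff(s)
    # running inner integral of e^{-bF} from s[0] up to each grid point
    inner = np.concatenate(([0.0], np.cumsum(0.5 * (inv_w[1:] + inv_w[:-1]) * ds)))
    integrand = w * inner
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * ds) / D
```

For a flat profile this reduces to the free-diffusion result τ = N²/(2D), a quick sanity check on the quadrature.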

    van Dantzig Pairs, Wald Couples and Hadamard Factorisation

    Some consequences of a duality between the Hadamard-Weierstrass factorisation of an entire function and van Dantzig-Wald couples of random variables are explored. We demonstrate the methodology on particular functions including the Riemann zeta and xi-functions, Ramanujan's tau function, L-functions, and Gamma and hyperbolic functions.

    Posterior Concentration for Sparse Deep Learning

    Spike-and-Slab Deep Learning (SS-DL) is a fully Bayesian alternative to Dropout for improving generalizability of deep ReLU networks. This new type of regularization enables provable recovery of smooth input-output maps with unknown levels of smoothness. Indeed, we show that the posterior distribution concentrates at the near minimax rate for α-Hölder smooth maps, performing as well as if we knew the smoothness level α ahead of time. Our result sheds light on architecture design for deep neural networks, namely the choice of depth, width and sparsity level. These network attributes typically depend on unknown smoothness in order to be optimal. We obviate this constraint with the fully Bayes construction. As an aside, we show that SS-DL does not overfit in the sense that the posterior concentrates on smaller networks with fewer (up to the optimal number of) nodes and links. Our results provide new theoretical justifications for deep ReLU networks from a Bayesian point of view.
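The spike-and-slab prior at the heart of SS-DL puts a point mass at zero (the "spike") and a diffuse distribution (the "slab") on each network weight. A minimal sampling sketch, with a Gaussian slab and inclusion probability theta as illustrative choices (the paper's exact prior may differ):

```python
import numpy as np

def spike_and_slab_sample(shape, theta=0.5, slab_sd=1.0, rng=None):
    """Draw weights from a spike-and-slab prior: with probability theta
    a Gaussian 'slab' draw, otherwise an exact zero 'spike'.
    Illustrative choices; a point-mass spike is one common variant."""
    rng = rng or np.random.default_rng(0)
    slab = rng.normal(0.0, slab_sd, shape)
    mask = rng.random(shape) < theta
    return np.where(mask, slab, 0.0)
```

Zeroed weights correspond to pruned links, which is the mechanism by which the posterior can concentrate on smaller networks.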

    Deep Learning: Computational Aspects

    In this article we review computational aspects of Deep Learning (DL). Deep learning uses network architectures consisting of hierarchical layers of latent variables to construct predictors for high-dimensional input-output models. Training a deep learning architecture is computationally intensive, and efficient linear algebra libraries are key to both training and inference. Stochastic gradient descent (SGD) optimization and batch sampling are used to learn from massive data sets.
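The SGD-with-batch-sampling loop mentioned above can be sketched in a few lines; `grad` here is a hypothetical callback returning the average gradient over a mini-batch, and the data is assumed to be a NumPy array:

```python
import numpy as np

def sgd(grad, w0, data, lr=0.01, batch_size=32, epochs=10, rng=None):
    """Plain mini-batch SGD: each step samples a batch and moves the
    parameters against the averaged gradient grad(w, batch)."""
    rng = rng or np.random.default_rng(0)
    w = w0.copy()
    n = len(data)
    for _ in range(epochs):
        idx = rng.permutation(n)          # reshuffle the data each epoch
        for start in range(0, n, batch_size):
            batch = data[idx[start:start + batch_size]]
            w -= lr * grad(w, batch)      # stochastic gradient step
    return w
```

For instance, fitting the mean of a data set under squared loss corresponds to grad(w, batch) = w - batch.mean(axis=0).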

    Bayesian Particle Tracking of Traffic Flows

    We develop a Bayesian particle filter for tracking traffic flows that is capable of capturing the non-linearities and discontinuities present in flow dynamics. Our model includes a hidden state variable that captures sudden regime shifts between traffic free flow, breakdown and recovery. We develop an efficient particle learning algorithm for real-time on-line inference of states and parameters. This requires a two-step approach: first, resampling the current particles with a mixture predictive distribution; second, propagating the states using the conditional posterior distribution. Particle learning of parameters follows from updating recursions for conditional sufficient statistics. To illustrate our methodology, we analyze measurements of daily traffic flow from the Illinois interstate I-55 highway system. We demonstrate how our filter can be used to infer changes of traffic flow regime on a highway road segment based on measurements from freeway single-loop detectors. Finally, we conclude with directions for future research.
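The resample-then-propagate structure described above can be illustrated on a toy linear-Gaussian AR(1) state-space model (an illustration of the two-step recursion only, not the paper's traffic model or its parameter-learning recursions):

```python
import numpy as np

def particle_filter(y, n_particles=1000, phi=0.9, sig_x=0.5, sig_y=0.5, rng=None):
    """Resample-then-propagate particle filter for the toy model
    x_t = phi*x_{t-1} + N(0, sig_x^2),  y_t = x_t + N(0, sig_y^2)."""
    rng = rng or np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, n_particles)
    means = []
    for yt in y:
        # step 1: resample particles by the predictive likelihood of y_t
        pred = phi * x
        logw = -0.5 * ((yt - pred) / np.hypot(sig_x, sig_y)) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        x = pred[rng.choice(n_particles, n_particles, p=w)]
        # step 2: propagate by drawing x_t from its conditional posterior
        var = 1.0 / (1.0 / sig_x**2 + 1.0 / sig_y**2)
        mu = var * (x / sig_x**2 + yt / sig_y**2)
        x = rng.normal(mu, np.sqrt(var))
        means.append(x.mean())
    return np.array(means)
```

Resampling against the predictive distribution before propagation keeps the particle cloud focused on states compatible with the incoming observation, which matters precisely at the sharp regime shifts the abstract highlights.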

    Deep Learning for Short-Term Traffic Flow Prediction

    We develop a deep learning model to predict traffic flows. The main contribution is the development of an architecture that combines a linear model fitted using ℓ1 regularization and a sequence of tanh layers. The challenge in predicting traffic flows is the sharp nonlinearities due to transitions between free flow, breakdown, recovery and congestion. We show that deep learning architectures can capture these nonlinear spatio-temporal effects. The first layer identifies spatio-temporal relations among predictors and other layers model nonlinear relations. We illustrate our methodology on road sensor data from Interstate I-55 and predict traffic flows during two special events: a Chicago Bears football game and an extreme snowstorm event. Both cases have sharp traffic flow regime changes, occurring very suddenly, and we show how deep learning provides precise short-term traffic flow predictions.
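A minimal forward-pass sketch of such an architecture (weights, layer sizes and the penalty weight lam are illustrative; the ℓ1 penalty acts on the coefficients during fitting, which is not shown here):

```python
import numpy as np

def deep_traffic_predictor(x, Ws, bs):
    """Forward pass through a stack of tanh hidden layers followed by a
    linear read-out, mirroring a 'linear model + sequence of tanh layers'
    architecture. Ws/bs hold per-layer weight matrices and bias vectors."""
    h = x
    for W, b in zip(Ws[:-1], bs[:-1]):
        h = np.tanh(W @ h + b)     # hidden tanh layers: nonlinear relations
    return Ws[-1] @ h + bs[-1]     # final linear layer

def l1_penalty(Ws, lam=0.1):
    """l1 regularization term, encouraging sparse (variable-selecting)
    weights when added to the training loss."""
    return lam * sum(np.abs(W).sum() for W in Ws)
```

In this reading, the first (sparse) layer selects which spatio-temporal predictors matter, and the tanh layers model the nonlinear transitions between flow regimes.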

    Deep Learning: A Bayesian Perspective

    Deep learning is a form of machine learning for nonlinear, high-dimensional pattern matching and prediction. By taking a Bayesian probabilistic perspective, we provide a number of insights into more efficient algorithms for optimisation and hyper-parameter tuning. Traditional high-dimensional data reduction techniques, such as principal component analysis (PCA), partial least squares (PLS), reduced rank regression (RRR) and projection pursuit regression (PPR), are all shown to be shallow learners. Their deep learning counterparts exploit multiple deep layers of data reduction, which provide predictive performance gains. Stochastic gradient descent (SGD) training optimisation and Dropout (DO) regularization provide estimation and variable selection. Bayesian regularization is central to finding weights and connections in networks to optimize the predictive bias-variance trade-off. To illustrate our methodology, we provide an analysis of international bookings on Airbnb. Finally, we conclude with directions for future research.
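The sense in which PCA is a "shallow learner" is concrete: it is a single linear layer of data reduction, with no nonlinearity and no further layers. A minimal sketch:

```python
import numpy as np

def pca_reduce(X, k):
    """PCA as one 'shallow' layer of data reduction: center X and apply a
    single linear projection onto the top-k principal directions."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T  # scores: one linear map, no nonlinearity
```

A deep counterpart would compose several such reductions with nonlinear activations in between, which is the source of the predictive gains the abstract refers to.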