Modern Problems in Mathematical Signal Processing: Quantized Compressed Sensing and Randomized Neural Networks
We study two problems from mathematical signal processing. First, we consider the problem of approximately recovering signals on a smooth, compact manifold from one-bit linear measurements drawn from either a Gaussian ensemble, a partial circulant ensemble, or a bounded orthonormal ensemble, and quantized using Sigma-Delta or distributed noise-shaping schemes. We construct a convex optimization algorithm for signal recovery that, given a Geometric Multi-Resolution Analysis approximation of the manifold, guarantees signal recovery with high probability. We prove an upper bound on the recovery error which outperforms prior works that use memoryless scalar quantization, requires a simpler analysis, and extends the class of measurements beyond Gaussians.
Second, we consider the problem of approximating continuous functions on compact domains using neural networks. The learning speed of feed-forward neural networks is notoriously slow and has presented a bottleneck in deep learning applications for several decades. For instance, gradient-based learning algorithms, which are used extensively to train neural networks, tend to work slowly when all of the network parameters must be iteratively tuned. To counter this, both researchers and practitioners have tried introducing randomness to reduce the learning requirement. Based on the original construction of B. Igelnik and Y. H. Pao, single-layer neural networks with random input-to-hidden layer weights and biases have seen success in practice, but the necessary theoretical justification is lacking. We begin to fill this theoretical gap by providing a (corrected) rigorous proof that the Igelnik and Pao construction is a universal approximator for continuous functions on compact domains, with error convergence rate inversely proportional to the number of network nodes; we then extend this result to the non-asymptotic setting using a concentration inequality for Monte-Carlo integral approximations.
We further adapt this randomized neural network architecture to approximate functions on smooth, compact submanifolds of Euclidean space, providing theoretical guarantees in both the asymptotic and non-asymptotic cases.
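The construction described above (random, frozen input-to-hidden weights with only the output layer trained) can be illustrated with a minimal sketch. The activation, weight distribution, and hyperparameters below are illustrative assumptions, not the exact construction analyzed in the work:

```python
import numpy as np

# Minimal sketch of a single-hidden-layer network in the spirit of the
# Igelnik-Pao construction: input-to-hidden weights and biases are drawn
# at random and frozen; only the output layer is fit, here by least squares.
# The tanh activation, uniform weight distribution, and node count are
# illustrative choices, not the thesis's exact setup.

rng = np.random.default_rng(0)

def fit_random_network(x, y, n_nodes=200, scale=4.0):
    """Draw random hidden features, then solve for output weights."""
    w = rng.uniform(-scale, scale, size=n_nodes)   # random input weights (frozen)
    b = rng.uniform(-scale, scale, size=n_nodes)   # random biases (frozen)
    h = np.tanh(np.outer(x, w) + b)                # hidden-layer activations
    beta, *_ = np.linalg.lstsq(h, y, rcond=None)   # only this layer is trained
    return w, b, beta

def predict(x, w, b, beta):
    return np.tanh(np.outer(x, w) + b) @ beta

# Approximate a continuous target on the compact domain [0, 1].
x_train = np.linspace(0.0, 1.0, 400)
target = lambda t: np.sin(2 * np.pi * t) + 0.5 * t
w, b, beta = fit_random_network(x_train, target(x_train))
err = np.max(np.abs(predict(x_train, w, b, beta) - target(x_train)))
```

Because only the linear output layer is optimized, training reduces to a single least-squares solve rather than iterative tuning of all parameters, which is the speed advantage the abstract alludes to.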
The bracket geometry of statistics
In this thesis we build a geometric theory of Hamiltonian Monte Carlo (HMC), with an emphasis on symmetries, and its bracket generalisations; construct the canonical geometry of smooth measures and Stein operators; and derive the complete recipe of measure-constraints-preserving dynamics and diffusions on arbitrary manifolds.
Specifically, we will explain the central role played by mechanics with symmetries to obtain efficient numerical integrators, and provide a general method to construct explicit integrators for HMC on geodesic orbit manifolds via symplectic reduction.
Following ideas developed by Maxwell, Volterra, Poincaré, de Rham, Koszul, Dufour, Weinstein, and others, we will then show that any smooth distribution generates considerable geometric content, including "musical" isomorphisms between multi-vector fields and twisted differential forms, and a boundary operator, the rotationnel, which, in particular, engenders the canonical Stein operator.
We then introduce the "bracket formalism" and its induced mechanics, a generalisation of Poisson mechanics and gradient flows that provides a general mechanism to associate unnormalised probability densities to flows depending on the score pointwise.
Most importantly, we will characterise all measure-constraints preserving flows on arbitrary manifolds, showing the intimate relation between measure-preserving Nambu mechanics and closed twisted forms.
Our results are canonical. As a special case we obtain the characterisation of measure-preserving bracket mechanical systems and measure-preserving diffusions, thus explaining, and extending to manifolds, the complete recipe of SGMCMC (stochastic gradient Markov chain Monte Carlo).
We will discuss the geometry of Stein operators and extend the density approach by showing these are simply a reformulation of the exterior derivative on twisted forms satisfying Stokes' theorem.
Combining the canonical Stein operator with brackets allows us to naturally recover the Riemannian and diffusion Stein operators as special cases.
Finally, we shall introduce the minimum Stein discrepancy estimators, which provide a unifying perspective of parameter inference based on score matching, contrastive divergence, and minimum probability flow.
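The simplest instance of a measure-preserving diffusion of the kind characterised above is the overdamped Langevin dynamics, whose drift depends on the target only through its score. A minimal sketch, using a standard normal target (score s(x) = -x) and illustrative step-size and chain-length choices that are not taken from the work:

```python
import numpy as np

# Overdamped Langevin dynamics  dX = (1/2) * score(X) dt + dW  leaves the
# target density invariant; discretising it gives the basic SGMCMC-style
# update. Target here: standard normal, so score(x) = -x. Step size and
# chain length are illustrative assumptions.

rng = np.random.default_rng(1)

def langevin_chain(score, x0=0.0, step=0.05, n_steps=200_000):
    x = x0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        noise = rng.standard_normal()
        # Euler-Maruyama discretisation of the Langevin SDE
        x = x + 0.5 * step * score(x) + np.sqrt(step) * noise
        samples[i] = x
    return samples

samples = langevin_chain(score=lambda x: -x)
mean, var = samples.mean(), samples.var()   # should be near 0 and 1
```

Note that the update uses only the score, never the normalising constant, which is why such flows pair naturally with unnormalised densities.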
New Directions for Contact Integrators
Contact integrators are a family of geometric numerical schemes which guarantee the conservation of the contact structure. In this work we review the construction of both the variational and Hamiltonian versions of these methods. We illustrate some of the advantages of geometric integration in the dissipative setting by focusing on models inspired by recent studies in celestial mechanics and cosmology.
Comment: To appear as Chapter 24 in GSI 2021, Springer LNCS 1282
Applications in Electronics Pervading Industry, Environment and Society
This book features the manuscripts accepted for the Special Issue “Applications in Electronics Pervading Industry, Environment and Society—Sensing Systems and Pervasive Intelligence” of the MDPI journal Sensors. Most of the papers come from a selection of the best papers of the 2019 edition of the “Applications in Electronics Pervading Industry, Environment and Society” (APPLEPIES) Conference, which was held in November 2019. All these papers have been significantly enhanced with novel experimental results. The papers give an overview of the trends in research and development activities concerning the pervasive application of electronics in industry, the environment, and society. The focus of these papers is on cyber-physical systems (CPS), with research proposals for new sensor acquisition and ADC (analog-to-digital converter) methods, high-speed communication systems, cybersecurity, big data management, and data processing including emerging machine learning techniques. Physical implementation aspects are discussed, as well as the trade-offs found between functional performance and hardware/system costs.
Mathematics & Statistics 2017 APR Self-Study & Documents
UNM Mathematics & Statistics APR self-study report, review team report, response report, and initial action plan for Spring 2017, fulfilling requirements of the Higher Learning Commission.
Artificial general intelligence: Proceedings of the Second Conference on Artificial General Intelligence, AGI 2009, Arlington, Virginia, USA, March 6-9, 2009
Artificial General Intelligence (AGI) research focuses on the original and ultimate goal of AI – to create broad human-like and transhuman intelligence – by exploring all available paths, including theoretical and experimental computer science, cognitive science, neuroscience, and innovative interdisciplinary methodologies. Due to the difficulty of this task, for the last few decades the majority of AI researchers have focused on what has been called narrow AI – the production of AI systems displaying intelligence regarding specific, highly constrained tasks. In recent years, however, more and more researchers have recognized the necessity – and feasibility – of returning to the original goals of the field. Increasingly, there is a call for a transition back to confronting the more difficult issues of human-level intelligence and, more broadly, artificial general intelligence.