    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. Particular emphasis is placed on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and on their physically meaningful interpretations, which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated in a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher-order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone texts or as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. Comment: 232 pages.
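
    A minimal sketch may help fix the TT idea referenced above: the TT-SVD algorithm factors a d-way array into a chain of small 3-way cores via sequential truncated SVDs, so storage grows linearly rather than exponentially in d. The NumPy implementation below is an illustrative version written for this summary (the tolerance eps and all names are choices made here), not code from the monograph.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Factor `tensor` into TT cores G_k of shape (r_{k-1}, n_k, r_k)."""
    dims = tensor.shape
    cores, rank = [], 1
    mat = tensor.reshape(dims[0], -1)          # unfold along mode 1
    for k in range(len(dims) - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        # keep singular values above a relative threshold -> TT-rank r_k
        r_new = max(1, int(np.sum(s > eps * s[0])))
        cores.append(u[:, :r_new].reshape(rank, dims[k], r_new))
        # absorb S @ V^T, then fold in the next mode for the following SVD
        mat = (s[:r_new, None] * vt[:r_new]).reshape(r_new * dims[k + 1], -1)
        rank = r_new
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

# usage: a rank-1 4-way tensor compresses to TT-ranks (1, 1, 1)
x = np.einsum('i,j,k,l->ijkl', *(np.random.rand(n) for n in (4, 5, 6, 7)))
print([c.shape for c in tt_svd(x)])
```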

    Precision Physics at LEP

    1. Introduction; 2. Small-Angle Bhabha Scattering and the Luminosity Measurement; 3. Z^0 Physics; 4. Fits to Precision Data; 5. Physics at LEP2; 6. Conclusions. Comment: review paper to appear in Rivista del Nuovo Cimento; 160 pages, LaTeX, 70 EPS figures included.

    Enhancing the information content of geophysical data for nuclear site characterisation

    Our knowledge and understanding of the heterogeneous structure and processes occurring in the Earth's subsurface are limited and uncertain. This is true even for the upper 100 m of the subsurface, yet many processes that occur within it (e.g. migration of solutes, landslides, crop water uptake) are important to human activities. Geophysical methods such as electrical resistivity tomography (ERT) greatly improve our ability to observe the subsurface, owing to their higher sampling frequency (especially with autonomous time-lapse systems), larger spatial coverage and less invasive operation, in addition to being more cost-effective than traditional point-based sampling. However, the process of using geophysical data for inference is prone to uncertainty. There is a need to better understand the uncertainties embedded in geophysical data and how they propagate when the data are subsequently used, for example, for hydrological or site-management interpretations and decisions. This understanding is critical to maximizing the information extracted from geophysical data. To this end, in this thesis I examine various aspects of uncertainty in ERT and develop new methods to use geophysical data quantitatively.

    The core of the thesis is based on two literature reviews and three papers. In the first review, I provide a comprehensive overview of the use of geophysical data for nuclear site characterisation, especially in the context of site clean-up and leak detection. In the second review, I survey the various sources of uncertainty in ERT studies and the existing work to quantify or reduce them. I propose that the various steps in the general workflow of an ERT study can be viewed as a pipeline for information and uncertainty propagation, and I suggest that some areas, among them measurement errors, have been understudied.

    In paper 1, I compare various methods to estimate and model ERT measurement errors using two long-term ERT monitoring datasets, and I develop a new error model that accounts for the fact that each electrode is used to make multiple measurements. In paper 2, I present the development and implementation of a new method for geoelectrical leak detection. While existing methods rely on first obtaining resistivity images through inversion of ERT data, the approach described here estimates leak parameters directly from raw ERT data. This is achieved by constructing hydrological models from prior site information, coupling them with an ERT forward model, and then updating the leak (and other hydrological) parameters through data assimilation. The approach shows promising results when applied to data from a controlled injection experiment in Yorkshire, UK; it complements ERT imaging and provides a new way to use ERT data to inform site characterisation.

    In addition to leak detection, ERT is commonly used for monitoring soil moisture in the vadose zone, increasingly in a quantitative manner. Although both the petrophysical relationships (i.e. the choice of appropriate model and parameterization) and the derived moisture content are known to be subject to uncertainty, they are commonly treated as exact and error-free. In paper 3, I examine the impact of uncertain petrophysical relationships on moisture content estimates derived from electrical geophysics. Data from a collection of core samples show that the variability in such relationships can be large; this variability translates into high uncertainty in the moisture content estimates and appears to be the dominant source of uncertainty in many cases. In the closing chapters, I discuss and synthesize the findings of the thesis within the larger context of enhancing the information content of geophysical data, and I provide an outlook on further research on this topic.
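
    The central move of paper 2, estimating leak parameters directly from raw ERT data via data assimilation, can be illustrated with a deliberately toy sketch. The forward model g below is a stand-in for the coupled hydrological + ERT simulator (the real one is far more involved), and the parameter names and noise levels are invented for illustration; only the ensemble-smoother update itself is the generic method.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(theta):
    """Stand-in forward model: leak parameters -> predicted ERT data."""
    rate, depth = theta
    x = np.linspace(0.0, 1.0, 20)              # 20 synthetic measurements
    return rate * np.exp(-depth * x)           # placeholder physics

theta_true = np.array([2.0, 3.0])              # hypothetical (rate, depth)
d_obs = g(theta_true) + rng.normal(0.0, 0.05, 20)
R = 0.05 ** 2 * np.eye(20)                     # data-error covariance

ens = rng.normal([1.0, 2.0], [0.5, 1.0], (500, 2))   # prior ensemble

# one ensemble-smoother update:
# theta_a = theta_f + C_td (C_dd + R)^{-1} (d_pert - g(theta_f))
preds = np.array([g(t) for t in ens])
C = np.cov(np.hstack([ens, preds]).T)          # joint (2+20) covariance
C_td, C_dd = C[:2, 2:], C[2:, 2:]
K = np.linalg.solve(C_dd + R, C_td.T).T        # 2 x 20 Kalman-type gain
d_pert = d_obs + rng.normal(0.0, 0.05, (500, 20))
ens = ens + (d_pert - preds) @ K.T
print("posterior mean:", ens.mean(axis=0), "truth:", theta_true)
```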

    Fast algorithm for real-time rings reconstruction

    The GAP project is dedicated to studying the application of GPUs in several contexts in which a real-time response is needed for decision making. The definition of real time depends on the application under study, with required response times ranging from microseconds up to several hours for very compute-intensive tasks. At this conference we presented our work on low-level triggers [1][2] and high-level triggers [3] in high-energy physics experiments, and on specific applications to nuclear magnetic resonance (NMR) [4][5] and cone-beam CT [6]. Apart from dedicated solutions to decrease the latency due to data transport and preparation, the computing algorithms themselves play an essential role in any GPU application. In this contribution, we present an original algorithm, developed for trigger applications, to accelerate ring reconstruction in RICH detectors when seeds from external trackers are not available.
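
    The abstract does not spell out the algorithm itself, so as a hedged illustration here is one standard seedless building block such trigger code can rest on: an algebraic least-squares (Kasa) circle fit that recovers a ring's centre and radius directly from the photodetector hit coordinates, with no tracker seed. This is a generic method rather than the GAP implementation; being one small linear solve per candidate ring, it parallelizes naturally across GPU threads.

```python
import numpy as np

def kasa_circle_fit(x, y):
    """Least-squares circle through hits (x, y): returns (cx, cy, radius).

    Rewrites  x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
    as a linear system in (cx, cy, c0) and solves it directly."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c0), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c0 + cx ** 2 + cy ** 2)

# usage: noisy hits on a ring of radius 0.19 centred at (0.05, -0.02)
rng = np.random.default_rng(1)
phi = rng.uniform(0, 2 * np.pi, 30)
x = 0.05 + 0.19 * np.cos(phi) + rng.normal(0, 0.003, 30)
y = -0.02 + 0.19 * np.sin(phi) + rng.normal(0, 0.003, 30)
print(kasa_circle_fit(x, y))   # ~ (0.05, -0.02, 0.19)
```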

    Microwave Sensing and Imaging

    In recent years, microwave sensing and imaging have acquired ever-growing importance in several application fields, such as non-destructive evaluation in industry and civil engineering, subsurface prospecting, security, and biomedical imaging. Indeed, microwave techniques allow, in principle, for information to be obtained directly regarding the physical parameters of the inspected targets (dielectric properties, shape, etc.) by using safe electromagnetic radiation and cost-effective systems. Consequently, a great deal of research activity has recently been devoted to the development of efficient and reliable measurement systems, effective data-processing algorithms for solving the underlying electromagnetic inverse scattering problem, and efficient forward solvers to model electromagnetic interactions. Within this framework, this Special Issue aims to provide some insights into recent microwave sensing and imaging systems and techniques.
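
    As a minimal, hedged illustration of why the inverse scattering problem mentioned above needs careful data processing: under the Born approximation the problem linearizes to scattered field ≈ A @ contrast, with A severely ill-conditioned, so a naive least-squares solve amplifies noise and a Tikhonov-regularized solve is the textbook baseline. The matrix A below is synthetic stand-in data with decaying singular values, not a physical forward solver; all names and sizes are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 60, 120                       # measurements, unknown contrast pixels
# synthetic ill-conditioned "forward operator" (decaying singular values)
A = rng.normal(size=(m, n)) @ np.diag(np.exp(-np.linspace(0, 8, n)))
x_true = np.zeros(n)
x_true[40:55] = 1.0                  # a small scatterer
y = A @ x_true + 1e-3 * rng.normal(size=m)   # noisy scattered-field data

alpha = 1e-4                         # regularization weight (hand-tuned)
# Tikhonov-regularized normal equations: (A^T A + alpha I) x = A^T y
x_hat = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)
print("relative error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```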

    Non-Parametric Bayesian Methods for Linear System Identification

    Recent contributions have tackled the linear system identification problem by means of non-parametric Bayesian methods, which build on widely adopted machine learning techniques such as Gaussian process regression and kernel-based regularized regression. Following the Bayesian paradigm, these procedures treat the impulse response of the system to be estimated as the realization of a Gaussian process. Typically, a Gaussian prior accounting for the stability and smoothness of the impulse response is postulated, as a function of some parameters (called hyper-parameters in the Bayesian framework). These are generally estimated by maximizing the so-called marginal likelihood, i.e. the likelihood after the impulse response has been marginalized out. Once the hyper-parameters have been fixed in this way, the final estimator is computed as the conditional expected value of the impulse response w.r.t. the posterior distribution, which coincides with the minimum variance estimator. Assuming that the identification data are corrupted by Gaussian noise, this estimator coincides with the solution of a regularized estimation problem in which the regularization term is the l2 norm of the impulse response, weighted by the inverse of the prior covariance function (a.k.a. the kernel in the machine learning literature). Recent works have shown how such Bayesian approaches are able to jointly perform estimation and model selection, thus overcoming one of the main issues affecting parametric identification procedures, namely model complexity selection. While keeping classical system identification methods (e.g. Prediction Error Methods and subspace algorithms) as a benchmark for numerical comparison, this thesis extends and analyzes some key aspects of the above-mentioned Bayesian procedure. In particular, four main topics are considered.
    1. PRIOR DESIGN. Adopting Maximum Entropy arguments, a new type of l2 regularization is derived: the aim is to penalize the rank of the block Hankel matrix built with Markov coefficients, thus controlling the complexity of the identified model as measured by its McMillan degree. By accounting for the coupling between different input-output channels, this new prior is particularly suited to the identification of MIMO systems. To reduce the computational cost of the estimation algorithm, a tailored version of the Scaled Gradient Projection algorithm is designed to optimize the marginal likelihood.
    2. CHARACTERIZATION OF UNCERTAINTY. The confidence sets returned by the non-parametric Bayesian identification algorithm are analyzed and compared with those returned by parametric Prediction Error Methods. The comparison is carried out in the impulse response space, by deriving "particle" versions (i.e. Monte Carlo approximations) of the standard confidence sets.
    3. ONLINE ESTIMATION. The application of non-parametric Bayesian system identification techniques is extended to an online setting, in which new data become available over time. Specifically, two key modifications of the original "batch" procedure are proposed in order to meet the real-time requirements. In addition, the identification of time-varying systems is tackled by introducing a forgetting factor in the estimation criterion and treating it as a hyper-parameter.
    4. POST-PROCESSING: MODEL REDUCTION. Non-parametric Bayesian identification procedures estimate the unknown system in terms of its impulse response coefficients, thus returning a model with high (possibly infinite) McMillan degree. A tailored procedure is proposed to reduce such a model to one of lower degree, which is more suitable for filtering and control applications. Different criteria for the selection of the order of the reduced model are evaluated and compared.
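
    A minimal sketch of the baseline estimator the thesis builds on may make the pipeline concrete: the impulse response is modelled as a zero-mean Gaussian process with a stable-spline-type "TC" kernel K_ij = c * lam**max(i, j), the hyper-parameters (c, lam) are chosen by maximizing the marginal likelihood (here by a crude grid search), and the estimate is the posterior mean. The toy first-order system, the grid, and the fixed noise variance are simplifications made for this sketch, not choices from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, sig2 = 50, 200, 0.01                   # FIR order, data size, noise var
g_true = 0.8 ** np.arange(1, n + 1)          # true impulse response
u = rng.normal(size=N + n)                   # white-noise input
# regression matrix: row t holds the n past inputs entering y(t)
Phi = np.column_stack([u[n - k - 1 : N + n - k - 1] for k in range(n)])
y = Phi @ g_true + np.sqrt(sig2) * rng.normal(size=N)

def neg_log_marglik(c, lam):
    """Negative log marginal likelihood of y under the TC-kernel prior."""
    i = np.arange(1, n + 1)
    K = c * lam ** np.maximum.outer(i, i)    # TC / stable-spline kernel
    S = Phi @ K @ Phi.T + sig2 * np.eye(N)   # marginal covariance of y
    _, logdet = np.linalg.slogdet(S)
    return 0.5 * (logdet + y @ np.linalg.solve(S, y)), K

# crude grid search over hyper-parameters (c, lam)
best = min((neg_log_marglik(c, lam)
            for c in (0.1, 1.0, 10.0)
            for lam in (0.5, 0.7, 0.9, 0.95)), key=lambda t: t[0])
K = best[1]

# posterior mean = kernel-regularized least-squares estimate
S = Phi @ K @ Phi.T + sig2 * np.eye(N)
g_hat = K @ Phi.T @ np.linalg.solve(S, y)
print("fit:", 1 - np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true))
```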