
    Tensor Regression

    Regression analysis is a key area of data analysis and machine learning devoted to exploring the dependencies between variables, traditionally using vectors. The emergence of high-dimensional data in technologies such as neuroimaging, computer vision, climatology and social networks has brought challenges to traditional data representation methods. Tensors, as high-dimensional extensions of vectors, are natural representations of such data. In this book, the authors provide a systematic study and analysis of tensor-based regression models and their applications in recent years. The book groups and illustrates the existing tensor-based regression methods and covers the basics, core ideas, and theoretical characteristics of most of them. In addition, readers can learn how to use existing tensor-based regression methods to solve specific regression tasks with multiway data, which datasets can be selected, and which software packages are available to start related work as soon as possible. Tensor Regression is the first thorough overview of the fundamentals, motivations, popular algorithms, strategies for efficient implementation, related applications, available datasets, and software resources for tensor-based regression analysis. It is essential reading for all students, researchers and practitioners working on high-dimensional data. Comment: 187 pages, 32 figures, 10 tables
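    As a rough illustration of the simplest model family such a book covers, here is scalar-on-tensor regression with a rank-1 coefficient matrix fitted by alternating least squares, in plain numpy. This is a hedged sketch on synthetic data with names of our choosing, not the book's own algorithm or notation:

    ```python
    import numpy as np

    # Scalar-on-matrix regression y_i = <W, X_i> + noise with a rank-1
    # coefficient W = a b^T, fitted by alternating least squares (ALS).
    rng = np.random.default_rng(0)
    p, q, n = 8, 6, 500
    a_true, b_true = rng.normal(size=p), rng.normal(size=q)
    X = rng.normal(size=(n, p, q))
    y = np.einsum('npq,p,q->n', X, a_true, b_true) + 0.01 * rng.normal(size=n)

    a, b = rng.normal(size=p), rng.normal(size=q)
    for _ in range(50):
        # With b fixed, y is linear in a: the design matrix is X b
        Za = np.einsum('npq,q->np', X, b)
        a = np.linalg.lstsq(Za, y, rcond=None)[0]
        # With a fixed, y is linear in b: the design matrix is X^T a
        Zb = np.einsum('npq,p->nq', X, a)
        b = np.linalg.lstsq(Zb, y, rcond=None)[0]

    W_hat, W_true = np.outer(a, b), np.outer(a_true, b_true)
    rel_err = np.linalg.norm(W_hat - W_true) / np.linalg.norm(W_true)
    ```

    The low-rank constraint is what makes the problem tractable: a full coefficient tensor would need p*q free parameters per output, while the rank-1 factorization needs only p+q.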

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. Comment: 232 pages
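    The tensor-train format emphasized above can be computed with the standard TT-SVD sweep: reshape, take a truncated SVD, absorb the singular values, and move to the next mode. A minimal numpy sketch, assuming a dense input tensor that fits in memory:

    ```python
    import numpy as np

    def tt_decompose(T, eps=1e-10):
        """TT-SVD: split a d-way tensor into d three-way cores by
        sweeping left to right with truncated SVDs."""
        shape, d = T.shape, T.ndim
        cores, r = [], 1
        M = T.reshape(shape[0], -1)
        for k in range(d - 1):
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            rk = int(np.sum(s > eps * s[0]))    # truncated TT-rank at this bond
            cores.append(U[:, :rk].reshape(r, shape[k], rk))
            # Absorb the singular values and expose the next mode
            M = (s[:rk, None] * Vt[:rk]).reshape(rk * shape[k + 1], -1)
            r = rk
        cores.append(M.reshape(r, shape[-1], 1))
        return cores

    def tt_reconstruct(cores):
        T = cores[0]
        for G in cores[1:]:
            T = np.tensordot(T, G, axes=([T.ndim - 1], [0]))
        return T.reshape([G.shape[1] for G in cores])

    # A rank-1 four-way tensor compresses to bond ranks of 1
    vs = [np.arange(1.0, n + 1) for n in (3, 4, 5, 2)]
    T = np.einsum('i,j,k,l->ijkl', *vs)
    cores = tt_decompose(T)
    err = np.linalg.norm(tt_reconstruct(cores) - T) / np.linalg.norm(T)
    ```

    For a d-way tensor with mode sizes n and bond ranks r, the TT cores store d*n*r^2 numbers instead of n^d, which is the super-compression the monograph refers to.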


    Probabilistic forecasting and interpretability in power load applications

    Power load forecasting is a fundamental tool in the modern electric power generation and distribution industry. The ability to accurately predict future behaviours of the grid, both in the short and long term, is vital in order to adequately meet demand and scaling requirements. Over the past few decades, Machine Learning (ML) has taken center stage in this context, with an emphasis on short-term forecasting using both traditional ML and Deep Learning (DL) models. In this dissertation, we approach forecasting not only from the angle of improving predictive accuracy, but also with the goal of gaining interpretability of the behavior of the electric load through models that can offer deeper insight and extract useful information. For this reason, we focus on the use of probabilistic models, which can shed light on valuable information about the underlying structure of the data through the interpretation of their parameters. Furthermore, the use of probabilistic models intrinsically provides us with a way of measuring the confidence in our predictions through the predictive variance. Throughout the dissertation we focus on two specific ideas within the greater field of power load forecasting, which comprise our main contributions. The first contribution addresses the notion of power load profiling, in which ML is used to identify profiles that represent distinct behaviours in the power load data. These profiles have two fundamental uses: first, they can be valuable interpretability tools, as they offer simple yet powerful descriptions of the underlying patterns hidden in the time series data; second, they can improve forecasting accuracy by allowing us to train specialized predictive models tailored to each individual profile.
However, in most of the literature, profiling and prediction are performed sequentially, with an initial clustering algorithm identifying profiles in the input data and a subsequent prediction stage where independent regressors are trained on each profile. In this dissertation we propose a novel probabilistic approach that couples the profiling and predictive stages by jointly fitting a clustering model and multiple linear regressors. In training, both the clustering of the input data and the fitting of the regressors to the output data influence each other through a joint likelihood function, resulting in a set of clusters that is much better suited to the prediction task and is therefore much more relevant and informative. The model is tested on two real-world power load databases, provided by the regional transmission organizations ISO New England and PJM Interconnection LLC, in a 24-hour-ahead prediction scenario. We achieve better performance than other state-of-the-art approaches while arriving at more consistent and informative profiles of the power load data. Our second contribution applies the idea of multi-task prediction to the context of 24-hour-ahead forecasting. In a multi-task prediction problem there are multiple outputs that are assumed to be correlated in some way. Identifying and exploiting these relationships can result in much better performance as well as a better understanding of a multi-task problem. Even though the load forecasting literature is scarce on this subject, it seems natural to assume that there exist important correlations between the outputs in a 24-hour prediction scenario. To tackle this, we develop a multi-task Gaussian process model that addresses the relationships between the outputs by assuming the existence of, and subsequently estimating, both an inter-task covariance matrix and a multi-task noise covariance matrix that capture these important interactions.
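The coupled profiling-and-prediction idea can be sketched, in heavily simplified form, as a mixture of linear regressors fitted by EM: cluster responsibilities and per-cluster regression weights update each other through the joint likelihood. This is a generic stand-in on toy data, not the thesis's exact model:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two hidden "profiles", each with its own linear input-output relationship
n, K = 400, 2
x = rng.uniform(-1, 1, size=n)
z = rng.integers(0, K, size=n)                  # true (unobserved) profile
slopes, biases = np.array([3.0, -2.0]), np.array([0.0, 1.0])
y = slopes[z] * x + biases[z] + 0.1 * rng.normal(size=n)

X = np.column_stack([x, np.ones(n)])            # design matrix [x, 1]
W = np.array([[1.0, 0.0], [-1.0, 0.5]])         # per-profile [slope, bias] init
pi, sigma2 = np.full(K, 1.0 / K), 1.0

for _ in range(100):
    # E-step: responsibilities from the joint clustering + regression likelihood
    resid = y[:, None] - X @ W.T                # (n, K) residuals per profile
    logp = np.log(pi) - 0.5 * resid**2 / sigma2
    R = np.exp(logp - logp.max(axis=1, keepdims=True))
    R /= R.sum(axis=1, keepdims=True)
    # M-step: weighted least squares per profile, then noise and mixing weights
    for k in range(K):
        Xw = X * R[:, k:k + 1]
        W[k] = np.linalg.solve(Xw.T @ X + 1e-8 * np.eye(2), Xw.T @ y)
    resid = y[:, None] - X @ W.T
    sigma2 = np.sum(R * resid**2) / n
    pi = R.mean(axis=0)

est = W[np.argsort(W[:, 0])]                    # profiles sorted by slope
```

Because the responsibilities depend on the regression residuals, the clusters that emerge are exactly those that make the per-profile regressors accurate, which is the point of the joint fit.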
Our model improves on other multi-task Gaussian process approaches in that it greatly reduces the number of parameters to be inferred while maintaining the interpretability provided by the estimation and visualization of the multi-task covariance matrices. We first test our model on a wide selection of synthetic and real-world multi-task problems with excellent results. We then apply it to a 24-hour-ahead power load forecasting scenario using the ISO New England database, outperforming other standard multi-task Gaussian processes and providing useful visual information through the estimation of the covariance matrices. PhD Programme in Multimedia and Communications, Universidad Carlos III de Madrid and Universidad Rey Juan Carlos. Committee: Pablo Martínez Olmos (chair), Pablo Muñoz Moreno (secretary), José Palacio (member).
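The key ingredient of the second contribution, an inter-task covariance combined with an input kernel, can be illustrated with a minimal Kronecker-structured multi-task GP in plain numpy. The data, kernel, and covariance values below are toy choices of ours, not the thesis's estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 40, 3
x = np.linspace(0, 1, n)[:, None]

def rbf(a, b, ell=0.2):
    return np.exp(-0.5 * ((a - b.T) / ell) ** 2)

Kx = rbf(x, x)                                  # input (time) covariance
Kt = np.array([[1.00, 0.95, 0.95],              # inter-task covariance:
               [0.95, 1.00, 0.92],              # tasks strongly correlated
               [0.95, 0.92, 1.00]])
K = np.kron(Kt, Kx)                             # joint covariance, (T*n, T*n)

# Sample three correlated task functions, observe tasks 0 and 1 only,
# and predict task 2 purely through the inter-task correlations.
noise = 0.05
f = np.linalg.cholesky(K + 1e-8 * np.eye(T * n)) @ rng.normal(size=T * n)
y = f + noise * rng.normal(size=T * n)

train, test = np.arange(2 * n), np.arange(2 * n, 3 * n)
Ktr = K[np.ix_(train, train)] + noise**2 * np.eye(2 * n)
mu = K[np.ix_(test, train)] @ np.linalg.solve(Ktr, y[train])

rmse_gp = np.sqrt(np.mean((mu - f[test]) ** 2))
rmse_prior = np.sqrt(np.mean(f[test] ** 2))     # predicting the prior mean
```

The Kronecker structure is what keeps the parameter count low: the task relationships live entirely in the small T-by-T matrix Kt, which can also be visualized directly, mirroring the interpretability argument in the abstract.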

    Reasoning with Uncertainty in Deep Learning for Safer Medical Image Computing

    Deep learning is now ubiquitous in the research field of medical image computing. As such technologies progress towards clinical translation, the question of safety becomes critical. Once deployed, machine learning systems unavoidably face situations where the correct decision or prediction is ambiguous. However, current methods disproportionately rely on deterministic algorithms and lack a mechanism to represent and manipulate uncertainty. In safety-critical applications such as medical imaging, reasoning under uncertainty is crucial for developing a reliable decision making system. Probabilistic machine learning provides a natural framework to quantify the degree of uncertainty over different variables of interest, be it the prediction, the model parameters and structures, or the underlying data (images and labels). Probability distributions are used to represent all the uncertain unobserved quantities in a model and how they relate to the data, and probability theory is used as a language to compute and manipulate these distributions. In this thesis, we explore probabilistic modelling as a framework to integrate uncertainty information into deep learning models, and demonstrate its utility in various high-dimensional medical imaging applications. In the process, we make several fundamental enhancements to current methods. We categorise our contributions into three groups according to the types of uncertainties being modelled: (i) predictive, (ii) structural, and (iii) human uncertainty. Firstly, we discuss the importance of quantifying predictive uncertainty and understanding its sources for developing a risk-averse and transparent medical image enhancement application. We demonstrate how a measure of predictive uncertainty can be used as a proxy for the predictive accuracy in the absence of ground-truths.
Furthermore, assuming the structure of the model is flexible enough for the task, we introduce a way to decompose the predictive uncertainty into its orthogonal sources, i.e. aleatoric and parameter uncertainty. We show the potential utility of such decoupling in providing quantitative "explanations" of model performance. Secondly, we introduce our recent attempts at learning model structures directly from data. One work proposes a method based on variational inference to learn a posterior distribution over connectivity structures within a neural network architecture for multi-task learning, and shares some preliminary results on the MR-only radiotherapy planning application. Another work explores how the training algorithm of decision trees can be extended to grow the architecture of a neural network to adapt to the available data and the complexity of the task. Lastly, we develop methods to model the "measurement noise" (e.g., biases and skill levels) of human annotators, and integrate this information into the learning process of the neural network classifier. In particular, we show that explicitly modelling the uncertainty involved in the annotation process not only leads to an improvement in robustness to label noise, but also yields useful insights into the patterns of errors that characterise individual experts.
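The "measurement noise" idea in the last paragraph can be sketched with a tiny example: noisy labels are generated through a hypothetical annotator confusion matrix, and EM inverts that corruption to recover the underlying class distribution better than raw label frequencies do. The numbers are our toy choices, not the thesis's model:

```python
import numpy as np

rng = np.random.default_rng(3)
p_true = np.array([0.7, 0.2, 0.1])              # underlying class distribution

# Hypothetical annotator: row r is the distribution of the label they
# report when the true class is r (they often confuse classes 0 and 1)
C = np.array([[0.80, 0.15, 0.05],
              [0.20, 0.70, 0.10],
              [0.05, 0.05, 0.90]])

# Simulate noisy labels: true class ~ p_true, observed label ~ C[true]
N = 20000
true_cls = rng.choice(3, size=N, p=p_true)
obs = np.array([rng.choice(3, p=C[c]) for c in true_cls])
counts = np.bincount(obs, minlength=3)

p_emp = counts / N                              # ignores the noise model
# EM: infer p by treating the true class as a latent variable
p_hat = np.full(3, 1 / 3)
for _ in range(200):
    q = C * p_hat[:, None]                      # joint P(true=r, obs=k)
    q /= q.sum(axis=0, keepdims=True)           # responsibilities P(true|obs=k)
    p_hat = q @ counts / N

err_emp = np.abs(p_emp - p_true).sum()
err_hat = np.abs(p_hat - p_true).sum()
```

The same decoding logic, applied per annotator and per image inside a neural network's loss, is what allows the error patterns of individual experts to be separated from the signal.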

    A unified framework for machine learning collective variables for enhanced sampling simulations: mlcolvar

    Identifying a reduced set of collective variables is critical for understanding atomistic simulations and accelerating them through enhanced sampling techniques. Recently, several methods have been proposed to learn these variables directly from atomistic data. Depending on the type of data available, the learning process can be framed as dimensionality reduction, classification of metastable states or identification of slow modes. Here we present mlcolvar, a Python library that simplifies the construction of these variables and their use in the context of enhanced sampling through a contributed interface to the PLUMED software. The library is organized modularly to facilitate the extension and cross-contamination of these methodologies. In this spirit, we developed a general multi-task learning framework in which multiple objective functions and data from different simulations can be combined to improve the collective variables. The library's versatility is demonstrated through simple examples that are prototypical of realistic scenarios.
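    As a flavor of the "classification of metastable states" route to collective variables, the kind of linear baseline that neural-network discriminant CVs generalize, here is a Fisher-discriminant CV in plain numpy. This is a generic sketch on synthetic descriptors, not the mlcolvar API:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    # Synthetic descriptors (e.g. distances, angles) sampled in two metastable states
    state_A = rng.normal([0.0, 0.0, 0.0], 0.3, size=(200, 3))
    state_B = rng.normal([1.0, 0.5, 0.0], 0.3, size=(200, 3))

    # Fisher discriminant: the linear combination of descriptors that best
    # separates the two states relative to their within-state fluctuations
    Sw = np.cov(state_A.T) + np.cov(state_B.T)        # within-state scatter
    w = np.linalg.solve(Sw, state_A.mean(0) - state_B.mean(0))
    w /= np.linalg.norm(w)                            # unit-norm CV weights

    cv_A, cv_B = state_A @ w, state_B @ w             # CV values in each state
    gap = abs(cv_A.mean() - cv_B.mean())
    ```

    A CV of this linear form can be biased directly in enhanced sampling, since its gradient with respect to the descriptors is just the constant weight vector.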