
    The Adaptive-Clustering and Error-Correction Method for Forecasting Cyanobacteria Blooms in Lakes and Reservoirs

    Cyanobacteria blooms occur frequently worldwide, and effective prediction of cyanobacteria blooms in lakes and reservoirs could constitute an essential proactive strategy for water-resource protection. However, cyanobacteria blooms are highly complicated because of the internal stochastic nature of the system's evolution and the external uncertainty of the observation data. In this study, an adaptive-clustering algorithm is introduced to obtain typical operating intervals. In addition, the number of nearest neighbors used for modeling is optimized by particle swarm optimization. Finally, a fuzzy linear regression method based on error correction is used to revise the model dynamically near the operating point. We found that the combined method can characterize the evolutionary track of cyanobacteria blooms in lakes and reservoirs. The model constructed in this paper is compared with other cyanobacteria-bloom forecasting methods (e.g., phase-space reconstruction and traditional-clustering linear regression), and the average relative error and average absolute error are used to compare the accuracies of these models. The results suggest that the proposed model is superior. As such, the newly developed approach achieves more precise predictions, which can be used to prevent further deterioration of the water environment.
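
    As a rough illustration of the neighbour-selection step described above, the sketch below tunes the number of nearest neighbours k for a local regression forecaster with a minimal particle swarm optimizer. The data, variable names, and PSO settings are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch (not the authors' code): tuning the number of nearest
# neighbours k for a kNN forecaster with a minimal particle swarm optimizer,
# using synthetic stand-ins for water-quality predictors.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 4))            # hypothetical predictors (e.g. temperature, nutrients)
y = X @ np.array([0.5, 1.0, -0.3, 0.2]) + 0.05 * rng.standard_normal(200)

def fitness(k):
    # Cross-validated negative MSE of a kNN model; higher is better.
    k = int(np.clip(round(k), 1, 50))
    model = KNeighborsRegressor(n_neighbors=k)
    return cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()

# Minimal PSO over the single parameter k
n_particles, iters = 10, 30
pos = rng.uniform(1, 50, n_particles)
vel = np.zeros(n_particles)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()]

for _ in range(iters):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 1, 50)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()]

print("selected number of neighbours:", int(round(gbest)))
```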

    Large Scale Kernel Methods for Fun and Profit

    Kernel methods are among the most flexible classes of machine learning models with strong theoretical guarantees. Wide classes of functions can be approximated arbitrarily well with kernels, while fast convergence and learning rates have been formally shown to hold. Exact kernel methods are known to scale poorly with increasing dataset size, and we believe that one of the factors limiting their usage in modern machine learning is the lack of scalable and easy-to-use algorithms and software. The main goal of this thesis is to study kernel methods from the point of view of efficient learning, with particular emphasis on large-scale data, but also on low-latency training and user efficiency. We improve the state-of-the-art for scaling kernel solvers to datasets with billions of points using the Falkon algorithm, which combines random projections with fast optimization. Running it on GPUs, we show how to fully utilize available computing power for training kernel machines. To boost the ease of use of approximate kernel solvers, we propose an algorithm for automated hyperparameter tuning. By minimizing a penalized loss function, a model can be learned together with its hyperparameters, reducing the time needed for user-driven experimentation. In the setting of multi-class learning, we show that – under stringent but realistic assumptions on the separation between classes – a wide set of algorithms needs far fewer data points than in the more general setting (without assumptions on class separation) to reach the same accuracy. The first part of the thesis develops a framework for efficient and scalable kernel machines. This raises the question of whether our approaches can be used successfully in real-world applications, especially compared to alternatives based on deep learning which are often deemed hard to beat. The second part aims to investigate this question on two main applications, chosen because of the paramount importance of having an efficient algorithm. First, we consider the problem of instance segmentation of images taken from the iCub robot. Here Falkon is used as part of a larger pipeline, but the efficiency afforded by our solver is essential to ensure smooth human-robot interactions. In the second instance, we consider time-series forecasting of wind speed, analysing the relevance of different physical variables to the predictions themselves. We investigate different schemes to adapt i.i.d. learning to the time-series setting. Overall, this work aims to demonstrate, through novel algorithms and examples, that kernel methods are up to computationally demanding tasks, and that there are concrete applications in which their use is warranted and more efficient than that of other, more complex, and less theoretically grounded models.
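
    As a rough sketch of the random-projection idea behind scalable solvers such as Falkon, the snippet below implements plain Nyström kernel ridge regression in NumPy. It is not the Falkon algorithm or its library API; all sizes, data, and names are made up for illustration.

```python
# Minimal Nyström kernel ridge regression: approximate a kernel machine on n
# points using only m << n randomly chosen inducing points (illustrative only).
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
n, d, m, lam = 5000, 10, 200, 1e-3          # m << n inducing points
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

centers = X[rng.choice(n, m, replace=False)]  # random subset of training points
Knm = gaussian_kernel(X, centers)             # n x m cross-kernel
Kmm = gaussian_kernel(centers, centers)       # m x m kernel between centres

# Solve the Nyström normal equations: (Knm^T Knm + n*lam*Kmm) alpha = Knm^T y
A = Knm.T @ Knm + n * lam * Kmm
alpha = np.linalg.solve(A + 1e-10 * np.eye(m), Knm.T @ y)

X_test = rng.standard_normal((5, d))
pred = gaussian_kernel(X_test, centers) @ alpha
print(pred)
```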

    Proceedings, MSVSCC 2011

    Proceedings of the 5th Annual Modeling, Simulation & Visualization Student Capstone Conference held on April 14, 2011 at VMASC in Suffolk, Virginia. 186 pp

    Untangling hotel industry’s inefficiency: An SFA approach applied to a renowned Portuguese hotel chain

    This paper explores the technical efficiency of four hotels from the Teixeira Duarte Group, a renowned Portuguese hotel chain. An efficiency ranking of these four hotel units, all located in Portugal, is established using Stochastic Frontier Analysis. This methodology makes it possible to discriminate between measurement error and systematic inefficiency in the estimation process, enabling investigation of the main causes of inefficiency. Several suggestions for efficiency improvement are offered for each hotel studied.
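
    For reference, a generic Cobb-Douglas stochastic frontier specification of the kind typically estimated in such studies is sketched below; this is an assumed textbook form, not necessarily the exact model used in the paper.

```latex
% Illustrative stochastic frontier model for hotel i with inputs x_{ki}:
\ln y_i \;=\; \beta_0 + \sum_{k} \beta_k \ln x_{ki} + v_i - u_i,
\qquad v_i \sim \mathcal{N}(0,\sigma_v^2), \quad u_i \ge 0.
% v_i captures measurement error and random noise, u_i captures systematic
% technical inefficiency; technical efficiency is recovered as
\mathrm{TE}_i \;=\; \exp(-u_i).
```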

    Generalised Langevin equation: asymptotic properties and numerical analysis

    In this thesis we concentrate on instances of the generalised Langevin equation (GLE) which can be represented in a Markovian form in an extended phase space. We extend previous results on the geometric ergodicity of this class of GLEs using Lyapunov techniques, which allows us to conclude ergodicity for a large class of GLEs relevant to molecular dynamics applications. The main body of this thesis concerns the numerical discretisation of the GLE in the extended phase space representation. We generalise numerical discretisation schemes which have been previously proposed for the underdamped Langevin equation and which are based on a decomposition of the vector field into a Hamiltonian part and a linear SDE. Certain desirable properties regarding the accuracy of configurational averages of these schemes are inherited in the GLE context. We also rigorously prove geometric ergodicity on bounded domains by showing that a uniform minorisation condition and a uniform Lyapunov condition are satisfied for sufficiently small timestep size. We show that the discretisation schemes which we propose behave consistently in the white noise and overdamped limits; hence, we provide a family of universal integrators for Langevin dynamics. Finally, we consider multiple-time stepping schemes making use of a decomposition of the fluctuation-dissipation term into a reversible and non-reversible part. These schemes are designed to efficiently integrate instances of the GLE whose Markovian representation involves a high number of auxiliary variables or a configuration-dependent fluctuation-dissipation term. We also consider an application of dynamics based on the GLE in the context of large-scale Bayesian inference as an extension of previously proposed adaptive thermostat methods. In these methods the gradient of the log posterior density is only evaluated on a subset (minibatch) of the whole dataset, which is randomly selected at each timestep. Incorporating a memory kernel in the adaptive thermostat formulation ensures that time-correlated gradient noise is dissipated in accordance with the fluctuation-dissipation theorem. This allows us to relax the requirement of using i.i.d. minibatches and to explore a variety of minibatch sampling approaches.
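
    For context, one common form of the GLE and of a Markovian representation in an extended phase space is sketched below; the notation is illustrative and does not reproduce the thesis's exact formulation.

```latex
% Generalised Langevin equation (unit mass) with memory kernel K and random
% force F obeying a fluctuation-dissipation relation:
\dot q = p, \qquad
\dot p = -\nabla U(q) - \int_0^t K(t-s)\, p(s)\, \mathrm{d}s + F(t),
\qquad \langle F(t)\, F(s)^{\top} \rangle = \beta^{-1} K(t-s).
% If the kernel has the quasi-Markovian form K(t) = \Lambda e^{-A t} \Lambda^{\top},
% the dynamics admit a Markovian representation with auxiliary variables z:
\dot q = p, \qquad
\dot p = -\nabla U(q) + \Lambda z, \qquad
\mathrm{d}z = \bigl(-\Lambda^{\top} p - A z\bigr)\,\mathrm{d}t + \Sigma\, \mathrm{d}W_t .
```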

    Numerical Integration as and for Probabilistic Inference

    Numerical integration, or quadrature, is one of the workhorses of modern scientific computing and a key operation to perform inference in intractable probabilistic models. The epistemic uncertainty about the true value of an analytically intractable integral identifies the integration task as an inference problem itself. Indeed, numerical integration can be cast as a probabilistic numerical method known as Bayesian quadrature (BQ). BQ leverages structural assumptions about the function to be integrated via properties encoded in the prior. A posterior belief over the unknown integral value emerges by conditioning the BQ model on an actively selected point set and corresponding function evaluations. Iterative updates to the Bayesian model turn BQ into an adaptive quadrature method that quantifies its uncertainty about the solution of the integral in a principled way. This thesis traces out the scope of probabilistic integration methods and highlights types of integration tasks that BQ excels at. These arise when sample efficiency is required and when encodable prior knowledge is available about an integration problem of low to moderate dimensionality. The first contribution addresses transfer learning with BQ. It extends the notion of active learning schemes to cost-sensitive settings where cheap approximations to an expensive-to-evaluate integrand are available. The degeneracy of acquisition policies in simple BQ is lifted upon generalization to the multi-source, cost-sensitive setting. This motivates the formulation of a set of desirable properties for BQ acquisition functions. A second application considers integration tasks arising in statistical computations on Riemannian manifolds that have been learned from data. Unsupervised learning algorithms that respect the intrinsic geometry of the data rely on the repeated estimation of expensive and structured integrals. Our custom-made active BQ scheme outperforms conventional integration tools for Riemannian statistics. Despite their undeniable benefits, BQ schemes provide limited flexibility to construct suitable priors while keeping the inference step tractable. In a final contribution, we identify the ubiquitous integration problem of computing multivariate normal probabilities as a type of integration task that is structurally taxing for BQ. The method proposed instead is an elegant algorithm based on Markov chain Monte Carlo that permits both sampling from and estimating the normalization constant of linearly constrained Gaussians that contain an arbitrarily small probability mass. As a whole, this thesis contributes to the wider goal of advancing integration algorithms to satisfy the needs imposed by contemporary probabilistic machine learning applications.
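
    As a toy illustration of vanilla BQ (not of the thesis's contributions), the sketch below places a GP prior with an RBF kernel on a one-dimensional integrand and integrates it against a Gaussian measure, for which the kernel mean has a closed form; the integrand, lengthscale, and node placement are arbitrary choices.

```python
# One-dimensional Bayesian quadrature sketch: GP prior with an RBF kernel,
# integrated against the Gaussian measure N(0, s^2) (illustrative only).
import numpy as np

ell, s = 0.5, 1.0                       # kernel lengthscale, measure std
f = lambda x: np.sin(3 * x) + x ** 2    # assumed example integrand

X = np.linspace(-2, 2, 12)              # evaluation nodes
y = f(X)

# Gram matrix of the RBF kernel at the nodes (with jitter for stability)
K = np.exp(-(X[:, None] - X[None, :]) ** 2 / (2 * ell ** 2)) + 1e-10 * np.eye(len(X))
# Kernel mean z_i = \int k(x, x_i) N(x; 0, s^2) dx, closed form for RBF kernels
z = np.sqrt(ell ** 2 / (ell ** 2 + s ** 2)) * np.exp(-X ** 2 / (2 * (ell ** 2 + s ** 2)))

w = np.linalg.solve(K, z)               # BQ weights
mean = w @ y                            # posterior mean of the integral
var = np.sqrt(ell ** 2 / (ell ** 2 + 2 * s ** 2)) - z @ w   # posterior variance

# Monte Carlo check of E_{x ~ N(0, s^2)}[f(x)]
mc = f(np.random.default_rng(0).normal(0, s, 200000)).mean()
print(mean, var, mc)
```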