720 research outputs found
VIVA: An Online Algorithm for Piecewise Curve Estimation Using ℓ0 Norm Regularization
Many processes deal with piecewise input functions, which occur naturally as a result of digital commands, user interfaces requiring a confirmation action, or discrete-time sampling. Examples include the assembly of protein polymers and hourly adjustments to the infusion rate of IV fluids during treatment of burn victims. Estimation of the input is straightforward regression when the observer has access to the timing information. More work is needed if the input can change at unknown times. Successful recovery of the change timing is largely dependent on the choice of cost function minimized during parameter estimation.
Optimal estimation of a piecewise input will often proceed by minimization of a cost function which includes an estimation error term (most commonly mean square error) and the number (cardinality) of input changes (number of commands). Because the cardinality (ℓ0 norm) is not convex, the ℓ2 norm (quadratic smoothing) and ℓ1 norm (total variation minimization) are often substituted because they permit the use of convex optimization algorithms. However, these penalize the magnitude of input changes and therefore bias the piecewise estimates. Another disadvantage is that global optimization methods must be run after the end of data collection.
One approach to unbiasing the piecewise parameter fits would include application of total variation minimization to recover timing, followed by piecewise parameter fitting. Another method is presented herein: a dynamic programming approach which iteratively develops populations of candidate estimates of increasing length, pruning those proven to be dominated. Because the use of input data is entirely causal, the algorithm recovers timing and parameter values online. A functional definition of the algorithm, which is an extension of Viterbi decoding and integrates the pruning concept from branch-and-bound, is presented. Modifications are introduced to improve handling of non-uniform sampling, non-uniform confidence, and burst errors. Performance tests using synthesized data sets as well as volume data from a research system recording fluid infusions show five-fold (piecewise-constant data) and twenty-fold (piecewise-linear data) reduction in error compared to total variation minimization, along with improved sparsity and reduced sensitivity to the regularization parameter. Algorithmic complexity and delay are also considered.
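The minimization the abstract describes, squared error plus an ℓ0 penalty on the number of input changes, can be solved exactly in the offline, piecewise-constant case by the classic optimal-partitioning dynamic program. The sketch below is a generic illustration of that recursion, not the VIVA algorithm itself (which runs online and prunes dominated candidates); the function name and interface are illustrative:

```python
import numpy as np

def l0_piecewise_constant(y, lam):
    """Fit a piecewise-constant signal to y by minimizing
    sum of squared errors + lam * (number of change points),
    using the classic optimal-partitioning dynamic program."""
    n = len(y)
    # Prefix sums give O(1) cost of fitting y[i:j] with its mean.
    s = np.concatenate(([0.0], np.cumsum(y)))
    s2 = np.concatenate(([0.0], np.cumsum(np.asarray(y, float) ** 2)))

    def seg_cost(i, j):
        # Squared error of y[i:j] around its own mean.
        m = j - i
        return s2[j] - s2[i] - (s[j] - s[i]) ** 2 / m

    best = np.full(n + 1, np.inf)  # best[j] = optimal cost of y[:j]
    best[0] = -lam                 # cancels the penalty of the first segment
    prev = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + seg_cost(i, j) + lam
            if c < best[j]:
                best[j], prev[j] = c, i
    # Backtrack to recover (start, end, mean) for each segment.
    segs, j = [], n
    while j > 0:
        i = prev[j]
        segs.append((i, j, (s[j] - s[i]) / (j - i)))
        j = i
    return segs[::-1]
```

Each step considers every possible location of the last change point, giving O(n²) segment evaluations; pruning in the branch-and-bound spirit the abstract mentions discards candidates proven to be dominated and is what makes an online variant practical.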
7th International Conference on Nonlinear Vibrations, Localization and Energy Transfer: Extended Abstracts
The purpose of our conference is more than ever to promote exchange and discussions between scientists from all around the world about the latest research developments in the area of nonlinear vibrations, with a particular emphasis on the concept of nonlinear normal modes and targeted energy transfer.
The numerical simulation of nonlinear waves in a hydrodynamic model test basin
This thesis describes the development of a numerical algorithm for the fully nonlinear simulation of free-surface waves. The aim of the research is to develop, implement and investigate an algorithm for the deterministic and accurate simulation of two-dimensional nonlinear water waves in a model test basin. The simulated wave field may have a broad-banded spectrum and the simulations should be carried out by an efficient algorithm in order to be applicable in practical situations. The algorithm is based on a combination of Runge-Kutta (for time integration), Finite Element (boundary value problem) and Finite Difference (velocity recovery) methods. The scheme is further refined and investigated using different models for the generation, propagation and absorption of waves.
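As a minimal illustration of the time-integration building block named above (a generic sketch, not the thesis implementation), one classical fourth-order Runge-Kutta step for an ODE y' = f(t, y) looks like:

```python
def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y).
    In a scheme like the one described, each stage evaluation of f
    would itself involve solving a boundary value problem."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

The fourth-order accuracy of this step is what keeps phase errors small over the long propagation distances of a model test basin.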
Statistical Inference via Convex Optimization
Reverse mathematics is a new field that seeks to find the axioms needed to prove given theorems. Reverse mathematics began as a technical field of mathematical logic, but its main ideas have precedents in the ancient field of geometry and the early twentieth-century field of set theory. This book offers a historical and representative view, emphasizing basic analysis and giving a novel approach to logic. It concludes that mathematics is an arena where theorems cannot always be proved outright, but in which all of their logical equivalents can be found. This creates the possibility of reverse mathematics, where one seeks equivalents that are suitable as axioms. By using a minimum of mathematical logic in a well-motivated way, the book will engage advanced undergraduates and all mathematicians interested in the foundations of mathematics.
Information Losses in Neural Classifiers With Applications to Training Data Selection Strategies and Cyber Physical Systems
This dissertation considers the subject of information losses arising from finite datasets used in the training of neural classifiers. It proves a relationship between such losses and the product of the expected total variation of the estimated neural model with the information about the feature space contained in the hidden representation of that model. It then bounds this expected total variation as a function of the size of randomly sampled datasets in a fairly general setting, and without bringing in any additional dependence on model complexity. It ultimately obtains bounds on information losses that are less sensitive to input compression and much tighter than existing bounds. It then uses these bounds to explain some recent experimental findings of information compression in neural networks which cannot be explained by previous work. The dissertation goes on to provide analytical derivations for the relationship between neural architectures and the mutual information contained in their representations, which can be useful for guided architecture selection schemes. It then uses these developments to propose and illustrate a new framework for analyzing training data selection methods. The dissertation uses this framework to prove that facility location methods reduce these losses, and then derives a new data-dependent bound on them. This bound can be used to evaluate datasets and acts as an additional analytical tool for the study of data selection techniques. The dissertation then applies this theory to the problem of Phase Identification in power distribution systems. In particular, it focuses on improving supervised learning accuracies by exploiting some of the problem's information theoretic properties. This focus, along with the advances developed earlier in this work, helps us create two new Phase Identification techniques. The first transforms the bound on information losses into a data selection technique.
This is important because phase identification data labels are difficult to obtain in practice. The second interprets the properties of distribution systems in the terms of the information losses developed earlier in the dissertation. This allows us to obtain an improvement in the representation learned by any classifier applied to the problem. Furthermore, since many problems in cyber-physical systems share similarities to the physical properties of phase identification exploited in this dissertation, the techniques can be applied to a wide range of similar problems.
New Directions for Contact Integrators
Contact integrators are a family of geometric numerical schemes which guarantee the conservation of the contact structure. In this work we review the construction of both the variational and Hamiltonian versions of these methods. We illustrate some of the advantages of geometric integration in the dissipative setting by focusing on models inspired by recent studies in celestial mechanics and cosmology. (To appear as Chapter 24 in GSI 2021, Springer LNCS 1282.)
New perspectives and applications for greedy algorithms in machine learning
Approximating probability densities is a core problem in Bayesian statistics, where the inference involves the computation of a posterior distribution. Variational Inference (VI) is a technique to approximate posterior distributions through optimization. It involves specifying a set of tractable densities, out of which the final approximation is to be chosen. While VI is traditionally motivated with the goal of tractability, the focus in this dissertation is to use Bayesian approximation to obtain parsimonious distributions. With this goal in mind, we develop greedy algorithm variants and study their theoretical properties by establishing novel connections of the resulting optimization problems in parsimonious VI with traditional studies in the discrete optimization literature. Specific realizations lead to efficient solutions for many sparse probabilistic models such as sparse regression, sparse PCA, and sparse Collective Matrix Factorization (CMF). For cases where existing results are insufficient to provide acceptable approximation guarantees, we extend the optimization results for some large scale algorithms to a much larger class of functions. The developed methods are applied to both simulated and real-world datasets, including high dimensional functional Magnetic Resonance Imaging (fMRI) datasets, and to the real-world tasks of interpreting data exploration and model predictions.
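As a concrete example of the greedy template this line of work builds on, Orthogonal Matching Pursuit solves sparse regression by repeatedly adding the single most useful atom and re-fitting. This is a standard textbook algorithm shown as a sketch, not claimed to be the dissertation's method:

```python
import numpy as np

def omp(X, y, k):
    """Orthogonal Matching Pursuit: greedy sparse regression that
    selects at most k columns of X to explain y."""
    n, d = X.shape
    residual, support = y.astype(float), []
    coef = np.zeros(0)
    for _ in range(k):
        # Greedy step: pick the column most correlated with the residual.
        j = int(np.argmax(np.abs(X.T @ residual)))
        if j in support:
            break
        support.append(j)
        # Re-fit on the chosen support (the "orthogonal" projection step).
        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ coef
    beta = np.zeros(d)
    beta[support] = coef
    return beta
```

The appeal of such greedy schemes, and the reason their approximation guarantees are studied so closely, is that each iteration is cheap while the selected support stays explicitly small.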
Modelling disordered foams using the vertex ensemble