
    Numerical algebraic geometry for model selection and its application to the life sciences

    Researchers working with mathematical models are often confronted by the related problems of parameter estimation, model validation, and model selection. These are all optimization problems, well known to be challenging due to non-linearity, non-convexity, and multiple local optima. Furthermore, the challenges are compounded when only partial data are available. Here, we consider polynomial models (e.g., mass-action chemical reaction networks at steady state) and describe a framework for their analysis based on optimization using numerical algebraic geometry. Specifically, we use probability-one polynomial homotopy continuation methods to compute all critical points of the objective function, then filter to recover the global optima. Our approach exploits the geometric structures relating models and data, and we demonstrate its utility on examples from cell signaling, synthetic biology, and epidemiology.
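The pipeline in this abstract (compute every critical point of a polynomial objective, then filter to recover the global optimum) can be illustrated in one variable, where numpy's companion-matrix root finder stands in for the paper's probability-one homotopy continuation. A minimal sketch; the quartic objective is our own illustrative choice, not taken from the paper:

```python
import numpy as np

# Illustrative objective: f(x) = x^4 - 3x^3 + 2x (not from the paper).
f = np.poly1d([1.0, -3.0, 0.0, 2.0, 0.0])

# Step 1: compute ALL critical points as roots of f'(x). numpy does this
# via companion-matrix eigenvalues; homotopy continuation plays the same
# role for multivariate polynomial systems.
critical = f.deriv().roots

# Step 2: filter, keeping real critical points, then the global minimizer.
real_crit = critical[np.abs(critical.imag) < 1e-9].real
x_star = real_crit[np.argmin(f(real_crit))]
```

Because a quartic with positive leading coefficient attains its minimum at a critical point, the filtered candidate is guaranteed to be the global minimizer.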

    The numerical algebraic geometry approach to polynomial optimization

    Numerical algebraic geometry (NAG) consists of a collection of numerical algorithms, based on homotopy continuation, that approximate the solution sets of systems of polynomial equations arising from applications in science and engineering. This research focused on finding global solutions to constrained polynomial optimization problems of moderate size using NAG methods. The benefit of employing a NAG approach to nonlinear optimization problems is that every critical point of the objective function is obtained with probability one. The NAG approach to global optimization aims to reduce computational complexity during path tracking by exploiting structure that arises from the corresponding polynomial systems. This thesis considers applications to systems biology and the life sciences, where polynomial systems solve problems in model compatibility, model selection, and parameter estimation. Furthermore, these techniques produce mathematical models of large data sets on non-Euclidean manifolds such as a disjoint union of Grassmannians. These methods also play a role in analyzing the performance of existing local methods for solving polynomial optimization problems.

    Nonlinear Filtering based on Log-homotopy Particle Flow: Methodological Clarification and Numerical Evaluation

    The state estimation of dynamical systems based on measurements is a ubiquitous problem, relevant in applications such as robotics, industrial manufacturing, computer vision, and target tracking. Recursive Bayesian methodology can then be used to estimate the hidden states of a dynamical system. The procedure consists of two steps: a process update based on solving the equations modelling the state evolution, and a measurement update in which the prior knowledge about the system is improved based on the measurements. For most real-world systems, both the evolution and the measurement models are nonlinear functions of the system states. Additionally, both models can be perturbed by random noise sources, which could be non-Gaussian in nature. Unlike the linear Gaussian case, no optimal estimation scheme exists for nonlinear/non-Gaussian scenarios. This thesis investigates a particular method for nonlinear and non-Gaussian data assimilation, termed the log-homotopy based particle flow. Practical filters based on such flows are known in the literature as Daum-Huang filters (DHF), named after their developers. The key concept behind such filters is the gradual inclusion of measurements, which counters a major drawback of single-step update schemes such as particle filters, namely degeneracy. This refers to a situation where the likelihood function has its probability mass well separated from the prior density, and/or is sharply peaked in comparison. Conventional sampling or grid-based techniques do not perform well under such circumstances and, in order to achieve a reasonable accuracy, can incur a high processing cost. The DHF is a sampling-based scheme that provides a unique way to tackle this challenge, thereby lowering the processing cost.
This is achieved by dividing the single measurement update step into multiple sub-steps, such that particles originating from their prior locations are moved incrementally until they reach their final locations. The motion is controlled by a differential equation, which is numerically solved to yield the updated states. DH filters, though not new in the literature, have not yet been explored in detail. They lack the in-depth analysis that other contemporary filters have undergone. In particular, the implementation details of the DHF are very application-specific. In this work, we pursue four main objectives. The first is the exploration of the theoretical concepts behind the DHF. Secondly, we build an understanding of the existing implementation framework and highlight its potential shortcomings. As a sub-task, we carry out a detailed study of the important factors that affect the performance of a DHF and suggest possible improvements for each of them. The third objective is to use the improved implementation to derive new filtering algorithms. Finally, we extend the DHF theory and derive new flow equations and filters to cater for more general scenarios. Improvements in the implementation architecture of a standard DHF are among the key contributions of this thesis. The scope of applicability of the DHF is expanded by combining it with other schemes, such as sequential Markov chain Monte Carlo and the tensor-decomposition-based solution of the Fokker-Planck equation, resulting in new nonlinear filtering algorithms. The standard DHF with the improved implementation, together with the newly derived algorithms, is tested in challenging simulated scenarios. Detailed analyses are carried out, together with comparisons against more established filtering schemes, using estimation error and processing time as the key performance measures.
We show that our new filtering algorithms exhibit marked performance improvements over the traditional schemes.
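The gradual-inclusion idea described above can be sketched with a likelihood-tempering surrogate: the measurement is incorporated in small pseudo-time increments, with resampling and jitter in between. A genuine DHF instead moves each particle along a flow ODE in pseudo-time; the 1-D Gaussian prior and observation below are purely illustrative numbers of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D Bayesian update: prior particles vs. a likelihood centred away
# from them (illustrative numbers, not from the thesis).
particles = rng.normal(0.0, 1.0, size=5000)   # prior N(0, 1)
y, r = 2.5, 0.5                               # observation, noise std

def log_lik(x):
    return -0.5 * ((y - x) / r) ** 2

# Homotopy in pseudo-time: raise the likelihood to a power growing from 0
# to 1 over n_steps, so each sub-step makes only a gentle correction and
# weight degeneracy is avoided.
n_steps = 20
dlam = 1.0 / n_steps
for _ in range(n_steps):
    w = np.exp(dlam * log_lik(particles))     # incremental weights
    w /= w.sum()
    idx = rng.choice(particles.size, size=particles.size, p=w)
    particles = particles[idx] + rng.normal(0.0, 0.05, size=particles.size)  # jitter
```

For this conjugate setup the exact posterior is N(2.0, 1/5), so the tempered particle cloud should end up concentrated near 2.0.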

    Human Motion Trajectory Prediction: A Survey

    With growing numbers of intelligent autonomous systems in human environments, the ability of such systems to perceive, understand, and anticipate human behavior becomes increasingly important. Specifically, predicting the future positions of dynamic agents, and planning that accounts for such predictions, are key tasks for self-driving vehicles, service robots, and advanced surveillance systems. This paper provides a survey of human motion trajectory prediction. We review, analyze, and structure a large selection of work from different communities and propose a taxonomy that categorizes existing methods based on the motion modeling approach and the level of contextual information used. We provide an overview of existing datasets and performance metrics. We discuss limitations of the state of the art and outline directions for further research. Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages.
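Among the physics-based motion models that such a taxonomy typically covers, the constant-velocity baseline is the simplest. A minimal sketch; the function name, sampling interval, and example track are our own illustrative choices:

```python
import numpy as np

def predict_cv(track, horizon, dt=0.4):
    """Constant-velocity baseline: extrapolate the last observed velocity.

    track: (T, 2) array of (x, y) positions sampled every dt seconds.
    Returns a (horizon, 2) array of predicted positions.
    """
    track = np.asarray(track, dtype=float)
    v = (track[-1] - track[-2]) / dt                 # last finite-difference velocity
    steps = np.arange(1, horizon + 1)[:, None] * dt  # future time offsets
    return track[-1] + steps * v

# A pedestrian walking in a straight line at 1 m/s is predicted exactly.
obs = np.array([[0.0, 0.0], [0.4, 0.0], [0.8, 0.0]])
pred = predict_cv(obs, horizon=3)
```

Learned predictors in the surveyed literature are compared against exactly this kind of baseline using displacement-error metrics.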

    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection, that is, to automatically select a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, and computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts. Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision.
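Sparse coding as described here, representing a signal with a linear combination of a few dictionary atoms, can be sketched with the classic ISTA iteration for the l1-penalized least-squares (lasso) formulation. The dictionary and signal below are synthetic, and the parameter values are our own choices:

```python
import numpy as np

def ista(D, x, lam=0.05, n_iter=200):
    """Sparse coding by ISTA: argmin_a 0.5*||x - D a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = a - (D.T @ (D @ a - x)) / L    # gradient step on the smooth part
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return a

# Signal built from 2 of 8 unit-norm random atoms; ISTA recovers a code
# concentrated on those two atoms.
rng = np.random.default_rng(1)
D = rng.normal(size=(16, 8))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 1] - 1.5 * D[:, 5]
a = ista(D, x)
```

The soft-thresholding step is what enforces sparsity: coefficients whose gradient update stays below lam/L are zeroed out.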

    Darwin's Rainbow: Evolutionary radiation and the spectrum of consciousness

    Evolution is littered with paraphyletic convergences: many roads lead to functional Romes. We propose here another example: an equivalence-class structure factoring the broad realm of possible realizations of the Baars Global Workspace consciousness model. The construction suggests that many different physiological systems can support rapidly shifting, sometimes highly tunable, temporary assemblages of interacting unconscious cognitive modules. The discovery implies that the various animal taxa exhibiting behaviors we broadly recognize as conscious are, in fact, simply expressing different forms of the same underlying phenomenon. Mathematically, we find that much slower, and even multiple simultaneous, versions of the basic structure can operate over very long timescales, a kind of paraconsciousness often ascribed to group phenomena. The variety of possibilities, a veritable rainbow, suggests that minds today may be only a small surviving fraction of ancient evolutionary radiations - bush phylogenies of consciousness and paraconsciousness. Under this scenario, the resulting diversity was subsequently pruned by selection and chance extinction. Though few traces of the radiation may be found in the direct fossil record, exaptations and vestiges are scattered across the living mind. Humans, for instance, display an uncommonly profound synergism between individual consciousness and their embedding cultural heritages, enabling efficient Lamarckian adaptation.

    Efficient Optimization Algorithms for Nonlinear Data Analysis

    Identifying low-dimensional structures and the main sources of variation in multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve solving an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered the relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust-region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential-equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels, which allows the application of ridge-based methods under only mild assumptions on the underlying structure of the data. The statistical model and the ridge-finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most earlier approaches are inadequate; examples include the identification of faults from seismic data and of filaments from cosmological data. Applicability of the nonlinear PCA to climate analysis and the reconstruction of periodic patterns from noisy time series data are also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space.
    The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry, and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
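Projecting onto density ridges as done in the thesis requires a subspace-constrained method (there, a trust-region Newton iteration). As a simpler, related sketch, a plain mean-shift fixed-point iteration ascends a Gaussian kernel density to a mode, i.e. a zero-dimensional ridge; the data and bandwidth below are illustrative choices of our own:

```python
import numpy as np

def kde_mean_shift(x0, data, h=0.5, n_iter=100):
    """Fixed-point ascent to a local maximum of a Gaussian kernel density.

    The thesis' ridge projection constrains each step to the span of
    low-curvature Hessian eigenvectors; this unconstrained version finds
    modes, the zero-dimensional special case of a ridge.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        w = np.exp(-np.sum((data - x) ** 2, axis=1) / (2 * h ** 2))
        x = (w[:, None] * data).sum(axis=0) / w.sum()  # kernel-weighted mean
    return x

rng = np.random.default_rng(2)
data = rng.normal([3.0, -1.0], 0.3, size=(500, 2))     # one Gaussian cluster
mode = kde_mean_shift([1.0, 1.0], data)                # converges to the cluster mode
```

Each iteration replaces the current point with the kernel-weighted mean of the data, a fixed point of which is a stationary point of the kernel density estimate.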

    Geometry of the ergodic quotient reveals coherent structures in flows

    Dynamical systems that exhibit diverse behaviors can rarely be completely understood using a single approach. However, by identifying coherent structures in their state spaces, i.e., regions of uniform and simpler behavior, we can hope to study each of the structures separately and then form an understanding of the system as a whole. The method we present in this paper uses trajectory averages of scalar functions on the state space to (a) identify invariant sets in the state space, and (b) form coherent structures by aggregating invariant sets that are similar across multiple spatial scales. First, we construct the ergodic quotient, the object obtained by mapping trajectories to the space of trajectory averages of a function basis on the state space. Second, we endow the ergodic quotient with a metric structure that successfully captures how similar the invariant sets are in the state space. Finally, we parametrize the ergodic quotient using intrinsic diffusion modes on it. By segmenting the ergodic quotient based on the diffusion modes, we extract coherent features in the state space of the dynamical system. The algorithm is validated by analyzing the Arnold-Beltrami-Childress flow, which has been a test-bed for alternative approaches: Ulam's approximation of the transfer operator and the computation of Lagrangian Coherent Structures. Furthermore, we explain how the method extends Poincaré map analysis for periodic flows. As a demonstration, we apply the method to a periodically driven three-dimensional Hill's vortex flow, discovering previously unknown coherent structures in its state space. Finally, we discuss differences between the ergodic quotient and its alternatives, propose a generalization to the analysis of (quasi-)periodic structures, and lay out future research directions. Comment: Submitted to Elsevier Physica D: Nonlinear Phenomena.
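The first step above, mapping trajectories to averages of a function basis, can be sketched on a toy system whose invariant sets are known: the twist map on the annulus, where every circle r = const is invariant. Points on the same invariant set land (nearly) on the same point of the ergodic quotient, while points on different sets land far apart. The map and the small basis below are our own illustrative choices, not the paper's:

```python
import numpy as np

def traj_average(theta0, r0, fs, n=20000):
    """Average the observables fs along a trajectory of the twist map
    (theta, r) -> (theta + r mod 2*pi, r), whose invariant sets are the
    circles r = const."""
    theta, r = theta0, r0
    acc = np.zeros(len(fs))
    for _ in range(n):
        acc += [f(theta, r) for f in fs]
        theta = (theta + r) % (2 * np.pi)
    return acc / n

# A small function basis on the state space (a stand-in for the paper's basis).
basis = [lambda t, r: np.cos(t), lambda t, r: np.sin(t), lambda t, r: r]

a = traj_average(0.0, 1.0, basis)  # two points on the SAME invariant circle...
b = traj_average(2.0, 1.0, basis)  # ...map to nearly the same average vector,
c = traj_average(0.0, 2.0, basis)  # while a different circle maps far away.
```

Clustering such average vectors, with the paper's diffusion-based metric in place of plain Euclidean distance, is what aggregates invariant sets into coherent structures.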