Unsupervised Regression with Applications to Nonlinear System Identification
We derive a cost functional for estimating the relationship between high-dimensional observations and the low-dimensional process that generated them with no input-output examples. Limiting our search to invertible observation functions confers numerous benefits, including a compact representation and no suboptimal local minima. Our approximation algorithms for optimizing this cost
functional are fast and give diagnostic bounds on the quality of their solution. Our method can be viewed as a manifold learning algorithm that utilizes a prior on the
low-dimensional manifold coordinates. The benefits of taking advantage of such priors in manifold learning and searching for the inverse observation functions
in system identification are demonstrated empirically by learning to track moving targets from raw measurements in a sensor network setting and in an RFID tracking experiment.
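The core idea above, pairing a reconstruction cost with a prior on the low-dimensional coordinates, can be sketched in a deliberately simplified setting. The sketch below assumes a linear observation model and a first-difference smoothness prior, and alternates exact block minimizations; the paper's method handles nonlinear, invertible observation functions and provides diagnostic bounds, none of which is reproduced here.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)

# Toy stand-in: a smooth 2-D latent trajectory observed through an unknown
# map into 20 dimensions. The map here is linear for simplicity (the paper
# treats nonlinear, invertible observation functions).
T, d, D = 200, 2, 20
s = np.linspace(0, 4 * np.pi, T)
X_true = np.column_stack([np.cos(s), np.sin(2 * s)])
A_true = rng.normal(size=(D, d))
Y = X_true @ A_true.T + 0.05 * rng.normal(size=(T, D))

lam = 1.0  # weight of the smoothness prior on the latent coordinates

# L = D'D for the first-difference operator: penalizes sum ||x_{t+1}-x_t||^2
L = 2 * np.eye(T) - np.eye(T, k=1) - np.eye(T, k=-1)
L[0, 0] = L[-1, -1] = 1.0

def cost(X, A):
    return np.sum((Y - X @ A.T) ** 2) + lam * np.sum(np.diff(X, axis=0) ** 2)

# Alternating minimization of ||Y - X A'||_F^2 + lam * ||diff(X)||^2.
X = rng.normal(size=(T, d))
A = rng.normal(size=(D, d))
c0 = cost(X, A)
for _ in range(50):
    # X-step: stationarity gives the Sylvester equation lam*L X + X (A'A) = Y A
    X = solve_sylvester(lam * L, A.T @ A, Y @ A)
    # A-step: ordinary least squares for the observation map
    A = np.linalg.solve(X.T @ X, X.T @ Y).T
```

Each step exactly minimizes the cost over one block with the other fixed, so the cost is non-increasing; the smoothness prior is what lets the latent trajectory be recovered without any input-output examples.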
Determining Interconnections in Chemical Reaction Networks
We present a methodology for robust determination of chemical reaction network interconnections. Given time series data collected from experiments, and taking measurement error into account, we minimize the 1-norm of the decision variables (reaction rates) while keeping the data in close Euler-fit with a general model structure based on mass-action kinetics that models the species' dynamics. We illustrate our methodology on a hypothetical chemical reaction network under various experimental scenarios.
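Because mass-action dynamics are linear in the unknown rate constants, a 1-norm objective with a data-fidelity tolerance on the Euler-discretized dynamics becomes a linear program. A minimal sketch on a hypothetical two-species network (the network, rates, noise level, and tolerance below are illustrative assumptions, not the paper's example):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Hypothetical ground truth: the single reaction A -> B with rate 0.5*[A].
# The candidate library also contains B -> A and A + B -> 2B.
dt, n = 0.05, 100
x = np.zeros((n + 1, 2))
x[0] = [1.0, 0.0]
for t in range(n):
    a, b = x[t]
    x[t + 1] = x[t] + dt * np.array([-0.5 * a, 0.5 * a])
x_meas = x + 0.0003 * rng.normal(size=x.shape)   # measurement error

# Euler-fit residuals are linear in the rates k = (k1, k2, k3):
#   d[A]/dt = -k1*a + k2*b - k3*a*b,   d[B]/dt = +k1*a - k2*b + k3*a*b
y = (x_meas[1:] - x_meas[:-1]) / dt              # finite-difference derivatives
a, b = x_meas[:-1, 0], x_meas[:-1, 1]
Theta = np.vstack([np.column_stack([-a, b, -a * b]),
                   np.column_stack([a, -b, a * b])])
rhs = np.concatenate([y[:, 0], y[:, 1]])

# LP: minimize ||k||_1 = sum(k) with k >= 0, subject to |Theta k - rhs| <= eps
eps = 0.05
A_ub = np.vstack([Theta, -Theta])
b_ub = np.concatenate([rhs + eps, -(rhs - eps)])
res = linprog(c=np.ones(3), A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
k_hat = res.x
```

The 1-norm objective drives the rates of the spurious candidate reactions toward zero, so the recovered sparsity pattern identifies the interconnections.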
A Cost-based Optimizer for Gradient Descent Optimization
As the use of machine learning (ML) permeates into diverse application
domains, there is an urgent need to support a declarative framework for ML.
Ideally, a user will specify an ML task in a high-level and easy-to-use
language and the framework will invoke the appropriate algorithms and system
configurations to execute it. An important observation towards designing such a
framework is that many ML tasks can be expressed as mathematical optimization
problems, which take a specific form. Furthermore, these optimization problems
can be efficiently solved using variations of the gradient descent (GD)
algorithm. Thus, to decouple a user specification of an ML task from its
execution, a key component is a GD optimizer. We propose a cost-based GD
optimizer that selects the best GD plan for a given ML task. To build our
optimizer, we introduce a set of abstract operators for expressing GD
algorithms and propose a novel approach to estimate the number of iterations a
GD algorithm requires to converge. Extensive experiments on real and synthetic
datasets show that our optimizer not only chooses the best GD plan but also
allows for optimizations that achieve orders of magnitude performance speed-up.
Comment: Accepted at SIGMOD 201
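A cost-based GD optimizer of this kind can be caricatured as: estimate the iterations each plan needs to reach tolerance eps from textbook convergence-rate bounds, multiply by per-iteration data-access cost, and pick the cheapest plan. The plans, constants, and the mini-batch interpolation below are illustrative assumptions, not the paper's abstract operators or its iteration estimator:

```python
import math

def estimated_cost(plan, n, kappa, eps):
    """Estimated total gradient evaluations for one GD plan.

    Uses textbook bounds for strongly convex objectives with
    condition number kappa on n training examples (illustrative only):
      batch GD:   O(kappa * log(1/eps)) iterations, n examples/iteration
      SGD:        O(kappa / eps) iterations, 1 example/iteration
      mini-batch: an interpolation between the two (size b = 64)
    """
    if plan == "batch":
        iters, per_iter = kappa * math.log(1.0 / eps), n
    elif plan == "sgd":
        iters, per_iter = kappa / eps, 1
    elif plan == "minibatch":
        b = 64
        iters, per_iter = kappa / (eps * b) + kappa * math.log(1.0 / eps), b
    else:
        raise ValueError(f"unknown plan: {plan}")
    return iters * per_iter

def choose_plan(n, kappa, eps):
    # The cost-based choice: cheapest estimated plan for this task.
    return min(["batch", "sgd", "minibatch"],
               key=lambda p: estimated_cost(p, n, kappa, eps))
```

Under this model, a large dataset with a loose tolerance favors SGD, while a small dataset with a tight tolerance favors batch GD; the optimizer's job is to make that trade-off automatically from the task description.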
Comparison of screening methods for high-throughput determination of oil yields in micro-algal biofuel strains
High-velocity impact loading in honeycomb sandwich panels reinforced with polymer foam: a numerical approach study