Optimal Spectral-Norm Approximate Minimization of Weighted Finite Automata
We address the approximate minimization problem for weighted finite automata
(WFAs) over a one-letter alphabet: to compute the best possible approximation
of a WFA given a bound on the number of states. This work is grounded in
Adamyan-Arov-Krein (AAK) approximation theory, a remarkable collection of results on
the approximation of Hankel operators. In addition to its intrinsic
mathematical relevance, this theory has proven to be very effective for model
reduction. We adapt these results to the framework of weighted automata over a
one-letter alphabet. We provide theoretical guarantees and bounds on the
quality of the approximation in the spectral norm. We develop an
algorithm that, based on the properties of Hankel operators, returns the
optimal approximation in the spectral norm.
Comment: 24 pages, authors appear in alphabetical order; minor correction in Theorem 3.2 and consequently updated notation in Section 3; the validity of the result is not affected
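The spectral-norm-optimal low-rank step underlying such approximations can be illustrated with a truncated SVD, which by the Eckart-Young-Mirsky theorem is optimal in the spectral norm. The following is a minimal numpy sketch on a small Hankel matrix of an illustrative sequence, not the AAK construction from the paper itself:

```python
import numpy as np

def best_rank_k_spectral(H, k):
    """Best rank-k approximation of H in the spectral norm
    (Eckart-Young-Mirsky); the error equals the (k+1)-th singular value."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Illustrative example: a 4x4 Hankel matrix of the sequence h_n = 2^{-n} + 3^{-n}
n = np.arange(7)
h = 2.0 ** -n + 3.0 ** -n
H = np.array([[h[i + j] for j in range(4)] for i in range(4)])

H1 = best_rank_k_spectral(H, 1)
err = np.linalg.norm(H - H1, 2)              # spectral-norm error
sigma = np.linalg.svd(H, compute_uv=False)
assert np.isclose(err, sigma[1])             # optimal error = second singular value
```

For a rank-2 Hankel matrix like this one, the best rank-1 spectral-norm error is exactly the second singular value, which the assertion checks.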
Learning-Based Approaches for Graph Problems: A Survey
Over the years, many graph problems, specifically those that are NP-complete,
have been studied by a wide range of researchers. Some famous examples include
graph colouring, the travelling salesman problem, and subgraph isomorphism.
Most of these problems are typically addressed by exact algorithms,
approximation algorithms, and heuristics. There are, however, some drawbacks
to each of these methods.
Recent studies have employed learning-based frameworks such as machine learning
techniques in solving these problems, given that they are useful in discovering
new patterns in structured data that can be represented using graphs. This
research direction has successfully attracted a considerable amount of
attention. In this survey, we provide a systematic review, mainly of classic
graph problems for which learning-based approaches have been proposed. We give
an overview of each framework and provide analyses based on its design and
performance. Some potential research questions are also suggested. Ultimately,
this survey offers clearer insight and can serve as a stepping stone for the
research community in studying problems in this field.
Comment: v1: 41 pages; v2: 40 pages
Exploiting short-term memory in soft body dynamics as a computational resource
Soft materials are not only highly deformable but they also possess rich and
diverse body dynamics. Soft body dynamics exhibit a variety of properties,
including nonlinearity, elasticity, and potentially infinitely many degrees of
freedom. Here we demonstrate that such soft body dynamics can be employed to
conduct certain types of computation. Using body dynamics generated from a soft
silicone arm, we show that they can be exploited to emulate functions that
require memory and to embed robust closed-loop control into the arm. Our
results suggest that soft body dynamics have a short-term memory and can serve
as a computational resource. This finding paves the way toward exploiting
passive body dynamics for control of a large class of underactuated systems.
Comment: 22 pages, 11 figures; email address corrected
Dynamic mode decomposition in vector-valued reproducing kernel Hilbert spaces for extracting dynamical structure among observables
Understanding nonlinear dynamical systems (NLDSs) is challenging in a variety
of engineering and scientific fields. Dynamic mode decomposition (DMD), which
is a numerical algorithm for the spectral analysis of Koopman operators, has
been attracting attention as a way of obtaining global modal descriptions of
NLDSs without requiring explicit prior knowledge. However, since existing DMD
algorithms are in principle formulated from concatenations of scalar
observables, they are not directly applicable to data with dependent structures
among observables, which take, for example, the form of a sequence of graphs.
In this paper, we formulate Koopman spectral analysis for NLDSs with structures
among observables and propose an estimation algorithm for this problem. This
method can extract and visualize the underlying low-dimensional global dynamics
of NLDSs with structures among observables from data, which can be useful in
understanding the underlying dynamics of such NLDSs. To this end, we first
formulate the problem of estimating spectra of the Koopman operator defined in
vector-valued reproducing kernel Hilbert spaces, and then develop an estimation
procedure for this problem by reformulating tensor-based DMD. As a special case
of our method, we propose a method named Graph DMD, a numerical algorithm for
Koopman spectral analysis of graph dynamical systems using a sequence of
adjacency matrices. We investigate the empirical performance of our method
using synthetic and real-world data.
Comment: 34 pages with 4 figures; published in Neural Networks, 201
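For reference, the standard scalar-observable DMD that such methods generalize can be sketched in a few lines of numpy. This is plain exact DMD on snapshot pairs (not the vector-valued RKHS or Graph DMD estimator), and the linear test system is illustrative:

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD: from snapshot pairs X -> Y, estimate the leading r
    DMD eigenvalues (approximate Koopman spectrum) and modes."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r]
    # Reduced operator: projection of the best-fit linear map onto U
    Atilde = U.conj().T @ Y @ Vt.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(Atilde)
    modes = Y @ Vt.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes

# Illustrative linear system x_{t+1} = A x_t; DMD recovers A's eigenvalues
A = np.array([[0.9, 0.2], [0.0, 0.8]])
X = np.zeros((2, 20))
X[:, 0] = [1.0, 1.0]
for t in range(19):
    X[:, t + 1] = A @ X[:, t]

eigvals, _ = dmd(X[:, :-1], X[:, 1:], r=2)
assert np.allclose(sorted(eigvals.real), [0.8, 0.9])
```

On data generated by a linear map, DMD recovers that map's spectrum exactly, which is what the final assertion verifies.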
Doctor of Philosophy dissertation
Machine learning is the science of building predictive models from data that automatically improve based on past experience. To learn these models, traditional learning algorithms require labeled data. They also require that the entire dataset fit in the memory of a single machine. Labeled data are available or can be acquired for small and moderately sized datasets, but curating large datasets can be prohibitively expensive. Similarly, massive datasets are usually too large to fit into the memory of a single machine, so an alternative is to distribute the dataset over multiple machines. Distributed learning, however, poses new challenges, as most existing machine learning techniques are inherently sequential. Additionally, these distributed approaches have to be designed with the various resource limitations of real-world settings in mind, prime among them intermachine communication.
With the advent of big datasets, machine learning algorithms face new challenges. Their design is no longer limited to minimizing some loss function but additionally needs to consider other resources that become critical when learning at scale. In this thesis, we explore different models and measures for learning with limited resources under a budget. What budgetary constraints are posed by modern datasets? Can we reuse or combine existing machine learning paradigms to address these challenges at scale? How do the cost metrics change when we shift to distributed models for learning? These are some of the questions investigated in this thesis, and their answers hold the key to addressing some of the challenges faced when learning on massive datasets.
In the first part of this thesis, we present three different budgeted scenarios that deal with scarcity of labeled data and limited computational resources. The goal is to leverage transfer of information from related domains to learn under budgetary constraints. Our proposed techniques comprise semisupervised transfer, online transfer, and active transfer. In the second part of this thesis, we study distributed learning with limited communication. We present initial sampling-based results, as well as propose communication protocols for learning distributed linear classifiers.
Improved Practical Matrix Sketching with Guarantees
Matrices have become essential data representations for many large-scale
problems in data analytics, and hence matrix sketching is a critical task.
Although much research has focused on improving the error/size tradeoff under
various sketching paradigms, the many forms of error bounds make these
approaches hard to compare in theory and in practice. This paper attempts to
categorize and compare most known methods under row-wise streaming updates with
provable guarantees, and then to tweak some of these methods to gain practical
improvements while retaining guarantees.
For instance, we observe that a simple heuristic, iSVD, with no guarantees,
tends to outperform all known approaches in terms of the size/error trade-off.
We modify FrequentDirections, the best-performing method with guarantees under
the size/error trade-off, to match the performance of iSVD while retaining its
guarantees. We also demonstrate some adversarial datasets on which iSVD
performs quite poorly. When comparing techniques in the time/error trade-off,
those based on hashing or sampling tend to perform better. In this setting we
modify the most studied sampling regime to retain its error guarantee while
obtaining dramatic improvements in the time/error trade-off.
Finally, we provide easy replication of our studies on APT, a new testbed
that makes available not only code and datasets but also a computing platform
with fixed environmental settings.
Comment: 27 pages
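A minimal per-row variant of the FrequentDirections sketch discussed above can be written in a few lines of numpy. This is a simplified textbook version, not the paper's tuned implementation; the sketch size `ell` and the random data are illustrative:

```python
import numpy as np

def frequent_directions(A, ell):
    """Per-row FrequentDirections: maintain an ell x d sketch B of the
    row-stream A with the deterministic guarantee
    ||A^T A - B^T B||_2 <= ||A||_F^2 / ell."""
    B = np.zeros((ell, A.shape[1]))
    for row in A:
        B[-1] = row                                   # fill the zeroed last row
        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        s = np.sqrt(np.maximum(s ** 2 - s[-1] ** 2, 0.0))  # shrink all directions
        B = np.diag(s) @ Vt                           # last row becomes zero again
    return B

# Illustrative stream: 100 random rows in 20 dimensions, sketched into 10 rows
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
B = frequent_directions(A, ell=10)
err = np.linalg.norm(A.T @ A - B.T @ B, 2)
assert err <= np.linalg.norm(A, 'fro') ** 2 / 10     # FD's deterministic bound
```

Each row insertion triggers an SVD and a shrink by the smallest squared singular value, which zeroes the last row of the sketch so the next stream row can be absorbed; the covariance-error bound holds deterministically, with no randomness assumptions on the data.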