The parameterized complexity of some geometric problems in unbounded dimension
We study the parameterized complexity of the following fundamental geometric
problems with respect to the dimension $d$: i) Given $n$ points in $\mathbb{R}^d$,
compute their minimum enclosing cylinder. ii) Given two $n$-point sets in
$\mathbb{R}^d$, decide whether they can be separated by two hyperplanes. iii) Given a
system of $n$ linear inequalities with $d$ variables, find a maximum-size
feasible subsystem. We show that (the decision versions of) all these problems
are W[1]-hard when parameterized by the dimension $d$, and hence not solvable
in $O(f(d)\, n^c)$ time for any computable function $f$ and constant $c$
(unless FPT = W[1]). Our reductions also give an $n^{\Omega(d)}$-time lower bound
(under the Exponential Time Hypothesis).
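In standard parameterized-complexity notation (generic definitions, not specific to this abstract), the two claims read:

```latex
% Fixed-parameter tractability in the dimension d would require a running
% time of the form
\[
  f(d) \cdot n^{O(1)} \quad \text{for some computable function } f,
\]
% which W[1]-hardness rules out unless FPT = W[1]. Under the Exponential
% Time Hypothesis, the reductions give the stronger bound that every
% algorithm for these problems needs
\[
  n^{\Omega(d)} \text{ time.}
\]
```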
Quantum-Assisted Learning of Hardware-Embedded Probabilistic Graphical Models
Mainstream machine-learning techniques such as deep learning and
probabilistic programming rely heavily on sampling from generally intractable
probability distributions. There is increasing interest in the potential
advantages of using quantum computing technologies as sampling engines to speed
up these tasks or to make them more effective. However, some pressing
challenges in state-of-the-art quantum annealers have to be overcome before we
can assess their actual performance. The sparse connectivity, resulting from
the local interaction between quantum bits in physical hardware
implementations, is considered the most severe limitation to the quality of
constructing powerful generative unsupervised machine-learning models. Here we
use embedding techniques to add redundancy to data sets, allowing us to
increase the modeling capacity of quantum annealers. We illustrate our findings
by training hardware-embedded graphical models on a binarized data set of
handwritten digits and two synthetic data sets in experiments with up to 940
quantum bits. Our model can be trained in quantum hardware without full
knowledge of the effective parameters specifying the corresponding quantum
Gibbs-like distribution; therefore, this approach avoids the need to infer the
effective temperature at each iteration, speeding up learning; it also
mitigates the effect of noise in the control parameters, making it robust to
deviations from the reference Gibbs distribution. Our approach demonstrates the
feasibility of using quantum annealers for implementing generative models, and
it provides a suitable framework for benchmarking these quantum technologies on
machine-learning-related tasks.

Comment: 17 pages, 8 figures. Minor further revisions. As published in Phys.
Rev.
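The temperature-free learning the abstract describes can be illustrated with the standard Boltzmann-machine gradient, whose model term is estimated purely from samples. A minimal sketch, with invented names; in the paper's setting, the model samples would come from the annealer's Gibbs-like distribution, here stood in by random placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_step(W, data_samples, model_samples, lr=0.01):
    """Update couplings W using the difference of correlations:

        dL/dW_ij ~ <s_i s_j>_data - <s_i s_j>_model.

    The model expectation is estimated directly from samples, so the
    (unknown) effective temperature never has to be inferred.
    """
    data_corr = data_samples.T @ data_samples / len(data_samples)
    model_corr = model_samples.T @ model_samples / len(model_samples)
    return W + lr * (data_corr - model_corr)

# Toy usage: random +/-1 spin configurations stand in for binarized data
# and for hardware (annealer) samples.
n_vis = 4
W = np.zeros((n_vis, n_vis))
data = rng.choice([-1.0, 1.0], size=(100, n_vis))
hardware = rng.choice([-1.0, 1.0], size=(100, n_vis))
W = gradient_step(W, data, hardware)
```

Because both expectations are sample averages, noise in the control parameters shifts both terms together, which is one intuition for the robustness the abstract claims.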
An Incremental Construction of Deep Neuro Fuzzy System for Continual Learning of Non-stationary Data Streams
Existing fuzzy neural networks (FNNs) are mostly developed under a shallow
network configuration, which has lower generalization power than deep
structures. This paper proposes a novel self-organizing deep FNN, namely
DEVFNN. Fuzzy rules can be automatically extracted from data streams or
removed if they play a limited role during their lifespan. The structure of
the network can be deepened on demand
by stacking additional layers using a drift detection method which not only
detects the covariate drift, variations of input space, but also accurately
identifies the real drift, dynamic changes of both feature space and target
space. DEVFNN is developed under the stacked generalization principle via the
feature augmentation concept where a recently developed algorithm, namely
gClass, drives the hidden layer. It is equipped with an automatic feature
selection method which controls the activation and deactivation of input
attributes to induce varying subsets of input features. A deep network simplification
procedure is put forward using the concept of hidden-layer merging to prevent
uncontrollable growth of the input-space dimensionality caused by the feature
augmentation approach in building a deep network structure. DEVFNN
works in a sample-wise fashion and is suitable for data-stream
applications. The efficacy of DEVFNN has been thoroughly evaluated using seven
datasets with non-stationary properties under the prequential test-then-train
protocol. It has been compared with four popular continual learning algorithms
and its shallow counterpart where DEVFNN demonstrates improvement of
classification accuracy. Moreover, it is also shown that the concept drift
detection method is an effective tool to control the depth of network structure
while the hidden layer merging scenario is capable of simplifying the network
complexity of a deep network with negligible compromise of generalization
performance.

Comment: This paper has been published in IEEE Transactions on Fuzzy Systems.
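Drift detection of the kind the abstract relies on is commonly built from bounds on window statistics. A generic sketch of a Hoeffding-bound window comparison, not the paper's exact test; the confidence level and windows are illustrative:

```python
import math

def hoeffding_drift(window_a, window_b, delta=0.05):
    """Flag drift if the means of two windows of values in [0, 1]
    differ by more than the Hoeffding bound epsilon."""
    n_a, n_b = len(window_a), len(window_b)
    mean_a = sum(window_a) / n_a
    mean_b = sum(window_b) / n_b
    # Bound on the deviation of two empirical means of bounded samples,
    # holding with probability at least 1 - delta under stationarity.
    eps = math.sqrt(0.5 * (1.0 / n_a + 1.0 / n_b) * math.log(2.0 / delta))
    return abs(mean_a - mean_b) > eps

stable = hoeffding_drift([0.5] * 200, [0.52] * 200)   # small shift: no drift
drifted = hoeffding_drift([0.1] * 200, [0.9] * 200)   # large shift: drift
```

Applying such a test to input statistics versus to prediction error is one way to separate covariate drift from real drift, in the spirit of the distinction the abstract draws.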
Reevaluating Assembly Evaluations with Feature Response Curves: GAGE and Assemblathons
In just the last decade, a multitude of bio-technologies and software
pipelines have emerged to revolutionize genomics. To further their central
goal, they aim to accelerate and improve the quality of de novo whole-genome
assembly starting from short DNA reads. However, the performance of each of
these tools is contingent on the length and quality of the sequencing data, the
structure and complexity of the genome sequence, and the resolution and quality
of long-range information. Furthermore, in the absence of any metric that
captures the most fundamental "features" of a high-quality assembly, there is
no obvious recipe for users to select the most desirable assembler/assembly.
International competitions such as Assemblathons or GAGE tried to identify the
best assembler(s) and their features. Somewhat circuitously, the only
available approach to gauge de novo assemblies and assemblers relies solely on
the availability of a high-quality fully assembled reference genome sequence.
Still worse, reference-guided evaluations are often difficult to analyze,
leading to conclusions that are difficult to interpret. In this paper, we
circumvent many of these issues by relying upon a tool, dubbed FRCbam, which is
capable of evaluating de novo assemblies from the read-layouts even when no
reference exists. We extend the FRCurve approach to cases where lay-out
information may have been obscured, as is true in many de Bruijn graph-based
algorithms. As a by-product, FRCurve now expands its applicability to a much
wider class of assemblers -- thus, identifying higher-quality members of this
group, their inter-relations as well as sensitivity to carefully selected
features, with or without the support of a reference sequence or layout for the
reads. The paper concludes by reevaluating several recently conducted assembly
competitions and the datasets that have resulted from them.

Comment: Submitted to PLoS One. Supplementary material available at
http://www.nada.kth.se/~vezzi/publications/supplementary.pdf and
http://cs.nyu.edu/mishra/PUBLICATIONS/12.supplementaryFRC.pd
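The feature response curve idea can be sketched generically: rank contigs by suspicious features per base, then report how much of the assembly the best contigs cover as the tolerated feature count grows. A minimal illustration, with contig data invented for the example:

```python
# Sketch of a feature response curve (FRCurve): for each feature threshold,
# the fraction of total assembly length covered by the highest-quality
# contigs whose accumulated feature count stays within the threshold.
# Contig tuples (length_bp, num_features) are invented for illustration.

def frcurve(contigs, thresholds):
    # Favor long contigs with few suspicious features.
    ranked = sorted(contigs, key=lambda c: c[1] / c[0])
    total_len = sum(c[0] for c in contigs)
    points = []
    for t in thresholds:
        covered = features = 0
        for length, nfeat in ranked:
            if features + nfeat > t:
                break
            covered += length
            features += nfeat
        points.append(covered / total_len)
    return points

contigs = [(100_000, 2), (50_000, 10), (10_000, 50)]
curve = frcurve(contigs, thresholds=[0, 2, 12, 62])
```

A curve that rises quickly toward 1.0 indicates an assembly whose bulk is nearly feature-free, which is the reference-free quality signal FRCbam exploits.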
Meta-heuristic based Construction Supply Chain Modelling and Optimization
Driven by severe competition within the construction industry, the need to improve and optimize the performance of the construction supply chain has arisen. This thesis proposes three problems regarding construction supply chain optimization from three perspectives: deterministic single-objective optimization, stochastic optimization, and multi-objective optimization. Mathematical models for each problem are constructed accordingly, and meta-heuristic algorithms are developed and applied to solve these three problems.
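As a flavor of the meta-heuristic approach, here is a generic simulated-annealing loop applied to a toy supplier-assignment objective. The thesis's actual models and algorithms are not reproduced; every name and number below is illustrative:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995,
                        steps=2000, seed=42):
    """Generic simulated annealing: accept improving moves always,
    worsening moves with Boltzmann probability exp(-delta / t)."""
    rng = random.Random(seed)
    x, t = x0, t0
    best, best_cost = x0, cost(x0)
    for _ in range(steps):
        cand = neighbor(x, rng)
        delta = cost(cand) - cost(x)
        if delta <= 0 or rng.random() < math.exp(-delta / max(t, 1e-9)):
            x = cand
            c = cost(x)
            if c < best_cost:
                best, best_cost = x, c
        t *= cooling  # geometric cooling schedule
    return best, best_cost

# Toy objective: pick one supplier per site to minimize total cost.
site_costs = [[4, 1, 7], [3, 9, 2], [8, 5, 6]]

def total_cost(assign):
    return sum(site_costs[i][s] for i, s in enumerate(assign))

def flip(assign, rng):
    a = list(assign)
    a[rng.randrange(len(a))] = rng.randrange(3)
    return tuple(a)

best, val = simulated_annealing(total_cost, flip, (0, 0, 0))
```

The same skeleton extends to the stochastic and multi-objective settings by replacing the cost function with a sampled or vector-valued objective.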
Hi-Val: Iterative Learning of Hierarchical Value Functions for Policy Generation
Task decomposition is effective in manifold applications where the global complexity of a problem makes planning and decision-making too demanding. This is true, for example, in high-dimensional robotics domains, where (1) unpredictabilities and modeling limitations typically prevent the manual specification of robust behaviors, and (2) learning an action policy is challenging due to the curse of dimensionality. In this work, we borrow the concept of Hierarchical Task Networks (HTNs) to decompose the learning procedure, and we exploit Upper Confidence Tree (UCT) search to introduce HOP, a novel iterative algorithm for hierarchical optimistic planning with learned value functions. To obtain better generalization and generate policies, HOP simultaneously learns and uses action values. These are used to formalize constraints within the search space and to reduce the dimensionality of the problem. We evaluate our algorithm both on a fetching task using a simulated 7-DOF KUKA lightweight arm and on a pick-and-delivery task with a Pioneer robot.
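The UCT search that HOP builds on selects children with the UCB1 rule; a generic sketch follows (HOP's HTN decomposition and its learned value functions are not shown, and the exploration constant is illustrative):

```python
import math

def ucb1_select(children, c=1.414):
    """Pick the index of the child maximizing mean value plus an
    exploration bonus. children: list of (total_value, visit_count)
    pairs; unvisited children are tried first."""
    total_visits = sum(n for _, n in children)

    def score(child):
        value, n = child
        if n == 0:
            return float("inf")  # always expand unvisited nodes first
        return value / n + c * math.sqrt(math.log(total_visits) / n)

    return max(range(len(children)), key=lambda i: score(children[i]))

# Unvisited children take priority over even a high-value arm.
choice = ucb1_select([(90.0, 100), (5.0, 5), (0.0, 0)])
```

The bonus term shrinks as a child accumulates visits, which is what makes the search "optimistic": promising but under-explored branches keep getting attention.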
Evolving Ensemble Fuzzy Classifier
The concept of ensemble learning offers a promising avenue in learning from
data streams under complex environments because it addresses the bias and
variance dilemma better than its single model counterpart and features a
reconfigurable structure, which is well suited to the given context. While
various extensions of ensemble learning for mining non-stationary data streams
can be found in the literature, most of them are crafted under a static base
classifier and revisit preceding samples in the sliding window for a
retraining step. This feature causes computationally prohibitive complexity and
is not flexible enough to cope with rapidly changing environments. Their
complexity is often demanding because they involve a large collection of
offline classifiers, owing to the absence of a structural complexity reduction
mechanism and the lack of an online feature selection mechanism. A novel evolving
ensemble classifier, namely Parsimonious Ensemble (pENsemble), is proposed in
this paper. pENsemble differs from existing architectures in that it
is built upon an evolving classifier from data streams, termed Parsimonious
Classifier (pClass). pENsemble is equipped with an ensemble pruning mechanism,
which estimates a localized generalization error of a base classifier. A
dynamic online feature selection scenario is integrated into pENsemble.
This method allows for dynamic selection and deselection of input features on
the fly. pENsemble adopts a dynamic ensemble structure to output a final
classification decision where it features a novel drift detection scenario to
grow the ensemble structure. The efficacy of the pENsemble has been numerically
demonstrated through rigorous numerical studies with dynamic and evolving data
streams where it delivers the most encouraging performance in attaining a
tradeoff between accuracy and complexity.Comment: this paper has been published by IEEE Transactions on Fuzzy System
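The two ensemble mechanisms the abstract describes, pruning by estimated generalization error and voting over the survivors, can be sketched generically. pENsemble's localized error estimate and its pClass base learners are not reproduced; the classifiers and error values below are invented inputs:

```python
def prune(members, errors, max_error=0.4):
    """Drop base classifiers whose estimated generalization error
    exceeds the threshold."""
    return [(m, e) for m, e in zip(members, errors) if e <= max_error]

def weighted_vote(kept, x):
    """Weight each surviving classifier's vote by its accuracy (1 - error)."""
    tally = {}
    for clf, err in kept:
        label = clf(x)
        tally[label] = tally.get(label, 0.0) + (1.0 - err)
    return max(tally, key=tally.get)

# Toy base classifiers (constant predictors, purely for illustration).
members = [lambda x: 0, lambda x: 1, lambda x: 1]
errors = [0.1, 0.3, 0.9]          # the third member gets pruned
kept = prune(members, errors)
pred = weighted_vote(kept, x=None)
```

Pruning keeps the ensemble parsimonious as drift detection grows it, which is the accuracy/complexity tradeoff the abstract emphasizes.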