Weak consistency of the 1-nearest neighbor measure with applications to missing data
When data is partially missing at random, imputation and importance weighting
are often used to estimate moments of the unobserved population. In this paper,
we study 1-nearest neighbor (1NN) importance weighting, which estimates moments
by replacing missing data with the complete data that is the nearest neighbor
in the non-missing covariate space. We define an empirical measure, the 1NN
measure, and show that it is weakly consistent for the measure of the missing
data. The main idea behind this result is that the 1NN measure is performing
inverse probability weighting in the limit. We study applications to missing
data and to mitigating the impact of covariate shift in prediction tasks.
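The 1NN idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's estimator or notation: the function name, the one-dimensional covariate, and the synthetic data are all assumptions made here for clarity.

```python
import numpy as np

def one_nn_estimate(x_complete, y_complete, x_incomplete):
    """Estimate E[y] over the full population: each unit with a missing y
    borrows the y of its 1-nearest neighbor (in the always-observed
    covariate x) among the complete cases; all values are then pooled."""
    imputed = np.empty(len(x_incomplete))
    for i, x in enumerate(x_incomplete):
        j = np.argmin(np.abs(x_complete - x))   # 1NN in covariate space
        imputed[i] = y_complete[j]
    return np.concatenate([y_complete, imputed]).mean()

# Synthetic check: y = 2x with x ~ Uniform(0, 1) and y missing at
# random, so the true population mean of y is 1.0
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 2000)
y = 2.0 * x
observed = rng.uniform(size=2000) < 0.5          # missingness indicator
est = one_nn_estimate(x[observed], y[observed], x[~observed])
```

Because the missingness here is independent of x, the pooled estimate should land near the true mean; the paper's consistency result is the formal version of this intuition.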
Ensemble Committees for Stock Return Classification and Prediction
This paper considers a portfolio trading strategy formulated by algorithms in
the field of machine learning. The profitability of the strategy is measured by
the algorithm's capability to consistently and accurately identify stock
indices with positive or negative returns, and to generate a preferred
portfolio allocation on the basis of a learned model. Stocks are characterized
by time series data sets consisting of technical variables that reflect market
conditions in a previous time interval, which are utilized to produce binary
classification decisions in subsequent intervals. The learned model is
constructed as a committee of random forest classifiers, a non-linear support
vector machine classifier, a relevance vector machine classifier, and a
constituent ensemble of k-nearest neighbors classifiers. The Global Industry
Classification Standard (GICS) is used to explore the ensemble model's efficacy
within the context of various fields of investment including Energy, Materials,
Financials, and Information Technology. Data from 2006 to 2012, inclusive, are
considered, which are chosen for providing a range of market circumstances for
evaluating the model. The model is observed to achieve an accuracy of
approximately 70% when predicting stock price returns three months in advance.
Comment: 15 pages, 4 figures; Neukom Institute Computational Undergraduate
Research prize, second place
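A committee of the kind described can be sketched with scikit-learn's majority-vote ensemble. This is an assumed stand-in, not the paper's pipeline: the features are synthetic rather than technical market indicators, and the relevance vector machine member is omitted because scikit-learn has no standard implementation of it.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for technical-indicator features and up/down labels
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Heterogeneous committee combined by hard majority vote: a random
# forest, a non-linear SVM, and a bagged ensemble of k-NN classifiers
committee = VotingClassifier([
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("svm", SVC(kernel="rbf", random_state=0)),
    ("knn", BaggingClassifier(KNeighborsClassifier(5), n_estimators=10,
                              random_state=0)),
], voting="hard")
committee.fit(X_tr, y_tr)
acc = committee.score(X_te, y_te)
```

The committee's vote smooths over individual members' errors, which is the mechanism the abstract credits for consistent classification across GICS sectors.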
Neural Nearest Neighbors Networks
Non-local methods exploiting the self-similarity of natural signals have been
well studied, for example in image analysis and restoration. Existing
approaches, however, rely on k-nearest neighbors (KNN) matching in a fixed
feature space. The main hurdle in optimizing this feature space w.r.t.
application performance is the non-differentiability of the KNN selection rule.
To overcome this, we propose a continuous deterministic relaxation of KNN
selection that maintains differentiability w.r.t. pairwise distances, but
retains the original KNN as the limit of a temperature parameter approaching
zero. To exploit our relaxation, we propose the neural nearest neighbors block
(N3 block), a novel non-local processing layer that leverages the principle of
self-similarity and can be used as building block in modern neural network
architectures. We show its effectiveness for the set reasoning task of
correspondence classification as well as for image restoration, including image
denoising and single image super-resolution, where we outperform strong
convolutional neural network (CNN) baselines and recent non-local models that
rely on KNN selection in hand-chosen feature spaces.
Comment: to appear at NIPS*2018, code available at
https://github.com/visinf/n3net
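The core relaxation described above can be sketched for the 1-neighbor case: a softmax over negative pairwise distances, scaled by a temperature, gives differentiable selection weights that collapse onto the hard nearest neighbor as the temperature goes to zero. This is a simplified sketch under assumed names; the paper's N3 block generalizes this to k neighbors.

```python
import numpy as np

def soft_nearest_neighbor(query, keys, values, t):
    """Continuous relaxation of 1-NN selection: softmax over negative
    distances at temperature t. As t -> 0 the weights concentrate on
    the true nearest neighbor, recovering hard 1-NN selection."""
    d = np.linalg.norm(keys - query, axis=1)   # pairwise distances
    logits = -d / t
    logits -= logits.max()                     # numerical stability
    w = np.exp(logits)
    w /= w.sum()                               # differentiable weights
    return w @ values

keys = np.array([[0.0], [1.0], [3.0]])
values = np.array([10.0, 20.0, 30.0])
query = np.array([0.9])                        # nearest key is [1.0]

soft = soft_nearest_neighbor(query, keys, values, t=1.0)   # blended
hard = soft_nearest_neighbor(query, keys, values, t=1e-3)  # ~ exact 1NN
```

The same weighted sum written with a deep-learning framework's tensors would be differentiable with respect to the distances, which is what lets the feature space be trained end-to-end.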
A memory-based classification approach to marker-based EBMT
We describe a novel approach to example-based machine translation that makes use of marker-based chunks, in which the decoder is a memory-based classifier. The classifier is trained to map trigrams of source-language chunks onto trigrams of target-language chunks; then, in a second
decoding step, the predicted trigrams are rearranged according to their overlap. We present the first results of this method on a Dutch-to-English translation system
using Europarl data. Sparseness of the class space causes the results to lag behind a baseline phrase-based SMT system.
In a further comparison, we also apply the method to a word-aligned version
of the same data, and report a smaller difference with a word-based SMT
system. We explore the scaling abilities of the memory-based approach, and
observe linear scaling behavior in training and classification speed and
memory costs, and log-linear BLEU improvements with the amount of training
examples.
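The first decoding step described above can be caricatured as a memory lookup with a nearest-neighbor back-off. Everything here is invented for illustration: the toy chunk memory, the positional-overlap tie-break, and the function name. The real system trains a memory-based classifier on chunk features and adds a second reordering step, both omitted here.

```python
def translate_trigram(memory, src):
    """Memory-based 'decoding' of one source-chunk trigram: exact
    lookup, backing off to the stored trigram with the largest
    positional overlap (a crude stand-in for a k-NN classifier)."""
    if src in memory:
        return memory[src]
    best = max(memory, key=lambda t: sum(a == b for a, b in zip(t, src)))
    return memory[best]

# Toy Dutch-to-English chunk memory (chunks invented for illustration)
memory = {
    ("de kat", "zat op", "de mat"): ("the cat", "sat on", "the mat"),
    ("de hond", "rende naar", "het park"): ("the dog", "ran to",
                                            "the park"),
}

# Unseen trigram: falls back to the nearest stored trigram (2 of 3
# chunk positions match the first memory entry)
out = translate_trigram(memory, ("de kat", "zat op", "de vloer"))
```

The sparseness problem the abstract reports is visible even in this caricature: an unseen trigram can only be mapped to whatever whole trigrams the memory happens to contain.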