On Sampling Strategies for Neural Network-based Collaborative Filtering
Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
incur substantial computational costs, making them challenging to scale and
improve upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while the major computational burden is located at nodes. We
call this type of loss function a "graph-based" loss function, for which varied
mini-batch sampling strategies can have different computational costs. Based on
this insight, three novel sampling strategies are proposed, which significantly
improve the training efficiency of the proposed framework (a substantial
speedup in our experiments), as well as improving the recommendation
performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies has further implications for general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework.
Comment: This is a longer version (with supplementary attached) of the KDD'17 paper.
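The "graph-based loss" observation can be illustrated with a toy sketch (not the authors' code; `user_repr`/`item_repr` are hypothetical stand-ins for expensive neural encoders): the loss has one term per link, but the expensive work is per node, so a mini-batch whose links share nodes needs fewer encoder calls than a uniformly sampled one.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d = 1000, 2000, 32
links = [(rng.integers(n_users), rng.integers(n_items)) for _ in range(5000)]

# Hypothetical "expensive" node computations (stand-ins for neural encoders).
def user_repr(u):  # e.g. an MLP over user features
    return np.tanh(np.full(d, u % 7, dtype=float))

def item_repr(i):  # e.g. a CNN over item content
    return np.tanh(np.full(d, i % 5, dtype=float))

def minibatch_loss(batch_links):
    """Graph-based loss: one term per link, but the heavy computation is
    per *node*. Caching node representations within the mini-batch means
    a batch whose links share nodes costs fewer encoder calls."""
    u_cache, i_cache = {}, {}
    loss = 0.0
    for u, i in batch_links:
        if u not in u_cache:
            u_cache[u] = user_repr(u)
        if i not in i_cache:
            i_cache[i] = item_repr(i)
        # squared-error link loss against an observed interaction of 1.0
        loss += (u_cache[u] @ i_cache[i] / d - 1.0) ** 2
    # return average loss and the number of encoder calls actually made
    return loss / len(batch_links), len(u_cache) + len(i_cache)

# Node-sharing batch: all links of a few users -> fewer encoder calls
# than a uniformly sampled batch of the same number of links.
by_user = {}
for u, i in links:
    by_user.setdefault(u, []).append((u, i))
shared_batch = [l for u in list(by_user)[:4] for l in by_user[u]]
_, calls_shared = minibatch_loss(shared_batch)
_, calls_random = minibatch_loss(links[:len(shared_batch)])
print(calls_shared, "<=", calls_random)
```

This is exactly why different mini-batch sampling strategies over the same set of loss terms can have very different per-epoch costs.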
Efficient posterior sampling for high-dimensional imbalanced logistic regression
High-dimensional data are routinely collected in many areas. We are
particularly interested in Bayesian classification models in which one or more
variables are imbalanced. Current Markov chain Monte Carlo algorithms for
posterior computation are inefficient as the sample size and/or the number of
predictors increase, due to worsening time per step and mixing rates. One
strategy is to use a
gradient-based sampler to improve mixing while using data sub-samples to reduce
per-step computational complexity. However, usual sub-sampling breaks down when
applied to imbalanced data. Instead, we generalize piece-wise deterministic
Markov chain Monte Carlo algorithms to include importance-weighted and
mini-batch sub-sampling. These approaches maintain the correct stationary
distribution with arbitrarily small sub-samples, and substantially outperform
current competitors. We provide theoretical support and illustrate gains in
simulated and real data applications.
Comment: 4 figures.
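The importance-weighted sub-sampling idea can be sketched in isolation (a plain stochastic-gradient illustration on synthetic data, not the authors' piecewise-deterministic samplers): oversample the rare class, then reweight each term by the inverse of its sampling probability so the gradient estimate stays unbiased even though the sub-sample is tiny.

```python
import numpy as np

rng = np.random.default_rng(1)

# Imbalanced synthetic data: roughly 1% positives.
n, p = 20000, 3
X = rng.normal(size=(n, p))
y = (rng.random(n) < 0.01).astype(float)

def full_grad(beta):
    """Exact gradient of the (averaged) logistic log-likelihood."""
    mu = 1 / (1 + np.exp(-X @ beta))
    return X.T @ (y - mu) / n

def iw_grad(beta, m=200):
    """Importance-weighted sub-sample estimate: put half the sampling
    mass on the rare positives, then reweight by 1/(n * q_i) so the
    estimator remains unbiased for full_grad."""
    q = np.where(y == 1, 0.5 / max(y.sum(), 1), 0.5 / (n - y.sum()))
    q = q / q.sum()
    idx = rng.choice(n, size=m, p=q)
    mu = 1 / (1 + np.exp(-X[idx] @ beta))
    w = 1.0 / (n * q[idx])
    return (X[idx] * (w * (y[idx] - mu))[:, None]).mean(axis=0)

beta = np.zeros(p)
est = np.mean([iw_grad(beta) for _ in range(500)], axis=0)
print(np.round(est, 3), np.round(full_grad(beta), 3))  # should be close
```

Uniform sub-sampling of the same size would rarely see a positive at all, which is the failure mode the abstract refers to.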
Combining Neuro-Fuzzy Classifiers for Improved Generalisation and Reliability
In this paper a combination of neuro-fuzzy
classifiers for improved classification performance and reliability
is considered. A general fuzzy min-max (GFMM) classifier with
agglomerative learning algorithm is used as a main building
block. An alternative approach to combining individual classifier
decisions involving the combination at the classifier model level is
proposed. The resulting classifier complexity and transparency are
comparable with those of classifiers generated during a single cross-validation
procedure, while the improved classification performance and reduced variance
are comparable to an ensemble of classifiers with combined (averaged/voted)
decisions. We also illustrate how combining at the model level can be used to
speed up the training of GFMM classifiers for large data sets.
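The hyperbox building block of GFMM-style classifiers can be sketched as follows (a deliberately simplified ramp membership, not the full GFMM membership function or its agglomerative learning): a hyperbox is a min corner `v` and max corner `w`, and a point's membership is 1 inside the box, decaying with distance outside it.

```python
import numpy as np

def hyperbox_membership(x, v, w, gamma=4.0):
    """Degree to which point x (in [0,1]^d) belongs to the hyperbox with
    min corner v and max corner w. Inside the box -> 1; membership decays
    linearly with the largest per-dimension violation, at a rate set by
    the sensitivity gamma. (Simplified ramp version, for illustration.)"""
    below = np.maximum(0.0, v - x)   # per-dim shortfall under the min corner
    above = np.maximum(0.0, x - w)   # per-dim overshoot past the max corner
    per_dim = 1.0 - gamma * np.maximum(below, above)
    return float(np.clip(per_dim, 0.0, 1.0).min())

box_v, box_w = np.array([0.2, 0.2]), np.array([0.6, 0.5])
print(hyperbox_membership(np.array([0.3, 0.4]), box_v, box_w))  # inside -> 1.0
print(hyperbox_membership(np.array([0.7, 0.4]), box_v, box_w))  # outside -> 0.6
```

Combining at the model level then amounts to merging the hyperbox sets of individual classifiers, rather than averaging or voting over their decisions.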
Density Preserving Sampling: Robust and Efficient Alternative to Cross-validation for Error Estimation
Estimation of the generalization ability of a classification
or regression model is an important issue, as it indicates
the expected performance on previously unseen data and is
also used for model selection. Currently used generalization
error estimation procedures, such as cross-validation (CV) or
bootstrap, are stochastic and, thus, require multiple repetitions
in order to produce reliable results, which can be computationally
expensive, if not prohibitive. The correntropy-inspired density-
preserving sampling (DPS) procedure proposed in this paper
eliminates the need for repeating the error estimation procedure
by dividing the available data into subsets that are guaranteed to
be representative of the input dataset. This allows the production
of low-variance error estimates with an accuracy comparable to
10 times repeated CV at a fraction of the computations required
by CV. This method can also be used for model ranking and
selection. This paper derives the DPS procedure and investigates
its usability and performance using a set of public benchmark
datasets and standard classifiers.
Rapid Sampling for Visualizations with Ordering Guarantees
Visualizations are frequently used as a means to understand trends and gather
insights from datasets, but often take a long time to generate. In this paper,
we focus on the problem of rapidly generating approximate visualizations while
preserving crucial visual properties of interest to analysts. Our primary
focus will be on sampling algorithms that preserve the visual property of
ordering; our techniques will also apply to some other visual properties. For
instance, our algorithms can be used to generate an approximate visualization
of a bar chart very rapidly, where the comparisons between any two bars are
correct. We formally show that our sampling algorithms are generally applicable
and provably optimal in theory, in that they do not take more samples than
necessary to generate the visualizations with ordering guarantees. They also
work well in practice, correctly ordering output groups while taking orders of
magnitude fewer samples and much less time than conventional sampling schemes.
Comment: Tech Report. 17 pages. Condensed version to appear in VLDB Vol. 8 No.
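The ordering guarantee can be illustrated with a simple confidence-interval scheme (a sketch in the spirit of the paper, not its exact algorithm): sample each group in round-robin fashion and stop as soon as the Hoeffding intervals around the estimated bar heights are pairwise disjoint, at which point the displayed ordering is correct with high probability.

```python
import math, random

random.seed(7)

# Hypothetical populations behind three bars of a bar chart (values in [0,1]).
groups = {"A": lambda: random.betavariate(2, 8),   # mean ~0.2
          "B": lambda: random.betavariate(5, 5),   # mean ~0.5
          "C": lambda: random.betavariate(8, 2)}   # mean ~0.8

def sample_until_ordered(groups, delta=0.05, max_n=100000):
    """Round-robin sampling until the Hoeffding intervals of all group
    means are pairwise disjoint, guaranteeing the displayed ordering
    with probability at least 1 - delta (union bound over groups)."""
    sums = {g: 0.0 for g in groups}
    n = 0
    while n < max_n:
        n += 1
        for g, draw in groups.items():
            sums[g] += draw()
        eps = math.sqrt(math.log(2 * len(groups) / delta) / (2 * n))
        means = {g: s / n for g, s in sums.items()}
        intervals = sorted((m - eps, m + eps) for m in means.values())
        if all(intervals[i][1] < intervals[i + 1][0]
               for i in range(len(intervals) - 1)):
            return sorted(means, key=means.get), n
    raise RuntimeError("ordering not resolved within the sample budget")

order, n_samples = sample_until_ordered(groups)
print(order, n_samples)  # resolves after on the order of a hundred rounds
```

Groups whose true means are far apart are resolved after very few samples, which is where the orders-of-magnitude savings over uniform sampling come from.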
Standard survey methods for estimating colony losses and explanatory risk factors in Apis mellifera
This chapter addresses survey methodology and questionnaire design for the collection of data pertaining to estimation of honey bee colony loss rates and identification of risk factors for colony loss. Sources of error in surveys are described. Advantages and disadvantages of different random and non-random sampling strategies and different modes of data collection are presented to enable the researcher to make an informed choice. We discuss survey and questionnaire methodology in some detail, for the purpose of raising awareness of issues to be considered during the survey design stage in order to minimise error and bias in the results. Aspects of survey design are illustrated using surveys in Scotland. Part of a standardized questionnaire is given as a further example, developed by the COLOSS working group for Monitoring and Diagnosis. Approaches to data analysis are described, focussing on estimation of loss rates. Dutch monitoring data from 2012 were used for an example of a statistical analysis with the public domain R software. We demonstrate the estimation of the overall proportion of losses and corresponding confidence interval using a quasi-binomial model to account for extra-binomial variation. We also illustrate generalized linear model fitting when incorporating a single risk factor, and derivation of relevant confidence intervals
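The quasi-binomial loss-rate estimate described above can be sketched outside R as well (the chapter uses R; this is a plain-Python illustration on made-up colony counts): estimate the overall loss proportion, measure extra-binomial variation with a Pearson-based dispersion factor, and inflate the Wald interval accordingly.

```python
import math

# Hypothetical per-apiary data: (colonies lost, colonies going into winter).
data = [(2, 10), (0, 8), (5, 12), (1, 15), (4, 9), (3, 20), (6, 11)]

lost = sum(l for l, n in data)
total = sum(n for l, n in data)
p_hat = lost / total                      # overall loss proportion

# Pearson dispersion estimate: mean squared Pearson residual,
# with degrees of freedom = number of apiaries - 1.
pearson = sum((l - n * p_hat) ** 2 / (n * p_hat * (1 - p_hat))
              for l, n in data)
phi = pearson / (len(data) - 1)           # phi > 1 signals overdispersion

# 95% Wald CI with the quasi-binomial (inflated) standard error.
se = math.sqrt(phi * p_hat * (1 - p_hat) / total)
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)
print(round(p_hat, 3), round(phi, 2), [round(c, 3) for c in ci])
```

Ignoring the dispersion factor (setting `phi = 1`) would understate the uncertainty whenever losses cluster by apiary, which is precisely the extra-binomial variation the quasi-binomial model accounts for.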
LASAGNE: Locality And Structure Aware Graph Node Embedding
In this work we propose Lasagne, a methodology to learn locality and
structure aware graph node embeddings in an unsupervised way. In particular, we
show that the performance of existing random-walk based approaches depends
strongly on the structural properties of the graph, e.g., the size of the
graph, whether the graph has a flat or upward-sloping Network Community Profile
(NCP), whether the graph is expander-like, whether the classes of interest are
more k-core-like or more peripheral, etc. For larger graphs with flat NCPs that
are strongly expander-like, existing methods lead to random walks that expand
rapidly, touching many dissimilar nodes, thereby leading to lower-quality
vector representations that are less useful for downstream tasks. Rather than
relying on global random walks or neighbors within fixed hop distances, Lasagne
exploits strongly local Approximate Personalized PageRank stationary
distributions to more precisely engineer local information into node
embeddings. This leads, in particular, to more meaningful and more useful
vector representations of nodes in poorly-structured graphs. We show that
Lasagne leads to significant improvement in downstream multi-label
classification for larger graphs with flat NCPs, that it is comparable for
smaller graphs with upward-sloping NCPs, and that it is comparable to existing
methods for link prediction tasks.
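The strongly local Approximate Personalized PageRank computation that Lasagne builds on is the classic "push" procedure; a compact sketch for an unweighted graph given as adjacency lists (simplified, illustration only):

```python
from collections import deque

def appr_push(adj, seed, alpha=0.15, eps=1e-4):
    """Approximate Personalized PageRank from `seed` via local pushes
    (Andersen-Chung-Lang style). Only nodes near the seed are ever
    touched, so the cost is independent of the total graph size."""
    p = {}                       # approximate PPR vector
    r = {seed: 1.0}              # residual probability mass
    queue = deque([seed])
    while queue:
        u = queue.popleft()
        ru, du = r.get(u, 0.0), len(adj[u])
        if ru < eps * du:        # residual too small: nothing to push
            continue
        p[u] = p.get(u, 0.0) + alpha * ru
        r[u] = 0.0
        share = (1 - alpha) * ru / du
        for v in adj[u]:
            r[v] = r.get(v, 0.0) + share
            if r[v] >= eps * len(adj[v]):
                queue.append(v)
    return p

# Tiny graph: a triangle 0-1-2 plus a pendant node 3 attached to 2.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
ppr = appr_push(adj, seed=0)
print(sorted(ppr, key=ppr.get, reverse=True))  # seed 0 ranks highest
```

The resulting sparse, seed-centered distribution is what gets engineered into the node embedding, in place of global random walks that wander far from the seed.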
DROP: Dimensionality Reduction Optimization for Time Series
Dimensionality reduction is a critical step in scaling machine learning
pipelines. Principal component analysis (PCA) is a standard tool for
dimensionality reduction, but performing PCA over a full dataset can be
prohibitively expensive. As a result, theoretical work has studied the
effectiveness of iterative, stochastic PCA methods that operate over data
samples. However, termination conditions for stochastic PCA either execute for
a predetermined number of iterations, or until convergence of the solution,
frequently sampling too many or too few datapoints for end-to-end runtime
improvements. We show how accounting for downstream analytics operations during
DR via PCA allows stochastic methods to efficiently terminate after operating
over small (e.g., 1%) subsamples of input data, reducing whole workload
runtime. Leveraging this, we propose DROP, a DR optimizer that enables speedups
of up to 5x over Singular-Value-Decomposition-based PCA techniques, and exceeds
conventional approaches like FFT and PAA by up to 16x in end-to-end workloads.
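The core idea, stopping a stochastic PCA once further iterations no longer help, can be caricatured with Oja's rule and a simple convergence test (a sketch only; DROP's actual optimizer terminates based on downstream workload runtime, which is more sophisticated than the stopping rule used here):

```python
import numpy as np

rng = np.random.default_rng(3)

# Low-rank-ish time-series-like data: one strong component plus noise.
n, d = 5000, 20
z = rng.normal(size=(n, 1))
direction = np.ones(d) / np.sqrt(d)
X = z @ direction[None, :] * 3.0 + 0.1 * rng.normal(size=(n, d))
X -= X.mean(axis=0)

def oja_top_component(X, lr=0.01, batch=50, tol=1e-2, max_epochs=50):
    """Oja's stochastic iteration for the top principal component,
    terminating early once the estimate stops moving (a stand-in for a
    downstream-quality stopping rule). Returns the estimate and the
    number of samples processed."""
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    seen = 0
    for _ in range(max_epochs):
        w_old = w.copy()
        idx = rng.permutation(len(X))
        for start in range(0, len(X), batch):
            B = X[idx[start:start + batch]]
            w += lr * B.T @ (B @ w) / len(B)   # stochastic Oja update
            w /= np.linalg.norm(w)             # keep the estimate unit-norm
            seen += len(B)
        if np.linalg.norm(w - w_old) < tol:    # early termination
            break
    return w, seen

w, samples_seen = oja_top_component(X)
true_top = np.linalg.svd(X, full_matrices=False)[2][0]
print(abs(w @ true_top), samples_seen)  # alignment with exact PCA, near 1
```

When the data are dominated by a few components, as is common for time series, the iteration aligns with the true component long before a full SVD-equivalent amount of work is done, which is the effect the 5x speedup claim rests on.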