Input window size and neural network predictors
Neural network approaches to time series prediction are briefly discussed, and the need to specify an appropriately sized input window is identified. Relevant theoretical results from dynamic systems theory are briefly introduced, and heuristics for finding the correct embedding dimension, and hence window size, are discussed. The method is applied to two time series and the resulting generalisation performance of the trained feedforward neural network predictors is analysed. It is shown that the heuristics can provide useful information in defining the appropriate network architecture.
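The abstract does not say which embedding-dimension heuristic is used; a common choice in this setting is the false-nearest-neighbours test. Below is a minimal Python sketch of that heuristic, assuming a univariate series and unit delay; the rtol threshold and the 1% cut-off are illustrative values, not taken from the paper.

    import numpy as np

    def fnn_ratio(series, dim, tau=1, rtol=10.0):
        """Fraction of false nearest neighbours at embedding dimension `dim`."""
        series = np.asarray(series, dtype=float)
        n = len(series) - dim * tau
        # Delay-embed: row i holds the window series[i], series[i+tau], ...
        emb = np.array([series[i:i + dim * tau:tau] for i in range(n)])
        false = 0
        for i in range(n):
            dist = np.linalg.norm(emb - emb[i], axis=1)
            dist[i] = np.inf                      # exclude the point itself
            j = int(np.argmin(dist))
            # Extra separation revealed by adding one more delayed coordinate.
            extra = abs(series[i + dim * tau] - series[j + dim * tau])
            if dist[j] > 0 and extra / dist[j] > rtol:
                false += 1
        return false / n

    def choose_window_size(series, max_dim=10):
        """Smallest embedding dimension whose false-neighbour ratio is below 1%."""
        for dim in range(1, max_dim + 1):
            if fnn_ratio(series, dim) < 0.01:
                return dim
        return max_dim

The chosen dimension would then be used as the number of input units (the input window size) of the feedforward predictor.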
On the Window Size for Classification in Changing Environments
Classification in changing environments (commonly known as concept drift) requires adaptation of the classifier to accommodate the
changes. One approach is to keep a moving window on the streaming data and constantly update the classifier on it. Here we consider an
abrupt change scenario where one set of probability distributions of the classes is instantly replaced with another. For a fixed ‘transition
period’ around the change, we derive a generic relationship between the size of the moving window and the classification error rate. We
derive expressions for the error in the transition period and for the optimal window size for the case of two Gaussian classes where the
concept change is a geometrical displacement of the whole class configuration in the space. A simple window resize strategy based
on the derived relationship is proposed and compared with fixed-size windows on a real benchmark data set (Electricity Market).
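As a concrete baseline for the fixed-size windows the paper compares against, here is a small Python sketch of a classifier that is always refit on the most recent window of labelled points. The nearest-class-mean rule and the refit-on-every-query strategy are illustrative assumptions; the paper's derived error expressions and its window-resize strategy are not reproduced here.

    from collections import deque
    import numpy as np

    class MovingWindowClassifier:
        """Keeps the last `window_size` labelled points and classifies from them."""

        def __init__(self, window_size):
            self.window = deque(maxlen=window_size)   # old points fall off automatically

        def update(self, x, y):
            self.window.append((np.asarray(x, dtype=float), y))

        def predict(self, x):
            if not self.window:
                return None
            xs = np.array([p for p, _ in self.window])
            ys = np.array([label for _, label in self.window])
            # Nearest-class-mean rule fitted only on the current window, so the
            # classifier forgets data older than the window.
            means = {c: xs[ys == c].mean(axis=0) for c in np.unique(ys)}
            x = np.asarray(x, dtype=float)
            return min(means, key=lambda c: np.linalg.norm(x - means[c]))

In a prequential evaluation one would call predict(x) first, record the error, and then call update(x, y); resizing the window amounts to rebuilding the deque with a new maxlen.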
Dynamic programming for multi-view disparity/depth estimation
A novel algorithm for disparity/depth estimation from multi-view images is presented. A dynamic programming approach with window-based correlation and a novel cost function is proposed. The smoothness of the disparity/depth map is embedded in the dynamic programming approach, whilst the window-based correlation increases reliability. Enhancement methods are included: an adaptive window size and a shiftable window are used to increase reliability in homogeneous areas and to increase sharpness at object boundaries. First, the algorithm estimates depth maps along a single camera axis. It then combines the depth estimates from the different axes to derive a suitable depth map for multi-view images. The proposed scheme outperforms existing approaches in both parallel and non-parallel camera configurations. © 2006 IEEE.
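A minimal sketch of the two main ingredients named above, a window-based matching cost plus a smoothness term optimised by dynamic programming along each scanline, is given below for a single rectified image pair. The SAD cost, the fixed square window, and the linear jump penalty are illustrative assumptions; the paper's adaptive/shiftable windows and multi-view (multi-axis) fusion are not reproduced.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def disparity_scanline_dp(left, right, max_disp, win=5, smooth=0.1):
        """Disparity map for a rectified pair via windowed SAD + per-row DP."""
        left = np.asarray(left, dtype=float)
        right = np.asarray(right, dtype=float)
        h, w = left.shape
        n_d = max_disp + 1
        cost = np.full((h, w, n_d), 1e9)
        for d in range(n_d):
            # Window-based correlation: absolute differences averaged over a win x win box.
            diff = np.abs(left[:, d:] - right[:, :w - d])
            cost[:, d:, d] = uniform_filter(diff, size=win)
        # Smoothness: penalise disparity jumps between neighbouring pixels on a row.
        jump = smooth * np.abs(np.arange(n_d)[:, None] - np.arange(n_d)[None, :])
        disp = np.zeros((h, w), dtype=int)
        for y in range(h):
            acc = cost[y].copy()                   # acc[x, d]: best path cost ending at (x, d)
            back = np.zeros((w, n_d), dtype=int)
            for x in range(1, w):
                cand = acc[x - 1][None, :] + jump  # cand[d, d_prev]
                back[x] = np.argmin(cand, axis=1)
                acc[x] = cost[y, x] + np.min(cand, axis=1)
            d = int(np.argmin(acc[-1]))            # backtrack the cheapest path
            for x in range(w - 1, -1, -1):
                disp[y, x] = d
                d = back[x, d]
        return disp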
Almost-Smooth Histograms and Sliding-Window Graph Algorithms
We study algorithms for the sliding-window model, an important variant of the
data-stream model, in which the goal is to compute some function of a
fixed-length suffix of the stream. We extend the smooth-histogram framework of
Braverman and Ostrovsky (FOCS 2007) to almost-smooth functions, which includes
all subadditive functions. Specifically, we show that if a subadditive function
can be (1+ε)-approximated in the insertion-only streaming model, then
it can be (2+ε)-approximated also in the sliding-window model with
space complexity larger by a factor of O((log w)/ε), where w is the
window size.
We demonstrate how our framework yields new approximation algorithms with
relatively little effort for a variety of problems that do not admit the
smooth-histogram technique. For example, in the frequency-vector model, a
symmetric norm is subadditive and thus we obtain a sliding-window
(2+ε)-approximation algorithm for it. Another example is for streaming
matrices, where we derive a new sliding-window
approximation algorithm for a Schatten norm. We then
consider graph streams and show that many graph problems are subadditive,
including maximum submodular matching, minimum vertex-cover, and maximum
k-cover, thereby deriving sliding-window constant-factor approximation algorithms for
them almost for free (using known insertion-only algorithms). Finally, we
design, for every prescribed value in a suitable range, an artificial function, based on the
maximum-matching size, whose almost-smoothness parameter is exactly that value.
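To make the smooth-histogram mechanism that this paper generalises concrete, here is a small Python sketch for the simplest possible case: the sum of a stream of non-negative numbers over a sliding window. The choice of the sum, the pruning parameter beta, and answering with the oldest surviving suffix are illustrative; the paper's contribution, handling almost-smooth (e.g. subadditive) functions such as graph parameters, is not reproduced here.

    class SlidingWindowSum:
        """Smooth-histogram-style (1 +/- beta) estimate of a windowed sum."""

        def __init__(self, window_size, beta=0.1):
            self.w = window_size
            self.beta = beta
            self.buckets = []          # [start_time, value of the suffix starting there]
            self.t = 0                 # items seen so far

        def update(self, value):
            self.t += 1
            for b in self.buckets:     # every suffix gains the new (non-negative) item
                b[1] += value
            self.buckets.append([self.t, float(value)])
            # Pruning rule: drop bucket i+1 as soon as bucket i+2 is within a
            # (1 - beta) factor of bucket i, so adjacent survivors stay close
            # while every second one drops by (1 - beta), keeping few buckets.
            i = 0
            while i + 2 < len(self.buckets):
                if self.buckets[i + 2][1] >= (1 - self.beta) * self.buckets[i][1]:
                    del self.buckets[i + 1]
                else:
                    i += 1
            # Keep at most one bucket that starts before the current window.
            while len(self.buckets) > 1 and self.buckets[1][0] <= self.t - self.w + 1:
                self.buckets.pop(0)

        def estimate(self):
            # Oldest surviving suffix; it starts at or before the window start and,
            # by the pruning invariant, is close to the exact windowed sum.
            return self.buckets[0][1] if self.buckets else 0.0

Replacing the running sum with an insertion-only estimator of a subadditive function is exactly the step the paper's almost-smooth analysis justifies, at the cost of a weaker approximation guarantee.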
Efficient estimation of AUC in a sliding window
In many applications, monitoring area under the ROC curve (AUC) in a sliding
window over a data stream is a natural way of detecting changes in the system.
The drawback is that computing AUC in a sliding window is expensive, especially
if the window size is large and the data flow is significant.
In this paper we propose a scheme for maintaining an approximate AUC in a
sliding window of a given length. More specifically, we propose an algorithm that,
given an accuracy parameter, estimates AUC to within that accuracy, and can maintain
this estimate cheaply, per update, as the window slides.
This provides a speed-up over the exact computation of AUC, whose per-update cost
grows with the window length. The speed-up becomes more significant as the size of
the window increases. Our estimate is based on grouping the data points
together, and using these groups to calculate AUC. The grouping is designed
carefully such that (i) the groups are small enough, so that the error stays
small, (ii) the number of groups is small, so that enumerating them is not
expensive, and (iii) the definition is flexible enough so that we can
maintain the groups efficiently.
Our experimental evaluation demonstrates that the average approximation error
in practice is much smaller than the approximation guarantee,
and that we can achieve significant speed-ups with only a modest sacrifice in
accuracy.
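The abstract does not spell out how the groups are formed; as a rough illustration of computing AUC from group counts in a sliding window, here is a Python sketch that buckets scores into fixed-width bins. The equal-width binning, the bin count, and the assumed score range [0, 1] are assumptions for this sketch, not the paper's construction, whose groups are chosen adaptively to control the error.

    from collections import deque

    class WindowedApproxAUC:
        """Approximate AUC over the last `window_size` (score, label) pairs."""

        def __init__(self, window_size, n_bins=100, lo=0.0, hi=1.0):
            self.window = deque()
            self.w = window_size
            self.n_bins = n_bins
            self.lo, self.hi = lo, hi
            self.pos = [0] * n_bins    # positives per score bin
            self.neg = [0] * n_bins    # negatives per score bin

        def _bin(self, score):
            b = int((score - self.lo) / (self.hi - self.lo) * self.n_bins)
            return min(max(b, 0), self.n_bins - 1)

        def update(self, score, label):
            self.window.append((score, label))
            (self.pos if label == 1 else self.neg)[self._bin(score)] += 1
            if len(self.window) > self.w:              # evict the oldest point
                old_score, old_label = self.window.popleft()
                (self.pos if old_label == 1 else self.neg)[self._bin(old_score)] -= 1

        def auc(self):
            total_pos, total_neg = sum(self.pos), sum(self.neg)
            if total_pos == 0 or total_neg == 0:
                return None
            winning, neg_below = 0.0, 0
            for p, n in zip(self.pos, self.neg):       # bins in increasing score order
                winning += p * (neg_below + 0.5 * n)   # ties within a bin count as half
                neg_below += n
            return winning / (total_pos * total_neg)

Each update touches a single bin, and reading off the AUC costs time proportional to the number of bins rather than the window length, which is the kind of trade-off the paper formalises.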
