General highlight detection in sport videos
Attention is a psychological measure of the human response to a stimulus. We propose a general framework for highlight detection that compares attention intensity while sports videos are watched. Three steps are involved: adaptive selection of salient features, unified attention estimation, and highlight identification. Adaptive selection computes feature correlations to decide an optimal set of salient features. Unified estimation combines these features with a multi-resolution autoregressive (MAR) technique and thus creates a temporal curve of attention intensity. We rank the intensity of attention to discriminate the boundaries of highlights. Such a framework alleviates the semantic uncertainty around sport highlights and leads to efficient and effective highlight detection. The advantages are as follows: (1) the capability of using data at coarse temporal resolutions; (2) robustness against noise caused by modality asynchronism, perception uncertainty, and feature mismatch; (3) the employment of Markovian constraints on content presentation; and (4) multi-resolution estimation of attention intensity, which enables the precise localisation of event boundaries.
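The three steps above can be sketched in toy form. Everything below is a hypothetical illustration, not the authors' code: the correlation threshold, the simple causal smoothing that stands in for the MAR combination, and the top-fraction ranking rule are all assumptions.

```python
import numpy as np

def highlight_boundaries(features, top_frac=0.1, order=4):
    """Toy attention-based highlight detection (hypothetical interface).

    features: (T, d) array of per-frame salient feature values.
    Returns sorted frame indices in the top fraction of attention intensity.
    """
    # 1) Adaptive selection: keep features that correlate with the mean stream.
    mean_stream = features.mean(axis=1)
    corr = np.array([np.corrcoef(features[:, j], mean_stream)[0, 1]
                     for j in range(features.shape[1])])
    selected = features[:, np.abs(corr) > 0.3]      # threshold is illustrative
    # 2) Unified estimation: causal AR-style smoothing stands in for the
    #    multi-resolution autoregressive (MAR) combination of the paper.
    curve = selected.mean(axis=1)
    for t in range(order, len(curve)):
        curve[t] = 0.5 * curve[t] + 0.5 * curve[t - order:t].mean()
    # 3) Highlight identification: rank intensity and keep the top fraction.
    k = max(1, int(top_frac * len(curve)))
    return np.sort(np.argsort(curve)[-k:])
```

On a synthetic feature matrix with a burst of activity, the returned indices cluster around the burst, which is the ranking behaviour the abstract describes.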
A Direct Estimation of High Dimensional Stationary Vector Autoregressions
The vector autoregressive (VAR) model is a powerful tool for modeling complex time series and has been exploited in many fields. However, fitting a high-dimensional VAR model poses some unique challenges: on one hand, the dimensionality, caused by modeling a large number of time series and higher-order autoregressive processes, is usually much higher than the time series length; on the other hand, the temporal dependence structure in the VAR model gives rise to extra theoretical challenges. In high dimensions, one popular approach is to assume the transition matrix is sparse and fit the VAR model using the "least squares" method with a lasso-type penalty. In this manuscript, we propose an alternative way of estimating the VAR model. The main idea is, by exploiting the temporal dependence structure, to formulate the estimation problem as a linear program. The proposed approach has an immediate advantage over lasso-type estimators: the estimation equation can be decomposed into multiple sub-equations and accordingly can be efficiently solved in parallel. In addition, our method brings new theoretical insights into VAR model analysis. So far, the theoretical results developed in high dimensions (e.g., Song and Bickel (2011) and Kock and Callot (2012)) mainly pose assumptions on the design matrix of the formulated regression problems. Such conditions only indirectly constrain the transition matrices and are not transparent. In contrast, our results show that the operator norm of the transition matrices plays an important role in estimation accuracy. We provide explicit rates of convergence for both estimation and prediction. In addition, we provide thorough experiments on both synthetic and real-world equity data to show that our method has empirical advantages over lasso-type estimators in both parameter estimation and forecasting.
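The column-wise linear program described above can be sketched with a Dantzig-selector-style formulation: for each column, minimise the l1 norm of the coefficients subject to a sup-norm bound on the moment equation built from lag-0 and lag-1 sample autocovariances. This is an illustrative reading of the approach, not the authors' implementation; the tuning parameter lam and the use of SciPy's HiGHS solver are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def var_dantzig(X, lam=0.1):
    """Sketch of an LP-based VAR(1) transition-matrix estimator.

    X: (T, p) multivariate time series, model x_t = A x_{t-1} + noise.
    Each column of A^T is solved by an independent linear program, so the
    problem decomposes and parallelises, as the abstract emphasises.
    """
    T, p = X.shape
    Xc = X - X.mean(axis=0)
    S0 = Xc[:-1].T @ Xc[:-1] / (T - 1)   # lag-0 sample autocovariance
    S1 = Xc[:-1].T @ Xc[1:] / (T - 1)    # lag-1 sample autocovariance, ~ S0 A^T
    M = np.zeros((p, p))                 # will hold the transpose of A
    for j in range(p):                   # one independent LP per column
        # variables: a = a_plus - a_minus, both nonnegative, minimise ||a||_1
        c = np.ones(2 * p)
        G = np.hstack([S0, -S0])         # G @ [a+; a-] = S0 @ a
        b = S1[:, j]
        # |S0 a - b| <= lam, written as two stacked inequality blocks
        A_ub = np.vstack([G, -G])
        b_ub = np.concatenate([b + lam, lam - b])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
        M[:, j] = res.x[:p] - res.x[p:]
    return M.T
```

On a simulated sparse VAR(1), the estimate recovers the true transition matrix up to the shrinkage induced by lam.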
Computing (R, S) policies with correlated demand
This paper considers the single-item single-stocking non-stationary stochastic lot-sizing problem under correlated demand. Operating under a non-stationary (R, S) policy, in which R denotes the reorder period and S the associated order-up-to level, we introduce a mixed integer linear programming (MILP) model which can be easily implemented using off-the-shelf optimisation software. Our modelling strategy can tackle a wide range of time-series-based demand processes, such as autoregressive (AR), moving average (MA), autoregressive moving average (ARMA), and autoregressive with autoregressive conditional heteroskedasticity (AR-ARCH) processes. In an extensive computational study, we compare the performance of our model against the optimal policy obtained via stochastic dynamic programming. Our results demonstrate that the optimality gap of our approach averages 2.28% and that its computational performance is good.
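A minimal way to sanity-check an (R, S) pair under correlated demand is plain simulation. This sketch is not the paper's MILP: it evaluates a single stationary (R, S) policy under an AR(1) demand process, and the holding, backorder, and ordering costs h, b, K, along with the demand parameters, are all hypothetical.

```python
import numpy as np

def simulate_RS(R, S, T=52, phi=0.5, mu=20.0, sigma=3.0,
                h=1.0, b=9.0, K=100.0, seed=0):
    """Average per-period cost of a stationary (R, S) policy, AR(1) demand.

    Every R periods the inventory position is raised to the
    order-up-to level S (fixed cost K per order); holding cost h and
    backorder cost b accrue on end-of-period inventory.
    """
    rng = np.random.default_rng(seed)
    d_prev, inv, cost = mu, S, 0.0
    for t in range(T):
        if t % R == 0 and inv < S:           # review epoch: order up to S
            cost += K
            inv = S
        # AR(1) demand: d_t = mu + phi * (d_{t-1} - mu) + eps_t
        d = mu + phi * (d_prev - mu) + rng.normal(0.0, sigma)
        inv -= d
        cost += h * max(inv, 0.0) + b * max(-inv, 0.0)
        d_prev = d
    return cost / T
```

As expected, an order-up-to level far below the demand accumulated over a review cycle incurs heavy backorder costs, while a level near the cycle demand plus a buffer does much better.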
Regularized adaptive long autoregressive spectral analysis
This paper is devoted to adaptive long autoregressive spectral analysis when (i) very few data are available and (ii) prior information exists concerning the spectral smoothness and time continuity of the analyzed signals. The contribution builds on two papers by Kitagawa and Gersch. The first deals with spectral smoothness, in the regularization framework, while the second is devoted to time continuity, in the Kalman formalism. The present paper proposes an original synthesis of the two contributions: a new regularized criterion is introduced that takes both pieces of information into account. The criterion is efficiently optimized by a Kalman smoother. One of the major features of the method is that it is entirely unsupervised: the problem of automatically adjusting the hyperparameters that balance data-based against prior-based information is solved by maximum likelihood. The improvement is quantified in the field of meteorological radar.
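A simplified, non-Kalman sketch of the spectral-smoothness idea: penalise rough AR coefficient sequences with a second-difference prior and read the spectrum off the fitted long AR model. This is in the spirit of the Kitagawa-Gersch regularization the abstract cites, but the penalty weight lam here is fixed by hand, whereas the paper adjusts its hyperparameters by maximum likelihood.

```python
import numpy as np

def regularized_ar_spectrum(x, order=20, lam=1.0, nfreq=256):
    """Long-AR spectral estimate with a smoothness prior (illustrative).

    Fits x[t] ~ sum_k a_k x[t-k] by penalised least squares, the penalty
    being the squared second differences of the coefficient sequence.
    Returns (frequencies in cycles/sample, spectrum values).
    """
    x = np.asarray(x, float)
    n = len(x)
    # Design matrix: column k holds the series at lag k+1.
    X = np.column_stack([x[order - k - 1:n - k - 1] for k in range(order)])
    y = x[order:]
    # Second-difference operator penalises rough coefficient sequences.
    D = np.diff(np.eye(order), n=2, axis=0)
    a = np.linalg.solve(X.T @ X + lam * (D.T @ D), X.T @ y)
    resid_var = np.mean((y - X @ a) ** 2)
    # AR spectrum: sigma^2 / |1 - sum_k a_k e^{-2 pi i f k}|^2
    freqs = np.linspace(0.0, 0.5, nfreq)
    lags = np.arange(1, order + 1)
    denom = np.abs(1 - np.exp(-2j * np.pi * np.outer(freqs, lags)) @ a) ** 2
    return freqs, resid_var / denom
```

On a simulated AR(2) resonance the estimated spectrum peaks near the true resonant frequency, illustrating why a long AR model with a smoothness prior can resolve narrow features from short records.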