An Enhanced Method For Evaluating Automatic Video Summaries
Evaluation of automatic video summaries is a challenging problem. In past years, several evaluation methods have been presented that utilize only a single feature, such as color, to detect similarity between automatic video summaries and ground-truth user summaries. One drawback of using a single feature is that it sometimes produces false similarity detections, which makes the assessment of the quality of the generated video summary less perceptually meaningful and less accurate. In this paper, a novel method for evaluating automatic video summaries is presented. The method compares automatic video summaries generated by video summarization techniques with ground-truth user summaries. Its objective is to quantify the quality of video summaries and to allow comparison of different video summarization techniques, utilizing both color and texture features of the video frames and using the Bhattacharyya distance as a dissimilarity measure due to its advantages. Our experiments show that the proposed evaluation method overcomes the drawbacks of other methods and gives a more perceptual evaluation of the quality of the automatic video summaries.
Comment: This paper has been withdrawn by the author due to some errors and an incomplete study.
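As an illustration of the kind of histogram comparison the abstract describes, the sketch below computes the Bhattacharyya distance between two intensity histograms in Python. The bin count, the use of plain intensity histograms, and the function names are illustrative assumptions, not the paper's exact procedure (which combines color and texture features).

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two normalized histograms."""
    p = p / p.sum()
    q = q / q.sum()
    bc = np.sum(np.sqrt(p * q))        # Bhattacharyya coefficient in [0, 1]
    return -np.log(max(bc, 1e-12))     # distance; 0 means identical histograms

def frame_dissimilarity(frame_a, frame_b, bins=16):
    """Compare two frames via their intensity histograms.
    The 16-bin intensity histogram is an illustrative choice only."""
    h_a, _ = np.histogram(frame_a, bins=bins, range=(0, 256))
    h_b, _ = np.histogram(frame_b, bins=bins, range=(0, 256))
    return bhattacharyya_distance(h_a.astype(float) + 1e-9,
                                  h_b.astype(float) + 1e-9)

# Example: two random "frames"; a lower distance means more similar content
rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(64, 64))
b = rng.integers(0, 256, size=(64, 64))
print(frame_dissimilarity(a, a))  # ~0.0
print(frame_dissimilarity(a, b))  # small but positive
```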
Is the economic crisis over (and out)?
This note analyzes the recent global recession: its causes, the predictability of the timing of its start and of its end, and the implications for macro policy. These follow from the general-equilibrium macro model of Abadir and Talmain (2002) and its implications for a new type of macroeconometrics. The note also proposes some banking regulations, and presents prospects for the future. Keywords: recession, recovery, causes and symptoms, turning points, prediction, macro policy.
Handwritten Digits Recognition using Deep Convolutional Neural Network: An Experimental Study using EBlearn
In this paper, results of an experimental study of a deep convolutional neural network architecture that classifies handwritten digits using the EBLearn library are reported. The purpose of this neural network is to classify input images into 10 different classes, the digits 0-9, and to explore new findings. The input dataset consists of 32x32 grayscale digit images (the MNIST dataset).
Comment: This paper has been withdrawn by the author due to some errors and an incomplete study.
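The paper's experiments use the EBLearn C++ library; purely for illustration, the following PyTorch sketch builds a LeNet-style network with the same input and output shape (32x32 grayscale images, 10 classes). The layer widths and activations are assumptions and are not claimed to match the architecture studied in the paper.

```python
import torch
import torch.nn as nn

class DigitNet(nn.Module):
    """LeNet-style CNN for 32x32 grayscale digit images, 10 output classes.
    Layer widths are illustrative; the paper's EBLearn architecture may differ."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 32x32 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                  # 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),  # 14x14 -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                  # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.Tanh(),
            nn.Linear(120, 84),
            nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Sanity check on a dummy batch of four 32x32 grayscale images
model = DigitNet()
logits = model(torch.zeros(4, 1, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```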
On efficient simulation in dynamic models
Ways of improving the efficiency of Monte-Carlo (MC) techniques are studied for dynamic models. Such models cause the conventional Antithetic Variate (AV) technique to fail and, as is proved here, reduce the benefit of using Control Variates with nearly nonstationary series. This paper suggests modifications of the two conventional variance reduction techniques to enhance their efficiency. New classes of AVs are also proposed. Methods of reordering innovations are found to do less well than others that rely on changing some signs in the spirit of the traditional AV. Numerical and analytical calculations are given to investigate the features of the proposed techniques. JEL classification: C15. Keywords: Dynamic models, Monte-Carlo (MC), Variance Reduction Technique (VRT), Antithetic Variate (AV), Control Variate (CV), Efficiency Gain (EG), Response Surface (RS).
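To illustrate how the conventional AV can fail in a dynamic model, the numpy sketch below pairs each simulated AR(1) path with a path driven by sign-flipped innovations and estimates the autoregressive coefficient by least squares; because that estimator is invariant to the sign flip, the antithetic pair brings no variance reduction at all. The AR(1) design and the choice of estimator are illustrative, and the paper's proposed modified AVs are not reproduced here.

```python
import numpy as np

def simulate_ar1(eps, rho=0.9, y0=0.0):
    """Simulate y_t = rho * y_{t-1} + eps_t from a path of innovations."""
    y = np.empty_like(eps)
    prev = y0
    for t, e in enumerate(eps):
        prev = rho * prev + e
        y[t] = prev
    return y

def ols_rho(y):
    """Least-squares estimate of rho in y_t = rho * y_{t-1} + eps_t (no intercept)."""
    return np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)

rng = np.random.default_rng(1)
T, n_rep, rho = 100, 5000, 0.9

plain, paired = [], []
for _ in range(n_rep):
    eps = rng.standard_normal(T)
    r1 = ols_rho(simulate_ar1(eps, rho))
    r2 = ols_rho(simulate_ar1(-eps, rho))   # conventional antithetic draw
    plain.append(r1)
    paired.append(0.5 * (r1 + r2))

# The LS estimator is invariant to flipping the sign of all innovations,
# so r2 == r1 and the conventional antithetic pair gives no efficiency gain.
print(np.var(plain), np.var(paired))   # essentially identical
```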
Macro and Financial Markets: The Memory of an Elephant?
Macroeconomic and aggregate financial series share an unconventional type of nonlinear dynamics. Existing techniques (like co-integration) model these dynamics incompletely, hence generating seemingly paradoxical results.
To avoid this, we provide a methodology to disentangle the long-run relation between variables from their own dynamics, and illustrate with two applications.
First, in the forward-premium puzzle, adding a component quantifying the persistent nonlinear dynamics of exchange rates yields substantial predictability and makes the forward-premium term insignificant. Second, the S&P 500 grows in a pattern of momentum followed by reversal, forming long cycles around a trend given by GDP, a stable non-breaking relation since WWII.
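For context on the first application, the standard forward-premium (Fama) regression underlying the puzzle can be written as follows; the notation is the conventional one and is not taken from the paper itself.

```latex
% Standard forward-premium (Fama) regression; under uncovered interest parity
% the slope should satisfy beta = 1, yet empirical estimates are typically negative.
\[
  s_{t+1} - s_t \;=\; \alpha + \beta\,(f_t - s_t) + \varepsilon_{t+1},
\]
% where s_t is the log spot exchange rate and f_t the log one-period forward rate.
% The abstract's point is that augmenting this equation with a term capturing the
% persistent nonlinear dynamics of s_t renders the forward-premium term insignificant.
```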
An Analytical Approximation for the Excess Noise Factor of Avalanche Photodiodes with Dead Space
Approximate analytical expressions are derived for the mean gain and the excess noise factor of avalanche photodiodes, including the effect of dead space. The analysis is based on a characteristic-equation approach that yields an approximate analytical solution to the existing system of recurrence equations characterizing the statistics of the random multiplication gain. The analytical expressions for the excess noise factor and the mean gain are shown to be in good agreement with the exact results obtained from numerical solutions of the recurrence equations for dead-space values reaching up to 20% of the width of the multiplication region.
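For reference, in the classical local model without dead space the excess noise factor is given by McIntyre's formula, reproduced below in the usual notation; the paper's approximate expressions extend this benchmark by accounting for dead space, which is known to lower the excess noise below the local-model prediction.

```latex
% Classical local-model benchmark (no dead space): McIntyre's excess noise factor,
% where M is the mean multiplication gain and k the ionization-coefficient ratio.
\[
  F(M) \;=\; kM + \left(2 - \tfrac{1}{M}\right)(1 - k).
\]
```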
"On some definitions in matrix algebra"
Many definitions in matrix algebra are not standardized. This note discusses some of the pitfalls associated with undesirable or wrong definitions, and deals with central concepts like symmetry, orthogonality, square root, Hermitian and quadratic forms, and matrix derivatives.
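One concrete example of such a pitfall (an illustration chosen here, not necessarily one of the note's own examples) is that a quadratic form x'Ax depends only on the symmetric part of A, so speaking of "the" matrix of a quadratic form is ambiguous unless A is symmetrized:

```python
import numpy as np

# The antisymmetric part of A contributes nothing to x'Ax, so the quadratic form
# only identifies the symmetric part (A + A')/2 of its matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # generally not symmetric
S = 0.5 * (A + A.T)               # its symmetric part
x = rng.standard_normal(3)

print(x @ A @ x)   # quadratic form with the non-symmetric A
print(x @ S @ x)   # identical value using only the symmetric part
```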
Model-Free Estimation of Large Variance Matrices
This paper introduces a new method for estimating large variance matrices. Starting from the orthogonal decomposition of the sample variance matrix, we exploit the fact that orthogonal matrices are never ill-conditioned and therefore focus on improving the estimation of the eigenvalues. We estimate the eigenvectors from just a fraction of the data, then use them to transform the data into approximately orthogonal series that we use to estimate a well-conditioned matrix of eigenvalues. Our estimator is model-free: we make no assumptions on the distribution of the random sample or on any parametric structure the variance matrix may have. By design, it delivers well-conditioned estimates regardless of the dimension of the problem and the number of observations available. Simulation evidence shows that the new estimator outperforms the usual sample variance matrix, not only by achieving a substantial improvement in the condition number (as expected), but also by achieving much lower error norms that measure its deviation from the true variance. Keywords: variance matrices, ill-conditioning, mean squared error, mean absolute deviations, resampling.
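A minimal numpy sketch of the procedure as the abstract describes it is given below: eigenvectors are estimated from a fraction of the observations, and the rotated series then yield the eigenvalue estimates. The 50% split, the use of the held-out half for the eigenvalues, and all other details are assumptions for illustration, not the authors' exact scheme.

```python
import numpy as np

def split_sample_variance(X, frac=0.5, seed=0):
    """Sketch of the abstract's idea: eigenvectors from a fraction of the
    observations, eigenvalues from the (approximately orthogonal) rotated
    series of the remaining observations. The 50% split is illustrative."""
    n, p = X.shape
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n1 = int(frac * n)
    X1, X2 = X[idx[:n1]], X[idx[n1:]]

    # Eigenvectors from the first subsample's sample variance matrix
    S1 = np.cov(X1, rowvar=False)
    _, V = np.linalg.eigh(S1)

    # Rotate the second subsample; its components are approximately uncorrelated,
    # so keep only their variances as the eigenvalue estimates
    Z = (X2 - X2.mean(axis=0)) @ V
    lam = Z.var(axis=0, ddof=1)

    return V @ np.diag(lam) @ V.T

# Example: n barely above p, where the plain sample variance is badly conditioned
rng = np.random.default_rng(1)
n, p = 120, 100
X = rng.standard_normal((n, p))
S_plain = np.cov(X, rowvar=False)
S_split = split_sample_variance(X)
print(np.linalg.cond(S_plain), np.linalg.cond(S_split))
```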
