Online Importance Weight Aware Updates
An importance weight quantifies the relative importance of one example over
another, coming up in applications of boosting, asymmetric classification
costs, reductions, and active learning. The standard approach for dealing with
importance weights in gradient descent is via multiplication of the gradient.
We first demonstrate the problems of this approach when importance weights are
large, and argue in favor of more sophisticated ways for dealing with them. We
then develop an approach which enjoys an invariance property: updating twice
with importance weight h is equivalent to updating once with importance weight
2h. For many important losses this has a closed-form update which satisfies
standard regret guarantees when all examples have importance weight h = 1. We
also
briefly discuss two other reasonable approaches for handling large importance
weights. Empirically, these approaches yield substantially superior prediction
with similar computational performance while reducing the sensitivity of the
algorithm to the exact setting of the learning rate. We apply these to online
active learning yielding an extraordinarily fast active learning algorithm that
works even in the presence of adversarial noise.
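To make the invariance property concrete, here is a minimal sketch (ours, not the paper's code) contrasting the standard gradient-multiplication update with a closed-form importance-aware update for squared loss (p - y)^2 on a linear model; all names and constants are illustrative. The closed form comes from integrating the gradient-descent dynamics, which is why two updates with weight h coincide with one update with weight 2h.

```python
import numpy as np

def naive_update(w, x, y, h, eta):
    """Standard approach: multiply the squared-loss gradient by the
    importance weight h. Overshoots the label when eta * h is large."""
    p = w @ x
    return w - eta * h * 2.0 * (p - y) * x

def invariant_update(w, x, y, h, eta):
    """Importance-aware update for squared loss, obtained in closed form
    by integrating the gradient flow: the prediction decays toward the
    label y and never crosses it, no matter how large h is."""
    p = w @ x
    xx = x @ x
    return w + ((y - p) * (1.0 - np.exp(-2.0 * eta * h * xx)) / xx) * x

# Invariance check: updating twice with weight h matches updating once
# with weight 2h (up to floating-point error).
rng = np.random.default_rng(0)
w, x, y = rng.normal(size=3), rng.normal(size=3), 1.0
twice = invariant_update(invariant_update(w, x, y, 5.0, 0.1), x, y, 5.0, 0.1)
once = invariant_update(w, x, y, 10.0, 0.1)
assert np.allclose(twice, once)
```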
An Efficient Monte Carlo-based Probabilistic Time-Dependent Routing Calculation Targeting a Server-Side Car Navigation System
Incorporating the speed probability distribution into the route-planning
computation of car navigation systems yields more accurate and precise
responses. In this paper, we propose a novel approach for dynamically selecting
the number of samples used in the Monte Carlo simulation that solves the
Probabilistic Time-Dependent Routing (PTDR) problem, thus improving
computational efficiency. The proposed method proactively determines the
number of simulations needed to extract the travel-time estimate for each
specific request while respecting an error threshold on output quality. The
methodology requires only modest effort on the application-development side. We
adopted an aspect-oriented programming language (LARA) together with a flexible
dynamic autotuning library (mARGOt) to instrument the code and to take tuning
decisions on the number of samples, respectively, improving execution
efficiency. Experimental results demonstrate that the proposed adaptive
approach saves a large fraction of simulations (between 36% and 81%) with
respect to a static approach across different traffic situations, paths, and
error requirements. Given the negligible runtime overhead of the proposed
approach, this results in an execution-time speedup of between 1.5x and 5.1x.
At the infrastructure level, this speedup translates into a reduction of around
36% in the computing resources needed to support the whole navigation pipeline.
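The LARA/mARGOt toolchain is specific to the authors' deployment, but the underlying idea of proactively sizing a Monte Carlo run against an error threshold can be sketched as follows; the travel-time sampler, pilot size, and confidence level are hypothetical stand-ins, not the paper's implementation.

```python
import math
import random

Z95 = 1.96  # normal quantile for a 95% confidence target (assumed)

def sample_travel_time(path):
    """Hypothetical sampler: one travel time for a path, drawing each
    segment's speed from a lognormal speed distribution."""
    return sum(length / random.lognormvariate(mu, sigma)
               for length, mu, sigma in path)

def adaptive_travel_time(path, rel_err=0.05, pilot=100, n_max=100_000):
    """Run a pilot batch, then add just enough samples (CLT-based sizing)
    to keep the 95% confidence half-width below rel_err * mean."""
    samples = [sample_travel_time(path) for _ in range(pilot)]
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
    # Smallest n with Z95 * sqrt(var / n) <= rel_err * mean.
    n_needed = min(n_max, math.ceil(var * (Z95 / (rel_err * mean)) ** 2))
    samples += [sample_travel_time(path)
                for _ in range(max(0, n_needed - pilot))]
    return sum(samples) / len(samples), len(samples)

# Example: segments given as (length in metres, mu, sigma) of speed in m/s.
path = [(500.0, 2.5, 0.3), (1200.0, 3.0, 0.5), (800.0, 2.2, 0.4)]
estimate, n_used = adaptive_travel_time(path)
print(f"travel time ~ {estimate:.1f} s using {n_used} samples")
```

A static approach would always draw the worst-case sample count; sizing the run per request is what lets low-variance traffic situations finish early.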
Significance of log-periodic precursors to financial crashes
We clarify the status of log-periodicity associated with speculative bubbles
preceding financial crashes. In particular, we address Feigenbaum's [2001]
criticism and show how it can be rebutted. Feigenbaum's main result is as
follows: ``the hypothesis that the log-periodic component is present in the
data cannot be rejected at the 95% confidence level when using all the data
prior to the 1987 crash; however, it can be rejected by removing the last year
of data.'' (i.e., by removing 15% of the data closest to the critical point).
We stress that it is naive to expect a reliable analysis of a critical point
phenomenon, i.e., a power law divergence, after removing the most important
part of the data, that closest to the critical point. We also present the
history of log-periodicity in the present context, explaining its essential
features and why it may be
important. We offer an extension of the rational expectation bubble model for
general and arbitrary risk-aversion within the general stochastic discount
factor theory. We suggest guidelines for using log-periodicity and explain how
to develop and interpret statistical tests of log-periodicity. We discuss the
issue of prediction based on our results and the evidence of outliers in the
distribution of drawdowns. New statistical tests demonstrate that the 1% to 10%
quantile of the largest events of the population of drawdowns of the Nasdaq
Composite index and of the Dow Jones Industrial Average index belongs to a
distribution significantly different from the rest of the population. This
suggests that very large drawdowns result from an amplification mechanism that
may make them more predictable than smaller market moves.
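For reference, the log-periodic signature discussed above is conventionally modeled by the Johansen-Ledoit-Sornette log-periodic power law (notation ours, not taken from this abstract), in which an oscillation in ln(t_c - t) modulates the power-law divergence at the critical time t_c:

```latex
% p(t): (log-)price; t_c: critical time; 0 < m < 1: power-law exponent;
% omega: log-frequency; A, B, C, phi: fit parameters.
p(t) = A + B\,(t_c - t)^{m}
         + C\,(t_c - t)^{m} \cos\!\bigl(\omega \ln(t_c - t) - \phi\bigr),
\qquad t < t_c .
```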