Sequential Cauchy Combination Test for Multiple Testing Problems with Financial Applications
We introduce a simple tool to control for false discoveries and identify
individual signals in scenarios involving many tests, dependent test
statistics, and potentially sparse signals. The tool applies the Cauchy
combination test recursively on a sequence of expanding subsets of p-values
and is referred to as the sequential Cauchy combination test. While the
original Cauchy combination test aims to make a global statement about a set of
null hypotheses by summing transformed p-values, our sequential version
determines which p-values trigger the rejection of the global null. The
sequential test achieves strong familywise error rate control, exhibits less
conservatism compared to existing controlling procedures when dealing with
dependent test statistics, and provides a power boost. As illustrations, we
revisit two well-known large-scale multiple testing problems in finance for
which the test statistics have either serial dependence or cross-sectional
dependence, namely monitoring drift bursts in asset prices and searching for
assets with a nonzero alpha. In both applications, the sequential Cauchy
combination test proves to be a preferable alternative. It overcomes many of
the drawbacks inherent to inequality-based controlling procedures, extreme
value approaches, resampling and screening methods, and it improves the power
in simulations, leading to distinct empirical outcomes.

Comment: 35 pages, 6 figures
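The recursion described above can be sketched in a few lines. The following is a hypothetical toy implementation, assuming equal weights, the standard Cauchy transform tan((0.5 - p)*pi), and a simple expand-until-rejection rule over sorted p-values; the paper's actual procedure (weights, stopping rule, and the FWER guarantee) may differ in detail.

```python
import math

def cauchy_combination_pvalue(pvals):
    """Global p-value of the Cauchy combination test: average the
    Cauchy-transformed p-values and map the result back through the
    standard Cauchy tail probability."""
    t = sum(math.tan((0.5 - p) * math.pi) for p in pvals) / len(pvals)
    return 0.5 - math.atan(t) / math.pi

def sequential_cauchy(pvals, alpha=0.05):
    """Hypothetical sequential variant: grow the subset from the largest
    p-value toward the smallest; the first subset whose combined p-value
    falls below alpha flags the just-added p-value and all smaller ones."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i], reverse=True)
    for k in range(1, len(pvals) + 1):
        if cauchy_combination_pvalue([pvals[i] for i in order[:k]]) <= alpha:
            return sorted(order[k - 1:])  # trigger plus all smaller p-values
    return []

# three strong signals hidden among ten tests
pvals = [1e-6, 1e-5, 1e-4, 0.3, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95]
hits = sequential_cauchy(pvals)  # indices of flagged hypotheses
```

Because the Cauchy transform has heavy tails, a single tiny p-value dominates the average, so expanding from the least significant side cleanly isolates the step at which signals enter.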
Automated sequence and motion planning for robotic spatial extrusion of 3D trusses
While robotic spatial extrusion has demonstrated a new and efficient means to
fabricate 3D truss structures in architectural scale, a major challenge remains
in automatically planning extrusion sequence and robotic motion for trusses
with unconstrained topologies. This paper presents the first attempt in the
field to rigorously formulate the extrusion sequence and motion planning (SAMP)
problem, using a CSP encoding. Furthermore, this research proposes a new
hierarchical planning framework to solve the extrusion SAMP problems that
usually have a long planning horizon and 3D configuration complexity. By
decoupling sequence and motion planning, the planning framework is able to
efficiently solve the extrusion sequence, end-effector poses, joint
configurations, and transition trajectories for spatial trusses with
nonstandard topologies. This paper also presents the first detailed
computational data revealing the runtime bottleneck in solving SAMP problems,
providing insight and a comparison baseline for future algorithmic development. Together
with the algorithmic results, this paper also presents an open-source and
modularized software implementation called Choreo that is machine-agnostic. To
demonstrate the power of this algorithmic framework, three case studies,
including real fabrication and simulation results, are presented.

Comment: 24 pages, 16 figures
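The decoupling described above can be illustrated at the sequence level alone. Below is a hypothetical, heavily simplified sketch: a backtracking search over extrusion orders with a bare connectivity constraint. A real planner such as Choreo also checks partial-structure stiffness and motion feasibility; the function and constraint here are illustrative assumptions, not Choreo's API.

```python
def plan_sequence(elements, grounded):
    """Backtracking search for an extrusion order. elements: list of
    (node_a, node_b) truss bars; grounded: set of support nodes.
    Constraint: a bar may be extruded only if at least one endpoint is
    grounded or already part of the printed structure."""
    printed = set(grounded)
    order, remaining = [], list(elements)

    def backtrack():
        if not remaining:
            return True
        for elem in list(remaining):
            a, b = elem
            if a in printed or b in printed:      # connectivity constraint
                remaining.remove(elem)
                newly = {n for n in (a, b) if n not in printed}
                printed.update((a, b))
                order.append(elem)
                if backtrack():
                    return True
                order.pop()                        # undo and try another bar
                printed.difference_update(newly)
                remaining.append(elem)
        return False

    return order if backtrack() else None

# tiny truss: supports 0 and 1, apex 2, and a bar reaching a new node 3
seq = plan_sequence([(2, 3), (0, 2), (1, 2), (0, 1)], grounded={0, 1})
```

The bar (2, 3) is infeasible at the start and only becomes printable once node 2 has been reached, which is exactly the kind of ordering constraint the CSP encoding captures.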
Video Logo Retrieval Based on Local Features
Estimation of the frequency and duration of logos in videos is important and
challenging in the advertisement industry as a way of estimating the impact of
ad purchases. Since logos occupy only a small area in the videos, the popular
methods of image retrieval could fail. This paper develops an algorithm called
Video Logo Retrieval (VLR), an image-to-video retrieval algorithm based on
the spatial distribution of local image descriptors, which is used to measure
the distance between the query image (the logo) and a collection of video
frames.
VLR uses local features to overcome the weakness of global feature-based models
such as convolutional neural networks (CNN). Meanwhile, VLR is flexible and
does not require training after setting some hyper-parameters. The performance
of VLR is evaluated on two challenging open benchmark tasks (SoccerNet and
Stanford I2V), and compared with other state-of-the-art logo retrieval or
detection algorithms. Overall, VLR shows significantly higher accuracy compared
with the existing methods.

Comment: Accepted by ICIP 20. Contact author: Bochen Guan ([email protected])
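The local-feature idea can be illustrated with a toy scoring function. This hypothetical sketch matches each query descriptor to a frame's descriptors with Lowe's ratio test and uses the surviving fraction as the frame score; it runs on synthetic descriptors and omits the spatial-distribution distance that VLR actually relies on.

```python
import numpy as np

def frame_score(query_desc, frame_desc, ratio=0.75):
    """Fraction of query descriptors whose nearest neighbour in the frame
    passes Lowe's ratio test against the second-nearest neighbour."""
    d = np.linalg.norm(query_desc[:, None, :] - frame_desc[None, :, :], axis=2)
    d.sort(axis=1)                    # per query descriptor: nearest first
    return float((d[:, 0] < ratio * d[:, 1]).mean())

rng = np.random.default_rng(0)
logo = rng.normal(size=(20, 32))      # 20 synthetic 32-d local descriptors
# a frame containing a slightly corrupted copy of the logo, plus clutter
frame_with_logo = np.vstack([logo + 0.01 * rng.normal(size=logo.shape),
                             rng.normal(size=(50, 32))])
frame_without = rng.normal(size=(70, 32))  # clutter only

s_hit = frame_score(logo, frame_with_logo)
s_miss = frame_score(logo, frame_without)
```

Even though the logo descriptors occupy a small fraction of the frame's descriptor set, the ratio test keeps the match score high for the frame that contains the logo and low for the one that does not.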
Searching for a trail of evidence in a maze
Consider a graph with a set of vertices and oriented edges connecting pairs
of vertices. Each vertex is associated with a random variable and these are
assumed to be independent. In this setting, suppose we wish to solve the
following hypothesis testing problem: under the null, the random variables have
common distribution N(0,1) while under the alternative, there is an unknown
path along which random variables have distribution N(mu,1), mu > 0, and
distribution N(0,1) away from it. For which values of the mean shift mu can
one reliably detect, and for which values is this impossible? Consider, for
example, the usual regular lattice with vertices of the form (i,j), where
0 <= i, -i <= j <= i and i + j is even, and oriented edges
(i,j) -> (i+1, j+s), where s = +/-1. We show that for paths of length m
starting at the origin, the hypotheses become distinguishable (in a minimax
sense) if mu >> 1/sqrt(log m), while they are not if mu << 1/log m. We derive
equivalent results in a Bayesian setting where one assumes that all paths are
equally likely; there, the asymptotic threshold is of order m^(-1/4). We
obtain corresponding results for trees (where the threshold is of order 1 and
independent of the size of the tree), for distributions other than the Gaussian
and for other graphs. The concept of the predictability profile, first
introduced by Benjamini, Pemantle and Peres, plays a crucial role in our
analysis.

Comment: Published at http://dx.doi.org/10.1214/07-AOS526 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
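The detection problem above invites a scan statistic: maximize the sum of observations over all admissible paths, which is cheap via dynamic programming on the lattice levels. The sketch below is a hypothetical illustration of that statistic, not the paper's analysis.

```python
import random

def max_path_sum(x, m):
    """Maximum of the sum of x over all directed paths of length m starting
    at (0, 0), where (i, j) steps to (i + 1, j - 1) or (i + 1, j + 1).
    Computed level by level by dynamic programming."""
    best = {(0, 0): x[(0, 0)]}
    for i in range(1, m):
        best = {
            (i, j): x[(i, j)] + max(best[(i - 1, p)]
                                    for p in (j - 1, j + 1)
                                    if (i - 1, p) in best)
            for j in range(-i, i + 1, 2)
        }
    return max(best.values())

random.seed(1)
m = 12
null = {(i, j): random.gauss(0, 1)
        for i in range(m) for j in range(-i, i + 1, 2)}
alt = dict(null)
for i in range(m):          # plant a mean shift along the path j = i
    alt[(i, i)] += 2.0

score_null = max_path_sum(null, m)
score_alt = max_path_sum(alt, m)
```

With a planted mean shift the scan statistic increases, which is the raw signal a detection threshold would be calibrated against; the interesting regime in the paper is when the shift is too small for this separation to be obvious.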
Speeding up neighborhood search in local Gaussian process prediction
Recent implementations of local approximate Gaussian process models have
pushed computational boundaries for non-linear, non-parametric prediction
problems, particularly when deployed as emulators for computer experiments.
Their flavor of spatially independent computation accommodates massive
parallelization, meaning that they can handle designs two or more orders of
magnitude larger than previously. However, accomplishing that feat can still
require massive supercomputing resources. Here we aim to ease that burden. We
study how predictive variance is reduced as local designs are built up for
prediction. We then observe how the exhaustive and discrete nature of an
important search subroutine involved in building such local designs may be
overly conservative. Rather, we suggest that searching the space radially,
i.e., continuously along rays emanating from the predictive location of
interest, is a far thriftier alternative. Our empirical work demonstrates that
ray-based search yields predictors with accuracy comparable to exhaustive
search, but in a fraction of the time - bringing a supercomputer implementation
back onto the desktop.

Comment: 24 pages, 5 figures, 4 tables
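The contrast between exhaustive and ray-based search can be mocked up as follows. This hypothetical sketch replaces the paper's variance-reduction criterion with a plain distance-to-location criterion, purely to show the mechanics and the cost gap: exhaustive search scores every candidate, while ray search scores only a handful of candidates snapped to probes along rays from the predictive location.

```python
import numpy as np

def exhaustive_pick(cands, xstar):
    """Score every candidate (one criterion call each) and keep the best."""
    scores = np.linalg.norm(cands - xstar, axis=1)
    return int(np.argmin(scores)), len(cands)

def ray_pick(cands, xstar, n_rays=8, n_steps=5, rmax=1.0):
    """Probe points along rays emanating from xstar, snap each probe to its
    nearest candidate, and score only those snapped candidates."""
    best_idx, best_score, evals = None, None, 0
    for a in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        u = np.array([np.cos(a), np.sin(a)])
        for r in np.linspace(rmax / n_steps, rmax, n_steps):
            probe = xstar + r * u
            idx = int(np.argmin(np.linalg.norm(cands - probe, axis=1)))
            score = np.linalg.norm(cands[idx] - xstar)   # criterion call
            evals += 1
            if best_score is None or score < best_score:
                best_idx, best_score = idx, score
    return best_idx, evals

rng = np.random.default_rng(3)
cands = rng.uniform(-1.0, 1.0, size=(5000, 2))   # a large candidate design
xstar = np.zeros(2)                               # predictive location
i_ex, cost_ex = exhaustive_pick(cands, xstar)
i_ray, cost_ray = ray_pick(cands, xstar)
```

Here the ray search issues 40 criterion calls instead of 5000 and returns a candidate of comparable quality; in the actual method the criterion call (a predictive-variance reduction) is the expensive part, which is where the savings come from.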