Optimal randomized incremental construction for guaranteed logarithmic planar point location
Given a planar map of segments in which we wish to efficiently locate
points, we present the first randomized incremental construction of the
well-known trapezoidal-map search structure that requires only expected
O(n log n) preprocessing time while deterministically guaranteeing worst-case
linear storage space and worst-case logarithmic query time. This settles a
long-standing open problem; the best previously known construction time for
such a structure, which is based on a directed acyclic graph, the so-called
history DAG, and which offers the above worst-case space and query-time
guarantees, was expected O(n log^2 n). The result is based on a deeper
understanding of the structure of the history DAG, its depth in relation to
the length of its longest search path, and its correspondence to the
trapezoidal search tree. Our results immediately extend to planar maps
induced by finite collections of pairwise interior-disjoint well-behaved
curves.
Comment: The article significantly extends the theoretical aspects of the work
presented in http://arxiv.org/abs/1205.543
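To make the search structure concrete, the following is a minimal sketch of how a query descends the trapezoidal-map DAG: inner x-nodes compare the query against an endpoint's x-coordinate, inner y-nodes test which side of a segment the query lies on, and leaves store trapezoids. The node layout and names are our own illustration, not the paper's data structure.

```python
# Minimal sketch of point location in a trapezoidal-map search DAG.
# Node layout and names are illustrative, not the paper's code.

class XNode:
    """Inner node: compare the query x-coordinate against an endpoint."""
    def __init__(self, x, left, right):
        self.x, self.left, self.right = x, left, right

class YNode:
    """Inner node: test whether the query lies above a segment."""
    def __init__(self, seg, above, below):
        self.seg, self.above, self.below = seg, above, below

class Leaf:
    """Leaf: a trapezoid of the decomposition."""
    def __init__(self, trapezoid):
        self.trapezoid = trapezoid

def is_above(seg, p):
    """True if p lies above seg; seg = ((x1, y1), (x2, y2)) is assumed
    directed from its left endpoint to its right endpoint."""
    (x1, y1), (x2, y2) = seg
    return (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) > 0

def locate(root, p):
    """Walk from the root to the trapezoid containing p. The query cost
    is the length of this path, which the paper bounds by O(log n)."""
    node = root
    while not isinstance(node, Leaf):
        if isinstance(node, XNode):
            node = node.left if p[0] < node.x else node.right
        else:
            node = node.above if is_above(node.seg, p) else node.below
    return node.trapezoid
```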
Nonparametric causal effects based on incremental propensity score interventions
Most work in causal inference considers deterministic interventions that set
each unit's treatment to some fixed value. However, under positivity violations
these interventions can lead to non-identification, inefficiency, and effects
with little practical relevance. Further, corresponding effects in longitudinal
studies are highly sensitive to the curse of dimensionality, resulting in
widespread use of unrealistic parametric models. We propose a novel solution to
these problems: incremental interventions that shift propensity score values
rather than set treatments to fixed values. Incremental interventions have
several crucial advantages. First, they avoid positivity assumptions entirely.
Second, they require no parametric assumptions and yet still admit a simple
characterization of longitudinal effects, independent of the number of
timepoints. For example, they allow longitudinal effects to be visualized with
a single curve instead of lists of coefficients. After characterizing these
incremental interventions and giving identifying conditions for corresponding
effects, we also develop general efficiency theory, propose efficient
nonparametric estimators that can attain fast convergence rates even when
incorporating flexible machine learning, and propose a bootstrap-based
confidence band and simultaneous test of no treatment effect. Finally, we
explore finite-sample performance via simulation, and apply the methods to
study time-varying sociological effects of incarceration on entry into
marriage.
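For intuition, the incremental intervention in this line of work replaces the propensity score pi(x) with its delta-odds-shifted version q(x; delta) = delta*pi(x) / (delta*pi(x) + 1 - pi(x)), where delta > 0 multiplies each unit's odds of treatment. A minimal sketch, with variable names of our choosing:

```python
import numpy as np

def shifted_propensity(pi, delta):
    """Incremental intervention: multiply the odds of treatment by delta.
    pi: array of propensity scores in (0, 1); delta > 0.
    Returns the intervened treatment probabilities q(x; delta)."""
    return delta * pi / (delta * pi + 1.0 - pi)

# Example: delta = 2 doubles each unit's odds of treatment.
pi = np.array([0.1, 0.5, 0.9])
print(shifted_propensity(pi, 2.0))  # -> [0.1818..., 0.6666..., 0.9473...]

# As pi -> 0 or pi -> 1, q stays 0 or 1: units that can never (or must
# always) be treated are left unchanged, which is why no positivity
# assumption is needed.
```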
Improved Implementation of Point Location in General Two-Dimensional Subdivisions
We present a major revamp of the point-location data structure for general
two-dimensional subdivisions via randomized incremental construction,
implemented in CGAL, the Computational Geometry Algorithms Library. We can now
guarantee that the constructed directed acyclic graph G is of linear size and
provides logarithmic query time. Via the construction of the Voronoi diagram
for a given point set S of size n, this also enables nearest-neighbor queries
in guaranteed O(log n) time. Another major innovation is the support of general
unbounded subdivisions as well as subdivisions of two-dimensional parametric
surfaces such as spheres, tori, and cylinders. The implementation is exact,
complete, and general, i.e., it can also handle non-linear subdivisions. Like
the previous version, the data structure supports modifications of the
subdivision, such as insertions and deletions of edges, after the initial
preprocessing. A major challenge is to retain the expected O(n log n)
preprocessing time while providing the above (deterministic) space and
query-time guarantees. We describe an efficient preprocessing algorithm, which
explicitly verifies the length L of the longest query path in O(n log n) time.
However, instead of using L, our implementation is based on the depth D of G.
Although we prove that the worst-case ratio of D to L is Theta(n/log n), we
conjecture, based on our experimental results, that this solution achieves
expected O(n log n) preprocessing time.
Comment: 21 pages
A growth path for deep space communications
Increased Deep Space Network (DSN) receiving capability far beyond that now available for Voyager is achievable through a mix of increased antenna aperture and increased frequency of operation. In this note a sequence of options is considered: adding midsized antennas for arraying with the existing network at X-band; converting to Ka-band and adding array elements; augmenting the DSN with an orbiting Ka-band station; and augmenting the DSN with an optical receiving capability, either on the ground or in space. Costs of these options are compared as means of achieving significantly increased receiving capability. The envelope of lowest costs projects a possible path for moving from X-band to Ka-band and thence to optical frequencies, and potentially for moving from ground-based to space-based apertures. The move to Ka-band is clearly of value now, with development of optical communications technology a good investment for the future.
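The aperture/frequency trade can be quantified with the standard aperture-gain relation G = eta * (pi * D / lambda)^2. A small sketch, assuming representative DSN downlink frequencies of 8.4 GHz (X-band) and 32 GHz (Ka-band); these figures are our assumption, not quoted from the note:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def antenna_gain_db(diameter_m, freq_hz, efficiency=0.6):
    """Gain of a circular aperture: G = eta * (pi * D / lambda)^2."""
    lam = C / freq_hz
    return 10 * math.log10(efficiency * (math.pi * diameter_m / lam) ** 2)

# Same 34 m dish at X-band vs Ka-band (representative frequencies).
x_band, ka_band = 8.4e9, 32e9
gain_x = antenna_gain_db(34, x_band)
gain_ka = antenna_gain_db(34, ka_band)
print(f"X-band: {gain_x:.1f} dBi, Ka-band: {gain_ka:.1f} dBi")
print(f"Improvement: {gain_ka - gain_x:.1f} dB")  # ~11.6 dB = 20*log10(32/8.4)
```

The same aperture thus buys roughly an order of magnitude more gain at Ka-band, which is the arithmetic behind moving up in frequency rather than only adding dishes.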
Single- and Multiple-Shell Uniform Sampling Schemes for Diffusion MRI Using Spherical Codes
In diffusion MRI (dMRI), a good sampling scheme is important for efficient
acquisition and robust reconstruction. The diffusion-weighted signal is
normally acquired on single or multiple shells in q-space. Signal samples are
typically
distributed uniformly on different shells to make them invariant to the
orientation of structures within tissue, or the laboratory coordinate frame.
The Electrostatic Energy Minimization (EEM) method, originally proposed for
single-shell sampling schemes in dMRI, was recently generalized to multi-shell
schemes, called Generalized EEM (GEEM). GEEM has been successfully used in the
Human Connectome Project (HCP). However, EEM does not directly address the goal
of optimal sampling, i.e., achieving large angular separation between sampling
points. In this paper, we propose a more natural formulation, called Spherical
Code (SC), to directly maximize the minimal angle between different samples in
single or multiple shells. We consider not only continuous problems to design
single or multiple shell sampling schemes, but also discrete problems to
uniformly extract sub-sampled schemes from an existing single or multiple shell
scheme, and to order samples in an existing scheme. We propose five algorithms
to solve the above problems, including an incremental SC (ISC), a sophisticated
greedy algorithm called Iterative Maximum Overlap Construction (IMOC), a 1-Opt
greedy method, a Mixed Integer Linear Programming (MILP) method, and a
Constrained Non-Linear Optimization (CNLO) method. To our knowledge, this is
the first work to use the SC formulation for single or multiple shell sampling
schemes in dMRI. Experimental results indicate that SC methods obtain larger
angular separation and better rotational invariance than the state-of-the-art
EEM and GEEM. The related codes and a tutorial have been released in DMRITool.
Comment: Accepted by IEEE Transactions on Medical Imaging. Codes have been
released in dmritool:
https://diffusionmritool.github.io/tutorial_qspacesampling.htm
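The spherical-code objective is easy to state in code: dMRI signals are antipodally symmetric, so u and -u are the same direction and separation between unit vectors is measured as arccos(|u.v|). Below is a minimal sketch that scores a single-shell scheme by its minimal angle; the function name is ours, not DMRITool's.

```python
import numpy as np

def min_angular_separation(dirs):
    """Minimal pairwise angle (degrees) of a single-shell scheme.
    dirs: (N, 3) array of sampling directions. Antipodal symmetry is
    handled by taking |u.v| before converting to an angle."""
    dirs = np.asarray(dirs, dtype=float)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    gram = np.abs(dirs @ dirs.T)
    np.fill_diagonal(gram, 0.0)  # ignore self-pairs
    return np.degrees(np.arccos(np.clip(gram.max(), -1.0, 1.0)))

# Toy check: the three coordinate axes are mutually 90 degrees apart.
print(min_angular_separation(np.eye(3)))  # -> 90.0
```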
Obstacle-aware Adaptive Informative Path Planning for UAV-based Target Search
Target search with unmanned aerial vehicles (UAVs) is a problem relevant to
many scenarios, e.g., search and rescue (SaR). However, a key challenge is
planning paths for maximal search efficiency given flight time constraints. To
address this, we propose the Obstacle-aware Adaptive Informative Path Planning
(OA-IPP) algorithm for target search in cluttered environments using UAVs. Our
approach leverages a layered planning strategy using a Gaussian Process
(GP)-based model of target occupancy to generate informative paths in
continuous 3D space. Within this framework, we introduce an adaptive replanning
scheme which allows us to trade off between information gain, field coverage,
sensor performance, and collision avoidance for efficient target detection.
Extensive simulations show that our OA-IPP method performs better than
state-of-the-art planners, and we demonstrate its application in a realistic
urban SaR scenario.
Comment: Paper accepted for the International Conference on Robotics and
Automation (ICRA-2019), to be held in Montreal, Canada
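The trade-off the abstract describes can be pictured with a toy utility for scoring one candidate viewpoint, balancing expected information gain against coverage and a hard collision constraint. The weights and the GP interface below are invented for illustration; they are not the OA-IPP objective.

```python
import numpy as np

def viewpoint_utility(gp_variance, in_collision, coverage_bonus,
                      w_info=1.0, w_cov=0.3):
    """Toy score for one candidate viewpoint (illustrative weights).
    gp_variance: predictive variances of a GP target-occupancy model at
        the map cells this view would observe (higher = more to learn).
    in_collision: True if the pose intersects an obstacle.
    coverage_bonus: reward for visiting unseen parts of the field."""
    if in_collision:
        return -np.inf  # hard obstacle constraint
    return w_info * np.sum(gp_variance) + w_cov * coverage_bonus

# Pick the best of several candidate viewpoints.
candidates = [
    (np.array([0.2, 0.1]), False, 1.0),
    (np.array([0.9, 0.8]), False, 0.0),
    (np.array([1.0, 1.0]), True, 2.0),  # rejected: collides
]
best = max(candidates, key=lambda c: viewpoint_utility(*c))
```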
Scalable Recollections for Continual Lifelong Learning
Given the recent success of Deep Learning applied to a variety of single
tasks, it is natural to consider more human-realistic settings. Perhaps the
most difficult of these settings is that of continual lifelong learning, where
the model must learn online over a continuous stream of non-stationary data. A
successful continual lifelong learning system must have three key capabilities:
it must learn and adapt over time, it must not forget what it has learned, and
it must be efficient in both training time and memory. Recent techniques have
focused their efforts primarily on the first two capabilities while questions
of efficiency remain largely unexplored. In this paper, we consider the problem
of efficient and effective storage of experiences over very large time-frames.
In particular we consider the case where typical experiences are O(n) bits and
memories are limited to O(k) bits for k << n. We present a novel scalable
architecture and training algorithm in this challenging domain and provide an
extensive evaluation of its performance. Our results show that we can achieve
considerable gains on top of state-of-the-art methods such as GEM.
Comment: AAAI 201
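One way to picture the O(k)-bit budget is a replay memory that stores compressed codes rather than raw experiences. The sketch below is a toy illustration only; the encoder and decoder are placeholders (the paper's architecture is not reproduced here).

```python
import numpy as np

class CompressedReplayBuffer:
    """Toy memory holding k-bit codes for n-bit experiences (k << n).
    encode/decode stand in for a learned compressor; both are
    placeholders, not the paper's method."""

    def __init__(self, capacity, encode, decode):
        self.codes, self.capacity = [], capacity
        self.encode, self.decode = encode, decode

    def store(self, experience):
        self.codes.append(self.encode(experience))
        if len(self.codes) > self.capacity:  # FIFO eviction
            self.codes.pop(0)

    def sample(self, batch_size, rng=np.random):
        idx = rng.choice(len(self.codes), size=batch_size, replace=False)
        return [self.decode(self.codes[i]) for i in idx]

# Placeholder 8x compressor: keep every 8th value, repeat on decode.
buf = CompressedReplayBuffer(
    capacity=1000,
    encode=lambda x: x[::8],
    decode=lambda c: np.repeat(c, 8),
)
buf.store(np.arange(64.0))
print(buf.sample(1)[0].shape)  # -> (64,)
```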
Predicting the Next Best View for 3D Mesh Refinement
3D reconstruction is a core task in many applications such as robot
navigation or site inspection. Finding the best poses from which to capture
part of the scene is one of the most challenging of these topics, and it goes
under the name of Next
Best View. Recently, many volumetric methods have been proposed; they choose
the Next Best View by reasoning over a 3D voxelized space and by finding which
pose minimizes the uncertainty decoded into the voxels. Such methods are
effective, but they do not scale well since the underlying representation
requires a huge amount of memory. In this paper we propose a novel mesh-based
approach which focuses on the worst reconstructed region of the environment
mesh. We define a photo-consistency index to evaluate the 3D mesh accuracy, and
an energy function over the worst regions of the mesh which takes into account
the mutual parallax with respect to the previous cameras, the angle of
incidence of the viewing ray to the surface and the visibility of the region.
We test our approach over a well known dataset and achieve state-of-the-art
results.
Comment: 13 pages, 5 figures, to be published in IAS-1
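A rough sketch of the kind of per-view energy the abstract describes, combining parallax with respect to previous cameras, the incidence angle of the viewing ray, and visibility of the region. The functional form and weights below are guesses for illustration, not the paper's definition.

```python
import numpy as np

def view_energy(view_dir, surface_normal, prev_view_dirs, visible,
                w_par=1.0, w_inc=1.0):
    """Toy energy for one candidate view of a poorly reconstructed mesh
    region. All direction vectors are unit-length; the functional form
    is illustrative only."""
    if not visible:
        return 0.0  # region not seen from this pose: no value
    # Parallax: prefer views well separated from all previous cameras.
    parallax = min(np.arccos(np.clip(view_dir @ d, -1.0, 1.0))
                   for d in prev_view_dirs)
    # Incidence: prefer viewing rays aligned with the surface normal.
    incidence = abs(view_dir @ surface_normal)
    return w_par * parallax + w_inc * incidence

# Example: a fronto-parallel view orthogonal to the only previous one.
n = np.array([0.0, 0.0, 1.0])
print(view_energy(n, n, [np.array([1.0, 0.0, 0.0])], visible=True))
```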
Instance-based prediction of real-valued attributes
Instance-based representations have been applied to numerous classification tasks with a fair amount of success. These tasks predict a symbolic class based on observed attributes. This paper presents a method for predicting a numeric value based on observed attributes. We prove that if the numeric values are generated by continuous functions with bounded slope, then the predicted values are accurate approximations of the actual values. We demonstrate the utility of this approach by comparing it with standard approaches for value prediction. The approach requires no background knowledge.
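The bounded-slope assumption is the Lipschitz condition |f(x) - f(y)| <= L * ||x - y||, under which nearby instances have nearby values, so averaging stored neighbors approximates f. A minimal sketch of such a predictor (parameter names are ours):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Instance-based prediction of a real-valued attribute: average
    the stored values of the k nearest instances. If the target
    function has bounded slope, those values are close to f(x)."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    return y_train[nearest].mean()

# Toy data from f(x) = 2x, which has bounded slope (L = 2).
X = np.linspace(0, 1, 50).reshape(-1, 1)
y = 2 * X.ravel()
print(knn_predict(X, y, np.array([0.5])))  # close to 1.0
```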