Tradeoffs for nearest neighbors on the sphere
We consider tradeoffs between the query and update complexities for the
(approximate) nearest neighbor problem on the sphere, extending the recent
spherical filters to sparse regimes and generalizing the scheme and analysis to
account for different tradeoffs. In a nutshell, for the sparse regime the
tradeoff between the query complexity $n^{\rho_q}$ and update complexity
$n^{\rho_u}$ for data sets of size $n$ is given by the following equation in
terms of the approximation factor $c$ and the exponents $\rho_q$ and $\rho_u$:
$c^2 \sqrt{\rho_q} + (c^2 - 1) \sqrt{\rho_u} = \sqrt{2c^2 - 1}$.
For small $c = 1 + \epsilon$, minimizing the time for updates leads to a linear
space complexity at the cost of a query time complexity $n^{1 - 4\epsilon^2}$.
Balancing the query and update costs leads to optimal complexities
$n^{1/(2c^2 - 1)}$, matching bounds from [Andoni-Razenshteyn, 2015] and [Dubiner,
IEEE-TIT'10] and matching the asymptotic complexities of [Andoni-Razenshteyn,
STOC'15] and [Andoni-Indyk-Laarhoven-Razenshteyn-Schmidt, NIPS'15]. A
subpolynomial query time complexity $n^{o(1)}$ can be achieved at the cost of a
space complexity of the order $n^{1/(4\epsilon^2)}$, matching the bound
of [Andoni-Indyk-Patrascu, FOCS'06] and
[Panigrahy-Talwar-Wieder, FOCS'10] and improving upon results of
[Indyk-Motwani, STOC'98] and [Kushilevitz-Ostrovsky-Rabani, STOC'98].
For large $c$, minimizing the update complexity results in a query complexity
of $n^{2/c^2 + O(1/c^4)}$, improving upon the related exponent for large $c$ of
[Kapralov, PODS'15] by a factor $2$, and matching the bound
of [Panigrahy-Talwar-Wieder, FOCS'08]. Balancing the costs leads to optimal
complexities $n^{1/(2c^2 - 1)}$, while a minimum query time complexity $n^{o(1)}$ can be
achieved with update complexity $n^{2/c^2 + O(1/c^4)}$, improving upon the
previous best exponents of Kapralov by a factor $4$.
Comment: 16 pages, 1 table, 2 figures. Mostly subsumed by arXiv:1608.03580
[cs.DS] (along with arXiv:1605.02701 [cs.DS]).
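For concreteness, the sparse-regime tradeoff curve can be evaluated numerically. The sketch below assumes the equation has the form $c^2\sqrt{\rho_q} + (c^2-1)\sqrt{\rho_u} = \sqrt{2c^2-1}$ (our reading of the stated result); the function name `query_exponent` is introduced here for illustration and is not from the paper:

```python
import math

def query_exponent(c, rho_u):
    """Solve c^2*sqrt(rho_q) + (c^2 - 1)*sqrt(rho_u) = sqrt(2c^2 - 1)
    for the query exponent rho_q, given the update exponent rho_u.

    Assumes the sparse-regime tradeoff equation quoted above; returns
    None when rho_u lies beyond the endpoint of the curve.
    """
    lhs = math.sqrt(2 * c * c - 1) - (c * c - 1) * math.sqrt(rho_u)
    if lhs < 0:
        return None
    return (lhs / (c * c)) ** 2

c = 2.0
# Update-optimal endpoint: rho_u = 0 gives rho_q = (2c^2 - 1) / c^4.
assert abs(query_exponent(c, 0.0) - (2 * c * c - 1) / c ** 4) < 1e-12
# Balanced point: rho_q = rho_u = 1 / (2c^2 - 1).
rho = 1.0 / (2 * c * c - 1)
assert abs(query_exponent(c, rho) - rho) < 1e-12
```

The two assertions check the endpoints described in the abstract: minimizing updates and balancing both costs.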
Scalability and Total Recall with Fast CoveringLSH
Locality-sensitive hashing (LSH) has emerged as the dominant algorithmic
technique for similarity search with strong performance guarantees in
high-dimensional spaces. A drawback of traditional LSH schemes is that they may
have \emph{false negatives}, i.e., the recall is less than 100\%. This limits
the applicability of LSH in settings requiring precise performance guarantees.
Building on the recent theoretical "CoveringLSH" construction that eliminates
false negatives, we propose a fast and practical covering LSH scheme for
Hamming space called \emph{Fast CoveringLSH (fcLSH)}. Inheriting the design
benefits of CoveringLSH, our method avoids false negatives and always reports
all near neighbors. Compared to CoveringLSH we achieve an asymptotic
improvement to the hash function computation time from $O(dL)$ to
$O(d \log d + L)$, where $d$ is the dimensionality of the data and $L$ is
the number of hash tables. Our experiments on synthetic and real-world data
sets demonstrate that \emph{fcLSH} is comparable (and often superior) to
traditional hashing-based approaches for search radius up to 20 in
high-dimensional Hamming space.
Comment: Short version appears in Proceedings of CIKM 201
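The no-false-negatives guarantee that fcLSH inherits can be illustrated with a minimal sketch of a covering construction for Hamming distance $r$: map each coordinate to a random vector in $\{0,1\}^{r+1}$ and build one hash function per nonzero $v \in \{0,1\}^{r+1}$. For any pair within distance $r$, some $v$ is orthogonal (mod 2) to all differing coordinates, so that hash collides. This is our own toy sketch of the idea, not the paper's optimized fcLSH code:

```python
import itertools
import random

def build_covering_lsh(d, r, seed=0):
    """Covering LSH sketch for Hamming radius r: returns 2^(r+1) - 1
    coordinate masks; hash function v keeps the coordinates i with
    <m(i), v> = 1 (mod 2). Any two points within distance r agree on
    at least one hash, so there are no false negatives."""
    rng = random.Random(seed)
    m = [tuple(rng.randint(0, 1) for _ in range(r + 1)) for _ in range(d)]
    masks = []
    for v in itertools.product((0, 1), repeat=r + 1):
        if any(v):  # skip the all-zero vector
            masks.append([i for i in range(d)
                          if sum(a * b for a, b in zip(m[i], v)) % 2 == 1])
    return masks

def hashes(masks, x):
    """One hash value (a tuple of selected bits) per mask."""
    return [tuple(x[i] for i in mask) for mask in masks]

d, r = 32, 3
masks = build_covering_lsh(d, r)          # 2^(r+1) - 1 = 15 hash tables
x = [random.Random(1).randint(0, 1) for _ in range(d)]
y = list(x)
for i in random.Random(2).sample(range(d), r):
    y[i] ^= 1                             # flip r bits: distance exactly r
# Guaranteed collision in at least one table.
assert any(hx == hy for hx, hy in zip(hashes(masks, x), hashes(masks, y)))
```

The guarantee is deterministic: the set of $v$ orthogonal to the (at most $r$) differing coordinates' vectors is a subspace of dimension at least one, so it always contains a nonzero mask.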
A Parallel Monte Carlo Code for Simulating Collisional N-body Systems
We present a new parallel code for computing the dynamical evolution of
collisional N-body systems with up to N~10^7 particles. Our code is based on
the Hénon Monte Carlo method for solving the Fokker-Planck equation, and
makes assumptions of spherical symmetry and dynamical equilibrium. The
principal algorithmic developments involve optimized data structures, the
introduction of a parallel random number generation scheme, and a
parallel sorting algorithm required to find nearest neighbors for interactions
and to compute the gravitational potential. The new algorithms we introduce
along with our choice of decomposition scheme minimize communication costs and
ensure optimal distribution of data and workload among the processing units.
The implementation uses the Message Passing Interface (MPI) library for
communication, which makes it portable to many different supercomputing
architectures. We validate the code by calculating the evolution of clusters
with initial Plummer distribution functions up to core collapse with the number
of stars, N, spanning three orders of magnitude, from 10^5 to 10^7. We find
that our results are in good agreement with self-similar core-collapse
solutions, and the core collapse times generally agree with expectations from
the literature. Also, we observe good total energy conservation, within less
than 0.04% throughout all simulations. We analyze the performance of the code,
and demonstrate near-linear scaling of the runtime with the number of
processors up to 64 processors for N=10^5, 128 for N=10^6 and 256 for N=10^7.
The runtime saturates as more processors are added beyond these limits,
a characteristic of the parallel sorting algorithm. The
resulting maximum speedups we achieve are approximately 60x, 100x, and 220x,
respectively.
Comment: 53 pages, 13 figures, accepted for publication in ApJ Supplement
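The parallel-sorting bottleneck described above is characteristic of splitter-based schemes. As an illustration only (not the paper's actual algorithm), a serial sketch of sample sort shows the sampling, splitter-selection, and bucket-routing steps that such codes parallelize over MPI ranks:

```python
import bisect
import random

def sample_sort(data, p, oversample=8, seed=0):
    """Serial sketch of splitter-based sample sort: oversample the keys,
    choose p - 1 splitters, route every key to one of p buckets, and sort
    each bucket independently. In a parallel setting bucket i lives on
    processor i, and concatenating the sorted buckets is globally sorted."""
    rng = random.Random(seed)
    sample = sorted(rng.choices(data, k=p * oversample))
    splitters = [sample[i * oversample] for i in range(1, p)]
    buckets = [[] for _ in range(p)]
    for key in data:
        buckets[bisect.bisect_right(splitters, key)].append(key)
    return [sorted(bucket) for bucket in buckets]

data = random.Random(42).sample(range(10**6), 10**4)
buckets = sample_sort(data, p=8)
merged = [key for bucket in buckets for key in bucket]
assert merged == sorted(data)  # concatenated buckets are fully sorted
```

Oversampling keeps the buckets balanced in expectation, which is why runtime scaling eventually saturates when per-bucket work no longer dominates the routing and communication costs.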
Towards a unified linear kinetic transport model with the trace ion module for EIRENE
Linear kinetic Monte Carlo particle transport models are frequently employed
in fusion plasma simulations to quantify atomic and surface effects on the main
plasma flow dynamics. Separate codes are used for transport of neutral
particles (incl. radiation) and charged particles (trace impurity ions).
Integration of both modules into main plasma fluid solvers then provides, in
principle, self-consistent solutions. The required interfaces are far from
trivial, because rapid atomic processes, in particular in the edge region of
fusion plasmas, require either smoothing and resampling or frequent transfer
of particles from one Monte Carlo code into the other. We propose a different
scheme here, in which despite the inherently different mathematical form of
kinetic equations for ions and neutrals (e.g. Fokker-Planck vs. Boltzmann
collision integrals) both types of particle orbits can be integrated into one
single code. We show that the approximations and shortcomings of this "single
sourcing" concept (e.g., restriction to explicit ion drift orbit integration)
can be fully tolerable in a wide range of typical fusion edge plasma
conditions, and be overcompensated by the code-system simplicity, as well as by
inherently ensured consistency in geometry (one single numerical grid only) and
(the common) atomic and surface process modules.
Comment: 15 pages, 7 figures
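The "single sourcing" idea of pushing both particle types through one integrator can be caricatured in a toy one-dimensional loop: neutrals undergo rare, discrete velocity-resetting collisions (Boltzmann-like), while trace ions accumulate many small drift and diffusion kicks (Fokker-Planck-like). All species labels, rates, and coefficients below are invented for illustration and do not come from EIRENE:

```python
import math
import random

def advance(particle, dt, rng):
    """One explicit time step for either species in a single push loop.

    Neutrals: free flight plus a Poisson-sampled hard collision that
    resamples the velocity (Boltzmann-type collision integral).
    Trace ions: Langevin drag plus diffusion kick (Fokker-Planck-type).
    """
    x, v, kind = particle
    x += v * dt                           # free streaming for both species
    if kind == "neutral":
        nu = 0.5                          # hypothetical collision frequency
        if rng.random() < 1.0 - math.exp(-nu * dt):
            v = rng.gauss(0.0, 1.0)       # resample velocity at a collision
    else:                                 # trace ion
        gamma, diff = 1.0, 0.2            # hypothetical drag and diffusion
        v += -gamma * v * dt + math.sqrt(2 * diff * dt) * rng.gauss(0.0, 1.0)
    return (x, v, kind)

rng = random.Random(0)
particles = [(0.0, 1.0, "neutral"), (0.0, 1.0, "ion")]
for _ in range(1000):
    particles = [advance(p, 1e-2, rng) for p in particles]
```

The point of the sketch is structural: one grid, one loop, one random stream, with the species-specific physics confined to a single branch, mirroring the consistency argument made in the abstract.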
The Lazarus project: A pragmatic approach to binary black hole evolutions
We present a detailed description of techniques developed to combine 3D
numerical simulations and, subsequently, a single black hole close-limit
approximation. This method has made it possible to compute the first complete
waveforms covering the post-orbital dynamics of a binary black hole system with
the numerical simulation covering the essential non-linear interaction before
the close limit becomes applicable for the late time dynamics. To determine
when close-limit perturbation theory is applicable we apply a combination of
invariant a priori estimates and a posteriori consistency checks of the
robustness of our results against exchange of linear and non-linear treatments
near the interface. Once the numerically modeled binary system reaches a regime
that can be treated as perturbations of the Kerr spacetime, we must
approximately relate the numerical coordinates to the perturbative background
coordinates. We also perform a rotation of a numerically defined tetrad to
asymptotically reproduce the tetrad required in the perturbative treatment. We
can then produce numerical Cauchy data for the close-limit evolution in the
form of the Weyl scalar $\psi_4$ and its time derivative $\partial_t \psi_4$,
with both objects being first-order coordinate and tetrad invariant. The
Teukolsky equation in Boyer-Lindquist coordinates is adopted to further
continue the evolution. To illustrate the application of these techniques we
evolve a single Kerr hole and compute the spurious radiation as a measure of
the error of the whole procedure. We also briefly discuss the extension of the
project to make use of improved full numerical evolutions and outline the
approach to a full understanding of astrophysical black hole binary systems
which we can now pursue.
Comment: New typos found in the version that appeared in PRD (mostly found and
collected by Bernard Kelly).
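For reference, the Weyl scalar used as Cauchy data above is the standard Newman-Penrose quantity; the definition below is reproduced from the usual convention for convenience and is not quoted from the paper:

```latex
\[
  \psi_4 = -\, C_{\alpha\beta\gamma\delta}\, n^{\alpha}\, \bar m^{\beta}\,
           n^{\gamma}\, \bar m^{\delta},
\]
```

where $C_{\alpha\beta\gamma\delta}$ is the Weyl tensor, $n^{\alpha}$ the ingoing null leg of the tetrad, and $\bar m^{\alpha}$ the complex conjugate of its complex spatial leg; $\psi_4$ encodes the outgoing gravitational radiation at large radius.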
Workshop on gravitational waves
In this article we summarise the proceedings of the Workshop on Gravitational
Waves held during ICGC-95. In the first part we present the discussions on 3PN
calculations (L. Blanchet, P. Jaranowski), black hole perturbation theory (M.
Sasaki, J. Pullin), numerical relativity (E. Seidel), data analysis (B.S.
Sathyaprakash), detection of gravitational waves from pulsars (S. Dhurandhar),
and the limit on rotation of relativistic stars (J. Friedman). In the second
part we briefly discuss the contributed papers which were mainly on detectors
and detection techniques of gravitational waves.
Comment: 18 pages, kluwer.sty, no figures
Systematics of pion emission in heavy ion collisions in the 1A GeV regime
Using the large acceptance apparatus FOPI, we study pion emission in the
reactions (energies in GeV/nucleon are given in parentheses): 40Ca+40Ca (0.4,
0.6, 0.8, 1.0, 1.5, 1.93), 96Ru+96Ru (0.4, 1.0, 1.5), 96Zr+96Zr (0.4, 1.0,
1.5), 197Au+197Au (0.4, 0.6, 0.8, 1.0, 1.2, 1.5). The observables include
longitudinal and transverse rapidity distributions and stopping, polar
anisotropies, pion multiplicities, transverse momentum spectra, ratios of
average transverse momenta and of yields for positively and negatively
charged pions, directed flow, and elliptic flow. The data are compared to earlier data
where possible and to transport model simulations.
Comment: 56 pages, 42 figures; to be published in Nuclear Physics