
    A Static Optimality Transformation with Applications to Planar Point Location

    Over the last decade, several data structures have been proposed that, given a planar subdivision and a probability distribution over the plane, answer point-location queries in a way that is fine-tuned to the distribution. All of these methods suffer from the requirement that the query distribution be known in advance. We present a new data structure for point-location queries in planar triangulations. Our structure is asymptotically as fast as the optimal structures, but it requires no prior information about the queries. This is a 2D analogue of the jump from Knuth's optimum binary search trees (discovered in 1971) to the splay trees of Sleator and Tarjan in 1985. While the former need to know the query distribution, the latter are statically optimal: they adapt to the query sequence and achieve the same asymptotic performance as an optimum static structure, without any additional information. (Comment: 13 pages, 1 figure; a preliminary version appeared at SoCG 201)
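    The 1D baseline that the paper generalizes is worth recalling concretely: splay trees match the asymptotic cost of Knuth's optimum static trees with no knowledge of the query distribution, because every access restructures the tree. Below is a minimal top-down splay sketch in Python; it illustrates that 1D mechanism only, not the paper's planar point-location structure.

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def splay(root, key):
    """Top-down splay (Sleator-Tarjan): bring the node nearest `key` to the root."""
    if root is None:
        return None
    header = Node(None)                # temporary holder for the left/right trees
    left_max = right_min = header
    t = root
    while True:
        if key < t.key:
            if t.left is None:
                break
            if key < t.left.key:       # zig-zig case: rotate right first
                y = t.left
                t.left, y.right = y.right, t
                t = y
                if t.left is None:
                    break
            right_min.left = t         # link t into the right tree
            right_min = t
            t = t.left
        elif key > t.key:
            if t.right is None:
                break
            if key > t.right.key:      # zig-zig case: rotate left first
                y = t.right
                t.right, y.left = y.left, t
                t = y
                if t.right is None:
                    break
            left_max.right = t         # link t into the left tree
            left_max = t
            t = t.right
        else:
            break
    left_max.right, right_min.left = t.left, t.right   # reassemble the three trees
    t.left, t.right = header.right, header.left
    return t

def insert(root, key):
    """Plain BST insert followed by a splay of the new key (simple, not optimized)."""
    if root is None:
        return Node(key)
    t = root
    while True:
        if key < t.key:
            if t.left is None:
                t.left = Node(key)
                break
            t = t.left
        else:
            if t.right is None:
                t.right = Node(key)
                break
            t = t.right
    return splay(root, key)

root = None
for k in [5, 2, 8, 1, 3, 7, 9]:
    root = insert(root, k)
for _ in range(3):
    root = splay(root, 3)              # repeated queries keep the hot key at the root
assert root.key == 3
```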

    Distance-Sensitive Planar Point Location

    Let $\mathcal{S}$ be a connected planar polygonal subdivision with $n$ edges that we want to preprocess for point-location queries, and suppose we are given the probability $\gamma_i$ that the query point lies in a polygon $P_i$ of $\mathcal{S}$. We show how to preprocess $\mathcal{S}$ such that the query time for a point $p \in P_i$ depends on $\gamma_i$ and, in addition, on the distance from $p$ to the boundary of $P_i$: the farther away from the boundary, the faster the query. More precisely, we show that a point-location query can be answered in time $O\left(\min\left(\log n,\, 1 + \log \frac{\mathrm{area}(P_i)}{\gamma_i \Delta_p^2}\right)\right)$, where $\Delta_p$ is the shortest Euclidean distance from the query point $p$ to the boundary of $P_i$. Our structure uses $O(n)$ space and $O(n \log n)$ preprocessing time. It is based on a decomposition of the regions of $\mathcal{S}$ into convex quadrilaterals and triangles with the following property: for any point $p \in P_i$, the quadrilateral or triangle containing $p$ has area $\Omega(\Delta_p^2)$. For the special case where $\mathcal{S}$ is a subdivision of the unit square and $\gamma_i = \mathrm{area}(P_i)$, we present a simpler solution that achieves a query time of $O\left(\min\left(\log n,\, \log \frac{1}{\Delta_p^2}\right)\right)$. The latter solution can be extended to convex subdivisions in three dimensions.
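    The unit-square special case has a short geometric explanation: any quadtree cell containing $p$ whose diameter is below $\Delta_p$ lies entirely inside $P_i$, so a descent that stops at the first such cell takes $O(\log(1/\Delta_p)) = O(\log(1/\Delta_p^2))$ steps. The toy Python sketch below illustrates this descent on a hypothetical two-region subdivision; the SPLIT line and both helper functions are my illustrative assumptions, not the paper's construction.

```python
from typing import Optional, Tuple

SPLIT = 0.37  # toy subdivision of the unit square: region 0 left of x=SPLIT, region 1 right

def cell_region(x0: float, y0: float, s: float) -> Optional[int]:
    """Return the region containing the whole cell [x0, x0+s] x [y0, y0+s], or None."""
    if x0 + s <= SPLIT:
        return 0
    if x0 >= SPLIT:
        return 1
    return None  # the cell straddles the region boundary

def locate(px: float, py: float, max_depth: int = 52) -> Tuple[int, int]:
    """Quadtree descent: returns (region, depth); depth is O(log 1/Delta_p)."""
    x0 = y0 = 0.0
    s = 1.0
    for depth in range(max_depth):
        r = cell_region(x0, y0, s)
        if r is not None:
            return r, depth
        s /= 2.0                       # recurse into the child quadrant holding p
        if px >= x0 + s:
            x0 += s
        if py >= y0 + s:
            y0 += s
    raise ValueError("query point on or extremely close to the boundary")

# The farther p is from x=SPLIT, the earlier the descent stops:
print(locate(0.05, 0.5))    # far from the boundary -> shallow (depth 2)
print(locate(0.369, 0.5))   # Delta_p ~ 0.001 -> roughly 10 levels deep
```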

    Image formation in synthetic aperture radio telescopes

    Next-generation radio telescopes will be much larger and more sensitive, will have much wider observation bandwidths, and will be capable of pointing multiple beams simultaneously. Obtaining the sensitivity, resolution, and dynamic range supported by the receivers requires new signal processing techniques for array and atmospheric calibration, as well as new imaging techniques that are both more accurate and more computationally efficient, since data volumes will be much larger. This paper provides a tutorial overview of existing image formation techniques and outlines some of the future directions needed for information extraction from future radio telescopes. We describe the imaging process from the measurement equation to deconvolution, both as a Fourier inversion problem and as an array processing estimation problem. The latter formulation enables the development of more advanced techniques based on state-of-the-art array processing. We demonstrate the techniques on simulated and measured radio telescope data. (Comment: 12 pages)
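    To make the Fourier-inversion view concrete, the numpy sketch below forms a dirty image as the adjoint of the simplified measurement equation $V(u,v) = \iint I(l,m)\, e^{-2\pi i (ul + vm)}\, dl\, dm$ (small field of view, no w-term or direction-dependent effects; all data are synthetic). Deconvolution, e.g. CLEAN, would subsequently remove the point-spread-function sidelobes from such an image.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vis = 500
u = rng.uniform(-100, 100, n_vis)       # baseline coordinates in wavelengths
v = rng.uniform(-100, 100, n_vis)

# Two synthetic point sources at directions (l, m) with given fluxes -> visibilities.
sources = [(0.01, 0.00, 1.0), (-0.02, 0.015, 0.5)]
vis = sum(f * np.exp(-2j * np.pi * (u * l + v * m)) for l, m, f in sources)

# Dirty image: apply the adjoint of the measurement operator on a grid of directions.
l_grid = np.linspace(-0.05, 0.05, 128)
m_grid = np.linspace(-0.05, 0.05, 128)
L, M = np.meshgrid(l_grid, m_grid)
dirty = np.zeros_like(L)
for uk, vk, Vk in zip(u, v, vis):
    dirty += np.real(Vk * np.exp(2j * np.pi * (uk * L + vk * M)))
dirty /= n_vis

peak = np.unravel_index(np.argmax(dirty), dirty.shape)
print("brightest pixel near:", l_grid[peak[1]], m_grid[peak[0]])  # ~ (0.01, 0.0)
```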

    Fast Model Identification via Physics Engines for Data-Efficient Policy Search

    This paper presents a method for identifying mechanical parameters of robots or objects, such as their masses and friction coefficients. Key features are the use of off-the-shelf physics engines and the adaptation of a Bayesian optimization technique toward minimizing the number of real-world experiments needed for model-based reinforcement learning. The proposed framework reproduces, in a physics engine, experiments performed on a real robot, and optimizes the model's mechanical parameters so as to match the real-world trajectories. The optimized model is then used for learning a policy in simulation, before real-world deployment. It is well understood, however, that it is hard to reproduce real trajectories exactly in simulation. Moreover, a near-optimal policy can frequently be found with an imperfect model. This work therefore proposes a strategy for identifying a model that is just good enough to approximate the value of a locally optimal policy with a certain confidence, instead of wasting effort on identifying the most accurate model. Evaluations, performed both in simulation and on a real robotic manipulation task, indicate that the proposed strategy yields a time-efficient, integrated model identification and learning solution that significantly improves the data-efficiency of existing policy search algorithms. (Comment: IJCAI 1)
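    A hedged sketch of the identification loop follows: a toy one-parameter "physics engine" simulates a block decelerating under Coulomb friction, and the friction coefficient is fitted so that simulated positions match a noisy "real" trajectory. Plain random search stands in for the paper's Bayesian optimization, and every name and constant here is illustrative.

```python
import numpy as np

DT, STEPS, G = 0.01, 200, 9.81

def simulate(mu: float, v0: float = 2.0) -> np.ndarray:
    """Positions of a block sliding with initial speed v0 under friction mu."""
    x, v, xs = 0.0, v0, []
    for _ in range(STEPS):
        v = max(0.0, v - mu * G * DT)   # Coulomb friction decelerates the block
        x += v * DT
        xs.append(x)
    return np.array(xs)

# Stand-in for the real-robot trajectory: simulated ground truth plus sensor noise.
true_mu = 0.3
real_traj = simulate(true_mu) + np.random.default_rng(1).normal(0, 1e-3, STEPS)

def discrepancy(mu: float) -> float:
    """Mean squared error between simulated and 'real' positions."""
    return float(np.mean((simulate(mu) - real_traj) ** 2))

rng = np.random.default_rng(0)
candidates = rng.uniform(0.05, 1.0, 100)    # random search over the parameter range
best_mu = min(candidates, key=discrepancy)
print(f"identified mu = {best_mu:.3f} (true {true_mu})")
# A policy would then be trained against simulate(best_mu) before real-world deployment.
```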

    Self-Improving Algorithms

    We investigate ways in which an algorithm can improve its expected performance by fine-tuning itself automatically with respect to an unknown input distribution D. We assume here that D is of product type. More precisely, suppose that we need to process a sequence I_1, I_2, ... of inputs I = (x_1, x_2, ..., x_n) of some fixed length n, where each x_i is drawn independently from some arbitrary, unknown distribution D_i. The goal is to design an algorithm for these inputs so that eventually the expected running time will be optimal for the input distribution D = D_1 * D_2 * ... * D_n. We give such self-improving algorithms for two problems: (i) sorting a sequence of numbers and (ii) computing the Delaunay triangulation of a planar point set. Both algorithms achieve optimal expected limiting complexity. The algorithms begin with a training phase during which they collect information about the input distribution, followed by a stationary regime in which the algorithms settle into their optimized incarnations. (Comment: 26 pages, 8 figures; preliminary versions appeared at SODA 2006 and SoCG 2008. Thorough revision to improve the presentation of the paper)
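    The two-phase structure can be illustrated with a toy sorter: a training phase learns approximate quantiles of the input distribution, and the stationary regime routes each element to its bucket by binary search before sorting the (typically small) buckets. The Python sketch below simplifies the paper's construction, which learns a separate structure per coordinate i; here a single pooled set of bucket boundaries stands in for it.

```python
import bisect
import random

def train(samples, n_buckets):
    """Training phase: estimate bucket boundaries (approximate quantiles)
    from a list of training inputs, each a list of n numbers."""
    pooled = sorted(x for inp in samples for x in inp)
    step = max(1, len(pooled) // n_buckets)
    return pooled[step::step]

def self_improving_sort(inp, boundaries):
    """Stationary regime: bucket each element by binary search, then sort buckets.
    Buckets stay small when the training samples were representative of D."""
    buckets = [[] for _ in range(len(boundaries) + 1)]
    for x in inp:
        buckets[bisect.bisect_left(boundaries, x)].append(x)
    out = []
    for b in buckets:
        b.sort()
        out.extend(b)
    return out

rng = random.Random(0)

def sample_input():
    """One draw from a toy product distribution D = D_1 x ... x D_n (Gaussians)."""
    return [rng.gauss(i, 1.0) for i in range(50)]

boundaries = train([sample_input() for _ in range(200)], n_buckets=64)
inp = sample_input()
assert self_improving_sort(inp, boundaries) == sorted(inp)
```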