
    Signature extension preprocessing for LANDSAT MSS data

    There are no author-identified significant results in this report

    Preconditioning of weighted H(div)-norm and applications to numerical simulation of highly heterogeneous media

    In this paper we propose and analyze a preconditioner for a system arising from a finite element approximation of second order elliptic problems describing processes in highly heterogeneous media. Our approach uses the technique of multilevel methods and the recently proposed preconditioner based on additive Schur complement approximation by J. Kraus (see [8]). The main results are the design and a theoretical and numerical justification of an iterative method for such problems that is robust with respect to the contrast of the media, defined as the ratio between the maximum and minimum values of the coefficient (related to the permeability/conductivity). Comment: 28 pages
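    The notion of contrast-robustness can be made concrete with a toy experiment. The sketch below is not the paper's additive Schur complement or multilevel preconditioner; it only illustrates what the contrast (the max/min coefficient ratio) is, using an assumed 1D heterogeneous diffusion problem solved with Jacobi-preconditioned CG. The grid size, coefficient layout, and preconditioner choice are all illustrative assumptions.

    ```python
    # Toy illustration of "contrast" for -(a u')' = f on a 1D grid.
    # NOT the paper's preconditioner: plain Jacobi-preconditioned CG is used
    # only to show how iteration counts can be studied as the contrast grows.
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import cg, LinearOperator

    def diffusion_matrix(a):
        """1D finite-difference matrix for -(a u')' with Dirichlet BCs;
        `a` holds the coefficient on the n+1 cells between n interior nodes."""
        main = a[:-1] + a[1:]      # a_{i-1/2} + a_{i+1/2}
        off = -a[1:-1]             # -a_{i+1/2} on the off-diagonals
        return diags([off, main, off], [-1, 0, 1], format="csr")

    def cg_iterations(A, b, M=None):
        """Run CG to the default tolerance and count iterations."""
        count = [0]
        cg(A, b, M=M, callback=lambda xk: count.__setitem__(0, count[0] + 1))
        return count[0]

    n = 200
    rng = np.random.default_rng(0)
    for contrast in [1e0, 1e3, 1e6]:   # contrast = max(a) / min(a)
        a = np.where(rng.random(n + 1) < 0.5, 1.0, contrast)  # jumping coefficient
        A = diffusion_matrix(a)
        b = np.ones(n)
        M = LinearOperator((n, n), matvec=lambda r: r / A.diagonal())  # Jacobi
        print(f"contrast {contrast:.0e}: {cg_iterations(A, b, M)} PCG iterations")
    ```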

    Hashing-Based-Estimators for Kernel Density in High Dimensions

    Given a set of points $P \subset \mathbb{R}^{d}$ and a kernel $k$, the Kernel Density Estimate at a point $x \in \mathbb{R}^{d}$ is defined as $\mathrm{KDE}_{P}(x) = \frac{1}{|P|}\sum_{y \in P} k(x,y)$. We study the problem of designing a data structure that, given a data set $P$ and a kernel function, returns *approximations to the kernel density* of a query point in *sublinear time*. We introduce a class of unbiased estimators for kernel density implemented through locality-sensitive hashing, and give general theorems bounding the variance of such estimators. These estimators give rise to efficient data structures for estimating the kernel density in high dimensions for a variety of commonly used kernels. Our work is the first to provide data structures with theoretical guarantees that improve upon simple random sampling in high dimensions. Comment: A preliminary version of this paper appeared in FOCS 201
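    To make the definition concrete, here is a minimal sketch of the exact estimate $\mathrm{KDE}_{P}(x)$ and of the simple unbiased random-sampling baseline the abstract compares against. It does not implement the hashing-based (LSH) estimators themselves; the Gaussian kernel, the bandwidth, and the sample size are assumptions.

    ```python
    # Exact kernel density estimate and the uniform random-sampling baseline.
    # NOT the paper's hashing-based estimator; kernel, bandwidth, and sample
    # size are illustrative assumptions.
    import numpy as np

    def gaussian_kernel(x, Y, sigma=1.0):
        """k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) for every row y of Y."""
        return np.exp(-np.sum((Y - x) ** 2, axis=1) / (2.0 * sigma ** 2))

    def kde_exact(x, P, sigma=1.0):
        """KDE_P(x) = (1/|P|) * sum_{y in P} k(x, y); costs O(|P| d) per query."""
        return gaussian_kernel(x, P, sigma).mean()

    def kde_sampled(x, P, m, sigma=1.0, rng=None):
        """Unbiased estimate from m uniform samples of P (sublinear if m << |P|)."""
        rng = rng or np.random.default_rng()
        idx = rng.integers(0, len(P), size=m)
        return gaussian_kernel(x, P[idx], sigma).mean()

    rng = np.random.default_rng(1)
    P = rng.normal(size=(100_000, 16))   # data set P, 100k points in R^16
    x = np.zeros(16)                     # query point
    print("exact  :", kde_exact(x, P))
    print("sampled:", kde_sampled(x, P, m=1_000, rng=rng))
    ```

    The variance of such a sampling estimator is what the hashing-based estimators in the paper are designed to improve upon in high dimensions.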

    Ranking Large Temporal Data

    Ranking temporal data has not been studied until recently, even though ranking is an important operator (being promoted as a first-class citizen) in database systems. However, only instant top-k queries on temporal data have been studied, where objects with the k highest scores at a query time instant t are retrieved. The instant top-k definition clearly comes with limitations (sensitive to outliers, difficult to choose a meaningful query time t). A more flexible and general ranking operation is to rank objects based on the aggregation of their scores in a query interval, which we dub the aggregate top-k query on temporal data. For example, return the top-10 weather stations having the highest average temperature from 10/01/2010 to 10/07/2010; find the top-20 stocks having the largest total transaction volumes from 02/05/2011 to 02/07/2011. This work presents a comprehensive study of this problem by designing both exact and approximate methods (with approximation quality guarantees). We also provide theoretical analysis of the construction cost, the index size, and the update and query costs of each approach. Extensive experiments on large real datasets clearly demonstrate the efficiency, effectiveness, and scalability of our methods compared to the baseline methods. Comment: VLDB201
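    As a concrete reading of the aggregate top-k definition, the sketch below scores each object by the average of its (timestamp, score) samples falling inside the query interval and returns the k best. This is only a linear-scan baseline, not the exact or approximate index-based methods developed in the paper; the data layout and names are assumptions.

    ```python
    # Linear-scan baseline for the aggregate top-k query: rank objects by the
    # average of their scores inside a query interval [t1, t2]. Illustrative
    # only; not the paper's index-based methods.
    import heapq

    def aggregate_topk(series, t1, t2, k):
        """series: {object_id: [(timestamp, score), ...]}; returns the k objects
        with the highest average score over samples with t1 <= timestamp <= t2."""
        ranked = []
        for obj, samples in series.items():
            window = [s for t, s in samples if t1 <= t <= t2]
            if window:                                  # skip objects with no data
                ranked.append((sum(window) / len(window), obj))
        return heapq.nlargest(k, ranked)                # [(avg_score, object_id), ...]

    stations = {
        "station_A": [(1, 20.0), (2, 22.5), (3, 21.0)],
        "station_B": [(1, 25.0), (2, 19.0)],
        "station_C": [(2, 30.0), (3, 28.0)],
    }
    print(aggregate_topk(stations, t1=1, t2=3, k=2))
    ```

    Swapping the average for a sum would model the total-transaction-volume example from the abstract.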