
    Inverse problem for wave equation with sources and observations on disjoint sets

    We consider an inverse problem for a hyperbolic partial differential equation on a compact Riemannian manifold. Assuming that Γ_1 and Γ_2 are two disjoint open subsets of the boundary of the manifold, we define the restricted Dirichlet-to-Neumann operator Λ_{Γ_1,Γ_2}. This operator corresponds to the boundary measurements when we have smooth sources supported on Γ_1 and the fields produced by these sources are observed on Γ_2. We show that when Γ_1 and Γ_2 are disjoint but their closures intersect in at least one point, the restricted Dirichlet-to-Neumann operator Λ_{Γ_1,Γ_2} determines the Riemannian manifold and the metric on it up to an isometry. In Euclidean space, the result yields that an anisotropic wave speed inside a compact body is determined, up to a natural coordinate transformation, by measurements on the boundary of the body even when wave sources are kept away from receivers. Moreover, we show that if we have three arbitrary non-empty open subsets Γ_1, Γ_2, and Γ_3 of the boundary, then the restricted Dirichlet-to-Neumann operators Λ_{Γ_j,Γ_k} for 1 ≤ j < k ≤ 3 determine the Riemannian manifold up to an isometry. A similar result is proven for finite-time boundary measurements when the hyperbolic equation satisfies an exact controllability condition.
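
    The restricted Dirichlet-to-Neumann map described above can be formalized roughly as follows; this is a sketch using standard notation for the wave equation on a manifold (M, g), not the paper's exact conventions:

    ```latex
    % Sketch: restricted Dirichlet-to-Neumann map (notation assumed, not verbatim
    % from the paper). Sources f live on Γ_1; the normal derivative is read on Γ_2.
    \begin{aligned}
    &\partial_t^2 u - \Delta_g u = 0 \quad \text{in } M \times (0,\infty),\\
    &u|_{t=0} = \partial_t u|_{t=0} = 0, \qquad
      u|_{\partial M \times (0,\infty)} = f, \quad
      \operatorname{supp} f \subset \Gamma_1 \times (0,\infty),\\
    &\Lambda_{\Gamma_1,\Gamma_2} f = \partial_\nu u \big|_{\Gamma_2 \times (0,\infty)}.
    \end{aligned}
    ```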

    Learned cardinalities: Estimating correlated joins with deep learning

    We describe a new deep learning approach to cardinality estimation. MSCN is a multi-set convolutional network, tailored to representing relational query plans, that employs set semantics to capture query features and true cardinalities. MSCN builds on sampling-based estimation, addressing its weaknesses when no sampled tuples qualify a predicate, and in capturing join-crossing correlations. Our evaluation of MSCN using a real-world dataset shows that deep learning significantly enhances the quality of cardinality estimation, which is the core problem in query optimization.
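
    The multi-set idea can be sketched in a few lines: a query is represented as sets of tables, joins, and predicates; each set element is embedded by a small per-set network and average-pooled, so the representation is invariant to element order. The encodings, dimensions, and weights below are illustrative toy choices, not MSCN's actual architecture:

    ```python
    import numpy as np

    # Toy sketch of MSCN-style set featurization (assumed simplification of the
    # paper's model): per-set MLP followed by permutation-invariant average pooling.
    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(x, 0.0)

    def set_module(elements, W, b):
        """Embed each set element, then average-pool over the set."""
        if len(elements) == 0:                    # empty set -> zero vector
            return np.zeros(W.shape[1])
        h = relu(np.stack(elements) @ W + b)      # (n, hidden)
        return h.mean(axis=0)                     # order-invariant pooling

    d_in, d_hid = 4, 8
    W_t, W_j, W_p = (rng.normal(size=(d_in, d_hid)) for _ in range(3))
    b = np.zeros(d_hid)

    # Toy one-hot encodings for tables, join edges, and predicates.
    tables     = [np.eye(d_in)[0], np.eye(d_in)[1]]
    joins      = [np.eye(d_in)[2]]
    predicates = []                               # query with no predicates

    q = np.concatenate([set_module(tables, W_t, b),
                        set_module(joins, W_j, b),
                        set_module(predicates, W_p, b)])
    print(q.shape)  # (24,) -- this vector would feed a final MLP that
                    # predicts log-cardinality
    ```

    Averaging rather than concatenating the element embeddings is what lets one network handle queries with any number of joins or predicates.
    
    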

    How Good Are Query Optimizers, Really?

    Finding a good join order is crucial for query performance. In this paper, we introduce the Join Order Benchmark (JOB) and experimentally revisit

    Optimistically compressed Hash Tables & Strings in the USSR

    Modern query engines rely heavily on hash tables for query processing. Overall query performance and memory footprint is often determined by how hash tables and the tuples within them are represented. In this work, we propose three complementary techniques to improve this representation: Domain-Guided Prefix Suppression bit-packs keys and values tightly to reduce hash table record width. Optimistic Splitting decomposes values (and operations on them) into (operations on) frequently- and infrequently-accessed value slices. By removing the infrequently-accessed value slices from the hash table record, it improves cache locality. The Unique Strings Self-aligned Region (USSR) accelerates handling frequently occurring strings, which are widespread in real-world data sets, by creating an on-the-fly dictionary of the most frequent strings. This allows executing many string operations with integer logic and reduces memory pressure. We integrated these techniques into Vectorwise. On the TPC-H benchmark, our approach reduces peak memory consumption by 2–4× and improves performance by up to 1.5×. On a real-world BI workload, we measured a 2× improvement in performance and in micro-benchmarks we observed speedups of up to 25×.
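
    The core of Domain-Guided Prefix Suppression can be sketched with plain integers: given min/max statistics for each column, every value fits in bits_needed(max − min) bits, so several fields can be packed into one machine word instead of occupying full-width slots. The field layout and helper names below are illustrative, not Vectorwise's actual implementation:

    ```python
    # Toy sketch of domain-guided bit-packing (assumed simplification): store each
    # value as an offset from its domain minimum, in the fewest bits that cover
    # the domain range, packed low bits first into a single integer word.

    def bits_needed(domain_range: int) -> int:
        return max(1, domain_range.bit_length())

    def pack(fields, domains):
        """Pack (value, (lo, hi)) fields into one integer word."""
        word, shift = 0, 0
        for value, (lo, hi) in zip(fields, domains):
            width = bits_needed(hi - lo)
            word |= (value - lo) << shift        # store offset from domain min
            shift += width
        return word, shift                       # packed word and total bits used

    def unpack(word, domains):
        out, shift = [], 0
        for lo, hi in domains:
            width = bits_needed(hi - lo)
            out.append(((word >> shift) & ((1 << width) - 1)) + lo)
            shift += width
        return out

    domains = [(1992, 1998), (1, 12), (1, 31)]   # e.g. year, month, day columns
    word, used = pack([1995, 7, 23], domains)
    print(unpack(word, domains), used)           # [1995, 7, 23] 12
    ```

    Three fields that would naively need three 64-bit slots fit here in 12 bits, which is the kind of record-width reduction that shrinks hash table footprint and improves cache behavior.
    
    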

    Efficient query processing with Optimistically Compressed Hash Tables & Strings in the USSR

    Modern query engines rely heavily on hash tables for query processing. Overall query performance and memory footprint is often determined by how hash tables and the tuples within them are represented. In this work, we propose three complementary techniques to improve this representation: Domain-Guided Prefix Suppression bit-packs keys and values tightly to reduce hash table record width. Optimistic Splitting decomposes values (and operations on them) into (operations on) frequently-accessed and infrequently-accessed value slices. By removing the infrequently-accessed value slices from the hash table record, it improves cache locality. The Unique Strings Self-aligned Region (USSR) accelerates handling frequently-occurring strings, which are very common in real-world data sets, by creating an on-the-fly dictionary of the most frequent strings. This allows executing many string operations with integer logic and reduces memory pressure. We integrated these techniques into Vectorwise. On the TPC-H benchmark, our approach reduces peak memory consumption by 2–4× and improves performance by up to 1.5×. On a real-world BI workload, we measured a 2× improvement in performance and in micro-benchmarks we observed speedups of up to 25×.