128 research outputs found

    Simultaneous pairs of dual integral equations

    A fruitful method of attack in the solution of mixed boundary value problems is the use of integral transform techniques. Satisfaction of the mixed boundary conditions then leads to consideration of one or more pairs of dual integral equations. Often the solution of these dual integral equations follows only from laborious and complex manipulations. An exception is the work of Copson [1] and of Lowengrub and Sneddon [2], in which the general solution of the pair of dual integral equations

    (1a) $\int_0^\infty \psi(\xi)\, J_\nu(\xi r)\, d\xi = f_1(r), \qquad 1 < r < \infty,$
    (1b) $\int_0^\infty \psi(\xi)\, \xi^{2\alpha} J_\nu(\xi r)\, d\xi = f_2(r), \qquad 0 < r < 1,$

    is obtained in a simple and straightforward manner. The purpose of this article is to demonstrate that the solution techniques of [1], [2] are equally applicable to the simultaneous pairs of dual integral equations

    $\int_0^\infty [a\,\psi_1(\xi) + \psi_2(\xi)]\, J_{\nu+2}(\xi r)\, d\xi = f_1(r), \qquad 1 < r < \infty,$
    $\int_0^\infty [b\,\psi_1(\xi) + \psi_2(\xi)]\, \xi^{2\alpha} J_{\nu+2}(\xi r)\, d\xi = f_2(r), \qquad 0 < r < 1,$
    $\int_0^\infty [c\,\psi_1(\xi) + \psi_2(\xi)]\, J_{\nu+2}(\xi r)\, d\xi = f_3(r), \qquad 1 < r < \infty,$
    $\int_0^\infty [\psi_1(\xi) + \psi_2(\xi)]\, \xi^{2\alpha} J_\nu(\xi r)\, d\xi = f_4(r), \qquad 0 < r < 1.$

    Diag-Join: An Opportunistic Join Algorithm for 1:N Relationships

    Time of creation is one of the predominant (often implicit) clustering strategies, and not only in Data Warehouse systems: line items are created together with their corresponding order, objects are created together with their subparts, and so on. Newly created data is then appended to the existing data. We present a new join algorithm, called Diag-Join, which exploits this time-of-creation clustering. Our performance evaluation shows that it outperforms standard join algorithms such as nested-loop join and GRACE hash join. We also present an analytical cost model for Diag-Join.
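
    The core idea lends itself to a compact illustration. Below is a minimal, hypothetical Python sketch of a Diag-Join-style 1:N join: both inputs are assumed to be stored in time-of-creation order, a bounded window of recent orders slides over the outer input, and line items that miss the window fall back to an ordinary hash join. The relation and attribute names, the window policy, and the fallback pass are assumptions made for this sketch, not the algorithm as published.

        from collections import OrderedDict

        def diag_join(orders, lineitems, window_size=1024):
            # Sketch of a Diag-Join-style 1:N join (simplified, hypothetical).
            # Both inputs are assumed clustered by time of creation, so an order
            # and its line items lie close together; a bounded window of recent
            # orders usually suffices, and misses are joined in a fallback pass.
            # `orders` is assumed to be a materialized list (iterated twice).
            window = OrderedDict()   # order key -> order tuple (bounded window)
            overflow = []            # line items whose order was not in the window
            result = []
            orders_iter = iter(orders)

            def advance_window():
                # Pull the next order into the window, evicting the oldest if full.
                try:
                    order = next(orders_iter)
                except StopIteration:
                    return False
                window[order["o_orderkey"]] = order
                if len(window) > window_size:
                    window.popitem(last=False)
                return True

            for item in lineitems:
                key = item["l_orderkey"]
                # Advance the window until the matching order appears or input ends.
                while key not in window and advance_window():
                    pass
                if key in window:
                    result.append((window[key], item))
                else:
                    overflow.append(item)

            # Fallback for the (ideally few) misses: a plain hash join.
            index = {o["o_orderkey"]: o for o in orders}
            for item in overflow:
                if item["l_orderkey"] in index:
                    result.append((index[item["l_orderkey"]], item))
            return result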

    The Implementation and Performance of Compressed Databases

    In this paper, we show how compression can be integrated into a relational database system. Specifically, we describe how the storage manager, the query execution engine, and the query optimizer of a database system can be extended to deal with compressed data. Our main result is that compression can significantly improve the response time of queries if very light-weight compression techniques are used. We present such light-weight compression techniques and give the results of running the TPC-D benchmark on a compressed and an uncompressed database using AODB, an experimental database system developed at the Universities of Mannheim and Passau. Our benchmark results demonstrate that compression indeed offers high performance gains (up to 55%) for IO-intensive queries and moderate gains for CPU-intensive queries. Compression can, however, also increase the running time of certain update operations. In all, we recommend extending today's database systems with light-weight compression techniques and making extensive use of this feature.
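
    To make the notion of a light-weight technique concrete, the following Python sketch shows dictionary compression of a single column: distinct values are stored once and tuples keep only small integer codes, so decompression during query processing is a single lookup. This is an illustrative example of the class of techniques the paper advocates, not the AODB implementation.

        def dictionary_encode(column):
            # Light-weight dictionary compression: each distinct value is stored
            # once; the column itself keeps only small integer codes.
            dictionary = []          # code -> value
            codes = {}               # value -> code
            encoded = []
            for value in column:
                if value not in codes:
                    codes[value] = len(dictionary)
                    dictionary.append(value)
                encoded.append(codes[value])
            return dictionary, encoded

        def dictionary_decode(dictionary, encoded):
            # Decompression is one lookup per tuple, which keeps CPU cost low.
            return [dictionary[code] for code in encoded]

        # Example: a low-cardinality column shrinks to a tiny dictionary plus codes.
        column = ["GERMANY", "FRANCE", "GERMANY", "GERMANY", "FRANCE"]
        dictionary, encoded = dictionary_encode(column)
        assert dictionary_decode(dictionary, encoded) == column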

    Effiziente Laufzeitsysteme für Datenlager

    Current DBMSs are optimized for OLTP applications. The demands that OLAP and OLTP applications place on a DBMS differ considerably. We have identified some of these differences and developed a runtime system that exploits them to improve performance for OLAP applications. The techniques we developed include (1) the use of a virtual machine for expression evaluation, (2) the efficient integration of compression, and (3) specialized algebraic operators. Our evaluation showed that using these techniques yields significant performance improvements (a factor of 2 or more).
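
    As an illustration of technique (1), here is a minimal Python sketch of expression evaluation with a small stack-based virtual machine: a predicate is compiled once into a flat instruction list and then interpreted per tuple, instead of walking an expression tree for every row. The opcodes, the program format, and the example predicate are assumptions made for this sketch, not the system's actual instruction set.

        def eval_expression(program, row):
            # Interpret a compiled expression over one tuple (dict of attributes).
            stack = []
            for op, arg in program:
                if op == "LOAD_COL":      # push an attribute of the current tuple
                    stack.append(row[arg])
                elif op == "LOAD_CONST":  # push a literal
                    stack.append(arg)
                elif op == "MUL":
                    b, a = stack.pop(), stack.pop()
                    stack.append(a * b)
                elif op == "GT":
                    b, a = stack.pop(), stack.pop()
                    stack.append(a > b)
                else:
                    raise ValueError("unknown opcode: " + op)
            return stack.pop()

        # Compiled form of the predicate: price * quantity > 1000
        program = [("LOAD_COL", "price"), ("LOAD_COL", "quantity"),
                   ("MUL", None), ("LOAD_CONST", 1000), ("GT", None)]
        assert eval_expression(program, {"price": 12.5, "quantity": 100}) is True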

    Anatomy of a Native XML Base Management System

    Several alternatives for managing large XML document collections exist, ranging from file systems over relational or other database systems to specifically tailored XML repositories. In this paper we give a tour of Natix, a database management system designed from scratch for storing and processing XML data. Contrary to the common belief that the management of XML data is just another application for traditional databases such as relational systems, we illustrate how almost every component in a database system is affected in terms of adequacy and performance. We show how to design and optimize areas such as storage, transaction management (comprising recovery and multi-user synchronisation), and query processing for XML.

    AsterixDB: A Scalable, Open Source BDMS

    AsterixDB is a new, full-function BDMS (Big Data Management System) with a feature set that distinguishes it from other platforms in today's open source Big Data ecosystem. Its features make it well-suited to applications like web data warehousing, social data storage and analysis, and other use cases related to Big Data. AsterixDB has a flexible NoSQL-style data model; a query language that supports a wide range of queries; a scalable runtime; partitioned, LSM-based data storage and indexing (including B+-tree, R-tree, and text indexes); support for external as well as natively stored data; a rich set of built-in types; support for fuzzy, spatial, and temporal types and queries; a built-in notion of data feeds for ingestion of data; and transaction support akin to that of a NoSQL store. Development of AsterixDB began in 2009 and led to a mid-2013 initial open source release. This paper is the first complete description of the resulting open source AsterixDB system. Covered herein are the system's data model, its query language, and its software architecture. Also included are a summary of the current status of the project and a first glimpse into how AsterixDB performs when compared to alternative technologies, including a parallel relational DBMS, a popular NoSQL store, and a popular Hadoop-based SQL data analytics platform, on operations that both AsterixDB and the respective alternative can perform. Also included is a brief description of some initial trials that the system has undergone and the lessons learned (and plans laid) based on those early "customer" engagements.