888 research outputs found

    Cache-and-query for wide area sensor databases


    Semantic Cache System


    An incremental database access method for autonomous interoperable databases

    We investigated a number of design and performance issues of interoperable database management systems (DBMS's). The major results of our investigation were obtained in the areas of client-server database architectures for heterogeneous DBMS's, incremental computation models, buffer management techniques, and query optimization. We finished a prototype of an advanced client-server workstation-based DBMS which allows access to multiple heterogeneous commercial DBMS's. Experiments and simulations were then run to compare its performance with the standard client-server architectures. The focus of this research was on adaptive optimization methods for heterogeneous database systems. Adaptive buffer management accounts for the random and object-oriented access methods for which no known characterization of the access patterns exists. Adaptive query optimization means that value distributions and selectivities, which play the most significant role in query plan evaluation, are continuously refined to reflect the actual values, as opposed to static ones computed off-line. Query feedback is a concept that was first introduced to the literature by our group. We employed query feedback both for adaptive buffer management and for computing value distributions and selectivities. For adaptive buffer management, we use the page faults of prior executions to make more 'informed' management decisions. For estimating the distributions and the selectivities, we use curve-fitting techniques, such as least squares and splines, to regress on these values.
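
    A minimal sketch of the query-feedback idea described above, assuming numpy is available; the class and method names (FeedbackEstimator, observe, estimate) are illustrative and not taken from the report.

        import numpy as np

        class FeedbackEstimator:
            """Refines a selectivity curve from the feedback of executed queries."""

            def __init__(self, degree: int = 3):
                self.degree = degree
                self.values = []         # predicate constants seen in past queries
                self.selectivities = []  # actual selectivities reported as feedback
                self.coeffs = None       # fitted polynomial coefficients

            def observe(self, value: float, actual_selectivity: float) -> None:
                # Record feedback from an executed query and refit the curve.
                self.values.append(value)
                self.selectivities.append(actual_selectivity)
                if len(self.values) > self.degree:
                    # Least-squares polynomial fit, in the spirit of the curve fitting above.
                    self.coeffs = np.polyfit(self.values, self.selectivities, self.degree)

            def estimate(self, value: float) -> float:
                # Fall back to a uniform guess until enough feedback has accumulated.
                if self.coeffs is None:
                    return 0.1
                return float(np.clip(np.polyval(self.coeffs, value), 0.0, 1.0))

        # Usage: after each query executes, feed back its observed selectivity.
        est = FeedbackEstimator()
        for v, sel in [(10, 0.02), (50, 0.20), (90, 0.70), (120, 0.95), (30, 0.08)]:
            est.observe(v, sel)
        print(est.estimate(60))  # refined estimate for an unseen predicate constant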

    Semantic Cache Reasoners


    Issues on distributed caching of spatial data

    The amount of digital information about places has grown rapidly. With the spread of mobile, Internet-enabled devices, this information can now be accessed anytime and from anywhere. In the course of this development, numerous location-based applications and services have become popular; digital shopping assistants, tourist information services, and geo-social applications are among the best-known representatives. Growing user numbers and rapidly growing data volumes pose serious challenges for providers of location-based information. The data provisioning process must be efficient to allow cost-effective operation. In addition, resources should be assignable flexibly enough to compensate for load imbalances between system components. Furthermore, data providers must be able to scale their processing capacity with rising and falling query load. In this work we present a distributed cache for location-based data, in which replicas of the most frequently used data are held in volatile memory by several independent servers. Our approach addresses the challenges for providers of location-based information as follows: First, a caching strategy designed specifically for the access patterns of location-based applications increases overall efficiency, since a substantial share of the cached results of previous queries can be reused. Moreover, our load-balancing techniques, developed specifically for the geographic context, compensate for dynamic load imbalances. Finally, our distributed protocols for adding and removing servers enable providers of location-based information to adapt their processing capacity to rising or falling query load. In this document we first examine the requirements of data provisioning in the context of location-based applications. We then discuss possible design patterns and derive an architecture for a distributed cache. Over the course of this work, several concrete implementation variants were developed, which we present and compare in this document. Our evaluation demonstrates not only the feasibility but also the effectiveness of our caching approach for achieving scalability and availability in the provision of location-based data.
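
    A minimal sketch, not the thesis implementation, of the core idea: spatial data is cached per grid cell, cells are assigned to independent cache servers, and each server keeps its replicas in volatile memory with LRU eviction. All names (GridCache, cell_of, server_for) are illustrative assumptions.

        from collections import OrderedDict
        import hashlib

        CELL_SIZE = 0.01  # grid cell width in degrees; the unit of caching

        def cell_of(lat: float, lon: float) -> tuple:
            # Map a coordinate to its containing grid cell.
            return (int(lat / CELL_SIZE), int(lon / CELL_SIZE))

        def server_for(cell: tuple, servers: list) -> str:
            # Hash the cell id onto one of the cache servers; a simple stand-in
            # for the geo-aware load balancing discussed in the abstract.
            digest = hashlib.sha1(repr(cell).encode()).hexdigest()
            return servers[int(digest, 16) % len(servers)]

        class GridCache:
            """In-memory LRU cache held by a single cache server."""

            def __init__(self, capacity: int = 1024):
                self.capacity = capacity
                self.entries = OrderedDict()  # cell -> cached objects for that cell

            def get(self, cell):
                if cell in self.entries:
                    self.entries.move_to_end(cell)  # mark as recently used
                    return self.entries[cell]
                return None

            def put(self, cell, objects):
                self.entries[cell] = objects
                self.entries.move_to_end(cell)
                if len(self.entries) > self.capacity:
                    self.entries.popitem(last=False)  # evict the least recently used cell

        # Usage: route a point query to the server responsible for its cell.
        servers = ["cache-a", "cache-b", "cache-c"]
        print(server_for(cell_of(48.775, 9.182), servers))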

    WWW Programming using computational logic systems (and the PiLLoW/Ciao library)

    We discuss from a practical point of view a number of issues involved in writing Internet and WWW applications using LP/CLP systems. We describe PiLLoW, a public-domain Internet and WWW programming library for LP/CLP systems which we argue significantly simplifies the process of writing such applications. PiLLoW provides facilities for generating HTML structured documents, producing HTML forms, writing form handlers, accessing and parsing WWW documents, and accessing code posted at HTTP addresses. We also describe the architecture of some application classes, using a high-level model of client-server interaction, active modules. We then propose an architecture for automatic LP/CLP code downloading for local execution, using generic browsers. Finally, we also provide an overview of related work on the topic. The PiLLoW library has been developed in the context of the &-Prolog and CIAO systems, but it has been adapted to a number of popular LP/CLP systems, supporting most of its functionality.
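
    PiLLoW itself is an LP/CLP (Prolog) library, so the sketch below only illustrates, in Python, the general idea of describing HTML as structured terms, rendering them, and handling a submitted form; none of the names reflect PiLLoW's actual API.

        from html import escape

        def render(term) -> str:
            # A term is either a text node (string) or a (tag, attrs, children) tuple.
            if isinstance(term, str):
                return escape(term)
            tag, attrs, children = term
            attr_str = "".join(f' {k}="{escape(v)}"' for k, v in attrs.items())
            body = "".join(render(c) for c in children)
            return f"<{tag}{attr_str}>{body}</{tag}>"

        # An HTML form described as nested terms, in the spirit of structured documents.
        form = ("form", {"method": "post", "action": "/handler"}, [
            ("input", {"type": "text", "name": "query"}, []),
            ("input", {"type": "submit", "value": "Ask"}, []),
        ])

        def handle_form(fields: dict) -> str:
            # A form handler receives the parsed field values and returns a page.
            answer = fields.get("query", "")
            return render(("html", {}, [("body", {}, [("p", {}, [f"You asked: {answer}"])])]))

        print(render(form))
        print(handle_form({"query": "append(X, Y, [1, 2])"}))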

    A flexible and decentralized approach to query processing in geo-distributed systems

    This thesis studies the design of query processing systems across a diversity of geo-distributed settings. Optimising performance metrics such as response time, freshness, or operational cost involves design decisions, such as what derived state (e.g., indexes, materialised views, or caches) to maintain, and how to distribute and where to place the corresponding computation and state. These metrics are often in tension, and the trade-offs depend on the specific application and/or environment. This requires the ability to adapt the query engine's topology and architecture, and the placement of its components. This thesis makes the following contributions:
    - A flexible architecture for geo-distributed query engines, based on components connected in a bidirectional acyclic graph.
    - A common microservice abstraction and API for these components, the Query Processing Unit (QPU). A QPU encapsulates some primitive query processing task. Multiple QPU types exist, which can be instantiated and composed into complex graphs.
    - A model for constructing modular query engine architectures as a distributed topology of QPUs, enabling flexible design and trade-offs between performance metrics.
    - Proteus, a QPU-based framework for constructing and deploying query engines.
    - Representative deployments of Proteus and their experimental evaluation.
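
    A hedged sketch of the QPU abstraction described above: every unit exposes one common query interface and can be wired to upstream units to form a processing graph. The class and method names are assumptions, not Proteus's actual API.

        from abc import ABC, abstractmethod
        from typing import Dict, Iterable, List

        class QPU(ABC):
            """A query processing unit with a uniform query() API."""

            def __init__(self, upstreams: List["QPU"] = None):
                self.upstreams = upstreams or []

            @abstractmethod
            def query(self, predicate: Dict) -> Iterable[Dict]:
                ...

        class DataStoreQPU(QPU):
            # Leaf unit: serves records straight from an (here in-memory) data store.
            def __init__(self, records: List[Dict]):
                super().__init__()
                self.records = records

            def query(self, predicate):
                return (r for r in self.records
                        if all(r.get(k) == v for k, v in predicate.items()))

        class FilterQPU(QPU):
            # Intermediate unit: forwards the query and filters upstream results.
            def __init__(self, attribute: str, value, upstreams: List[QPU]):
                super().__init__(upstreams)
                self.attribute, self.value = attribute, value

            def query(self, predicate):
                for up in self.upstreams:
                    for r in up.query(predicate):
                        if r.get(self.attribute) == self.value:
                            yield r

        # Compose a tiny graph: two data-store units feeding one filter unit.
        store_eu = DataStoreQPU([{"id": 1, "region": "eu", "status": "ok"}])
        store_us = DataStoreQPU([{"id": 2, "region": "us", "status": "ok"}])
        engine = FilterQPU("status", "ok", [store_eu, store_us])
        print(list(engine.query({})))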

    Window Query Processing with Proxy Cache

    A location dependent query (LDQ) result set is valid only in a specific region called the validity region (VR). While limiting the validity of a particular result set to a given area, the VR may also be used in caching implementations to determine whether cached results satisfy semantically equivalent queries. Existing LDQ caching schemes rely on the database servers to provide the VR at the cost of high computational overhead. Alternatively, an LDQ proxy cache, which approximates the VR, can be employed, freeing the database servers from the high cost of calculating the VR. An LDQ proxy cache architecture is proposed to compute an estimated validity region (EVR) based on the query history observed at the proxy server. We present an algorithm, Window_EVR, for the LDQ proxy to compute the EVR for a window query result set. The simulation results show that LDQ proxy caching using the Window_EVR algorithm significantly reduces both the window query response time and the workload at the database servers while maintaining query result set accuracy.
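
    A simplified illustration, not the paper's Window_EVR algorithm, of how an LDQ proxy might reuse a cached result: each result is stored together with its estimated validity region, and a new query is answered locally whenever its query point falls inside that region. Names here are assumptions.

        from dataclasses import dataclass
        from typing import List, Optional

        @dataclass
        class Rect:
            xmin: float
            ymin: float
            xmax: float
            ymax: float

            def contains(self, x: float, y: float) -> bool:
                return self.xmin <= x <= self.xmax and self.ymin <= y <= self.ymax

        @dataclass
        class CacheEntry:
            result: List[dict]  # result set of a previous window query
            evr: Rect           # estimated validity region learned from query history

        class LDQProxyCache:
            def __init__(self):
                self.entries: List[CacheEntry] = []

            def lookup(self, x: float, y: float) -> Optional[List[dict]]:
                # Answer locally only if the query point lies in some entry's EVR;
                # otherwise the query must be forwarded to the database server.
                for e in self.entries:
                    if e.evr.contains(x, y):
                        return e.result
                return None

            def store(self, result: List[dict], evr: Rect) -> None:
                self.entries.append(CacheEntry(result, evr))

        # Usage: a hit avoids both the round trip and the server-side VR computation.
        cache = LDQProxyCache()
        cache.store([{"poi": "cafe"}], Rect(0.0, 0.0, 1.0, 1.0))
        print(cache.lookup(0.5, 0.5))  # hit: served by the proxy
        print(cache.lookup(2.0, 2.0))  # miss: forward to the database server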

    Semantic Caching Framework: An FPGA-Based Application for IoT Security Monitoring

    Security monitoring is a subdomain of cybersecurity that aims to guarantee the safety of systems by continuously monitoring for unusual events. The growth of the Internet of Things produces huge amounts of heterogeneous information that must be managed efficiently. Cloud computing provides the software and hardware resources for large-scale data management. However, the performance of sequences of on-line queries over long-term historical data may not be compatible with the urgency of security monitoring. This work addresses the problem by proposing a semantic caching framework and applying it to FPGA-based acceleration hardware for fast and sufficiently accurate log processing across various data stores and execution engines.
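
    A minimal sketch of the semantic-caching idea referenced above, reduced to query containment on time-range predicates over logs; it does not reflect the paper's FPGA design, and all names are illustrative.

        from typing import List, Optional, Tuple

        class SemanticLogCache:
            """Caches log-query results keyed by their (start, end) time predicate."""

            def __init__(self):
                self.entries: List[Tuple[float, float, List[dict]]] = []

            def probe(self, start: float, end: float) -> Optional[List[dict]]:
                # A cached entry answers the query if its time range contains the
                # requested one (semantic containment rather than exact key match).
                for c_start, c_end, rows in self.entries:
                    if c_start <= start and end <= c_end:
                        return [r for r in rows if start <= r["ts"] <= end]
                return None  # remainder must be fetched from the backing store

            def insert(self, start: float, end: float, rows: List[dict]) -> None:
                self.entries.append((start, end, rows))

        # Usage: a narrower follow-up query over recent logs is served from the cache.
        cache = SemanticLogCache()
        cache.insert(0, 100, [{"ts": 10, "event": "login_fail"}, {"ts": 80, "event": "scan"}])
        print(cache.probe(50, 90))   # contained -> answered locally
        print(cache.probe(90, 200))  # not contained -> None, query the engine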