
    A Genetic Programming Approach for Distributed Queries

    With the emergence of relatively inexpensive and advanced communication technology, Distributed Database Management Systems (DDBMS) have become an integral part of many computer applications. Efficient query processing is one of the most important issues in distributed database systems, where queries commonly extract data from different sites, so it is important to limit the amount of data transferred between sites. A semijoin is a way to reduce the cost of expensive joins between sites. A key issue in query optimization based on semijoin reduction is finding a good sequence of semijoins that reduces the relations referenced in a given query before the joins are performed. This paper proposes a new approach, based on Genetic Programming (GP), to improve query processing in distributed database systems. A longer version of this paper is available.
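
    To make the idea concrete, the sketch below evolves semijoin orderings with a plain genetic algorithm over permutations rather than the paper's full GP representation; the semijoin names, transmission costs, and selectivities are invented for illustration and do not come from the paper.

        import random

        # Hypothetical cost model: each candidate semijoin has a transmission
        # cost and a selectivity that shrinks later transfers (illustrative).
        SEMIJOINS = ["A<B", "B<C", "C<A", "A<C"]
        COST = {"A<B": 40, "B<C": 25, "C<A": 30, "A<C": 35}
        SELECTIVITY = {"A<B": 0.5, "B<C": 0.7, "C<A": 0.6, "A<C": 0.8}

        def total_cost(order):
            """Cost of a semijoin sequence: earlier semijoins shrink later ones."""
            shrink, cost = 1.0, 0.0
            for sj in order:
                cost += COST[sj] * shrink
                shrink *= SELECTIVITY[sj]
            return cost

        def crossover(p1, p2):
            """Order-preserving crossover for permutations."""
            cut = random.randrange(1, len(p1))
            head = p1[:cut]
            return head + [sj for sj in p2 if sj not in head]

        def mutate(order):
            i, j = random.sample(range(len(order)), 2)
            order[i], order[j] = order[j], order[i]

        def evolve(pop_size=30, generations=50):
            pop = [random.sample(SEMIJOINS, len(SEMIJOINS)) for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=total_cost)              # lower cost is fitter
                survivors = pop[: pop_size // 2]
                children = []
                while len(survivors) + len(children) < pop_size:
                    child = crossover(*random.sample(survivors, 2))
                    if random.random() < 0.2:
                        mutate(child)
                    children.append(child)
                pop = survivors + children
            return min(pop, key=total_cost)

        best = evolve()
        print(best, round(total_cost(best), 2))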

    Characterization and Analysis of a Nested Genetic Algorithm for Distributed Database Design

    Distributed database design is a difficult and complex task involving two interdependent problems: data allocation and operation allocation. First, data must be allocated to nodes in the network. Second, given such an allocation, data must be efficiently retrieved, processed, and possibly communicated to meet the retrieval and update requirements of the users. The problem is characterized by integer variables, a discontinuous and extremely complex cost function, and numerous constraints. A nested genetic algorithm naturally fits this formulation, with the outer algorithm addressing data allocation and the inner algorithm addressing operation allocation. We present and characterize such an algorithm according to its gene structure and control parameters, and experimentally analyze the effects of pool size and crossover operator on its performance. Index Terms - Genetic algorithms, performance modeling and analysis, experimental analysis of algorithms, distributed database design.
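
    A minimal sketch of the nested structure described above: the outer GA assigns fragments to sites, and each assignment is scored by running an inner GA that allocates operations. The problem sizes and the cost model below are toy assumptions, not the paper's formulation.

        import random

        SITES, FRAGMENTS, OPS = 3, 4, 5   # toy problem sizes (illustrative)

        def inner_cost(data_alloc, op_alloc):
            """Toy cost: an operation pays 1 unit per fragment stored off its site."""
            return sum(1 for op_site in op_alloc
                         for frag_site in data_alloc if frag_site != op_site)

        def inner_ga(data_alloc, pop=20, gens=30):
            """Inner GA: best operation allocation for a fixed data allocation."""
            P = [[random.randrange(SITES) for _ in range(OPS)] for _ in range(pop)]
            for _ in range(gens):
                P.sort(key=lambda ind: inner_cost(data_alloc, ind))
                P = P[:pop // 2]
                while len(P) < pop:
                    a, b = random.sample(P[:pop // 2], 2)
                    cut = random.randrange(1, OPS)
                    child = a[:cut] + b[cut:]
                    if random.random() < 0.1:
                        child[random.randrange(OPS)] = random.randrange(SITES)
                    P.append(child)
            return min(inner_cost(data_alloc, ind) for ind in P)

        def outer_ga(pop=10, gens=20):
            """Outer GA: each gene assigns one fragment to a site; fitness is
            the cost of the best operation allocation found by the inner GA."""
            P = [[random.randrange(SITES) for _ in range(FRAGMENTS)] for _ in range(pop)]
            for _ in range(gens):
                P.sort(key=inner_ga)
                P = P[:pop // 2]
                while len(P) < pop:
                    a, b = random.sample(P[:pop // 2], 2)
                    cut = random.randrange(1, FRAGMENTS)
                    child = a[:cut] + b[cut:]
                    if random.random() < 0.1:
                        child[random.randrange(FRAGMENTS)] = random.randrange(SITES)
                    P.append(child)
            return min(P, key=inner_ga)

        print(outer_ga())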

    A knowledge-based approach to multiple query processing

    The collective processing of multiple queries in a database system has recently received renewed attention, owing to its capability of improving the overall performance of a database system and its applicability to the design of knowledge-based expert systems and extensible database systems. A new multiple-query processing strategy is presented which utilizes semantic knowledge on data integrity and information on the predicate conditions of the access paths (plans) of queries. The processing of multiple queries is accomplished by exploiting subset relationships between intermediate results of query executions, which are inferred using both semantic and logical information. Given a set of fixed-order access plans, the A* algorithm is used to find the set of reformulated access plans that is optimal for a given collection of semantic knowledge. Peer reviewed. Full text: http://deepblue.lib.umich.edu/bitstream/2027.42/28071/1/0000514.pd
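
    The sketch below illustrates the search step with a generic A* over partial plan assignments: a reformulated (cheaper) plan becomes available once the intermediate result it subsumes has been produced. The query names, costs, and subset relationships are invented for illustration, not taken from the paper.

        import heapq, itertools

        QUERIES = ["q1", "q2", "q3"]
        BASE = {"q1": 10, "q2": 8, "q3": 7}
        # reformulated plan: (cheaper cost, intermediate result it reuses)
        REUSE = {"q2": (3, "q1"), "q3": (4, "q2")}

        def cheapest(q):
            return min(BASE[q], REUSE[q][0]) if q in REUSE else BASE[q]

        def astar():
            """A*: g = cost of committed plans, h = sum of optimistic
            (cheapest) costs of the remaining queries (admissible)."""
            tie = itertools.count()                  # heap tiebreaker
            h0 = sum(cheapest(q) for q in QUERIES)
            heap = [(h0, 0, next(tie), 0, frozenset(), [])]
            while heap:
                f, g, _, i, avail, plans = heapq.heappop(heap)
                if i == len(QUERIES):
                    return g, plans
                q = QUERIES[i]
                options = [(BASE[q], q + ":base")]
                if q in REUSE and REUSE[q][1] in avail:
                    options.append((REUSE[q][0], q + ":reuse"))
                for c, label in options:
                    g2 = g + c
                    h2 = sum(cheapest(x) for x in QUERIES[i + 1:])
                    heapq.heappush(heap, (g2 + h2, g2, next(tie), i + 1,
                                          avail | {q}, plans + [label]))

        print(astar())   # -> (17, ['q1:base', 'q2:reuse', 'q3:reuse'])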

    Distributed Query Plan Generation Using Multiobjective Genetic Algorithm

    A distributed query processing strategy, a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is to generate efficient distributed query plans that involve fewer sites for processing a query. In distributed relational databases, the number of possible query plans grows exponentially with the number of relations accessed by the query and the number of sites where these relations reside; computing optimal distributed query plans therefore becomes a complex problem. This distributed query plan generation (DQPG) problem has previously been addressed with a single-objective genetic algorithm, where the objective is to minimize the total query processing cost, comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, the DQPG problem is formulated and solved as a biobjective optimization problem, the two objectives being to minimize total LPC and to minimize total CC. These objectives are optimized simultaneously using the multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single-objective genetic algorithm shows that the former performs better and converges more quickly toward optimal solutions for the observed crossover and mutation probabilities.
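
    As an illustration of the biobjective selection at the heart of NSGA-II, the sketch below performs a naive non-dominated sort of candidate plans scored by (LPC, CC). The plan costs are invented, and a real NSGA-II run would add crowding-distance ranking and genetic operators on the plan encoding.

        def dominates(a, b):
            """a dominates b if a is no worse in both objectives and better in one."""
            return (all(x <= y for x, y in zip(a, b))
                    and any(x < y for x, y in zip(a, b)))

        def non_dominated_sort(scores):
            """Return plan indices grouped into Pareto fronts (front 0 is best)."""
            fronts, remaining = [], set(range(len(scores)))
            while remaining:
                front = {i for i in remaining
                         if not any(dominates(scores[j], scores[i])
                                    for j in remaining if j != i)}
                fronts.append(sorted(front))
                remaining -= front
            return fronts

        # (LPC, CC) for five hypothetical query plans
        plans = [(100, 60), (120, 40), (130, 50), (110, 70), (140, 45)]
        print(non_dominated_sort(plans))   # -> [[0, 1], [2, 3, 4]]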

    Dynamic Integration of Evolving Distributed Databases using Services

    This thesis investigates the integration of many separate existing heterogeneous and distributed databases which, due to organizational changes, must be merged and appear as one database, and presents a solution to some database evolution problems. It presents an Evolution Adaptive Service-Oriented Data Integration Architecture (EA-SODIA) to dynamically integrate heterogeneous and distributed source databases, aiming to minimize the maintenance cost caused by database evolution. An algorithm, named Relational Schema Mapping by Views (RSMV), is designed to integrate source databases that are exposed as services into a pre-designed global schema held in a data integrator service. Instead of producing hard-coded programs, views are built using relational algebra operations to eliminate the heterogeneities among the source databases. More importantly, the definitions of those views are represented and stored in the meta-database, with constraints to test their validity. Consequently, a method called Evolution Detection is able to identify in the meta-database the views affected by an evolution and modify them automatically. An evaluation based on a case study is presented. First, it shows that most types of heterogeneity defined in this thesis can be eliminated by RSMV, except semantic conflict. Second, it shows that little manual modification of the system is required as long as the evolutions follow the rules; human intervention is required for only three types of database evolution, in which some existing views are discarded. Third, the computational cost of the automatic modification shows a slow linear growth in the number of source databases. Other characteristics addressed include EA-SODIA's scalability, domain independence, autonomy of source databases, and the potential to involve other data sources (e.g. XML). Finally, a descriptive comparison with other data integration approaches shows that although other approaches may provide better query processing performance in some circumstances, the service-oriented architecture provides better autonomy, flexibility, and capability of evolution.
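
    The sketch below illustrates the general idea of representing integration views as stored data rather than hard-coded programs: a view is an expression tree of relational algebra operators that can be inspected, validated, and rewritten when a source evolves. All schema names and the operator set shown are assumptions for illustration, not RSMV itself.

        # Source "services" expose relations as lists of dicts (toy stand-ins).
        SOURCES = {
            "hr.staff":   [{"sid": 1, "name": "Ada"}, {"sid": 2, "name": "Bo"}],
            "lab.people": [{"pid": 9, "fullname": "Cy"}],
        }

        # A view definition is data, not code: an expression tree of relational
        # algebra operators that could be stored in a meta-database and
        # rewritten automatically when a source schema evolves.
        EMPLOYEE_VIEW = (
            "union",
            ("rename", {"sid": "id", "name": "emp_name"}, ("scan", "hr.staff")),
            ("rename", {"pid": "id", "fullname": "emp_name"}, ("scan", "lab.people")),
        )

        def evaluate(expr):
            op = expr[0]
            if op == "scan":
                return list(SOURCES[expr[1]])
            if op == "rename":
                mapping, rows = expr[1], evaluate(expr[2])
                return [{mapping.get(k, k): v for k, v in row.items()} for row in rows]
            if op == "union":
                return evaluate(expr[1]) + evaluate(expr[2])
            raise ValueError("unknown operator: " + op)

        # Renaming maps both source schemas onto the global schema
        # (id, emp_name), removing naming heterogeneity without hard coding.
        print(evaluate(EMPLOYEE_VIEW))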

    A Service Late Binding Enabled Solution for Data Integration from Autonomous and Evolving Databases

    Integrating data from autonomous, distributed and heterogeneous data sources to provide a unified view is a common demand for many businesses. Since the data sources may evolve frequently to satisfy their own independent business needs, solutions that use hard-coded queries to integrate participating databases may incur high maintenance costs when evolution occurs. Thus a new solution that can handle database evolution with lower maintenance effort is required. This thesis presents such a solution: Service Late binding Enabled Data Integration (SLEDI), set into a framework modeling the essential processes of the data integration activity. It integrates schematically heterogeneous relational databases with decreased maintenance costs for handling database evolution. An algorithm, named Information Provision Unit Describing (IPUD), is designed to describe each database as a set of Information Provision Units (IPUs). The IPUs are represented as Directed Acyclic Graph (DAG) structured data instead of hard-coded queries, and are further realized as data services; data integration is thus achieved through service invocations. Furthermore, a set of processes is defined to handle database evolution by automatically identifying and modifying the IPUs affected by the evolution. An extensive evaluation based on a case study is presented. The results show that the schematic heterogeneities defined in this thesis can be solved by IPUD, except the relation isomorphism discrepancy. Ten out of thirteen types of schematic database evolution can be handled automatically by the evolution handling processes, as long as the evolution is represented by the designed data model. The computational cost of automatic evolution handling shows a slow linear growth with the number of participating databases. Other characteristics addressed include SLEDI's scalability and its independence of application domain and database model. A descriptive comparison with other data integration approaches shows that although the Data as a Service approach may result in lower performance under some circumstances, it supports better flexibility for integrating data from autonomous and evolving data sources.
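
    A loose sketch of the IPU idea, under stated assumptions: an IPU is kept as a small DAG of operator nodes over a source relation, so an evolution such as a column rename can be handled by locating and rewriting the affected nodes rather than editing hand-written queries. The structure and all names below are invented, not SLEDI's actual data model.

        # One hypothetical IPU: a DAG of operator nodes stored as plain data.
        IPUS = {
            "orders_ipu": {
                "nodes": {
                    "src":  {"op": "scan", "relation": "sales.orders"},
                    "proj": {"op": "project", "columns": ["order_id", "total"]},
                },
                "edges": [("src", "proj")],   # DAG: scan -> project
            }
        }

        def handle_column_rename(relation, old, new):
            """Evolution handling: find IPUs that read `relation` and rewrite
            every node referencing the renamed column."""
            touched = []
            for name, ipu in IPUS.items():
                reads = any(n["op"] == "scan" and n["relation"] == relation
                            for n in ipu["nodes"].values())
                if not reads:
                    continue
                for n in ipu["nodes"].values():
                    cols = n.get("columns", [])
                    if old in cols:
                        cols[cols.index(old)] = new
                        touched.append(name)
            return touched

        print(handle_column_rename("sales.orders", "total", "total_amount"))
        print(IPUS["orders_ipu"]["nodes"]["proj"]["columns"])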

    Adaptive Database Systems Based On Query Feedback and Cached Results

    This dissertation explores the query optimization technique of using cached results and feedback to improve the performance of database systems. Cached results and experience obtained from running queries are used to save execution time for follow-up queries, adapt data and system parameters, and improve overall system performance. First, we develop a framework that integrates query optimization and cache management. The optimizer is capable of generating efficient query plans that use previous query results cached on disk, and it considers alternative methods to access and update the caches based on cost estimation. Different cache management strategies are also included in this framework for comparison, and an empirical performance study verifies the advantage and practicality of the framework. To help the optimizer select the best plan, we propose a novel approach that provides accurate but cost-effective selectivity estimation: the distribution of attribute values is regressed in real time, using actual query result sizes obtained as feedback. This method avoids the expensive off-line database access overhead required by conventional methods and adapts well to updates and query locality, as verified empirically. To execute a query plan more efficiently, a buffer pool is usually provided for caching data pages in memory to reduce disk accesses. We enhance buffer utilization by devising a buffer allocation scheme for recurring queries that uses page-fault feedback obtained from previous executions. The performance improvement of this scheme is shown by empirical examples and a systematic simulation.
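
    The selectivity-estimation idea lends itself to a small sketch: observed result fractions of past range queries are treated as samples of the attribute's cumulative distribution and fitted by least squares. The linear model and the feedback values below are illustrative assumptions; the dissertation's actual regression may differ.

        def fit_line(samples):
            """Ordinary least squares for y = a*x + b (pure Python)."""
            n = len(samples)
            sx = sum(x for x, _ in samples)
            sy = sum(y for _, y in samples)
            sxx = sum(x * x for x, _ in samples)
            sxy = sum(x * y for x, y in samples)
            a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
            b = (sy - a * sx) / n
            return a, b

        # Feedback from executed queries "attr <= c":
        # predicate constant -> observed selectivity (invented values).
        feedback = [(10, 0.08), (25, 0.27), (40, 0.41), (60, 0.63), (80, 0.78)]
        a, b = fit_line(feedback)

        def estimate_selectivity(c):
            """Estimated fraction of rows with attr <= c, clamped to [0, 1]."""
            return max(0.0, min(1.0, a * c + b))

        print(round(estimate_selectivity(50), 3))   # estimate for a new query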

    A state transition model for distributed query processing

    A state transition model for the optimization of query processing in a distributed database system is presented. The problem is parametrized by means of a state describing the amount of processing that has been performed at each site where the database is located. A state transition occurs each time a new join or semijoin is executed. Dynamic programming is used to recursively compute the costs of the states and the globally optimal solution, taking into account communication and local processing costs. The state transition model is general enough to account for the possibility of parallel processing among the various sites, as well as for redundancy in the database. The model also permits significant reductions of the necessary computations by taking advantage of simple additivity and site-uniformity properties of the cost model, and of clever strategies that improve on the basic dynamic programming algorithm.
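
    A toy rendering of the state-transition idea: a state records which relations have already been combined, a transition ships one operand and joins it at the other's site, and dynamic programming memoizes the cheapest completion of each state. The relation sizes and the cost model are invented for illustration and much simpler than the paper's.

        from functools import lru_cache

        SIZES = {"R": 100, "S": 40, "T": 60}   # relation sizes in tuples (toy)
        LOCAL_COST = 0.1                       # per-tuple local join cost (toy)

        def transition_cost(shipped_size, joined_size):
            # communication cost + local processing cost
            return shipped_size + LOCAL_COST * joined_size

        @lru_cache(maxsize=None)
        def best(state):
            """state: frozenset of frozensets, each inner set being relations
            already joined at some site. Returns the minimum cost to reduce
            the state to a single combined result."""
            if len(state) == 1:
                return 0.0
            best_cost = float("inf")
            parts = sorted(state, key=sorted)
            for i in range(len(parts)):
                for j in range(len(parts)):
                    if i == j:
                        continue
                    a, b = parts[i], parts[j]   # ship a to b's site, join there
                    size_a = sum(SIZES[r] for r in a)
                    size_ab = sum(SIZES[r] for r in a | b)  # crude join size
                    rest = (state - {a, b}) | {a | b}
                    cost = transition_cost(size_a, size_ab) + best(frozenset(rest))
                    best_cost = min(best_cost, cost)
            return best_cost

        start = frozenset(frozenset([r]) for r in SIZES)
        print(round(best(start), 1))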