    Vertical Fragmentation for Database Using FPClose Algorithm

    The vertical fragmentation technique is used to enhance the performance of a database system and to reduce the number of accesses to irrelevant instances by splitting a table or relation vertically into different fragments. The partitioning design can be derived using the FPClose algorithm, a data mining algorithm that extracts the frequent closed itemsets in a dataset. A new design approach is implemented to perform the fragmentation, and a benchmark with different minimum support levels is tested. The results obtained with the FPClose algorithm are compared with those of the Apriori algorithm
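
    To illustrate the idea behind this approach (not the paper's exact algorithm), the sketch below mines frequent closed attribute sets from a toy query workload by brute force and treats them as candidate vertical fragments; the queries, attribute names, and minimum support value are invented for the example, and FPClose itself would compute the same closed sets far more efficiently.

```python
from itertools import combinations

# Toy workload: each query is modelled as the set of attributes it accesses.
# The queries, attribute names, and min_support are illustrative only.
queries = [
    {"id", "name", "balance"},
    {"id", "name"},
    {"id", "balance", "branch"},
    {"id", "name", "balance"},
    {"branch", "manager"},
]
min_support = 2  # minimum number of queries that must use an itemset

attributes = sorted(set().union(*queries))

def support(itemset):
    """Number of queries whose attribute set contains the given itemset."""
    return sum(1 for q in queries if itemset <= q)

# Enumerate all frequent itemsets by brute force (FPClose does this
# much more efficiently on real data).
frequent = {}
for r in range(1, len(attributes) + 1):
    for combo in combinations(attributes, r):
        s = support(frozenset(combo))
        if s >= min_support:
            frequent[frozenset(combo)] = s

# Keep only the closed itemsets: those with no proper superset of equal support.
closed = [(iset, sup) for iset, sup in frequent.items()
          if not any(iset < other and sup == frequent[other] for other in frequent)]

# The closed itemsets, largest first, are candidate vertical fragments.
for iset, sup in sorted(closed, key=lambda x: (-len(x[0]), -x[1])):
    print(sorted(iset), "support =", sup)
```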

    Development of new data partitioning and allocation algorithms for query optimization of distributed data warehouse systems

    Distributed databases, and in particular distributed data warehousing, are becoming an increasingly important technology for information integration and data analysis. Data Warehouse (DW) systems are used by decision makers for performance measurement and decision support. However, although data warehousing and on-line analytical processing (OLAP) are essential elements of decision support, OLAP query response time is strongly affected by the volume of data that needs to be accessed from storage disks. Data partitioning is one of the physical design techniques that may be used to optimize query processing cost in DWs. It is a non-redundant optimization technique because it does not replicate data, in contrast to redundant techniques such as materialized views and indexes. The warehouse partitioning problem is concerned with determining the set of dimension tables to be partitioned and using them to generate the fact table fragments. In this work, an enhanced grouping algorithm that avoids the limitations of some existing vertical partitioning algorithms is proposed. Furthermore, a static partitioning algorithm that allows fragmentation at early stages of schema design is presented. The thesis also investigates the performance of the data warehouse after applying a combination of Genetic Algorithm (GA) and Simulated Annealing (SA) techniques to horizontally partition the data warehouse star schema, and then presents the experimentation and implementation results of the proposed algorithm. This research presents different approaches to optimizing data fragment allocation cost, using a greedy mathematical model and a combination of simulated annealing and genetic algorithms to determine the site-by-site allocation, leading to optimal solutions for fragment distribution. Throughout this thesis, the terms fragmentation and partitioning are used interchangeably
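
    As a rough sketch of the allocation side of this work, the code below applies plain simulated annealing (without the genetic operators the thesis combines it with) to assign fragments to sites under an invented communication-cost model; the fragment sizes, access frequencies, and transfer costs are made up for the example.

```python
import math
import random

random.seed(1)

# Toy instance: allocate fragments to sites to minimise shipping cost.
# Sizes, access frequencies, and unit transfer costs are illustrative only.
n_fragments, n_sites = 8, 3
size = [random.randint(10, 100) for _ in range(n_fragments)]            # MB
freq = [[random.randint(0, 20) for _ in range(n_fragments)]             # accesses per
        for _ in range(n_sites)]                                        # site and fragment
transfer = [[0 if i == j else random.uniform(0.5, 2.0)                  # cost per MB shipped
             for j in range(n_sites)] for i in range(n_sites)]

def cost(alloc):
    """Total cost of shipping each fragment from its home site to every accessing site."""
    total = 0.0
    for f, home in enumerate(alloc):
        for s in range(n_sites):
            total += freq[s][f] * size[f] * transfer[home][s]
    return total

# Simulated annealing: move one fragment to a random site, accept worse
# moves with a probability that decays as the temperature cools.
alloc = [random.randrange(n_sites) for _ in range(n_fragments)]
best, best_cost = alloc[:], cost(alloc)
temp = 1000.0
while temp > 1e-3:
    f = random.randrange(n_fragments)
    candidate = alloc[:]
    candidate[f] = random.randrange(n_sites)
    delta = cost(candidate) - cost(alloc)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        alloc = candidate
        if cost(alloc) < best_cost:
            best, best_cost = alloc[:], cost(alloc)
    temp *= 0.95

print("best allocation:", best, "cost:", round(best_cost, 1))
```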

    Efficient Database Distribution Using Local Search Algorithm

    A problem in a railway database is identified. The focus of the problem is to reduce the average response time for all read and write queries to the railway database. One way of doing this is to run more than one database server and distribute the database across these servers to improve performance. In this work we propose an efficient distribution of the database across these servers, considering the read and write request frequencies at all locations. The problem of distributing the database across different locations is mapped to the well-studied Uncapacitated Facility Location (UFL) problem. Various techniques such as the greedy approach, LP rounding, the primal-dual technique, and local search have been proposed to tackle this problem. Of those, we use the local search technique in this work; in particular, a polynomial version of the local search approximation algorithm is used to solve the railway database problem. The distributed database is implemented using the PostgreSQL database server, and the JBoss application server is used to manage the global transactions. On this architecture, the database is distributed using the locally optimal solution obtained by the local search algorithm, and it is compared with other solutions in terms of the average response time for read and write requests
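
    The following is a minimal sketch of the local-search idea for the Uncapacitated Facility Location problem, using invented opening and connection costs rather than the railway data, and simple open/close moves rather than the exact polynomial-time variant used in the thesis: start from an arbitrary set of open facilities and keep applying single-facility moves while they strictly reduce the total cost.

```python
# Toy uncapacitated facility location instance.
# Opening costs and client-facility connection costs are illustrative only.
open_cost = [40, 60, 50, 30]                        # cost to open each facility
conn = [                                            # conn[client][facility]
    [10, 30, 25, 40],
    [28, 12, 35, 20],
    [35, 22, 14, 30],
    [20, 40, 30, 10],
    [25, 18, 22, 28],
]
n_clients, n_facilities = len(conn), len(open_cost)

def total_cost(open_set):
    """Opening cost plus each client's cheapest connection to an open facility."""
    if not open_set:
        return float("inf")
    assign = sum(min(conn[c][f] for f in open_set) for c in range(n_clients))
    return sum(open_cost[f] for f in open_set) + assign

# Local search: try opening or closing a single facility; repeat while
# any such move strictly improves the total cost.
current = {0}                                       # arbitrary starting solution
improved = True
while improved:
    improved = False
    for f in range(n_facilities):
        candidate = current ^ {f}                   # toggle facility f
        if total_cost(candidate) < total_cost(current):
            current = candidate
            improved = True

print("open facilities:", sorted(current), "cost:", total_cost(current))
```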

    Potential applications of structured commodity financing techniques for banks in developing countries

    This paper discusses a number of innovative financial techniques that can be used by developing-country banks to open up new financing possibilities in the commodities sector, for industries servicing the commodity sector, and for financing on the basis of "commoditized" income streams. This includes techniques such as factoring and forfaiting, countertrade, warehouse receipt finance, prepayments, export receivables finance, Islamic finance, structured import finance, and securitization. A number of practical models for developing country banks are described.
    Keywords: agricultural finance, structured finance, repos, banks, securitization

    Architectural Principles for Database Systems on Storage-Class Memory

    Database systems have long been optimized to hide the higher latency of storage media, yielding complex persistence mechanisms. With the advent of large DRAM capacities, it became possible to keep a full copy of the data in DRAM. Systems that leverage this possibility, such as main-memory databases, keep two copies of the data in two different formats: one in main memory and the other one in storage. The two copies are kept synchronized using snapshotting and logging. This main-memory-centric architecture yields nearly two orders of magnitude faster analytical processing than traditional, disk-centric ones. The rise of Big Data emphasized the importance of such systems with an ever-increasing need for more main memory. However, DRAM is hitting its scalability limits: It is intrinsically hard to further increase its density. Storage-Class Memory (SCM) is a group of novel memory technologies that promise to alleviate DRAM’s scalability limits. They combine the non-volatility, density, and economic characteristics of storage media with the byte-addressability and a latency close to that of DRAM. Therefore, SCM can serve as persistent main memory, thereby bridging the gap between main memory and storage. In this dissertation, we explore the impact of SCM as persistent main memory on database systems. Assuming a hybrid SCM-DRAM hardware architecture, we propose a novel software architecture for database systems that places primary data in SCM and directly operates on it, eliminating the need for explicit IO. This architecture yields many benefits: First, it obviates the need to reload data from storage to main memory during recovery, as data is discovered and accessed directly in SCM. Second, it allows replacing the traditional logging infrastructure by fine-grained, cheap micro-logging at data-structure level. Third, secondary data can be stored in DRAM and reconstructed during recovery. Fourth, system runtime information can be stored in SCM to improve recovery time. Finally, the system may retain and continue in-flight transactions in case of system failures. However, SCM is no panacea as it raises unprecedented programming challenges. Given its byte-addressability and low latency, processors can access, read, modify, and persist data in SCM using load/store instructions at a CPU cache line granularity. The path from CPU registers to SCM is long and mostly volatile, including store buffers and CPU caches, leaving the programmer with little control over when data is persisted. Therefore, there is a need to enforce the order and durability of SCM writes using persistence primitives, such as cache line flushing instructions. This in turn creates new failure scenarios, such as missing or misplaced persistence primitives. We devise several building blocks to overcome these challenges. First, we identify the programming challenges of SCM and present a sound programming model that solves them. Then, we tackle memory management, as the first required building block to build a database system, by designing a highly scalable SCM allocator, named PAllocator, that fulfills the versatile needs of database systems. Thereafter, we propose the FPTree, a highly scalable hybrid SCM-DRAM persistent B+-Tree that bridges the gap between the performance of transient and persistent B+-Trees. Using these building blocks, we realize our envisioned database architecture in SOFORT, a hybrid SCM-DRAM columnar transactional engine. 
We propose an SCM-optimized MVCC scheme that eliminates write-ahead logging from the critical path of transactions. Since SCM-resident data is near-instantly available upon recovery, the new recovery bottleneck is rebuilding DRAM-based data. To alleviate this bottleneck, we propose a novel recovery technique that achieves nearly instant responsiveness of the database by accepting queries right after recovering SCM-based data, while rebuilding DRAM-based data in the background. Additionally, SCM brings new failure scenarios that existing testing tools cannot detect. Hence, we propose an online testing framework that is able to automatically simulate power failures and detect missing or misplaced persistence primitives. Finally, our proposed building blocks can serve to build more complex systems, paving the way for future database systems on SCM
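
    To make the new failure scenario concrete, here is a purely illustrative simulation (not the dissertation's testing framework, and not real SCM code): stores land in a volatile buffer and become durable only after an explicit flush, so a simulated power failure exposes a store whose persistence primitive was forgotten.

```python
class SimulatedSCM:
    """Toy model of SCM behind volatile CPU buffers: a store is durable only
    once it has been explicitly flushed (cf. cache line flushing instructions).
    Purely illustrative; not real persistence code."""

    def __init__(self):
        self.volatile = {}     # pending stores still sitting in caches/buffers
        self.persistent = {}   # what would survive a power failure

    def store(self, addr, value):
        self.volatile[addr] = value

    def flush(self, addr):
        # Persistence primitive: push one "cache line" out to SCM.
        if addr in self.volatile:
            self.persistent[addr] = self.volatile.pop(addr)

    def crash(self):
        # Power failure: everything not yet flushed is lost.
        self.volatile.clear()
        return dict(self.persistent)

scm = SimulatedSCM()
scm.store("record.value", 42)
scm.flush("record.value")        # correctly persisted
scm.store("record.valid", True)
# the flush for "record.valid" is missing -- a forgotten persistence primitive

survivors = scm.crash()
assert "record.value" in survivors
assert "record.valid" not in survivors   # the bug a testing tool should flag
print("state after simulated power failure:", survivors)
```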

    Efficient Partitioning and Allocation of Data for Workflow Compositions

    Our aim is to provide efficient partitioning and allocation of data for web service compositions. Web service compositions are represented as partial-order database transactions. We accommodate a variety of transaction types, such as read-only and write-oriented transactions, to support workloads in cloud environments. We introduce an approach that partitions and allocates small units of data, called micropartitions, to multiple database nodes. Each database node stores only the data needed to support a specific workload, and transactions are routed directly to the appropriate data nodes. Our approach guarantees serializability and efficient execution. In Phase 1, we cluster transactions based on their data requirements and associate each cluster with an abstract query definition. An abstract query represents the minimal data requirement that would satisfy all the queries that belong to a given cluster. A micropartition is generated by executing the abstract query on the original database. We show that our abstract query definition is complete and minimal. Intuitively, completeness means that all queries of the corresponding cluster can be correctly answered using the micropartition generated from the abstract query; minimality means that no smaller partition of the data can satisfy all of the queries in the cluster. We also aim to support efficient web service execution: our approach reduces the number of accesses to distributed data and limits the number of replica updates. Our empirical results show that the partitioning approach improves data access efficiency over standard partitioning of data. In Phase 2, we investigate the performance improvement achieved via parallel execution. Based on the data allocation obtained in Phase 1, we develop a scheduling approach that guarantees serializability while efficiently exploiting parallel execution of web services. We achieve conflict serializability by scheduling conflicting operations in a predefined order, based on the calculation of a minimal delay requirement. We use this delay to schedule services so as to preserve serializability without traditional locking mechanisms
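
    As a loose sketch of the Phase 1 idea under simplifying assumptions, the code below treats the abstract query of a cluster as the union of the columns each member selects or filters on, over the rows that satisfy at least one member's filter, and materialises the micropartition by running that abstract query on a toy table; the table, column names, and filters are invented for the example.

```python
# Toy table and a cluster of queries with overlapping data requirements.
# The table, column names, and filters are invented for illustration.
orders = [
    {"id": 1, "customer": "a", "status": "open",   "total": 120, "notes": "x"},
    {"id": 2, "customer": "b", "status": "closed", "total": 80,  "notes": "y"},
    {"id": 3, "customer": "a", "status": "open",   "total": 200, "notes": "z"},
]

# Each query in the cluster: the columns it returns and an equality filter.
cluster = [
    {"select": {"id", "total"},    "where": ("status", "open")},
    {"select": {"id", "customer"}, "where": ("status", "open")},
]

# "Abstract query" of the cluster: the minimal data that can answer every
# member -- the union of selected and filtered columns, restricted to rows
# that satisfy at least one member's filter.
needed_cols = set().union(*(q["select"] for q in cluster),
                          {q["where"][0] for q in cluster})

def needed(row):
    return any(row[col] == val for col, val in (q["where"] for q in cluster))

# Running the abstract query on the original table yields the micropartition.
micropartition = [{c: r[c] for c in needed_cols} for r in orders if needed(r)]

# Every query in the cluster can now be answered from the micropartition alone.
for q in cluster:
    col, val = q["where"]
    print([{c: r[c] for c in q["select"]}
           for r in micropartition if r[col] == val])
```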

    A Methodology for Vertically Partitioning in a Multi-Relation Database Environment

    Vertical partitioning, in which the attributes of a relation are assigned to partitions, is aimed at improving database performance. We extend previous research that is based on a single relation to a multi-relation database environment by including referential integrity constraints, an access-time-based heuristic, and a comprehensive cost model that considers most transaction types, including updates and joins. The algorithm was applied to a real-world insurance CLAIMS database. Simulation experiments were conducted, and the results show a performance improvement of 36% to 65% over the unpartitioned case. Applying our method to small databases resulted in partitioning schemes that are comparable to the optimal ones
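
    As a generic illustration of one standard building block in vertical partitioning (not necessarily the heuristic or cost model used in this paper), the sketch below computes an attribute affinity matrix from transaction access frequencies; attribute pairs with high affinity are candidates for placement in the same fragment. The transactions, attribute names, and frequencies are invented.

```python
# Toy transactions over one relation: the attributes each uses and how often
# it runs.  The names and frequencies are invented for the example.
transactions = [
    ({"policy_no", "claim_amount"},             25),
    ({"policy_no", "holder_name", "address"},   10),
    ({"claim_amount", "adjuster"},              15),
    ({"policy_no", "claim_amount", "adjuster"},  5),
]
attributes = sorted(set().union(*(attrs for attrs, _ in transactions)))

# Attribute affinity: total frequency of the transactions using both attributes.
affinity = {
    (a, b): sum(freq for attrs, freq in transactions if a in attrs and b in attrs)
    for a in attributes for b in attributes if a < b
}

# Pairs with high affinity are candidates for the same vertical fragment.
for (a, b), aff in sorted(affinity.items(), key=lambda x: -x[1]):
    print(f"{a:>12} - {b:<12} affinity = {aff}")
```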
