
    Growth of relational model: Interdependence and complementary to big data

    A database management system is a long-standing application of computer science that provides a platform for the creation, movement, and use of voluminous data. The area has witnessed a series of developments and technological advancements, from the conventional structured database to the recent buzzword, big data. This paper aims to provide a complete model of the relational database, which is still widely used because of its well-known ACID properties: atomicity, consistency, isolation, and durability. Specifically, the objective of this paper is to highlight the adoption of relational-model approaches by big data techniques. To address the reasons for this incorporation, the paper qualitatively studies the advancements made over time to the relational data model. First, variations in the data storage layout are illustrated based on the needs of the application. Second, quick data retrieval techniques such as indexing, query processing, and concurrency control methods are examined. The paper provides vital insights for appraising the efficiency of the structured database in an unstructured environment, particularly when both consistency and scalability become an issue in the working of a hybrid transactional and analytical database management system.
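    The ACID properties the abstract names can be seen concretely in any transactional RDBMS. A minimal sketch of atomicity, using Python's built-in `sqlite3` as a stand-in engine (the table and values are illustrative):

```python
import sqlite3

# In-memory database; a CHECK constraint makes the rollback observable.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY,"
             " balance INTEGER CHECK (balance >= 0))")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

# Atomicity: both updates of a transfer succeed together or not at all.
try:
    with conn:  # commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 200 WHERE name = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 200 WHERE name = 'bob'")
except sqlite3.IntegrityError:
    pass  # the CHECK constraint fired; the partial update was rolled back

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # both rows unchanged: {'alice': 100, 'bob': 50}
```

    The same all-or-nothing guarantee is what big data systems partially relax when they trade consistency for scalability, which is the tension the paper examines.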

    Handling imperfect information in criterion evaluation, aggregation and indexing


    Performance Tuning of an Oracle 11g Database Through Initial Parameters, Database Structure, and SQL Tuning: A Study of the SISFORBUN ERP at Dana Pensiun Perkebunan (DAPENBUN)

    Dana Pensiun Perkebunan (DAPENBUN) administers pension benefits for employees of PTPN companies across Indonesia and their affiliated institutions, with 284,934 members as of 31 December 2021. The SISFORBUN application is used to process pension benefits for all members. This web-based application runs on an Oracle 11g database and has been in use since 2013; over time the data has grown, with the largest table holding 26,696,667 records, so data-processing performance has steadily declined. This problem motivated research into improving the application's performance. The study applies SQL tuning, object structuring, and initial database parameter adjustment. After testing the pension benefit payment process on management report number 18 (LM18), the authors conclude that, after optimization, the time required to process pension benefit payments (LM18) is shorter than before optimization.
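    One of the tuning levers the study uses, indexing a large table on its filter column, can be sketched with Python's `sqlite3` in place of Oracle 11g (the table, column names, and row counts are illustrative, not the SISFORBUN schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pensions (member_id INTEGER, amount INTEGER)")
conn.executemany("INSERT INTO pensions VALUES (?, ?)",
                 ((i % 1000, i) for i in range(10000)))

query = "SELECT SUM(amount) FROM pensions WHERE member_id = ?"

# Without an index the planner must scan every row of the table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchone()[-1]

# An index on the filter column turns the scan into an index lookup.
conn.execute("CREATE INDEX idx_member ON pensions(member_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchone()[-1]

print(plan_before)  # e.g. 'SCAN pensions'
print(plan_after)   # e.g. 'SEARCH pensions USING INDEX idx_member (member_id=?)'
```

    On a table of tens of millions of rows, as in the study, the difference between a full scan and an index search dominates query latency.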

    Efficient Maximum A-Posteriori Inference in Markov Logic and Application in Description Logics

    Maximum a-posteriori (MAP) query in statistical relational models computes the most probable world given evidence and further knowledge about the domain. It is arguably one of the most important types of computational problems, since it is also used as a subroutine in weight-learning algorithms. In this thesis, we discuss an improved inference algorithm and an application for MAP queries. We focus on Markov logic (ML) as the statistical relational formalism. Markov logic combines Markov networks with first-order logic by attaching weights to first-order formulas. For inference, we improve existing work that translates MAP queries into integer linear programs (ILPs). The motivation is that existing ILP solvers are very stable and fast and are able to precisely estimate the quality of an intermediate solution. In our work, we focus on improving the translation process so that the resulting ILPs have fewer variables and fewer constraints. Our main contribution is the Cutting Plane Aggregation (CPA) approach, which leverages symmetries in ML networks and parallelizes MAP inference. Additionally, we integrate the cutting plane inference algorithm (Riedel 2008), which significantly reduces the number of groundings by solving multiple smaller ILPs instead of one large ILP. We present the new Markov logic engine RockIt, which outperforms state-of-the-art engines on standard Markov logic benchmarks. Afterwards, we apply the MAP query to description logics. Description logics (DL) are knowledge representation formalisms whose expressivity is higher than propositional logic but lower than first-order logic. The most popular DLs have been standardized in the ontology language OWL and are an elementary component of the Semantic Web. We combine Markov logic, which essentially follows the semantics of a log-linear model, with description logics to obtain log-linear description logics, in which weights can be attached to any description logic axiom. Furthermore, we introduce a new query type which computes the most probable 'coherent' world. Possible applications of log-linear description logics lie mainly in the areas of ontology learning and data integration. With our novel log-linear description logic reasoner ELog, we experimentally show that more expressivity increases quality and that the solutions of optimal solving strategies have higher quality than those of approximate solving strategies.
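    The MAP problem the thesis optimizes can be stated very compactly: among all truth assignments consistent with the evidence, pick the one maximizing the total weight of satisfied ground formulas. A toy sketch by brute-force enumeration (not the ILP translation the thesis actually uses, and with purely illustrative atoms and weights):

```python
from itertools import product

# Tiny ground Markov logic network over three Boolean atoms.
# Each entry: (formula as a predicate over the world, weight).
atoms = ["smokes_a", "smokes_b", "friends_ab"]
weighted_formulas = [
    # friends tend to have the same smoking habit
    (lambda w: not w["friends_ab"] or w["smokes_a"] == w["smokes_b"], 2.0),
    (lambda w: not w["smokes_a"], 1.5),   # a is probably a non-smoker
    (lambda w: w["friends_ab"], 1.0),     # a and b are probably friends
]

def map_world(evidence):
    """Exhaustive MAP: maximize total weight of satisfied formulas given evidence."""
    best, best_score = None, float("-inf")
    for values in product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, values))
        if any(world[a] != v for a, v in evidence.items()):
            continue  # world contradicts the evidence
        score = sum(wt for f, wt in weighted_formulas if f(world))
        if score > best_score:
            best, best_score = world, score
    return best, best_score

world, score = map_world({"smokes_b": True})
print(world, score)  # {'smokes_a': False, 'smokes_b': True, 'friends_ab': False} 3.5
```

    Enumeration is exponential in the number of atoms, which is exactly why the thesis translates the problem to ILP and prunes groundings with cutting planes instead.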

    An analytic infrastructure for harvesting big data to enhance supply chain performance

    Big data has already received a tremendous amount of attention from managers in every industry, policy and decision makers in governments, and researchers in many different areas. However, current big data analytics have conspicuous limitations, especially when dealing with information silos. In this paper, we synthesise existing research on big data analytics and propose an integrated infrastructure for breaking down information silos in order to enhance supply chain performance. The analytic infrastructure effectively leverages rich big data sources (i.e. databases, social media, mobile and sensor data) and quantifies the related information using various big data analytics. The information generated can be used to identify a required competence set (which refers to a collection of skills and knowledge used for specific problem solving) and to provide roadmaps to firms and managers in generating actionable supply chain strategies, facilitating collaboration between departments, and generating fact-based operational decisions. We showcase the usefulness of the analytic infrastructure by conducting a case study in a world-leading company that produces sports equipment. The results indicate that it enabled managers: (a) to integrate information silos in big data analytics to serve as inputs for new product ideas; (b) to capture and interrelate different competence sets to provide an integrated perspective of the firm's operations capabilities; and (c) to generate a visual decision path that facilitated decision making regarding how to expand competence sets to support new product development.

    Evaluation of indexing strategies for possibilistic queries based on indexing techniques available in traditional RDBMS

    A common way to implement a fuzzy database is on top of a classical Relational Database Management System (RDBMS). Given that almost all RDBMS provide indexing mechanisms to enhance classical query processing performance, finding ways to use these mechanisms to enhance the performance of flexible query processing is of enormous interest. This work proposes and evaluates a set of indexing strategies, implemented exclusively on top of classical RDBMS indexing structures, designed to improve flexible query processing performance, focusing on the case of possibilistic queries. Results show the best indexing strategies for different data and query scenarios, offering effective ways to implement fuzzy data indexes on top of a classical RDBMS.
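    One natural strategy of this kind is to materialize each tuple's membership degree as an ordinary numeric column, so a standard B-tree index can answer threshold (alpha-cut) queries. A minimal sketch with Python's `sqlite3`; the schema, membership function, and data are illustrative, not taken from the paper:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The degree of the fuzzy predicate 'high salary' is stored per tuple.
conn.execute("CREATE TABLE employees (name TEXT, salary INTEGER, mu_high_salary REAL)")

def mu_high(salary):
    # Illustrative piecewise-linear membership function for 'high salary'.
    if salary <= 3000:
        return 0.0
    if salary >= 6000:
        return 1.0
    return (salary - 3000) / 3000.0

rows = [("ann", 2500), ("bob", 4500), ("eve", 7000)]
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                 [(n, s, mu_high(s)) for n, s in rows])
# A classical B-tree index over the degree column serves alpha-cut queries.
conn.execute("CREATE INDEX idx_mu ON employees(mu_high_salary)")

# 'Employees with a high salary to degree >= 0.5' becomes a plain range scan.
result = conn.execute(
    "SELECT name, mu_high_salary FROM employees WHERE mu_high_salary >= 0.5 "
    "ORDER BY mu_high_salary DESC").fetchall()
print(result)  # [('eve', 1.0), ('bob', 0.5)]
```

    Precomputing degrees trades storage and update cost for query speed; the paper's contribution is evaluating which such strategies win under which data and query scenarios.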

    The Use of Relation Valued Attributes in Support of Fuzzy Data

    In his paper introducing fuzzy sets, L.A. Zadeh describes the difficulty of assigning some real-world objects to a particular class when the notion of class membership is ambiguous. If exact classification is not obvious, most people approximate using intuition and may reach agreement by placing an object in more than one class. Numbers, or 'degrees of membership', within these classes provide an approximation that supports this intuitive process. The result is a 'fuzzy set': a collection of any number of ordered pairs representing a class together with the degree of membership in that class, giving a formal representation that can be used to model this process. Although the fuzzy approach to reasoning and classification makes sense, it does not comply with two of the basic principles of classical logic: the laws of contradiction and excluded middle. While these laws play a significant role in logic, it is precisely their violation that gives fuzzy logic its useful characteristics. The problem with this representation within a database system, however, is that the class and its degree of membership are represented by two separate but indivisible attributes, and the representation may contain any number of such pairs. While the data for class and membership are maintained in individual attributes, neither attribute may exist without the other without sacrificing meaning, and maintaining a variable number of such pairs within the representation is problematic. C. J. Date suggested the relation-valued attribute (RVA), which can not only encapsulate the attributes associated with the fuzzy set and impose constraints on their use, but also provide a relation that may contain any number of such pairs. The goal of this dissertation is to establish a context in which the relational database model can be extended through the implementation of an RVA to support fuzzy data on an actual system. This goal represents an opportunity to study, through application and observation, the use of fuzzy sets to support imprecise and uncertain data using database queries that appropriately adhere to the relational model. The intent is to create a pathway that may extend the support of database applications that need fuzzy logic and/or fuzzy data.
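    Since mainstream SQL engines lack relation-valued attributes, the variable-size set of (class, degree) pairs described above is commonly emulated with a child table whose key and check constraints play the role of the RVA's constraints. A minimal sketch with Python's `sqlite3` (schema and data are illustrative, not from the dissertation):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The child table emulates an RVA: each person owns a nested relation of
# (class, degree) pairs, i.e. a fuzzy set over height classes.
conn.executescript("""
CREATE TABLE person (person_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE height_class (
    person_id INTEGER REFERENCES person(person_id),
    class TEXT,
    degree REAL CHECK (degree > 0.0 AND degree <= 1.0),
    PRIMARY KEY (person_id, class)  -- each class at most once per fuzzy set
);
""")
conn.execute("INSERT INTO person VALUES (1, 'ann')")
# Ann is fairly 'tall' and somewhat 'medium': a fuzzy set with two pairs.
conn.executemany("INSERT INTO height_class VALUES (?, ?, ?)",
                 [(1, "tall", 0.7), (1, "medium", 0.4)])

# Reconstruct the fuzzy set -- the value the RVA would hold -- for one tuple.
fuzzy_set = dict(conn.execute(
    "SELECT class, degree FROM height_class WHERE person_id = 1"))
print(fuzzy_set)  # {'tall': 0.7, 'medium': 0.4}
```

    The constraints keep class and degree indivisible and bound the degree to (0, 1], which is exactly the encapsulation the dissertation seeks from a true RVA.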