72 research outputs found

    Ensuring the existence of a BCNF-decomposition that preserves functional dependencies in O(N²) time

    A simple condition is presented that ensures that a relation scheme R with a set F of functional dependencies has a Boyce-Codd normal form (BCNF) decomposition that has the lossless-join property and preserves functional dependencies.
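
    The BCNF condition itself is easy to state operationally: every nontrivial functional dependency X → Y must have a left-hand side X that is a superkey. The following minimal Python sketch (an illustration only, not the paper's O(N²) procedure; the relation scheme and dependencies are hypothetical) checks this via attribute closures.

        def closure(attrs, fds):
            """Attribute closure of attrs under fds (pairs of frozensets)."""
            result = set(attrs)
            changed = True
            while changed:
                changed = False
                for lhs, rhs in fds:
                    if lhs <= result and not rhs <= result:
                        result |= rhs
                        changed = True
            return result

        def is_bcnf(R, fds):
            """R is in BCNF iff for every nontrivial FD X -> Y, X is a superkey of R."""
            for lhs, rhs in fds:
                if rhs <= lhs:                      # trivial dependency, ignore
                    continue
                if closure(lhs, fds) != set(R):     # lhs is not a superkey
                    return False
            return True

        # Hypothetical example: R(city, street, zip) with zip -> city and {city, street} -> zip
        R = {"city", "street", "zip"}
        F = [(frozenset({"zip"}), frozenset({"city"})),
             (frozenset({"city", "street"}), frozenset({"zip"}))]
        print(is_bcnf(R, F))   # False: zip is not a superkey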

    Direct Product Decompositions of Lattices, Closures and Relation Schemes

    In this paper, we study direct product decompositions of closure operations and lattices of closed sets. We characterize direct product decompositions of lattices of closed sets in terms of closure operations, and find those decompositions of lattices which correspond to the decompositions of closures. If a closure on a finite set is represented by its implication base (i.e., a binary relation on the powerset), we construct a polynomial algorithm to find its direct product decompositions. The main characterization theorem is also applied to define direct product decompositions of relational database schemes and to find out what properties of relational databases and schemes are preserved under decompositions.
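
    As a concrete illustration of the objects involved (an assumption-laden sketch, not the polynomial algorithm from the paper), the Python fragment below evaluates the closure induced by an implication base and brute-forces one natural factorization test: the closure decomposes over a partition {S1, S2} when cl(X) = cl(X ∩ S1) ∪ cl(X ∩ S2) for every X. The base and partition in the example are hypothetical.

        from itertools import chain, combinations

        def cl(X, base):
            """Closure of X under implications (A, B): if A is contained in X, add B."""
            X = set(X)
            changed = True
            while changed:
                changed = False
                for A, B in base:
                    if A <= X and not B <= X:
                        X |= B
                        changed = True
            return frozenset(X)

        def subsets(S):
            S = list(S)
            return chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))

        def is_direct_product(S, base, S1, S2):
            """Brute-force check that the closure factors through the partition {S1, S2}."""
            return all(cl(X, base) == cl(set(X) & S1, base) | cl(set(X) & S2, base)
                       for X in subsets(S))

        # Hypothetical base on S = {a, b, c, d}: a -> b and c -> d do not interact,
        # so the closure decomposes over the partition {a, b} / {c, d}.
        S = {"a", "b", "c", "d"}
        base = [({"a"}, {"b"}), ({"c"}, {"d"})]
        print(is_direct_product(S, base, {"a", "b"}, {"c", "d"}))   # True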

    DCPP/POLYGAIT Inventory Control System

    This report discusses a proposed system to address inventory management issues experienced in the M&TE tool room at the PG&E Diablo Canyon Power Plant. Effective inventory tracking and management is an important characteristic of any organization handling physical assets, and without the proper system in place, companies may lose expensive items and waste time by not having equipment available when needed. The tool room is experiencing inventory shrinkage of M&TE equipment nearing $100,000 per year, largely because of an inefficient checkout system that fails to keep employees accountable for the tools they check out. Even more costly than the shrinkage of inventory is the expense of downtime incurred by not having a tool ready when needed. Two main issues with the current system were identified as the reasons for the shrinkage and lack of accountability: (1) when no tool clerk is on staff, mainly on nights and weekends, an unreliable paper-based checkout method is used, and (2) employees are not held responsible for checking their tools back in, resulting in tools being handed off outside of the tool room. To combat these problems, a self-checkout/check-in system was developed, eliminating the need for the paper system, requiring an employee login for returning tools, and reducing the total number of steps in the process by 36%. PG&E was also interested in using RFID (Radio Frequency Identification) technology to further increase accountability and improve the tracking of tools in and out of the tool room. A working proof-of-concept model was designed, built, and tested at Cal Poly's POLYGAIT Laboratory, along with recommendations for a potential implementation at PG&E. The results of the portal testing indicate that the best RFID tags for larger items include the Confidex Ironside Slim or Xerafy Cargo Trak tags, while the Confidex Captura G2XM should be used for cabled probes. In addition, a maximum of six tools should be carried through the portal at a time. An economic analysis for the proposed RFID system with revised checkout was performed, along with two other alternatives: an increase in staffing on nights and weekends with the revised checkout, and regular staffing with the revised checkout. All three alternatives were compared to the current state, which includes regular staffing without the revised checkout. The results of the economic analysis suggest that the RFID system paired with the revised checkout provides the lowest total cost solution, with a payback period of 0.046 years and a cumulative four-year return of $1,442,914.00. The second-lowest total cost solution, the revised checkout method alone without an RFID system or increase in staffing, provides the fastest payback period of all the alternatives, at 0.019 years, but provides a smaller return on investment than when paired with the RFID system.

    A logic programming e-learning tool for teaching database dependency theory

    In this paper, we describe an "intelligent" tool for helping to teach the principles of database design. The software we present uses PROLOG to implement a teaching tool with which students can explore the concepts of dependency theory and the normalization process. Students are able to construct their own learning environment and develop their understanding of the material at a pace controlled by the individual student.
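
    As an example of the sort of exercise such a tool might pose (a hypothetical Python sketch, not the PROLOG implementation described here), the fragment below enumerates the candidate keys of a relation scheme from its functional dependencies, a standard step in the normalization process.

        from itertools import combinations

        def closure(attrs, fds):
            """Attribute closure of attrs under fds (pairs of attribute tuples)."""
            closed = set(attrs)
            changed = True
            while changed:
                changed = False
                for lhs, rhs in fds:
                    if set(lhs) <= closed and not set(rhs) <= closed:
                        closed |= set(rhs)
                        changed = True
            return closed

        def candidate_keys(R, fds):
            """Minimal attribute sets whose closure is all of R."""
            keys = []
            for size in range(1, len(R) + 1):
                for combo in combinations(sorted(R), size):
                    if any(set(k) <= set(combo) for k in keys):
                        continue                     # a proper subset is already a key
                    if closure(combo, fds) == set(R):
                        keys.append(combo)
            return keys

        # Hypothetical exercise: R(student, course, grade) with {student, course} -> grade
        R = {"student", "course", "grade"}
        F = [(("student", "course"), ("grade",))]
        print(candidate_keys(R, F))    # [('course', 'student')]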

    An analysis of different data base structures and management systems on Clickstream data collected for advocacy based marketing strategies experiments for Intel and GM

    Thesis (M. Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (leaves 82-83). Marketing on the Internet is the next big field in marketing research. Clickstream data is a valuable resource for analyzing the effects of advocacy-based marketing strategies, but handling Clickstream data is a significant challenge. This thesis looks at the problems caused by Clickstream data from a database perspective and considers several theories to alleviate the difficulties. Applications of modern database optimization techniques are discussed, and the implementation of these techniques for the Intel and GM project is detailed. By Yufei Wang. M.Eng. and S.B.

    Acta Cybernetica: Volume 10, Number 3.


    Normalization Techniques For Improving The Performance Of Knowledge Graph Creation Pipelines

    With the rapid growth of data on the web, the demand for discovering information in data, and consequently for exploiting knowledge graphs, is rising faster than we might expect. Data integration systems can be of great help in meeting this demand, as they transform data from sources of varying kinds and volumes. To this end, a data integration system uses mapping rules, specified in a language like RML, to integrate data collected from various data sources into a knowledge graph. However, large data sources may suffer from various data quality issues, redundancy being one of them. In this context, the Semantic Web community contributes to Knowledge Engineering with techniques for creating knowledge graphs efficiently. The thesis reported in this document tackles the creation of knowledge graphs in the presence of data sources with redundant data, and a novel normalization theory is proposed to solve this problem. This theory covers not only the characteristics of the data sources but also the mapping rules used to integrate the data sources into a knowledge graph. Based on it, three normal forms are proposed, along with an algorithm for transforming mapping rules and data sources into these normal forms. The proposed approach's performance is evaluated in different testbeds composed of real-world and synthetic data. The observed results suggest that the proposed techniques can dramatically reduce the execution time of knowledge graph creation. Therefore, this thesis's normalization theory contributes to the repertoire of tools that facilitate the creation of knowledge graphs at scale.
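
    The underlying problem can be illustrated with a toy, purely hypothetical Python sketch (not RML and not the thesis's normal forms): a source with redundant rows makes a naive mapping rule emit the same triples repeatedly, so removing the redundancy up front shrinks the work of knowledge graph creation.

        # Toy data source with a redundant row (hypothetical example).
        rows = [
            {"id": "p1", "name": "Ada"},
            {"id": "p1", "name": "Ada"},       # redundant row
            {"id": "p2", "name": "Grace"},
        ]

        def apply_mapping(rows):
            """Toy mapping rule: each row becomes one (subject, predicate, object) triple."""
            return [(f"ex:{r['id']}", "foaf:name", r["name"]) for r in rows]

        # Naive creation processes every row, including the duplicate.
        naive = apply_mapping(rows)

        # Deduplicating the source first (a very rough stand-in for normalization)
        # avoids generating the same triple twice.
        unique_rows = [dict(t) for t in {tuple(sorted(r.items())) for r in rows}]
        deduplicated = apply_mapping(unique_rows)

        print(len(naive), len(set(naive)))     # 3 triples generated, only 2 distinct
        print(len(deduplicated))               # 2 triples after removing redundant rows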

    Combining the functional and the relational model
