
    The exploration of a category theory-based virtual geometrical product specification system for design and manufacturing

    To ensure product quality and facilitate global outsourcing, almost all so-called “world-class” manufacturing companies now apply various tools and methods to maintain the consistency of a product’s characteristics throughout its manufacturing life cycle. Among these, to ensure the consistency of geometric characteristics, a tolerancing language − the Geometrical Product Specification (GPS) − has been widely adopted to precisely transform customers’ functional requirements into tolerance notes on the technical drawings of manufactured workpieces. Although commonly acknowledged by industrial users as one of the most successful efforts at integrating existing manufacturing life-cycle standards, current GPS implementations and software packages suffer from several drawbacks in practical use, arguably the most significant being the difficulty of inferring the data for the “best” solutions. The problem stems from the underlying data structures and knowledge-based system design, which indicates the need for a “new” software system to facilitate GPS applications. This thesis introduces an innovative knowledge-based system − the VirtualGPS − that provides an integrated GPS knowledge platform based on a stable and efficient database structure with facilities for generating and accessing knowledge. The system focuses on solving intrinsic product design and production problems by acting as a virtual domain expert, translating GPS standards and rules into computerized expert advice and warnings. Furthermore, the system can be used as a training tool that helps young and new engineers absorb the large body of GPS standards relatively quickly. The thesis starts with a detailed discussion of the proposed categorical modelling mechanism, which is devised on the basis of category theory. It provides a unified mechanism for knowledge acquisition and representation, knowledge-based system design, and database schema modelling. As a core part of assessing this knowledge-based system, the implementation of the categorical Database Management System (DBMS) is also presented. The focus then moves on to the design and implementation of the proposed VirtualGPS system. The tests and evaluations of the system are presented in Chapter 6, and the thesis summarizes its contributions to knowledge in Chapter 7. After a thorough review of the project, the conclusion is that the entire VirtualGPS system was designed and implemented to conform to category theory and object-oriented programming rules. Initial tests and performance analyses show that the system facilitates geometric product manufacturing operations and benefits manufacturers and engineers alike, from functional design through manufacturing and verification.
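    To make the categorical modelling idea concrete, the following is a minimal sketch of how schema entities could be encoded as category objects and typed mappings as composable morphisms. All names (Feature, Tolerance, ExpertAdvice) are hypothetical illustrations, not structures from the VirtualGPS code base.

```python
# Minimal sketch of a categorical schema model: entities as objects,
# typed mappings as morphisms with associative composition.
# All names here are illustrative, not taken from the VirtualGPS system.
from dataclasses import dataclass

@dataclass(frozen=True)
class Obj:
    name: str  # e.g. a GPS-like entity such as "Feature" or "Tolerance"

@dataclass(frozen=True)
class Morphism:
    name: str
    source: Obj
    target: Obj

    def compose(self, other: "Morphism") -> "Morphism":
        """Return self ∘ other (apply `other` first, then `self`)."""
        if other.target != self.source:
            raise ValueError("morphisms are not composable")
        return Morphism(f"{self.name}∘{other.name}", other.source, self.target)

# A toy fragment of a GPS-like schema viewed as a category:
feature   = Obj("Feature")
tolerance = Obj("Tolerance")
advice    = Obj("ExpertAdvice")

has_tolerance = Morphism("has_tolerance", feature, tolerance)
check_rule    = Morphism("check_rule", tolerance, advice)

# Composing morphisms chains knowledge lookups: Feature -> ExpertAdvice.
feature_advice = check_rule.compose(has_tolerance)
print(feature_advice.name, ":", feature_advice.source.name, "->", feature_advice.target.name)
```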

    Migrating relational databases into object-based and XML databases

    Rapid changes in information technology, the emergence of object-based and WWW applications, and organisations’ interest in securing benefits from new technologies have made information systems re-engineering in general, and database migration in particular, an active research area. To improve the functionality and performance of existing systems, the re-engineering process requires identifying and understanding all of their components, and an underlying database is one of the most important components of an information system. A considerable body of data is stored in relational databases (RDBs), yet RDBs cannot support the complex structures and user-defined data types provided by more recent databases such as object-based and XML databases. Rather than throwing away the large amount of data stored in RDBs, it is more appropriate to enrich and convert such data for use by new systems. Most research on migrating RDBs into object-based/XML databases has concentrated on schema translation and on accessing and publishing RDB data using newer technology, while little attention has been paid to converting the data itself or to preserving data semantics, e.g., inheritance and integrity constraints. In addition, existing work does not appear to provide a solution for more than one target database; research on the migration of RDBs is thus not fully developed. We propose a solution that offers automatic migration of an RDB as a source into these recent database technologies as targets, based on available standards such as ODMG 3.0, SQL4 and XML Schema. A canonical data model (CDM) is proposed to bridge the semantic gap between an RDB and the target databases. The CDM preserves and enhances the metadata of the existing RDB to fit the essential characteristics of the target databases; the adoption of standards is essential for increased portability, flexibility and constraint preservation. This thesis contributes a solution for migrating RDBs into object-based and XML databases. The solution takes an existing RDB as input, enriches its metadata representation with the required explicit semantics, and constructs an enhanced relational schema representation (RSR). From the RSR, a CDM is generated and enriched with the RDB’s constraints and data semantics that may not have been explicitly expressed in the RDB metadata. The resulting CDM facilitates both schema translation and data conversion. We design sets of rules for translating the CDM into each of the three target schemas, and provide algorithms for converting RDB data into the target formats based on the CDM. A prototype of the solution has been implemented that generates the three target databases, and an experimental study has been conducted to evaluate it. The results show that the target schemas produced by the prototype are comparable to those generated by existing manual mapping techniques. We have also shown that the source and target databases are equivalent, and demonstrated that the solution is conceptually and practically feasible, efficient and correct.
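    The pipeline described above (relational metadata → CDM → target schema) can be sketched roughly as follows. The class and mapping names are illustrative assumptions; the thesis’s actual RSR/CDM structures are considerably richer (inheritance, integrity constraints, relationships).

```python
# Hedged sketch of the migration pipeline: relational metadata
# -> canonical data model (CDM) -> one of the target schemas (XML Schema).
# Names and the type-mapping table are illustrative, not from the thesis.
from dataclasses import dataclass, field

@dataclass
class Column:
    name: str
    sql_type: str
    primary_key: bool = False

@dataclass
class CdmClass:                  # one canonical class per source table
    name: str
    attributes: list[Column] = field(default_factory=list)

SQL_TO_XSD = {"INTEGER": "xs:integer", "VARCHAR": "xs:string"}

def table_to_cdm(table: str, columns: list[Column]) -> CdmClass:
    """Enrich a relational table's metadata into a canonical class."""
    return CdmClass(name=table.capitalize(), attributes=columns)

def cdm_to_xml_schema(cls: CdmClass) -> str:
    """Translate a CDM class into an XML Schema element declaration."""
    rows = "\n".join(
        f'      <xs:element name="{c.name}" type="{SQL_TO_XSD[c.sql_type]}"/>'
        for c in cls.attributes)
    return (f'<xs:element name="{cls.name}">\n  <xs:complexType>\n'
            f'    <xs:sequence>\n{rows}\n    </xs:sequence>\n'
            f'  </xs:complexType>\n</xs:element>')

emp = table_to_cdm("employee", [Column("id", "INTEGER", True), Column("name", "VARCHAR")])
print(cdm_to_xml_schema(emp))
```

    The same CdmClass would be fed to two further rule sets (ODMG 3.0 and SQL4 object-relational output) in the multi-target setting the thesis describes; only the XML branch is sketched here.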

    Object join hypergraphs: an approach to optimizing nested object queries

    In object-oriented databases, nested object queries are used quite frequently. Nested structures appear in a query’s condition expression in two forms: nested subqueries, or path expressions containing implicit joins expressed as nested predicates in the where clause. For nested queries, cost analysis of the nested algebraic expression shows that direct evaluation is inefficient. The method we present in this paper therefore flattens the subqueries of nested queries using an object join hypergraph, which improves the efficiency of query processing over object-oriented databases.
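    The following toy example illustrates the flattening idea in general terms (it is not the paper’s hypergraph algorithm): a subquery nested in the predicate is rewritten into an explicit join over the shared attribute, the kind of connection a join hypergraph’s hyperedges capture between query blocks.

```python
# Illustrative sketch: a nested object query with a subquery in the
# predicate, then its flattened join form. In the paper the rewrite is
# driven by an object join hypergraph connecting the query blocks.

depts = [{"id": 1, "city": "Hue"}, {"id": 2, "city": "Hanoi"}]
staff = [{"name": "An", "dept": 1}, {"name": "Binh", "dept": 2}]

# Nested form: the inner query is re-evaluated for every outer object.
nested = [e["name"] for e in staff
          if e["dept"] in [d["id"] for d in depts if d["city"] == "Hue"]]

# Flattened form: one explicit join over the shared attribute, so an
# optimizer can choose a join order instead of looping over subqueries.
hue_ids = {d["id"] for d in depts if d["city"] == "Hue"}  # evaluated once
flat    = [e["name"] for e in staff if e["dept"] in hue_ids]

assert nested == flat == ["An"]
```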

    Xcerpt: A Rule-Based Query and Transformation Language for the Web

    This thesis investigates querying the Web and the Semantic Web. It proposes a new rule-based query language called Xcerpt. Xcerpt differs from other query languages in that it uses patterns instead of paths for the selection of data, and in that it supports both rule chaining and recursion. Rule chaining serves for structuring large queries, for designing complex query programs (e.g. involving queries to the Semantic Web), and for modelling inference rules. Query patterns may contain special constructs such as partial, optional, or negated subqueries, which accommodate the particularly flexible structure of data on the Web. Furthermore, this thesis introduces the syntax of the language Xcerpt, illustrated on a large collection of use cases from both the conventional Web and the Semantic Web. In addition, a declarative semantics in the form of a Tarski-style model theory is described, and an algorithm is proposed that performs a backward-chaining evaluation of Xcerpt programs. This algorithm has also been (partly) implemented in a prototypical runtime system. A salient aspect of this algorithm is the specification of a non-standard unification algorithm, called simulation unification, that supports the new query constructs described above. This unification is symmetric in the sense that variables in both terms can be bound; in contrast to standard unification, however, it is asymmetric in the sense that it determines that one term is a subterm of the other.
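    The asymmetric matching idea can be sketched as follows. This is a rough toy version under simplifying assumptions: it binds variables one way only and ignores most Xcerpt features (ordering, optional and negated subterms); term and variable encodings are invented for the example.

```python
# Rough sketch of the asymmetric idea behind simulation unification:
# a query pattern (possibly containing variables) matches a data term if
# each pattern child can be matched against *some* child of the data
# term (a partial match). Terms are (label, children); "?X" is a variable.

def simulate(pattern, data, bindings=None):
    """Return variable bindings if `pattern` simulates into `data`, else None."""
    bindings = dict(bindings or {})
    if isinstance(pattern, str) and pattern.startswith("?"):   # variable
        if pattern in bindings and bindings[pattern] != data:
            return None                    # conflicting earlier binding
        bindings[pattern] = data
        return bindings
    p_label, p_kids = pattern
    d_label, d_kids = data
    if p_label != d_label:
        return None
    for p_kid in p_kids:                   # each pattern child must match
        for d_kid in d_kids:               # some data child
            result = simulate(p_kid, d_kid, bindings)
            if result is not None:
                bindings = result
                break
        else:
            return None
    return bindings

book  = ("book", [("title", [("t", [])]), ("author", [("a", [])])])
query = ("book", [("author", ["?X"])])    # partial pattern with a variable
print(simulate(query, book))              # {'?X': ('a', [])}
```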

    An Algebraic Approach to XQuery Optimization

    As more data is stored in XML and more applications need to process it, XML query optimization becomes performance-critical. While optimization techniques for relational databases have been developed over the last thirty years, the optimization of XML queries poses new challenges. Query optimizers for XQuery, the standard query language for XML data, need to consider both document order and sequence order. Nevertheless, algebraic optimization has proved powerful in query optimizers for relational and object-oriented databases; this dissertation therefore presents an algebraic approach to XQuery optimization. An algebra over sequences is presented that allows for a simple translation of XQuery into the algebra. The formal definitions of the operators allow us to reason formally about algebraic optimizations, and this thesis leverages the power of this formalism when unnesting nested XQuery expressions. In almost all cases, unnesting nested XQuery queries reduces query execution times from hours to seconds or milliseconds. Moreover, this dissertation presents three basic algebraic patterns of nested queries, and for every basic pattern a decision tree is developed to select the most effective unnesting equivalence for a given query. Query unnesting extends the search space that can be considered during cost-based optimization of XQuery; as a result, substantially more efficient query execution plans may be found. This thesis presents two further important cases where the number of plan alternatives leads to substantially shorter query execution times: join ordering and the reordering of location steps in path expressions. Our algebraic framework detects cases where document order or sequence order is destroyed; however, state-of-the-art order-optimization techniques in cost-based query optimizers have efficient mechanisms to repair order in these cases. The results obtained for query unnesting and cost-based optimization of XQuery underline the need for an algebraic approach to XQuery optimization for efficient XML query processing. Moreover, they are applicable to optimization in relational databases where order semantics are considered.
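    The effect of unnesting on execution cost can be illustrated with a deliberately simple example (not one of the dissertation’s formal equivalences): a nested query that re-scans the inner sequence for every outer item, versus an unnested plan that builds a grouping once. The data and names are invented.

```python
# Illustrative sketch of the unnesting idea: the cost drops from
# O(|orders| * |items|) for the nested plan to O(|orders| + |items|)
# for the unnested plan, mirroring the hours-to-seconds observation.
from collections import defaultdict

orders = [{"id": 1}, {"id": 2}]
items  = [{"order": 1, "price": 10}, {"order": 1, "price": 5}, {"order": 2, "price": 7}]

# Nested form: the inner "subquery" is evaluated once per outer tuple.
nested = [(o["id"], sum(i["price"] for i in items if i["order"] == o["id"]))
          for o in orders]

# Unnested form: one pass to group, one pass to join with the outer side.
totals = defaultdict(int)
for i in items:
    totals[i["order"]] += i["price"]
unnested = [(o["id"], totals[o["id"]]) for o in orders]

assert nested == unnested == [(1, 15), (2, 7)]
```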

    Information retrieval in XML documents: taking links into account for the selection of relevant elements

    Our work lies in the field of information retrieval (IR), more specifically information retrieval in semi-structured documents of the XML type. Exploiting the available XML documents effectively requires taking their structural dimension into account, and this dimension has led to new challenges in IR. Unlike classical IR approaches, which focus on retrieving unstructured content, XML IR combines both textual and structural information to carry out various retrieval tasks. Several approaches exploiting these types of evidence have been proposed; they are mainly based on classical IR models adapted to XML documents. The XML structure has been used to provide focused access to documents, returning document components (for example sections or paragraphs) instead of whole documents in response to a user query. In traditional IR, the similarity measure is generally based on textual information; it ranks documents by their degree of relevance using measures such as term similarity or term probability. However, other sources of evidence can be considered when searching for relevant information in documents. For example, hyperlinks have been widely exploited in Web IR. Despite their popularity in the Web context, few approaches exploiting this source of evidence have been proposed in the context of XML IR. The aim of our work is to propose approaches for using links as a source of evidence in XML information retrieval. This thesis addresses the following research questions: 1. Can links be considered a source of evidence in the context of XML IR? 2. Does the use of certain link-analysis algorithms in XML IR improve result quality, in particular on the Wikipedia collection? 3. Which types of links are best suited to improving the relevance of search results? 4. How should the link score of the returned elements be computed? Should we consider "document-document" links or, more precisely, "element-element" links? What weight should navigation links carry relative to hierarchical links? 5. What is the impact of using links in a global versus a local context? 6. How should the link score be integrated into the final score of the returned XML elements? 7. What is the impact of the quality of the top-ranked results on the behaviour of the proposed formulas? To answer these questions, we conducted a statistical study of the search results returned by the "DALIAN" information retrieval system, using the test collection provided by INEX; it clearly showed that links are a sign of element relevance in the context of XML IR. We also implemented three link-analysis algorithms (PageRank, HITS and SALSA), which allowed a comparative study showing that "query-dependent" approaches outperform "global-context" ones.
    In this thesis we proposed three formulas for computing the link score: the first is called "Topical PageRank"; the second is a "distance-based" formula; and the third is a "weighted-links-based" formula. We also proposed three combination formulas: a linear formula, a Dempster-Shafer formula and a fuzzy-based formula. Finally, we ran a series of experiments, all of which showed that the proposed approaches improve the relevance of results across the tested configurations; that "query-dependent" approaches outperform global-context approaches; that approaches exploiting "element-element" links obtain good results; and that combination formulas based on the principle of uncertainty for computing the final scores of XML elements achieve good performance.
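    Of the three combination formulas, the linear one is the simplest to sketch: the final score of an element mixes its content-based retrieval score with its link-analysis score. The weight and the toy scores below are illustrative assumptions, not values from the thesis experiments.

```python
# Minimal sketch of a linear score-combination formula of the kind
# mentioned above: final = lam * content + (1 - lam) * link.
# `lam` and the element scores are invented for illustration.

def combine_linear(content_score: float, link_score: float, lam: float = 0.7) -> float:
    """Mix content and link evidence with a weight lam in [0, 1]."""
    return lam * content_score + (1.0 - lam) * link_score

elements = {
    "/article[1]/sec[2]": (0.82, 0.40),   # (content score, link score)
    "/article[1]/sec[5]": (0.78, 0.90),
}
ranked = sorted(elements.items(),
                key=lambda kv: combine_linear(*kv[1]), reverse=True)
for path, scores in ranked:
    print(path, round(combine_linear(*scores), 3))
```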

    Scalable Automated Incrementalization for Real-Time Static Analyses

    This thesis proposes a framework for the easy development of static analyses whose results are incrementalized to provide instantaneous feedback in an integrated development environment (IDE). Today, IDEs feature many tools that build on static analyses to assess software quality and catch correctness problems. Yet these tools often fail to provide instantaneous feedback and are thus restricted to nightly build processes. This precludes developers from fixing issues at their inception, i.e., while the problem and the developed solution are both still fresh in mind. Incrementalization is a well-known technique for providing instantaneous feedback: it exploits the fact that developers make only small changes to the code, so analysis results can be re-computed quickly from those changes. Yet incrementalization requires carefully crafted static analyses, which makes a manual approach unattractive. Automated incrementalization alleviates these problems and lets analysis writers formulate their analyses as queries over the full data set, without worrying about the semantics of incremental changes. Existing approaches to automated incrementalization use standard technologies, such as deductive databases, that provide declarative query languages yet require materializing the full dataset in main memory, i.e., the memory is permanently occupied by the data required for the analyses. Other standard technologies, such as relational databases, offer better scalability thanks to persistence, yet incur long transaction times for the data. Neither technology is a perfect match for integrating static analyses into an IDE, since the underlying data, i.e., the code base, is already persisted and managed by the IDE, so transferring it into a database is redundant work. This thesis proposes a novel approach that provides a declarative query language and automated incrementalization, yet retains in memory only the necessary minimum of data, i.e., only the data required for the incrementalization. The approach allows static analyses to be declared as incrementally maintained views, where the underlying formalism for incrementalization is the relational algebra with extensions for object orientation and recursion. The algebra makes it possible to deduce which data is the necessary minimum for incremental maintenance, and indeed shows that many views are self-maintainable, i.e., require no materialized data at all. In addition, an optimization of the algebra is proposed that widens the range of self-maintainable views based on domain knowledge of the underlying data. The optimization works much like declaring primary keys in a database: it is declared on the schema of the data and defines which data is incrementally maintained within the same scope. The scope makes all analyses (views) that correlate only data within its boundaries self-maintainable. The approach is implemented as an embedded domain-specific language in a general-purpose programming language. The implementation can be understood as a database-like engine with an SQL-style query language and the execution semantics of the relational algebra; as such, it is a general-purpose query engine and can be used to incrementalize domains other than static analyses.
    To evaluate the approach, a large variety of static analyses was sampled from real-world tools and formulated as incrementally maintained views in the implemented engine.
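    The core idea of an incrementally maintained, self-maintainable view can be sketched as follows. This is a minimal illustration of delta propagation for a selection view, under invented names and a toy "analysis"; it is not the thesis's engine, whose views cover the full extended relational algebra.

```python
# Hedged sketch of incremental view maintenance: a selection view kept
# up to date from insert/delete deltas alone, without re-reading the
# base relation. Selections/projections are self-maintainable in exactly
# this sense; joins would need auxiliary state.

class SelectionView:
    """Incrementally maintained view: SELECT * FROM base WHERE pred(row)."""

    def __init__(self, predicate):
        self.predicate = predicate
        self.rows = []            # materialized view contents only

    def on_insert(self, row):     # delta propagation, O(1) per change
        if self.predicate(row):
            self.rows.append(row)

    def on_delete(self, row):
        if self.predicate(row):
            self.rows.remove(row)

# A toy "static analysis" as a view: methods with too many parameters.
view = SelectionView(lambda m: m["params"] > 4)
view.on_insert({"method": "render", "params": 6})   # flagged instantly
view.on_insert({"method": "init", "params": 1})     # ignored
print(view.rows)   # [{'method': 'render', 'params': 6}]
```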