
    Object-oriented querying of existing relational databases

    In this paper, we present algorithms that allow object-oriented querying of existing relational databases. Our goal is to provide an improved query interface for relational systems with better query facilities than SQL. This is important because, in real-world applications, relational systems are the most commonly used and their dominance will remain in the near future. To overcome the drawbacks of relational systems, especially the poor query facilities of SQL, we propose a schema transformation algorithm and a query translation algorithm. The schema transformation algorithm uses additional semantic information to enhance the relational schema and transform it into a corresponding object-oriented schema. If the additional semantic information can be deduced from an underlying entity-relationship design schema, the schema transformation can be done fully automatically. To query the resulting object-oriented schema, we use the Structured Object Query Language (SOQL), which provides declarative query facilities on objects. SOQL queries against the created object-oriented schema are much shorter, easier to write and understand, and more intuitive than the corresponding SQL queries, leading to enhanced usability and improved querying of the database. The query translation algorithm automatically translates SOQL queries into equivalent SQL queries against the original relational schema.
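
    To make the idea concrete, the sketch below (not the paper's algorithm) shows how an object-style path expression could be rewritten as an equivalent SQL join, given a mapping from object-valued attributes to foreign keys; all table, column, and attribute names are hypothetical examples.

    # Minimal Python sketch: translate an object-style path expression into an
    # equivalent SQL join over the original relational schema. The schema mapping,
    # table names, and column names below are invented for illustration only.
    FK_JOINS = {
        # (source table, object-valued attribute) -> (foreign-key column, referenced key)
        ("employee", "department"): ("employee.dept_id", "department.id"),
    }

    def translate_path(root_table, path, target_column):
        """Turn e.g. employee.department.name into a SELECT with explicit joins.
        Assumes each object-valued attribute is named after the referenced table."""
        joins, current = [], root_table
        for attr in path:
            left, right = FK_JOINS[(current, attr)]
            joins.append(f"JOIN {attr} ON {left} = {right}")
            current = attr
        return f"SELECT {current}.{target_column} FROM {root_table} " + " ".join(joins)

    print(translate_path("employee", ["department"], "name"))
    # SELECT department.name FROM employee JOIN department ON employee.dept_id = department.id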

    Ontologies on the semantic web

    As an information technology, the World Wide Web has enjoyed spectacular success. In just ten years it has transformed the way information is produced, stored, and shared in arenas as diverse as shopping, family photo albums, and high-level academic research. The “Semantic Web” was touted by its developers as equally revolutionary, but it has not yet achieved anything like the Web’s exponential uptake. This 17,000-word survey article explores why this might be so, from a perspective that bridges both philosophy and IT.

    A Planning Approach to Migrating Domain-specific Legacy Systems into Service Oriented Architecture

    The planning work prior to implementing an SOA migration project is very important to its success, yet up to now most of this work has been done manually. This thesis addresses an SOA migration planning approach based on intelligent information processing methods that semi-automates that manual work. It investigates the principal research question: “How can we obtain SOA migration planning schemas (semi-)automatically, instead of by traditional manual work, in order to determine whether legacy software systems should be migrated to an SOA computation environment?” The controlled experiment research method is adopted to direct the research throughout the thesis. Data mining methods are used to analyse the SOA migration sources and migration targets, and the mined information supplements traditional analysis results. Text similarity measurement methods are used to measure the matching relationship between migration sources and migration targets, providing a quantitative analysis of matching relationships rather than the usual qualitative analysis. Concretely, an association rule mining algorithm and a sequence pattern mining algorithm are proposed to analyse legacy assets and domain logic for establishing a Service model and a Component model. These two algorithms can mine all motifs with any minimum-support threshold without assuming any ordering, which makes them better suited than existing algorithms to establishing Service and Component models in SOA migration settings. Two matching strategies, at the keyword level and at a superficial semantic level, are described; they can calculate the degree of similarity between legacy components and domain services effectively. Two decision-making methods, based on a similarity matrix and on hybrid information respectively, are investigated for creating SOA migration planning schemas, and a simple evaluation method is described. Two case studies on migrating e-learning legacy systems to SOA have been explored. The results show that the proposed approach is encouraging and applicable; SOA migration planning schemas can therefore be created semi-automatically, rather than by traditional manual work, by using data mining and text similarity measurement methods.
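
    As an illustration of the keyword-level matching strategy (a minimal sketch, not the algorithms from the thesis), the Python snippet below scores legacy components against candidate domain services using cosine similarity over term-frequency vectors; the component and service names and descriptions are invented examples.

    from collections import Counter
    import math

    def cosine_similarity(a, b):
        """Cosine similarity between term-frequency vectors of two texts."""
        va, vb = Counter(a.lower().split()), Counter(b.lower().split())
        dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
        norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
        return dot / norm if norm else 0.0

    # Hypothetical legacy components and domain services from an e-learning system.
    legacy_components = {"GradeBook": "store and report student grades and course results"}
    domain_services = {
        "AssessmentService": "manage student grades assessments and results",
        "EnrolmentService": "register students onto courses and modules",
    }

    # Similarity matrix: one row per legacy component, one column per domain service.
    for comp, comp_desc in legacy_components.items():
        for svc, svc_desc in domain_services.items():
            print(comp, "->", svc, round(cosine_similarity(comp_desc, svc_desc), 2))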

    Focus Issue on Legacy Information Systems and Business Process Change: Migrating Large-Scale Legacy Systems to Component-Based and Object Technology: The Evolution of a Pattern Language

    The process of developing large-scale, business-critical software systems must boost the productivity of both the users and the developers of software, while at the same time responding flexibly to changing business requirements in the face of sharpening competition. Historically, these two forces were viewed as mutually hostile. Component-based software development using object technology promises a way of mediating the apparent contradiction. This paper presents a successful new approach that focuses primarily on the architecture of the software system to migrate an existing system to a new form. Best practice is captured by software patterns that address not only the design but also the process and organizational issues. The approach was developed through four completed, successful live projects in different business and technical areas. It resulted in a still-evolving pattern language called ADAPTOR (Architecture-Driven and Pattern-based Techniques for Object Re-engineering). This article outlines the approach that underlies ADAPTOR. It challenges popular notions of legacy systems by emphasizing business requirements. Architectural approaches to migration are then contrasted with traditional reverse engineering approaches, including the weakness of reverse engineering in the face of paradigm shifts. The evolution of the ADAPTOR pattern language is outlined with a brief history of the projects from which the patterns were abstracted.

    A Case Study for Business Integration as a Service

    This paper presents Business Integration as a Service (BIaaS), which allows two services to work together in the Cloud to achieve a streamlined process. We illustrate this integration using two services, Return on Investment (ROI) Measurement as a Service (RMaaS) and Risk Analysis as a Service (RAaaS), in a case study at the University of Southampton. The case study demonstrates the cost savings and risk analysis achieved when the two services work as a single service. Advanced techniques are used to demonstrate statistical services and 3D visualisation services under the remit of RMaaS, and Monte Carlo Simulation as a Service behind the design of RAaaS. Computational results are presented and their implications discussed. Different types of risks associated with Cloud adoption can be calculated easily, rapidly and accurately with the use of BIaaS. The case study confirms the benefits of BIaaS adoption, including cost reduction and improvements in efficiency and risk analysis. Implementation of BIaaS in other organisations is also discussed. The data arising from the integration of RMaaS and RAaaS are useful for the management and stakeholders of the University of Southampton.
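
    To illustrate the kind of computation behind Monte Carlo Simulation as a Service (a minimal sketch, not the RAaaS implementation), the snippet below estimates the probability that a cloud-adoption project exceeds its budget; the cost model and all figures are illustrative assumptions.

    import random

    def simulate_total_cost():
        """One trial: a one-off migration cost plus three years of uncertain running costs."""
        migration = random.gauss(100_000, 15_000)      # hypothetical one-off cost
        yearly_running = random.gauss(40_000, 8_000)   # hypothetical recurring cost per year
        return migration + 3 * yearly_running

    BUDGET = 250_000
    TRIALS = 100_000
    overruns = sum(1 for _ in range(TRIALS) if simulate_total_cost() > BUDGET)
    print(f"Estimated probability of exceeding budget: {overruns / TRIALS:.2%}")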

    Migrating microservices to graph database

    Microservice architecture is a popular approach to structuring web backend services. Another emerging trend, after a period of hibernation, is utilizing modern graph database management systems for managing complex, richly connected data. The two approaches have rarely been used in tandem, as microservices emphasize modularization and decoupling of services, while graph data models favor data integration. In this study, the literature on microservices and graph databases is reviewed and a synthesis between the two paradigms is presented. Based on the theoretical discussion, a software architecture combining the two elements is formulated and implemented using microservices serving content metadata at Yleisradio, the Finnish national broadcasting company. The architecture design follows the Design Science Research Process model. Finally, the renewed system is evaluated using quantitative and qualitative metrics. The performance of the system is measured using automated API queries and load tests, and the new system is compared to an earlier version based on a PostgreSQL database. The tests gave a slight indication that the renewed system performed better for complex queries, where a large number of relations were traversed, but worse in terms of throughput under heavy load. Based on these findings, a number of performance-enhancing optimizations to the system are introduced. Observations and perspectives are also gathered in a project retrospective session. It is concluded that the resulting architecture holds promise for managing complex, relation-rich data in a safe manner: the different domains of the knowledge graph are decoupled into distinct named graphs managed by different microservices.
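
    The sketch below illustrates the kind of automated API measurement described above (a minimal example, not the study's actual test harness): it times repeated requests against a content-metadata endpoint and reports latency percentiles. The endpoint URL is a placeholder, not Yleisradio's real API.

    import statistics
    import time
    import requests

    ENDPOINT = "https://example.org/api/programmes/123/contributors"  # hypothetical endpoint
    SAMPLES = 50

    latencies = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        requests.get(ENDPOINT, timeout=10)
        latencies.append(time.perf_counter() - start)

    latencies.sort()
    print("median latency (s):", round(statistics.median(latencies), 3))
    print("95th percentile (s):", round(latencies[int(0.95 * len(latencies))], 3))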