    Ensuring Query Compatibility with Evolving XML Schemas

    During the life cycle of an XML application, both schemas and queries may change from one version to another. Schema evolutions may affect query results and potentially the validity of produced data. Nowadays, a challenge is to assess and accommodate the impact of these changes in rapidly evolving XML applications. This article proposes a logical framework and tool for verifying forward/backward compatibility issues involving schemas and queries. First, it allows analyzing relations between schemas. Second, it allows XML designers to identify queries that must be reformulated in order to produce the expected results across successive schema versions. Third, it allows examining more precisely the impact of schema changes on queries, therefore facilitating their reformulation.
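
    As a concrete illustration of the compatibility problem (a minimal sketch, not the article's framework: the schemas, element names, and query are hypothetical), the following Python fragment shows a query written against one schema version that silently returns nothing once an element is renamed in the next version, and therefore has to be reformulated.

```python
# A minimal sketch: two hypothetical schema versions where <author> is renamed to
# <creator>, and a query that silently breaks after the evolution.
import xml.etree.ElementTree as ET

doc_v1 = "<book><author>Knuth</author></book>"
doc_v2 = "<book><creator>Knuth</creator></book>"   # document valid against schema version 2

query_v1 = ".//author"                              # query written against schema version 1

print(ET.fromstring(doc_v1).findall(query_v1))      # one match
print(ET.fromstring(doc_v2).findall(query_v1))      # [] -> query must be reformulated

query_v2 = ".//creator"                             # reformulated query for version 2
print(ET.fromstring(doc_v2).findall(query_v2))      # one match again
```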

    Developing Resource Usage Service in WLCG

    According to the Memorandum of Understanding (MoU) of the World-wide LHC Computing Grid (WLCG) project, participating sites are required to provide resource usage or accounting data to the Grid Operational Centre (GOC) to enrich the understanding of how shared resources are used, and to provide information for improving the effectiveness of resource allocation. As a multi-grid environment, the accounting process of WLCG is currently enabled by four accounting systems, each of which was developed independently by constituent grid projects. These accounting systems were designed and implemented based on project-specific local understanding of requirements, and therefore lack interoperability. In order to automate the accounting process in WLCG, three transportation methods are being introduced for streaming accounting data metered by heterogeneous accounting systems into the GOC at Rutherford Appleton Laboratory (RAL) in the UK, where accounting data are aggregated and accumulated throughout the year. These transportation methods, however, were introduced on a per-accounting-system basis, i.e. targeting a particular accounting system, making them hard to reuse and to customize for new requirements. This paper presents the design of the WLCG-RUS system, a standards-compatible solution providing a consistent process for streaming resource usage data across various accounting systems, while ensuring interoperability, portability, and customization.
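
    The core idea of a consistent streaming process can be sketched as below (a rough illustration only: the field names and per-system record layouts are hypothetical, not the WLCG-RUS schema). Records metered by heterogeneous accounting systems are mapped onto one common usage-record layout before being forwarded to the central aggregator.

```python
# A minimal sketch: normalise records from different accounting systems into one
# common layout before streaming them to a central aggregator (field names hypothetical).

def normalise(record: dict, source: str) -> dict:
    """Map a source-specific accounting record onto a common usage-record layout."""
    if source == "system_a":        # hypothetical source with its own field names
        return {"site": record["SiteName"], "cpu_hours": record["CpuSeconds"] / 3600.0}
    if source == "system_b":
        return {"site": record["site_id"], "cpu_hours": record["wall_hours"]}
    raise ValueError(f"unknown accounting system: {source}")

records = [
    ({"SiteName": "RAL", "CpuSeconds": 7200}, "system_a"),
    ({"site_id": "CERN", "wall_hours": 5.0}, "system_b"),
]
for rec, src in records:
    print(normalise(rec, src))      # every record now has the same shape
```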

    Encoding models for scholarly literature

    We examine the issue of digital formats for document encoding, archiving and publishing, through the specific example of "born-digital" scholarly journal articles. We will begin by looking at the traditional workflow of journal editing and publication, and how these practices have made the transition into the online domain. We will examine the range of different file formats in which electronic articles are currently stored and published. We will argue strongly that, despite the prevalence of binary and proprietary formats such as PDF and MS Word, XML is a far superior encoding choice for journal articles. Next, we look at the range of XML document structures (DTDs, Schemas) which are in common use for encoding journal articles, and consider some of their strengths and weaknesses. We will suggest that, despite the existence of specialized schemas intended specifically for journal articles (such as NLM), and more broadly used publication-oriented schemas such as DocBook, there are strong arguments in favour of developing a subset or customization of the Text Encoding Initiative (TEI) schema for the purpose of journal-article encoding; TEI is already in use in a number of journal publication projects, and the scale and precision of the TEI tagset make it particularly appropriate for encoding scholarly articles. We will outline the document structure of a TEI-encoded journal article, and look in detail at suggested markup patterns for specific features of journal articles.
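
    For readers unfamiliar with TEI, the following sketch shows the overall shape of a TEI-encoded article, built and checked for well-formedness with the Python standard library; the divisions shown are illustrative and are not the specific markup patterns the authors propose.

```python
# A minimal, illustrative TEI skeleton for a journal article (not the authors' proposed
# customization), parsed to confirm it is at least well-formed XML.
import xml.etree.ElementTree as ET

tei = """<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader>
    <fileDesc>
      <titleStmt><title>Example article title</title><author>A. Author</author></titleStmt>
      <publicationStmt><publisher>Example Journal</publisher></publicationStmt>
      <sourceDesc><p>Born-digital article.</p></sourceDesc>
    </fileDesc>
  </teiHeader>
  <text>
    <body>
      <div type="abstract"><p>Abstract text.</p></div>
      <div type="section"><head>Introduction</head><p>Body text.</p></div>
    </body>
  </text>
</TEI>"""

root = ET.fromstring(tei)   # raises if the skeleton is not well-formed
print(root.tag)             # {http://www.tei-c.org/ns/1.0}TEI
```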

    A JBI Information Object Engineering Environment Utilizing Metadata Fragments for Refining Searches on Semantically-Related Object Types

    The Joint Battlespace Infosphere (JBI) architecture defines the Information Object (IO) as its basic unit of data. This research proposes an IO engineering methodology that introduces componentized IO type development. This enhancement will improve the ability of JBI users to create and store IO type schemas, and to query and subscribe to information objects that may be semantically related by their inclusion of common metadata elements. Several parallel efforts are being explored to enable efficient storage and retrieval of IOs. By utilizing relational database access methods, applying a component-based IO type development concept, and exploiting XML inclusion mechanisms, this research improves the means by which a JBI can deliver related IO types to subscribers from a single query or subscription. The proposed IO type architecture also integrates IO type versioning, type coercion, and namespacing standards into the methodology. The combined framework provides a better means by which a JBI can deliver the right information to the right users at the right time.
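
    The "XML inclusion mechanisms" mentioned above can be illustrated with XInclude (a minimal sketch using lxml; the fragment and element names are hypothetical, not the JBI IO type schemas): a shared metadata fragment is kept in its own file and pulled in by reference, so several object types can reuse the same component.

```python
# A minimal XInclude sketch with lxml: a shared metadata fragment reused by reference
# from an object-type document (file and element names are hypothetical).
from pathlib import Path
from lxml import etree

# A shared metadata fragment that several object types could reuse.
Path("common_meta.xml").write_text("<metadata><element>timestamp</element></metadata>")

# An object-type document that pulls the shared fragment in by reference.
Path("io_type.xml").write_text(
    '<ioType xmlns:xi="http://www.w3.org/2001/XInclude">'
    '<xi:include href="common_meta.xml"/></ioType>'
)

tree = etree.parse("io_type.xml")
tree.xinclude()                                           # resolve the inclusion in place
print(etree.tostring(tree, pretty_print=True).decode())   # fragment now embedded
```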

    Towards a unified methodology for supporting the integration of data sources for use in web applications

    Organisations are making increasing use of web applications and web-based systems as an integral part of providing services. Examples include personalised dynamic user content on a website, social media plug-ins, and web-based mapping tools. For these applications to be fully functional and of maximum use to the user, they require the integration of data from multiple sources. The focus of this thesis is on improving this integration process, with a focus on web applications with multiple sources of data. Integration of data from multiple sources is problematic for many reasons. Current integration methods tend to be domain specific and application specific. They are often complex, have compatibility issues with different technologies, lack maturity, are difficult to re-use, and do not accommodate new and emerging models and integration technologies. Technologies to achieve integration, such as brokers and translators, do exist, but their domain specificity means they cannot serve as a generic solution for achieving the integration outcomes required for successful web application development. Because of these difficulties, and the wide variety of integration approaches, there is a need to provide assistance to the developer in selecting the integration approach most appropriate to their needs. This thesis proposes GIWeb, a unified top-down data integration methodology instantiated with a framework that will aid developers in their integration process. It will act as a conceptual structure to support the chosen technical approach. The framework will assist in the integration of data sources to support web application builders. The thesis presents the rationale for the need for the framework based on an examination of the range of applications, associated data sources and the range of potential solutions. The framework is evaluated using four case studies.

    Gene Fusion Markup Language: a prototype for exchanging gene fusion data

    Background: An avalanche of next-generation sequencing (NGS) studies has generated an unprecedented amount of genomic structural variation data. These studies have also identified many novel gene fusion candidates with more detailed resolution than previously achieved. However, in the excitement and necessity of publishing the observations from this recently developed cutting-edge technology, no community standardization approach has arisen to organize and represent the data with the essential attributes in an interchangeable manner. As transcriptome studies have been widely used for gene fusion discoveries, the current non-standard mode of data representation could potentially impede data accessibility, critical analyses, and further discoveries in the near future. Results: Here we propose a prototype, Gene Fusion Markup Language (GFML), as an initiative to provide a standard format for organizing and representing the significant features of gene fusion data. GFML will offer the advantage of representing the data in a machine-readable format to enable data exchange, automated analysis interpretation, and independent verification. As this database-independent exchange initiative evolves, it will further facilitate the formation of related databases, repositories, and analysis tools. The GFML prototype is made available at http://code.google.com/p/gfml-prototype/. Conclusion: The Gene Fusion Markup Language (GFML) presented here could facilitate the development of a standard format for organizing, integrating and representing the significant features of gene fusion data in an interoperable and queryable fashion that will enable biologically intuitive access to gene fusion findings and expedite functional characterization. A similar model is envisaged for other NGS data analyses.
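
    To make the notion of a machine-readable gene fusion record concrete, the sketch below builds a small XML document for the well-known TMPRSS2-ERG fusion with the Python standard library; the element names are hypothetical and do not reflect the actual GFML schema (see the project URL above).

```python
# A minimal sketch of a machine-readable gene fusion record; element names are
# hypothetical, not the actual GFML schema.
import xml.etree.ElementTree as ET

fusion = ET.Element("geneFusion", id="example-1")
for role, symbol, chrom in [("5prime", "TMPRSS2", "chr21"), ("3prime", "ERG", "chr21")]:
    partner = ET.SubElement(fusion, "partner", role=role)
    ET.SubElement(partner, "geneSymbol").text = symbol
    ET.SubElement(partner, "chromosome").text = chrom

print(ET.tostring(fusion, encoding="unicode"))   # interchangeable, parseable record
```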

    A theory and model for the evolution of software services

    Software services are subject to constant change and variation. To control service development, a service developer needs to know why a change was made, what its implications are, and whether the change is complete. Typically, service clients do not perceive the upgraded service immediately. As a consequence, service-based applications may fail on the service client side due to changes carried out during a provider service upgrade. In order to manage changes in a meaningful and effective manner, service clients must therefore be considered when changes are introduced on the service provider's side. Otherwise such changes will most certainly result in severe application disruption. Eliminating spurious results and inconsistencies that may occur due to uncontrolled changes is therefore a necessary condition for the ability of services to evolve gracefully, ensure service stability, and handle variability in their behavior. Towards this goal, this work presents a model and a theoretical framework for the compatible evolution of services based on well-founded theories and techniques from a number of disparate fields.
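
    The compatibility notion at stake can be illustrated with a small sketch (hypothetical field and client names, not the paper's formal model): a client written against version 1 of a provider's contract keeps working when a field is added, but fails when a field it depends on is renamed.

```python
# A minimal sketch of compatible vs. incompatible service evolution, seen from an
# existing client's perspective (field names are hypothetical).

def client_uses(response: dict) -> str:
    # Client written against version 1 of the provider's contract.
    return response["customer_name"]

v1 = {"customer_name": "Ada"}
v2_compatible = {"customer_name": "Ada", "loyalty_tier": "gold"}   # field added: still fine
v2_breaking = {"customerName": "Ada"}                              # field renamed: breaks

print(client_uses(v1))
print(client_uses(v2_compatible))
try:
    client_uses(v2_breaking)
except KeyError as exc:
    print("client broken by incompatible change:", exc)
```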

    Coping with evolution in information systems: a database perspective

    Business organisations today are faced with the complex problem of dealing with evolution in their software information systems. This effectively concerns the accommodation and facilitation of change, in terms of both changing user requirements and changing technological requirements. An approach that uses the software development life-cycle as a vehicle to study the problem of evolution is adopted. This involves the stages of requirements analysis, system specification, design, implementation, and finally operation and maintenance. The problem of evolution is one requiring proactive as well as reactive solutions for any given application domain. Measuring evolvability in conceptual models and the specification of changing requirements are considered. However, even "best designs" are limited in dealing with unanticipated evolution, and require implementation-phase paradigms that can facilitate an evolution correctly (semantic integrity), efficiently (minimal disruption of services) and consistently (all affected parts are consistent following the change). These are also discussed.
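
    The implementation-phase criteria listed above (semantic integrity, minimal disruption, consistency) can be illustrated with a small database migration sketch, assuming a hypothetical table: the schema change and the data updates it implies are applied inside one transaction, so either all affected parts are consistent after the change or the database is left untouched.

```python
# A minimal sketch: apply a schema change plus its implied data backfill in one
# transaction, so the database stays consistent (table and column names hypothetical).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO customer (name) VALUES ('Ada')")

with con:                                    # commits on success, rolls back on error
    con.execute("ALTER TABLE customer ADD COLUMN status TEXT")
    con.execute("UPDATE customer SET status = 'active'")   # backfill existing rows

print(con.execute("SELECT id, name, status FROM customer").fetchall())
```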

    Safe API Evolution in a Microservice Architecture with a Pluggable and Transactionless Solution

    In contrast to monolithic system designs, microservice architectures provide greater scalability, availability, and delivery capability by separating the elements of a large project into independent entities linked through a network of services. Because services are tied to one another via their interfaces, they can only evolve separately if their contracts remain consistent. There is a scarcity of mechanisms for safely evolving and discontinuing functionalities of services. In monolithic system designs, changing the definition of an element can be accomplished quickly with the aid of developer tools (such as IDE refactoring toolkits). In distributed systems there is a lack of comparable tools, and developers are left with the burden of manually tracking down and resolving problems caused by uncontrolled updates. To ensure that microservices are working properly, the general approach is to validate their behaviour through empirical tests. This thesis aims to supplement the conventional approach by providing mechanisms that support the automatic validation of deployment operations and the evolution of microservice interfaces. It presents a microservice management system that verifies the safety of modifications to service interfaces and that enables the evolution of service contracts without impacting consumer services. The system uses runtime-generated proxies that dynamically convert the data sent between services to the format expected by static code, thereby relieving the developer of the need to manually adapt existing services.
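
    The adaptation idea can be sketched as follows (hand-written here and with hypothetical field names; the thesis generates such proxies at runtime): a proxy converts requests from consumers still using the old contract into the format the evolved provider expects, so consumer code is not impacted.

```python
# A minimal sketch of version adaptation between services; the proxy here is hand-written
# for illustration, whereas the thesis generates such proxies at runtime.

def provider_v2(payload: dict) -> dict:
    # New provider contract: expects 'full_name' instead of the old 'name' field.
    return {"greeting": f"hello {payload['full_name']}"}

def proxy(old_payload: dict) -> dict:
    new_payload = {"full_name": old_payload["name"]}   # old contract -> new contract
    return provider_v2(new_payload)

# A consumer written against the old contract keeps working through the proxy.
print(proxy({"name": "Ada"}))
```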