
    Modeling views in the layered view model for XML using UML

    In data engineering, view formalisms provide flexibility to users and user applications by allowing them to extract and elaborate data from stored data sources. Meanwhile, since its introduction, Extensible Markup Language (XML) has fast emerged as the dominant standard for storing, describing, and interchanging data among various web and heterogeneous data sources. In combination with XML Schema, XML provides rich facilities for defining and constraining user-defined data semantics and properties, a feature set that distinguishes it from earlier data formats. In this context, it is interesting to investigate traditional database features, such as view models and view design techniques, for XML. However, traditional view formalisms are strongly coupled to the data language and its syntax, which makes it difficult to support views over semi-structured data models. We therefore propose a Layered View Model (LVM) for XML with conceptual and schemata extensions. Our contribution is three-fold: first, we propose an approach that separates the conceptual and implementation aspects of views, providing a clear separation of concerns so that the analysis and design of views can proceed independently of their implementation. Secondly, we define representations to express and construct these views at the conceptual level. Thirdly, we define a view transformation methodology for XML views in the LVM, which automatically derives a view schema and a view query expression in an appropriate query language. To validate and apply the LVM concepts, methods and transformations developed, we also propose a view-driven application development framework with the flexibility to develop web and database applications for XML at varying levels of abstraction.
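
    The flavour of such a transformation step can be illustrated with a small sketch (the element names, the view definition and its XPath encoding below are hypothetical, not the paper's actual LVM notation): a view declared conceptually as "elements of a given type satisfying a predicate" is mechanically compiled into a query expression and evaluated against the source document.

        # A minimal sketch: a conceptual view definition is translated into an
        # XPath expression and evaluated over an XML source. Element and
        # attribute names are illustrative only.
        import xml.etree.ElementTree as ET

        SOURCE = (
            "<library>"
            "<book year='2001'><title>XML Views</title></book>"
            "<book year='1998'><title>Early Data</title></book>"
            "</library>"
        )

        def view_to_xpath(element: str, attribute: str, value: str) -> str:
            # Conceptual view -> concrete view query expression (here: XPath).
            return f".//{element}[@{attribute}='{value}']"

        root = ET.fromstring(SOURCE)
        for book in root.findall(view_to_xpath("book", "year", "2001")):
            print(book.find("title").text)  # prints: XML Views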

    A general framework for positioning, evaluating and selecting the new generation of development tools.

    This paper focuses on the evaluation and positioning of a new generation of development tools: collections of subtools (report generators, browsers, debuggers, GUI builders, ...) and programming languages that are designed to work together under a common graphical user interface, and are therefore called environments. Several trends in IT have led to a pluriform range of development tools that can be classified in numerous categories, for example: object-oriented tools, GUI tools, upper- and lower-CASE tools, client/server tools and 4GL environments. This classification does not adequately cover the tools discussed in this paper, for the simple reason that only one criterion at a time is used to distinguish them; modern visual development environments often fit several categories because, to a certain extent, several criteria apply to them at once. In this study, we offer a broad classification scheme with which tools can be positioned and which can be refined through further research.
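
    As a toy illustration of positioning against several criteria at once (the criteria, tool names and scores below are invented for illustration and are not taken from the study), each environment can be given a profile over all criteria rather than a single category label:

        from dataclasses import dataclass, field

        @dataclass
        class Tool:
            name: str
            scores: dict = field(default_factory=dict)  # criterion -> rating 0..5

        CRITERIA = ["object-orientation", "GUI building", "CASE support", "client/server"]

        tools = [
            Tool("EnvA", {"object-orientation": 5, "GUI building": 4, "CASE support": 2, "client/server": 3}),
            Tool("EnvB", {"object-orientation": 2, "GUI building": 5, "CASE support": 4, "client/server": 4}),
        ]

        def position(tool: Tool) -> list:
            # A tool's position is its whole profile, not a single category.
            return [tool.scores.get(c, 0) for c in CRITERIA]

        for t in tools:
            print(t.name, position(t))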

    Extensible Modeling and Simulation Framework (XMSF) Opportunities for Web-Based Modeling and Simulation

    Technical Opportunities Workshop Whitepaper, 14 June 2002. Purpose: As the Department of Defense (DoD) is engaged in both warfighting and institutional transformation for the new millennium, DoD Modeling & Simulation (M&S) also needs to identify and adopt transformational technologies that provide direct tactical relevance to warfighters. Because the only software systems that composably scale to worldwide scope utilize the World Wide Web, an extensible Web-based framework shows great promise for scaling up the capabilities of M&S systems to meet the needs of training, analysis, acquisition, and the operational warfighter. By embracing commercial web technologies as a shared-communications platform and a ubiquitous-delivery framework, DoD M&S can fully leverage mainstream practices for enterprise-wide software development.

    Information Technology Today and Tomorrow for Managing Water Resources


    Towards structured sharing of raw and derived neuroimaging data across existing resources

    Data sharing efforts increasingly contribute to the acceleration of scientific discovery. Neuroimaging data is accumulating in distributed domain-specific databases, yet there is currently neither an integrated access mechanism nor an accepted format for the critically important meta-data that is necessary for making use of the combined, available neuroimaging data. In this manuscript, we present work from the Derived Data Working Group, an open-access group sponsored by the Biomedical Informatics Research Network (BIRN) and the International Neuroinformatics Coordinating Facility (INCF), focused on practical tools for distributed access to neuroimaging data. The working group develops models and tools facilitating the structured interchange of neuroimaging meta-data and is making progress towards a unified set of tools for such data and meta-data exchange. We report on the key components required for integrated access to raw and derived neuroimaging data, as well as associated meta-data and provenance, across neuroimaging resources. The components include (1) a structured terminology that provides semantic context to data, (2) a formal data model for neuroimaging with robust tracking of data provenance, (3) a web service-based application programming interface (API) that provides a consistent mechanism to access and query the data model, and (4) a provenance library that can be used for the extraction of provenance data by image analysts and imaging software developers. We believe that the framework and set of tools outlined in this manuscript have great potential for solving many of the issues the neuroimaging community faces when sharing raw and derived neuroimaging data across the various existing database systems, for the purpose of accelerating scientific discovery.
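
    A rough sketch of what such a provenance-carrying meta-data record and a web-service query might look like (the field names and the endpoint are hypothetical, not the working group's actual data model or API):

        # A sketch of a meta-data record that tracks the provenance of a
        # derived neuroimaging result back to its raw source. Fields are
        # illustrative only.
        from dataclasses import dataclass

        @dataclass
        class ProvenanceRecord:
            dataset_id: str
            derived_from: str      # id of the raw data this result came from
            tool: str              # software that produced the derivation
            tool_version: str
            terminology_term: str  # semantic label from a structured terminology

        record = ProvenanceRecord(
            dataset_id="deriv-0042",
            derived_from="raw-0007",
            tool="fmri-smooth",
            tool_version="1.2.0",
            terminology_term="smoothed BOLD image",
        )

        # Querying a (hypothetical) REST endpoint for everything derived
        # from one raw scan might then look like:
        #   GET https://example.org/api/derived?from=raw-0007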

    A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing

    Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication, and resource allocation and scheduling. Finally, we map the proposed taxonomy to various Data Grid systems, not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems so as to better understand their goals and methodology, which helps evaluate their applicability to similar problems. The taxonomy also provides a "gap analysis" of the area, through which researchers can identify new issues for investigation, and we hope the taxonomy and mapping give new practitioners an easy way into this complex area of research.
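
    The taxonomy-mapping and gap-analysis idea can be sketched as follows (the dimensions, category values and system names are invented placeholders, not the survey's actual taxonomy):

        # Systems are tagged with one value per taxonomy dimension; category
        # values claimed by no surveyed system mark gaps for future work.
        taxonomy = {
            "replication": {"centralised", "decentralised"},
            "transport":   {"ftp-based", "striped", "peer-to-peer"},
            "scheduling":  {"data-aware", "compute-only"},
        }

        systems = {
            "GridA": {"replication": "centralised",   "transport": "striped",   "scheduling": "data-aware"},
            "GridB": {"replication": "decentralised", "transport": "ftp-based", "scheduling": "compute-only"},
        }

        for dimension, values in taxonomy.items():
            used = {props[dimension] for props in systems.values()}
            for gap in sorted(values - used):
                print(f"unexplored: {dimension} = {gap}")  # e.g. transport = peer-to-peer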

    Cloud provider independence using DevOps methodologies with Infrastructure-as-Code

    On choosing cloud computing infrastructure for IT needs, there is a risk of becoming dependent on, and locked in to, a specific cloud provider, from which it becomes difficult to switch should an entity later decide to move all of its infrastructure resources to a different provider. Widespread information is available on how to migrate existing infrastructure to the cloud; however, common cloud solutions and providers offer no clear path or framework for supporting their tenants in migrating off the cloud to another provider or to a cloud infrastructure with similar service levels, should they decide to do so. Under these circumstances, switching cloud providers becomes difficult, not just because of the technical complexity of recreating the entire infrastructure from scratch and moving the related data, but also because of the cost it may involve. One possible solution is to evaluate the use of Infrastructure-as-Code (IaC) languages for defining infrastructure, combined with DevOps methodologies and technologies, to create a mechanism that helps streamline the migration process between different cloud infrastructures, especially if taken into account from the beginning of a project. A well-structured DevOps methodology combined with Infrastructure-as-Code may allow more integrated control of cloud resources, as those can be defined and controlled with specific languages and submitted to automation processes. Such definitions must take into account what is currently available to support those operations under the chosen cloud infrastructure's APIs, always seeking to guarantee the tenant a higher degree of control over its infrastructure and a higher level of preparation of the steps necessary for recreating or migrating that infrastructure should the need arise, in effect integrating cloud resources as part of a development model. The objective of this dissertation is to create a conceptual reference framework that identifies different approaches to migrating IT infrastructure while maintaining greater provider independence through such mechanisms, as well as to identify possible constraints or obstacles to this approach. Such a framework can be referenced from the beginning of a development project if foreseeable changes in infrastructure or provider are a possibility, taking into account what the APIs provide in order to make such transitions easier.
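
    A minimal sketch of the provider-independence idea (class, method and resource names are hypothetical; a real implementation would delegate to actual provider SDKs or IaC tooling): the infrastructure is declared once against an abstract interface, and migrating providers reduces to swapping the driver.

        # Infrastructure is defined once, provider-neutrally; per-provider
        # drivers translate the definition into concrete API calls.
        from abc import ABC, abstractmethod

        class CloudDriver(ABC):
            @abstractmethod
            def create_vm(self, name: str, size: str) -> str: ...

        class ProviderA(CloudDriver):
            def create_vm(self, name, size):
                return f"providerA://vm/{name}?size={size}"  # would call Provider A's API

        class ProviderB(CloudDriver):
            def create_vm(self, name, size):
                return f"providerB://instances/{name}/{size}"  # would call Provider B's API

        # The definition never names a provider, so migration is a driver swap.
        INFRASTRUCTURE = [("web-1", "small"), ("db-1", "large")]

        def deploy(driver: CloudDriver):
            return [driver.create_vm(name, size) for name, size in INFRASTRUCTURE]

        print(deploy(ProviderA()))
        print(deploy(ProviderB()))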

    Design of an Integrated Agrarian Data Dimensional Data Warehouse

    The concept of the data warehouse was developed to provide a single access point to data from a variety of sources. There is a need for a single location for the storage and sharing of data that users can easily utilize to make effective and quality business decisions, rather than trying to traverse the multiple data sources that exist today. Although many frameworks have been developed to integrate these sources into a single database, a reliable framework has yet to be developed. A major hindrance to achieving a reliable warehouse is the poor quality of data obtained from the data transformation stage of the extract, transform and load (ETL) process. This poor-quality data contributes to inaccurate and unreliable results, and if it is used for decision making, unforeseen critical business errors can occur. This work reviews the data integration and transformation process in dimensional data warehouses and proposes a dual structure for the data integration and metadata of multi-formatted data, used in the design of a dimensional data warehouse with Agrarian data collected from Ondo State, Nigeria as a case study.
    Keywords: Data Warehouse, Data Integration, Metadata, Agrarian data
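
    A small sketch of the dual-structure idea (the field names and rules are illustrative, not the paper's actual schema): each record is transformed under the control of a metadata descriptor, so malformed rows are quarantined during the transform stage instead of entering the warehouse.

        # Each source carries a metadata descriptor; the transform stage
        # validates and normalises rows against it, separating clean rows
        # from rejects rather than loading bad data.
        METADATA = {
            "farm_id": {"type": int,   "required": True},
            "crop":    {"type": str,   "required": True},
            "yield_t": {"type": float, "required": False},
        }

        def transform(row: dict):
            clean, errors = {}, []
            for field, rule in METADATA.items():
                value = row.get(field)
                if value is None:
                    if rule["required"]:
                        errors.append(f"missing {field}")
                    continue
                try:
                    clean[field] = rule["type"](value)  # normalise the format
                except (TypeError, ValueError):
                    errors.append(f"bad {field}: {value!r}")
            return (clean, None) if not errors else (None, errors)

        print(transform({"farm_id": "17", "crop": "maize", "yield_t": "2.4"}))
        print(transform({"crop": "cassava", "yield_t": "n/a"}))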