
    Supporting Change-Aware Semantic Web Services

    The Semantic Web is not only evolving into a provider of structured, meaningful content and knowledge representation, but also into a provider of services. While most of these services support external users of the SW, we focus on a vital service within the SW itself: change management and adaptation. Change is a ubiquitous feature of the SW. In this paper, we propose a service architecture that embraces and utilises change to provide higher-quality services. We introduce pilot implementations of two supporting services within this architecture.

    XML in Motion from Genome to Drug

    Information technology (IT) has emerged as central to the solution of contemporary genomics and drug discovery problems. Researchers involved in genomics, proteomics, transcriptional profiling, high-throughput structure determination, and other sub-disciplines of bioinformatics have a direct impact on this IT revolution. As the full genome sequences of many species, together with data from structural genomics, micro-arrays, and proteomics, become available, integrating these data into a common platform requires sophisticated bioinformatics tools. Organizing these data into knowledge databases and developing appropriate software tools for analyzing them will be major challenges. XML (eXtensible Markup Language) forms the backbone of biological data representation and exchange over the internet, enabling researchers to aggregate data from various heterogeneous data resources. The present article gives a comprehensive account of the integration of XML into particular types of biological databases, mainly those dealing with sequence-structure-function relationships, and its application to drug discovery. This e-medical-science approach should be applied to other scientific domains; the latest trends in semantic web applications are also highlighted.
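The aggregation role the abstract describes can be sketched in a few lines: records from different sources are wrapped in a shared XML structure so they can be exchanged and merged. This is a minimal illustration using Python's standard library; the element names and accession values are illustrative, not any published bioinformatics schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical element names; real schemas (e.g. BSML) define their own.
def sequence_record(accession, organism, residues):
    """Build a minimal XML record for one sequence entry."""
    rec = ET.Element("sequence", attrib={"accession": accession})
    ET.SubElement(rec, "organism").text = organism
    ET.SubElement(rec, "residues").text = residues
    return rec

def aggregate(records):
    """Merge records from heterogeneous sources under one root."""
    root = ET.Element("dataset")
    root.extend(records)
    return ET.tostring(root, encoding="unicode")

xml_doc = aggregate([
    sequence_record("P69905", "Homo sapiens", "MVLSPADKTN"),
    sequence_record("P68871", "Homo sapiens", "MVHLTPEEKS"),
])
print(xml_doc)
```

Because every source emits the same wrapper elements, downstream tools can parse one document instead of one format per resource.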

    A Community-Driven Validation Service for Standard Medical Imaging Objects

    Digital medical imaging laboratories contain many distinct types of equipment provided by different manufacturers. Interoperability is a critical issue, and the DICOM protocol is a de facto standard in those environments. However, manufacturers' implementations of the standard may have non-conformities at several levels, which hinder systems' integration. Moreover, medical staff may be responsible for data inconsistencies when entering data. Those situations severely affect the quality of healthcare services since they can disrupt system operations. The existence of software able to confirm data quality and compliance with the DICOM standard is important for programmers, IT staff and healthcare technicians. Although there are a few solutions that try to accomplish this goal, they are unable to deal with certain situations that require user input. Furthermore, these cases usually require the setup of a working environment, which makes the sharing of validation information more difficult. This article proposes and describes the development of a Web DICOM validation service for the community. This solution requires no configuration by the user, promotes the shareability of validation results within the community and preserves patient data privacy, since files are de-identified on the client side.
    Comment: Computer Standards & Interfaces, 201
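The client-side de-identification step mentioned at the end of the abstract can be sketched as follows. This is a simplified stand-in that treats a dataset as a plain mapping; a real implementation would operate on DICOM datasets (for example with pydicom) and follow the DICOM PS3.15 confidentiality profiles, and the attribute names below are illustrative.

```python
# Attributes treated as identifying in this sketch (illustrative subset).
IDENTIFYING_ATTRIBUTES = {"PatientName", "PatientID", "PatientBirthDate"}

def deidentify(dataset):
    """Return a copy of the dataset with identifying attributes blanked,
    so only de-identified data ever leaves the client."""
    return {key: ("" if key in IDENTIFYING_ATTRIBUTES else value)
            for key, value in dataset.items()}

study = {"PatientName": "DOE^JANE", "PatientID": "12345",
         "Modality": "CT", "StudyDate": "20120101"}
clean = deidentify(study)
print(clean["PatientName"], clean["Modality"])
```

Running the blanking step before upload means the validation service only ever sees technical attributes, which is what preserves patient privacy.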

    An XML standard for the dissemination of annotated 2D gel electrophoresis data complemented with mass spectrometry results

    BACKGROUND: Many proteomics initiatives require seamless bioinformatics integration of a range of analytical steps between sample collection and systems modeling, immediately accessible to the participants involved in the process. From proteomic profiling by 2D gel electrophoresis to the putative identification of differentially expressed proteins by comparison of mass spectrometry results with reference databases, many components of sample processing, not just analysis and interpretation, are regularly revisited and updated. To support such updates and the dissemination of data, a suitable data structure is needed. However, no such data structure is currently available for storing the data for multiple gels generated through a single proteomic experiment in a single XML file. This paper proposes a data structure based on XML standards to fill the void that exists between the data generated by proteomics experiments and their storage. RESULTS: In order to address the resulting procedural fluidity we have adopted and implemented a data model centered on the concept of the annotated gel (AG) as the format for delivery and management of 2D gel electrophoresis results. An eXtensible Markup Language (XML) schema is proposed to manage, analyze and disseminate annotated 2D gel electrophoresis results. The structure of AG objects is formally represented using XML, resulting in the definition of the AGML syntax presented here. CONCLUSION: The proposed schema accommodates data on the electrophoresis results as well as the mass-spectrometry analysis of selected gel spots. A web-based software library is being developed to handle data storage, analysis and graphic representation. Computational tools described will be made available at . Our development of AGML provides a simple data structure for storing 2D gel electrophoresis data.
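The core idea of an annotated gel document, a gel holding spots, each spot carrying its mass-spectrometry annotation, can be sketched with Python's standard XML tooling. The element names and values here are illustrative guesses at such a layout, not the published AGML schema.

```python
import xml.etree.ElementTree as ET

# Illustrative structure only; the actual AGML schema defines its own
# elements for gels, spots, and mass-spectrometry results.
gel = ET.Element("annotatedGel", attrib={"experiment": "exp-01"})
spot = ET.SubElement(gel, "spot", attrib={"id": "s1", "x": "120", "y": "340"})
ms = ET.SubElement(spot, "massSpectrometry")
ET.SubElement(ms, "peptide").text = "LVNEVTEFAK"
ET.SubElement(ms, "proteinHit", attrib={"accession": "P02768"})

doc = ET.tostring(gel, encoding="unicode")
print(doc)
```

Nesting the mass-spectrometry results under each spot is what lets one XML file carry both the electrophoresis layout and the downstream identifications for a whole experiment.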

    Efficient Incremental Breadth-Depth XML Event Mining

    Many applications continuously log a large number of events. Extracting interesting knowledge from logged events is an emerging, active research area in data mining. In this context, we propose an approach for mining frequent events and association rules from logged events in XML format. The approach is composed of two main phases: I) constructing a novel tree structure called the Frequency XML-based Tree (FXT), which contains the frequencies of the events to be mined; II) querying the constructed FXT using XQuery to discover frequent itemsets and association rules. The FXT is constructed in a single pass over the logged data. We implement the proposed algorithm and study various performance issues. The performance study shows that the algorithm is efficient, both for constructing the FXT and for discovering association rules.
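The two-phase idea, a single pass that accumulates event frequencies, followed by a query over the accumulated structure, can be sketched as below. This is a simplified analogue: a flat counter stands in for the paper's FXT, and a Python predicate stands in for the XQuery phase; the event log is made up for illustration.

```python
import xml.etree.ElementTree as ET
from collections import Counter

LOG = """<log>
  <event type="login"/><event type="error"/>
  <event type="login"/><event type="login"/>
</log>"""

def build_frequency_structure(xml_text):
    """Phase I: single pass over the logged events, accumulating
    per-event counts (a flat stand-in for the paper's FXT)."""
    counts = Counter()
    for event in ET.fromstring(xml_text).iter("event"):
        counts[event.get("type")] += 1
    return counts

def frequent_events(counts, min_support):
    """Phase II: query the structure for events meeting the support
    threshold (done with XQuery over the FXT in the paper)."""
    return {name for name, count in counts.items() if count >= min_support}

fxt = build_frequency_structure(LOG)
print(frequent_events(fxt, min_support=2))  # → {'login'}
```

The point of the single-pass construction is that the raw log never needs to be rescanned: every later support query runs against the compact frequency structure instead.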

    An open standard for the exchange of information in the Australian timber sector

    The purpose of this paper is to describe business-to-business (B2B) communication and the characteristics of an open standard for electronic communication within the Australian timber and wood products industry. Current issues, future goals and strategies for using business-to-business communication will be considered. From the perspective of the timber industry sector, this study is important because supply chain efficiency is a key component in an organisation's strategy to gain a competitive advantage in the marketplace. Strong improvement in supply chain performance is possible with improved business-to-business communication, which is used both for building trust and for providing real-time marketing data. Traditional methods such as electronic data interchange (EDI) used to facilitate B2B communication have a number of disadvantages, such as high implementation and running costs and a rigid, inflexible messaging standard. Information and communications technologies (ICT) have supported the emergence of web-based EDI, which maintains the advantages of the traditional paradigm while negating the disadvantages. This has been further extended by the advent of the Semantic Web, which rests on the fundamental idea that web resources should be annotated with semantic markup that captures information about their meaning and facilitates meaningful machine-to-machine communication. This paper provides an ontology using OWL (Web Ontology Language) for the Australian timber sector that can be used in conjunction with semantic web services to provide effective and cheap B2B communications.
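An OWL ontology of the kind the paper proposes is, at the serialization level, RDF/XML: class declarations in the OWL namespace inside an `rdf:RDF` root. The sketch below generates a minimal fragment of that shape with Python's standard library; the `TimberProduct` class name is purely illustrative and not taken from the paper's ontology.

```python
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
OWL = "http://www.w3.org/2002/07/owl#"
ET.register_namespace("rdf", RDF)
ET.register_namespace("owl", OWL)

# Build an rdf:RDF root holding one owl:Class declaration.
root = ET.Element(f"{{{RDF}}}RDF")
ET.SubElement(root, f"{{{OWL}}}Class",
              attrib={f"{{{RDF}}}about": "#TimberProduct"})

owl_doc = ET.tostring(root, encoding="unicode")
print(owl_doc)
```

In practice a domain ontology would be authored in a dedicated editor and consumed with an RDF toolkit rather than built by hand, but the serialized form exchanged between B2B partners is ordinary XML like this.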