570 research outputs found
Semantic-Based, Scalable, Decentralized and Dynamic Resource Discovery for Internet-Based Distributed System
Resource Discovery (RD) is a key issue in Internet-based distributed systems such as the grid. RD is about locating an appropriate resource/service type that matches the user's application requirements. This is very important, as resource reservation and task scheduling are based on it. Unfortunately, RD in the grid is very challenging: resources and users are distributed, resources are heterogeneous in their platforms, the status of resources is dynamic (resources can join or leave the system without any prior notice), and, most recently, a new type of grid called the intergrid (a grid of grids) has been introduced, with the use of multiple middlewares. Such a situation requires an RD system with rich interoperability, scalability, decentralization and dynamism features. However, existing grid RD systems have difficulty attaining these features. Moreover, the field lacks the review and evaluation studies that would highlight the gap in achieving the required features. Therefore, this work discusses the problem associated with intergrid RD from two perspectives: first, reviewing and classifying current grid RD systems in a way that is useful for discussing and comparing them; second, proposing a novel RD framework that has the aforementioned required RD features. For the former, we focus mainly on studies that aim to achieve interoperability in the first place, namely RD systems that use semantic information (semantic technology). In particular, we classify such systems based on their qualitative use of semantic information. We evaluate the classified studies based on their degree of accomplishment of interoperability and the other RD requirements, and outline future research directions for this field. For the latter, we name the new framework semantic-based scalable decentralized dynamic RD. The framework contains two main components: a service description model, and a service registration and discovery model. The former consists of a set of ontologies and services; the ontologies are used as a data model for service description, whereas the services accomplish the description process. Service registration is also based on the ontology: service provider nodes are classified into classes according to the ontology concepts, so that each class represents a concept in the ontology. Each class has a head, elected from among its own member nodes. The head plays the role of a registry in its class and communicates with the heads of the other classes in a peer-to-peer manner during the discovery process. We further introduce two intelligent agents to automate the discovery process: the Request Agent (RA) and the Description Agent (DA). Each node is supposed to host both agents. The DA describes the service capabilities based on the ontology, and the RA carries the service requests based on the ontology as well. We design a service search algorithm for the RA that starts the service lookup in the class where the request originates, and then proceeds to the other classes.
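The class-based lookup described above can be sketched as follows. This is a minimal illustration only, not the thesis's implementation: the `ClassHead` structure, the registry layout, and the sample concepts and node names are all assumptions, and the head-election step is omitted.

```python
# Hypothetical sketch of the class-based service search: providers are grouped
# into classes by ontology concept, each class has an elected head acting as
# the class registry, and a request is resolved first in its class of origin,
# then forwarded to the other class heads peer-to-peer.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ClassHead:
    concept: str                                   # ontology concept this class represents
    registry: dict = field(default_factory=dict)   # service name -> provider node

    def lookup(self, service: str) -> Optional[str]:
        return self.registry.get(service)

def search(service: str, origin: ClassHead, heads: list) -> Optional[str]:
    """Look up a service starting at the request's own class head,
    then query the remaining heads."""
    provider = origin.lookup(service)
    if provider is not None:
        return provider
    for head in heads:
        if head is origin:
            continue
        provider = head.lookup(service)
        if provider is not None:
            return provider
    return None

# Example: two classes, with the request originating in the "storage" class.
storage = ClassHead("storage", {"backup": "node-7"})
compute = ClassHead("compute", {"render": "node-3"})
print(search("render", storage, [storage, compute]))  # node-3
```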
We finally evaluate the performance of our framework with extensive simulation experiments, the results of which confirm the effectiveness of the proposed system in satisfying the required RD features (interoperability, scalability, decentralization and dynamism). In short, our main contributions are: a new key taxonomy for semantic-based grid RD studies; an interoperable semantic description RD component model for intergrid service metadata representation; a semantic distributed registry architecture for indexing service metadata; and an agent-based service search and selection algorithm.
The INCF Digital Atlasing Program: Report on Digital Atlasing Standards in the Rodent Brain
The goal of the INCF Digital Atlasing Program is to provide the vision and direction necessary to make the rapidly growing collection of multidimensional data of the rodent brain (images, gene expression, etc.) widely accessible and usable to the international research community. This Digital Brain Atlasing Standards Task Force was formed in May 2008 to investigate the state of rodent brain digital atlasing, and formulate standards, guidelines, and policy recommendations.

Our first objective has been the preparation of a detailed document that includes the vision and specific description of an infrastructure, systems and methods capable of serving the scientific goals of the community, as well as practical issues for achieving
the goals. This report builds on the 1st INCF Workshop on Mouse and Rat Brain Digital Atlasing Systems (Boline et al., 2007, _Nature Precedings_, doi:10.1038/npre.2007.1046.1) and includes a more detailed analysis of both the current state and desired state of digital atlasing, along with specific recommendations for achieving these goals.
The CAP cancer protocols – a case study of caCORE based data standards implementation to integrate with the Cancer Biomedical Informatics Grid
BACKGROUND: The Cancer Biomedical Informatics Grid (caBIG™) is a network of individuals and institutions, creating a world wide web of cancer research. An important aspect of this informatics effort is the development of consistent practices for data standards development, using a multi-tier approach that facilitates semantic interoperability of systems. The semantic tiers include (1) information models, (2) common data elements, and (3) controlled terminologies and ontologies. The College of American Pathologists (CAP) cancer protocols and checklists are an important reporting standard in pathology, for which no complete electronic data standard is currently available. METHODS: In this manuscript, we provide a case study of Cancer Common Ontologic Representation Environment (caCORE) data standard implementation of the CAP cancer protocols and checklists model – an existing and complex paper based standard. We illustrate the basic principles, goals and methodology for developing caBIG™ models. RESULTS: Using this example, we describe the process required to develop the model, the technologies and data standards on which the process and models are based, and the results of the modeling effort. We address difficulties we encountered and modifications to caCORE that will address these problems. In addition, we describe four ongoing development projects that will use the emerging CAP data standards to achieve integration of tissue banking and laboratory information systems. CONCLUSION: The CAP cancer checklists can be used as the basis for an electronic data standard in pathology using the caBIG™ semantic modeling methodology
State-of-the-art assessment on the implementations of international core data models for public administrations
Public administrations are often still organised in vertical, closed silos. The lack of common data standards (common data models and reference data) for exchanging information between administrations in a cross-domain and/or cross-border setting stands in the way of digital public services and automated flow of information between public administrations. Core data models address this issue, but are often created within the closed environment of a country or region and within one policy domain. A lack of insight exists in understanding and managing the life-cycle of these initiatives on public administration information systems for data modelling and data exchange. In this paper, we outline state-of-the-art implementations and vocabularies linked to the core data models. In particular, we inventoried and selected existing core data models and identified tendencies in current practices based on the criteria of creation, use, maintenance and coordination. Based on the analysis, this survey suggests research directions for policy and information management studies, pointing to best practices regarding core data model implementations and their role in linking isolated data silos within a cross-country context. Finally, we highlight the differences in their coordination and maintenance, depending on the state of creation and use.
Semantic interoperability: ontological unpacking of a viral conceptual model
Background. Genomics and virology are unquestionably important, but complex, domains being investigated by a large number of scientists. The need to facilitate and support work within these domains requires sharing of databases, although it is often difficult to do so because of the different ways in which data is represented across the databases. To foster semantic interoperability, models are needed that provide a deep understanding and interpretation of the concepts in a domain, so that the data can be consistently interpreted among researchers.
Results. In this research, we propose the use of conceptual models to support semantic interoperability among databases and assess their ontological clarity to support their effective use. This modeling effort is illustrated by its application to the Viral Conceptual Model (VCM) that captures and represents the sequencing of viruses, inspired by the need to understand the genomic aspects of the virus responsible for COVID-19. For achieving semantic clarity on the VCM, we leverage the “ontological unpacking” method, a process of ontological analysis that reveals the ontological foundation of the information that is represented in a conceptual model. This is accomplished by applying the stereotypes of the OntoUML ontology-driven conceptual modeling language. As a result, we propose a new OntoVCM, an ontologically grounded model, based on the initial VCM, but with guaranteed interoperability among the data sources that employ it.
Conclusions. We propose and illustrate how the unpacking of the Viral Conceptual Model resolves several issues related to semantic interoperability, the importance of which is recognized by the “I” in the FAIR principles. The research addresses conceptual uncertainty within the domain of SARS-CoV-2 data and knowledge. The method employed provides the basis for further analyses of complex models currently used in life science applications, but lacking ontological grounding, subsequently hindering the interoperability needed for scientists to progress their research.
Interoperability and Information Brokers in Public Safety: An Approach toward Seamless Emergency Communications
When a disaster occurs, the rapid gathering and sharing of crucial information among public safety agencies, emergency response units, and the public can save lives and reduce the scope of the problem; yet, this is seldom achieved. The lack of interoperability hinders effective collaboration across organizational and jurisdictional boundaries. In this article, we propose a general architecture for emergency communications that incorporates (1) an information broker, (2) events and event-driven processes, and (3) interoperability. This general architecture addresses the question of how an information broker can overcome obstacles, breach boundaries for seamless communication, and empower the public to become active participants in emergency communications. Our research is based on qualitative case studies on emergency communications, workshops with public safety agencies, and a comparative analysis of interoperability issues in the European public sector. This article features a conceptual approach toward proposing a way in which public safety agencies can achieve optimal interoperability and thereby enable seamless communication and crowdsourcing in emergency prevention and response
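The broker-centred, event-driven architecture the article proposes can be sketched as a minimal publish/subscribe loop. This is an illustrative sketch only, assuming a hypothetical `InformationBroker` class; the event names, agencies and payloads are invented sample data, not from the article.

```python
# Minimal sketch of an event-driven information broker: agencies subscribe to
# event types, and the broker relays each published event to every subscriber
# of that type, crossing organizational boundaries through one shared hub.
from collections import defaultdict

class InformationBroker:
    def __init__(self):
        self._subscribers = defaultdict(list)  # event type -> handler callbacks

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Deliver the event to every agency subscribed to this event type.
        for handler in self._subscribers[event_type]:
            handler(payload)

broker = InformationBroker()
received = []
broker.subscribe("flood.warning", lambda msg: received.append(("fire_dept", msg)))
broker.subscribe("flood.warning", lambda msg: received.append(("police", msg)))
broker.publish("flood.warning", "River level critical in sector 4")
print(len(received))  # 2
```

The design choice here mirrors the article's point: publishers and subscribers never address each other directly, so adding a new agency (or the public as a crowdsourcing participant) only requires a new subscription, not a change to existing systems.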
Community next steps for making globally unique identifiers work for biocollections data
Biodiversity data is being digitized and made available online at a rapidly increasing rate but current practices typically do not preserve linkages between these data, which impedes interoperation, provenance tracking, and assembly of larger datasets. For data associated with biocollections, the biodiversity community has long recognized that an essential part of establishing and preserving linkages is to apply globally unique identifiers at the point when data are generated in the field and to persist these identifiers downstream, but this is seldom implemented in practice. There has neither been coalescence towards one single identifier solution (as in some other domains), nor even a set of recommended best practices and standards to support multiple identifier schemes sharing consistent responses. In order to further progress towards a broader community consensus, a group of biocollections and informatics experts assembled in Stockholm in October 2014 to discuss community next steps to overcome current roadblocks. The workshop participants divided into four groups focusing on: identifier practice in current field biocollections; identifier application for legacy biocollections; identifiers as applied to biodiversity data records as they are published and made available in semantically marked-up publications; and cross-cutting identifier solutions that bridge across these domains. The main outcome was consensus on key issues, including recognition of differences between legacy and new biocollections processes, the need for identifier metadata profiles that can report information on identifier persistence missions, and the unambiguous indication of the type of object associated with the identifier. Current identifier characteristics are also summarized, and an overview of available schemes and practices is provided
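The workshop's two key asks, identifier metadata profiles and an unambiguous indication of the object type, can be illustrated with a small sketch. This is a hedged example only: UUIDs are just one of the multiple schemes the community discusses, and the field names below are illustrative, not a community standard.

```python
# Sketch of minting a globally unique identifier at the point of data capture,
# together with the metadata needed to interpret it downstream: the type of
# object it identifies and the identifier scheme used.
import uuid

def mint_guid(object_type: str) -> dict:
    """Create a record-level identifier plus minimal identifier metadata."""
    return {
        "guid": f"urn:uuid:{uuid.uuid4()}",
        "object_type": object_type,   # e.g. specimen, image, taxon record
        "scheme": "uuid",             # which identifier scheme was applied
    }

record = mint_guid("specimen")
print(record["guid"].startswith("urn:uuid:"))  # True
```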
Comparative study of healthcare messaging standards for interoperability in ehealth systems
Advances in information and communication technology have created the field of "health informatics," which amalgamates healthcare, information technology and business. The use of information systems in healthcare organisations dates back to the 1960s; however, the use of technology for healthcare records, referred to as Electronic Medical Records (EMR), has surged since the 1990s (Net-Health, 2017) due to advances in internet and web technologies. Electronic Medical Records (EMR), sometimes referred to as Personal Health Records (PHR), contain the patient's medical history, allergy information, immunisation status, medication, radiology images and other medically related billing information that is relevant. There are a number of benefits for the healthcare industry in sharing the data recorded in EMR and PHR systems between medical institutions (AbuKhousa et al., 2012). These benefits include convenience for patients and clinicians, cost-effective healthcare solutions, high quality of care, resolving the resource shortage and collecting a large volume of data for research and educational needs. My Health Record (MyHR) is a major project funded by the Australian government, which aims to have all data relating to the health of the Australian population stored in digital format, allowing clinicians to have access to patient data at the point of care. Prior to 2015, MyHR was known as the Personally Controlled Electronic Health Record (PCEHR). Although the Australian government has taken consistent initiatives, there is a significant delay (Pearce and Haikerwal, 2010) in implementing eHealth projects and related services. While this delay is caused by many factors, interoperability is identified as the main problem (Benson and Grieve, 2016c) resisting this project's delivery.
To discover the current interoperability challenges in the Australian healthcare industry, this comparative study examines Health Level 7 (HL7) messaging models, namely HL7 V2, V3 and FHIR (Fast Healthcare Interoperability Resources). In this study, interoperability, security and privacy are the main elements compared. In addition, a case study conducted in NSW hospitals was used to understand the extent to which health messaging standards are used in the healthcare sector. Predominantly, the project used the comparative study method on different HL7 (Health Level Seven) messages and identified the messaging standard best suited to covering the interoperability, security and privacy requirements of electronic health records. The issues related to practical implementation, changeover and training requirements for healthcare professionals are also discussed.
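The contrast between the message families compared in the study can be illustrated side by side: HL7 V2 is positional and pipe-delimited, while FHIR exchanges self-describing JSON resources. The patient values below are invented sample data, not from the thesis, and the FHIR identifier system URI is an assumption.

```python
# Illustrative contrast between the two messaging styles: an HL7 V2
# pipe-delimited PID (patient identification) segment versus roughly the
# same demographics expressed as a FHIR Patient resource.

# HL7 V2: meaning depends on field position within the segment.
hl7_v2_pid = "PID|1||12345^^^Hospital^MR||Doe^Jane||19800101|F"

# FHIR: the same demographics as a named, typed JSON structure.
fhir_patient = {
    "resourceType": "Patient",
    "identifier": [{"system": "urn:oid:hospital-mrn", "value": "12345"}],
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "birthDate": "1980-01-01",
    "gender": "female",
}

# Extracting the family name: V2 requires counting pipes and carets,
# whereas the FHIR field is addressed by name.
family_name_v2 = hl7_v2_pid.split("|")[5].split("^")[0]
print(family_name_v2, fhir_patient["name"][0]["family"])  # Doe Doe
```

This positional-versus-named distinction is one root of the interoperability differences the study compares: a misplaced V2 field silently changes meaning, while a FHIR resource fails validation instead.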