
    Change Management: The Core Task of Ontology Versioning and Evolution

    Change management, a key issue in ontology versioning and evolution, is still not fully addressed, which to some extent forms a barrier to the smooth evolution of ontologies. The central task in supporting evolving ontologies is to distinguish and recognize the changes made during evolution. Most current work on ontology versioning does not keep a record of changes to the ontology, preventing the user from tracing those changes backward and forward, or at least understanding the rationale behind them. We propose an approach that gathers evidence of ontology changes, keeps track of them, and manages them in an engineering fashion.
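
    As a concrete illustration of the kind of record such an approach implies, the following sketch models an append-only ontology change log that supports walking changes backward and forward per entity. All class and field names are hypothetical; the abstract does not fix a schema.

    ```python
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class OntologyChange:
        """One logged change, e.g. renaming or removing a concept.
        Field names are illustrative; the paper does not specify them."""
        operation: str              # e.g. "rename_class", "delete_property"
        target: str                 # URI of the affected ontology entity
        old_value: str | None
        new_value: str | None
        rationale: str              # why the change was made
        timestamp: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))

    class ChangeLog:
        """Append-only log of ontology changes."""
        def __init__(self):
            self._changes: list[OntologyChange] = []

        def record(self, change: OntologyChange) -> None:
            self._changes.append(change)

        def history(self, target: str) -> list[OntologyChange]:
            """All recorded changes affecting one entity, oldest first."""
            return [c for c in self._changes if c.target == target]
    ```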

    Evolutionary intelligent agents for e-commerce: Generic preference detection with feature analysis

    Product recommendation and preference tracking systems have been adopted extensively in e-commerce. However, the heterogeneity of product attributes impedes efficient yet personalized product brokering. Among the assortment of product attributes, some intrinsic generic attributes relate significantly to a customer's generic preference. This paper proposes a novel approach to detecting generic product attributes through feature analysis, with the objective of providing insight into customers' generic preferences. Furthermore, a genetic algorithm is used to find a suitable set of feature weights, thereby reducing the rate of misclassification. A prototype has been implemented and the experimental results are promising.
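
    A minimal sketch of the weight-tuning idea follows, assuming a real-valued weight encoding with truncation selection, one-point crossover, and per-gene mutation; the classifier and fitness definition are stand-ins, not the paper's actual design.

    ```python
    import random

    def fitness(weights, samples, labels, classify):
        """Fraction of samples classified correctly under these feature
        weights. `classify` is a stand-in for the weighted classifier."""
        correct = sum(classify(x, weights) == y for x, y in zip(samples, labels))
        return correct / len(samples)

    def evolve_weights(n_features, samples, labels, classify,
                       pop_size=30, generations=50, mutation_rate=0.1):
        # Initial population: random weight vectors in [0, 1].
        pop = [[random.random() for _ in range(n_features)]
               for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(pop,
                            key=lambda w: fitness(w, samples, labels, classify),
                            reverse=True)
            parents = scored[: pop_size // 2]          # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, n_features)  # one-point crossover
                child = a[:cut] + b[cut:]
                for i in range(n_features):            # per-gene mutation
                    if random.random() < mutation_rate:
                        child[i] = random.random()
                children.append(child)
            pop = parents + children
        return max(pop, key=lambda w: fitness(w, samples, labels, classify))
    ```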

    An Approach to Cope with Ontology Changes for Ontology-based Applications

    Keeping track of ontology changes is becoming a critical issue for ontology-based applications, because updating an ontology that is in use may introduce inconsistencies between the ontology and its knowledge base, dependent ontologies, and dependent applications and services. Current research concentrates on the creation of ontologies and on managing ontology changes by easing the mapping between ontology versions and keeping instances consistent; little work addresses controlling the impact on dependent applications and services, which is the aim of the system presented in this paper. The approach we propose is to manually capture and log ontology changes, use this log to analyse incoming RDQL queries, and amend them as necessary. Revised queries can then be used to query the knowledge base of the applications and services. We present the infrastructure of our approach, based on the problems and scenarios identified within ontology-based systems, discuss the issues met during design and implementation, and consider some problems whose solutions will benefit the further development of our approach.
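
    The query-amendment step might look like the following sketch, where a logged rename is applied to an incoming query before it reaches the knowledge base. The log format, URIs, and query string are illustrative only, not the paper's actual interfaces.

    ```python
    def rewrite_query(query: str, change_log: list[dict]) -> str:
        """Rewrite a query against an updated ontology by applying logged
        renames, so it still matches the application's knowledge base.
        Each illustrative log entry maps an old entity URI to its
        replacement."""
        for change in change_log:
            if change["operation"] == "rename" and change["old_uri"] in query:
                query = query.replace(change["old_uri"], change["new_uri"])
        return query

    # Example: a class was renamed between ontology versions.
    log = [{"operation": "rename",
            "old_uri": "http://example.org/onto#Employee",
            "new_uri": "http://example.org/onto#StaffMember"}]
    q = ("SELECT ?x WHERE (?x, "
         "<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>, "
         "<http://example.org/onto#Employee>)")
    print(rewrite_query(q, log))
    ```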

    The Research Object Suite of Ontologies: Sharing and Exchanging Research Data and Methods on the Open Web

    Research in life sciences is increasingly being conducted in a digital and online environment. In particular, life scientists have been pioneers in embracing new computational tools to conduct their investigations. To support the sharing of digital objects produced during such research investigations, we have witnessed in the last few years the emergence of specialized repositories, e.g., DataVerse and FigShare. Such repositories provide users with the means to share and publish datasets that were used or generated in research investigations. While these repositories have proven their usefulness, interpreting and reusing the evidence behind most research results remains a challenging task. Additional contextual descriptions are needed to understand how those results were generated and the circumstances under which they were concluded. Because of this, scientists are calling for models that go beyond the publication of datasets to systematically capture the life cycle of scientific investigations and provide a single entry point to information about the hypothesis investigated, the datasets used, the experiments carried out, the results of the experiments, the people involved in the research, etc. In this paper we present the Research Object (RO) suite of ontologies, which provide a structured container to encapsulate research data and methods along with essential metadata descriptions. Research Objects are portable units that enable the sharing, preservation, interpretation and reuse of research investigation results. The ontologies we present have been designed in light of requirements that we gathered from life scientists, and have been built upon existing popular vocabularies to facilitate interoperability. Furthermore, we have developed tools to support the creation and sharing of Research Objects, thereby promoting and facilitating their adoption.
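
    As a rough illustration, an RO-style manifest can be assembled as an RDF graph. The sketch below uses rdflib with the wf4ever vocabularies commonly associated with Research Objects; treat the exact terms and URIs as indicative rather than the paper's normative schema.

    ```python
    from rdflib import Graph, Namespace, URIRef, Literal
    from rdflib.namespace import RDF, DCTERMS

    # Vocabulary namespaces from the wf4ever RO suite; terms indicative.
    RO = Namespace("http://purl.org/wf4ever/ro#")
    ORE = Namespace("http://www.openarchives.org/ore/terms/")

    g = Graph()
    g.bind("ro", RO)
    g.bind("ore", ORE)
    g.bind("dct", DCTERMS)

    ro = URIRef("http://example.org/my-study/")  # hypothetical RO URI
    dataset = URIRef("http://example.org/my-study/data/measurements.csv")
    workflow = URIRef("http://example.org/my-study/workflow.t2flow")

    g.add((ro, RDF.type, RO.ResearchObject))
    g.add((ro, DCTERMS.creator, Literal("A. Scientist")))
    for resource in (dataset, workflow):
        g.add((resource, RDF.type, RO.Resource))
        g.add((ro, ORE.aggregates, resource))    # RO aggregates its parts

    print(g.serialize(format="turtle"))
    ```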

    HandyBroker - An intelligent product-brokering agent for M-commerce applications with user preference tracking

    One of the potential applications of agent-based systems is m-commerce. Much research has been done on making such systems intelligent enough to personalize their services for users. In most systems, user-supplied keywords are used to generate user profiles. In this paper, an evolutionary, ontology-based product-brokering agent is designed for m-commerce applications. It uses an evaluation function to represent a user's preference instead of the usual keyword-based profile. Using genetic algorithms, the agent tracks the user's preferences for a particular product by tuning the parameters inside its evaluation function. A prototype called "Handy Broker" has been implemented in Java, and the results obtained from our experiments look promising for m-commerce use.
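
    A toy version of such an evaluation function is sketched below, assuming products are attribute-value maps and preferences are weights plus ideal values that the genetic algorithm would tune from user feedback; none of these names or the scoring formula come from the paper.

    ```python
    def evaluate(product: dict, params: dict) -> float:
        """Hypothetical evaluation function standing in for a keyword
        profile: scores a product by weighted closeness to the user's
        ideal attribute values."""
        score = 0.0
        for attr, weight in params["weights"].items():
            ideal = params["ideals"][attr]
            diff = abs(product.get(attr, 0.0) - ideal)
            score += weight / (1.0 + diff)   # closer to ideal -> higher score
        return score

    phone = {"price": 99.0, "weight_g": 120.0, "battery_h": 10.0}
    prefs = {"weights": {"price": 0.5, "weight_g": 0.2, "battery_h": 0.3},
             "ideals":  {"price": 80.0, "weight_g": 100.0, "battery_h": 12.0}}
    print(evaluate(phone, prefs))
    ```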

    A pattern-based approach to a cell tracking ontology

    Time-lapse microscopy has thoroughly transformed our understanding of biological motion and developmental dynamics, from single cells to entire organisms. The increasing amount of cell tracking data demands tools that make extracted data searchable and interoperable across experiment and data types. To address this problem, this paper reports on progress in building the Cell Tracking Ontology (CTO): an ontology framework for describing, querying and integrating data from complementary experimental techniques in the domain of cell tracking experiments. CTO is based on a basic knowledge structure, the cellular genealogy, which serves as a backbone model for integrating specific biological ontologies into tracking data. As a first step, we integrate the Phenotype and Trait Ontology (PATO), one of the most relevant ontologies for annotating cell tracking experiments. The CTO requires both the integration of data at various levels of generality and the proper structuring of the collected information. Therefore, to give the ontology a sound foundation, we have built on the rich body of work on top-level ontologies and established three generic ontology design patterns addressing three modeling challenges in representing cellular genealogies: representing entities that exist in time, that undergo changes over time, and that are organized into more complex structures such as situations.
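
    At its simplest, a cellular genealogy of this kind reduces to a tree of time-indexed cells. The sketch below is a hypothetical rendering of that backbone structure, not the CTO's actual axiomatization; field names and the PATO annotation are illustrative.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Cell:
        """Node in a cellular genealogy: a cell tracked over time that
        may divide into daughter cells. Field names are illustrative."""
        cell_id: str
        birth_frame: int                  # first time-lapse frame it appears in
        death_frame: int | None = None    # None while still being tracked
        phenotypes: list[str] = field(default_factory=list)  # e.g. PATO IDs
        daughters: list["Cell"] = field(default_factory=list)

        def divide(self, frame: int, left_id: str, right_id: str):
            """Record a division: this cell ends, two daughters begin."""
            self.death_frame = frame
            self.daughters = [Cell(left_id, frame), Cell(right_id, frame)]
            return self.daughters[0], self.daughters[1]

    def lineage(root: Cell) -> list[str]:
        """Depth-first listing of a genealogy, for queries over the tree."""
        out = [root.cell_id]
        for d in root.daughters:
            out.extend(lineage(d))
        return out

    root = Cell("c0", birth_frame=0, phenotypes=["PATO:0000461"])  # 'normal'
    root.divide(frame=42, left_id="c0.1", right_id="c0.2")
    print(lineage(root))  # ['c0', 'c0.1', 'c0.2']
    ```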

    Graph-based discovery of ontology change patterns

    Ontologies can support a variety of purposes, ranging from capturing conceptual knowledge to the organisation of digital content and information. However, information systems are always subject to change, and ontology change management can pose challenges. We investigate ontology change representation and the discovery of change patterns. Ontology changes are formalised as graph-based change logs. We use attributed graphs, which are typed over a generic graph with node and edge attribution. We analyse ontology change logs, represented as graphs, and identify frequent change sequences. Such sequences serve as a reference for discovering reusable, often domain-specific and usage-driven change patterns. We describe the pattern discovery algorithms and measure their performance using experimental results.
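
    The discovery step can be approximated in a much simplified sequence-based form (the paper itself works over attributed graphs) by counting contiguous change subsequences that reach a minimum support, as in this sketch; the operation names are invented for illustration.

    ```python
    from collections import Counter

    def frequent_change_patterns(logs, length, min_support):
        """Toy stand-in for pattern discovery: count contiguous
        change-operation sequences of a given length across several logs
        and keep those occurring at least `min_support` times."""
        counts = Counter()
        for log in logs:
            for i in range(len(log) - length + 1):
                counts[tuple(log[i:i + length])] += 1
        return {seq: n for seq, n in counts.items() if n >= min_support}

    logs = [
        ["add_class", "add_subclass_axiom", "add_label", "delete_class"],
        ["add_class", "add_subclass_axiom", "add_label", "rename_class"],
        ["add_class", "add_subclass_axiom", "add_comment"],
    ]
    print(frequent_change_patterns(logs, length=2, min_support=3))
    # {('add_class', 'add_subclass_axiom'): 3}
    ```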

    Grand Challenges of Traceability: The Next Ten Years

    In 2007, the software and systems traceability community met at the first Natural Bridge symposium on the Grand Challenges of Traceability to establish and address research goals for achieving effective, trustworthy, and ubiquitous traceability. Ten years later, in 2017, the community came together to evaluate a decade of progress towards achieving these goals. These proceedings document some of that progress. They include a series of short position papers, representing current work in the community, organized across four process axes of traceability practice. The sessions covered topics including Trace Strategizing, Trace Link Creation and Evolution, Trace Link Usage, real-world applications of Traceability, and Traceability Datasets and benchmarks. Two breakout groups focused on the importance of creating and sharing traceability datasets within the research community, and discussed challenges related to the adoption of tracing techniques in industrial practice. Members of the research community are engaged in many active, ongoing, and impactful research projects. Our hope is that ten years from now we will be able to look back at a productive decade of research and claim that we have achieved the overarching Grand Challenge of Traceability, which seeks for traceability to be always present, built into the engineering process, and to have "effectively disappeared without a trace". We hope that others will see the potential that traceability has for empowering software and systems engineers to develop higher-quality products at increasing levels of complexity and scale, and that they will join the active community of software and systems traceability researchers as we move forward into the next decade of research.

    Warranted Diagnosis

    A diagnostic process is an investigative process that takes a clinical picture as input and outputs a diagnosis. We propose a method for distinguishing diagnoses that are warranted from those that are not, based on the cognitive processes of which they are the outputs. Processes designed and vetted to reliably produce correct diagnoses will output what we shall call 'warranted diagnoses'. The latter are diagnoses that should be trusted even if they later turn out to have been wrong. Our work is based on the recently developed Cognitive Process Ontology and further develops the Ontology of General Medical Science. It also has applications in fields such as intelligence, forensics, and predictive maintenance, all of which rely on vetted processes designed to secure the reliability of their outputs.