
    Grand Challenges of Traceability: The Next Ten Years

    In 2007, the software and systems traceability community met at the first Natural Bridge symposium on the Grand Challenges of Traceability to establish and address research goals for achieving effective, trustworthy, and ubiquitous traceability. Ten years later, in 2017, the community came together to evaluate a decade of progress towards achieving these goals. These proceedings document some of that progress. They include a series of short position papers, representing current work in the community organized across four process axes of traceability practice. The sessions covered topics including Trace Strategizing, Trace Link Creation and Evolution, Trace Link Usage, real-world applications of Traceability, and Traceability Datasets and benchmarks. Two breakout groups focused on the importance of creating and sharing traceability datasets within the research community, and discussed challenges related to the adoption of tracing techniques in industrial practice. Members of the research community are engaged in many active, ongoing, and impactful research projects. Our hope is that ten years from now we will be able to look back at a productive decade of research and claim that we have achieved the overarching Grand Challenge of Traceability, which seeks for traceability to be always present, built into the engineering process, and to have "effectively disappeared without a trace". We hope that others will see the potential that traceability has for empowering software and systems engineers to develop higher-quality products at increasing levels of complexity and scale, and that they will join the active community of Software and Systems traceability researchers as we move forward into the next decade of research.

    Data DNA: The Next Generation of Statistical Metadata

    Describes the components of a complete statistical metadata system and suggests ways to create and structure metadata for better access to and understanding of data sets by diverse users.

    Ontology-Based Question Answering System in Restricted Domain

    The complexity of natural language presents difficult challenges that traditional Question and Answer (Q&A) systems, such as Frequently Asked Questions (FAQ) lists that rely on collections of predefined questions and answers, are unable to address. A traditional Q&A system is unable to retrieve an exact answer in response to the different kinds of natural language questions asked by users. Therefore, this paper presents the architecture of an Ontology-based Question Answering (OQA) system applied to the library domain. The main task of the OQA system is to parse questions expressed in natural language with respect to a restricted domain ontology and retrieve the matching answer. A restricted ontology model is designed as a knowledge base to assist this process, based on the effective information derived from the questions. In addition, an ontology matching algorithm is developed to handle the question-answer matching process. A case study is taken from the library of Sultanah Nur Zahirah of Universiti Malaysia Terengganu, and a prototype of the Sultanah Nur Zahirah Digital Learning ONtology-based FAQ System (SONFAQS) is developed. The experimental results show that the architecture is feasible and significantly improves man-machine interaction by shortening the searching time.
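The question-answer matching the abstract describes can be sketched as scoring ontology concepts by their term overlap with a question. The sketch below is purely illustrative: the concept names, terms, and answers are hypothetical and do not come from the actual SONFAQS system.

```python
# Illustrative sketch of ontology-based question-answer matching
# (hypothetical "library domain" concepts; not the SONFAQS implementation).

# Toy ontology: concept -> related terms and a predefined answer.
ONTOLOGY = {
    "opening_hours": {
        "terms": {"open", "hours", "time", "close"},
        "answer": "The library opens 9am-10pm on weekdays.",
    },
    "borrowing": {
        "terms": {"borrow", "loan", "renew", "book"},
        "answer": "Members may borrow up to 10 books for 14 days.",
    },
}

def match_question(question):
    """Score each ontology concept by term overlap with the question,
    and return the answer of the best-scoring concept (or None)."""
    words = set(question.lower().replace("?", "").split())
    best, best_score = None, 0
    for concept, data in ONTOLOGY.items():
        score = len(words & data["terms"])
        if score > best_score:
            best, best_score = concept, score
    return ONTOLOGY[best]["answer"] if best else None

print(match_question("What time does the library open?"))
```

A real system would parse the question against a proper ontology (classes, relations) rather than bag-of-words overlap, but the scoring-and-lookup shape of the matching step is the same.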

    A Linked Data representation of the Nomenclature of Territorial Units for Statistics

    The recent publication of public sector information (PSI) data sets has brought to the attention of the scientific community the redundant presence of location-based context. At the same time it stresses the inadequacy of current Linked Data services for exploiting the semantics of such contextual dimensions for easing entity retrieval and browsing. This paper describes our approach to publishing geographical subdivisions in Linked Data format, to support e-government and the public sector in publishing their data sets. The topological knowledge published can be reused to enrich the geographical context of other data sets; in particular, we propose an exploitation scenario using statistical data sets described with the SCOVO ontology. The topological knowledge is then exploited within a service that supports the navigation and retrieval of statistical geographical entities for the EU territory. Geographical entities, in the extent of this paper, are linked data resources that describe objects that have a geographical extension. The data and services presented in this paper allow the discovery of resources that contain or are contained by a given entity URI and their representation within map widgets. We present an approach for a geography-based service that helps in querying qualitative spatial relations for the EU statistical geography (proper containment so far). We also provide a rationale for publishing geographical information in Linked Data format based on our experience, within the EnAKTing project, in publishing UK PSI data.
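The proper-containment queries described above ("which resources contain, or are contained by, a given entity") amount to transitive closure over direct containment facts. A minimal sketch, using a hypothetical toy hierarchy of NUTS-style subdivisions rather than the actual EnAKTing data:

```python
# Sketch of proper-containment queries over a toy set of "contains" facts
# (hypothetical identifiers, not the real NUTS / EnAKTing resources).

CONTAINS = {  # parent -> direct children
    "eu:EU": {"eu:DE", "eu:FR"},
    "eu:DE": {"eu:DE1", "eu:DE2"},
    "eu:DE1": {"eu:DE11"},
}

def contained_by(entity):
    """All entities that properly contain the given entity (transitive)."""
    ancestors = set()
    frontier = {p for p, kids in CONTAINS.items() if entity in kids}
    while frontier:
        ancestors |= frontier
        frontier = {p for p, kids in CONTAINS.items()
                    if kids & frontier} - ancestors
    return ancestors

def contains(entity):
    """All entities properly contained in the given entity (transitive)."""
    result = set()
    frontier = set(CONTAINS.get(entity, ()))
    while frontier:
        result |= frontier
        frontier = set().union(
            *(CONTAINS.get(e, set()) for e in frontier)) - result
    return result
```

In a Linked Data setting the same closure would typically be computed by a SPARQL property path or materialised in the triple store; the toy dictionaries here just make the relation explicit.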

    Semantic Models as Knowledge Repositories for Data Modellers in the Financial Industry

    Data modellers working in the financial industry are expected to use both technical and business knowledge to transform data into the information required to meet regulatory reporting requirements. This dissertation explores the role that semantic models such as ontologies and concept maps can play in the acquisition of financial and regulatory concepts by data modellers. While there is widespread use of semantic models in the financial industry to specify how information is exchanged between IT systems, there is limited use of these models as knowledge repositories. The objective of this research is to evaluate the use of a semantic model based knowledge repository using a combination of interviews, model implementation and experimental evaluation. A semantic model implementation is undertaken to represent the knowledge required to understand sample banking regulatory reports. An iterative process of semantic modelling and knowledge acquisition is followed to create a representation of technical and business domain knowledge in the repository. The completed repository is made up of three concept maps hyper-linked to an ontology. An experimental evaluation of the usefulness of the repository is made by asking both expert and novice financial data modellers to answer questions that require both banking knowledge and an understanding of the information in regulatory reports. The research suggests that both novice and expert data modellers found the knowledge in the ontology and concept maps to be accessible, effective and useful, with the combination of model types allowing for variations in individual styles of knowledge acquisition. The research suggests that the trend in the financial industry towards semantic models and ontologies would benefit from knowledge management and modelling techniques.

    Ontologies on the semantic web

    As an information technology, the World Wide Web has enjoyed spectacular success. In just ten years it has transformed the way information is produced, stored, and shared in arenas as diverse as shopping, family photo albums, and high-level academic research. The “Semantic Web” was touted by its developers as equally revolutionary but has not yet achieved anything like the Web’s exponential uptake. This 17 000 word survey article explores why this might be so, from a perspective that bridges both philosophy and IT.

    The Semantic Web Paradigm for a Real-Time Agent Control (Part I)

    From the Semantic Web point of view, computers must have access to structured collections of information and sets of inference rules that they can use to conduct automated reasoning. Adding logic to the Web, that is, the means to use rules to make inferences, choose courses of action and answer questions, is the current task for the distributed IT community. The real power of the Intelligent Web will be realized when people create many programs that collect Web content from diverse sources, process the information and exchange the results with other programs. The first part of this paper is an introduction to Semantic Web properties and summarises agent characteristics and their current importance in the digital economy. The second part presents the predictability of a multiagent system used in a learning process for a control problem.

    Keywords: Semantic Web, agents, fuzzy knowledge, evolutionary computing
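"Using rules to make inferences" over structured Web data is, at its simplest, forward chaining: repeatedly applying rules to a set of facts until no new facts are derived. A minimal sketch with one hypothetical rule over toy triples (the names are invented for illustration):

```python
# Tiny forward-chaining sketch: rules derive new (subject, predicate, object)
# facts from existing ones until a fixed point. Facts and the rule are
# hypothetical examples, not from the paper.
from itertools import product

facts = {
    ("alice", "worksFor", "acme"),
    ("acme", "locatedIn", "paris"),
}

def rule_based_in(fact_set):
    """Rule: if X worksFor Y and Y locatedIn Z, then X basedIn Z."""
    derived = set()
    for (s1, p1, o1), (s2, p2, o2) in product(fact_set, repeat=2):
        if p1 == "worksFor" and p2 == "locatedIn" and o1 == s2:
            derived.add((s1, "basedIn", o2))
    return derived

# Apply the rule until no new facts appear (a fixed point).
while True:
    new = rule_based_in(facts) - facts
    if not new:
        break
    facts |= new

print(("alice", "basedIn", "paris") in facts)  # the inferred fact
```

Real Semantic Web reasoners (e.g. over RDF/OWL) work on the same principle, with standardised vocabularies and far more efficient rule evaluation than this pairwise scan.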

    GPCR-OKB: the G protein coupled receptor oligomer knowledge base

    The rapid expansion of available data about G Protein Coupled Receptor (GPCR) dimers/oligomers over the past few years requires an effective system to organize this information electronically. Based on an ontology derived from a community dialog involving colleagues using experimental and computational methodologies, we developed the GPCR-Oligomerization Knowledge Base (GPCR-OKB). GPCR-OKB is a system that supports browsing and searching for GPCR oligomer data. Such data were manually derived from the literature. While focused on GPCR oligomers, GPCR-OKB is seamlessly connected to GPCRDB, facilitating the correlation of information about GPCR protomers and oligomers.