
    Semantic Model Alignment for Business Process Integration

    Business process models describe an enterprise’s way of conducting business and thus form the basis for shaping the organization and engineering the appropriate supporting, or even enabling, IT. A major task in working with such models is their analysis and comparison for the purpose of aligning them. Because models can differ semantically not only in the modeling languages used, but even more so in how the natural language used to label model elements has been applied, correctly identifying the intended meaning of a legacy model is a non-trivial task that so far has only been solved by humans. Particularly during reorganizations, the set-up of B2B collaborations, or mergers and acquisitions, the semantic analysis of models of different origin that need to be consolidated is a manual effort that is not only tedious and error-prone but also time-consuming, costly, and often repetitive. To facilitate automation of this task by means of IT, this thesis presents the new method of Semantic Model Alignment. Its application makes it possible to extract and formalize the semantics of models in order to relate them based on the modeling language used, and to determine similarities based on the natural language used in model element labels. The resulting alignment supports model-based semantic business process integration. The research follows a design-science oriented approach, and the method was developed together with all of its enabling artifacts. These results were published as the research progressed and are presented in this thesis through a selection of peer-reviewed publications that comprehensively describe the various aspects.
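
    As an illustration of the label-based similarity that the method builds on (the abstract does not specify the concrete matching technique, so the token-overlap measure and the example labels below are assumptions), a minimal Python sketch might compare two model element labels as follows:

        import re

        def tokens(label):
            """Lowercase a model element label and split it into word tokens."""
            return set(re.findall(r"[a-z]+", label.lower()))

        def label_similarity(a, b):
            """Jaccard overlap of the token sets of two element labels (0..1)."""
            ta, tb = tokens(a), tokens(b)
            return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

        # Hypothetical labels taken from two process models of different origin.
        print(label_similarity("Check customer order", "Verify order of customer"))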

    SPEIR: Scottish Portals for Education, Information and Research. Final Project Report: Elements and Future Development Requirements of a Common Information Environment for Scotland

    The SPEIR (Scottish Portals for Education, Information and Research) project was funded by the Scottish Library and Information Council (SLIC). It ran from February 2003 to September 2004, slightly longer than the 18 months originally scheduled, and was managed by the Centre for Digital Library Research (CDLR). With SLIC's agreement, community stakeholders were represented in the project by the Confederation of Scottish Mini-Cooperatives (CoSMiC), an organisation whose members include SLIC, the National Library of Scotland (NLS), the Scottish Further Education Unit (SFEU), the Scottish Confederation of University and Research Libraries (SCURL), regional cooperatives such as the Ayrshire Libraries Forum (ALF), and representatives from the Museums and Archives communities in Scotland. Aims: a common information environment for Scotland. The aims of the project were to:
    o Conduct basic research into the distributed information infrastructure requirements of the Scottish Cultural Portal pilot and the public library CAIRNS integration proposal;
    o Develop associated pilot facilities by enhancing existing facilities or developing new ones;
    o Ensure that both infrastructure proposals and pilot facilities were sufficiently generic to be utilised in support of other portals developed by the Scottish information community;
    o Ensure the interoperability of infrastructural elements beyond Scotland through adherence to established or developing national and international standards.
    Since the Scottish information landscape is taken by CoSMiC members to encompass relevant activities in Archives, Libraries, Museums, and related domains, the project was, in essence, concerned with identifying, researching, and developing the elements of an internationally interoperable common information environment for Scotland, and with determining the best path for future progress.

    Making Study Populations Visible through Knowledge Graphs

    Treatment recommendations within Clinical Practice Guidelines (CPGs) are largely based on findings from clinical trials and case studies, referred to here as research studies, that are often based on highly selective clinical populations, referred to here as study cohorts. When medical practitioners apply CPG recommendations, they need to understand how well their patient population matches the characteristics of those in the study cohort, and thus are confronted with the challenges of locating the study cohort information and making an analytic comparison. To address these challenges, we develop an ontology-enabled prototype system, which exposes the population descriptions in research studies in a declarative manner, with the ultimate goal of allowing medical practitioners to better understand the applicability and generalizability of treatment recommendations. We build a Study Cohort Ontology (SCO) to encode the vocabulary of study population descriptions, which are often reported in the first table of the published work and are therefore commonly referred to as Table 1. We leverage the well-used Semanticscience Integrated Ontology (SIO) for defining property associations between classes. Further, we model the key components of Table 1s, i.e., collections of study subjects, subject characteristics, and statistical measures, in RDF knowledge graphs. We design scenarios for medical practitioners to perform population analysis, and generate cohort similarity visualizations to determine the applicability of a study population to the clinical population of interest. Our semantic approach of making study populations visible through standardized representations of Table 1s allows users to quickly derive clinically relevant inferences about study populations. Comment: 16 pages, 4 figures, 1 table; accepted to the ISWC 2019 Resources Track (https://iswc2019.semanticweb.org/call-for-resources-track-papers/).
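
    To make the modelling idea concrete, here is a minimal rdflib sketch of a Table 1 fragment as an RDF graph; the namespace, class names, and property names are placeholders rather than the actual SCO or SIO terms:

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, XSD

        # Placeholder namespace; the real SCO and SIO IRIs would be used in practice.
        EX = Namespace("http://example.org/cohort#")

        g = Graph()
        g.bind("ex", EX)

        # A study cohort with one subject characteristic and a statistical measure,
        # mirroring the kind of content reported in a Table 1.
        g.add((EX.trialCohort, RDF.type, EX.StudyCohort))
        g.add((EX.trialCohort, EX.hasCharacteristic, EX.ageAtBaseline))
        g.add((EX.ageAtBaseline, RDF.type, EX.SubjectCharacteristic))
        g.add((EX.ageAtBaseline, EX.meanValue, Literal(62.4, datatype=XSD.decimal)))

        print(g.serialize(format="turtle"))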

    Data protection regulation ontology for compliance

    The GDPR is the current data protection regulation in Europe. A significant market demand has been created ever since the GDPR came into force, mostly because its reach extends beyond European borders whenever the data being processed belongs to European citizens. The number of companies that require some type of regulation or standard compliance is ever-increasing, and the need for cyber security and privacy specialists has never been greater. Moreover, the GDPR has inspired a series of similar regulations all over the world. This further increases the market demand and makes the work of companies that operate internationally more complicated and difficult to scale. The purpose of this thesis is to help consultancy companies automate their work by using semantic structures known as ontologies, thereby increasing productivity and reducing costs. Ontologies can store data and their semantics (meaning) in a machine-readable format. In this thesis, an ontology has been designed to help consultants generate the checklists (or runbooks) that they are required to deliver to their clients. The ontology is designed to handle concepts such as security measures, company information, company architecture, data sensitivity, privacy mechanisms, the distinction between technical and organisational measures, and even conditionality. The ontology was evaluated using a litmus test composed of a collection of competency questions, which were gathered from the use cases of the ontology. These questions were then translated into SPARQL queries that were run against a test ontology. The ontology successfully passed the litmus test, so it can be concluded that the implemented functionality matches the proposed design.
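
    To illustrate how a competency question can be turned into a SPARQL query and run against an ontology (the class names, property names, and namespace below are illustrative placeholders, not the thesis ontology's vocabulary), a small rdflib sketch:

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, RDFS

        EX = Namespace("http://example.org/gdpr#")  # placeholder namespace

        g = Graph()
        # Toy facts: one security measure classified as a technical measure.
        g.add((EX.diskEncryption, RDF.type, EX.TechnicalMeasure))
        g.add((EX.diskEncryption, RDFS.label, Literal("Full-disk encryption")))

        # Competency question: "Which technical measures are modelled, and how are they labelled?"
        query = """
        PREFIX ex: <http://example.org/gdpr#>
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?measure ?label WHERE {
            ?measure a ex:TechnicalMeasure ;
                     rdfs:label ?label .
        }
        """
        for row in g.query(query):
            print(row.measure, row.label)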

    Extracting a Body of Knowledge as a First Step Towards Defining a United Software Engineering Curriculum Guideline

    The computing field is a rapidly changing environment, and as such, software engineering education must be able to adjust quickly to new needs. Industry adapts to new technologies as fast as it can, but the critical issue is the need for recent graduates with the necessary expertise, knowledge of new trends and technologies, and practical experience. The industries that employ graduates of computing degree programs aim to hire those who are familiar with the latest technical trends, tools, and methodologies, and the software engineering curriculum needs to respond quickly to these needs. Unfortunately, software engineering curricula cannot change and adopt new technologies quickly: modifying a curriculum to better serve industry needs is a long and tedious process in an academic setting. It is essential to give software engineers a first-rate education and training to ensure they have the knowledge and abilities needed to succeed in their careers. In addition, there are multiple computing curriculum recommendations endorsed by computing professional organizations that provide guidelines for curriculum design. The work proposed for this research plans to develop a method of extracting a body of knowledge and generating an ontology using Natural Language Processing algorithms. This will automate the process of extracting information from curriculum guidelines and models and storing that information in one unified ontology. It is then envisioned that the resulting ontology will be used in future research to assist in creating or validating a software engineering curriculum, ensuring that all knowledge areas are covered and that the outcomes match the established guidelines and models. This automated body-of-knowledge extraction process is the first and fundamental step towards defining the United Software Engineering Curriculum Guideline.
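
    As a hedged sketch of the envisioned pipeline (the abstract names no specific NLP tooling, so the spaCy-based noun-phrase extraction and the placeholder namespace are assumptions), candidate knowledge-area terms could be pulled from guideline text and recorded in an ontology graph like this:

        import spacy
        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, RDFS

        EX = Namespace("http://example.org/se-bok#")  # placeholder namespace

        # Assumes the small English spaCy model has been downloaded beforehand.
        nlp = spacy.load("en_core_web_sm")

        guideline_text = ("Graduates should understand software requirements elicitation, "
                          "software design patterns, and software quality assurance.")

        g = Graph()
        for chunk in nlp(guideline_text).noun_chunks:
            term = chunk.text.strip().lower()
            node = EX[term.replace(" ", "_")]
            # Record each candidate knowledge-area term as a class with a readable label.
            g.add((node, RDF.type, RDFS.Class))
            g.add((node, RDFS.label, Literal(term)))

        print(g.serialize(format="turtle"))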

    Beyond actions: exploring the discovery of tactics from user logs

    Search log analysis has become a common practice for gaining insights into user search behaviour; it helps build an understanding of user needs and preferences, as well as of how well a system supports such needs. Currently, log analysis is typically focused on low-level user actions, i.e. logged events such as issued queries and clicked results, and often only a selection of such events are logged and analysed. However, the types of logged events may differ widely from interface to interface, making comparison between systems difficult. Further, the interpretation and subsequent analysis of a selection of events may lead to conclusions taken out of context: for example, the statistics of observed query reformulations may be influenced by the existence of a relevance feedback component. Alternatively, in lab studies user activities can be analysed at a higher level, such as search tactics and strategies, abstracted away from detailed interface implementation. Unfortunately, until now the manual coding required to map logged events to higher-level interpretations has prevented large-scale use of this type of analysis. In this paper, we propose a new method for analysing search logs by (semi-)automatically identifying user search tactics from logged events, allowing large-scale analysis that is comparable across search systems. In addition, as the resulting analysis is at the tactical level, we reduce potential issues surrounding the interpretation of low-level user actions for log analysis. We validate the efficiency and effectiveness of the proposed tactic identification method using logs of two reference search systems of different natures: a product search system and a video search system. With the identified tactics, we perform a series of novel log analyses in terms of the entropy rate of user search tactic sequences, demonstrating how this type of analysis allows comparisons of user search behaviour across systems of different nature and design. This analysis provides insights not achievable with traditional log analysis.
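
    The paper's tactic taxonomy and identification rules are not given in the abstract, but the flavour of the approach can be sketched: map logged events to tactic labels with simple rules and compute a plug-in entropy estimate over the resulting tactic sequence, a crude stand-in for the entropy-rate analysis mentioned above. The event names, rules, and tactic labels below are illustrative assumptions.

        import math
        from collections import Counter

        # Hypothetical mapping from low-level logged events to higher-level tactics.
        EVENT_TO_TACTIC = {
            "query_issued": "SPECIFY",
            "query_reformulated": "REFORMULATE",
            "result_clicked": "EXAMINE",
            "filter_applied": "NARROW",
        }

        def tactic_sequence(events):
            """Translate a list of logged event names into tactic labels."""
            return [EVENT_TO_TACTIC[e] for e in events if e in EVENT_TO_TACTIC]

        def tactic_entropy(seq):
            """Plug-in entropy of the tactic distribution, in bits per tactic."""
            counts = Counter(seq)
            total = len(seq)
            return -sum((c / total) * math.log2(c / total) for c in counts.values())

        log = ["query_issued", "result_clicked", "query_reformulated",
               "result_clicked", "filter_applied", "result_clicked"]
        print(tactic_sequence(log))
        print(f"{tactic_entropy(tactic_sequence(log)):.2f} bits per tactic")

    Because the analysis operates on tactic labels rather than interface-specific events, sequences produced this way can be compared across systems of different design.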

    Entity-Oriented Search

    This open access book covers all facets of entity-oriented search—where “search” can be interpreted in the broadest sense of information access—from a unified point of view, and provides a coherent and comprehensive overview of the state of the art. It represents the first synthesis of research in this broad and rapidly developing area. Selected topics are discussed in depth, the goal being to establish fundamental techniques and methods as a basis for future research and development. Additional topics are treated at a survey level only, containing numerous pointers to the relevant literature. A roadmap for future research, based on open issues and challenges identified along the way, rounds out the book. The book is divided into three main parts, sandwiched between introductory and concluding chapters. The first two chapters introduce readers to the basic concepts, provide an overview of entity-oriented search tasks, and present the various types and sources of data that will be used throughout the book. Part I deals with the core task of entity ranking: given a textual query, possibly enriched with additional elements or structural hints, return a ranked list of entities. This core task is examined in a number of different variants, using both structured and unstructured data collections, and numerous query formulations. In turn, Part II is devoted to the role of entities in bridging unstructured and structured data. Part III explores how entities can enable search engines to understand the concepts, meaning, and intent behind the query that the user enters into the search box, and how they can provide rich and focused responses (as opposed to merely a list of documents)—a process known as semantic search. The final chapter concludes the book by discussing the limitations of current approaches and suggesting directions for future research. Researchers and graduate students are the primary target audience of this book. A general background in information retrieval is sufficient to follow the material, including an understanding of basic probability and statistics concepts as well as a basic knowledge of machine learning concepts and supervised learning algorithms.
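
    As a toy illustration of the core entity-ranking task described above (given a textual query, return a ranked list of entities), the sketch below scores entities by term overlap between the query and a short entity description; the catalogue and the scoring function are placeholder assumptions, not a model taken from the book.

        import re
        from collections import Counter

        # Hypothetical entity catalogue: entity id -> short textual description.
        ENTITIES = {
            "Albert_Einstein": "theoretical physicist who developed the theory of relativity",
            "Marie_Curie": "physicist and chemist known for pioneering research on radioactivity",
            "Niels_Bohr": "physicist who contributed to atomic structure and quantum theory",
        }

        def terms(text):
            """Bag-of-words term counts for a piece of text."""
            return Counter(re.findall(r"[a-z0-9]+", text.lower()))

        def rank_entities(query, catalogue):
            """Rank entities by how many query terms their description matches."""
            q = terms(query)
            scores = {eid: sum(min(q[t], d[t]) for t in q)
                      for eid, d in ((eid, terms(desc)) for eid, desc in catalogue.items())}
            return sorted((e for e in scores if scores[e] > 0),
                          key=lambda e: scores[e], reverse=True)

        print(rank_entities("physicist quantum theory", ENTITIES))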

    Personalizable edge services for Web accessibility

    The Web Content Accessibility Guidelines by the W3C (W3C Recommendation, May 1999. http://www.w3.org/TR/WCAG10/) provide several suggestions for Web designers regarding how to author Web pages in order to make them accessible to everyone. In this context, this paper proposes the use of edge services as an efficient and general solution to promote accessibility and break down the digital barriers that prevent users with disabilities from actively participating in any aspect of society. The idea behind edge services mainly concerns the advantages of personalized navigation, in which content is tailored according to different factors, such as client device capabilities, communication systems and network conditions, and, finally, the preferences and/or abilities of the growing number of users that access the Web. To meet these requirements, Web designers have to efficiently provide content adaptation and personalization mechanisms in order to guarantee universal access to Internet content. The so far dominant paradigm of communication on the WWW, due to its simple request/response model, cannot efficiently address such requirements. Therefore, it must be augmented with new components that attempt to enhance the scalability, the performance, and the ubiquity of the Web. Edge servers, acting on the HTTP data flow exchanged between client and server, allow on-the-fly content adaptation as well as other complex functionalities beyond traditional caching and content replication services. These value-added services are called edge services and include personalization and customization, aggregation from multiple sources, geographical personalization of page navigation (with insertion/emphasis of content related to the user's geographical location), translation services, group navigation and awareness for social navigation, advanced services for bandwidth optimization such as adaptive compression and format transcoding, mobility, and ubiquitous access to Internet content. This paper presents Personalizable Accessible Navigation (PAN), a set of edge services designed to improve Web page accessibility, developed and deployed on top of a programmable intermediary framework. The characteristics and the location of the services, i.e., provided by intermediaries, as well as the personalization and the opportunity to select multiple profiles, make PAN a platform that is especially suitable for accessing the Web seamlessly, including from mobile terminals.
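
    The PAN services themselves run on a programmable intermediary framework that is not reproduced here; as a loose sketch of the underlying idea of profile-driven, on-the-fly adaptation of an HTTP response body, a small function might rewrite an HTML page according to a user profile (the profile keys and transformations are illustrative assumptions):

        import re

        def adapt_html(html, profile):
            """Apply simple, profile-driven adaptations to an HTML response body."""
            if profile.get("low_bandwidth"):
                # Drop images entirely for slow connections or text-only browsing.
                html = re.sub(r"<img\b[^>]*>", "", html, flags=re.IGNORECASE)
            if profile.get("large_text"):
                # Inject a stylesheet that enlarges the base font size.
                html = html.replace("</head>",
                                    "<style>body { font-size: 150%; }</style></head>", 1)
            return html

        page = "<html><head><title>News</title></head><body><img src='a.png'>Hello</body></html>"
        print(adapt_html(page, {"low_bandwidth": True, "large_text": True}))

    In a deployment like PAN, such transformations would run on the intermediary, between origin server and client, rather than in the page itself.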