    Semantic Benchmarking of Process Models - An Ontology-Based Approach

    This article suggests an approach that allows the costly analysis of processes (e.g., in service-oriented architectures) for benchmarking purposes to be partially automated, so that performance indicators as well as qualitative differences between processes become apparent. The approach is based on appropriate ontologies, which make process models both syntactically and semantically comparable. We present a conceptual model for this new approach to process benchmarking, a framework, and a software prototype for analyzing and comparing individual process models. We provide an overview of our multi-method evaluation methodology and delineate the technical, conceptual, and economic evaluation perspectives together with their respective outcomes. This evaluation allowed us to determine whether our approach is generally suitable for generating novel and useful information on different process models that describe the same problem domain.
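
    The central mechanism, annotating activities from different process models with concepts from a shared ontology so that the models become comparable despite different wording, can be illustrated with a minimal sketch. All names and the overlap metric below are hypothetical illustrations, not the authors' implementation:

```python
# A minimal sketch (all names hypothetical): activity labels from two
# process models are mapped onto shared ontology concepts, making the
# models comparable despite different wording.

# Hypothetical ontology fragment: activity label -> canonical concept
ONTOLOGY = {
    "check invoice": "InvoiceVerification",
    "verify bill": "InvoiceVerification",
    "approve payment": "PaymentApproval",
    "authorize payout": "PaymentApproval",
    "archive record": "Archiving",
}

def to_concepts(process: list[str]) -> set[str]:
    """Annotate a process model's activity labels with ontology concepts."""
    return {ONTOLOGY[label] for label in process if label in ONTOLOGY}

def concept_overlap(p1: list[str], p2: list[str]) -> float:
    """Jaccard similarity over shared concepts, a crude stand-in for a
    qualitative comparison of two semantically annotated process models."""
    c1, c2 = to_concepts(p1), to_concepts(p2)
    return len(c1 & c2) / len(c1 | c2) if c1 | c2 else 0.0

process_a = ["check invoice", "approve payment", "archive record"]
process_b = ["verify bill", "authorize payout"]
print(concept_overlap(process_a, process_b))  # 0.666...: same domain, partial overlap
```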

    Generation and Applications of Knowledge Graphs in Systems and Networks Biology

    The acceleration in the generation of data in the biomedical domain has necessitated the use of computational approaches to assist in its interpretation. However, these approaches rely on the availability of high-quality, structured, formalized biomedical knowledge. This thesis has two goals: to improve methods for curation and semantic data integration in order to generate high-granularity biological knowledge graphs, and to develop novel methods for using prior biological knowledge to propose new biological hypotheses. The first two publications describe an ecosystem for handling biological knowledge graphs encoded in the Biological Expression Language throughout the stages of curation, visualization, and analysis. The following two publications describe the reproducible acquisition and integration of high-granularity knowledge with low contextual specificity from structured biological data sources on a massive scale, and support the semi-automated curation of new content at high speed and precision. After building the ecosystem and acquiring content, the last three publications in this thesis demonstrate three different applications of biological knowledge graphs in modeling and simulation. The first demonstrates the use of agent-based modeling to simulate neurodegenerative disease biomarker trajectories with biological knowledge graphs as priors. The second applies network representation learning to prioritize nodes in biological knowledge graphs based on corresponding experimental measurements, in order to identify novel targets. Finally, the third uses biological knowledge graphs and develops algorithms to deconvolute the mechanism of action of drugs, which could also serve to identify drug repositioning candidates. Ultimately, this thesis lays the groundwork for production-level applications of drug repositioning algorithms and other knowledge-driven approaches to analyzing biomedical experiments.
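
    A toy sketch of the node-prioritization idea follows: combining a network score over the knowledge graph with experimental measurements to rank candidate targets. The graph, measurements, and scoring are hypothetical, and simple degree centrality stands in for the learned network representations used in the thesis:

```python
# Toy sketch (hypothetical graph and measurements): ranking knowledge-graph
# nodes by combining a network score with experimental measurements. Degree
# centrality stands in for the learned network representations of the thesis.
import networkx as nx

# A miniature biological knowledge graph (edges: "interacts with")
graph = nx.Graph([("GeneA", "GeneB"), ("GeneB", "GeneC"), ("GeneC", "DrugX")])

# Hypothetical experimental scores, e.g. differential expression
measurements = {"GeneA": 0.2, "GeneB": 1.5, "GeneC": 0.9, "DrugX": 0.0}

# Combine topology with measurements to prioritize candidate targets
centrality = nx.degree_centrality(graph)
priority = {n: centrality[n] * measurements.get(n, 0.0) for n in graph.nodes}

for node, score in sorted(priority.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.3f}")  # GeneB ranks first in this toy example
```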

    CHORUS Deliverable 2.2: Second report - identification of multi-disciplinary key issues for gap analysis toward EU multimedia search engines roadmap

    After addressing the state of the art during the first year of CHORUS and establishing the existing landscape in multimedia search engines, we identified and analyzed gaps in the European research effort during our second year. In this period we focused on three directions, notably technological issues, user-centred issues and use cases, and socio-economic and legal aspects. These were assessed by two central studies: first, a concerted vision of the functional breakdown of a generic multimedia search engine, and second, representative use-case descriptions with a related discussion of the technological challenges they imply. Both studies were carried out in cooperation and consultation with the community at large through EC concertation meetings (multimedia search engines cluster), several meetings with our Think-Tank, presentations at international conferences, and surveys addressed to coordinators of EU projects as well as national initiatives. Based on the feedback obtained, we identified two types of gaps: core technological gaps that involve research challenges, and “enablers”, which are not necessarily technical research challenges but have an impact on innovation progress. New socio-economic trends are presented, as well as emerging legal challenges.

    Process-Oriented Information Logistics: Aligning Process Information with Business Processes

    During the last decade, research in the field of business process management (BPM) has focused on the design, modeling, execution, monitoring, and optimization of business processes. What has been neglected, however, is providing knowledge workers and decision makers with the information they need when performing knowledge-intensive business processes such as product engineering, customer support, or strategic management. Today, knowledge workers and decision makers are confronted with a massive load of data, making it difficult for them to discover the information relevant to their tasks. Particularly challenging in this context is the alignment of process-related information (process information for short), such as e-mails, office files, forms, checklists, guidelines, and best practices, with business processes and their tasks. In practice, process information is not only stored in large, distributed, and heterogeneous sources, but is usually managed separately from business processes. For example, shared drives, databases, enterprise portals, and enterprise information systems are used to store process information, while business processes are managed using advanced process management technology. As a consequence, process information and business processes often need to be linked manually; i.e., process information is hard-wired to business processes, e.g., in enterprise portals that associate specific process information with process tasks. This approach often fails due to high maintenance efforts and missing support for the individual demands of knowledge workers and decision makers. In response to this problem, this thesis introduces process-oriented information logistics (POIL) as a new paradigm for delivering the right process information, in the right format and quality, at the right place and the right point in time, to the right people. In particular, POIL allows for the process-oriented, context-aware (i.e., personalized) delivery of process information to process participants. The goal is no longer to manually hard-wire process information to business processes, but to automatically identify and deliver relevant process information to knowledge workers and decision makers. The core component of POIL is a semantic information network (SIN), which comprises homogeneous information objects (e.g., e-mails, office files, guidelines), process objects (e.g., tasks, events, roles), and relationships between them. In particular, a SIN allows discovering objects linked with each other in different ways, e.g., objects addressing the same topic or needed when performing a particular process task. The SIN not only enables an integrated formal representation of process information and business processes, but also allows determining the relevance of process information for a given work context based on novel techniques and algorithms. This relevance determination is crucial for achieving the aforementioned overall goal of this thesis.
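
    The SIN can be pictured as a typed graph over process objects and information objects. A minimal sketch follows, with a hypothetical schema and hand-assigned relevance weights standing in for the scores the thesis computes with dedicated techniques and algorithms:

```python
# Minimal sketch of a SIN (hypothetical schema, hand-assigned weights):
# a typed graph linking process objects and information objects, queried
# for the information relevant to a given task.
import networkx as nx

sin = nx.Graph()
sin.add_node("Task:ReviewDesign", kind="process")
sin.add_node("Email:DesignFeedback", kind="information")
sin.add_node("Checklist:ReviewSteps", kind="information")
sin.add_node("Guideline:CodingStyle", kind="information")
# Edge weights stand in for the relevance scores the real techniques compute
sin.add_edge("Task:ReviewDesign", "Email:DesignFeedback", weight=0.9)
sin.add_edge("Task:ReviewDesign", "Checklist:ReviewSteps", weight=0.7)
sin.add_edge("Guideline:CodingStyle", "Checklist:ReviewSteps", weight=0.3)

def relevant_information(task: str, threshold: float = 0.5) -> list[str]:
    """Information objects linked to a task above a relevance threshold."""
    return [n for n in sin.neighbors(task)
            if sin.nodes[n]["kind"] == "information"
            and sin.edges[task, n]["weight"] >= threshold]

print(relevant_information("Task:ReviewDesign"))
# e.g. ['Email:DesignFeedback', 'Checklist:ReviewSteps']
```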

    GeoInterpret: An Ontological Engineering Methodology for Automated Interpretation of Geospatial Queries

    Despite advances in GIS technology, solving geospatial problems using current GIS platforms involves complex tasks requiring specialized skills and knowledge that are attainable only through formal training and experience in implementing GIS projects. These requisite skills and knowledge include: understanding domain-specific geospatial problems; understanding the GIS representation of real-world objects, concepts, and activities; knowing how to identify, locate, retrieve, and integrate geospatial data sets into GIS projects; knowing the specific geoprocessing capabilities available on specific GIS platforms; and skill in applying geoprocessing tools with appropriate data sets to solve problems effectively and efficiently. Users interested in solving application-domain problems often lack such skills and knowledge and must resort to GIS experts (this is especially true for applications dealing with diverse geospatial data sets and complex problems). There is therefore a gap between users' knowledge of geoprocessing and GIS tools and the GIS knowledge and skills needed to solve geospatial problems. To fill this gap, a new approach that automates the tasks involved in geospatial problem solving is needed. Of these tasks, the most important is the interpretation of geospatial queries (usually expressed in application-specific concepts and terminologies) and their mapping to geoprocessing operations implementable by a GIS. The goal of this research is to develop an ontological engineering methodology, called GeoInterpret, to automate this task of geospatial query interpretation and mapping. The methodology encompasses: a conceptualization of geospatial queries; a multiple-ontology approach for representing the knowledge needed to solve geospatial queries; a set of techniques for mapping elements between different ontologies; and a set of algorithms for geospatial query interpretation, mapping, and geoprocessing workflow composition. A proof of concept was developed to demonstrate the working of GeoInterpret.
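
    The interpretation-and-mapping step can be pictured as a chain of ontology lookups: a domain concept resolves to a geospatial concept, which in turn resolves to an ordered geoprocessing workflow. The vocabularies and operations below are invented for illustration and do not reflect GeoInterpret's actual ontologies:

```python
# Toy sketch (invented vocabularies): interpreting a domain query through a
# chain of ontology lookups that ends in an executable geoprocessing workflow.

# Application-domain term -> geospatial concept
DOMAIN_TO_GEO = {
    "flood risk zones": "areas_within_distance_of_rivers",
}

# Geospatial concept -> ordered geoprocessing operations (a workflow)
GEO_TO_OPERATIONS = {
    "areas_within_distance_of_rivers": [
        ("select_layer", {"layer": "rivers"}),
        ("buffer", {"distance_m": 500}),
        ("intersect", {"layer": "parcels"}),
    ],
}

def interpret(query_term: str) -> list[tuple[str, dict]]:
    """Map a domain query term to a geoprocessing workflow."""
    concept = DOMAIN_TO_GEO[query_term]
    return GEO_TO_OPERATIONS[concept]

for operation, params in interpret("flood risk zones"):
    print(operation, params)
```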

    Applying Wikipedia to Interactive Information Retrieval

    There are many opportunities to improve the interactivity of information retrieval systems beyond the ubiquitous search box. One idea is to use knowledge bases (e.g., controlled vocabularies, classification schemes, thesauri, and ontologies) to organize, describe, and navigate the information space. These resources are popular in libraries and specialist collections, but have proven too expensive and too narrow to be applied to everyday web-scale search. Wikipedia has the potential to bring structured knowledge into more widespread use. This online, collaboratively generated encyclopaedia is one of the largest and most consulted reference works in existence. It is broader, deeper, and more agile than the knowledge bases put forward to assist retrieval in the past. Rendering this resource machine-readable is a challenging task that has captured the interest of many researchers, and many see it as a key step in breaking the knowledge acquisition bottleneck that crippled previous efforts. This thesis claims that the roadblock can be sidestepped: Wikipedia can be applied effectively to open-domain information retrieval with minimal natural language processing or information extraction. The key is to focus on gathering and applying human-readable rather than machine-readable knowledge. To demonstrate this claim, the thesis tackles three separate problems: extracting knowledge from Wikipedia, connecting it to textual documents, and applying it to the retrieval process. First, we demonstrate that a large thesaurus-like structure can be obtained directly from Wikipedia, and that accurate measures of semantic relatedness can be mined from it efficiently. Second, we show that Wikipedia provides the features and training data needed for existing data mining techniques to accurately detect and disambiguate topics mentioned in plain text. Third, we provide two systems and user studies that demonstrate the utility of the Wikipedia-derived knowledge base for interactive information retrieval.
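
    One concrete example of mining relatedness from Wikipedia in this line of work is the Milne-Witten link-based measure, which compares the sets of articles that link to two topics. The sketch below follows that formulation (adapted from the Normalized Google Distance); the article data is hypothetical:

```python
# Sketch of a link-based relatedness measure in the spirit of this work
# (the Milne-Witten measure, adapted from the Normalized Google Distance).
# The in-link sets and article count below are hypothetical.
import math

def relatedness(links_a: set[str], links_b: set[str], total_articles: int) -> float:
    """Relatedness of two articles from the sets of articles linking to them:
    1.0 for identical in-link sets, approaching 0.0 for unrelated ones."""
    common = links_a & links_b
    if not common:
        return 0.0
    distance = (
        (math.log(max(len(links_a), len(links_b))) - math.log(len(common)))
        / (math.log(total_articles) - math.log(min(len(links_a), len(links_b))))
    )
    return max(0.0, 1.0 - distance)

# Hypothetical in-link sets for two articles
cat_links = {"Mammal", "Pet", "Felidae", "Purring"}
dog_links = {"Mammal", "Pet", "Canidae", "Barking"}
print(relatedness(cat_links, dog_links, total_articles=6_000_000))
```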