4,227 research outputs found

    A Framework for Design and Composition of Semantic Web Services

    Semantic Web Services (SWS) are Web Services (WS) whose descriptions are semantically enhanced with markup languages (e.g., OWL-S). This semantic description enables external agents and programs to discover, compose and invoke SWSs. However, before a SWS is specified in a concrete language, it must be designed at a conceptual level to guarantee its correctness and to avoid inconsistencies among its internal components. In this paper, we present a framework for the design and (semi-)automatic composition of SWSs at a language-independent, knowledge level. This framework is based on a stack of ontologies that (1) describe the different parts of a SWS and (2) contain a set of axioms that act as design rules to be verified by the ontology instances. Based on these ontologies, the design and composition of SWSs can be viewed as the correct instantiation of the ontologies themselves. Once these instances have been created, they can be exported to SWS languages such as OWL-S.
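    As a rough illustration of the idea that composition reduces to correctly instantiating design-level descriptions, the Python sketch below models hypothetical service profiles and a single composition rule; the class, rule and service names are invented here for illustration and are not the paper's ontology stack.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceProfile:
    """Conceptual description of a service, independent of OWL-S syntax."""
    name: str
    inputs: set = field(default_factory=set)    # required input concepts
    outputs: set = field(default_factory=set)   # produced output concepts

def can_chain(producer: ServiceProfile, consumer: ServiceProfile) -> bool:
    """Design rule (axiom): a consumer may follow a producer only if every
    input concept it needs is produced upstream."""
    return consumer.inputs <= producer.outputs

# Two hypothetical profiles: the composition step is valid only if the rule holds.
geocode = ServiceProfile("Geocode", inputs={"Address"}, outputs={"Coordinates"})
weather = ServiceProfile("WeatherLookup", inputs={"Coordinates"}, outputs={"Forecast"})

assert can_chain(geocode, weather)   # valid composition step
```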

    NiftyNet: a deep-learning platform for medical imaging

    Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this application requires substantial implementation effort. Thus, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. NiftyNet provides a modular deep-learning pipeline for a range of medical imaging applications, including segmentation, regression, image generation and representation learning. Components of the NiftyNet pipeline, including data loading, data augmentation, network architectures, loss functions and evaluation metrics, are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on TensorFlow and supports TensorBoard visualization of 2D and 3D images and computational graphs by default. We present three illustrative medical image analysis applications built using NiftyNet: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses. NiftyNet enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or to extend the platform to new applications. Comment: Wenqi Li and Eli Gibson contributed equally to this work. M. Jorge Cardoso and Tom Vercauteren contributed equally to this work. 26 pages, 6 figures; update includes additional applications, updated author list and formatting for journal submission.
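    One example of a loss function tailored to medical image segmentation is the soft Dice loss, which pipelines of this kind commonly provide. The TensorFlow sketch below is a minimal, generic implementation of that idea, not NiftyNet's own code; the function name and tensor layout are assumptions made here for illustration.

```python
import tensorflow as tf

def soft_dice_loss(y_true, y_pred, eps=1e-6):
    """Soft Dice loss over a batch of binary segmentation maps.

    y_true, y_pred: float tensors of shape (batch, ...spatial...),
    with y_pred holding per-voxel foreground probabilities.
    """
    axes = list(range(1, len(y_true.shape)))        # reduce over all but the batch axis
    intersection = tf.reduce_sum(y_true * y_pred, axis=axes)
    denominator = tf.reduce_sum(y_true + y_pred, axis=axes)
    dice = (2.0 * intersection + eps) / (denominator + eps)
    return 1.0 - tf.reduce_mean(dice)               # average over the batch

# Tiny 2D example: a mostly correct prediction gives a loss close to 0.
y_true = tf.constant([[[1.0, 0.0], [0.0, 1.0]]])
y_pred = tf.constant([[[0.9, 0.1], [0.2, 0.8]]])
print(float(soft_dice_loss(y_true, y_pred)))
```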

    Interchanging lexical resources on the Semantic Web

    Lexica and terminology databases play a vital role in many NLP applications, but currently most such resources are published in application-specific formats or with custom access interfaces, leading to the problem that much of this data sits in "data silos" and is hence difficult to access. The Semantic Web, and in particular the Linked Data initiative, provides effective solutions to this problem, as well as possibilities for data reuse through inter-lexicon linking and the incorporation of data categories via dereferenceable URIs. The Semantic Web focuses on the use of ontologies to describe semantics on the Web, but currently there is no standard for providing complex lexical information for such ontologies or for describing the relationship between the lexicon and the ontology. We present our model, lemon, which aims to address these gaps.
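    As an illustration of how a lexicon-ontology link of this kind can be published as Linked Data, the Python/rdflib sketch below builds one minimal lemon-style lexical entry whose sense points at an ontology class. The namespace URI, property names and example resources are assumptions made here for illustration; consult the published lemon specification for the exact vocabulary.

```python
from rdflib import Graph, Namespace, Literal

# Namespaces assumed for illustration; the lemon URI should be checked against the spec.
LEMON = Namespace("http://lemon-model.net/lemon#")
EX = Namespace("http://example.org/lexicon#")
ONT = Namespace("http://example.org/ontology#")

g = Graph()
g.bind("lemon", LEMON)

entry = EX["cat"]
g.add((entry, LEMON.canonicalForm, EX["cat_form"]))
g.add((EX["cat_form"], LEMON.writtenRep, Literal("cat", lang="en")))

# The sense ties the lexical entry to an ontology concept (the key lexicon-ontology link).
g.add((entry, LEMON.sense, EX["cat_sense"]))
g.add((EX["cat_sense"], LEMON.reference, ONT["Cat"]))

print(g.serialize(format="turtle"))
```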

    Multilevel Modeling

    Domain-specific modeling languages (DSMLs) promise clear advantages over general-purpose modeling languages. However, their design poses a fundamental challenge: while economies of scale advocate the development of DSMLs that can be used in a wide range of cases, modeling productivity demands more specific language concepts tuned to individual requirements. Inspired by the actual use of technical languages (German: "Fachsprachen"), this paper presents a novel multilevel modeling approach to conceptual modeling and to the design of information systems. Unlike traditional language architectures such as the Meta Object Facility (MOF), it features a recursive architecture that allows for an arbitrary number of classification levels and, hence, for the design of hierarchies of DSMLs ranging from reference DSMLs to "local" DSMLs. It not only diminishes the conflict inherent in designing DSMLs, but also enables the reuse and integration of software artifacts in general. It further helps reduce modeling complexity by relaxing the rigid dichotomy between specialization and instantiation. Furthermore, it integrates a meta-modeling language with a metamodel of a reflective meta-programming language, thereby allowing for executable models. The specification of the language architecture is supplemented by a description of use scenarios that illustrate the potential of multilevel modeling and by a critical discussion of its peculiarities.
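    The Python sketch below illustrates, outside any particular language architecture, the core idea behind an arbitrary number of classification levels: an element acts simultaneously as an instance of the element above it and as a type for elements below it. It is an invented toy for illustration, not the modeling language proposed in the paper, and the element names are hypothetical.

```python
class Element:
    """A model element that is both an instance (of its meta element)
    and a type (for elements one level below)."""

    def __init__(self, name, meta=None, level=0, **attributes):
        self.name = name
        self.meta = meta          # the element one classification level up
        self.level = level
        self.attributes = attributes

    def instantiate(self, name, **attributes):
        """Create an element one level below, classified by self."""
        return Element(name, meta=self, level=self.level - 1, **attributes)

# Three classification levels instead of a fixed MOF-style hierarchy.
vehicle_type = Element("VehicleType", level=2)                 # reference DSML concept
truck = vehicle_type.instantiate("Truck", max_load_t=40)       # "local" DSML concept
my_truck = truck.instantiate("my_truck", licence="AB-123")     # concrete object

assert my_truck.meta is truck and truck.meta is vehicle_type
```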

    A Systematic Review of Tracing Solutions in Software Product Lines

    Software Product Lines are large-scale, multi-unit systems that enable massive, customized production. They consist of a base of reusable artifacts and points of variation that provide the system with flexibility, allowing customized products to be generated. However, maintaining a system of such complexity and flexibility can be error-prone and time-consuming. Indeed, any modification (addition, deletion or update) at the level of a product or an artifact can impact other elements. It is therefore worthwhile to adopt an efficient and organized traceability solution to maintain the Software Product Line. Still, traceability is not systematically implemented: it is usually set up to satisfy specific constraints (e.g. certification requirements), but abandoned in other situations. In order to draw a picture of the actual state of traceability solutions in the Software Product Line context, we conducted a systematic literature review. This review and its findings are detailed in the present article. Comment: 22 pages, 9 figures, 7 tables.
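    As a minimal illustration of what a traceability solution has to support, the sketch below keeps directed trace links between product-line artifacts and computes the impact set of a change. The artifact names and the API are hypothetical and not taken from any tool surveyed in the review.

```python
from collections import defaultdict

class TraceModel:
    """Minimal directed trace graph between product-line artifacts."""

    def __init__(self):
        self.links = defaultdict(set)   # artifact -> artifacts that depend on it

    def add_link(self, source, target):
        """Record that `target` depends on (traces to) `source`."""
        self.links[source].add(target)

    def impact_of(self, artifact):
        """All artifacts transitively affected by changing `artifact`."""
        affected, stack = set(), [artifact]
        while stack:
            for dep in self.links[stack.pop()]:
                if dep not in affected:
                    affected.add(dep)
                    stack.append(dep)
        return affected

traces = TraceModel()
traces.add_link("FeatureModel:Payment", "Component:PaymentGateway")
traces.add_link("Component:PaymentGateway", "Product:OnlineShop")
print(traces.impact_of("FeatureModel:Payment"))   # both dependants are affected
```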

    Extensibility of Enterprise Modelling Languages

    This thesis addresses three research foci in total. The first focus deals with BPMN extensions to be developed and presents their methodological implications within the existing language standards. On the one hand, this comprises concrete language extensions such as BPMN4CP, a BPMN extension for the multi-perspective modeling of clinical pathways. On the other hand, this part also concerns consequences for the modeling method, so that both the underlying language (i.e., the BPMN metamodel) and the method for extension development can be improved in parallel, thereby addressing the identified shortcomings. The second focus addresses language-independent questions of extensibility that either arose while working on the first part or were induced from its results. This research focus concentrates in particular on consolidating existing terminology, describing generically applicable extension mechanisms, and a user-oriented analysis of potential extension needs. This part thus lays the groundwork for the development of a generic extension method. It also includes a fundamental examination of enterprise modeling languages in general, since only a holistic, consistent and integrated language definition can make extensions possible and let them succeed. This concerns, for example, the specification of the intended semantics of a language.
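    For readers unfamiliar with how BPMN accommodates such domain-specific additions, the sketch below uses Python's standard library to emit a task enriched through the extensionElements hook of the BPMN 2.0 XML serialization. The cp namespace and the clinicalResource element are hypothetical stand-ins for illustration, not the actual BPMN4CP schema.

```python
import xml.etree.ElementTree as ET

BPMN = "http://www.omg.org/spec/BPMN/20100524/MODEL"   # BPMN 2.0 model namespace
CP = "http://example.org/bpmn4cp"                      # hypothetical extension namespace

ET.register_namespace("bpmn", BPMN)
ET.register_namespace("cp", CP)

# A task carrying domain-specific information via the standard extension point.
task = ET.Element(f"{{{BPMN}}}task", {"id": "Task_1", "name": "Administer medication"})
ext = ET.SubElement(task, f"{{{BPMN}}}extensionElements")
ET.SubElement(ext, f"{{{CP}}}clinicalResource", {"role": "nurse", "qualification": "IV-certified"})

print(ET.tostring(task, encoding="unicode"))
```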