    A MAUT approach for reusing ontologies

    Knowledge resource reuse has become a popular approach within the ontology engineering field, mainly because it can speed up the ontology development process, saving time and money and promoting the application of good practices. The NeOn Methodology provides guidelines for reuse, including the selection of the most appropriate knowledge resources for reuse in ontology development. This is a complex decision-making problem in which different conflicting objectives, such as reuse cost, understandability, integration workload and reliability, have to be taken into account simultaneously. GMAA is a PC-based decision support system built on an additive multi-attribute utility model that is intended to allay the operational difficulties involved in the Decision Analysis methodology. The paper illustrates how it can be applied to select multimedia ontologies for reuse in developing a new ontology in the multimedia domain. It also demonstrates that the sensitivity analyses provided by GMAA are useful tools for making a final recommendation.
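    The additive multi-attribute utility model behind GMAA can be sketched in a few lines: each candidate ontology receives a per-attribute utility in [0, 1], and the overall utility is the weighted sum. The attribute names, scores and weights below are invented for illustration and are not taken from GMAA or the paper.

```python
# Minimal sketch of an additive multi-attribute utility model.
# Attributes, scores and weights are hypothetical examples; cost-type
# attributes (reuse cost, integration workload) are assumed already
# inverted so that higher is always better.

def additive_utility(scores, weights):
    """Overall utility = weighted sum of per-attribute utilities in [0, 1]."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[attr] * scores[attr] for attr in weights)

# Candidate ontologies scored on the four conflicting objectives
# mentioned in the abstract.
candidates = {
    "OntologyA": {"reuse_cost": 0.7, "understandability": 0.9,
                  "integration_workload": 0.6, "reliability": 0.8},
    "OntologyB": {"reuse_cost": 0.9, "understandability": 0.5,
                  "integration_workload": 0.8, "reliability": 0.6},
}
weights = {"reuse_cost": 0.3, "understandability": 0.2,
           "integration_workload": 0.2, "reliability": 0.3}

best = max(candidates, key=lambda c: additive_utility(candidates[c], weights))
```

    Sensitivity analysis of the kind GMAA offers would then vary the weights and observe whether the best candidate changes.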

    Contribution of ontology-driven services to the interoperability of multi-actor operational crisis management

    Depending on the scale and extent of a crisis, its operational management requires the rapid mobilisation and coordination of different emergency services. Unfortunately, this inter-service coordination is a very delicate exercise because of the diversity of the actors operating in the field and the heterogeneity of their organisations. Today coordination is lacking: information is only minimally shared between operational actors and communication is not formalised. These shortcomings lead to dysfunctional responses to crisis situations. To respond better to crises, we propose POLARISC, an interoperable inter-service coordination platform for the operational management of disasters that visualises the theatre of operations in real time. The objective of POLARISC is to support decision-making at every level of command. To achieve this, the first challenge of this thesis is to guarantee semantic interoperability between the different professional actors so that information can be exchanged and shared. The idea is to semantically formalise the knowledge of the operational-management actors by means of ontologies. We propose a federated approach that represents the data, services, processes and professions of each actor. We modelled the knowledge of the emergency actors by developing a modular ontology (POLARISCO) containing one ontological module per emergency actor, and integrated these modules to provide a shared vocabulary. The use of a top-level ontology and mid-level ontologies, respectively Basic Formal Ontology and the Common Core Ontologies, facilitates the integration of these modules and their mappings.
    The second challenge is to exploit these ontologies to reduce ambiguity and avoid misinterpretation of the information exchanged. We therefore propose a messaging service called PROMES, which semantically transforms a message sent by an emitting actor according to the ontological module of the receiving actor. PROMES builds on the POLARISCO ontology and semantically enriches the message to avoid any kind of ambiguity. PROMES relies mainly on two algorithms: a textual transformation algorithm followed by a semantic transformation algorithm. We instantiated the POLARISCO ontology with real data from the response to the 2015 Paris terrorist attacks in order to evaluate the ontology and the messaging service. The third and final challenge is to propose a multi-criteria decision-support service that suggests victim-evacuation strategies once the "plan blanc" (the French hospital emergency plan) has been activated. The objective is to find the hospital facilities best suited to the victim's condition. The choice of the most appropriate hospital depends on transport time and, above all, on the availability of material and human resources, so that victims are taken care of as quickly as possible. Our study comprises two steps: the first develops an ontological module that associates each pathology with the resources indispensable for the best possible care of victims according to their condition; the second develops an algorithm that checks the availability of the necessary resources, computes the waiting time before a victim can be taken care of in each hospital, and then selects the most appropriate hospital.
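    The selection step described last can be sketched as a small filter-then-minimise procedure: keep the hospitals that hold every resource the pathology requires, then minimise transport time plus waiting time. All data and field names below are hypothetical and not taken from the POLARISC platform.

```python
# Hedged sketch of resource-aware hospital selection: availability check
# first, then minimise total time until the victim is taken care of.
# Field names ("transport_min", "wait_min", "resources") are assumptions.

def suitable_hospitals(hospitals, required_resources):
    """Keep hospitals that currently hold every required resource."""
    return [h for h in hospitals
            if all(h["resources"].get(r, 0) > 0 for r in required_resources)]

def choose_hospital(hospitals, required_resources):
    """Among suitable hospitals, minimise transport plus waiting time."""
    candidates = suitable_hospitals(hospitals, required_resources)
    return min(candidates,
               key=lambda h: h["transport_min"] + h["wait_min"],
               default=None)  # None when no hospital can take the victim

hospitals = [
    {"name": "H1", "transport_min": 10, "wait_min": 30,
     "resources": {"icu_bed": 2, "surgeon": 1}},
    {"name": "H2", "transport_min": 25, "wait_min": 5,
     "resources": {"icu_bed": 1, "surgeon": 0}},
]
best = choose_hospital(hospitals, ["icu_bed", "surgeon"])
```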

    An Investigation into Dynamic Web Service Composition Using a Simulation Framework

    [Motivation] Web Services technology has emerged as a promising solution for creating distributed systems, with the potential to overcome the limitations of former distributed system technologies. Web services provide a platform-independent framework that enables companies to run their business services over the internet. Therefore, many techniques and tools are being developed to create business-to-business/business-to-customer applications. In particular, researchers are exploring ways to build new services from existing services by dynamically composing services from a range of resources. [Aim] This thesis aims to identify the technologies and strategies currently being explored for organising the dynamic composition of Web services, and to determine how extensively each of these has been demonstrated and assessed. In addition, the thesis studies the matchmaking and selection processes, which are essential to Web service composition. [Research Method] We undertook a mapping study of empirical papers published over the period 2000 to 2009. The aim of the mapping study was to identify the technologies and strategies currently being explored for organising the composition of Web services, and to determine how extensively each of these has been demonstrated and assessed. We then built a simulation framework to carry out experiments on composition strategies. The first experiment compared the results of a close replication of an existing study with the original results in order to evaluate our replication. The simulation framework was then used to investigate the use of a QoS model for supporting the selection process, comparing it with the ranking technique in terms of performance. [Results] The mapping study found 1172 papers that matched our search terms, from which 94 were classified as providing practical demonstration of ideas related to dynamic composition. We analysed 68 of these in more detail.
    Only 29 provided a `formal' empirical evaluation. From these, we selected a `baseline' study to test our simulation model. Running the experiments using simulated datasets showed that in the first experiment the results of the close replication study and the original study were similar in terms of their profile. In the second experiment, the results demonstrated that the QoS model was better than the ranking mechanism at selecting the composite plan with the highest quality score. [Conclusions] No one approach to service composition seemed to meet all needs, but a number have been investigated more thoroughly than others. The similarity between the results of the close replication and the original study showed the validity of our simulation framework and demonstrated that the results of the original study can be replicated. Using the simulation, it was shown that the QoS model outperformed the ranking mechanism in terms of the overall quality of the selected plan. The overall objective of this research was to develop a generic life-cycle model for Web service composition from a mapping study of the literature; this was then used to run simulations to replicate studies on matchmaking and to compare selection methods.
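    QoS-based selection of a composite plan can be illustrated with a small sketch: response times of the services in a plan add up, availabilities multiply, and the plan with the highest weighted score wins. The attribute names, weights and normalisation constant below are assumptions for illustration, not the thesis's actual QoS model.

```python
# Hypothetical QoS aggregation over composite service plans.
# "response_ms" is additive across a sequential plan; "availability"
# is multiplicative. Weights and max_time are illustrative constants.

def plan_qos(plan):
    """Aggregate QoS over a plan: times add up, availabilities multiply."""
    total_time = sum(s["response_ms"] for s in plan)
    availability = 1.0
    for s in plan:
        availability *= s["availability"]
    return total_time, availability

def plan_score(plan, w_time=0.5, w_avail=0.5, max_time=1000.0):
    """Weighted score in [0, 1]: faster and more available is better."""
    t, a = plan_qos(plan)
    return w_time * (1 - t / max_time) + w_avail * a

plans = {
    "planA": [{"response_ms": 120, "availability": 0.99},
              {"response_ms": 200, "availability": 0.95}],
    "planB": [{"response_ms": 80, "availability": 0.90},
              {"response_ms": 90, "availability": 0.90}],
}
best = max(plans, key=lambda name: plan_score(plans[name]))
```

    A plain ranking mechanism, by contrast, would order candidate services on a single attribute and cannot trade response time off against availability at the plan level.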

    Proceedings of the 2nd IUI Workshop on Interacting with Smart Objects

    These are the Proceedings of the 2nd IUI Workshop on Interacting with Smart Objects. Objects that we use in our everyday lives are expanding beyond their restricted interaction capabilities and providing functionality that goes far beyond their original purpose. They feature computing capabilities and are thus able to capture, process and store information and interact with their environments, turning them into smart objects.

    Identifying the Goal, User model and Conditions of Recommender Systems for Formal and Informal Learning

    Drachsler, H., Hummel, H. G. K., & Koper, R. (2009). Identifying the Goal, User model and Conditions of Recommender Systems for Formal and Informal Learning. Journal of Digital Information, 10(2), 4-24.
    The following article addresses open questions from the discussions in the first SIRTEL workshop at the EC-TEL conference 2007. It argues why personal recommender systems have to be adjusted to the specific characteristics of learning to support lifelong learners. Personal recommender systems strongly depend on the context or domain they operate in, and it is often not possible to take a recommender system from one context and transfer it to another context or domain. The article describes a number of distinct differences between personalized recommendation to consumers and recommendation to learners. Similarities and differences are translated into specific demands for learning and specific requirements for personal recommendation systems. It further suggests an evaluation approach for recommender systems in technology-enhanced learning.
    The work on this publication has been sponsored by the TENCompetence Integrated Project, funded by the European Commission's 6th Framework Programme, priority IST/Technology Enhanced Learning, Contract 027087 [http://www.tencompetence.org].

    Proceedings of the 3rd Workshop on Social Information Retrieval for Technology-Enhanced Learning

    Learning and teaching resources are available on the Web, both as digital learning content and as people resources (e.g. other learners, experts, tutors), and can be used to facilitate teaching and learning tasks. The remaining challenge is to develop, deploy and evaluate social information retrieval (SIR) methods, techniques and systems that provide learners and teachers with guidance through a potentially overwhelming variety of choices. The aim of the SIRTEL'09 workshop is to look beyond recent achievements and discuss specific topics, emerging research issues, new trends and endeavours in SIR for TEL. The workshop will bring together researchers and practitioners to present and, more importantly, to discuss the current status of research in SIR and TEL and its implications for science and teaching.

    Strategies for the intelligent selection of components

    It is becoming common to build applications as component-intensive systems: a mixture of fresh code and existing components. For application developers the selection of components to incorporate is key to overall system quality, so they want the 'best'. For each selection task, the application developer will define requirements for the ideal component and use them to select the most suitable one. While many software selection processes exist, there is a lack of repeatable, usable, flexible, automated processes with tool support. This investigation has focussed on finding and implementing strategies to enhance the selection of software components. The study was built around four research elements, targeting characterisation, process, strategies and evaluation. A post-positivist methodology was used, with the Spiral Development Model (SDM) structuring the investigation. Data for the study were generated using a range of qualitative and quantitative methods, including a survey approach, a range of case studies and quasi-experiments to focus on the specific tuning of tools and techniques. Evaluation and review are integral to the SDM: a Goal-Question-Metric (GQM)-based approach was applied to every Spiral.
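    A Goal-Question-Metric breakdown of the kind applied to each Spiral can be sketched as a small tree running from goal to questions to metrics. The goal, questions and metrics below are invented for illustration, not taken from the study.

```python
# Minimal, hypothetical GQM structure: one goal, its refining questions,
# and the metrics collected to answer each question.

gqm = {
    "goal": "Improve repeatability of component selection",
    "questions": {
        "Q1: How consistent are selections across evaluators?":
            ["inter-rater agreement"],
        "Q2: How much effort does a selection take?":
            ["person-hours per selection task"],
    },
}

def metrics_for(gqm_tree):
    """Flatten the metrics collected under every question of a goal."""
    return [m for ms in gqm_tree["questions"].values() for m in ms]
```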