
    Integrating multiple criteria decision analysis in participatory forest planning

    Forest planning in a participatory context often involves multiple stakeholders with conflicting interests. A promising approach for handling these complex situations is to integrate participatory planning with multiple criteria decision analysis (MCDA). The objective of this paper is to analyze the strengths and weaknesses of such an integrated approach, focusing on how the use of MCDA has influenced the participatory process. The paper outlines a model for a participatory MCDA process with five steps: stakeholder analysis, structuring of the decision problem, generation of alternatives, elicitation of preferences, and ranking of alternatives. This model was applied in a case study of a planning process for the urban forest in Lycksele, Sweden. In interviews with stakeholders, criteria for four different social groups were identified. Stakeholders also identified specific areas important to them and explained which activities the areas were used for and what forest management they wished for there. Existing forest data were combined with information from the interviews to create a map in which the urban forest was divided into zones of different management classes. Three alternative strategic forest plans were produced based on the zonal map. The stakeholders stated their preferences individually, using the Analytic Hierarchy Process in questionnaire form, and a ranking of alternatives and a consistency ratio were determined for each stakeholder. Rankings of alternatives were aggregated: first for each social group, using the arithmetic mean, and then into an overall ranking calculated from the group rankings using the weighted arithmetic mean. The participatory MCDA process in Lycksele is assessed against five social goals: incorporating public values into decisions, improving the substantive quality of decisions, resolving conflict among competing interests, building trust in institutions, and educating and informing the public. 
The results and assessment of the case study support the integration of participatory planning and MCDA as a viable option for handling complex forest-management situations. Key issues related to the MCDA methodology that need further exploration were identified: (1) the handling of place-specific criteria, (2) the development of alternatives, (3) the aggregation of individual preferences into a common preference, and (4) the application and evaluation of the integrated approach in real case studies.
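The two-stage aggregation described in the abstract can be sketched as follows. The priority scores and the equal group weights below are invented for illustration; the actual AHP priorities in the study came from the stakeholders' pairwise comparisons.

```python
# Two-stage aggregation of AHP priority scores (illustrative values only):
# individual scores are averaged within each social group (arithmetic mean),
# then the group means are combined using a weighted arithmetic mean.

def group_mean(scores):
    """Arithmetic mean of each alternative's score across a group's members."""
    n = len(scores)
    return [sum(member[i] for member in scores) / n
            for i in range(len(scores[0]))]

def overall_score(group_means, weights):
    """Weighted arithmetic mean of group-level scores (weights sum to 1)."""
    return [sum(w * g[i] for g, w in zip(group_means, weights))
            for i in range(len(group_means[0]))]

# Hypothetical AHP priorities for three alternative plans from two groups.
recreationists = [[0.5, 0.3, 0.2], [0.6, 0.2, 0.2]]
residents      = [[0.2, 0.5, 0.3], [0.4, 0.4, 0.2]]

means = [group_mean(recreationists), group_mean(residents)]
overall = overall_score(means, weights=[0.5, 0.5])
ranking = sorted(range(len(overall)), key=lambda i: -overall[i])
```

With equal group weights, plan 0 comes out on top here; changing the weights shifts the overall ranking, which is exactly the aggregation issue the paper flags for further study.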

    Datalog± Ontology Consolidation

    Knowledge bases in the form of ontologies are receiving increasing attention, as they allow the available knowledge to be represented clearly, including both the knowledge itself and the constraints imposed on it by the domain or the users. In particular, Datalog± ontologies are attractive because of their decidability and their ability to deal with the massive amounts of data found in real-world environments; however, as with many other ontological languages, their application in collaborative environments often leads to inconsistency-related issues. In this paper we introduce the notion of incoherence for Datalog± ontologies, in terms of the satisfiability of sets of constraints, and show how, under specific conditions, incoherence leads to inconsistent Datalog± ontologies. The main contribution of this work is a novel approach to restoring both consistency and coherence in Datalog± ontologies. The proposed approach is based on kernel contraction, and restoration is performed by applying incision functions that select formulas to delete. However, instead of working over the minimal incoherent/inconsistent sets found in the ontologies, our operators make incisions over non-minimal structures called clusters. We present a construction for consolidation operators, along with the properties they are expected to satisfy. Finally, we establish the relation between the construction and the properties by means of a representation theorem. Although this proposal is presented for the consolidation of Datalog± ontologies, the operators can be applied to other ontological languages, such as Description Logics, making them apt for use in collaborative environments like the Semantic Web.
Deagustini, Cristhian Ariel David; Martinez, Maria Vanina; Falappa, Marcelo Alejandro; Simari, Guillermo Ricardo. Consejo Nacional de Investigaciones Científicas y Técnicas; Universidad Nacional del Sur, Departamento de Ciencias e Ingeniería de la Computación; Argentina.
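As a rough illustration of contraction by incision, the toy sketch below treats an ontology as a set of formula identifiers and its conflicts as given minimal subsets ("kernels"); the incision function picks at least one formula from each kernel to delete. Note this simplifies the paper's approach, which incises over non-minimal clusters rather than minimal sets, and all the formula names are made up.

```python
# Toy kernel-style consolidation: given an ontology (a set of formula ids)
# and its minimal conflicting subsets ("kernels"), an incision function
# selects at least one formula from each kernel; removing the selection
# leaves no kernel fully contained in the ontology, restoring consistency.

def incision(kernels):
    """Pick one formula from each kernel (here: the lexicographically least)."""
    return {min(kernel) for kernel in kernels}

def consolidate(ontology, kernels):
    """Remove the incised formulas from the ontology."""
    return ontology - incision(kernels)

ontology = {"a", "b", "c", "d"}
kernels = [{"a", "b"}, {"b", "d"}]       # hypothetical minimal conflicts
repaired = consolidate(ontology, kernels)
```

After consolidation no kernel survives intact in `repaired`, which is the success condition kernel contraction is built around; the choice of incision function (here a trivial one) is what the paper's postulates and representation theorem constrain.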

    Towards a competency model for adaptive assessment to support lifelong learning

    Adaptive assessment provides efficient and personalised routes to establishing the proficiencies of learners. We can envisage a future in which learners are able to maintain and expose their competency profile to multiple services throughout their lives, and those services will use the competency information in the model to personalise assessment. Current competency standards tend to oversimplify the representation of competency and of the knowledge domain. This paper presents a competency model for evaluating learned capability by considering achieved competencies, to support adaptive assessment for lifelong learning. The model provides a multidimensional view of competencies and provides for interoperability between systems as the learner progresses through life. The proposed competency model is being developed and implemented in the JISC-funded Placement Learning and Assessment Toolkit (mPLAT) project at the University of Southampton. This project, which takes a Service-Oriented approach, will contribute to the JISC community by adding mobile assessment tools to the E-framework.

    Collaboratively Patching Linked Data

    Today's Web of Data is noisy. Linked Data often needs extensive preprocessing to enable efficient use of heterogeneous resources. While consistent and valid data provide the key to efficient data processing and aggregation, we face two main challenges: (1) identifying erroneous facts and tracking their origins in dynamically connected datasets is difficult, and (2) efforts to curate deficient facts in Linked Data are rarely exchanged. Since erroneous data is often duplicated and (re-)distributed by mashup applications, keeping data tidy is not only the responsibility of a few original publishers but becomes a mission for all distributors and consumers of Linked Data as well. We present a new approach to exposing and reusing patches on erroneous data in order to enhance, and add quality information to, the Web of Data. The feasibility of our approach is demonstrated by the example of a collaborative game that patches statements in DBpedia data and provides notifications for relevant changes. Comment: 2nd International Workshop on Usage Analysis and the Web of Data (USEWOD2012) at the 21st International World Wide Web Conference (WWW2012), Lyon, France, April 17th, 201
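In the simplest case, an exchangeable patch over erroneous Linked Data can be pictured as a delete/insert pair over RDF triples plus provenance. The structure below is a minimal invented sketch of that idea, not the vocabulary or API actually used by the approach, and the DBpedia fact shown is a deliberately wrong example.

```python
# Minimal sketch of a reusable patch on an erroneous RDF statement:
# a delete/insert pair plus provenance, applied to a set of
# (subject, predicate, object) triples. Invented for illustration only.

def apply_patch(triples, patch):
    """Return the triples with the patch's deletions removed and insertions added."""
    return (triples - patch["delete"]) | patch["insert"]

triples = {("dbpedia:Lyon", "dbo:country", "dbpedia:Italy")}   # erroneous fact
patch = {
    "delete": {("dbpedia:Lyon", "dbo:country", "dbpedia:Italy")},
    "insert": {("dbpedia:Lyon", "dbo:country", "dbpedia:France")},
    "author": "some-curator",                                   # provenance
}
patched = apply_patch(triples, patch)
```

Because the patch is data rather than an in-place edit, other consumers of the same erroneous statement can apply it too, which is the reuse the abstract argues for.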

    Reason Maintenance - Conceptual Framework

    This paper describes the conceptual framework for reason maintenance developed as part of WP2

    Institutional audit : Queen Mary, University of London


    A Machine Learning Based Analytical Framework for Semantic Annotation Requirements

    The Semantic Web is an extension of the current web in which information is given well-defined meaning. The aim of the Semantic Web is to improve the quality and intelligence of the current web by turning its contents into a machine-understandable form. Semantic-level information is therefore one of the cornerstones of the Semantic Web. The process of adding semantic metadata to web resources is called Semantic Annotation. There are many obstacles to Semantic Annotation, such as multilinguality, scalability, and issues related to the diversity and inconsistency of content across different web pages. Due to the wide range of domains and the dynamic environments in which Semantic Annotation systems must operate, automating the annotation process is one of the significant challenges in this field. To overcome this problem, different machine learning approaches have been utilized, including supervised learning, unsupervised learning, and more recent ones such as semi-supervised learning and active learning. In this paper we present an inclusive layered classification of Semantic Annotation challenges and discuss the most important issues in this field. We also review and analyze machine learning applications for solving semantic annotation problems. To this end, the article closely studies and categorizes related research, for better understanding and to reach a framework that can map machine learning techniques onto Semantic Annotation challenges and requirements.