
    Quantitative Argumentation Debates with Votes for Opinion Polling

    Opinion polls are used in a variety of settings to assess the opinions of a population, but they mostly conceal the reasoning behind these opinions. Argumentation, as understood in AI, can be used to evaluate opinions in dialectical exchanges, transparently articulating the reasoning behind them. We give a method integrating argumentation within opinion polling to empower voters to add new statements that render their opinions in the polls individually rational while at the same time justifying them. We then show how these poll results can be amalgamated to give a collectively rational set of voters in an argumentation framework. Our method relies upon Quantitative Argumentation Debate for Voting (QuAD-V) frameworks, which extend QuAD frameworks (a form of bipolar argumentation framework in which arguments have an intrinsic strength) with votes expressing individuals' opinions on arguments.
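    The QuAD-V semantics itself is defined in the paper; as a rough, non-authoritative illustration of the kind of computation involved, the sketch below evaluates argument strength in a DF-QuAD-like way, with the assumption (made only for this sketch) that each argument's votes are blended into its base score before attackers and supporters adjust it.

```python
# Sketch of a QuAD-style strength evaluation with votes.
# Assumptions of this sketch (not the paper's exact QuAD-V definitions):
# votes are blended into the base score, and attackers/supporters are
# aggregated with a DF-QuAD-style probabilistic sum.
from dataclasses import dataclass, field

@dataclass
class Argument:
    name: str
    base_score: float                              # intrinsic strength in [0, 1]
    votes: list = field(default_factory=list)      # +1 (agree) / -1 (disagree)
    attackers: list = field(default_factory=list)
    supporters: list = field(default_factory=list)

def voted_base(arg):
    """Blend the intrinsic score with the share of positive votes (assumed rule)."""
    if not arg.votes:
        return arg.base_score
    pro = sum(1 for v in arg.votes if v > 0) / len(arg.votes)
    return 0.5 * arg.base_score + 0.5 * pro

def aggregate(strengths):
    """Probabilistic-sum aggregation of attacker or supporter strengths."""
    total = 0.0
    for s in strengths:
        total = total + s - total * s
    return total

def strength(arg):
    v0 = voted_base(arg)
    va = aggregate(strength(a) for a in arg.attackers)
    vs = aggregate(strength(s) for s in arg.supporters)
    if va >= vs:
        return v0 - v0 * (va - vs)         # attacks dominate: weaken
    return v0 + (1.0 - v0) * (vs - va)     # supports dominate: strengthen

# Example: a poll statement attacked by one counter-argument.
claim = Argument("raise the budget", 0.6, votes=[1, 1, -1])
counter = Argument("it costs too much", 0.5, votes=[1, -1])
claim.attackers.append(counter)
print(round(strength(claim), 3))           # weakened below its voted base score
```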

    Erratum to: Database preference queries - a possibilistic logic approach with symbolic priorities.

    This note corrects a claim made in the above-mentioned paper about the exact representation of a conditional preference network by means of a possibilistic logic base with partially ordered symbolic weights. We provide a counter-example showing that the possibilistic logic representation is indeed not always exact. This is the basis of a short discussion on the difficulty of obtaining an exact representation.

    Automatic Concept Extraction in Semantic Summarization Process

    The Semantic Web offers a generic infrastructure for the interchange, integration and creative reuse of structured data, which can help to cross some of the boundaries that Web 2.0 is facing. Currently, Web 2.0 offers poor query possibilities apart from searching by keywords or tags. There has been a great deal of interest in the development of semantic-based systems to facilitate knowledge representation and extraction and content integration [1], [2]. A semantic-based approach to retrieving relevant material can be useful for addressing issues such as determining the type or the quality of the information suggested by a personalized environment. In this context, standard keyword search has very limited effectiveness: for example, it cannot filter by the type, the level or the quality of information. Potentially, one of the biggest application areas of content-based exploration is the personalized searching framework (e.g., [3], [4]). Whereas search engines nowadays provide largely anonymous information, a new framework might highlight or recommend web pages related to key concepts.
    We can consider semantic information representation as an important step towards wide, efficient manipulation and retrieval of information [5], [6], [7]. In the digital library community a flat list of attribute/value pairs is often assumed to be available. In the Semantic Web community, annotations are often assumed to be an instance of an ontology. Through ontologies the system can express key entities and relationships describing resources in a formal, machine-processable representation. An ontology-based knowledge representation can be used for content analysis and object recognition, for reasoning processes, and for enabling user-friendly and intelligent multimedia content search and retrieval.
    Text summarization has been an interesting and active research area since the 1960s. The underlying assumption is that a small portion, or several keywords, of the original long document can represent the whole informatively and/or indicatively. Reading or processing this shorter version of the document saves time and other resources [8]. This is especially valuable at present, given the vast amount of information available. A concept-based approach to representing dynamic and unstructured information can be useful for addressing issues such as determining the key concepts and summarizing the information exchanged within a personalized environment. In this context, a concept is represented by a Wikipedia article. With millions of articles and thousands of contributors, this online repository of knowledge is the largest and fastest growing encyclopedia in existence. The problem described above can then be divided into three steps (see the sketch after this abstract):
    • Mapping a series of terms to the most appropriate Wikipedia article (disambiguation).
    • Assigning a score to each item identified, based on its importance in the given context.
    • Extracting the n items with the highest score.
    Text summarization can be applied to many fields, from information retrieval to text mining and text display, and it could also be very useful in a personalized searching framework. The chapter is organized as follows: the next Section introduces the personalized searching framework as one of the possible application areas of automatic concept extraction systems. Section three describes the summarization process, providing details on the system architecture and the methodology and tools used. Section four provides an overview of document summarization approaches that have been developed recently. Section five summarizes a number of real-world applications that might benefit from word sense disambiguation (WSD). Section six introduces Wikipedia and WordNet as used in our project. Section seven describes the logical structure of the project, its software components and databases. Finally, Section eight provides some concluding considerations.
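    As a concrete, hedged illustration of the three steps listed above, the following sketch maps terms to Wikipedia article titles, scores the resulting concepts, and keeps the top n. The lookup table and the frequency-based scoring rule are placeholders invented for this example, not the chapter's actual components.

```python
# Sketch of the three-step pipeline: (1) map terms to Wikipedia articles,
# (2) score each concept in context, (3) keep the n highest-scoring concepts.
# The lookup table and the scoring rule are illustrative placeholders.
from collections import Counter

# Placeholder disambiguation table: term -> Wikipedia article title.
TERM_TO_ARTICLE = {
    "bank": "Bank",                # a real system would disambiguate by context
    "semantic web": "Semantic Web",
    "ontology": "Ontology (information science)",
}

def map_terms(terms):
    """Step 1: map each recognised term to its (assumed) Wikipedia article."""
    return {TERM_TO_ARTICLE[t] for t in terms if t in TERM_TO_ARTICLE}

def score_concepts(articles, document_terms):
    """Step 2: toy importance score -- term frequency in the document."""
    counts = Counter(document_terms)
    return {a: counts[t] for t, a in TERM_TO_ARTICLE.items() if a in articles}

def top_concepts(scores, n=2):
    """Step 3: return the n concepts with the highest score."""
    return sorted(scores, key=scores.get, reverse=True)[:n]

doc = ["semantic web", "ontology", "ontology", "bank"]
print(top_concepts(score_concepts(map_terms(doc), doc)))
```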

    Constraint capture and maintenance in engineering design

    The Designers' Workbench is a system, developed by the Advanced Knowledge Technologies (AKT) consortium, to support designers in large organizations, such as Rolls-Royce, in ensuring that a design is consistent with its specification as well as with the company's design rule book(s). In the principal application discussed here, the evolving design is described against a jet engine ontology. Design rules are expressed as constraints over the domain ontology. Currently, to capture the constraint information, a domain expert (design engineer) has to work with a knowledge engineer to identify the constraints, and it is then the task of the knowledge engineer to encode these into the Workbench's knowledge base (KB). This is an error-prone and time-consuming task. It is highly desirable to relieve the knowledge engineer of this task, and so we have developed a system, ConEditor+, that enables domain experts themselves to capture and maintain these constraints. Further, we hypothesize that in order to appropriately apply, maintain and reuse constraints, it is necessary to understand the underlying assumptions and the context in which each constraint is applicable. We refer to these as “application conditions”, and they form part of the rationale associated with the constraint. We propose a methodology to capture the application conditions associated with a constraint and demonstrate that an explicit, machine-interpretable representation of application conditions (rationales), together with the corresponding constraints and the domain ontology, can be used by a machine to support the maintenance of constraints. This support includes detecting inconsistency, subsumption, redundancy and fusion between constraints, and suggesting appropriate refinements. The proposed methodology provides immediate benefits to the designers and hence should encourage them to input the application conditions (rationales).
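    The abstract describes ConEditor+ only at a high level; as a minimal sketch, under assumed data structures rather than the Workbench's actual knowledge base format, the following shows how pairing each constraint with an explicit application condition lets a machine decide when the rule applies, flag violations, and perform a crude redundancy check.

```python
# Sketch (assumed data structures, not ConEditor+'s actual format) of
# constraints paired with explicit application conditions, so a machine can
# decide when a rule applies, flag violations and spot simple redundancy.
from dataclasses import dataclass
from typing import Callable, Dict

Part = Dict[str, float]   # a design object described by attribute/value pairs

@dataclass
class Constraint:
    name: str
    applies: Callable[[Part], bool]   # application condition (rationale)
    check: Callable[[Part], bool]     # the design rule itself

def violations(part, constraints):
    """Names of the applicable constraints that the part violates."""
    return [c.name for c in constraints if c.applies(part) and not c.check(part)]

def redundant(c1, c2, samples):
    """Crude check: wherever both apply, satisfying c1 also satisfies c2."""
    relevant = [p for p in samples if c1.applies(p) and c2.applies(p)]
    return bool(relevant) and all(c2.check(p) for p in relevant if c1.check(p))

# Hypothetical rule: first-stage blades must stay below a temperature limit.
max_temp = Constraint("stage-1 blade temperature limit",
                      applies=lambda p: p.get("stage") == 1,
                      check=lambda p: p["temp"] <= 1600)
parts = [{"stage": 1, "temp": 1650}, {"stage": 2, "temp": 1700}]
print(violations(parts[0], [max_temp]))   # ['stage-1 blade temperature limit']
```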

    Risk information recommendation for engineering workers.

    Within any sufficiently expertise-reliant and work-driven domain there is a requirement to understand the similarities between specific work tasks. Though mechanisms to develop similarity models for these areas do exist, in practice they have been criticised within various domains by experts who feel that the output is not indicative of their viewpoint. In field service provision for telecommunication organisations, it can be particularly challenging to understand task similarity from the perspective of an expert engineer. With that in mind, this paper presents a similarity model built from text recorded by the engineers themselves, yielding a metric directly indicative of expert opinion. We evaluate several methods of learning text representations on a classification task developed from engineers' notes. Furthermore, we introduce a means of using the complex and multi-faceted nature of the notes to recommend additional information that supports engineers in the field.
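    The paper evaluates several learned text representations; as a minimal baseline sketch only (TF-IDF with scikit-learn rather than any of the paper's models, and with invented example notes), one could score the similarity of a new engineer's note against past notes and surface the closest tasks.

```python
# Baseline sketch: TF-IDF similarity over engineers' notes used to surface
# the most similar past tasks; the notes below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_notes = [
    "replaced faulty line card at roadside cabinet, checked earthing",
    "overhead cable damaged by vegetation, traffic management required",
    "customer premises splice completed, no access issues",
]
new_note = "cabinet line card fault, possible earthing problem"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(past_notes + [new_note])

# Similarity of the new note (last row) to every past note.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for i in scores.argsort()[::-1]:
    print(f"{scores[i]:.2f}  {past_notes[i]}")
```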

    Explainability through transparency and user control: a case-based recommender for engineering workers.

    Within the service-providing industries, field engineers can struggle to access tasks which are suited to their individual skills and experience. There is potential for a recommender system to improve access to information while on site; however, the smooth adoption of such a system is hindered by the challenge of exposing human-understandable proof of the machine's reasoning. With that in mind, this paper introduces an explainable recommender system to facilitate transparent retrieval of task information for field engineers in the context of service delivery. The presented software adheres to the five goals of an explainable intelligent system and incorporates elements of both Case-Based Reasoning and heuristic techniques to develop a recommendation ranking of tasks. In addition, we evaluate methods of building justifiable representations for similarity-based retrieval on a classification task developed from engineers' notes. Our conclusion highlights the trade-off between performance and explainability.
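    As a hedged sketch of how a case-based ranking can expose its reasoning, the example below (with invented task features and weights, not the presented system) ranks candidate tasks by weighted feature similarity and returns the per-feature contributions as a simple, transparent explanation.

```python
# Sketch of a transparent case-based ranking (invented task features, not the
# presented system): tasks are ranked by weighted feature similarity and the
# per-feature contributions are returned as the explanation.

def local_sim(a, b):
    """Similarity of two feature values in [0, 1]; exact match for this sketch."""
    return 1.0 if a == b else 0.0

def explain_and_rank(query, cases, weights):
    ranked = []
    for case in cases:
        contributions = {f: w * local_sim(query[f], case[f]) for f, w in weights.items()}
        score = sum(contributions.values()) / sum(weights.values())
        ranked.append((score, case["task"], contributions))
    return sorted(ranked, key=lambda r: r[0], reverse=True)

query = {"skill": "fibre splicing", "region": "north", "equipment": "OTDR"}
cases = [
    {"task": "T101", "skill": "fibre splicing", "region": "south", "equipment": "OTDR"},
    {"task": "T102", "skill": "copper repair", "region": "north", "equipment": "ladder"},
]
weights = {"skill": 0.5, "region": 0.2, "equipment": 0.3}

for score, task, contributions in explain_and_rank(query, cases, weights):
    print(f"{task}: score {score:.2f}, because {contributions}")
```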