205 research outputs found

    Knowledge management for more sustainable water systems

    The management and sharing of complex data, information and knowledge is a fundamental and growing concern in the water industry and beyond, for a variety of reasons. For example, the risks and uncertainties associated with climate and other changes require knowledge to prepare for a range of future scenarios and potential extreme events. Formal ways of establishing and managing knowledge can deliver efficiencies in acquisition, structuring and filtering, so that only the essential aspects of the knowledge really needed are provided. Ontologies are a key technology for this knowledge management, but their construction is a considerable overhead on any knowledge management programme. Hence, current computer science research is investigating the automatic generation of ontologies from documents using text mining and natural language processing techniques. As an example, results from applying the Text2Onto tool to stakeholder documents for a project on sustainable water cycle management in new developments are presented. It is concluded that by adopting ontological representations sooner rather than later in an analytical process, decision makers will be able to make better use of knowledge-rich systems containing automated services that ensure sustainability considerations are included.
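Ontology learning of the Text2Onto kind starts by scoring candidate terms in a corpus before any concept hierarchy is built. As a loose illustration of that first step only (the relevance measure, stopword list, and documents below are simplified assumptions, not Text2Onto's actual algorithm), a relative-term-frequency score can be computed as:

```python
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "of", "and", "to", "for", "in", "is", "are", "on", "with"}

def extract_candidate_concepts(documents, top_n=5):
    """Score single-word terms by document frequency as crude concept candidates."""
    counts = Counter()
    for doc in documents:
        # Count each term at most once per document.
        tokens = set(re.findall(r"[a-z]+", doc.lower())) - STOPWORDS
        counts.update(tokens)
    # Relative term frequency: the fraction of documents mentioning the term.
    n = len(documents)
    return {term: c / n for term, c in counts.most_common(top_n)}

docs = [
    "Water demand forecasting supports sustainable water management.",
    "Knowledge of water quality informs management decisions.",
    "Sustainable management requires shared knowledge.",
]
scores = extract_candidate_concepts(docs)
```

Real ontology-learning pipelines add noun-phrase chunking, relation extraction, and confidence combination on top of such term scores; this sketch shows only the shape of the idea.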

    Contelog: A Formal Declarative Framework for Contextual Knowledge Representation and Reasoning

    Context-awareness is at the core of providing timely adaptations in safety-critical, secure applications in the pervasive computing and Artificial Intelligence (AI) domains. In current AI and application context-aware frameworks, the distinction between knowledge and context is blurred, and the two are not formally integrated. As a result, adaptation behaviors based on contextual reasoning cannot be formally derived and reasoned about. Moreover, in many smart systems, such as automated manufacturing, decision making, and healthcare, it is essential for context-awareness units to synchronize with contextual reasoning modules to derive new knowledge in order to adapt, alert, and predict. A rigorous formalism is therefore essential to (1) represent contextual domain knowledge as well as application rules, and (2) reason efficiently and effectively to draw contextual conclusions. This thesis is a contribution in this direction. It first introduces a formal context representation and a context calculus used to build context models for applications. It then introduces query processing and optimization techniques to perform context-based reasoning. The formal framework that achieves these two tasks, called the Contelog Framework, is obtained by a conservative extension of the syntax and semantics of Datalog. It models contextual knowledge and infers new knowledge. In its design, contextual knowledge and contextual reasoning are loosely coupled, so contextual knowledge is reusable on its own: by fixing the contextual knowledge, the rules in the program and/or the query may be changed. Contelog thus provides a theory of context that is independent of the application logic rules. The context calculus developed in this thesis allows knowledge inferred in one context to be exported for use in another context.
Following the idea of magic sets from Datalog, Magic Contexts, together with query rewriting algorithms, are introduced to optimize bottom-up query evaluation of Contelog programs. A Book of Examples has been compiled for Contelog, and these examples are implemented as a proof of concept of the generality, expressiveness, and rigor of the proposed framework. A variety of experiments comparing the performance of Contelog with earlier Datalog implementations reveal a significant improvement and bring out the practical merits of the current stage of Contelog and its potential for future extensions in context representation and reasoning for emerging context-aware applications.
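The bottom-up evaluation that Contelog inherits from Datalog can be illustrated with a naive fixpoint loop over context-tagged facts. This is a rough sketch under assumed representations (tuples for facts, a Python function for the rule, and an invented "factory" context), not the Contelog engine itself:

```python
def bottom_up_fixpoint(facts, rules):
    """Naive bottom-up evaluation: apply every rule until no new facts appear."""
    known = set(facts)
    while True:
        new = set().union(*(rule(known) for rule in rules)) - known
        if not new:
            return known
        known |= new

# Hypothetical context-tagged facts, shaped as (context, relation, a, b).
facts = {
    ("factory", "next", "cut", "weld"),
    ("factory", "next", "weld", "paint"),
}

def before_rule(known):
    """before(c,x,y) <- next(c,x,y);  before(c,x,z) <- before(c,x,y), next(c,y,z)."""
    base = {(c, "before", x, y) for (c, r, x, y) in known if r == "next"}
    step = {(c, "before", x, z)
            for (c, r1, x, y) in known if r1 == "before"
            for (c2, r2, y2, z) in known if r2 == "next" and c2 == c and y2 == y}
    return base | step

closure = bottom_up_fixpoint(facts, [before_rule])
```

The magic-sets optimization mentioned above rewrites such programs so that the fixpoint only derives facts relevant to a given query, instead of the full closure computed here.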

    International Workshop "What can FCA do for Artificial Intelligence?" (FCA4AI at IJCAI 2013, Beijing, China, August 4 2013)

    This second edition of the FCA4AI workshop (the first edition was associated with the ECAI 2012 conference; see http://www.fca4ai.hse.ru/) shows again that many AI researchers are interested in FCA. Formal Concept Analysis (FCA) is a mathematically well-founded theory aimed at data analysis and classification. FCA allows one to build a concept lattice and a system of dependencies (implications) which can be used for many AI needs, e.g. knowledge processing involving learning, knowledge discovery, knowledge representation and reasoning, ontology engineering, as well as information retrieval and text processing. There are thus many natural links between FCA and AI. Accordingly, the focus of this workshop was on how FCA can support AI activities (knowledge processing) and how it can be extended to help AI researchers solve new and complex problems in their domains.
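The core FCA construction can be shown in a few lines: given a binary object-attribute context, a formal concept is a pair (extent, intent) closed under the two derivation operators. The toy context below is invented for illustration, and real FCA tools use far more efficient algorithms (e.g. NextClosure) than this brute-force enumeration:

```python
from itertools import combinations

def formal_concepts(objects, attributes, incidence):
    """Enumerate all formal concepts (extent, intent) of a binary context."""
    def common_attrs(objs):
        # Attributes shared by every object in objs (all attributes if objs is empty).
        return {a for a in attributes if all((o, a) in incidence for o in objs)}
    def common_objs(attrs):
        # Objects possessing every attribute in attrs.
        return {o for o in objects if all((o, a) in incidence for a in attrs)}
    concepts = set()
    for r in range(len(objects) + 1):
        for objs in combinations(objects, r):
            intent = common_attrs(objs)          # close the object set upward...
            extent = common_objs(intent)         # ...and back down to a closed extent
            concepts.add((frozenset(extent), frozenset(intent)))
    return concepts

# Toy context: which document mentions which topic (invented for illustration).
objs = ["d1", "d2", "d3"]
attrs = ["fca", "ai", "nlp"]
inc = {("d1", "fca"), ("d1", "ai"), ("d2", "ai"), ("d3", "ai"), ("d3", "nlp")}
concepts_found = formal_concepts(objs, attrs, inc)
```

Ordering the resulting concepts by inclusion of extents yields the concept lattice the workshop description refers to.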

    Knowledge Components and Methods for Policy Propagation in Data Flows

    Data-oriented systems and applications are at the centre of current developments of the World Wide Web (WWW). On the Web of Data (WoD), information sources can be accessed and processed for many purposes. Users need to be aware of any licences or terms of use associated with the data sources they want to use. Conversely, publishers need support in assigning the appropriate policies alongside the data they distribute. In this work, we tackle the problem of policy propagation in data flows - an expression that refers to the way data is consumed, manipulated and produced within processes. We pose the question of what kind of components are required, and how they can be acquired, managed, and deployed, to support users in deciding what policies propagate to the output of a data-intensive system from the ones associated with its input. We observe three scenarios: applications of the Semantic Web, workflow reuse in Open Science, and the exploitation of urban data in City Data Hubs. Starting from the analysis of Semantic Web applications, we propose a data-centric approach to semantically describe processes as data flows: the Datanode ontology, which comprises a hierarchy of the possible relations between data objects. By means of Policy Propagation Rules, it is possible to link data flow steps with the policies derivable from semantic descriptions of data licences. We show how these components can be designed, how they can be effectively managed, and how to reason efficiently with them. In a second phase, the developed components are verified using a Smart City Data Hub as a case study, where we developed an end-to-end solution for policy propagation. Finally, we evaluate our approach and report on a user study aimed at assessing both the quality and the value of the proposed solution.
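The mechanics of policy propagation rules can be sketched as a lookup table from (data-flow relation, policy) pairs to whether the policy survives the step. The relation names below are loosely styled after the Datanode vocabulary, but the rule table, policies, and flow are invented for illustration and are not the thesis's actual rule base:

```python
# (data-flow relation, policy) pairs for which the policy carries over
# to the step's output; any pair not listed is assumed not to propagate.
PROPAGATION_RULES = {
    ("hasCopy", "attribution-required"),
    ("hasSelection", "attribution-required"),
    ("hasCopy", "non-commercial"),
}

def propagate(input_policies, relation):
    """Return the policies that survive a single data-flow step."""
    return {p for p in input_policies if (relation, p) in PROPAGATION_RULES}

# A two-step flow: copy a dataset, then select rows from the copy.
flow = ["hasCopy", "hasSelection"]
policies = {"attribution-required", "non-commercial"}
for rel in flow:
    policies = propagate(policies, rel)
```

Chaining the per-step check along a whole data flow, as the loop does, is what lets an end user ask which input licences still constrain a system's output.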

    Context-based Grouping and Recommendation in MANETs

    No full text
    We propose in this chapter a context grouping mechanism for context distribution over MANETs. Context distribution is becoming a key aspect of successful context-aware applications in mobile and ubiquitous computing environments. Such applications need, for adaptation purposes, context information that is acquired by multiple context sensors distributed over the environment. Nevertheless, applications are not interested in all available context information. Context distribution mechanisms have to cope with the dynamicity that characterizes MANETs and also prevent context information from being delivered to nodes (and applications) that are not interested in it. Our grouping mechanism organizes the distribution of context information into groups whose definition is context-based: each context group is defined by a criteria set (e.g. shared location and interest) and has a dissemination set, which controls the information that can be shared in the group. We propose a personalized and dynamic way of defining and joining groups by providing a lattice-based classification and recommendation mechanism that analyzes the interrelations between groups and users, and recommends new groups to users based on their interests and preferences.
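The criteria-set idea can be sketched as a simple matching check: a node is eligible for (and can be recommended) a group when its current context satisfies every criterion in the group's criteria set. Group names, criteria, and context keys below are illustrative assumptions, not the chapter's actual vocabulary, and the lattice-based recommendation is reduced here to a flat scan:

```python
def matches(criteria, node_context):
    """True when the node's context satisfies all of the group's criteria."""
    return all(node_context.get(k) == v for k, v in criteria.items())

# Each context group is defined by a criteria set (location, interest, ...).
groups = {
    "campus-traffic": {"location": "campus", "interest": "traffic"},
    "campus-events": {"location": "campus", "interest": "events"},
}

def recommend(node_context):
    """Recommend every group whose criteria set the node currently satisfies."""
    return sorted(g for g, crit in groups.items() if matches(crit, node_context))

# A node's context may carry extra attributes the criteria simply ignore.
node = {"location": "campus", "interest": "traffic", "battery": "low"}
suggestions = recommend(node)
```

In the chapter's full mechanism the group/user interrelations are organized in a lattice so recommendations can exploit subsumption between criteria sets rather than testing each group independently.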

    A Tight Coupling Context-Based Framework for Dataset Discovery

    Discovering datasets relevant to research goals is at the core of many analysis tasks, since datasets are needed to test proposed hypotheses and theories. In particular, researchers in the Artificial Intelligence (AI) and Machine Learning (ML) domains, where relevant datasets are essential for precise predictions, have identified how the absence of methods for discovering quality datasets leads to delay and, in many cases, failure of ML projects. Many research reports have pointed out the absence of dataset discovery methods that fill the gap between analysis requirements and available datasets, and have given statistics showing how this hinders the analysis process, with completion rates of less than 2%. To the best of our knowledge, removing the above inadequacies remains “an open problem of great importance”. It is in this context that this thesis contributes a context-based framework that tightly couples dataset providers and data analytics teams. Through this framework, dataset providers publish metadata descriptions of their datasets, and analysts formulate and submit rich queries with goal specifications and quality requirements. The dataset search engine component tightly couples the query specification with the metadata descriptions of datasets through formal contextualized semantic matching and quality-based ranking, and discovers all datasets that are relevant to the analyst's requirements. The thesis gives a proof-of-concept prototype implementation and reports on its performance and efficiency through a case study.
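The matching-then-ranking idea can be sketched in a few lines: filter the catalogue to datasets whose metadata cover the query's required fields and meet its quality threshold, then rank the survivors. The field names and the scoring scheme below are illustrative assumptions, not the thesis's actual matching model:

```python
def rank_datasets(query, catalogue):
    """Keep datasets that cover the query's required fields and meet its
    minimum quality, then rank the survivors by their quality score."""
    hits = []
    for ds in catalogue:
        covers = set(query["fields"]) <= set(ds["fields"])
        if covers and ds["quality"] >= query["min_quality"]:
            hits.append((ds["quality"], ds["name"]))
    return [name for _, name in sorted(hits, reverse=True)]

catalogue = [
    {"name": "sensors-a", "fields": ["ts", "temp"], "quality": 0.9},
    {"name": "sensors-b", "fields": ["ts", "temp", "hum"], "quality": 0.6},
    {"name": "sales", "fields": ["ts", "amount"], "quality": 0.95},  # wrong fields
]
query = {"fields": ["ts", "temp"], "min_quality": 0.5}
ranked = rank_datasets(query, catalogue)
```

The framework described in the abstract replaces the exact field-name comparison with contextualized semantic matching over metadata descriptions, so that, for example, differently named but semantically equivalent fields can still match.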

    A knowledge acquisition tool to assist case authoring from texts.

    Case-Based Reasoning (CBR) is a technique in Artificial Intelligence where a new problem is solved by making use of the solution to a similar past problem. People naturally solve problems in this way, without even thinking about it. For example, an occupational therapist (OT) assessing the needs of a newly referred disabled person may be reminded of a previous person with similar disabilities, and may decide whether to recommend the same devices based on the outcome for that earlier person. Case-based reasoning makes use of a collection of past problem-solving experiences, thus enabling users to exploit the information of others' successes and failures to solve their own problems. This project has developed a CBR tool to assist in matching SmartHouse technology to the needs of the elderly and people with disabilities. The tool suggests SmartHouse devices that could assist with given impairments. Past SmartHouse problem-solving textual reports have been used to obtain knowledge for the CBR system. Creating a case-based reasoning system from textual sources is challenging because the text must be interpreted in a meaningful way to create cases that are effective in problem-solving, and queries must be interpreted reasonably. Effective case retrieval and query interpretation are only possible if a domain-specific conceptual model is available and if the different meanings a word can take can be recognised in the text. Approaches based on information retrieval methods require large amounts of data and typically result in knowledge-poor representations, and the costs become prohibitive if an expert is engaged to manually craft cases or hand-tag documents for learning.
Furthermore, hierarchically structured case representations are preferred to flat-structured ones for problem-solving because they allow comparison at different levels of specificity, resulting in more effective retrieval than flat-structured cases. This project has developed SmartCAT-T, a tool that creates knowledge-rich, hierarchically structured cases from semi-structured textual reports. SmartCAT-T highlights important phrases in the textual SmartHouse problem-solving reports and uses the phrases to create a conceptual model of the domain. The model then becomes a standard structure onto which each semi-structured SmartHouse report is mapped in order to obtain a correspondingly structured case. SmartCAT-T also relies on an unsupervised methodology that recognises word synonyms in text. The methodology is used to create a uniform vocabulary for the textual reports, and the resulting harmonised text is used to create the standard conceptual model of the domain. The technique is also employed in query interpretation during problem solving. SmartCAT-T does not require large sets of tagged data for learning, and the concepts in the conceptual model are interpretable, allowing for expert refinement of the knowledge. Evaluation results show that the created cases contain knowledge that is useful for problem solving. An improvement in results is also observed when the text and queries are harmonised. A further evaluation highlights the high potential for the techniques developed in this research to be useful in domains other than SmartHouse. All of this has been implemented in the Smarter case-based reasoning system.
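The retrieve step of CBR can be sketched minimally: cases pair a set of impairment descriptors with the devices that were recommended, and a query is answered by reusing the solution of the most similar stored case. All descriptors and devices below are illustrative inventions, not SmartCAT-T's real vocabulary, and the flat set-overlap similarity stands in for the hierarchical, multi-level comparison the project actually uses:

```python
def similarity(a, b):
    """Jaccard overlap between two sets of impairment descriptors."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Each case: (impairments observed, devices that were recommended).
case_base = [
    ({"low-vision", "arthritis"}, ["large-button phone", "lever taps"]),
    ({"hearing-loss"}, ["flashing doorbell"]),
    ({"low-vision"}, ["talking clock"]),
]

def retrieve(query):
    """Reuse the solution of the past case most similar to the query."""
    best = max(case_base, key=lambda case: similarity(case[0], query))
    return best[1]

devices = retrieve({"low-vision"})
```

A hierarchical case structure improves on this by comparing at the right level of specificity, e.g. matching "low-vision" against a broader "visual impairment" node when no exact descriptor matches.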

    Graph Structures for Knowledge Representation and Reasoning

    This open access book constitutes the thoroughly refereed post-conference proceedings of the 6th International Workshop on Graph Structures for Knowledge Representation and Reasoning, GKR 2020, held virtually in September 2020 in association with ECAI 2020, the 24th European Conference on Artificial Intelligence. The 7 revised full papers presented together with 2 invited contributions were reviewed and selected from 9 submissions. The contributions address various issues in knowledge representation and reasoning and their common graph-theoretic background, which makes it possible to bridge the gap between the different communities.

    Decisioning 2022: Collaboration in knowledge discovery and decision making: Applications to sustainable agriculture

    Sustainable agriculture is one of the Sustainable Development Goals (SDGs) proposed by the United Nations (UN), but little systematic work on knowledge discovery and decision making has been applied to it. Knowledge discovery and decision making have become active research areas in recent years. In the era of FAIR (Findable, Accessible, Interoperable, Reusable) data science, linked data with a high degree of variety and differing degrees of veracity can be easily correlated and put in perspective to gain an empirical and scientific understanding of best practices in the sustainable agriculture domain. This requires combining multiple methods such as elicitation, specification, validation, semantic web technologies, information retrieval, formal concept analysis, collaborative work, semantic interoperability, ontological matching, smart contracts, and multiple decision-making methods. Decisioning 2022 is the first workshop on collaboration in knowledge discovery and decision making with applications to sustainable agriculture. It has been organized by six research teams from France, Argentina, Colombia and Chile to explore the current frontier of knowledge and applications in different areas related to knowledge discovery and decision making. The format of the workshop aims at discussion and knowledge exchange between members of academia and industry.
    Laboratorio de Investigación y Formación en Informática Avanzada

    Software Technologies - 8th International Joint Conference, ICSOFT 2013: Revised Selected Papers
