    The Learning-Knowledge-Reasoning Paradigm for Natural Language Understanding and Question Answering

    Given a text, several questions can be asked. For some of these questions, the answer can be looked up directly in the text. For several other questions, however, one might need additional knowledge and sophisticated reasoning to find the answer. Developing AI agents that can answer such questions and can also justify their answers is the focus of this research. Towards this goal, we use the language of Answer Set Programming as the knowledge representation and reasoning language for the agent. The question that then arises is how to obtain this additional knowledge. In this work we show that, using existing Natural Language Processing parsers and a scalable Inductive Logic Programming algorithm, it is possible to learn this additional knowledge (consisting mostly of commonsense knowledge) from question-answering datasets, which can then be used for inference.
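
    As an illustration of the paradigm, the following minimal sketch (not the authors' system) shows how learned commonsense knowledge encoded in Answer Set Programming can answer a question the text alone cannot. It assumes the clingo Python package as the ASP solver; the extracted facts and the learned rule are invented for the example.

    import clingo

    program = """
    % Facts extracted from the text by an NLP parser (illustrative).
    located(keys, drawer).
    located(drawer, office).

    % A commonsense rule an ILP system might learn: containment is transitive.
    located(X, Z) :- located(X, Y), located(Y, Z).

    % Question: where are the keys?
    answer(P) :- located(keys, P).
    """

    ctl = clingo.Control()
    ctl.add("base", [], program)
    ctl.ground([("base", [])])
    ctl.solve(on_model=print)  # the model includes answer(drawer) and answer(office)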

    Learning programs by learning from failures

    We describe an inductive logic programming (ILP) approach called learning from failures. In this approach, an ILP system (the learner) decomposes the learning problem into three separate stages: generate, test, and constrain. In the generate stage, the learner generates a hypothesis (a logic program) that satisfies a set of hypothesis constraints (constraints on the syntactic form of hypotheses). In the test stage, the learner tests the hypothesis against training examples. A hypothesis fails when it does not entail all the positive examples or entails a negative example. If a hypothesis fails, then, in the constrain stage, the learner learns constraints from the failed hypothesis to prune the hypothesis space, i.e. to constrain subsequent hypothesis generation. For instance, if a hypothesis is too general (entails a negative example), the constraints prune generalisations of the hypothesis. If a hypothesis is too specific (does not entail all the positive examples), the constraints prune specialisations of the hypothesis. This loop repeats until either (i) the learner finds a hypothesis that entails all the positive examples and none of the negative examples, or (ii) there are no more hypotheses to test. We introduce Popper, an ILP system that implements this approach by combining answer set programming and Prolog. Popper supports infinite problem domains, reasoning about lists and numbers, learning textually minimal programs, and learning recursive programs. Our experimental results on three domains (toy game problems, robot strategies, and list transformations) show that (i) constraints drastically improve learning performance, and (ii) Popper can outperform existing ILP systems, both in terms of predictive accuracies and learning times. Comment: Accepted for the Machine Learning journal.
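
    The generate-test-constrain loop described above can be summarised in a short schematic sketch. The hypothesis generator and the entailment test are placeholders passed in by the caller, not Popper's actual implementation; only the control flow follows the paper's description.

    def learn_from_failures(pos, neg, generate, entails):
        constraints = []
        while True:
            # Generate: next program satisfying all accumulated constraints.
            h = generate(constraints)
            if h is None:
                return None  # hypothesis space exhausted
            # Test: check the hypothesis against the training examples.
            too_general = any(entails(h, e) for e in neg)
            too_specific = not all(entails(h, e) for e in pos)
            if not too_general and not too_specific:
                return h  # entails all positives and no negatives
            # Constrain: prune the hypothesis space around the failure.
            if too_general:
                constraints.append(("prune_generalisations", h))
            if too_specific:
                constraints.append(("prune_specialisations", h))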

    Informed selection and use of training examples for knowledge refinement

    Knowledge refinement tools seek to correct faulty rule-based systems by identifying and repairing faults indicated by training examples. This thesis proposes mechanisms that improve the effectiveness and efficiency of refinement tools through better selection and use of training examples. The refinement task is sufficiently complex that the space of possible refinements demands heuristic search. Refinement tools typically use hill-climbing search to identify suitable repairs, but run the risk of getting caught in local optima. A novel contribution of this thesis is solving the local optima problem by converting the hill-climbing search into a best-first search that can backtrack to previous refinement states. The thesis explores how different backtracking heuristics and training-example ordering heuristics affect refinement effectiveness and efficiency. Refinement tools rely on a representative set of training examples to identify faults and influence repair choices. In real environments it is often difficult to obtain a large set of training examples, since each problem-solving task must be labelled with the expert's solution. Another novel aspect introduced in this thesis is informed selection of examples for knowledge refinement, where suitable examples are selected from a set of unlabelled examples so that only this subset needs to be labelled. Conversely, if a large set of labelled examples is available, it still makes sense to have mechanisms that can select a representative subset of examples beneficial for the refinement task, thereby avoiding unnecessary example-processing costs. Finally, an experimental evaluation of example utilisation and selection strategies on two artificial domains and one real application is presented. Informed backtracking effectively deals with local optima by moving the search to more promising areas, while informed ordering of training examples reduces search effort by ensuring that more pressing faults are dealt with early in the search. Additionally, the example selection methods achieve similar refinement accuracy with significantly fewer examples.
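
    The conversion from hill-climbing to best-first search with backtracking can be pictured as a priority queue over refinement states, as in the hedged sketch below; the scoring function and repair generator are placeholders, not the thesis's refinement tool.

    import heapq
    import itertools

    def best_first_refinement(initial_kb, score, repairs, target_accuracy):
        tie = itertools.count()  # tie-breaker so unorderable states never compare
        frontier = [(-score(initial_kb), next(tie), initial_kb)]
        while frontier:
            # Pop the most promising state seen so far; because the whole
            # frontier is kept, this can backtrack past a local optimum.
            neg_score, _, kb = heapq.heappop(frontier)
            if -neg_score >= target_accuracy:
                return kb
            for repaired in repairs(kb):  # candidate repairs of indicated faults
                heapq.heappush(frontier, (-score(repaired), next(tie), repaired))
        return None  # no refinement reached the target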

    Sustainable Change: Education for Sustainable Development in the Business School

    This paper examines the implementation of education for sustainable development (ESD) within a business school. ESD is of growing importance for business schools, yet its implementation remains a challenge. The paper examines how barriers to ESD's implementation are met through organisational change as a sustainable process. It evaluates change brought about through ESD in a UK-based business school through the lens of Beer and Eisenstat's three principles of effective strategy implementation and organisational adaptation, which state that: 1) the change process should be systemic; 2) the change process should encourage open discussion of barriers to effective strategy implementation and adaptation; and 3) the change process should develop a partnership among all relevant stakeholders. The case incorporates, paradoxically, elements of both a top-down and an emergent strategy, which resonates with elements of life-cycle, teleological and dialectic frames for process change. Insights are offered into the role of individuals as agents and actors of institutional change in business schools. In particular, the importance of academic integrity is highlighted for enabling and sustaining integration. The findings also suggest a number of implications for policy-makers who promote ESD, and for faculty and business school managers implementing, adopting and delivering ESD programmes.

    Developing a new generation MOOC (ngMOOC): a design-based implementation research project with cognitive architecture and student feedback in mind

    This paper describes a design-based implementation research (DBIR) approach to the development and trialling of a new generation massive open online course (ngMOOC), situated in an instructional setting of undergraduate mathematics at a regional Australian university. This process is underpinned by two important innovations: (a) a basis in a well-established human cognitive architecture, in terms of cognitive load theory; and (b) point-of-contact feedback based on a well-tested online system dedicated to enhancing the learning process. Analysis of preliminary trials suggests that the DBIR approach to ngMOOC construction and development supports theoretical standpoints that argue for an understanding of how design for optimal learning can utilise conditions, such as differing online or blended educational contexts, in order to be effective and scalable. The ngMOOC development described in this paper marks the adoption of a cognitive architecture in conjunction with feedback systems, offering the groundwork for adaptive systems that cater for learner expertise. This approach seems especially useful in constructing and developing online learning that is self-paced and curriculum-based.

    The FormAI Dataset: Generative AI in Software Security Through the Lens of Formal Verification

    This paper presents the FormAI dataset, a large collection of 112,000 AI-generated, compilable and independent C programs with vulnerability classification. We introduce a dynamic zero-shot prompting technique constructed to spawn diverse programs utilizing Large Language Models (LLMs). The dataset is generated by GPT-3.5-turbo and comprises programs with varying levels of complexity. Some programs handle complicated tasks like network management, table games, or encryption, while others deal with simpler tasks like string manipulation. Every program is labeled with the vulnerabilities found within the source code, indicating the type, line number, and vulnerable function name. This is accomplished by employing a formal verification method using the Efficient SMT-based Bounded Model Checker (ESBMC), which uses model checking, abstract interpretation, constraint programming, and satisfiability modulo theories to reason over safety/security properties in programs. This approach definitively detects vulnerabilities and offers a formal model known as a counterexample, thus eliminating the possibility of generating false positive reports. We have associated the identified vulnerabilities with Common Weakness Enumeration (CWE) numbers. We make the source code available for the 112,000 programs, accompanied by a separate file containing the vulnerabilities detected in each program, making the dataset ideal for training LLMs and machine learning algorithms. Our study unveiled that, according to ESBMC, 51.24% of the programs generated by GPT-3.5 contained vulnerabilities, thereby presenting considerable risks to software safety and security. Comment: https://github.com/FormAI-Datase
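
    For concreteness, a record in such a dataset might be structured as below. This is a hedged sketch based only on the fields named in the abstract (vulnerability type, line number, function name, CWE); the field names and layout are assumptions, not the dataset's actual schema.

    from dataclasses import dataclass, field

    @dataclass
    class VulnerabilityLabel:
        vuln_type: str      # kind of defect reported by ESBMC
        line_number: int    # line in the C source where it occurs
        function_name: str  # enclosing function of the vulnerable code
        cwe: str            # associated Common Weakness Enumeration id

    @dataclass
    class ProgramRecord:
        source_code: str    # the full AI-generated C program
        labels: list[VulnerabilityLabel] = field(default_factory=list)

    record = ProgramRecord(
        source_code='int main(void) { char buf[8]; /* ... */ }',
        labels=[VulnerabilityLabel("buffer overflow", 3, "main", "CWE-787")],
    )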

    Characterizing Enabling Innovations and Enabling Thinking

    The pursuit of innovation is ingrained throughout society, whether in business via the introduction of offerings, in non-profits via their mission-driven initiatives, in universities and agencies via their drive for discoveries and inventions, or in governments via their desire to improve the quality of life of their citizens. Yet, despite these pursuits, innovations with long-lasting, significant impact represent an infrequent outcome in most domains. The seemingly random nature of these results stems, in part, from the definitions of innovation and the models based on such definitions. Although there is debate on this topic, a comprehensive and pragmatic perspective developed in this work defines innovation as the introduction of a novel or different idea into practice that has a positive impact on society. To date, models of innovation have focused on, for example, new technological advances, new approaches to connectivity in systems, new conceptual frameworks, or even new dimensions of performance, all effectively building on the first half of the definition of innovation and encouraging its pursuit based on the novelty of ideas. However, as explored herein, achieving profound results by innovating on demand might require a perspective that focuses on the impact of an innovation. In this view, innovation does not only entail doing new things, but consciously driving them towards impact through proactive design behaviors. Explicit consideration of the impact dimension has been missing from innovation models, even though it is arguably the most important dimension, since it represents the outcome of innovation.

    Probabilistic Methodology and Techniques for Artefact Conception and Development

    The purpose of this paper is to present a state of the art of probabilistic methodology and techniques for artefact conception and development. It is the 8th deliverable of the BIBA (Bayesian Inspired Brain and Artefacts) project. We first present the incompleteness problem as the central difficulty that both living creatures and artefacts have to face: how can they perceive, infer, decide and act efficiently with incomplete and uncertain knowledge? We then introduce a generic probabilistic formalism called Bayesian Programming. This formalism is then used to review the main probabilistic methodologies and techniques. The review is organized in three parts: first, the probabilistic models, from Bayesian networks to Kalman filters and from sensor fusion to CAD systems; second, the inference techniques; and finally, the learning, model acquisition and comparison methodologies. We conclude with the perspectives of the BIBA project as they arise from this state of the art.
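
    As a worked example of acting on incomplete and uncertain knowledge (not taken from the deliverable itself), Bayesian fusion of two independent Gaussian sensor readings reduces to precision-weighted averaging; all numbers below are illustrative.

    def fuse_gaussians(mu1, var1, mu2, var2):
        """Posterior over x given two independent Gaussian measurements."""
        var = 1.0 / (1.0 / var1 + 1.0 / var2)  # precisions add
        mu = var * (mu1 / var1 + mu2 / var2)   # precision-weighted mean
        return mu, var

    # A coarse sensor reads x ~ 2.0 (variance 1.0); a finer one reads x ~ 2.6
    # (variance 0.25). The fused estimate lies nearer the more precise sensor.
    print(fuse_gaussians(2.0, 1.0, 2.6, 0.25))  # -> (2.48, 0.2)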

    A recommendation framework based on automated ranking for selecting negotiation agents. Application to a water market

    This thesis presents an approach that relies on automatic learning and data mining techniques to select the best group of items from a set, according to the behaviour observed in previous groups. The approach is applied to the framework of a water market system, which aims to support negotiation processes in which trading tables are built in order to trade water rights among users. Our task focuses on predicting which agents will show the most appropriate behaviour when they are invited to participate in a trading table, with the purpose of achieving the most beneficial agreement. To this end, a model is developed that learns from past transactions in the market. When a new trading table is opened to trade a water right, the model predicts, taking into account the individual features of the trading table, the behaviour of all the agents that could be invited to join the negotiation and thus become potential buyers of the water right. Once the model has made its predictions for a trading table, the agents are ranked by the probability (assigned by the model) of becoming a buyer in that negotiation. Two different methods are proposed in the thesis for dealing with the ranked participants: from this ranking, we can either select the desired number of participants to form the group, or choose only the top user of the list and rebuild the model with added aggregate information in order to produce a more detailed prediction.
    Dura Garcia, EM. (2011). A recommendation framework based on automated ranking for selecting negotiation agents. Application to a water market. http://hdl.handle.net/10251/15875
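
    The rank-and-select step described above can be sketched with a generic scikit-learn classifier standing in for the learned model; the features and training data are invented, and only the ranking logic follows the abstract.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # X_past: features of (agent, trading-table) pairs from past transactions;
    # y_past: 1 if the agent became the buyer, 0 otherwise (illustrative data).
    rng = np.random.default_rng(0)
    X_past = rng.random((200, 5))
    y_past = (X_past[:, 0] + X_past[:, 3] > 1.0).astype(int)
    model = RandomForestClassifier(random_state=0).fit(X_past, y_past)

    # Candidate agents for a newly opened trading table, ranked by the
    # model-assigned probability of becoming a buyer.
    X_candidates = rng.random((30, 5))
    p_buy = model.predict_proba(X_candidates)[:, 1]
    ranking = np.argsort(p_buy)[::-1]

    invited = ranking[:5]  # method 1: invite the top-k ranked agents
    print("invited agents:", invited)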