
    A survey of cost-sensitive decision tree induction algorithms

    The past decade has seen significant interest in the problem of inducing decision trees that take account of both the costs of misclassification and the costs of acquiring the features used for decision making. This survey identifies over 50 algorithms, including approaches that are direct adaptations of accuracy-based methods, use genetic algorithms, use anytime methods, or utilize boosting and bagging. The survey brings together these different studies and novel approaches to cost-sensitive decision tree learning, provides a useful taxonomy and a historical timeline of how the field has developed, and should serve as a useful reference point for future research in this field.
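
    As a flavor of the "direct adaptation" family such surveys cover, one classic approach replaces the usual gain criterion with a cost-adjusted one. Below is a minimal Python sketch in the spirit of Núñez's EG2 information cost function, which trades information gain against feature acquisition cost; the toy data, the costs, and the weight w are invented for illustration and not taken from the survey.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label multiset."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    """Information gain of splitting on a categorical attribute."""
    base, n, rem = entropy(labels), len(labels), 0.0
    for v in set(r[attr] for r in rows):
        sub = [l for r, l in zip(rows, labels) if r[attr] == v]
        rem += len(sub) / n * entropy(sub)
    return base - rem

def eg2_score(gain, cost, w=1.0):
    """EG2-style information cost function: higher w penalizes
    expensive attributes more heavily (w lies in [0, 1])."""
    return (2 ** gain - 1) / ((cost + 1) ** w)

def best_attribute(rows, labels, costs, w=1.0):
    """Choose the split attribute by cost-adjusted gain."""
    return max(costs, key=lambda a: eg2_score(info_gain(rows, labels, a), costs[a], w))

# Toy medical data: 'blood_test' is informative but expensive to acquire.
rows = [
    {"age": "old", "blood_test": "pos"},
    {"age": "old", "blood_test": "neg"},
    {"age": "young", "blood_test": "pos"},
    {"age": "young", "blood_test": "neg"},
]
labels = ["sick", "ok", "sick", "ok"]
costs = {"age": 1.0, "blood_test": 50.0}
print(best_attribute(rows, labels, costs))
```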

    Inductive logic programming at 30: a new introduction

    Inductive logic programming (ILP) is a form of machine learning. The goal of ILP is to induce a hypothesis (a set of logical rules) that generalises training examples. As ILP turns 30, we provide a new introduction to the field. We introduce the necessary logical notation and the main learning settings; describe the building blocks of an ILP system; compare several systems on several dimensions; describe four systems (Aleph, TILDE, ASPAL, and Metagol); highlight key application areas; and, finally, summarise current limitations and directions for future research.
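
    To make the learning setting concrete, here is a toy generate-and-test sketch (deliberately naive, and not one of the systems the introduction describes): given background facts and positive and negative examples, it returns the shortest conjunction of body literals that covers every positive example and no negative one. The kinship predicates are a standard textbook ILP example; the Python encoding is an assumption of this sketch.

```python
from itertools import combinations

# Background knowledge as ground facts (a classic kinship toy domain).
parent = {("ann", "mary"), ("ann", "tom"), ("tom", "eve"), ("tom", "ian")}
female = {"ann", "mary", "eve"}

# Candidate body literals for the target daughter(X, Y), each a test
# on a candidate pair (x, y).
literals = {
    "parent(Y,X)": lambda x, y: (y, x) in parent,
    "parent(X,Y)": lambda x, y: (x, y) in parent,
    "female(X)":   lambda x, y: x in female,
    "female(Y)":   lambda x, y: y in female,
}

pos = {("mary", "ann"), ("eve", "tom")}                  # daughter(X,Y) holds
neg = {("tom", "ann"), ("eve", "ann"), ("ian", "tom")}   # it does not

def covers(body, example):
    """A clause covers an example if every body literal succeeds on it."""
    x, y = example
    return all(literals[lit](x, y) for lit in body)

def learn():
    """Shortest-first search for a consistent hypothesis."""
    for size in range(1, len(literals) + 1):
        for body in combinations(literals, size):
            if all(covers(body, e) for e in pos) and not any(covers(body, e) for e in neg):
                return "daughter(X,Y) :- " + ", ".join(body)

print(learn())   # daughter(X,Y) :- parent(Y,X), female(X)
```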

    Boosting gets full Attention for Relational Learning

    More often than not in benchmark supervised ML, tabular data is flat, i.e. consists of a single m × d (rows, columns) file, but cases abound in the real world where observations are described by a set of tables with structural relationships. Neural-net-based deep models are a natural fit for incorporating general topological dependence among description features (pixels, words, etc.), but their suboptimality to tree-based models on tabular data is still well documented. In this paper, we introduce an attention mechanism for structured data that blends well with tree-based models in the training context of (gradient) boosting. Each aggregated model is a tree whose training involves two steps: first, simple tabular models are learned by descending the tables in a top-down fashion, fitting boosting's class residuals on the tables' features. Second, what has been learned propagates back bottom-up via attention and aggregation mechanisms, progressively crafting new features that finally complete the set of observation features over which a single tree is learned; boosting's iteration clock is then incremented and new class residuals are computed. Experiments on simulated and real-world domains display the competitiveness of our method against a state of the art containing both tree-based and neural-net-based models.
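
    The paper's architecture is richer than any short sketch, but the general idea can be conveyed in numpy under heavy simplification: rows of a child table are pooled into new parent-row features via softmax attention, the completed feature set feeds a weak tree learner (here just a stump) fitted to the current residuals, and the boosting clock advances. The synthetic one-to-many data, the fixed (rather than learned) attention query, and all names are assumptions of this sketch, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Relational toy data: each "customer" row links to m "transaction"
# rows (one-to-many), mimicking multi-table tabular data.
n, m = 200, 5
customers = rng.normal(size=(n, 2))                      # flat parent features
transactions = rng.normal(size=(n, m, 3))                # child rows
y = customers[:, 0] + transactions[:, :, 0].max(axis=1)  # relational target

def attention_pool(child, query):
    """Softmax attention over child rows; returns a weighted sum of
    each parent's child rows as new crafted features."""
    scores = child @ query                               # (n, m)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return (w[..., None] * child).sum(axis=1)            # (n, 3)

def fit_stump(X, r):
    """Best single-feature threshold split minimizing squared residual."""
    best = (np.inf, 0, 0.0, 0.0, 0.0)
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
            left = X[:, j] <= t
            lv, rv = r[left].mean(), r[~left].mean()
            err = ((r - np.where(left, lv, rv)) ** 2).sum()
            if err < best[0]:
                best = (err, j, t, lv, rv)
    return best[1:]

def predict_stump(stump, X):
    j, t, lv, rv = stump
    return np.where(X[:, j] <= t, lv, rv)

# Boosting clock: craft features bottom-up, fit a weak learner to the
# residuals, update the ensemble prediction, repeat.
query = rng.normal(size=3)        # a learned vector in the real method
pred, lr = np.zeros(n), 0.3
for _ in range(50):
    X = np.hstack([customers, attention_pool(transactions, query)])
    residuals = y - pred
    stump = fit_stump(X, residuals)
    pred += lr * predict_stump(stump, X)
print("train MSE:", ((y - pred) ** 2).mean())
```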

    Learning Action-State Representation Forests for Implicitly Relational Worlds

    Real-world tasks, in homes or other unstructured environments, require interacting with objects (including people) and understanding the variety of physical relationships between them. For example, choosing where to place a fork at a table requires knowing the correct position and orientation relative to the plate. Further, the quantity of objects and the roles they play might change from one occasion to the next; the variables are not fixed and predefined. For an intelligent agent to navigate this complex space, it needs to be able to identify and focus on just those variables that are relevant. Also, if a robot or other artificial agent can learn such physical relations from its own experience in a task, it can save manual engineering effort and automatically adapt to new situations. Relational learning, while often focused on discrete domains, applies to situations with arbitrary numbers of objects by using existential and/or universal quantifiers from first-order logic. The field of reinforcement learning (RL) addresses learning task execution from scalar rewards based on agent state and action. Relational reinforcement learning (RRL) combines these two fields. In this dissertation, I present an RRL technique emphasizing relations that are merely implicit in multidimensional, continuous object attributes, such as position, color, and size. This technique requires analyzing permutations of possible object comparisons while simultaneously working in the multidimensional spaces defined by their attributes. Existing similar RRL methods query only one dimension at a time, which limits their effectiveness when multiple dimensions are correlated. Specifically, I present a representation policy iteration (RPI) method using the spatiotemporal multidimensional relational framework (SMRF) for learning relational decision trees from object attributes. This SMRF-RPI algorithm interleaves the learning of relational representations and of policies for agent action. Further, SMRF-RPI includes support for continuous actions. As a component of the SMRF framework, I also present a novel multiple instance learning (MIL) algorithm, which is able to learn parametric, existential decision volumes within a feature space in a robust manner. Finally, I demonstrate SMRF-RPI on a variety of developmentally motivated blocks-world tasks, as well as effective transfer and sample-efficient learning in a standard keepaway soccer benchmark task. Both domains involve complicated, simulated world dynamics in continuous space. These experiments demonstrate SMRF-RPI as a promising method for applying RRL techniques in multidimensional, continuous domains.
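
    To illustrate just the MIL component: each state yields a bag of multidimensional object comparisons, and the bag is positive iff some instance falls inside an existential decision volume. The sketch below is a much-simplified stand-in for SMRF's actual fitting procedure, scoring a small candidate set of axis-aligned boxes by bag-level accuracy; the blocks-style toy domain and all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each world state yields a bag of pairwise object comparisons, e.g.
# the (dx, dy) offset between every pair of blocks. A state is positive
# iff SOME pair stands in the target relation ("roughly on top of").
def make_bag(positive):
    bag = rng.uniform(-2, 2, size=(6, 2))
    if positive:                        # plant one "on top" pair
        bag[0] = np.array([0.0, 1.0]) + rng.normal(scale=0.1, size=2)
    return bag

bags = [make_bag(i % 2 == 0) for i in range(40)]
labels = np.array([i % 2 == 0 for i in range(40)])

def bag_inside(bag, low, high):
    """Existential test: does ANY instance fall inside the volume?"""
    return bool(((bag >= low) & (bag <= high)).all(axis=1).any())

def accuracy(low, high):
    preds = np.array([bag_inside(b, low, high) for b in bags])
    return (preds == labels).mean()

# Fit by search: candidate volumes are boxes centered on instances from
# positive bags at a few widths; keep the best by bag-level accuracy.
candidates = [inst for b, l in zip(bags, labels) if l for inst in b]
best_acc, best_box = 0.0, None
for center in candidates:
    for w in (0.1, 0.25, 0.5, 1.0):
        acc = accuracy(center - w, center + w)
        if acc > best_acc:
            best_acc, best_box = acc, (center - w, center + w)
print("bag-level accuracy:", best_acc)
```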

    BOOST: Harnessing Black-Box Control to Boost Commonsense in LMs' Generation

    Large language models (LLMs) such as GPT-3 have demonstrated a strong capability to generate coherent and contextually relevant text. However, amidst their successes, a crucial issue persists: their generated outputs still lack commonsense at times. Moreover, fine-tuning the entire LLM towards more commonsensical outputs is computationally expensive, if not infeasible. In this paper, we present a computation-efficient framework that steers a frozen Pre-Trained Language Model (PTLM) towards more commonsensical generation (i.e., producing a plausible output that incorporates a list of concepts in a meaningful way). Specifically, we first construct a reference-free evaluator that assigns a sentence a commonsense score by grounding the sentence to a dynamic commonsense knowledge base from four different relational aspects. We then use the scorer as the oracle for commonsense knowledge, and extend the controllable generation method called NADO to train an auxiliary head that guides a fixed PTLM to better satisfy the oracle. We test our framework on a series of GPT-2-, Flan-T5-, and Alpaca-based language models (LMs) on two constrained concept-to-sentence benchmarks. Human evaluation results demonstrate that our method consistently leads to the most commonsensical outputs.
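
    Setting aside how the auxiliary head is trained, its decoding-time effect can be sketched as reweighting a frozen LM's next-token distribution by an auxiliary score, in the spirit of NADO-style guided generation. Everything below (the toy bigram "LM", the heuristic scorer, the vocabulary) is a hypothetical stand-in that only illustrates the product-of-distributions decoding step, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen toy "LM": a random bigram table p(x_t | x_{t-1}).
vocab = ["the", "chef", "cooks", "eats", "food", "<eos>"]
V = len(vocab)
base = rng.dirichlet(np.ones(V), size=V)

def aux_head(prefix, concepts):
    """Hypothetical auxiliary scorer: boosts tokens that move the
    output toward covering the required concepts (a stand-in for a
    head trained against the commonsense oracle)."""
    missing = [c for c in concepts if c not in prefix]
    return np.array([2.0 if w in missing else 1.0 for w in vocab])

def guided_decode(concepts, max_len=8):
    prefix, tok = [], 0                 # start from the token "the"
    for _ in range(max_len):
        p = base[tok] * aux_head(prefix, concepts)
        p /= p.sum()                    # q(x_t) proportional to p_LM * r_head
        tok = rng.choice(V, p=p)
        if vocab[tok] == "<eos>":
            break
        prefix.append(vocab[tok])
    return "the " + " ".join(prefix)

print(guided_decode(["chef", "food"]))
```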

    Automatic Algorithm Selection for Complex Simulation Problems

    To select the most suitable simulation algorithm for a given task is often difficult. This is due to intricate interactions between model features, implementation details, and runtime environment, which may strongly affect the overall performance. The thesis consists of three parts. The first part surveys existing approaches to solving the algorithm selection problem and discusses techniques to analyze simulation algorithm performance. The second part introduces a software framework for automatic simulation algorithm selection, which is evaluated in the third part.
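
    One common way to realize such a framework (a plausible but assumed reading of the second part, not the thesis's actual design) is per-instance selection via empirical performance models: learn one runtime predictor per algorithm from problem features and choose the predicted-fastest algorithm. A minimal sketch with synthetic data and linear least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: problem features (e.g. model size, coupling)
# and measured runtimes for two hypothetical simulators "A" and "B".
n = 100
feats = rng.uniform(0, 1, size=(n, 2))

def true_runtime(f, algo):
    """Ground truth for the demo: A scales with feature 0, B with 1."""
    return (3 * f[0] if algo == "A" else 3 * f[1]) + rng.normal(scale=0.1)

runtimes = {a: np.array([true_runtime(f, a) for f in feats]) for a in "AB"}

# One empirical performance model per algorithm (least squares + intercept).
X = np.hstack([feats, np.ones((n, 1))])
models = {a: np.linalg.lstsq(X, runtimes[a], rcond=None)[0] for a in "AB"}

def select(features):
    """Pick the algorithm with the lowest predicted runtime."""
    x = np.append(features, 1.0)
    return min(models, key=lambda a: x @ models[a])

print(select([0.9, 0.2]))   # feature 0 is large, so expect "B"
```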

    Plan Projection, Execution, and Learning for Mobile Robot Control

    Most state-of-the-art hybrid control systems for mobile robots are decomposed into different layers. While the deliberation layer reasons about the actions required for the robot to achieve a given goal, the behavioral layer is designed to enable the robot to quickly react to unforeseen events. This decomposition guarantees safe operation even in the presence of unforeseen and dynamic obstacles and enables the robot to cope with situations it was not explicitly programmed for. The layered design, however, also leaves us with the problem of plan execution: arbitrating between the deliberation and behavioral layers. Abstract symbolic actions have to be translated into streams of local control commands, and, simultaneously, execution failures have to be handled at an appropriate level of abstraction. It is now widely accepted that plan execution should form a third layer of a hybrid robot control system. The resulting layered architectures are called three-tiered architectures, or 3T architectures for short. Although many high-level programming frameworks have been proposed to support the implementation of the intermediate layer, there is no generally accepted algorithmic basis for plan execution in three-tiered architectures. In this thesis, we propose to base plan execution on plan projection and learning, and present a general framework for the self-supervised improvement of plan execution. This framework has been implemented in APPEAL, an Architecture for Plan Projection, Execution And Learning, which extends the well-known RHINO control system by introducing an execution layer. This thesis contributes to the field of plan-based mobile robot control, which investigates the interrelation between planning, reasoning, and learning techniques based on an explicit representation of the robot's intended course of action, a plan. In McDermott's terminology, a plan is that part of a robot control program which the robot can not only execute, but also reason about and manipulate. According to that broad view, a plan may serve many purposes in a robot control system, such as reasoning about future behavior, revising intended activities, or learning. In this thesis, plan-based control is applied to the self-supervised improvement of mobile robot plan execution.
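
    The arbitration problem can be made concrete with a toy 3T loop: the deliberation layer emits a symbolic plan, the execution layer expands each action into low-level commands for the behavioral layer and handles failures at the appropriate level of abstraction (retry locally, then escalate to replanning). The action and command names below are hypothetical and do not reflect APPEAL's or RHINO's actual interfaces.

```python
import random

random.seed(0)

def deliberate(goal):
    """Deliberation layer: produce a symbolic plan toward the goal."""
    return ["goto(door)", "open(door)", "goto(%s)" % goal]

def expand(action):
    """Execution layer: translate a symbolic action into a stream of
    local control commands (hypothetical command names)."""
    return (["orient", "move", "verify"] if action.startswith("goto")
            else ["grasp", "turn", "release"])

def behave(command):
    """Behavioral layer: fast reactive execution; may fail."""
    return random.random() > 0.2        # simulated success/failure

def execute(goal, max_replans=3):
    for _ in range(max_replans):
        for action in deliberate(goal):
            for _attempt in range(2):   # handle failure locally: retry once
                if all(behave(c) for c in expand(action)):
                    break
            else:
                print("failure at", action, "-> escalate and replan")
                break                   # back to the deliberation layer
        else:
            return "goal reached: " + goal
    return "giving up"

print(execute("kitchen"))
```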