
    A Formal Framework for Speedup Learning from Problems and Solutions

    Full text link
    Speedup learning seeks to improve the computational efficiency of problem solving with experience. In this paper, we develop a formal framework for learning efficient problem solving from random problems and their solutions. We apply this framework to two different representations of learned knowledge, namely control rules and macro-operators, and prove theorems that identify sufficient conditions for learning in each representation. Our proofs are constructive in that they are accompanied by learning algorithms. Our framework captures both empirical and explanation-based speedup learning in a unified fashion. We illustrate our framework with implementations in two domains: symbolic integration and Eight Puzzle. This work integrates many strands of experimental and theoretical work in machine learning, including empirical learning of control rules, macro-operator learning, Explanation-Based Learning (EBL), and Probably Approximately Correct (PAC) Learning. Comment: See http://www.jair.org/ for any accompanying files.
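
    As a purely illustrative aside, the empirical half of macro-operator learning can be sketched as frequent-subsequence mining over solution traces. A minimal sketch follows; the function, move names, and thresholds are our own assumptions, not the paper's formalism:

```python
from collections import Counter

def learn_macros(solutions, max_len=3, min_support=2):
    """Collect operator subsequences that recur across solution traces.

    `solutions` is a list of operator-name sequences; any subsequence of
    length 2..max_len seen at least `min_support` times becomes a macro.
    """
    counts = Counter()
    for ops in solutions:
        for n in range(2, max_len + 1):
            for i in range(len(ops) - n + 1):
                counts[tuple(ops[i:i + n])] += 1
    return [macro for macro, c in counts.items() if c >= min_support]

# Hypothetical traces from two solved Eight Puzzle instances.
traces = [["up", "left", "down", "left"],
          ["up", "left", "down", "right"]]
print(learn_macros(traces))
# [('up', 'left'), ('left', 'down'), ('up', 'left', 'down')]
```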

    D-rules: learning & planning

    Get PDF
    One current research goal of Artificial Intelligence and Machine Learning is to improve the problem-solving performance of systems through their own experience or from external teaching. The work presented in this paper concentrates on learning decomposition rules, also called d-rules: given some examples, learn rules that guide the planning process in new problems by determining which operators are to be included in the solution plan. A planning algorithm is also presented that uses the learned d-rules to obtain the desired plan. The learning algorithm includes a value-function approximation, which gives each learned rule an associated function; if the planner finds more than one applicable d-rule, it discriminates among them using this feature. Decomposition rules have been learned in the blocks world domain, and those d-rules have been used by the planner to solve new problems. VI Workshop de Agentes y Sistemas Inteligentes (WASI). Red de Universidades con Carreras en Informática (RedUNCI).
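
    The tie-breaking role of the value function can be made concrete with a small sketch. The rule format and the toy value functions below are our own simplification, not the paper's actual d-rule representation:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class DRule:
    head: str                        # goal pattern the rule decomposes
    operators: List[str]             # operators to include in the plan
    value: Callable[[Dict], float]   # learned value-function approximation

def choose_rule(goal: str, state: Dict, rules: List[DRule]) -> DRule:
    """Among the d-rules applicable to `goal`, prefer the one whose
    learned value function scores highest in the current state."""
    applicable = [r for r in rules if r.head == goal]
    return max(applicable, key=lambda r: r.value(state))

rules = [
    DRule("on(A,B)", ["unstack", "stack"], lambda s: 1.0 - s["height"]),
    DRule("on(A,B)", ["pickup", "stack"],  lambda s: s["clear"]),
]
print(choose_rule("on(A,B)", {"height": 0.3, "clear": 0.9}, rules).operators)
# ['pickup', 'stack']
```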

    Building and Refining Abstract Planning Cases by Change of Representation Language

    Full text link
    Abstraction is one of the most promising approaches to improve the performance of problem solvers. In several domains abstraction by dropping sentences of a domain description -- as used in most hierarchical planners -- has proven useful. In this paper we present examples which illustrate significant drawbacks of abstraction by dropping sentences. To overcome these drawbacks, we propose a more general view of abstraction involving the change of representation language. We have developed a new abstraction methodology and a related sound and complete learning algorithm that allows the complete change of representation language of planning cases from concrete to abstract. However, to achieve a powerful change of the representation language, the abstract language itself as well as rules which describe admissible ways of abstracting states must be provided in the domain model. This new abstraction approach is the core of Paris (Plan Abstraction and Refinement in an Integrated System), a system in which abstract planning cases are automatically learned from given concrete cases. An empirical study in the domain of process planning in mechanical engineering shows significant advantages of the proposed reasoning from abstract cases over classical hierarchical planning. Comment: See http://www.jair.org/ for an online appendix and other files accompanying this article.
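
    The contrast with dropping sentences can be illustrated with a toy sketch in which explicit rules rewrite concrete facts, many-to-one, into a separate abstract vocabulary. The predicates and rules below are invented for illustration and are not taken from Paris's domain model:

```python
ABSTRACTION_RULES = {
    "drilled(hole1, d=5mm)":    "has-cavity(part)",
    "drilled(hole2, d=8mm)":    "has-cavity(part)",
    "milled(slot1, depth=2mm)": "has-cavity(part)",
    "polished(face1)":          "finished-surface(part)",
}

def abstract_case(concrete_state):
    """Rewrite a concrete planning state into the abstract language;
    facts without an admissible rule have no abstract counterpart."""
    return {ABSTRACTION_RULES[f] for f in concrete_state if f in ABSTRACTION_RULES}

case = {"drilled(hole1, d=5mm)", "polished(face1)", "clamped(part)"}
print(abstract_case(case))
# {'has-cavity(part)', 'finished-surface(part)'}
```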

    Parallel Support Vector Machines

    Get PDF
    The Support Vector Machine (SVM) is a supervised algorithm for the solution of classification and regression problems. SVMs have gained widespread use in recent years because of successful applications like character recognition and the profound theoretical underpinnings concerning generalization performance. Yet, one of the remaining drawbacks of the SVM algorithm is its high computational demands during the training and testing phases. This article describes how to efficiently parallelize SVM training in order to cut down execution times. The parallelization technique employed is based on a decomposition approach, where the inner quadratic program (QP) is solved using Sequential Minimal Optimization (SMO). Thus all types of SVM formulations can be solved in parallel, including C-SVC and nu-SVC for classification as well as epsilon-SVR and nu-SVR for regression. Practical results show that on most problems linear or even superlinear speedups can be attained.
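
    For illustration only, here is a far simpler data-parallel scheme than the paper's (which parallelizes the inner QP of the decomposition solver): train one SVM per data shard in separate processes and majority-vote their predictions. The sketch assumes scikit-learn is installed and labels are in {-1, +1}:

```python
from concurrent.futures import ProcessPoolExecutor

import numpy as np
from sklearn.svm import SVC

def train_shard(shard):
    X, y = shard
    return SVC(kernel="rbf", C=1.0).fit(X, y)

def parallel_svm_train(X, y, n_shards=4):
    shards = list(zip(np.array_split(X, n_shards), np.array_split(y, n_shards)))
    with ProcessPoolExecutor(max_workers=n_shards) as pool:
        models = list(pool.map(train_shard, shards))

    def predict(Xq):                       # majority vote; assumes y in {-1, +1}
        votes = np.stack([m.predict(Xq) for m in models])
        return np.sign(votes.sum(axis=0))

    return predict
```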

    Portable performance on heterogeneous architectures

    Get PDF
    Trends in both consumer and high performance computing are bringing not only more cores, but also increased heterogeneity among the computational resources within a single machine. In many machines, one of the greatest computational resources is now their graphics coprocessors (GPUs), not just their primary CPUs. But GPU programming and memory models differ dramatically from conventional CPUs, and the relative performance characteristics of the different processors vary widely between machines. Different processors within a system often perform best with different algorithms and memory usage patterns, and achieving the best overall performance may require mapping portions of programs across all types of resources in the machine. To address the problem of efficiently programming machines with increasingly heterogeneous computational resources, we propose a programming model in which the best mapping of programs to processors and memories is determined empirically. Programs define choices in how their individual algorithms may work, and the compiler generates further choices in how they can map to CPU and GPU processors and memory systems. These choices are given to an empirical autotuning framework that allows the space of possible implementations to be searched at installation time. The rich choice space allows the autotuner to construct poly-algorithms that combine many different algorithmic techniques, using both the CPU and the GPU, to obtain better performance than any one technique alone. Experimental results show that algorithmic changes, and the varied use of both CPUs and GPUs, are necessary to obtain up to a 16.5x speedup over using a single program configuration for all architectures. United States. Dept. of Energy (Award DE-SC0005288). United States. Defense Advanced Research Projects Agency (Award HR0011-10-9-0009). National Science Foundation (U.S.) (Award CCF-0632997).
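
    The core of the empirical-autotuning idea can be sketched in a few lines: time every candidate implementation of the same task on representative input at installation time and remember the fastest. The harness below is our own toy stand-in, not the proposed compiler framework:

```python
import heapq
import time

def elapsed(fn, arg):
    start = time.perf_counter()
    fn(arg)
    return time.perf_counter() - start

def autotune(variants, sample_input, repeats=3):
    """Time each candidate implementation on a representative input
    and return the name of the fastest one (best of `repeats` runs)."""
    timings = {
        name: min(elapsed(fn, sample_input) for _ in range(repeats))
        for name, fn in variants.items()
    }
    return min(timings, key=timings.get)

# Two interchangeable algorithmic choices (toy stand-ins for the
# CPU/GPU variants a compiler would generate).
variants = {
    "builtin_sort": sorted,
    "heap_sort":    lambda xs: heapq.nsmallest(len(xs), xs),
}
print(autotune(variants, list(range(50_000, 0, -1))))  # likely 'builtin_sort'
```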

    Topic-driven testing

    Get PDF
    Modern interactive applications offer so many interaction opportunities that automated exploration and testing becomes practically impossible without some domain-specific guidance towards relevant functionality. In this dissertation, we present a novel fundamental graphical user interface testing method called topic-driven testing. We mine the semantic meaning of interactive elements, guide testing, and identify core functionality of applications. The semantic interpretation is close to human understanding and allows us to learn specifications and transfer knowledge across multiple applications independent of the underlying device, platform, programming language, or technology stack; to the best of our knowledge, this is a unique feature of our technique. Our tool ATTABOY is able to take an existing Web application test suite, say from Amazon, execute it on ebay, and thus guide testing to relevant core functionality. Tested on different application domains such as eCommerce, news pages, and mail clients, it can transfer on average sixty percent of the tested application behavior to new apps, without any human intervention. On top of that, topic-driven testing can work with even vaguer instructions, such as how-to or use-case descriptions. Given an instruction, say "add item to shopping cart", it tests the specified behavior in an application, both in a browser and in mobile apps. It thus improves state-of-the-art UI testing frameworks, creates change-resilient UI tests, and lays the foundation for learning, transferring, and enforcing common application behavior. The prototype is up to five times faster than existing random testing frameworks and tests functions that are hard to cover by non-trained approaches.
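
    The semantic-matching step can be caricatured with plain string similarity in place of the learned semantics. The instruction, element ids, and labels below are illustrative assumptions, not ATTABOY's actual matching:

```python
from difflib import SequenceMatcher

def best_element(instruction, elements):
    """Return the id of the visible UI element whose label best
    matches the natural-language instruction."""
    def score(label):
        return SequenceMatcher(None, instruction.lower(), label.lower()).ratio()
    return max(elements, key=lambda eid: score(elements[eid]))

ui = {
    "btn_cart":   "Add to Cart",
    "btn_wish":   "Add to Wish List",
    "link_login": "Sign in",
}
print(best_element("add item to shopping cart", ui))  # -> 'btn_cart'
```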

    On quantum Bayesian networks

    Get PDF
    Master's dissertation in Computer Science. As a compact representation of joint probability distributions over a dependence graph of random variables, and a tool for modeling and reasoning in the presence of uncertainty, Bayesian networks are becoming increasingly relevant both for natural and social sciences, for example, to combine domain knowledge, capture causal relationships, or learn from incomplete datasets. Known as an NP-hard problem in a classical setting, Bayesian inference stands out as a class of algorithms worth exploring in a quantum framework. The present dissertation explores this research field and extends previous algorithms by embedding them in decision-making processes. In this regard, several attempts were made to find new and enhanced ways to deal with these processes. In a first attempt, the quantum device was used to run a subprocess of the decision-making process, resulting in a quadratic speed-up for that subprocess. Afterward, decision networks were taken into account and allowed a fully quantum implementation of a decision-making process, benefiting from a quadratic speed-up during the whole process. Lastly, a solution was found that differs from the existing ones by the judicious use of the utility function in an entangled configuration. This algorithm explores the structure of the input data to efficiently compute a solution. In addition, the computational complexity of each of the algorithms developed was determined, providing the information necessary to choose the most efficient one for a concrete decision problem. A prototype implementation in Qiskit (a Python-based program development language for the IBM Q machines) was developed as a proof of concept. While Qiskit offered a simulation platform for the algorithms considered in this dissertation, string diagrams provided the verification framework for algorithmic properties. Further, string diagrams were studied with the intention of obtaining formal proofs about the algorithms developed; this framework provided relevant examples and a proof that two different implementations of the same algorithm are equivalent.
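
    As a minimal Qiskit sketch, the standard rotation-based encoding of a tiny Bayesian network A -> B looks as follows; the probabilities and the helper are ours, not the dissertation's circuits:

```python
from math import asin, sqrt

from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def angle(p):
    """RY angle placing amplitude sqrt(p) on |1>."""
    return 2 * asin(sqrt(p))

p_a = 0.3                       # P(A = 1)
p_b_given_a = {0: 0.2, 1: 0.9}  # P(B = 1 | A)

qc = QuantumCircuit(2)
qc.ry(angle(p_a), 0)                                   # qubit 0 encodes A
qc.x(0); qc.cry(angle(p_b_given_a[0]), 0, 1); qc.x(0)  # branch A = 0
qc.cry(angle(p_b_given_a[1]), 0, 1)                    # branch A = 1

# Bitstring keys are little-endian: '10' means B=1, A=0.
print(Statevector(qc).probabilities_dict())
# ~ {'00': 0.56, '10': 0.14, '01': 0.03, '11': 0.27}
```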

    Proceedings, MSVSCC 2014

    Get PDF
    Proceedings of the 8th Annual Modeling, Simulation & Visualization Student Capstone Conference held on April 17, 2014 at VMASC in Suffolk, Virginia.