
    Recycling Computed Answers in Rewrite Systems for Abduction

    In rule-based systems, goal-oriented computations correspond naturally to the possible ways that an observation may be explained. In some applications, we need to compute explanations for a series of observations over the same domain. The question of whether previously computed answers can be recycled then arises; a yes answer could yield substantial savings by avoiding repeated computation. For systems based on classical logic, the answer is yes. For nonmonotonic systems, however, one tends to believe that the answer should be no, since recycling is a form of adding information. In this paper, we show that computed answers can always be recycled, in a nontrivial way, for the class of rewrite procedures that we proposed earlier for logic programs with negation. We present some experimental results on an encoding of the logistics domain. Comment: 20 pages. Full version of our IJCAI-03 paper.
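    The recycling idea can be pictured with a deliberately simplified sketch. The memoized explanation search below covers only the classical (monotonic) case that the abstract contrasts with; the rule encoding and function names are illustrative assumptions, not the paper's rewrite procedure.

        # Memoized abductive explanation search (classical case only).
        # Goals with no rules are abducibles: they can be assumed as-is.
        answer_cache = {}

        def explain(goal, rules):
            """Return the set of explanations for `goal`, reusing cached answers."""
            if goal in answer_cache:            # recycle a previous computation
                return answer_cache[goal]
            if goal not in rules:               # abducible: assume the goal itself
                result = {frozenset([goal])}
            else:                               # expand via each rule body
                result = set()
                for body in rules[goal]:
                    partial = {frozenset()}
                    for subgoal in body:
                        partial = {e | s for e in partial
                                         for s in explain(subgoal, rules)}
                    result |= partial
            answer_cache[goal] = result
            return result

        # Two observations over the same domain share the subgoal "wet_grass".
        rules = {"wet_grass": [("rain",), ("sprinkler",)],
                 "slippery": [("wet_grass",)]}
        print(explain("wet_grass", rules))  # computed once...
        print(explain("slippery", rules))   # ...and recycled here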

    A Survey of Symbolic Execution Techniques

    Many security and software testing applications require checking whether certain properties of a program hold for any possible usage scenario. For instance, a tool for identifying software vulnerabilities may need to rule out the existence of any backdoor that could bypass a program's authentication. One approach would be to test the program using different, possibly random inputs. As the backdoor may only be hit for very specific program workloads, automated exploration of the space of possible inputs is of the essence. Symbolic execution provides an elegant solution to the problem by systematically exploring many possible execution paths at the same time without necessarily requiring concrete inputs. Rather than operating on fully specified input values, the technique represents them abstractly as symbols, resorting to constraint solvers to construct actual instances that would cause property violations. Symbolic execution has been incubated in dozens of tools developed over the last four decades, leading to major practical breakthroughs in a number of prominent software reliability applications. The goal of this survey is to provide an overview of the main ideas, challenges, and solutions developed in the area, distilling them for a broad audience. The present survey has been accepted for publication at ACM Computing Surveys. Comment: This is the authors' pre-print copy. If you are considering citing this survey, we would appreciate if you could use the following BibTeX entry: http://goo.gl/Hf5Fvc
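    The core mechanism can be illustrated with a minimal sketch. The example below assumes the Z3 SMT solver's Python bindings (the z3-solver package); the toy "backdoor" branch is a hypothetical program fragment, not taken from the survey.

        # A minimal sketch of symbolic execution over a toy program.
        from z3 import Int, Solver, sat

        # Toy program under test:
        #   def check(x):
        #       if x * 3 + 1 == 94:   # hidden "backdoor" branch
        #           return "bypass"
        #       return "deny"

        x = Int("x")                       # symbolic input instead of a concrete value
        path_constraint = x * 3 + 1 == 94  # constraint for the backdoor path

        s = Solver()
        s.add(path_constraint)
        if s.check() == sat:
            m = s.model()                  # solver constructs a concrete witness
            print("input reaching backdoor:", m[x])  # -> 31
        else:
            print("backdoor path is infeasible")

    A real symbolic executor would collect one such constraint per branch along each explored path and query the solver for every path it wants to realize; the sketch shows a single path query.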

    From gamestorming to mobile learning : a conceptual framework and a gaming proposition to explore the design of flourishing business models

    This thesis begins by developing a conceptual framework around sustainability and the business model (BM), to frame an inquiry into the definition and design of sustainable business models. Drawing in particular on Ehrenfeld (2005), the FBM (business model for a flourishing future) is defined. The question is then how to introduce managers to the theory and practice of the FBM. What is the nature of the cognitive effort required? And can learning be stimulated by gamestorming, which offers a learning space open to the formation of new concepts? Chapter one presents the origins of the BM in the 1980s confrontation between corporate finance and corporate strategy at the birth of the first spreadsheet software. It then proposes reading the history of the BM in three periods: first the BM for numerical value, then the architectural BM, and finally the sustainable BM. But academics and practitioners do not agree on the definition of a sustainable BM; there is an opposition between weak and strong approaches. This thesis adopts Ehrenfeld's (2005) definition of, and commitment to, a flourishing future, thereby defining the FBM, the business model for a flourishing future. Chapter one shows that the BM for numerical value involves calculation as a cognitive mode and the architectural BM is more associated with interpretation, while the FBM should be designed through situated cognition and macrocognition. Chapter two contrasts the BM developed under a more traditional computation-interpretation view of cognition with the construction of the FBM, which demands new preconditions for situated cognition and macrocognition. In this way, actors design an FBM through their sensorimotor interface, where meaning emerges from multiple interactions with the social and physical materiality of the model. An FBM thus becomes a shared public object, open to the development of social competence in a situation where the principles of macrocognition apply. Chapter three reviews a teaching/learning experiment with an MBA class in which students had to handle, within the same course, both the business model canvas (BMC) and a rather abstract organizational modelling approach related to knowledge management (Morabito et al., 1999). This learning experience is a case of thick design inside a flipped classroom, and it makes it possible to explore the following idea: if social and physical materiality are part of the design domain, the cognitive demands and cognitive load will be heavier. The chapter ends by associating weak sustainability with thin design and strong sustainability with thick design. Chapter four dives deeper into questions of sustainability. It presents a game experiment with Logim@s© conducted in the sustainable development division of a large Canadian city: the four players were sustainable development managers or professionals in the field.
The game is based on Steven Moore's (2007) book, which lays out the scenarios, logical modes, and discourses that allowed three very different cities (Curitiba, Austin, and Frankfurt) to deploy their sustainability leadership. A thick-design challenge is at the heart of the experiment: how can a player use the BMC approach when contradictory discourses threaten to block them cognitively? The players are in an inductive/deductive logical mode; will they shift to an abductive mode? Chapter five examines how the Logim@s© game could become an open gamestorming platform, call it SustAbd©. The chapter has two parts: the first reflects on the game design process to justify a platform architecture composed of the SustAbd© core and its periphery; the second proposes five UML use cases. Chapter six builds on the researcher's experience as a human tutor in the flipped-teaching and gamestorming experiments. Its goal is to adopt cognitive modelling (CM) as an approach to replacing the human tutor with a 'situated' robot, and it continues with developments on the situated character of robots. These ideas make it possible to design SustAbdPLAY© in keeping with the situatedness and macrocognition conditions proper to the design of an FBM. Social modelling with iStar clarifies the design. Chapter seven closes the thesis: it describes lessons learned and the limits of the study, and suggests future research. A general conclusion ends the chapter.
AUTHOR'S KEYWORDS: business model, sustainability, sustainable development, cognition, materiality, gamestorming, mobile learning, action research, design

    Hypothesis Generation and Pursuit in Scientific Reasoning

    This thesis draws a distinction between (i) reasoning about which scientific hypothesis to accept, (ii) reasoning concerned with generating new hypotheses, and (iii) reasoning about which hypothesis to pursue. I argue that (ii) and (iii) should be evaluated according to the same normative standard, namely whether the hypotheses generated/selected are pursuit worthy. A consequentialist account of pursuit worthiness is defended, based on C. S. Peirce's notion of 'abduction' and the 'economy of research', and developed as a family of formal, decision-theoretic models. This account is then deployed to discuss four more specific topics concerning scientific reasoning. First, I defend an account according to which explanatory reasoning (including the 'inference to the best explanation') mainly provides reasons for pursuing hypotheses, and criticise empirical arguments for the view that it also provides reasons for acceptance. Second, I discuss a number of pursuit worthiness accounts of analogical reasoning in science, arguing that, in some cases, analogies allow scientists to transfer an already well-understood modelling framework to a new domain. Third, I discuss the use of analogies within archaeological theorising, arguing that the distinction between using analogies for acceptance, generation and pursuit is implicit in methodological discussions in archaeology. A philosophical analysis of these uses is presented. Fourth, diagnostic reasoning in medicine is analysed from the perspective of Peircean abduction, where the conception of abduction as strategic reasoning is shown to be particularly important.
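    As a toy illustration of a decision-theoretic pursuit-worthiness model in the spirit of Peirce's economy of research, one might score hypotheses by expected epistemic payoff per unit cost. The scoring rule and numbers below are illustrative assumptions, not the thesis's actual family of models.

        # Rank candidate hypotheses by expected payoff per unit research cost.
        from dataclasses import dataclass

        @dataclass
        class Hypothesis:
            name: str
            p_success: float   # estimated probability pursuit yields a resolution
            payoff: float      # epistemic value if the pursuit succeeds
            cost: float        # resources the pursuit would consume

        def pursuit_worthiness(h: Hypothesis) -> float:
            """Expected payoff per unit cost; higher means pursue first."""
            return h.p_success * h.payoff / h.cost

        candidates = [
            Hypothesis("H1: cheap long shot", 0.1, 100.0, 5.0),
            Hypothesis("H2: costly safe bet", 0.9, 40.0, 30.0),
        ]
        for h in sorted(candidates, key=pursuit_worthiness, reverse=True):
            print(f"{h.name}: score={pursuit_worthiness(h):.2f}")  # H1: 2.00, H2: 1.20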

    Seventh Annual Workshop on Space Operations Applications and Research (SOAR 1993), volume 1

    This document contains papers presented at the Space Operations, Applications and Research (SOAR) Symposium hosted by NASA/Johnson Space Center (JSC) on August 3-5, 1993, and held at the JSC Gilruth Recreation Center. SOAR included NASA and USAF programmatic overviews, a plenary session, panel discussions, panel sessions, and exhibits. It invited technical papers in support of U.S. Army, U.S. Navy, Department of Energy, NASA, and USAF programs in the following areas: robotics and telepresence, automation and intelligent systems, human factors, life support, and space maintenance and servicing. SOAR was concerned with Government-sponsored research and development relevant to aerospace operations. More than 100 technical papers, 17 exhibits, a plenary session, several panel discussions, and several keynote speeches were included in SOAR '93.

    Effective information integration and reutilization : solutions to technological deficiency and legal uncertainty

    Thesis (Ph.D.), Massachusetts Institute of Technology, Engineering Systems Division, Technology, Management, and Policy Program, February 2006. "September 2005." Includes bibliographical references (p. 141-148). The amount of electronically accessible information has been growing exponentially, and how to use this information effectively has become a significant challenge. A post-9/11 study indicated that deficiencies in semantic interoperability technology hindered the ability to integrate information from disparate sources in a meaningful and timely fashion to allow for preventive precautions. Meanwhile, organizations that provided useful services by combining and reusing information from publicly accessible sources have been legally challenged. The Database Directive has been introduced in the European Union and six legislative proposals have been made in the U.S. to provide legal protection for non-copyrightable database contents, but the Directive and the proposals have differing and sometimes conflicting scope and strength, which creates legal uncertainty for value-added data reuse practices. The need for a clearer data reuse policy will become more acute as information integration technology improves to make integration much easier. This thesis takes an interdisciplinary approach to addressing both the technology and the policy challenges identified above in the effective use and reuse of information from disparate sources. The technology component builds upon the existing Context Interchange (COIN) framework for large-scale semantic interoperability. We focus on the problem of temporal semantic heterogeneity, where data sources and receivers make time-varying assumptions about data semantics; a collection of time-varying assumptions is called a temporal context. We extend the existing COIN representation formalism to explicitly represent temporal contexts, and the COIN reasoning mechanism to reconcile temporal semantic heterogeneity in the presence of semantic heterogeneity of time. We also perform a systematic and analytic evaluation of the flexibility and scalability of the COIN approach: compared with several traditional approaches, the COIN approach has much greater flexibility and scalability. For the policy component, we develop an economic model that formalizes the policy instruments in one of the latest legislative proposals in the U.S. The model allows us to identify the circumstances under which legal protection for non-copyrightable content is needed, the different conditions, and the corresponding policy choices. Our analysis indicates that, depending on the cost level of database creation, the degree of differentiation of the reuser database, and the efficiency of policy administration, the optimal policy choice can be protecting a legal monopoly, encouraging competition via compulsory licensing, discouraging voluntary licensing, or even allowing free riding. The results provide useful insights for the formulation of a socially beneficial database protection policy. By Hongwei Zhu. Ph.D.
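    The flavour of temporal-context mediation can be sketched in a few lines. The currency example, names, and rates below are illustrative assumptions in the spirit of the COIN approach, not the thesis's actual formalism.

        # A source reports "price" under time-varying semantics (French francs
        # before 1999-01-01, euros after); the receiver context expects euros.
        from datetime import date

        # Temporal context: ordered (start_date, conversion_to_euro) entries.
        SOURCE_PRICE_CONTEXT = [
            (date(1980, 1, 1), lambda v: v / 6.55957),  # values recorded in FRF
            (date(1999, 1, 1), lambda v: v),            # values recorded in EUR
        ]

        def to_receiver_context(value: float, observed: date) -> float:
            """Reconcile a source value into the receiver's (euro) context."""
            conversion = None
            for start, f in SOURCE_PRICE_CONTEXT:
                if observed >= start:       # last entry whose period covers the date
                    conversion = f
            if conversion is None:
                raise ValueError("no context covers this date")
            return conversion(value)

        print(to_receiver_context(655.957, date(1998, 6, 1)))  # 100.0 (FRF -> EUR)
        print(to_receiver_context(100.0, date(2001, 6, 1)))    # 100.0 (already EUR)

    The point of the sketch is that the conversion applied depends on when the value was observed, which is exactly the kind of time-varying assumption a temporal context captures.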

    Automated Software Transplantation

    Automated program repair has excited researchers for more than a decade, yet it has yet to find full-scale deployment in industry. We report our experience with SAPFIX: the first deployment of automated end-to-end fault fixing, from test case design through to deployed repairs in production code. We have used SAPFIX at Facebook to repair 6 production systems, each consisting of tens of millions of lines of code, which are collectively used by hundreds of millions of people worldwide. In its first three months of operation, SAPFIX produced 55 repair candidates for 57 crashes reported to it, of which 27 were deemed correct by developers and 14 were landed into production automatically by SAPFIX. SAPFIX has thus demonstrated the potential of the search-based repair research agenda by deploying, to hundreds of millions of users worldwide, software systems that have been automatically tested and repaired. Automated software transplantation (autotransplantation) is a form of automated software engineering in which we use search-based software engineering to automatically move a functionality of interest from a 'donor' program that implements it into a 'host' program that lacks it. Autotransplantation is a kind of automated program repair in which we repair the 'host' program by augmenting it with the missing functionality. Automated software transplantation would open many exciting avenues for software development: suppose we could autotransplant code from one system into another, entirely unrelated, system, potentially written in a different programming language. Being able to do so might greatly enhance software engineering practice while reducing costs. Automated software transplantation comes in two flavors: monolingual, when the languages of the host and donor programs are the same, and multilingual, when the languages differ. This thesis introduces a theory of automated software transplantation and two algorithms, implemented in two tools, that achieve it: µSCALPEL for monolingual software transplantation and τSCALPEL for multilingual software transplantation. Leveraging lightweight annotation, program analysis identifies an organ (interesting behavior to transplant); testing validates that the organ exhibits the desired behavior during its extraction and after its implantation into a host. We report encouraging results: in 14 of 17 monolingual transplantation experiments involving 6 donors and 4 hosts (popular real-world systems), we successfully autotransplanted 6 new functionalities; and in 10 out of 10 multilingual transplantation experiments involving 10 donors and 10 hosts (popular real-world systems written in 4 different programming languages), we successfully autotransplanted 10 new functionalities. That is, all transplants passed the test suites that validate the new functionality's behaviour and confirm that the initial program behaviour is preserved. Additionally, we manually checked the behaviour exercised by the organ. Autotransplantation is also very useful: in just 26 hours of computation time we successfully autotransplanted the H.264 video encoding functionality from the x264 system into the VLC media player, a task the developers of VLC have performed manually for the past 12 years. We autotransplanted call graph generation and indentation for C programs into Kate (a popular KDE-based text editor used as an IDE by many C developers), two features missing from Kate but requested by its users.
Autotransplantation is also efficient: the total runtime across 15 monolingual transplants is five and a half hours; the total runtime across 10 multilingual transplants is 33 hours.
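    The validation-driven search at the heart of autotransplantation can be caricatured in a few lines. The sketch below is a schematic randomized search, not the genetic-programming machinery of µSCALPEL, and all names are illustrative assumptions.

        # Schematic search loop: candidates pair an organ variant with a host
        # insertion point, and the test suites act as the fitness signal.
        import random

        def transplant(organ_variants, insertion_points, passes_tests, budget=1000):
            """Randomized search over (organ variant, insertion point) pairs.

            organ_variants:   candidate slices of the donor implementing the feature
            insertion_points: host locations where the organ could be implanted
            passes_tests:     callable(variant, point) -> bool; runs the validation
                              suites (regression + new-functionality tests)
            """
            for _ in range(budget):
                candidate = (random.choice(organ_variants),
                             random.choice(insertion_points))
                if passes_tests(*candidate):
                    return candidate  # first candidate passing all suites
            return None  # search budget exhausted without a valid transplant

        # Toy demo: the "tests" pass only for one specific pairing.
        found = transplant(["organ_v1", "organ_v2"], ["line 10", "line 42"],
                           lambda o, p: (o, p) == ("organ_v2", "line 42"))
        print(found)  # ('organ_v2', 'line 42') with near certainty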