5,827 research outputs found

    k-Step Relative Inductive Generalization

    We introduce a new form of SAT-based symbolic model checking. One common idea in SAT-based symbolic model checking is to generate new clauses from states that can lead to property violations. Our previous work suggests applying induction to generalize from such states. While effective on some benchmarks, the main problem with inductive generalization is that not all such states can be inductively generalized at a given time in the analysis, resulting in long searches for generalizable states on some benchmarks. This paper introduces the idea of inductively generalizing states relative to k-step over-approximations: a given state is inductively generalized relative to the latest k-step over-approximation relative to which the negation of the state is itself inductive. This idea motivates an algorithm that inductively generalizes a given state at the highest level k so far examined, possibly by generating more than one mutually k-step relative inductive clause. We present experimental evidence that the algorithm is effective in practice.
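
    A hedged sketch may make the key check concrete: the negation of a state s, taken as a clause c, is inductive relative to a k-step over-approximation F_k when c holds in the initial states and F_k ∧ c ∧ T ∧ ¬c' is unsatisfiable (T is the transition relation, c' the next-state copy of c). The sat, And, Not and prime helpers below are assumed placeholders for a SAT solver and formula constructors; this illustrates relative induction in general, not the paper's implementation.

    # Minimal sketch of relative inductive generalization (not the paper's code).
    # `sat(f)` is a hypothetical call returning True iff formula f is satisfiable;
    # `And`, `Not` and `prime` are assumed formula constructors.
    def is_inductive_relative(c, F_k, T, Init, sat, And, Not, prime):
        """Is clause c (the negation of a bad state) inductive relative to frame F_k?"""
        # Initiation: Init must imply c, i.e. Init AND NOT c is unsatisfiable.
        if sat(And(Init, Not(c))):
            return False
        # Relative consecution: F_k AND c AND T AND NOT c' is unsatisfiable.
        return not sat(And(F_k, c, T, Not(prime(c))))

    def highest_inductive_level(c, frames, T, Init, sat, And, Not, prime):
        """Largest level k such that c is inductive relative to frames[k], or None."""
        best = None
        for k, F_k in enumerate(frames):
            if is_inductive_relative(c, F_k, T, Init, sat, And, Not, prime):
                best = k
        return best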

    Self-Actualization through Conscientization

    This article takes a critical look at how Theatre of the Oppressed is assisting the self- and social transformation of severely disadvantaged groups that fall into the legal and political gaps. Specifically, it will look at the ideological essence of Freire’s conscientization, as prompted through Theatre of the Oppressed workshops run with asylum seekers in Melbourne. It will focus on the early stages of the conscientization process, which centers on the notion of ‘self’ through reflection and contextual orientation. This will encompass an analysis of the old, new and evolving definitions of ‘oppressed’ and of oppressive dynamics in relation to Theatre of the Oppressed poetics. Discussion will then move beyond this recognition of the oppressed self to verbalization as an acknowledgement of oppression. It then discusses workshops and group dynamics as conscientizing elements that promote transformation of the self.

    Use of data mining for investigation of crime patterns

    A lot of research is being done to improve the utilization of crime data. This thesis deals with the design and implementation of a crime database and associated search methods to identify crime patterns from the database. The database was created in Microsoft SQL Server (back end). The user interface (front end) and the crime pattern identification software (middle tier) were implemented in ASP.NET. Such a web-based approach enables the user to access the database from anywhere and at any time. A general ARFF file can also be generated in a Windows-compatible format, so that the user can carry out detailed analysis with other data mining software such as WEKA. Further, effective navigation was provided so that the software can be used in a user-friendly way.
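
    Since the abstract mentions exporting an ARFF file for WEKA, a small sketch of that step may help; the relation name, attributes and records below are illustrative assumptions, not the thesis's actual schema.

    # Sketch: writing crime records to WEKA's ARFF format (hypothetical schema).
    def write_arff(records, path="crimes.arff"):
        """records: list of dicts with assumed keys 'type', 'district' and 'hour'."""
        crime_types = sorted({r["type"] for r in records})
        districts = sorted({r["district"] for r in records})
        # newline="\r\n" gives Windows-style line endings, in the spirit of the
        # Windows-oriented export the thesis mentions.
        with open(path, "w", newline="\r\n") as f:
            f.write("@RELATION crime\n\n")
            f.write("@ATTRIBUTE type {%s}\n" % ",".join(crime_types))
            f.write("@ATTRIBUTE district {%s}\n" % ",".join(districts))
            f.write("@ATTRIBUTE hour NUMERIC\n\n")
            f.write("@DATA\n")
            for r in records:
                f.write("%s,%s,%d\n" % (r["type"], r["district"], r["hour"]))

    write_arff([
        {"type": "burglary", "district": "north", "hour": 23},
        {"type": "theft", "district": "south", "hour": 14},
    ])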

    A Field Guide to Genetic Programming

    xiv, 233 p. : il. ; 23 cm. Electronic book. A Field Guide to Genetic Programming (ISBN 978-1-4092-0073-4) is an introduction to genetic programming (GP). GP is a systematic, domain-independent method for getting computers to solve problems automatically starting from a high-level statement of what needs to be done. Using ideas from natural evolution, GP starts from an ooze of random computer programs and progressively refines them through processes of mutation and sexual recombination until solutions emerge, all without the user having to know or specify the form or structure of solutions in advance. GP has generated a plethora of human-competitive results and applications, including novel scientific discoveries and patentable inventions. Contents: Introduction -- Representation, initialisation and operators in Tree-based GP -- Getting ready to run genetic programming -- Example genetic programming run -- Alternative initialisations and operators in Tree-based GP -- Modular, grammatical and developmental Tree-based GP -- Linear and graph genetic programming -- Probabilistic genetic programming -- Multi-objective genetic programming -- Fast and distributed genetic programming -- GP theory and its applications -- Applications -- Troubleshooting GP -- Conclusions -- Appendix A: Resources -- Appendix B: TinyGP -- Bibliography -- Index.
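
    To make the book's core loop concrete, here is a minimal, hedged sketch of tree-based GP for symbolic regression: random program trees are refined by tournament selection, subtree crossover and subtree mutation. It is a toy in the spirit of the guide's TinyGP appendix, not the book's own code; the function set, parameters and target function are assumptions.

    # Toy tree-based GP for symbolic regression (illustrative sketch only).
    import operator
    import random

    FUNCS = [(operator.add, 2), (operator.sub, 2), (operator.mul, 2)]  # assumed function set
    TERMS = ["x"] + [float(i) for i in range(-2, 3)]                   # assumed terminal set

    def random_tree(depth=3):
        """Grow a random expression: nested tuples (func, child, child) or a terminal."""
        if depth == 0 or random.random() < 0.3:
            return random.choice(TERMS)
        func, arity = random.choice(FUNCS)
        return (func,) + tuple(random_tree(depth - 1) for _ in range(arity))

    def evaluate(tree, x):
        if isinstance(tree, tuple):
            return tree[0](*(evaluate(child, x) for child in tree[1:]))
        return x if tree == "x" else tree

    def fitness(tree, cases):
        """Sum of absolute errors over the training cases (lower is better)."""
        return sum(abs(evaluate(tree, x) - y) for x, y in cases)

    def nodes(tree, path=()):
        """All (path, subtree) pairs, so variation operators can pick a random point."""
        found = [(path, tree)]
        if isinstance(tree, tuple):
            for i, child in enumerate(tree[1:], start=1):
                found += nodes(child, path + (i,))
        return found

    def replace(tree, path, subtree):
        if not path:
            return subtree
        i = path[0]
        return tree[:i] + (replace(tree[i], path[1:], subtree),) + tree[i + 1:]

    def crossover(a, b):
        point, _ = random.choice(nodes(a))       # crossover point in parent a
        _, donor = random.choice(nodes(b))       # donated subtree from parent b
        return replace(a, point, donor)

    def mutate(tree):
        point, _ = random.choice(nodes(tree))
        return replace(tree, point, random_tree(2))

    def run_gp(cases, pop_size=100, generations=20):
        population = [random_tree() for _ in range(pop_size)]
        for _ in range(generations):
            def tournament():
                return min(random.sample(population, 5), key=lambda t: fitness(t, cases))
            offspring = []
            for _ in range(pop_size):
                child = crossover(tournament(), tournament())
                if random.random() < 0.2:         # occasional subtree mutation
                    child = mutate(child)
                offspring.append(child)
            population = offspring
        return min(population, key=lambda t: fitness(t, cases))

    # Assumed target for the demo: y = x*x + x.
    cases = [(x, x * x + x) for x in range(-5, 6)]
    best = run_gp(cases)
    print(fitness(best, cases))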

    Daily Eastern News: September 15, 1995

    https://thekeep.eiu.edu/den_1995_sep/1018/thumbnail.jp

    Human environment interactions and collaborative adaptive capacity building in a resilience framework

    2012 Spring. Includes bibliographical references. Being firmly in the Anthropocene Era, a period in humanity's evolution in which human behavior and dominance are significantly impacting the earth's systems, my research objective responded to the concern and call of the National Science Foundation and of the International Human Dimensions Programme on Global Environmental Change that humanity needs to develop new strategies to tackle complex anthropogenic issues impacting the global environment, and that there should be a focus on human behavior to effect change. Through a collaborative tri-phase, dual-model research initiative in the back country of Burntwater, Arizona, in the Houck Chapter on the Navajo Nation, a small group of Navajo, using a photovoice and artvoice technique, began an exploration of community issues and concerns. The outcome confirmed that illegal trash dumping was a serious matter for the community, in need of attention. Through multiple community gatherings, the illegal trash dumping issue was discussed and explored within the workings of a Participatory Social Framework of Action - Collaborative Adaptive Capacity Building (PSFA-CACB) conceptual model. Using data from my field site, I was able to partially inform a theoretical agent-based model, Taking Care of the Land - Human Environment Interactions (TCL-HEI). Using the TCL-HEI model, I was then able to illustrate theoretically, within a resilience framework, a social-ecological system regime basin shift from an undesirable state to a desirable state. This shift resulted from a change in the system's stability landscape variables through the introduction of a combination of consultative behavior and economic incentive model parameters. The ultimate objective of the tri-phase, dual-model approach was to show how local and regional sustainable entrepreneurial and cooperative action might change illegal trash dumping behavior through a recycling and waste-to-fuels processing program. I further show how such an initiative would mitigate environmental degradation by lessening illegal trash dumping sites and landfill deposits while creating jobs and empowering a local population. It is my hope that the ramifications of this study might be considered at the Chapter, Agency and Nation levels on the Navajo Nation to explore possibilities of contracting out the development of a clean-energy waste-to-fuels processing facility and program.

    Spartan Daily, December 15, 1965

    Volume 53, Issue 55. https://scholarworks.sjsu.edu/spartandaily/4669/thumbnail.jp

    Abstracts of Papers, 89th Annual Meeting of the Virginia Academy of Science, May 25-27, 2011, University of Richmond, Richmond VA

    Full abstracts of the 89th Annual Meeting of the Virginia Academy of Science, May 25-27, 2011, University of Richmond, Richmond VA

    New Mexico Lobo, Volume 074, No 16, 10/2/1970

    New Mexico Lobo, Volume 074, No 16, 10/2/1970. https://digitalrepository.unm.edu/daily_lobo_1970/1098/thumbnail.jp

    Efficient Learning and Evaluation of Complex Concepts in Inductive Logic Programming

    Inductive Logic Programming (ILP) is a subfield of Machine Learning with foundations in logic programming. In ILP, logic programming, a subset of first-order logic, is used as a uniform representation language for the problem specification and induced theories. ILP has been successfully applied to many real-world problems, especially in the biological domain (e.g. drug design, protein structure prediction), where relational information is of particular importance. The expressiveness of logic programs grants flexibility in specifying the learning task and understandability to the induced theories. However, this flexibility comes at a high computational cost, constraining the applicability of ILP systems. Constructing and evaluating complex concepts remain two of the main issues that prevent ILP systems from tackling many learning problems. These learning problems are interesting both from a research perspective, as they raise the standards for ILP systems, and from an application perspective, where such target concepts naturally occur in many real-world applications. Such complex concepts cannot be constructed or evaluated by parallelizing existing top-down ILP systems or improving the underlying Prolog engine. Novel search strategies and cover algorithms are needed. The main focus of this thesis is on how to efficiently construct and evaluate complex hypotheses in an ILP setting. In order to construct such hypotheses we investigate two approaches. The first, the Top Directed Hypothesis Derivation framework, implemented in the ILP system TopLog, involves the use of a top theory to constrain the hypothesis space. In the second approach we revisit the bottom-up search strategy of Golem, lifting its restriction to determinate clauses, which had rendered Golem inapplicable to many key areas. These developments led to the bottom-up ILP system ProGolem. A challenge that arises with a bottom-up approach is the coverage computation of long, non-determinate clauses; Prolog’s SLD-resolution is no longer adequate. We developed a new, Prolog-based, theta-subsumption engine which is significantly more efficient than SLD-resolution in computing the coverage of such complex clauses. We provide evidence that ProGolem achieves the goal of learning complex concepts by presenting a protein-hexose binding prediction application. The theory ProGolem induced has statistically significantly better predictive accuracy than that of other learners. More importantly, the biological insights ProGolem’s theory provided were judged by domain experts to be relevant and, in some cases, novel.
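
    The coverage test the thesis discusses rests on theta-subsumption: a clause C subsumes a clause D if some substitution theta maps every literal of C onto a literal of D. The sketch below is a naive backtracking check over tuple-encoded literals, purely to illustrate the idea; it ignores literal signs and the optimisations a real engine such as ProGolem's requires, and the example clauses are assumptions.

    # Naive theta-subsumption check (illustration only, not ProGolem's engine).
    # A literal is a tuple like ("parent", "X", "ann"); names starting with an
    # upper-case letter are variables, everything else is a constant. Literal
    # signs (head vs. body) are ignored for brevity.

    def is_var(term):
        return isinstance(term, str) and term[:1].isupper()

    def match_literal(lc, ld, theta):
        """Try to extend substitution theta so that literal lc maps onto ld."""
        if lc[0] != ld[0] or len(lc) != len(ld):    # predicate symbol and arity
            return None
        theta = dict(theta)
        for a, b in zip(lc[1:], ld[1:]):
            if is_var(a):
                if a in theta and theta[a] != b:
                    return None
                theta[a] = b
            elif a != b:
                return None
        return theta

    def theta_subsumes(C, D, theta=None):
        """True iff some substitution theta maps every literal of C onto a literal of D."""
        theta = theta or {}
        if not C:
            return True
        first, rest = C[0], C[1:]
        for ld in D:
            extended = match_literal(first, ld, theta)
            if extended is not None and theta_subsumes(rest, D, extended):
                return True
        return False

    # C = p(X) :- q(X, Y)   and   D = p(a) :- q(a, b), r(b), as literal lists.
    C = [("p", "X"), ("q", "X", "Y")]
    D = [("p", "a"), ("q", "a", "b"), ("r", "b")]
    print(theta_subsumes(C, D))    # True, with theta = {X: a, Y: b}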