13 research outputs found

    An Analytical Approach to Programs as Data Objects

    Get PDF
    This essay accompanies a selection of 32 articles (referred to in bold face in the text and marginally marked in the bibliographic references) submitted to Aarhus University towards a Doctor Scientiarum degree in Computer Science. The author's previous academic degree, beyond a doctoral degree obtained in June 1986, is an "Habilitation à diriger les recherches" from the Université Pierre et Marie Curie (Paris VI) in France; the corresponding material was submitted in September 1992 and the degree was obtained in January 1993. The present 32 articles have all been written since 1993 and while at DAIMI. Except for one other PhD student, all co-authors are or have been the author's students here in Aarhus.

    From fuzzy to annotated semantic web languages

    Get PDF
    The aim of this chapter is to present a detailed, self-contained and comprehensive account of the state of the art in representing and reasoning with fuzzy knowledge in Semantic Web languages such as the triple languages RDF/RDFS, the conceptual languages of the OWL 2 family and rule languages. We further show how one may generalise them to so-called annotation domains, which also cover, e.g., temporal and provenance extensions.
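    The step from fuzzy degrees to annotation domains can be made concrete with a small sketch. The Python below is illustrative only: the class, the relation names and the toy inference rule are invented here and are not the chapter's formalism. An annotation domain is modelled as a carrier with two operations, one combining the annotations of joined premises and one merging alternative derivations, instantiated below with the fuzzy interval [0, 1].

        from dataclasses import dataclass
        from typing import Callable

        @dataclass(frozen=True)
        class AnnotationDomain:
            otimes: Callable  # combine annotations of joined premises (conjunction)
            oplus: Callable   # merge annotations of alternative derivations (disjunction)
            top: object       # annotation of a fully certain statement

        # Fuzzy annotation domain over [0, 1] with min/max combination.
        fuzzy = AnnotationDomain(otimes=min, oplus=max, top=1.0)

        # Annotated triples: (subject, predicate, object) -> annotation.
        triples = {
            ("jim", "worksFor", "acme"): 0.8,
            ("acme", "locatedIn", "rome"): 1.0,
        }

        def derive_works_in(kb, dom):
            """Toy rule: worksFor(x, y), locatedIn(y, z) => worksIn(x, z), annotated
            with otimes of the premises; oplus merges repeated derivations."""
            derived = {}
            for (s1, p1, o1), a1 in kb.items():
                for (s2, p2, o2), a2 in kb.items():
                    if p1 == "worksFor" and p2 == "locatedIn" and o1 == s2:
                        key, a = (s1, "worksIn", o2), dom.otimes(a1, a2)
                        derived[key] = dom.oplus(derived[key], a) if key in derived else a
            return derived

        print(derive_works_in(triples, fuzzy))  # {('jim', 'worksIn', 'rome'): 0.8}

    Swapping the domain (for example, time intervals with intersection and union) changes the meaning of the annotations without changing the inference machinery, which is the point of the generalisation.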

    Patterns for Programming in the Semantic Web

    Get PDF
    Originally proposed in the mid-90s, design patterns for software development played a key role in object-oriented programming, not only by increasing software quality but also by giving a better understanding of the power and limitations of this paradigm. Since then, several authors have endorsed a similar task for other programming paradigms, in the hope of achieving similar benefits. In this paper we discuss design patterns for the Semantic Web, giving new insights on how existing programming frameworks can be used in a systematic way to design large-scale systems. The common denominator of these frameworks is the combination of different reasoning systems, namely description logics and logic programming. Therefore, we chose to work in a generalization of dl-programs that supports several (possibly different) description logics, expecting that our results will be easily adapted to other existing frameworks such as multi-context systems. This study also suggests new constructs to enforce legibility and internal structure of logic-based Semantic Web programs.
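    The combination of description-logic and logic-programming reasoning that the abstract refers to can be pictured with a toy sketch. The Python below is not the paper's framework and calls no real DL reasoner: the dl_query stub, the example ontology and the rule are hypothetical stand-ins that only show the general shape of rules consulting an external DL knowledge base.

        # Hypothetical sketch of a rule whose body contains "DL atoms", i.e. queries
        # delegated to an external description-logic reasoner (stubbed out here).

        def dl_query(concept, individual):
            """Stand-in for a DL reasoner call; here just a fixed toy ontology."""
            ontology = {"Wine": {"chianti", "rioja"}, "Region": {"tuscany"}}
            return individual in ontology.get(concept, set())

        facts = {("producedIn", "chianti", "tuscany")}

        # Rule, written informally in dl-program style:
        #   recommended(X) :- producedIn(X, R), DL[Wine](X), DL[Region](R).
        def apply_rules(facts):
            derived = set(facts)
            for (p, x, r) in facts:
                if p == "producedIn" and dl_query("Wine", x) and dl_query("Region", r):
                    derived.add(("recommended", x))
            return derived

        print(apply_rules(facts))  # contains ('recommended', 'chianti')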

    A Field Guide to Genetic Programming

    Get PDF
    xiv, 233 p. : ill. ; 23 cm. Electronic book. A Field Guide to Genetic Programming (ISBN 978-1-4092-0073-4) is an introduction to genetic programming (GP). GP is a systematic, domain-independent method for getting computers to solve problems automatically, starting from a high-level statement of what needs to be done. Using ideas from natural evolution, GP starts from an ooze of random computer programs and progressively refines them through processes of mutation and sexual recombination until solutions emerge, all without the user having to know or specify the form or structure of solutions in advance. GP has generated a plethora of human-competitive results and applications, including novel scientific discoveries and patentable inventions.
    Contents: Introduction -- Representation, initialisation and operators in tree-based GP -- Getting ready to run genetic programming -- Example genetic programming run -- Alternative initialisations and operators in tree-based GP -- Modular, grammatical and developmental tree-based GP -- Linear and graph genetic programming -- Probabilistic genetic programming -- Multi-objective genetic programming -- Fast and distributed genetic programming -- GP theory and its applications -- Applications -- Troubleshooting GP -- Conclusions -- Appendix A: Resources -- Appendix B: TinyGP -- Bibliography -- Index.
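    The mutation-and-recombination loop described in the abstract can be made concrete with a small sketch. The Python below is illustrative only and is not the book's TinyGP implementation: the toy symbolic-regression target, the tuple-based tree representation and all parameter values are assumptions chosen for brevity.

        import random, operator

        # Toy tree-based GP for symbolic regression of f(x) = x**2 + x.
        # Trees are nested tuples ('add'|'sub'|'mul', left, right), the variable 'x',
        # or a numeric constant.
        FUNCS = {'add': operator.add, 'sub': operator.sub, 'mul': operator.mul}
        TERMS = ['x', 1.0, 2.0]
        TARGET = lambda x: x * x + x
        CASES = [i / 2 for i in range(-10, 11)]

        def rand_tree(depth=3):
            if depth == 0 or random.random() < 0.3:
                return random.choice(TERMS)
            f = random.choice(list(FUNCS))
            return (f, rand_tree(depth - 1), rand_tree(depth - 1))

        def evaluate(tree, x):
            if tree == 'x':
                return x
            if isinstance(tree, tuple):
                return FUNCS[tree[0]](evaluate(tree[1], x), evaluate(tree[2], x))
            return tree  # numeric constant

        def fitness(tree):  # lower is better: total absolute error on the test cases
            return sum(abs(evaluate(tree, x) - TARGET(x)) for x in CASES)

        def mutate(tree):  # replace a randomly chosen subtree with a fresh random one
            if not isinstance(tree, tuple) or random.random() < 0.3:
                return rand_tree(2)
            i = random.choice([1, 2])
            return tree[:i] + (mutate(tree[i]),) + tree[i + 1:]

        def crossover(a, b):  # graft genetic material from b into a random point of a
            if not isinstance(a, tuple) or random.random() < 0.3:
                return random.choice([b[1], b[2]]) if isinstance(b, tuple) else b
            i = random.choice([1, 2])
            return a[:i] + (crossover(a[i], b),) + a[i + 1:]

        population = [rand_tree() for _ in range(200)]
        for generation in range(30):
            population.sort(key=fitness)
            parents = population[:50]  # truncation selection of the fittest quarter
            population = parents + [
                mutate(random.choice(parents)) if random.random() < 0.5
                else crossover(random.choice(parents), random.choice(parents))
                for _ in range(150)
            ]
        best = min(population, key=fitness)
        print(best, fitness(best))  # often finds ('add', ('mul', 'x', 'x'), 'x') exactly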

    Field Guide to Genetic Programming

    Get PDF

    Foundations of program refinement by calculation

    Get PDF
    Doctoral thesis in Informatics (field of knowledge: Foundations of Computing). Design of trustworthy software calls for technologies which address software reliability formally, i.e. by writing and reasoning about mathematical models (specifications) of real-life objects and activities. Such technologies involve the additional notion of refinement (or reification), the systematic process of ensuring correct implementations of formal specifications. In the well-known constructive style of software development, design is factored into several steps, each intermediate step being first proposed and then proved to follow from its antecedent. However, such an "invent-and-verify" style is often impractical due to the complexity of the mathematical reasoning involved in real-size software problems. Moreover, program reasoning is normally carried out in predicate/temporal logic and naïve set theory, notations which do not scale up to fully detailed models of complex problems. This thesis is concerned with the foundations of an alternative technique for program refinement based on so-called pointfree calculation. The idea is to develop a calculus that allows programs to be actually calculated from their specifications; instead of doing proofs from first principles, this strategy leads to implementations which are "correct by construction". Conventional refinement rules are transformed into simple, elegant equations dispensing with points and involving only binary relation combinators. The pointfree binary relational calculus is therefore at the heart of the proposed refinement theory. This thesis adds to such a mathematical framework in two ways: on the one hand, it shows how to apply it to data and algorithmic refinement problems; on the other hand, it proposes constructions which prove useful not only in refinement but also in general. These include generic functional transposition, a technique for converting relations into functions aimed at developing relational algebra via the algebra of functions, which is employed throughout the dissertation as a leitmotiv. The proposed theory of data refinement draws heavily on the Galois connection approach to mathematical reasoning, including a simple way to calculate refinement invariants induced by the Galois-connected laws. Algorithmic refinement is addressed in the same way: the standard operation refinement ordering is given a pointfree treatment, which includes a simple calculation of Groves' factorization into two suborders with opposite behaviours (reduction of non-determinism and increase of definition) and its direct application to structural refinement over arbitrary parametric types. Finally, coalgebraic refinement is handled with an equivalent single complete rule for data refinement, which is used to witness refinement by calculation of the transition relations corresponding to coalgebras.
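    As a generic illustration of the Galois-connection style of reasoning mentioned above (the symbols below are the textbook formulation, not notation taken from the thesis), a Galois connection relates a lower adjoint f and an upper adjoint g between two preordered sets so that a step on one side can be traded for a step on the other during a calculation:

        % Generic Galois connection between preorders (A, <=) and (B, [=):
        %   f : A -> B is the lower adjoint, g : B -> A the upper adjoint.
        \[
          f\,x \;\sqsubseteq\; y \quad\Longleftrightarrow\quad x \;\leq\; g\,y
        \]

    Laws stated as such equivalences can be applied in either direction, which is what makes the refinement-by-calculation style algebraic rather than proof-from-first-principles.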

    Chemical programming to exploit chemical reaction systems for computation

    Get PDF
    This thesis is about programming approaches that exploit the computational capabilities of chemical systems, and it consists of two parts. The first part, constructive design, reports research on the theoretical development of chemical programming. As a result of these investigations, general programming principles, named organization-oriented programming, are derived. The idea is to design reaction networks such that the desired computational outputs correspond to the organizational structures within the networks. The second part, autonomous design, discusses programming strategies without human interaction, namely evolution and exploration. A motivation for this approach is the possibility of discovering novelty without rationalization. Regarding evolutionary strategies, we focus on how to track evolutionary processes: our approach is to analyze these dynamical processes at a higher level of abstraction, and we emphasize the usefulness of distinguishing organizational evolution, in the space of organizations, from actual evolution in state space. As a second strategy for autonomous chemical programming, we suggest an explorative approach in which an automated system is used to explore the behavior of the chemical reaction system as a preliminary step; a specific aspect of the system's behavior then becomes available for a programmer to choose for a particular computational purpose. This thesis reports the development of such autonomous exploration techniques. Finally, we discuss combining the two approaches, constructive design and autonomous design, in what we call a hybrid approach. From our perspective, hybrid approaches are ideal, and the cooperation of constructive design and autonomous design is fruitful.
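    To give a flavour of what programming a reaction system can mean, the sketch below is a generic artificial-chemistry toy, not the thesis's organization-oriented method or one of its reaction networks: molecules are integers in a well-mixed multiset, and a single subtraction "reaction" drives the soup towards the greatest common divisor of the inputs.

        import random
        from math import gcd  # used only to check the result at the end

        # Toy artificial chemistry: the reaction  a + b -> (a - b) + b  (for a > b)
        # fires whenever two unequal molecules collide; the soup's gcd is invariant,
        # so the stable state consists of copies of the gcd of the initial molecules.
        def react(soup, steps=100_000):
            soup = list(soup)
            for _ in range(steps):
                i, j = random.sample(range(len(soup)), 2)  # two colliding molecules
                x, y = soup[i], soup[j]
                if x > y:
                    soup[i] = x - y
                elif y > x:
                    soup[j] = y - x
            return soup

        initial = [12, 30, 42, 18]
        print(sorted(set(react(initial))))     # expected: [6]
        print(gcd(gcd(12, 30), gcd(42, 18)))   # 6, the conventional answer

    The "program" here is the choice of reaction rule and initial molecules rather than a sequence of instructions, which is the sense in which chemical systems are programmed.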

    Foundations of Fuzzy Logic and Semantic Web Languages

    Get PDF
    This book is the first to combine coverage of fuzzy logic and Semantic Web languages. It provides in-depth insight into fuzzy Semantic Web languages for readers who are not experts in fuzzy set theory and fuzzy logic, and it helps researchers of other, non-Semantic Web languages gain a better understanding of the theoretical fundamentals of Semantic Web languages. The first part of the book covers all the theoretical and logical aspects of classical (two-valued) Semantic Web languages; the second part explains how to generalize these languages to cope with fuzzy set theory and fuzzy logic.
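    The fuzzy generalisation described for the second part can be illustrated with the standard truth-functional connectives of mathematical fuzzy logic. The snippet below is generic textbook material, not code or notation from the book: conjunction over degrees in [0, 1] is interpreted by a t-norm, with Gödel, product and Łukasiewicz logics as the usual choices, each with its residuated implication.

        # Standard t-norms and their residuated implications over degrees in [0, 1].

        def goedel_and(a, b):        return min(a, b)
        def product_and(a, b):       return a * b
        def lukasiewicz_and(a, b):   return max(0.0, a + b - 1.0)

        def goedel_implies(a, b):      return 1.0 if a <= b else b
        def product_implies(a, b):     return 1.0 if a <= b else b / a
        def lukasiewicz_implies(a, b): return min(1.0, 1.0 - a + b)

        # Lower bound on Athletic(tom) given Tall(tom) = 0.8 and the rule
        # Tall -> Athletic held to degree 0.9 (fuzzy modus ponens via the t-norm).
        tall, rule = 0.8, 0.9
        for name, tnorm in [("Goedel", goedel_and), ("product", product_and),
                            ("Lukasiewicz", lukasiewicz_and)]:
            print(name, tnorm(tall, rule))  # 0.8, 0.72..., 0.7...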

    Metacomputing on clusters augmented with reconfigurable hardware

    Get PDF
