
    Automatic parallelization of irregular and pointer-based computations: perspectives from logic and constraint programming

    Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures which make heavy use of pointers. This complicates all the steps of a parallelizing compiler, from independence detection to task partitioning and placement. In the past decade there has been significant progress in the development of parallelizing compilers for logic programming and, more recently, constraint programming. The typical applications of these paradigms frequently involve irregular computations, which arguably makes the techniques used in these compilers potentially interesting. In this paper we introduce in a tutorial way some of the problems faced by parallelizing compilers for logic and constraint programs. These include the need for inter-procedural pointer aliasing analysis for independence detection and having to manage speculative and irregular computations through task granularity control and dynamic task allocation. We also provide pointers to some of the progress made in these areas. In the associated talk we demonstrate representatives of several generations of these parallelizing compilers.
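The granularity-control idea the abstract mentions can be sketched in a few lines of Python (a minimal illustration, not from the paper; the threshold value and the Fibonacci example are assumptions): a recursive call is run as a parallel task only when its estimated cost exceeds a cutoff, since below that cutoff the overhead of task creation outweighs any gain.

```python
from concurrent.futures import ThreadPoolExecutor

THRESHOLD = 18  # hypothetical cutoff; real systems derive it from cost analysis


def fib(n):
    # Plain sequential computation, used below the granularity threshold.
    return n if n < 2 else fib(n - 1) + fib(n - 2)


def par_fib(n, pool):
    # Granularity control: spawn a parallel task only when the estimated
    # work (here crudely approximated by n itself) exceeds THRESHOLD;
    # otherwise run sequentially, since task creation would dominate.
    if n < THRESHOLD:
        return fib(n)
    left = pool.submit(par_fib, n - 1, pool)   # parallel branch
    right = par_fib(n - 2, pool)               # computed in the caller
    return left.result() + right


if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=8) as pool:
        print(par_fib(20, pool))
```

In the logic and constraint programming setting, the cost estimate comes from a compile-time granularity analysis of the program rather than from an argument value as in this sketch.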

    High-level characteristics of or- and independent and-parallelism in Prolog

    Although studies of a number of parallel implementations of logic programming languages are now available, their results are difficult to interpret due to the multiplicity of factors involved, the effect of each of which is difficult to separate. In this paper we present the results of a high-level simulation study of or- and independent and-parallelism with a wide selection of Prolog programs that aims to determine the intrinsic amount of parallelism, independently of implementation factors, thus facilitating this separation. We expect this study will be instrumental in better understanding and comparing results from actual implementations, as shown by some examples provided in the paper. In addition, the paper examines some of the issues and tradeoffs associated with the combination of and- and or-parallelism and proposes reasonable solutions based on the simulation data obtained.

    Towards dynamic term size computation via program transformation

    Knowing the size of the terms to which program variables are bound at run-time in logic programs is required in a class of applications related to program optimization such as, for example, recursion elimination and granularity analysis. Such sizes are difficult to even approximate at compile time and are thus generally computed at run-time by using (possibly predefined) predicates which traverse the terms involved. We propose a technique based on program transformation which has the potential to perform this computation much more efficiently. The technique is based on finding program procedures which are called before those in which knowledge regarding term sizes is needed and which traverse the terms whose size is to be determined, and transforming such procedures so that they compute term sizes "on the fly". We present a systematic way of determining whether a given program can be transformed in order to compute a given term size at a given program point without additional term traversal. Also, if several such transformations are possible, our approach allows finding minimal transformations under certain criteria. We also discuss the advantages and present some applications of our technique.
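The shape of such a transformation can be sketched in Python (illustrative only; quicksort as the transformed procedure is an assumption, and the paper's actual targets are logic programs): the original procedure already traverses the term, so the transformed version returns the size as an extra result and a later use of the size needs no second traversal.

```python
# Original style: the size of the result is recomputed by a separate
# traversal (here a call to size) at the point where it is needed.

def qsort(xs):
    if not xs:
        return []
    pivot, rest = xs[0], xs[1:]
    return (qsort([x for x in rest if x <= pivot])
            + [pivot]
            + qsort([x for x in rest if x > pivot]))


def size(xs):
    # Extra traversal performed only to learn the size.
    return len(xs)


# Transformed style: the same procedure also computes the size "on the
# fly", so no additional traversal is needed later (e.g. for a
# granularity check before spawning a parallel task).

def qsort_sz(xs):
    if not xs:
        return [], 0
    pivot, rest = xs[0], xs[1:]
    lo, n_lo = qsort_sz([x for x in rest if x <= pivot])
    hi, n_hi = qsort_sz([x for x in rest if x > pivot])
    return lo + [pivot] + hi, n_lo + 1 + n_hi
```

The transformed procedure does the same work as the original plus a constant-time addition per call, which is what makes it cheaper than a full extra traversal at the point of use.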

    Annotated text databases in the context of the Kaj Munk corpus: One database model, one query language, and several applications


    Argument in the humanities: A knowledge-based approach

    In this thesis I have a threefold purpose. I will attempt: (a) to present a generic design for a tool - the Argument Support Program - which can be of use in supporting the reasoning of archaeologists (and others, especially but not exclusively in the humanities); (b) to present a model of argumentation and debate as the theoretical orientation within which the model is developed; and (c) to suggest that this approach is a natural development of several strands of research within the artificial intelligence community. A tripartite model of argument is presented in terms of arguers, the argument structure produced and the argument domain or field. This model subsumes reasoning, interpretation and argument exchange or debate. It is maintained, further, that while this model is generally applicable, specific domains have particular styles of argument. The notion of argument style is discussed in terms of the types of reasoning used. The related concept of relevance in argument is discussed in terms of the specific tokens of these types which may be used in a particular argument. It is argued that archaeology is characterized, at least in part, by the use of argument by analogy and argument from theoretical principles or models. A design for a generic program - the Argument Support Program (ASP) - based on the theoretical principles is delineated. Details of the partial implementation of the model as a constrained debater in the domain of archaeology (ASP for archaeology or ASParch) are presented. Example runs which illustrate how the characterizing features of archaeology are dealt with are also presented, as are examples of the various domain and system knowledge bases needed.
In the final chapter, the achievements and inadequacies of this research are summarized, possible reasons for the inadequacies in the resulting system are presented, and future directions are discussed.

    The design and formative evaluation of computer based qualitative modelling environments for schools

    This research investigated how computers might enable young learners to build models so that they can express and explore their ideas and hence they can gain understanding of the subject matter as well as developing modelling abilities. A design for a qualitative modelling environment was produced, which incorporated a simple rule-based metaphor that could be presented as a diagram. The design was founded on empirical evidence of children modelling as well as theoretical grounds. This research originated in and contributed to the Modus Project, a joint venture between King's College London and the Advisory Unit for Microtechnology in Education, Hertfordshire County Council. A prototype of the software, Expert Builder, was implemented by software engineers from the Modus team. The initial stage of evaluation, based on a questionnaire survey and widespread trialling, established that the tool could be used in a wide range of educational contexts. A detailed study of children using the qualitative modelling environment was conducted in three primary schools involving 34 pupils, aged nine to 11. They used the modelling environment within the classroom in their normal curriculum work over one school year on a variety of topics assisted by their class teacher. The modelling environment enabled cooperative groupwork and supported pupils in consolidating and extending their knowledge. A formative evaluation was used to inform the design of a revised version of the software. In addition the experiences of children using the software were analysed. A framework was developed which characterised the stages in the modelling process. Teachers in the study were observed to demonstrate the earlier stages of the modelling process and then to set tasks for the children based on the later stages of building and testing the models. 
The evidence suggested that the abilities to model were context dependent, so that pupils as young as nine years old could undertake the whole modelling process provided that they were working on subject matter with which they were familiar. The teachers made use of computer based modelling in order to develop and reinforce pupils' understanding of various aspects of the curriculum and therefore they chose modelling tasks for the children. However, in one school the children were given the opportunity to design and build models of their own choice and they demonstrated that they were able to carry out all the stages in the modelling process. A taxonomy of computer based modelling is proposed which could be used to inform decisions about the design of the modelling curriculum and could provide a basis for researchers investigating the modelling process. This would be useful for further research into the intellectual and social activities of people learning to model and for teachers seeking to develop a framework for the modelling curriculum. The National Curriculum (Department of Education and Science and the Welsh Office, 1990) specifies that early steps in computer based modelling should involve exploring models developed by others; pupils are not required to build models themselves until level 7, which is expected to be reached by more able 14 year-olds. In this thesis it is argued that a modelling curriculum should provide early opportunities for pupils to undertake the modelling process by developing simple models on familiar subject matter, as well as opportunities for exploring more complex models, since evidence from the research reported in this thesis suggests that younger pupils are able to build models. In this way pupils will be enabled to acquire modelling capability as well as developing their understanding of a range of topics through modelling.
Progression in modelling capability would involve constructing models of more complex situations and using a wider range of modelling environments.

    Use of proofs-as-programs to build an analogy-based functional program editor

    This thesis presents a novel application of the technique known as proofs-as-programs. Proofs-as-programs defines a correspondence between proofs in a constructive logic and functional programs. By using this correspondence, a functional program may be represented directly as the proof of a specification and so the program may be analysed within this proof framework. CYNTHIA is a program editor for the functional language ML which uses proofs-as-programs to analyse users' programs as they are written. So that the user requires no knowledge of proof theory, the underlying proof representation is completely hidden. The proof framework allows programs written in CYNTHIA to be checked to be syntactically correct, well-typed, well-defined and terminating. CYNTHIA also embodies the idea of programming by analogy: rather than starting from scratch, users always begin with an existing function definition. They then apply a sequence of high-level editing commands which transform this starting definition into the one required. These commands preserve correctness and also increase programming efficiency by automating commonly occurring steps. The design and implementation of CYNTHIA is described and its role as a novice programming environment is investigated. Use by experts is possible but only a sub-set of ML is currently supported. Two major trials of CYNTHIA have shown that CYNTHIA is well-suited as a teaching tool. Users of CYNTHIA make fewer programming errors and the feedback facilities of CYNTHIA mean that it is easier to track down the source of errors when they do occur.
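The proofs-as-programs correspondence the thesis builds on can be seen in miniature in a Lean sketch (the example is illustrative and not taken from the thesis, which targets ML): a well-typed functional program is simultaneously a proof of the proposition its type expresses, so type checking the program checks the proof.

```lean
-- Under the Curry–Howard correspondence, a term of type A ∧ B → B ∧ A
-- is at once a function that swaps the components of a pair and a proof
-- that conjunction is commutative.
def swap {A B : Prop} (h : A ∧ B) : B ∧ A :=
  ⟨h.right, h.left⟩
```

Editing a program then corresponds to transforming a proof, which is what allows an editor in this style to guarantee that properties such as well-typedness and termination are preserved by each editing command.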