45 research outputs found

    Abstraction: a notion for reverse engineering.


    Ideology Versus Clientelism: Modernization and Electoral Competition in Brazil

    This study investigates how parties use the political dimensions of ideology (left-right) and clientelism (programmatic-patronage) to compete electorally in developing democracies. It proposes a combined utility theory, which suggests that polarized competitive elections in modernizing national electoral markets compel programmatic parties to coalesce with clientelistic parties in order to gain access to regional private electoral markets. Methodologically, the study takes a mixed-method approach, focusing on Brazil as a crucial test case. It applies spatial voting models to assess the validity of ideological competition, and uses the clustering and dispersion of geospatial voting distributions to devise a new quantitative measure of clientelism based on subnational electoral market characteristics. Field research helps uncover how political elites form strategically combined ideological and clientelistic party coalitions to increase electoral success. The analysis suggests that ideology and clientelism operate as independent factors explaining political linkages in developing democracies. The interaction of these dimensions through electoral coalitions, however, indicates a weakening of ideology over time and no discernible pattern at the clientelistic level. This study contributes to the literature by investigating party competition at both the ideological and the clientelistic level. It also contributes to the analytical and methodological refinement of the concept of clientelism as a systematic political linkage.
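
    The thesis's measurement code is not given here; as a rough, hedged illustration of the clustering-versus-dispersion idea only, the Python sketch below computes a Herfindahl-style concentration index over a party's municipal vote shares. The index choice and all names are assumptions for illustration, not the study's actual metric: a vote clustered in a few municipalities scores near 1, while an evenly dispersed vote scores near 1/n.

        # Hedged sketch: spatial concentration of a party's vote as a crude
        # clientelism proxy (assumed measure; not the thesis's actual metric).

        def concentration_index(votes_by_municipality):
            """Herfindahl index over municipal vote shares.

            Near 1.0 -> vote clustered in a few municipalities (patronage-like)
            Near 1/n -> vote evenly dispersed (programmatic-like)
            """
            total = sum(votes_by_municipality)
            if total == 0:
                return 0.0
            shares = [v / total for v in votes_by_municipality]
            return sum(s * s for s in shares)

        # Usage: two hypothetical parties across five municipalities.
        clustered = [9000, 300, 200, 100, 100]    # votes piled in one stronghold
        dispersed = [2000, 1900, 2100, 1950, 2050]
        print(concentration_index(clustered))     # ~0.86, highly clustered
        print(concentration_index(dispersed))     # ~0.20, near 1/5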

    Parallel programming using functional languages

    It has been argued for many years that functional programs are well suited to parallel evaluation. This thesis investigates this claim from a programming perspective; that is, it investigates parallel programming using functional languages. The approach taken has been to determine the minimum programming which is necessary in order to write efficient parallel programs, without the aid of clever compile-time analyses. It is argued that parallel evaluation should be expressed explicitly, by the programmer, in programs. To achieve this, a lazy functional language is extended with parallel and sequential combinators. The mathematical nature of functional languages means that programs can be formally derived by program transformation. To date, most work on program derivation has concerned sequential programs. In this thesis Squigol, an increasingly popular functional calculus for program derivation, has been used to derive three parallel algorithms. It is shown that some aspects of Squigol are suitable for parallel program derivation, while other aspects are specifically orientated towards sequential algorithm derivation. In order to write efficient parallel programs, parallelism must be controlled: to limit storage usage, the number of tasks, and the minimum size of tasks. In particular, over-eager evaluation or generating excessive numbers of tasks can consume too much storage, and tasks can be too small to be worth evaluating in parallel. Several programming techniques for parallelism control were tried and compared with a run-time system heuristic for parallelism control. It was discovered that the best control was effected by a combination of run-time system and programmer control of parallelism. One of the problems with parallel programming using functional languages is that non-deterministic algorithms cannot be expressed. A bag (multiset) data type is proposed to allow a limited form of non-determinism to be expressed. Bags can be given a non-deterministic parallel implementation; however, provided the operations used to combine bag elements are associative and commutative, the result of bag operations will be deterministic. The onus is on the programmer to prove this, but usually this is not difficult. Also, bags' insensitivity to ordering means that more transformations are directly applicable than if, say, lists were used instead. It is necessary to be able to reason about and measure the performance of parallel programs; for example, algorithms which intuitively seem to be good parallel ones sometimes are not. For some higher-order functions it is possible to devise parameterised formulae describing their performance. This is done for divide-and-conquer functions, which enables constraints to be formulated which guarantee that they have good performance. Pipelined parallelism is difficult to analyse, so a formal semantics for calculating the performance of pipelined programs is devised and used to analyse the performance of a pipelined Quicksort. By treating the performance semantics as a set of transformation rules, the simulation of parallel programs may be achieved by transforming programs. Some parallel programs perform poorly due to programming errors; a pragmatic method of debugging such errors is illustrated by some examples.
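
    The thesis's combinators belong to a lazy functional language; as a loose Python analogue of the bag proposal, the hedged sketch below reduces a multiset in parallel with a process pool. The chunking scheme and names are assumptions; the point illustrated is the abstract's claim that a non-deterministic parallel evaluation order still yields a deterministic result when the combining operation is associative and commutative.

        # Hedged sketch of the "bag" idea: reduce a multiset in parallel.
        # The scheduling is non-deterministic, but for an associative and
        # commutative operation (here +) the result is deterministic.

        from functools import reduce
        from multiprocessing import Pool
        from operator import add

        def reduce_chunk(chunk):
            # Each worker folds its own chunk; element order within and
            # across chunks does not matter because + is associative and
            # commutative.
            return reduce(add, chunk, 0)

        def parallel_bag_reduce(bag, workers=4):
            chunks = [bag[i::workers] for i in range(workers)]
            with Pool(workers) as pool:
                partials = pool.map(reduce_chunk, chunks)
            return reduce(add, partials, 0)

        if __name__ == "__main__":
            bag = list(range(1, 10001))
            # Always 50005000, whatever the parallel schedule was.
            print(parallel_bag_reduce(bag))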

    Modelling mechanisms of change in crop populations

    Computer-based simulation models of the changes occurring within crop populations subjected to agents of phenotypic change have been developed for use on commonly available personal computer equipment. As an underlying developmental principle, the models have been designed as general-case, mechanistic, stochastic models, in contrast to the predominantly empirically-derived, system-specific, deterministic (predictive) models currently available. A modelling methodology has evolved for developing portable simulation models, written in high-level, general-purpose code, that allow use, modification, and continued development by biologists with little computer programming expertise. The initial subject of these modelling activities was the simulation of the effects of selection and other agents of genetic change in crop populations, resulting in the computer model PSELECT. Output from PSELECT, specifically phenotypic and genotypic response to phenotypic truncation selection, conformed to expectation as defined by results from established analogue modelling work. Validation of the model by comparison of output with the results from an experimental-scale plant breeding exercise was less conclusive, and, because the genetic basis of the phenotypic characters used in the selection programme was insufficiently defined, the validation exercise provided only broad qualitative agreement with the model output. By virtue of the predominantly subjective nature of plant breeding programmes, the development of PSELECT resulted in a model of theoretical interest, but with little current practical application. Modelling techniques from the development of PSELECT were applied to the simulation of plant disease epidemics, where the modelled system is well characterised and simulation modelling is an area of active research. The model SATSUMA, simulating the spatial and temporal development of diseases within crop populations, was developed. The model generates output which conforms to current epidemiological theory and is compatible with contemporary methods of temporal and spatial analysis of crop disease epidemics. Temporal disease progress in the simulations was accurately described by variations of a generalised logistic model, and analysis of the spatial pattern of simulated epidemics by frequency distribution fitting or distance class methods gave good qualitative agreement with observed biological systems. The mechanistic nature of SATSUMA and its deliberate design as a general-case model make it especially suitable for the investigation of component processes in a generalised plant disease epidemic, and valuable as an educational tool. Subject to validation against observational data, such models can be utilised as predictive tools by incorporating information (concerning crop species, pathogen, etc.) specifically relevant to the modelled system. In addition to its educational use, SATSUMA has been used as a research tool for examining the effect of spatial pattern of disease and disease incidence on the efficiency of sampling protocols, and in parameterising a general theoretical model for describing the spatio-temporal development of plant diseases.
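
    SATSUMA itself is not reproduced here; the hedged Python sketch below is merely in its spirit: a mechanistic, stochastic lattice simulation in which disease spreads from a single focus to neighbouring plants with a fixed per-step probability. All parameters are invented for illustration; the run shows disease incidence rising slowly, accelerating, then saturating, the roughly logistic progress curve the abstract reports for simulated epidemics.

        # Hedged sketch in the spirit of (not taken from) SATSUMA: a
        # stochastic, mechanistic simulation of disease spreading through
        # a crop lattice from a single initial focus.
        import random

        def simulate(rows=30, cols=30, p_infect=0.2, steps=25, seed=1):
            random.seed(seed)
            infected = {(rows // 2, cols // 2)}   # single initial focus
            incidence = []
            for _ in range(steps):
                new = set(infected)
                for (r, c) in infected:
                    # Each infected plant may infect its four neighbours.
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < rows and 0 <= nc < cols \
                                and random.random() < p_infect:
                            new.add((nr, nc))
                infected = new
                incidence.append(len(infected) / (rows * cols))
            return incidence

        # Disease progress: slow start, acceleration, saturation.
        for t, y in enumerate(simulate()):
            print(t, round(y, 3))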

    Programming models for many-core architectures: a co-design approach

    Common many-core processors contain tens of cores and distributed memory. Compared to a multicore system, which has only a few tightly coupled cores sharing a single bus and memory, such processors raise several complex problems. Notably, many cores require many parallel tasks to be fully utilized, and communication happens in a distributed and decentralized way. Therefore, programming such a processor requires the application to exhibit concurrency. In contrast to a single-core application, a concurrent application has to deal with memory state changes that have an observable (non-deterministic) intermediate state. The complexity introduced by these problems makes programming a many-core system with a single-core-based programming approach notoriously hard.

    The central concept of this thesis is that abstractions related to (many-core) programming are structured in a single platform model. A platform is a layered view of the hardware, a memory model, a concurrency model, a model of computation, and compile-time and run-time tooling. A programming model is then a specific view on this platform, as used by a programmer. In this view, some details can be hidden from the programmer's perspective and some cannot. For example, an operating system presents an infinite number of parallel virtual execution units to the application while hiding details of scheduling; on the other hand, a programmer usually has to balance the workload among threads by hand.

    This thesis presents modifications to different abstraction layers of a many-core architecture, in order to make the system as a whole more efficient and to reduce the programming complexity. These modifications influence other abstractions in the platform, and especially the programming model. Therefore, this thesis applies co-design to all models. Notably, co-design of the memory model, the concurrency model, and the model of computation is required for a scalable implementation of lambda-calculus. Moreover, only by combining the requirements of the many-core hardware on one side with those of the concurrency model on the other does a suitable memory model abstraction emerge. Hence, this thesis shows that to cope with current trends in many-core architectures from a programming perspective, it is essential and feasible to inspect and adapt all abstractions collectively.
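
    As a hedged illustration of the distributed-memory point (the thesis targets real many-core hardware, not Python processes), the sketch below has two tasks communicate purely by message passing over queues, so no half-updated shared state is ever observable by the other task. The pipeline shape and all names are assumptions for illustration.

        # Hedged sketch of the distributed-memory point: tasks exchange
        # messages instead of mutating shared state, so no intermediate
        # shared-memory state is ever observable by another task.
        from multiprocessing import Process, Queue

        def square_stage(inbox, outbox):
            # Consume the input stream until the None end-of-stream marker.
            for item in iter(inbox.get, None):
                outbox.put(item * item)
            outbox.put(None)

        if __name__ == "__main__":
            a, b = Queue(), Queue()
            worker = Process(target=square_stage, args=(a, b))
            worker.start()
            for x in range(5):
                a.put(x)
            a.put(None)                        # close the stream
            results = list(iter(b.get, None))  # [0, 1, 4, 9, 16]
            worker.join()
            print(results)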

    Performance modelling for system-level design

    xii + 208 pages; 24 cm

    An Estelle compiler

    The increasing development and use of computer networks has necessitated the definition of international standards. Central to the standardization efforts is the concept of a Formal Description Technique (FDT), which provides a definition medium for communication protocols and services. This document describes the design and implementation of one of the few existing compilers for one such FDT, the language "Estelle" ([ISO85], [ISO86], [ISO87]).
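
    Estelle specifies protocol entities as communicating extended finite state machines. As a hedged, toy illustration of that general shape only (the transition table below is invented, not taken from any standard or from this compiler), the Python sketch interprets a small table mapping (state, input event) pairs to a next state and an output event.

        # Hedged sketch: Estelle models protocol entities as extended
        # finite state machines; this toy interpreter runs one invented
        # transition table of the same general shape.

        TRANSITIONS = {
            # (state, input event) -> (next state, output event)
            ("CLOSED",  "connect_req"): ("WAITING", "send_connect"),
            ("WAITING", "connect_ack"): ("OPEN",    None),
            ("OPEN",    "data_req"):    ("OPEN",    "send_data"),
            ("OPEN",    "disconnect"):  ("CLOSED",  "send_disconnect"),
        }

        def run(events, state="CLOSED"):
            for event in events:
                state, output = TRANSITIONS[(state, event)]
                print(f"{event:>14} -> state={state:<8} output={output}")
            return state

        run(["connect_req", "connect_ack", "data_req", "disconnect"])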

    Energy-Efficient Technologies for High-Performance Manufacturing Industries

    Ph.D. (Doctor of Philosophy)

    A Distributed Security Architecture for Large Scale Systems

    This thesis describes the research leading from the conception, through development, to the practical implementation of a comprehensive security architecture for use within, and as a value-added enhancement to, the ISO Open Systems Interconnection (OSI) model. The Comprehensive Security System (CSS) is arranged basically as an Application Layer service, but can allow any of the ISO-recommended security facilities to be provided at any layer of the model. It is suitable as an 'add-on' service to existing arrangements or can be fully integrated into new applications. For large-scale, distributed processing operations, a network of security management centres (SMCs) is suggested, which can help to ensure that system misuse is minimised and that flexible operation is provided in an efficient manner. The background to the OSI standards is covered in detail, followed by an introduction to security in open systems. A survey of existing techniques in formal analysis and verification is then presented. The architecture of the CSS is described in terms of a conceptual model using agents and protocols, followed by an extension of the CSS concept to a large-scale network controlled by SMCs. A new approach to formal security analysis is described, based on two main methodologies. Firstly, every function within the system is built from layers of provably secure sequences of finite state machines, using a recursive function to monitor and constrain the system to the desired state at all times. Secondly, the correctness of the protocols generated by the sequences to exchange security information and control data between agents in a distributed environment is analysed in terms of a modified temporal Hoare logic. This is based on ideas concerning the validity of beliefs about the global state of a system as a result of actions performed by entities within the system, including the notion of timeliness. The two fundamental problems in number theory upon which the assumptions about the security of the finite state machine model rest are described, together with a comprehensive survey of the very latest progress in this area. Having assumed that the two problems will remain computationally intractable in the foreseeable future, the method is then applied to the formal analysis of some of the components of the Comprehensive Security System. A practical implementation of the CSS has been achieved as a demonstration system for a network of IBM Personal Computers connected via an Ethernet LAN, which fully meets the aims and objectives set out in Chapter 1. This implementation is described, and finally some comments are made on the possible future of research into security aspects of distributed systems. IBM (United Kingdom) Laboratories, Hursley Park, Winchester, U.K.
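
    As a hedged sketch of the monitoring idea only (not the CSS's actual finite state machine model), the Python fragment below uses a recursive guard that constrains a small state machine to an explicitly permitted set of transitions, raising an error on any action that would leave the desired states. The states, actions, and names are all invented for illustration.

        # Hedged sketch: a recursive monitor that constrains a state
        # machine to an explicitly permitted set of transitions, in the
        # spirit of (not reproducing) the thesis's monitoring function.

        ALLOWED = {
            ("idle",          "authenticate"): "authenticated",
            ("authenticated", "request_key"):  "keyed",
            ("keyed",         "logout"):       "idle",
        }

        class SecurityViolation(Exception):
            pass

        def monitor(state, actions):
            """Recursively apply actions, refusing disallowed transitions."""
            if not actions:
                return state
            head, rest = actions[0], actions[1:]
            if (state, head) not in ALLOWED:
                raise SecurityViolation(
                    f"{head!r} not permitted in state {state!r}")
            return monitor(ALLOWED[(state, head)], rest)

        print(monitor("idle", ["authenticate", "request_key", "logout"]))
        # monitor("idle", ["request_key"])  # would raise SecurityViolation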