34 research outputs found

    Time in discrete agent-based models of socio-economic systems

    We formulate the problem of computing time in discrete dynamical agent-based models in the context of socio-economic modeling. For this formulation, we outline a simple solution that requires only minimal extensions of the original untimed model. The proposed solution relies on the notion of agent-specific schedules of action and on two modeling assumptions, which are fulfilled by most models of practical interest. For models for which stronger assumptions can be made, we discuss alternative formulations. Keywords: agent-based models, time.
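
    As a rough sketch of the ingredients named in the abstract (an otherwise untimed per-agent update step plus agent-specific schedules of action), the Haskell fragment below advances a toy model event by event; the types, the fixed per-agent interval and the counter example are illustrative assumptions, not the paper's formulation.

        import Data.List (minimumBy)
        import Data.Ord (comparing)
        import qualified Data.Map.Strict as M

        type AgentId = Int
        type Time    = Double

        -- Each agent carries its own schedule of action: when it next acts and
        -- how long it waits between actions (a fixed interval, for illustration).
        data Schedule = Schedule { nextAction :: Time, interval :: Time }

        -- The untimed model is assumed to expose only a per-agent update step.
        type Step s = AgentId -> s -> s

        -- One timed event: the agent with the earliest pending action fires, the
        -- untimed step is applied, and that agent's schedule is pushed forward.
        advance :: Step s -> (M.Map AgentId Schedule, Time, s)
                -> (M.Map AgentId Schedule, Time, s)
        advance step (scheds, _now, st) =
          let (aid, Schedule t dt) = minimumBy (comparing (nextAction . snd)) (M.toList scheds)
              scheds'              = M.insert aid (Schedule (t + dt) dt) scheds
          in  (scheds', t, step aid st)

        -- Example: three agents acting at different rates on a shared counter.
        main :: IO ()
        main = do
          let scheds = M.fromList [ (i, Schedule 0 dt) | (i, dt) <- zip [1 ..] [1.0, 2.5, 4.0] ]
              trace  = iterate (advance (\_ n -> n + 1 :: Int)) (scheds, 0, 0)
          mapM_ (\(_, t, n) -> putStrLn (show t ++ ": " ++ show n)) (take 6 trace)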

    Formal Verification of a Modern Boot Loader

    We introduce the Syracuse Assured Boot Loader Executive (SABLE), a trustworthy secure loader. A trusted boot loader performs a cryptographic measurement (hash) of program code and executes it unconditionally, allowing later-stage software to verify the integrity of the system through local or remote attestation. A secure loader differs from a trusted loader in that it executes subsequent code only if measurements of that code match known-good values. We have applied a rigorous formal verification technique recently demonstrated in practice by NICTA in their verification of the seL4 microkernel. We summarize our design philosophy from a high level and present our formal verification strategy.
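
    A toy Haskell sketch of the distinction the abstract draws: both loaders measure the next-stage code, but only the secure loader refuses to execute it when the measurement does not match a known-good value. The hash below is a stand-in (a real loader such as SABLE would use a TPM-backed cryptographic hash of the binary image), and all names are illustrative, not SABLE's interface.

        import Data.Bits (xor)
        import Data.Char (ord)

        type Code        = String   -- stand-in for a next-stage binary image
        type Measurement = Int      -- stand-in for a cryptographic hash value

        -- Toy measurement function; a real loader would compute a cryptographic
        -- hash and extend it into a TPM platform configuration register.
        measure :: Code -> Measurement
        measure = foldl (\h c -> h * 33 `xor` ord c) 5381

        -- Trusted loader: measure, record the measurement, execute unconditionally;
        -- later attestation can inspect the recorded value.
        trustedBoot :: Code -> IO Measurement
        trustedBoot code = do
          let m = measure code
          putStrLn ("executing next stage (measurement " ++ show m ++ " recorded)")
          return m

        -- Secure loader: execute the next stage only if its measurement matches
        -- the known-good value; otherwise halt.
        secureBoot :: Measurement -> Code -> IO ()
        secureBoot knownGood code
          | measure code == knownGood = putStrLn "measurement ok, executing next stage"
          | otherwise                 = putStrLn "measurement mismatch, refusing to boot"

        main :: IO ()
        main = do
          let good = measure "kernel-v1"
          _ <- trustedBoot "kernel-v1"
          secureBoot good "kernel-v1"   -- boots
          secureBoot good "kernel-evil" -- refuses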

    PolyAPM: Comparative Parallel Programming with Abstract Parallel Machines

    A parallelising compilation consists of many translation and optimisation stages. The programmer may steer the compiler through these stages by supplying directives with the source code or by setting compiler switches. However, this approach is not well suited to evaluating the effects of individual stages, their selection, and their best order. To solve this problem, we propose the following method. The compilation is cast as a sequence of program transformations. Each intermediate program runs on an Abstract Parallel Machine (APM), while the program generated by the final transformation runs on the target architecture. Our intermediate programs are all in the same language, Haskell. Thus, each program is executable and still abstract enough to be legible, which enables the evaluation of the transformation that generated it. This evaluation is supported by a cost model, which predicts the performance of the abstract program on a real machine. Our project, PolyAPM, provides a directed acyclic graph -- usually a tree -- of APMs whose traversals specify different combinations and orders of transformations. From one source program, several target programs can be constructed, and their run-time characteristics can be evaluated and compared. The goal of PolyAPM is not to support the one-off construction of parallel application programs. For the method's overhead to pay off, the project aims rather at supporting the construction and comparison of many similar variations of a parallel program and a comparative evaluation of parallelisation techniques. With the automation of transformations, PolyAPM can also be used to construct semi-automatic compilation systems.
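
    A small Haskell sketch of the central idea: a compilation viewed as a chain of transformations over executable intermediate programs, each stage judged with a cost model so that stages and their order can be compared. The Program type, the two toy transformations and the constant cost model are placeholders, not PolyAPM's actual APM definitions.

        import Control.Monad (foldM)

        -- A placeholder intermediate program: here just a pipeline over a list of
        -- numbers, standing in for an APM-level parallel program.
        type Program = [Int] -> [Int]

        -- A named transformation between APM levels, paired with a cost model that
        -- predicts the run time of the transformed program on a real machine.
        data Transform = Transform
          { name      :: String
          , transform :: Program -> Program
          , cost      :: Program -> Double   -- placeholder prediction
          }

        -- Apply a chain of transformations, reporting the predicted cost at every
        -- stage so that stages, their selection and their order can be compared.
        derive :: Program -> [Transform] -> IO Program
        derive = foldM step
          where
            step p t = do
              let p' = transform t p
              putStrLn (name t ++ ": predicted cost " ++ show (cost t p'))
              return p'

        main :: IO ()
        main = do
          let fuse  = Transform "fuse maps"  (\p -> map (+ 1) . p) (const 2.0)
              chunk = Transform "chunk work" id                    (const 1.5)
          prog <- derive (map (* 2)) [fuse, chunk]
          print (prog [1, 2, 3])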

    Time in discrete agent-based models of socio-economic systems

    Documents de travail du Centre d'Economie de la Sorbonne 2010.76 - ISSN 1955-611X. Working papers URL: http://centredeconomiesorbonne.univ-paris1.fr/bandeau-haut/documents-de-travail/
    We formulate the problem of computing time in discrete dynamical agent-based models in the context of socio-economic modeling. For this formulation, we outline a simple solution that requires only minimal extensions of the original untimed model. The proposed solution relies on the notion of agent-specific schedules of action and on two modeling assumptions, which are fulfilled by most models of practical interest. For models for which stronger assumptions can be made, we discuss alternative formulations.

    Architecture aware parallel programming in Glasgow parallel Haskell (GPH)

    General-purpose computing architectures are evolving quickly to become manycore and hierarchical: i.e. a core can communicate more quickly locally than globally. To be effective on such architectures, programming models must be aware of the communications hierarchy. This thesis investigates a programming model that aims to share the responsibility for task placement, load balance, thread creation, and synchronisation between the application developer and the runtime system. The main contribution of this thesis is the development of four new architecture-aware constructs for Glasgow parallel Haskell that exploit information about task size and aim to reduce communication for small tasks, preserve data locality, or distribute large units of work. We define a semantics for the constructs that specifies the sets of PEs that each construct identifies, and we check four properties of the semantics using QuickCheck. We report a preliminary investigation of architecture-aware programming models that abstract over the new constructs; in particular, we propose architecture-aware evaluation strategies and skeletons. We investigate three common paradigms, namely data parallelism, divide-and-conquer and nested parallelism, on hierarchical architectures with up to 224 cores. The results show that the architecture-aware programming model consistently delivers better speedup and scalability than existing constructs, together with a dramatic reduction in execution-time variability. We present a comparison of functional multicore technologies that reports some of the first multicore results for the Feedback Directed Implicit Parallelism (FDIP) and semi-explicit parallelism (GpH and Eden) languages. The comparison reflects the growing maturity of the field by systematically evaluating four parallel Haskell implementations on a common multicore architecture, contrasting the programming effort each language requires with the parallel performance delivered. We investigate the minimum thread granularity required to achieve satisfactory performance for three parallel functional language implementations on a multicore platform. The results show that GHC-GUM requires a larger thread granularity than Eden and GHC-SMP, and that the required granularity rises as the number of cores rises.
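
    The four new architecture-aware constructs are specific to the thesis and are not reproduced here; for orientation, the sketch below shows the baseline GpH style they extend, using evaluation strategies from the standard parallel package, with parListChunk controlling task granularity (the parameter the abstract identifies as critical).

        import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

        -- A deliberately chunky unit of work.
        fib :: Int -> Integer
        fib n | n < 2     = fromIntegral n
              | otherwise = fib (n - 1) + fib (n - 2)

        -- Baseline GpH data parallelism: evaluate the list in chunks in parallel,
        -- fully forcing each chunk; the chunk size is the granularity knob that
        -- trades thread-creation and communication overhead against load balance.
        parFibs :: Int -> [Integer]
        parFibs chunkSize = map fib [20 .. 32] `using` parListChunk chunkSize rdeepseq

        main :: IO ()
        main = print (sum (parFibs 4))   -- build with -threaded, run with +RTS -N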

    Effect of economic integration and trade facilitation on Intra-manufacturing export among ECOWAS member states

    Institutional trade tariffs and non-tariff barriers are major constraints on exporters and importers. They affect the trade flows and intra-trade performance of many developing countries, including the ECOWAS member states. From a classical economic perspective, the problems of international macroeconomics can be examined from the standpoint of trade costs; this perspective further suggests that poor trade facilitation, such as high tariffs and non-tariff barriers, can alone account for a 10% loss in national income. Against this background, the study assesses the interrelation of economic integration and trade facilitation in ECOWAS and how the member states have performed in intra-regional manufacturing export. We use the generalised method of moments (GMM) with instrumental-variable (IV) estimation on a dynamic model with panel data from the 15 member states to analyse how the relationship between regional economic integration and trade facilitation promotes intra-regional manufacturing exports. The findings reveal that trade facilitation in ECOWAS member states is below the world average, owing to burdensome bureaucratic processes and hence high costs of exporting and importing. The econometric analyses further reveal that economic integration significantly promotes trade facilitation in member states and can influence intra-manufacturing exports in ECOWAS, while manufacturing production has a direct and significant role in manufacturing exports. Policy recommendations that would help facilitate trade and improve manufacturing production and intra-regional exports in the ECOWAS sub-region are made.
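
    As an illustration only (the paper's exact specification and variable set are not given in the abstract), a dynamic panel model of the kind estimated by GMM with instrumental variables typically takes the form below, where EXP denotes intra-regional manufacturing exports of country i in year t, TF a trade-facilitation measure, INT an integration measure, X other controls, and \mu_i a country effect; the lagged dependent variable is instrumented with its deeper lags via Arellano-Bond-style moment conditions.

        EXP_{it} = \alpha \, EXP_{i,t-1} + \beta_1 \, TF_{it} + \beta_2 \, INT_{it} + \gamma' X_{it} + \mu_i + \varepsilon_{it}
        E\bigl[\, EXP_{i,t-s} \, \Delta\varepsilon_{it} \,\bigr] = 0 \quad \text{for } s \ge 2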

    Community Detection in Multimodal Networks

    Community detection on networks is a basic, yet powerful and ever-expanding, set of methodologies that is useful in a variety of settings. This dissertation discusses a range of community detection methods for networks with multiple and non-standard modalities. A major focus is the study of networks spanning several layers, which represent relationships such as interactions over time or different facets of high-dimensional data. These networks may be represented in several different ways, namely the few-layer (i.e. longitudinal) case and the many-layer (time-series) case. In the first case, we develop a novel application of variational expectation maximization as an example of the top-down mode of simultaneous community detection and parameter estimation. In the second case, we use a bottom-up strategy of iterative nodal discovery for these longer time series, aided by assumptions about their structural properties. In addition, we explore significantly self-looping networks, whose features are inseparable from the inherent construction of spatial networks whose weights reflect distance information; these networks are used to model and demarcate geographical regions. We also describe some theoretical properties and applications of a method for finding communities in bipartite networks that are weighted by correlations between samples. We discuss strategies for community detection in each of these types of networks, as well as their implications for the broader literature. In addition to the methodologies, we highlight the types of data in which these "non-standard" network structures arise and how they fit the proposed methodologies, particularly spatial networks and multilayer networks. We apply the top-down and bottom-up community detection algorithms to data in the domains of demography, human mobility, genomics, climate science, psychiatry, politics, and neuroimaging. The expansiveness and diversity of these data speak to the flexibility and ubiquity of our proposed methods for all forms of relational data.
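
    The dissertation's own methods (top-down variational EM and bottom-up iterative nodal discovery) are not reconstructed here; as a simple point of reference for readers new to the area, the Haskell sketch below runs plain synchronous label propagation, one of the most basic single-layer community detection baselines, on a toy graph of two triangles joined by a bridge.

        import Data.List (group, maximumBy, sort)
        import Data.Ord (comparing)
        import qualified Data.Map.Strict as M

        type Node  = Int
        type Graph = M.Map Node [Node]   -- undirected adjacency lists

        -- Each node adopts the most common label among its neighbours.
        relabel :: Graph -> M.Map Node Node -> M.Map Node Node
        relabel g labels = M.mapWithKey vote labels
          where
            vote n own =
              case M.findWithDefault [] n g of
                []  -> own
                nbs -> mostCommon (map (labels M.!) nbs)
            mostCommon = head . maximumBy (comparing length) . group . sort

        -- Iterate label propagation a fixed number of rounds, starting from the
        -- trivial labelling in which every node is its own community.
        communities :: Int -> Graph -> M.Map Node Node
        communities rounds g =
          iterate (relabel g) (M.fromList [ (n, n) | n <- M.keys g ]) !! rounds

        main :: IO ()
        main = do
          -- Two triangles, 1-2-3 and 4-5-6, joined by the bridge edge 3-4.
          let g = M.fromList
                [ (1, [2, 3]), (2, [1, 3]), (3, [1, 2, 4])
                , (4, [3, 5, 6]), (5, [4, 6]), (6, [4, 5]) ]
          print (communities 5 g)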

    Incremental Program Transformation Using Abstract Parallel Machines

    Parallel processing is a key area of high-performance computing, providing the processing power to meet the needs of computation-intensive problems. Much research effort has therefore been invested in this area, and it is growing. However, there are still several challenges that hinder its widespread use. In particular, parallel programs can be difficult to write because of the increase in the number of details to keep track of and design decisions to be made. Portability is also an important issue because the efficiency of a program depends heavily on the target machine. Another challenge arises when the issue of correctness is considered, as the increased complexity and nondeterminism of parallel systems makes reasoning about them hard and renders traditional methods of testing unreliable. This thesis presents a methodology for developing parallel programs that addresses these issues. In it, executable parallel programs are derived incrementally from high-level specifications. A specification is given initially in mathematical notation and changed into an abstract functional specification. This is then transformed through a series of stages, during which additional information is given about the program, the target architecture and the parallelism. Finally it is transformed into the target language to produce an executable parallel program. This thesis uses C+MPI as an example target language, but many other targets are possible. This methodology addresses several of the challenges of parallel programming. In particular, its incremental framework allows decisions about the program and its parallelism to be made one at a time, instead of all at once, easing the burden on the programmer and simplifying the decisions. Reasoning about the program is also made possible through the use of a pure functional language, such as Haskell, for intermediate versions of the program, as the program can then be transformed using equational reasoning, a correctness-preserving technique. The methodology is based on previous work on Abstract Parallel Machines and program derivation, which this thesis develops. It presents the basic infrastructure needed in the methodology, and therefore investigates how parallel systems can be modelled and manipulated in Haskell, and how the resultant programs can be transformed. It augments the basic methodology with the ability to introduce and reason about some key parallel programming features, including data distributions and program optimisations. The work is supported and demonstrated through two case studies.
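
    A miniature example of the derivation style described: a specification written directly in Haskell is transformed step by step into a form that exposes parallelism, with each version required to agree with the previous one (the property that equational reasoning preserves). The dot-product example and its chunked decomposition are my own illustration, not one of the thesis's case studies, and the final mapping of chunks onto C+MPI processes is not shown.

        -- Stage 0: direct transcription of the mathematical specification.
        dotSpec :: [Double] -> [Double] -> Double
        dotSpec xs ys = sum [ x * y | (x, y) <- zip xs ys ]

        -- Stage 1: fuse the comprehension into a single fold
        -- (justified by the standard sum/map/zip fusion laws).
        dotFused :: [Double] -> [Double] -> Double
        dotFused xs ys = foldl (\acc (x, y) -> acc + x * y) 0 (zip xs ys)

        -- Stage 2: expose parallelism by splitting the work into chunks whose
        -- partial results are combined associatively; each chunk could later be
        -- mapped onto one process of a C+MPI program.
        dotChunked :: Int -> [Double] -> [Double] -> Double
        dotChunked k xs ys =
          sum [ dotFused cxs cys | (cxs, cys) <- map unzip (chunks k (zip xs ys)) ]
          where
            chunks _ [] = []
            chunks n ps = let (h, t) = splitAt n ps in h : chunks n t

        -- All three stages must agree; this equality is the correctness property
        -- each transformation step is required to preserve.
        main :: IO ()
        main = do
          let xs = [1 .. 8] :: [Double]
              ys = map (* 0.5) xs
          print (dotSpec xs ys, dotFused xs ys, dotChunked 3 xs ys)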

    Measurable Cones and Stable, Measurable Functions

    We define a notion of stable and measurable map between cones endowed with measurability tests and show that it forms a cpo-enriched cartesian closed category. This category gives a denotational model of an extension of PCF supporting the main primitives of probabilistic functional programming, such as continuous and discrete probability distributions, sampling, conditioning and full recursion. We prove the soundness and adequacy of this model with respect to a call-by-name operational semantics and give some examples of its denotations.
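
    The model itself is measure-theoretic, but the programming primitives it interprets (discrete distributions, sampling, conditioning) can be illustrated with the classic finite weighted-outcome monad sketched below in Haskell; this toy covers neither continuous distributions nor full recursion, which are precisely what the cones construction handles.

        import Data.List (foldl')
        import qualified Data.Map.Strict as M

        -- A finite sub-probability distribution: outcomes with weights.
        newtype Dist a = Dist { runDist :: [(a, Double)] }

        instance Functor Dist where
          fmap f (Dist xs) = Dist [ (f x, w) | (x, w) <- xs ]

        instance Applicative Dist where
          pure x = Dist [(x, 1)]
          Dist fs <*> Dist xs = Dist [ (f x, wf * wx) | (f, wf) <- fs, (x, wx) <- xs ]

        instance Monad Dist where
          Dist xs >>= k = Dist [ (y, wx * wy) | (x, wx) <- xs, (y, wy) <- runDist (k x) ]

        -- Primitives: a fair coin, and conditioning by discarding mass.
        coin :: Dist Bool
        coin = Dist [(True, 0.5), (False, 0.5)]

        observe :: Bool -> Dist ()
        observe True  = Dist [((), 1)]
        observe False = Dist []

        -- Normalised view of a finite distribution.
        normalise :: Ord a => Dist a -> [(a, Double)]
        normalise (Dist xs) =
          let total = sum (map snd xs)
              agg   = foldl' (\m (x, w) -> M.insertWith (+) x w m) M.empty xs
          in  [ (x, w / total) | (x, w) <- M.toList agg ]

        -- Two coins, conditioned on at least one being heads.
        example :: Dist Int
        example = do
          a <- coin
          b <- coin
          observe (a || b)
          return (fromEnum a + fromEnum b)

        main :: IO ()
        main = print (normalise example)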

    A self-mobile skeleton in the presence of external loads

    Multicore clusters provide cost-effective platforms for running CPU-intensive and data-intensive parallel applications. To utilise these platforms effectively, their resources must be shared amongst applications rather than dedicated to a single one. When such computational platforms are shared, user applications must compete at runtime for the same resources, so demand is irregular and the load is changeable and unpredictable. This thesis explores a mechanism to exploit shared multicore clusters while taking the external load into account. The mechanism seeks to reduce runtime by finding the best computing locations to serve the running computations. We propose a generic algorithmic data-parallel skeleton which is aware of its computations and of the load state of the computing environment. The skeleton is structured using the Master/Worker pattern, where the master and workers are distributed over the nodes of the cluster. It divides the problem into computations that are all initiated by the master and coordinated by the distributed workers. Moreover, the skeleton has built-in mobility to implicitly move the parallel computations between two workers; this is data mobility, controlled by the application, i.e. the skeleton itself. The skeleton is not problem-specific and is therefore able to execute different kinds of problems. Our experiments suggest that it can efficiently compensate for unpredictable load variations. We also propose a performance cost model that estimates the continuation time of the running computations both locally and remotely. The model also takes the network delay, the data size and the load state as inputs to estimate the transfer time of a potential movement. Our experiments demonstrate that the model makes accurate decisions, based on these estimates, under different load patterns to reduce the total execution time. The model is problem-independent because it considers the progress of all current computations, and it is based on measurements, so it does not depend on the programming language. Furthermore, it takes into account the load state of the nodes on which the computations run; this state includes the characteristics of the nodes, and hence the model is architecture-independent. Because scheduling has a direct impact on system performance, we support the skeleton with a cost-informed scheduler that uses a hybrid scheduling policy to improve the dynamicity and adaptivity of the skeleton. This scheduler has agents distributed over the participating workers to keep the load information up to date, trigger the estimations, and facilitate the mobility operations. At runtime, the skeleton co-schedules its computations over the computational resources without interfering with the native operating system scheduler. We demonstrate that, using a hybrid approach, the system makes mobility decisions which lead to improved performance and scalability over a large number of computational resources. Our experiments suggest that the adaptivity of our skeleton in a shared environment improves performance and reduces resource contention on nodes that are heavily loaded, thereby allowing other applications to acquire more resources. Finally, our experiments show that the load scheduler incurs a low overhead, not exceeding 0.6% of the total execution time.
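
    A back-of-the-envelope Haskell version of the decision the cost model makes: move a computation only if the estimated remote continuation time, including the transfer, beats continuing locally. The fields and the linear transfer-time formula are illustrative assumptions, not the thesis's calibrated model.

        -- Inputs the scheduler agents are assumed to keep up to date.
        data Estimate = Estimate
          { remainingWork :: Double   -- work units left in the computation
          , localSpeed    :: Double   -- work units per second on the current node
          , remoteSpeed   :: Double   -- work units per second on the candidate node
          , dataSizeMB    :: Double   -- state that would have to move
          , bandwidthMBs  :: Double   -- measured bandwidth to the candidate node
          , latencyS      :: Double   -- measured network delay
          }

        localTime, remoteTime :: Estimate -> Double
        localTime  e = remainingWork e / localSpeed e
        remoteTime e = remainingWork e / remoteSpeed e
                     + dataSizeMB e / bandwidthMBs e
                     + latencyS e

        -- Move only when the remote continuation, including the transfer,
        -- is strictly cheaper than staying put.
        shouldMove :: Estimate -> Bool
        shouldMove e = remoteTime e < localTime e

        main :: IO ()
        main = do
          let e = Estimate { remainingWork = 600, localSpeed = 20, remoteSpeed = 80
                           , dataSizeMB = 64, bandwidthMBs = 100, latencyS = 0.2 }
          putStrLn ("local "  ++ show (localTime e)  ++ " s, remote "
                              ++ show (remoteTime e) ++ " s, move: "
                              ++ show (shouldMove e))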