
    Expressing functional reactive programming in C++

    Abstract. Most C++ programs are written in a straightforward imperative style. While callbacks are employed, either directly or through the observer pattern, the mental overhead of keeping program state consistent is high and increases with program size. This paper presents a translation of functional reactive programming into C++ terms. The paradigm originates in the Haskell language community and seeks to express concisely how programs should react to new input. Concretely, an implementation of a reactive property class is presented, where a property in this context is a class holding a value of a user-specified type. The property class provides a mechanism to bind to it an expression that takes an arbitrary number of inputs, some of which can be other property instances. When any of these input properties is updated, the expression is re-evaluated, so that a dataflow graph can be built from this type. The automatic re-evaluation reduces the boilerplate code needed to update variables, which can lead to fewer programming errors and more concise programs. The implementation demonstrates that the core principles of functional reactive programming can be expressed in modern C++, and that this can be done in an idiomatic manner that appears familiar to C++ developers. At the same time, the implementation's complexity highlights how much further C++ metaprogramming facilities must be developed to properly support something like a functional reactive programming library. A number of compile-time template metaprogramming utilities used in the implementation are also introduced.
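
    To make the idea concrete, here is a minimal sketch of such a reactive property type in C++17. It is not the paper's actual interface: the property and bind names, the std::function-based observer list, and the variadic bind helper are simplifications invented for illustration.

        #include <functional>
        #include <iostream>
        #include <vector>

        template <typename T>
        class property {
        public:
            property() = default;
            explicit property(T v) : value_(std::move(v)) {}

            const T& get() const { return value_; }

            void set(T v) {
                value_ = std::move(v);
                for (auto& f : observers_) f();   // notify dependent expressions
            }

            // Register a callback to run whenever this property changes.
            void on_change(std::function<void()> f) { observers_.push_back(std::move(f)); }

        private:
            T value_{};
            std::vector<std::function<void()>> observers_;
        };

        // Bind target = expr(deps...): evaluate once now, then re-evaluate whenever
        // any dependency changes, forming one edge of a dataflow graph.
        template <typename T, typename Expr, typename... Deps>
        void bind(property<T>& target, Expr expr, Deps&... deps) {
            auto update = [&target, expr, &deps...] { target.set(expr(deps.get()...)); };
            (deps.on_change(update), ...);
            update();
        }

        int main() {
            property<int> width(3), height(4), area;
            bind(area, [](int w, int h) { return w * h; }, width, height);
            std::cout << area.get() << '\n';  // 12
            width.set(5);                     // area is re-evaluated automatically
            std::cout << area.get() << '\n';  // 20
        }

    Binding by reference keeps the sketch short but, unlike the paper's implementation, makes no attempt at lifetime management or at detecting cycles in the dataflow graph.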

    Schedulability, Response Time Analysis and New Models of P-FRP Systems

    Functional Reactive Programming (FRP) is a declarative approach for modeling and building reactive systems. FRP has been shown to be an expressive formalism for building applications in computer graphics, computer vision, robotics, and other domains. Priority-based FRP (P-FRP) is a formalism that allows preemption of executing programs while guaranteeing real-time response. Since functional programs cannot maintain state and mutable data, changes made by programs that are preempted have to be rolled back. Hence in P-FRP a higher-priority task can preempt the execution of a lower-priority task, but the preempted lower-priority task has to restart after the higher-priority task completes. This execution paradigm is called Abort-and-Restart (AR). Current real-time research focuses on preemptive or non-preemptive models of execution, and several state-of-the-art methods have been developed to analyze the real-time guarantees of these models. Unfortunately, due to its transactional nature, in which preempted tasks are aborted and must restart, the execution semantics of P-FRP does not fit the standard definitions of preemptive or non-preemptive execution, and research on those standard models may not be applicable to the P-FRP AR model. Among the many research areas that P-FRP demands, we focus on task scheduling, which includes task and system modeling, priority assignment, schedulability analysis, response time analysis, improved P-FRP AR models, algorithms, and the corresponding software. In this work we review existing results on P-FRP task scheduling and then present our research contributions: (1) a tighter feasibility test interval with respect to task release offsets, together with a linked-list-based algorithm and implementation for scheduling simulation; (2) P-FRP with software transactional memory and lazy conflict detection (STM-LCD); (3) a non-work-conserving scheduling model called Deferred Start; (4) a multi-mode P-FRP task model; (5) SimSo-PFRP, the P-FRP extension of SimSo, a SimPy-based, highly extensible and user-friendly task generator and task scheduling simulator.
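
    The following toy C++ program sketches the Abort-and-Restart semantics on a discrete timeline: a higher-priority task preempts a partially executed lower-priority task, which loses its progress and must later re-run its full execution time. The task set and its parameters are invented for illustration and are not taken from the thesis or from SimSo-PFRP.

        #include <iostream>
        #include <vector>

        struct Task {
            int priority;       // lower number = higher priority
            int wcet;           // full execution time required (time units)
            int release;        // release time
            int progress = 0;
            bool done = false;
        };

        int main() {
            // Hypothetical task set: one high-priority and one low-priority task.
            std::vector<Task> tasks = {
                {0, 3, 2},   // high-priority task released at t = 2
                {1, 5, 0},   // low-priority task released at t = 0
            };

            for (int t = 0; t < 20; ++t) {
                // Run the highest-priority task that is released and unfinished.
                Task* running = nullptr;
                for (auto& task : tasks)
                    if (!task.done && task.release <= t &&
                        (!running || task.priority < running->priority))
                        running = &task;
                if (!running) continue;

                // AR semantics: any other partially executed task is aborted
                // and will later restart from scratch, not from where it stopped.
                for (auto& task : tasks)
                    if (&task != running && !task.done && task.progress > 0) {
                        std::cout << "t=" << t << ": abort task p" << task.priority << '\n';
                        task.progress = 0;
                    }

                if (++running->progress == running->wcet) {
                    running->done = true;
                    std::cout << "t=" << t + 1 << ": task p" << running->priority
                              << " completes\n";
                }
            }
        }

    In this run the low-priority task completes at time 10 instead of 8, because the two units of work it performed before preemption are discarded; this repeated work is what makes AR schedulability and response time analysis differ from the classical preemptive and non-preemptive cases.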

    A new approach to reversible computing with applications to speculative parallel simulation

    In this thesis, we propose an innovative approach to reversible computing that shifts the focus from the operations to the memory outcome of a generic program. This choice allows us to overcome some typical challenges of "plain" reversible computing. Our methodology is to instrument a generic application with the help of an instrumentation tool, namely Hijacker, which we have redesigned and developed for this purpose. Through compile-time instrumentation, we enhance the program's code to keep track of the memory trace it produces as it runs. Regardless of the complexity behind each computational step of the program, we can build inverse machine instructions simply by inspecting the instruction that is about to write a value to memory: from this information we craft an ad-hoc instruction that carries the old value and the address at which to restore it. This instruction becomes part of a more comprehensive structure, the reverse window. Through this structure we have sufficient information to cancel all the updates made by the program during its execution. In this thesis we discuss the structure of the reverse window as the building block of the whole reversing framework we designed and realized. Although we set our solution in the specific context of parallel discrete event simulation (PDES) under the Time Warp synchronization protocol, the framework paves the way for further general-purpose development and use. We also present two additional contributions stemming from our reversibility approach, both of which still embrace the traditional state-saving-based rollback strategy. The first aims to harness the advantages of both approaches: we implement the rollback operation by combining state saving with our reversible support through a mathematical model, which enables the system to autonomously choose the best rollback strategy according to the changing runtime dynamics of programs. The second explores an orthogonal direction, still related to reversible computing: the problem of reversing shared libraries. By their nature, shared objects are visible to the whole system, and so is every modification of their code; consequently, they cannot be instrumented without affecting other, unaware applications. We propose a different method to handle the instrumentation of shared objects. All our proposals have been assessed using the latest generation of the open-source ROOT-Sim PDES platform, into which we integrated our solutions. ROOT-Sim is a C-based package implementing a general-purpose simulation environment based on the Time Warp synchronization protocol.
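
    The following C++ sketch captures the reverse-window concept at a high level, under simplifying assumptions: each instrumented write first records the destination address and its old contents, and a rollback replays those records in reverse order. In the actual framework these undo steps are generated as machine instructions through compile-time instrumentation (Hijacker), not as a user-level C++ container; the ReverseWindow and track names here are invented for the example.

        #include <cstdint>
        #include <cstring>
        #include <iostream>
        #include <vector>

        struct UndoRecord {
            void*    addr;       // where the write happened
            uint64_t old_value;  // previous contents (up to 8 bytes in this sketch)
            size_t   size;
        };

        class ReverseWindow {
        public:
            // Called by the instrumentation hook just before a memory write.
            void track(void* addr, size_t size) {
                UndoRecord r{addr, 0, size};
                std::memcpy(&r.old_value, addr, size);
                records_.push_back(r);
            }
            // Undo every tracked write, newest first.
            void rollback() {
                for (auto it = records_.rbegin(); it != records_.rend(); ++it)
                    std::memcpy(it->addr, &it->old_value, it->size);
                records_.clear();
            }
        private:
            std::vector<UndoRecord> records_;
        };

        int main() {
            int state = 10;
            ReverseWindow rw;

            rw.track(&state, sizeof state);   // hook injected before the write
            state = 42;                       // the forward event updates state

            std::cout << state << '\n';       // 42
            rw.rollback();                    // e.g. Time Warp detects a causality violation
            std::cout << state << '\n';       // 10
        }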

    Reactive imperative programming with dataflow constraints

    Dataflow languages provide natural support for specifying constraints between objects in dynamic applications, where programs need to react efficiently to changes in their environment. In this article we show that one-way dataflow constraints, largely explored in the context of interactive applications, can be seamlessly integrated into any imperative language and can be used as a general paradigm for writing performance-critical reactive applications that require efficient incremental computations. In our framework, programmers can define ordinary statements of the imperative host language that enforce constraints between objects stored in special memory locations designated as "reactive". Reactive objects can be of any legal type in the host language, including primitive data types, pointers, arrays, and structures. Statements defining constraints are automatically re-executed every time their input memory locations change, letting a program behave like a spreadsheet where the values of some variables depend upon the values of other variables. The constraint-solving mechanism is handled transparently by altering the semantics of the elementary operations of the host language for reading and modifying objects. We provide a formal semantics and describe a concrete embodiment of our technique in C/C++, showing how to implement it efficiently on conventional platforms using off-the-shelf compilers. We discuss common coding idioms and relevant applications to reactive scenarios, including incremental computation, the observer design pattern, data structure repair, and software visualization. The performance of our implementation is compared to problem-specific change propagation algorithms, as well as to language-centric approaches such as self-adjusting computation and subject/observer communication mechanisms, showing that the proposed approach is efficient in practice.
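
    A minimal C++ sketch of a one-way dataflow constraint over "reactive" cells is shown below, under simplifying assumptions: reads performed while a constraint executes register that constraint as a dependent, and writes re-execute all dependents. The article's embodiment does this transparently by altering the semantics of ordinary reads and writes; here the mechanism is made explicit through get/set calls, and the reactive and run names are invented for the example.

        #include <functional>
        #include <iostream>
        #include <set>

        using Constraint = std::function<void()>;

        void run(const Constraint& c);                 // forward declaration
        static const Constraint* current = nullptr;    // constraint being (re-)executed

        template <typename T>
        class reactive {
        public:
            explicit reactive(T v = T{}) : value_(v) {}

            T get() const {
                if (current) dependents_.insert(current);        // record the reader
                return value_;
            }
            void set(T v) {
                value_ = v;
                for (const Constraint* c : dependents_) run(*c); // propagate the change
            }
        private:
            T value_;
            mutable std::set<const Constraint*> dependents_;
        };

        void run(const Constraint& c) {
            const Constraint* prev = current;
            current = &c;      // reads inside c() register c as a dependent
            c();
            current = prev;
        }

        int main() {
            reactive<int> a(1), b(2), sum;
            Constraint keep_sum = [&] { sum.set(a.get() + b.get()); };
            run(keep_sum);                   // establish the constraint: sum = a + b
            std::cout << sum.get() << '\n';  // 3
            a.set(10);                       // writing a re-executes the constraint
            std::cout << sum.get() << '\n';  // 12
        }

    Writing to a re-executes keep_sum, which recomputes sum; the explicit get/set calls stand in for the altered read and write semantics described in the article.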

    History, evolution and future prospects of the use of simulation techniques in port management: applications in the analysis of port operations, strategy and planning

    Programa Oficial de Doutoramento en Análise Económica e Estratexia Empresarial. 5033V0
    Simulation techniques, as we know them today, began in the middle of the 20th century: first with the appearance of the first computer and the development of the Monte Carlo method, and later with the development of the first special-purpose simulator, known as GPSS and developed by Geoffrey Gordon at IBM, and the publication of the first full text devoted to the subject, The Art of Simulation (K. D. Tocher, 1963). These techniques have evolved in an extraordinary way and today are fully established in many fields of activity. Port facilities have not escaped this trend, especially those dedicated to container traffic. Indeed, the intrinsic characteristics of this economic sector make it an ideal candidate for the implementation of simulation models with very diverse purposes and scopes. However, to the best of our knowledge, there is no scientific work that compiles and analyzes in detail both the history and the evolution of simulation in port environments, helping to classify these models and to determine how they can support the economic analysis of these facilities and the formulation of appropriate business strategies. That is the ultimate goal of this doctoral thesis.