87 research outputs found

    Symbolic Supervisory Control of Resource Allocation Systems

Supervisory control theory (SCT) is a formal model-based methodology for verification and synthesis of supervisors for discrete event systems (DES). The main goal is to guarantee that the closed-loop system fulfills given specifications. SCT has great promise to assist engineers with the generation of reliable control functions. This is, for instance, beneficial to manufacturing systems where both products and production equipment might change frequently.

    The industrial acceptance of SCT, however, has been limited for at least two reasons: (i) the analysis of DES involves an intrinsic difficulty known as the state-space explosion problem, which makes the explicit enumeration of enormous state-spaces for industrial systems intractable; (ii) the synthesized supervisor, represented as a deterministic finite automaton (FA) or an extended finite automaton (EFA), is not straightforward to implement in an industrial controller.

    In this thesis, to address the aforementioned issues, we study the modeling, synthesis and supervisor representation of DES using binary decision diagrams (BDDs), a compact data structure for representing DES models symbolically. We propose different kinds of BDD-based algorithms for exploring the symbolically represented state-spaces in an effort to improve the abilities of existing supervisor synthesis approaches to handle large-scale DES and represent the obtained supervisors appropriately.

    Following this spirit, we bring the efficiencies of BDDs into a particular DES application domain -- deadlock avoidance for resource allocation systems (RAS) -- a problem that arises in many technological systems including flexible manufacturing systems and multi-threaded software. We propose a framework for the effective and computationally efficient development of the maximally permissive deadlock avoidance policy (DAP) for various RAS classes. Besides the employment of symbolic computation, special structural properties that are possessed by RAS are utilized by the symbolic algorithms to gain additional efficiencies in the computation of the sought DAP. Furthermore, to bridge the gap between the BDD-based representation of the target DAP and its actual industrial realization, we extend this work by introducing a procedure that generates a set of "guard" predicates to represent the resulting DAP.

    The work presented in this thesis has been implemented in the SCT tool Supremica. Computational benchmarks have demonstrated the superiority of the proposed algorithms with respect to the previously published results. Hence, the work holds a strong potential for providing robust, practical and efficient solutions to a broad range of supervisory control and deadlock avoidance problems that are experienced in the considered DES application domain.
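    The core of the synthesis described above is an iterative fixed-point computation over sets of states. The following sketch, written for this summary, illustrates that structure with explicit std::set containers standing in for the symbolic BDD encoding used in the thesis; the tiny transition relation, the marked states, and the state numbering are hypothetical placeholders rather than the thesis's actual models.

```cpp
// Sketch of the fixed-point structure behind symbolic supervisor synthesis.
// Explicit std::set<int> containers stand in for the BDD-encoded state sets;
// states, transitions and marked states below are hypothetical examples.
#include <iostream>
#include <map>
#include <set>

using State = int;
using Transitions = std::multimap<State, State>;  // forward transition relation

// One image step: all successors of the states in `from`.
std::set<State> image(const Transitions& t, const std::set<State>& from) {
    std::set<State> out;
    for (State s : from)
        for (auto [it, end] = t.equal_range(s); it != end; ++it)
            out.insert(it->second);
    return out;
}

// Least fixed point: states reachable from `init` (a BDD-based tool performs
// the same iteration, but on Boolean functions instead of explicit sets).
std::set<State> reachable(const Transitions& t, const std::set<State>& init) {
    std::set<State> r = init;
    for (;;) {
        std::set<State> next = image(t, r);
        next.insert(r.begin(), r.end());
        if (next == r) return r;          // fixed point reached
        r = std::move(next);
    }
}

int main() {
    // Tiny hypothetical plant: 0 -> 1 -> 2 -> 0, and 1 -> 3 where 3 deadlocks.
    Transitions t{{0, 1}, {1, 2}, {1, 3}, {2, 0}};
    std::set<State> marked{0};            // marked (accepting) states

    Transitions rev;                      // reversed relation for coreachability
    for (auto [s, d] : t) rev.insert({d, s});

    std::set<State> reach = reachable(t, {0});
    std::set<State> coreach = reachable(rev, marked);

    // Nonblocking states: reachable and still able to reach a marked state.
    for (State s : reach)
        if (coreach.count(s)) std::cout << "keep state " << s << '\n';
        else                  std::cout << "prune state " << s << '\n';
}
```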

    Contributions to the deadlock problem in multithreaded software applications observed as Resource Allocation Systems

From the viewpoint of competition for shared, serially reusable resources, a concurrent system composed of sequential processes is said to be deadlocked if it contains a set of processes that are indefinitely waiting for the release of resources held by members of that same set of processes. In reasonably complex or distributed systems, establishing a deadlock-free resource allocation policy can be a very difficult problem to solve efficiently. In this respect, formal models, and Petri nets in particular, have become established as fruitful tools for abstracting the resource allocation problem in such systems, so that it can be tackled analytically and efficient methods can be provided for the correct construction or correction of these systems. In particular, the structural theory of Petri nets stands out as a powerful ally for dealing with the state explosion problem inherent to these systems. In this fertile context, a series of works has flourished advocating a design methodology oriented towards the structural study, and the corresponding physical correction, of the resource allocation problem in families of systems that are very significant in certain application contexts, such as Flexible Manufacturing Systems. The resulting classes of Petri net models assume certain restrictions, physically meaningful in the application context for which they are intended, which considerably alleviate the complexity of the problem. This thesis attempts to bring that kind of methodological approach to the design of deadlock-free multithreaded software applications. To this end, it is shown how the restrictions inherited from the world of Flexible Manufacturing Systems prove too severe to capture the versatility of software systems with regard to the way processes interact with shared resources. In particular, two fundamental modelling needs stand in the way of simply adopting earlier approaches conceived for other domains: (1) the need to support the nesting of non-unfoldable loops inside processes, and (2) the possible sharing of resources that are not available at system start-up but are created or declared by a running process. As a result, a set of basic requirements is identified for defining a type of model oriented towards the study of multithreaded software systems, and a class of Petri nets, called PC2R, is presented that fulfils this list of requirements while remaining faithful to the design philosophy of earlier subclasses aimed at other application contexts. Along with the revision and integration of previous results into the new conceptual framework, the thesis studies properties inherent to the resulting systems and their deep relationship with other types of models, develops results and efficient algorithms for structural liveness analysis of the new class, and reviews and proposes methods for resolving deadlock problems adapted to the physical particularities of the application domain. Likewise, the computational complexity of certain aspects of the resource allocation problem in the new context is studied, as is the translation of the aforementioned results to the domain of multithreaded software engineering, where the new class of nets makes it possible to tackle problems that are intractable with the theoretical framework and tools provided for previously exploited subclasses.
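    As a hedged illustration of the kind of deadlock discussed above: two threads that request the same two serially reusable resources in opposite orders correspond, in Petri-net terms, to a system that can reach a dead marking. The sketch below was written for this summary only; it shows the pattern and one standard remedy in C++, not the structural analysis or the PC2R class developed in the thesis, and all names are hypothetical.

```cpp
// Deadlock pattern from resource allocation: two threads acquire the same two
// serially reusable resources in opposite orders. The interleaving in which
// each thread holds one lock corresponds to a dead marking of the underlying
// Petri net. Names and structure are illustrative only.
#include <iostream>
#include <mutex>
#include <thread>

std::mutex resource_a;
std::mutex resource_b;

// Deadlock-prone version: opposite acquisition orders.
void worker_1_unsafe() {
    std::lock_guard<std::mutex> la(resource_a);
    std::lock_guard<std::mutex> lb(resource_b);   // may wait forever on b
}
void worker_2_unsafe() {
    std::lock_guard<std::mutex> lb(resource_b);
    std::lock_guard<std::mutex> la(resource_a);   // may wait forever on a
}

// One correct allocation policy: acquire both resources together.
// std::scoped_lock applies a deadlock-avoidance algorithm when locking both.
void worker_safe(const char* name) {
    std::scoped_lock both(resource_a, resource_b);
    std::cout << name << " holds both resources\n";
}

int main() {
    std::thread t1(worker_safe, "t1");
    std::thread t2(worker_safe, "t2");
    t1.join();
    t2.join();
    // The unsafe workers are shown but deliberately not run, since their
    // interleavings can reach the dead (deadlocked) state.
}
```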

    Language independent modelling of parallelism

To make programs work in parallel contexts without hazards, programming languages require changes to their structures and compilers. One of the most complicated aspects is the memory model and how programming languages deal with memory interactions. Different processors provide different levels of ordering guarantees (e.g. ARM provides relaxed guarantees whereas Intel provides strong ones). On the other hand, different programming languages provide different constructs for parallel computation and have their own protocols for communicating between parallel processes. Unfortunately, no single choice is best in all situations. This thesis focuses on the memory models of various programming languages and processors, highlighting positive and negative features from the point of view of programmability, performance and portability. To give some evidence of problems and performance bottlenecks, several small programs have been developed. The thesis also concentrates on incorrect behaviors, especially data race conditions in programs, and provides suggestions on how to avoid them. Litmus tests were also run on systems featuring processors from different vendors to observe data races on each system. Programming paradigms have also become an important issue: some programming styles admit observable non-determinism, which is a major source of incorrect behavior in programs. Different programming models are therefore discussed in light of the current state of the available research, and the imperative and functional paradigms are compared in different contexts. Finally, a mathematical problem was solved using two different paradigms to provide some practical evidence of the theory.
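    The contrast between relaxed and strong hardware memory models mentioned above is commonly demonstrated with the store-buffering litmus test. The sketch below is a small, self-contained version written for this summary using C++ atomics, not one of the thesis's own test programs. On hardware and compilers that exploit the relaxed ordering, the outcome r1 == 0 && r2 == 0 may be observed, which sequential consistency forbids.

```cpp
// Store-buffering (SB) litmus test with relaxed atomics.
// Under sequential consistency, r1 == 0 && r2 == 0 is impossible; with
// memory_order_relaxed, hardware/compiler reordering may expose it.
#include <atomic>
#include <iostream>
#include <thread>

int main() {
    int weak_outcomes = 0;
    for (int i = 0; i < 100000; ++i) {
        std::atomic<int> x{0}, y{0};
        int r1 = -1, r2 = -1;

        std::thread t1([&] {
            x.store(1, std::memory_order_relaxed);
            r1 = y.load(std::memory_order_relaxed);
        });
        std::thread t2([&] {
            y.store(1, std::memory_order_relaxed);
            r2 = x.load(std::memory_order_relaxed);
        });
        t1.join();
        t2.join();

        if (r1 == 0 && r2 == 0) ++weak_outcomes;   // the "relaxed" result
    }
    std::cout << "weak outcomes observed: " << weak_outcomes << '\n';
}
```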

    Investigating tools and techniques for improving software performance on multiprocessor computer systems

The availability of modern commodity multicore processors and multiprocessor computer systems has resulted in the widespread adoption of parallel computers in a variety of environments, ranging from the home to workstation and server environments in particular. Unfortunately, parallel programming is harder and requires more expertise than the traditional sequential programming model. The variety of tools and parallel programming models available to the programmer further complicates the issue. The primary goal of this research was to identify and describe a selection of parallel programming tools and techniques to aid novice parallel programmers in the process of developing efficient parallel C/C++ programs for the Linux platform. This was achieved by highlighting and describing the key concepts and hardware factors that affect parallel programming, providing a brief survey of commonly available software development tools and parallel programming models and libraries, and presenting structured approaches to software performance tuning and parallel programming. Finally, the performance of several parallel programming models and libraries was investigated, along with the programming effort required to implement solutions using the respective models. A quantitative research methodology was applied to the investigation of the performance and programming effort associated with the selected parallel programming models and libraries, which included automatic parallelisation by the compiler, Boost Threads, Cilk Plus, OpenMP, POSIX threads (Pthreads), and Threading Building Blocks (TBB). Additionally, the performance of the GNU C/C++ and Intel C/C++ compilers was examined. The results revealed that the choice of parallel programming model or library depends on the type of problem being solved and that there is no single best choice for all classes of problem. However, the results also indicate that parallel programming models with higher levels of abstraction require less programming effort and provide performance similar to that of explicit threading models. The principal conclusion was that problem analysis and parallel design are important factors in the selection of the parallel programming model and tools, but that models with higher levels of abstraction, such as OpenMP and Threading Building Blocks, are favoured.
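    As a hedged illustration of the higher-abstraction models the study favours, the sketch below parallelises a simple summation with OpenMP, one of the models listed above. The workload and problem size are arbitrary placeholders; the thesis's actual benchmark programs are not reproduced here.

```cpp
// Minimal OpenMP example of a higher-abstraction parallel model: a parallel
// sum using a reduction clause instead of explicit thread management.
// Compile with OpenMP enabled, e.g.:  g++ -fopenmp -O2 sum.cpp
#include <iostream>
#include <vector>

int main() {
    const std::size_t n = 10'000'000;          // arbitrary problem size
    std::vector<double> data(n, 1.0);

    double sum = 0.0;
    // OpenMP distributes the iterations across threads and combines the
    // per-thread partial sums; no locks or explicit threads are needed.
    #pragma omp parallel for reduction(+ : sum)
    for (long long i = 0; i < static_cast<long long>(n); ++i)
        sum += data[i];

    std::cout << "sum = " << sum << '\n';
}
```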

    Effective interprocess communication (IPC) in a real-time transputer network

The thesis describes the design and implementation of an interprocess communication (IPC) mechanism within a real-time distributed operating system kernel (RT-DOS) which is designed for a transputer-based network. The requirements of real-time operating systems are examined and existing design and implementation strategies are described. Particular attention is paid to one of the object-oriented techniques, although it is concluded that these techniques are not feasible for the chosen implementation platform. Studies of a number of existing operating systems are reported. The choices for various aspects of operating system design and their influence on the IPC mechanism to be used are elucidated. The actual design choices are related to the real-time requirements, and the implementation that has been adopted is described. [Continues.]
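    Transputer IPC is built around blocking, point-to-point channels (as in occam). The sketch below is only a loose analogue written for this summary: a one-slot channel built from a mutex and condition variable, not the RT-DOS mechanism described in the thesis, and the types and names are invented for illustration.

```cpp
// Loose analogue of a blocking point-to-point channel (occam/transputer
// style): a one-slot rendezvous buffer. Illustrative only; this is not the
// RT-DOS IPC mechanism described in the abstract.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <optional>
#include <thread>

template <typename T>
class Channel {
    std::mutex m;
    std::condition_variable cv;
    std::optional<T> slot;                 // empty means ready for a sender
public:
    void send(T value) {                   // blocks until the slot is free
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return !slot.has_value(); });
        slot = std::move(value);
        cv.notify_all();
    }
    T receive() {                          // blocks until a value arrives
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return slot.has_value(); });
        T value = std::move(*slot);
        slot.reset();
        cv.notify_all();
        return value;
    }
};

int main() {
    Channel<int> ch;
    std::thread producer([&] { for (int i = 0; i < 3; ++i) ch.send(i); });
    std::thread consumer([&] {
        for (int i = 0; i < 3; ++i)
            std::cout << "received " << ch.receive() << '\n';
    });
    producer.join();
    consumer.join();
}
```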

    The exploitation of parallelism on shared memory multiprocessors

With the arrival of many general purpose shared memory multiple processor (multiprocessor) computers into the commercial arena during the mid-1980s, a rift has opened between the raw processing power offered by the emerging hardware and the relative inability of its operating software to effectively deliver this power to potential users. This rift stems from the fact that, currently, no computational model with the capability to elegantly express parallel activity is mature enough to be universally accepted and used as the basis for programming languages to exploit the parallelism that multiprocessors offer. To add to this, there is a lack of software tools to assist programmers in the processes of designing and debugging parallel programs. Although much research has been done in the field of programming languages, no undisputed candidate for the most appropriate language for programming shared memory multiprocessors has yet been found. This thesis examines why this state of affairs has arisen and proposes programming language constructs, together with a programming methodology and environment, to close the ever widening hardware-to-software gap. The novel programming constructs described in this thesis are intended for use in imperative languages even though they make use of the synchronisation inherent in the dataflow model by using the semantics of single assignment when operating on shared data, so giving rise to the term shared values. As there are several distinct parallel programming paradigms, matching flavours of shared value are developed to permit the concise expression of these paradigms.
    Sponsor: The Science and Engineering Research Council
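    The "shared values" described above are language constructs offering single-assignment, dataflow-style synchronisation. As a rough, hedged analogue in standard C++ (not the thesis's proposed constructs), a std::promise/std::future pair behaves like a single-assignment cell: readers block until the value has been written exactly once.

```cpp
// Rough analogue of a single-assignment "shared value" using std::promise /
// std::shared_future: the value is written at most once, and readers block
// until it is available. This illustrates the dataflow-style synchronisation
// only; it is not the shared-value construct proposed in the thesis.
#include <future>
#include <iostream>
#include <thread>

int main() {
    std::promise<int> cell;                                      // write-once producer side
    std::shared_future<int> value = cell.get_future().share();   // many readers

    std::thread consumer1([value] { std::cout << "c1 sees " << value.get() << '\n'; });
    std::thread consumer2([value] { std::cout << "c2 sees " << value.get() << '\n'; });

    std::thread producer([&cell] {
        cell.set_value(42);            // single assignment; a second
                                       // set_value would throw
    });

    producer.join();
    consumer1.join();
    consumer2.join();
}
```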

    A Scalable and Adaptive Network on Chip for Many-Core Architectures

In this work, a scalable network on chip (NoC) for future many-core architectures is proposed and investigated. It supports different QoS mechanisms to ensure predictable communication. Self-optimization is introduced to adapt the energy footprint and the performance of the network to the communication requirements. A fault tolerance concept makes it possible to deal with permanent errors. Moreover, a template-based automated evaluation and design methodology and a synthesis flow for NoCs are introduced.

    Simple, safe, and efficient memory management using linear pointers

Efficient and safe memory management is a hard problem. Garbage collection promises automatic memory management but comes at the cost of increased memory footprint, reduced parallelism in multi-threaded programs, unpredictable pause times, and intricate tuning parameters that balance the program's workload and designated memory usage in order for an application to perform reasonably well. Existing research mitigates the above problems to some extent, but programmer error can still cause memory leaks by erroneously keeping memory references when they are no longer needed. We need a methodology for programmers to become resource aware, so that efficient, scalable, predictable and high performance programs may be written without fear of resource leaks. Linear logic has been recognized as the formalism of choice for resource tracking. It requires explicit introduction and elimination of resources and guarantees that a resource cannot be implicitly shared or abandoned, hence must be linear. Early languages based on linear logic focused on the Curry-Howard correspondence. They began by limiting the expressive powers of the language and then reintroduced them by allowing controlled sharing, which is necessary for recursive functions. However, only by deviating from the Curry-Howard correspondence could later developments actually address programming errors in resource usage. The contribution of this dissertation is a simple, safe, and efficient approach that introduces linear resource ownership semantics into C++ (still a widely used language more than 30 years after its inception) through the linear pointer, a smart pointer inspired by linear logic. By implementing various linear data structures and a parallel, multi-threaded memory allocator based on these data structures, this work shows that the linear pointer is practical and efficient in the real world, and that it is possible to build a memory management stack that is entirely leak free. The dissertation offers some closing remarks on the difficulties a formal system would encounter when reasoning about a concurrent algorithm over linear data structures, and what might be done to solve these problems.
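    As a hedged sketch of the idea (not the dissertation's actual linear pointer implementation), a move-only smart pointer enforces that ownership is transferred rather than shared, so the resource it owns is released exactly once:

```cpp
// Minimal move-only "linear" pointer sketch: ownership must be transferred
// explicitly and can never be duplicated, so the pointee is freed exactly
// once. This illustrates the linear-ownership idea only; it is not the
// dissertation's linear pointer.
#include <iostream>
#include <utility>

template <typename T>
class LinearPtr {
    T* p;
public:
    explicit LinearPtr(T* raw = nullptr) : p(raw) {}
    ~LinearPtr() { delete p; }                        // resource released here

    LinearPtr(const LinearPtr&) = delete;             // no implicit sharing
    LinearPtr& operator=(const LinearPtr&) = delete;

    LinearPtr(LinearPtr&& other) noexcept : p(std::exchange(other.p, nullptr)) {}
    LinearPtr& operator=(LinearPtr&& other) noexcept {
        if (this != &other) { delete p; p = std::exchange(other.p, nullptr); }
        return *this;
    }

    T& operator*() const { return *p; }
    T* operator->() const { return p; }
    explicit operator bool() const { return p != nullptr; }
};

int main() {
    LinearPtr<int> a(new int(7));
    LinearPtr<int> b = std::move(a);    // ownership moves; `a` is now empty
    if (!a) std::cout << "a no longer owns the resource\n";
    std::cout << "b owns " << *b << '\n';
}                                       // b's destructor frees the int once
```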