
    Non uniform (hyper/multi)coherence spaces

    In (hyper)coherence semantics, proofs/terms are cliques in (hyper)graphs. Intuitively, vertices represent results of computations, and the edge relation witnesses their ability to be assembled into the same piece of data or, at arrow types, into the same (strongly) stable function. In (hyper)coherence semantics, the argument of a (strongly) stable functional is always a (strongly) stable function. As a consequence, compared with the relational semantics, where there is no edge relation, some vertices are missing. Recovering these vertices is essential for reconstructing proofs/terms from their interpretations. It should also be useful for comparison with other semantics, such as game semantics. In [BE01], Bucciarelli and Ehrhard introduced a so-called non-uniform coherence space semantics in which no vertex is missing. By constructing the co-free exponential, we give a new version of this latter semantics, together with non-uniform versions of hypercoherences and of multicoherences, a new semantics in which an edge is a finite multiset. Thanks to the co-free construction, these non-uniform semantics are deterministic, in the sense that the intersection of a clique and an anti-clique (a result of interaction) contains at most one vertex, and they extensionally collapse onto the corresponding uniform semantics.
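    As a rough illustration of the determinism property stated above, here is a small sketch in our own notation (not the paper's): a plain coherence space is a web together with a coherence relation, a clique is a set of pairwise-coherent vertices, an anti-clique a set of pairwise-incoherent ones, and any clique meets any anti-clique in at most one vertex.

```scala
// Sketch only: names and representation are illustrative, not from the paper.
final case class CoherenceSpace[A](web: Set[A], coherent: (A, A) => Boolean) {
  // A clique: all distinct members are pairwise coherent.
  def isClique(xs: Set[A]): Boolean =
    xs.subsetOf(web) && xs.forall(a => xs.forall(b => a == b || coherent(a, b)))

  // An anti-clique: all distinct members are pairwise incoherent.
  def isAntiClique(xs: Set[A]): Boolean =
    xs.subsetOf(web) && xs.forall(a => xs.forall(b => a == b || !coherent(a, b)))
}

object Determinism {
  def main(args: Array[String]): Unit = {
    // Web {0,1,2,3}; two vertices cohere when they have the same parity.
    val cs = CoherenceSpace[Int]((0 to 3).toSet, (a, b) => a % 2 == b % 2)
    val clique     = Set(0, 2) // pairwise coherent
    val antiClique = Set(0, 1) // pairwise incoherent
    assert(cs.isClique(clique) && cs.isAntiClique(antiClique))
    // Determinism: a clique and an anti-clique share at most one vertex.
    assert((clique intersect antiClique).size <= 1)
    println(clique intersect antiClique) // Set(0)
  }
}
```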

    Compiling a Functional Logic Language: The Fair Scheme

    We present a compilation scheme for a functional logic programming language. The input program to our compiler is a constructor-based graph rewriting system in a non-confluent, but well-behaved class. This input is an intermediate representation of a functional logic program in a language such as Curry or TOY. The output program from our compiler consists of three procedures that make recursive calls and execute both rewrite and pull-tab steps. This output is an intermediate representation that is easy to encode in any number of programming languages. Our design evolves the Basic Scheme of Antoy and Peters by removing the “left bias” that prevents obtaining the results of some computations—a behavior related to the order of evaluation, which runs counter to declarative programming. The benefits of this evolution are not only the strong completeness of computations, but also the provability of non-trivial properties of these computations. We rigorously describe the compiler design and prove some of its properties. To state and prove these properties, we introduce novel definitions of “need” and “failure.” For non-confluent constructor-based rewriting systems these concepts are more appropriate than the classic definition of need of Huet and Lévy.
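    The following sketch is our own illustration of the “left bias” problem, not the Fair Scheme itself: non-deterministic results are modeled as a tree of binary choices, and a fair, breadth-first enumeration reaches a value that a left-biased, depth-first strategy would never produce.

```scala
// Sketch only: ND, Leaf, Choice and spin are illustrative names, not the paper's.
sealed trait ND[+A]
final case class Leaf[A](a: A) extends ND[A]
final case class Choice[A](l: () => ND[A], r: () => ND[A]) extends ND[A]

object FairEnumeration {
  // Left-biased, depth-first enumeration: exhausts the left alternative before
  // ever looking at the right one.
  def leftBiased[A](t: ND[A]): LazyList[A] = t match {
    case Leaf(a)      => LazyList(a)
    case Choice(l, r) => leftBiased(l()) #::: leftBiased(r())
  }

  // Fair, breadth-first enumeration over the choice tree.
  def fair[A](t: ND[A]): LazyList[A] = {
    def go(queue: List[ND[A]]): LazyList[A] = queue match {
      case Nil                  => LazyList.empty
      case Leaf(a) :: rest      => a #:: go(rest)
      case Choice(l, r) :: rest => go(rest ::: List(l(), r()))
    }
    go(List(t))
  }

  // A choice whose left alternative is unproductive and whose right one is a value.
  def spin: ND[Int] = Choice(() => spin, () => Leaf(42))

  def main(args: Array[String]): Unit = {
    println(fair(spin).headOption) // Some(42)
    // leftBiased(spin) would descend into the left alternative forever.
  }
}
```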

    Acta Cybernetica: Volume 9, Number 3.


    The vitality of ice and bone: known uncertainty and awareness in change through Dolpo, Nepal

    2012 Spring. Includes bibliographical references. At least one thousand years of caravanning yaks through the remote Himalayas have significantly shaped the practices of the Dolpo-pa, a culturally Tibetan population dwelling through the highlands of midwestern Nepal. In turn, those practices have significantly affected how the Dolpo-pa conceptualize their world: the models by which they frame the experiences that effect those practices are directly and continuously synergized with the ecological realities of the existential present, persistently confirming, contesting, or altering their awareness of those experiences. Physical reality at the biometabolic scale of ecological processes, therefore, which is as a rule perfunctorily and uncritically framed by observers descended from the specific histories of the European Enlightenment as the second-order reification labeled the environment, is schematized by the Dolpo-pa as something more like an "entanglement" in the uncertainty inherent to dwelling through that scale. As such, unlike the Cartesian divide elemental to the Western model, which distorts reality by a cognitive trick of circular framing, reifying second-order conceptualizations and taking those reifications as first-order realities in the world, ethnographic evidence indicates that the Dolpo-pa culturally model themselves as unique and distinct as humans but not as separate from their domain of metabolic entanglement. The difference in these representations is significant, not only because it highlights the cultural model that emerged among the Dolpo-pa through extended engagement with that unforgiving mountain environment, but also because it suggests what is being lost with the increasing incursion of the Western model of development into that domain. The Dolpo-pa's increasing acquiescence to the distortions of that model is beginning to disentangle their unique awareness at very basic levels, which is especially evident in new forms of social fragmentation that have only since around 2005 begun to influence how individuals in Dolpo constellate schemas of intra-entanglement arrangements and extra-entanglement connotations there. Worryingly, such new, second-order constellations have been concurrent with a growing decline in the ability of deep-rooted cultural models of known ecological uncertainties to effectively frame recent experiences with rapidly changing phenological conditions as average weather patterns (i.e., climate) have steadily altered in recent years. The Dolpo-pa's cultural model of entanglement is unfortunately incapable of proficiently conceptualizing, let alone adequately representing and responding to, changes at the technometabolic scale of industrial processes, from which such phenological changes have originated but at which few among the Dolpo-pa have experience or proficiency in negotiating. This thesis concludes with a brief discussion of how continued decline in the efficacy of the Dolpo-pa's cultural model of entanglement is progressively leading to greater existential dissonance, a concept introduced here that qualitatively gauges how such disentanglement gives rise to an increased likelihood of physical loss of life or livelihood within experiences no less physically entangled at the scale of ecological processes.

    Out-of-Order Retirement of Instructions in Superscalar, Multithreaded, and Multicore Processors

    Current superscalar processors use a reorder buffer (ROB) to keep track of in-flight instructions. The ROB is implemented as a FIFO (first in, first out) queue into which instructions are inserted in program order after being decoded, and from which they are also removed in program order at the commit stage. This structure provides simple support for speculation, precise exceptions, and register reclamation. However, retiring instructions in order can degrade performance when a long-latency operation blocks the head of the ROB. Several proposals have been published to address this problem. Most of them retire instructions out of order speculatively, which requires storing recovery points (checkpoints) to restore a valid processor state after a misspeculation. Checkpoints typically require costly hardware structures, and they also force other processor structures to grow, which in turn can impact the clock cycle time. This problem affects many kinds of current processors, regardless of the number of hardware threads and compute cores they include. This thesis studies non-speculative out-of-order retirement of instructions in superscalar, multithreaded, and multicore processors. Ubal Tena, R. (2010). Out-of-Order Retirement of Instructions in Superscalar, Multithreaded, and Multicore Processors [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/8535
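    A toy sketch of the trade-off described above (our own simplification, not the thesis design): with in-order retirement, an incomplete instruction at the head of the ROB holds back younger instructions that have already completed, whereas out-of-order retirement lets them leave the buffer immediately.

```scala
// Sketch only: Instr and the retirement functions are illustrative.
final case class Instr(id: Int, completesAt: Int)

object RobSketch {
  // In-order retirement: walk the ROB front-to-back and stop at the first
  // instruction that has not completed yet. Returns (retired, still in ROB).
  def retireInOrder(rob: Vector[Instr], cycle: Int): (Vector[Instr], Vector[Instr]) = {
    val retired = rob.takeWhile(_.completesAt <= cycle)
    (retired, rob.drop(retired.length))
  }

  // Out-of-order retirement (greatly simplified): any completed instruction may
  // leave the buffer, regardless of its position.
  def retireOutOfOrder(rob: Vector[Instr], cycle: Int): (Vector[Instr], Vector[Instr]) =
    rob.partition(_.completesAt <= cycle)

  def main(args: Array[String]): Unit = {
    // Instruction 0 is a long-latency operation; 1..3 completed long ago.
    val rob = Vector(Instr(0, completesAt = 300), Instr(1, 5), Instr(2, 6), Instr(3, 7))
    println(retireInOrder(rob, cycle = 10))    // nothing retires; the head blocks all four
    println(retireOutOfOrder(rob, cycle = 10)) // instructions 1..3 retire; only 0 remains
  }
}
```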

    Research-study of a self-organizing computer

    It is shown that a self-organizing system has two main components: an organizable physical part and a programming part. This report presents the organizable part in the form of programmable hardware and its programming language.

    The modern landscape of managing effects for the working programmer

    The management of side effects is a crucial aspect of modern programming, especially in concurrent and distributed systems. This thesis analyses different approaches for managing side effects in programming languages, specifically focusing on unrestricted side effects, monads, and algebraic effects and handlers. Unrestricted side effects, used in mainstream imperative programming languages, can make programs difficult to reason about. Monads offer a solution to this problem by describing side effects in a composable and referentially transparent way, but many find them cumbersome to use. Algebraic effects and handlers can address some of the shortcomings of monads by providing a way to model effects in a more modular and flexible way. The thesis discusses the advantages and disadvantages of each of these approaches and compares them based on factors such as expressiveness, safety, and the constraints they place on how programs must be implemented. The thesis focuses on ZIO, a Scala library for concurrent and asynchronous programming, which revolves around a ZIO monad with three type parameters. With those three parameters ZIO can encode the majority of practically useful effects in a single monad. ZIO takes inspiration from algebraic effects, combining them with monadic effects. The library provides a range of features, such as declarative concurrency, error handling, and resource management. The thesis presents examples of using ZIO to manage side effects in practical scenarios, highlighting its strengths over other approaches. The applicability of ZIO is evaluated by implementing a server-side application using ZIO, and analyzing observations from the development process.
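    A minimal sketch of the ZIO style discussed above, assuming ZIO 2.x; the Config service, its field, and the error message are invented for illustration. The three type parameters of ZIO[R, E, A] track the required environment, the typed error, and the produced value.

```scala
import zio._

// Sketch only: Config and the values used here are made up for illustration.
final case class Config(retries: Int)

object ZioSketch extends ZIOAppDefault {
  // ZIO[R, E, A]: needs a Config (R), may fail with a String (E), yields an Int (A).
  val effect: ZIO[Config, String, Int] =
    for {
      cfg <- ZIO.service[Config]
      _   <- ZIO.when(cfg.retries == 0)(ZIO.fail("no retries configured"))
      n   <- ZIO.succeed(cfg.retries * 2)
    } yield n

  def run =
    effect
      .catchAll(_ => ZIO.succeed(-1))          // handle the typed error
      .provideLayer(ZLayer.succeed(Config(3))) // satisfy the environment
      .flatMap(n => Console.printLine(s"result: $n"))
}
```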

    Development and analysis of the Software Implemented Fault-Tolerance (SIFT) computer

    SIFT (Software Implemented Fault Tolerance) is an experimental, fault-tolerant computer system designed to meet the extreme reliability requirements for safety-critical functions in advanced aircraft. Errors are masked by performing a majority voting operation over the results of identical computations, and faulty processors are removed from service by reassigning computations to the nonfaulty processors. This scheme has been implemented in a special architecture using a set of standard Bendix BDX930 processors, augmented by a special asynchronous-broadcast communication interface that provides direct, processor-to-processor communication among all processors. Fault isolation is accomplished in hardware; all other fault-tolerance functions, together with scheduling and synchronization, are implemented exclusively by executive system software. The system reliability is predicted by a Markov model. Mathematical consistency of the system software with respect to the reliability model has been partially verified, using recently developed tools for machine-aided proof of program correctness.
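    A small sketch of the majority-voting idea described above (our own simplification, not SIFT's implementation): each replica of a computation reports its result, the value backed by a majority of replicas is accepted, and replicas that disagree are flagged as potentially faulty.

```scala
// Sketch only: the vote function and its representation are illustrative.
object MajorityVote {
  // Returns the majority value, if any, together with the ids of disagreeing replicas.
  def vote[A](results: Map[Int, A]): Option[(A, Set[Int])] =
    results.values
      .groupBy(identity)
      .maxByOption { case (_, vs) => vs.size }
      .collect { case (winner, vs) if vs.size * 2 > results.size =>
        (winner, results.collect { case (id, v) if v != winner => id }.toSet)
      }

  def main(args: Array[String]): Unit = {
    // Three replicas of the same computation; processor 2 returns a wrong value.
    println(vote(Map(0 -> 42, 1 -> 42, 2 -> 7))) // Some((42, Set(2)))
    println(vote(Map(0 -> 1, 1 -> 2, 2 -> 3)))   // None: no majority
  }
}
```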