Desynchronization: Synthesis of asynchronous circuits from synchronous specifications
Asynchronous implementation techniques, which measure logic delays at run time and activate registers accordingly, are inherently more robust than their synchronous counterparts, which estimate worst-case delays at design time and constrain the clock cycle accordingly. De-synchronization is a new paradigm to automate the design of asynchronous circuits from synchronous specifications, thus permitting widespread adoption of asynchronicity without requiring special design skills or tools. In this paper, we first study different protocols for de-synchronization and formally prove their correctness, using techniques originally developed for distributed deployment of synchronous language specifications. We also provide a taxonomy of existing protocols for asynchronous latch controllers, covering in particular the four-phase handshake protocols devised in the literature for micro-pipelines. We then propose a new controller which exhibits provably maximal concurrency, and analyze the performance of desynchronized circuits with respect to the original synchronous optimized implementation. We finally prove the feasibility and effectiveness of our approach by showing its application to a set of real designs, including a complete implementation of the DLX microprocessor architecture.
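The four-phase handshake mentioned in the abstract can be illustrated with a minimal sketch. The `Stage` class and event strings below are illustrative assumptions, not the paper's latch controllers; the sketch only shows the return-to-zero event ordering (req+, ack+, req-, ack-) between two pipeline stages.

```python
# Illustrative sketch of a four-phase (return-to-zero) handshake between
# two pipeline stages, in the style of micro-pipelines. Names are assumptions.

class Stage:
    def __init__(self, name):
        self.name = name
        self.data = None

    def send(self, receiver, value):
        """Run one four-phase transfer: req+, ack+, req-, ack-."""
        events = []
        events.append(f"{self.name}: req+ (data={value})")  # sender raises request, data valid
        receiver.data = value                               # receiver latches the data
        events.append(f"{receiver.name}: ack+")             # receiver acknowledges
        events.append(f"{self.name}: req-")                 # sender withdraws the request
        events.append(f"{receiver.name}: ack-")             # receiver resets; channel idle again
        return events

a, b = Stage("A"), Stage("B")
for event in a.send(b, 42):
    print(event)
```

A real desynchronized circuit overlaps the phases of adjacent controllers; how much overlap a controller permits is exactly the concurrency question the paper analyzes.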
The abelian sandpile and related models
The Abelian sandpile model is the simplest analytically tractable model of
self-organized criticality. This paper presents a brief review of known results
about the model. The abelian group structure allows an exact calculation of
many of its properties. In particular, one can calculate all the critical
exponents for the directed model in all dimensions. For the undirected case,
the model is related to the q = 0 Potts model. This enables exact calculation of
some exponents in two dimensions, and there are some conjectures about others.
We also discuss a generalization of the model to a network of communicating
reactive processors. This includes sandpile models with stochastic toppling
rules as a special case. We also consider a non-abelian stochastic variant,
which lies in a different universality class, related to directed percolation.
Comment: Typos and minor errors fixed and some references added.
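The basic toppling dynamics of the Abelian sandpile reviewed above can be sketched in a few lines. This is an illustrative simulation with the standard threshold-4 rule on a square lattice (grains leaving the boundary are lost), not code from the paper.

```python
# Minimal sketch of the Abelian sandpile on an N x N grid: a site with 4 or
# more grains topples, sending one grain to each neighbour; grains that fall
# off the boundary are dissipated.

def topple(grid):
    """Relax the grid in place until every site holds fewer than 4 grains."""
    n = len(grid)
    unstable = True
    while unstable:
        unstable = False
        for i in range(n):
            for j in range(n):
                if grid[i][j] >= 4:
                    unstable = True
                    grid[i][j] -= 4
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < n and 0 <= nj < n:
                            grid[ni][nj] += 1
    return grid

# Drop 5 grains on the centre of a 3x3 lattice: one toppling leaves 1 grain
# at the centre and 1 at each neighbour.
g = [[0] * 3 for _ in range(3)]
g[1][1] = 5
print(topple(g))  # → [[0, 1, 0], [1, 1, 1], [0, 1, 0]]
```

The abelian property the abstract exploits is visible here: the final stable configuration is independent of the order in which unstable sites are toppled, so the sweep order in the loop does not matter.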
Functional Integration Approach to Hysteresis
A general formulation of scalar hysteresis is proposed. This formulation is
based on two steps. First, a generating function g(x) is associated with an
individual system, and a hysteresis evolution operator is defined by an
appropriate envelope construction applied to g(x), inspired by the overdamped
dynamics of systems evolving in multistable free energy landscapes. Second, the
average hysteresis response of an ensemble of such systems is expressed as a
functional integral over the space G of all admissible generating functions,
under the assumption that an appropriate measure m has been introduced in G.
The consequences of the formulation are analyzed in detail in the case where
the measure m is generated by a continuous, Markovian stochastic process. The
calculation of the hysteresis properties of the ensemble is reduced to the
solution of the level-crossing problem for the stochastic process. In
particular, it is shown that, when the process is translationally invariant
(homogeneous), the ensuing hysteresis properties can be exactly described by
the Preisach model of hysteresis, and the associated Preisach distribution is
expressed in closed analytic form in terms of the drift and diffusion
parameters of the Markovian process. Possible applications of the formulation
are suggested, concerning the interpretation of magnetic hysteresis due to
domain wall motion in quenched-in disorder, and the interpretation of critical
state models of superconducting hysteresis.
Comment: 36 pages, 9 figures, to be published in Phys. Rev.
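The Preisach model that emerges in the homogeneous case can be illustrated with a small ensemble of rectangular hysterons. The thresholds below are arbitrary illustrative values, not the paper's Preisach distribution; the sketch only shows how an ensemble average over hysterons produces branch-dependent (hysteretic) output.

```python
# Illustrative sketch of the Preisach model: each hysteron switches up when
# the input reaches alpha and down when it falls to beta (beta <= alpha).
# Thresholds and ensemble size are assumptions for demonstration.

class Hysteron:
    def __init__(self, beta, alpha, state=-1):
        self.beta, self.alpha, self.state = beta, alpha, state

    def apply(self, x):
        if x >= self.alpha:
            self.state = +1
        elif x <= self.beta:
            self.state = -1
        return self.state            # between thresholds the state is retained

def response(hysterons, inputs):
    """Average output of the ensemble along an input history."""
    return [sum(h.apply(x) for h in hysterons) / len(hysterons) for x in inputs]

# Uniform grid of (beta, alpha) thresholds with beta <= alpha.
ensemble = [Hysteron(b, a) for b in (-2, -1, 0) for a in (0, 1, 2) if b <= a]
up = response(ensemble, [-3, 0, 3])    # ascending branch
down = response(ensemble, [3, 0, -3])  # descending branch
# At x = 0 the output depends on the direction of approach:
print(up[1], down[1])
```

The gap between `up[1]` and `down[1]` is the hysteresis loop width at that input level; in the paper this ensemble average becomes a functional integral, and the hysteron density is fixed by the drift and diffusion of the underlying Markovian process.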
Task-based Augmented Contour Trees with Fibonacci Heaps
This paper presents a new algorithm for the fast, shared memory, multi-core
computation of augmented contour trees on triangulations. In contrast to most
existing parallel algorithms, our technique computes augmented trees, enabling
the full extent of contour tree based applications including data segmentation.
Our approach completely revisits the traditional, sequential contour tree
algorithm to re-formulate all the steps of the computation as a set of
independent local tasks. This includes a new computation procedure based on
Fibonacci heaps for the join and split trees, two intermediate data structures
used to compute the contour tree, whose constructions are efficiently carried
out concurrently thanks to the dynamic scheduling of task parallelism. We also
introduce a new parallel algorithm for the combination of these two trees into
the output global contour tree. Overall, this results in superior time
performance in practice, both in sequential and in parallel thanks to the
OpenMP task runtime. We report performance numbers that compare our approach to
reference sequential and multi-threaded implementations for the computation of
augmented merge and contour trees. These experiments demonstrate the run-time
efficiency of our approach and its scalability on common workstations. We
demonstrate the utility of our approach in data segmentation applications.
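The sequential core that the paper re-formulates as local tasks can be sketched with the classic union-find sweep for a merge tree (in their algorithm a Fibonacci heap takes the place of the global sort). The code below is an illustrative sketch on an arbitrary graph, not the paper's implementation: it records every union event of the sublevel-set sweep, and contracting chains of regular vertices would yield the actual tree arcs.

```python
# Illustrative merge-tree sweep: process vertices from low to high scalar
# value and union sublevel-set components with union-find, recording each
# union event (component representative, merging vertex).

def merge_tree(values, edges):
    n = len(values)
    parent = list(range(n))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)

    arcs, seen = [], set()
    for v in sorted(range(n), key=lambda i: values[i]):
        seen.add(v)
        for u in adj[v]:
            if u in seen and find(u) != find(v):
                arcs.append((find(u), v))   # component of u is absorbed at v
                parent[find(u)] = find(v)
    return arcs

# 1-D field with two minima (indices 0 and 4) that meet at the global
# maximum (index 2) on the chain 0-1-2-3-4:
print(merge_tree([1, 3, 5, 2, 0], [(0, 1), (1, 2), (2, 3), (3, 4)]))
# → [(4, 3), (0, 1), (1, 2), (3, 2)]
```

Running the same sweep from high to low gives the second tree; combining the two trees yields the contour tree, which is the step the paper parallelizes with a dedicated algorithm.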
Machine Hyperconsciousness
Individual animal consciousness appears limited to a single giant component of interacting cognitive modules, instantiating a shifting, highly tunable Global Workspace. Human institutions, by contrast, can support several, often many, such giant components simultaneously, although they generally function far more slowly than the minds of the individuals who compose them. Machines having multiple global workspaces -- hyperconscious machines -- should, however, be able to operate at the few hundred milliseconds characteristic of individual consciousness. Such multitasking -- machine or institutional -- while clearly limiting the phenomenon of inattentional blindness, does not eliminate it, and introduces characteristic failure modes involving the distortion of information sent between global workspaces. This suggests that machines explicitly designed along these principles, while highly efficient at certain sets of tasks, remain subject to canonical and idiosyncratic failure patterns analogous to, but more complicated than, those explored in Wallace (2006a). By contrast, institutions, facing similar challenges, are usually deeply embedded in a highly stabilizing cultural matrix of law, custom, and tradition which has evolved over many centuries. Parallel development of analogous engineering strategies, directed toward ensuring an 'ethical' device, would seem requisite to the successful application of any form of hyperconscious machine technology.