
    Parallel symbolic state-space exploration is difficult, but what is the alternative?

    State-space exploration is an essential step in many modeling and analysis problems. Its goal is to find the states reachable from the initial state of a discrete-state model. The state space can then be used to answer important questions, e.g., "Is there a dead state?" and "Can N become negative?", or as a starting point for sophisticated investigations expressed in temporal logic. Unfortunately, the state space is often so large that ordinary explicit data structures and sequential algorithms cannot cope, prompting the exploration of (1) parallel approaches using multiple processors, from simple workstation networks to shared-memory supercomputers, to satisfy large memory and runtime requirements, and (2) symbolic approaches using decision diagrams to encode the large structured sets and relations manipulated during state-space generation. Both approaches have merits and limitations. Parallel explicit state-space generation is challenging, but almost linear speedup can be achieved; however, the analysis is ultimately limited by the memory and processors available. Symbolic methods are a heuristic that can efficiently encode many, but not all, functions over a structured and exponentially large domain; here the pitfalls are subtler: their performance varies widely depending on the class of decision diagram chosen, the state variable order, and obscure algorithmic parameters. As symbolic approaches are often much more efficient than explicit ones for many practical models, we argue for the need to parallelize symbolic state-space generation algorithms, so that we can realize the advantages of both approaches. This is a challenging endeavor, as the most efficient symbolic algorithm, Saturation, is inherently sequential. We conclude by discussing challenges, efforts, and promising directions toward this goal.
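
    To make the exploration step concrete, the sketch below (our own illustration, not code from the paper) computes the reachable states as a least fixpoint of an image computation: start from the initial states and add successors until nothing new appears. Plain Python sets stand in for the decision diagrams a symbolic tool would use, and the bounded-counter model is invented for the example, echoing the "Can N become negative?" query.

    def reachable(initial, image):
        """Least fixpoint: all states reachable from `initial` via `image`.

        `image(states)` returns the set of one-step successors of `states`.
        A symbolic tool would encode these sets as decision diagrams;
        here ordinary Python sets stand in for them.
        """
        reached = set(initial)
        frontier = set(initial)
        while frontier:                          # iterate to a fixpoint
            successors = image(frontier)
            frontier = successors - reached      # keep only genuinely new states
            reached |= frontier
        return reached

    # Toy model: a counter N in [0, 3] that can step up or down by one.
    def step(states):
        return {n + d for n in states for d in (-1, +1) if 0 <= n + d <= 3}

    states = reachable({0}, step)
    print(sorted(states))                        # [0, 1, 2, 3]
    print(any(n < 0 for n in states))            # False: N never becomes negative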

    Twenty years of "Lipid World": a fertile partnership with David Deamer

    "The Lipid World" was published in 2001, stemming from a highly effective collaboration with David Deamer during a sabbatical year 20 years ago at the Weizmann Institute of Science in Israel. The present review paper highlights the benefits of this scientific interaction and assesses the impact of the lipid world paper on the present understanding of the possible roles of amphiphiles and their assemblies in the origin of life. The lipid world is defined as a putative stage in the progression towards life's origin, during which diverse amphiphiles or other spontaneously aggregating small molecules could have concurrently played multiple key roles, including compartment formation, the appearance of mutually catalytic networks, molecular information processing, and the rise of collective self-reproduction and compositional inheritance. This review brings back into a broader perspective some key points originally made in the lipid world paper, stressing the distinction between the widely accepted role of lipids in forming compartments and their expanded capacities as delineated above. In the light of recent advancements, we discussed the topical relevance of the lipid worldview as an alternative to broadly accepted scenarios, and the need for further experimental and computer-based validation of the feasibility and implications of the individual attributes of this point of view. Finally, we point to possible avenues for exploring transition paths from small molecule-based noncovalent structures to more complex biopolymer-containing proto-cellular systems.711473 - Minerva Foundation; 80NSSC17K0295, 80NSSC17K0296, 1724150 - National Science FoundationPublished versio

    A Benes Based NoC Switching Architecture for Mixed Criticality Embedded Systems

    Multi-core, Mixed Criticality Embedded (MCE) real-time systems require high timing precision and predictability to guarantee that there will be no interference between tasks. These guarantees are necessary in application areas such as avionics and automotive, where task interference or missed deadlines could be catastrophic and safety requirements are strict. In modern multi-core systems, the interconnect becomes a potential point of uncertainty, introducing major challenges in proving that behaviour always stays within specified constraints and limiting the means of growing system performance, whether by adding more tasks or by providing more computational resources to existing tasks. We present MCENoC, a Network-on-Chip (NoC) switching architecture that overcomes this with predictable, formally verifiable timing behaviour that is consistent across the whole NoC. We show how the fundamental properties of Benes networks benefit MCE applications and meet our architecture requirements. Using SystemVerilog Assertions (SVA), formal properties are defined that aid the refinement of the design specification and enable the implementation to be exhaustively formally verified. We demonstrate the performance of the design in terms of size, throughput and predictability, and discuss the application-level considerations needed to exploit this architecture.
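
    The property of Benes networks that matters here is that they are rearrangeably non-blocking: an N-input Benes network has 2*log2(N) - 1 stages of N/2 two-by-two switches, yet can realize every input-to-output permutation. As a minimal illustration (our own sketch, not code from the paper), the brute-force check below confirms this for N = 4, where three stages of two switches, 64 settings in all, realize all 24 permutations.

    from itertools import product

    # Inter-stage wiring of a 4-input Benes network (upper/lower recursive split):
    # output port p of one stage feeds input port WIRING[p] of the next stage.
    WIRING = [0, 2, 1, 3]

    def apply_stage(wires, bits):
        """One stage = two 2x2 switches; bit 0 = straight, bit 1 = crossed."""
        out = list(wires)
        for s, bit in enumerate(bits):
            if bit:
                out[2 * s], out[2 * s + 1] = wires[2 * s + 1], wires[2 * s]
        return out

    def realized(settings):
        """Permutation realized by a full assignment of the six switches."""
        wires = list(range(4))              # wire value = originating input
        for stage, bits in enumerate(settings):
            wires = apply_stage(wires, bits)
            if stage < 2:                   # apply inter-stage wiring
                nxt = [None] * 4
                for p, packet in enumerate(wires):
                    nxt[WIRING[p]] = packet
                wires = nxt
        return tuple(wires)                 # output port o receives wires[o]

    settings_space = product(product((0, 1), repeat=2), repeat=3)
    perms = {realized(s) for s in settings_space}
    assert len(perms) == 24                 # rearrangeably non-blocking for N = 4
    print(f"{len(perms)} of 24 permutations realizable with 6 switches")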

    The future of computing beyond Moore's Law.

    Moore's Law is a techno-economic model that has enabled the information technology industry to double the performance and functionality of digital electronics roughly every 2 years within a fixed cost, power and area. Advances in silicon lithography have enabled this exponential miniaturization of electronics, but, as transistors reach atomic scale and fabrication costs continue to rise, the classical technological driver that has underpinned Moore's Law for 50 years is failing and is anticipated to flatten by 2025. This article provides an updated view of what a post-exascale system will look like and the challenges ahead, based on our most recent understanding of technology roadmaps. It also discusses the tapering of historical improvements, and how it affects options available to continue scaling of successors to the first exascale machine. Lastly, this article covers the many different opportunities and strategies available to continue computing performance improvements in the absence of historical technology drivers. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.
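
    As a back-of-the-envelope illustration of the doubling model (our arithmetic, not the article's): performance that doubles every 2 years grows by a factor of 2^(t/2) after t years, roughly a 33-million-fold improvement over the 50 years cited.

    # Growth implied by a fixed doubling period (illustrative arithmetic only).
    def growth(years, doubling_period=2.0):
        """Multiplicative improvement after `years` at the given doubling time."""
        return 2.0 ** (years / doubling_period)

    print(f"{growth(10):,.0f}x after a decade")   # 32x
    print(f"{growth(50):,.0f}x after 50 years")   # 33,554,432x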