180 research outputs found
Compiling Irregular Software to Specialized Hardware
High-level synthesis (HLS) has simplified the design process for energy-efficient hardware accelerators: a designer specifies an accelerator’s behavior in a “high-level” language, and a toolchain synthesizes register-transfer level (RTL) code from this specification. Many HLS systems produce efficient hardware designs for regular algorithms (i.e., those with limited conditionals or regular memory access patterns), but most struggle with irregular algorithms that rely on dynamic, data-dependent memory access patterns (e.g., traversing pointer-based structures like lists, trees, or graphs). HLS tools typically provide imperative, side-effectful languages to the designer, which makes it difficult to correctly specify and optimize complex, memory-bound applications.
In this dissertation, I present an alternative HLS methodology that leverages properties of functional languages to synthesize hardware for irregular algorithms. The main contribution is an optimizing compiler that translates pure functional programs into modular, parallel dataflow networks in hardware. I give an overview of this compiler, explain how its source and target together enable parallelism in the face of irregularity, and present two specific optimizations that further exploit this parallelism. Taken together, this dissertation verifies my thesis that pure functional programs exhibiting irregular memory access patterns can be compiled into specialized hardware and optimized for parallelism.
This work extends the scope of modern HLS toolchains. By relying on properties of pure functional languages, our compiler can synthesize hardware from programs containing constructs that commercial HLS tools prohibit, e.g., recursive functions and dynamic memory allocation. Hardware designers may thus use our compiler in conjunction with existing HLS systems to accelerate a wider class of algorithms than before.
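To make the notion of an "irregular" program concrete, the following is a minimal illustrative sketch (in Python, not the dissertation's actual source language) of the kind of pure, recursive, pointer-chasing computation the abstract describes: the memory access pattern depends on the run-time shape of a dynamically allocated tree, which is exactly what conventional HLS tools struggle to synthesize.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    value: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def tree_sum(t: Optional[Node]) -> int:
    # Pure, recursive traversal of a pointer-based structure.
    # Which addresses are touched depends on the tree's shape at
    # run time, so the access pattern is data-dependent ("irregular").
    if t is None:
        return 0
    return t.value + tree_sum(t.left) + tree_sum(t.right)

t = Node(1, Node(2, Node(4)), Node(3))
print(tree_sum(t))  # 10
```

Because the function is pure, the two recursive calls are independent and could, in principle, proceed in parallel as separate dataflow subnetworks, which is the property the compiler exploits.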
Digital ecosystems: a distributed service-oriented approach for business transactions
EThOS - Electronic Theses Online Service, United Kingdom
Specification and Verification of Shared-Memory Concurrent Programs
Ph.D. (Doctor of Philosophy)
Design and Evaluation of the Hamal Parallel Computer
Parallel shared-memory machines with hundreds or thousands of processor-memory nodes have been built; in the future we will see machines with millions or even billions of nodes. Associated with such large systems is a new set of design challenges. Many problems must be addressed by an architecture in order for it to be successful; of these, we focus on three in particular. First, a scalable memory system is required. Second, the network messaging protocol must be fault-tolerant. Third, the overheads of thread creation, thread management and synchronization must be extremely low. This thesis presents the complete system design for Hamal, a shared-memory architecture which addresses these concerns and is directly scalable to one million nodes. Virtual memory and distributed objects are implemented in a manner that requires neither inter-node synchronization nor the storage of globally coherent translations at each node. We develop a lightweight fault-tolerant messaging protocol that guarantees message delivery and idempotence across a discarding network. A number of hardware mechanisms provide efficient support for massive multithreading and fine-grained synchronization. Experiments are conducted in simulation, using a trace-driven network simulator to investigate the messaging protocol and a cycle-accurate simulator to evaluate the Hamal architecture. We determine implementation parameters for the messaging protocol which optimize performance. A discarding network is easier to design and can be clocked at a higher rate, and we find that with this protocol its performance can approach that of a non-discarding network. Our simulations of Hamal demonstrate the effectiveness of its thread management and synchronization primitives. In particular, we find register-based synchronization to be an extremely efficient mechanism which can be used to implement a software barrier with a latency of only 523 cycles on a 512-node machine.
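The combination of guaranteed delivery and idempotence over a discarding network can be sketched with the standard retransmit-and-deduplicate pattern: the sender retries until an acknowledgment gets through, and the receiver filters duplicates by sequence number so retransmissions are applied at most once. This is a hypothetical illustration of the general technique, not Hamal's actual wire protocol; all names here are invented.

```python
import random

class Receiver:
    """Applies each message at most once, despite duplicate deliveries."""
    def __init__(self):
        self.seen = set()   # sequence numbers already applied
        self.log = []       # messages applied, in order of first arrival

    def deliver(self, seq, payload):
        if seq not in self.seen:   # duplicates are dropped, not reapplied
            self.seen.add(seq)
            self.log.append(payload)
        return True                # acknowledge even duplicates

def send_reliably(receiver, seq, payload, drop_prob=0.5, rng=random.Random(0)):
    """Retransmit until an acknowledgment survives the discarding network."""
    while True:
        if rng.random() >= drop_prob:       # the request may be dropped
            acked = receiver.deliver(seq, payload)
            if acked and rng.random() >= drop_prob:  # the ack may be dropped too
                return

rx = Receiver()
for i, msg in enumerate(["a", "b", "c"]):
    send_reliably(rx, i, msg)
print(rx.log)  # ['a', 'b', 'c']
```

Delivery is guaranteed by unbounded retransmission, and idempotence by the receiver-side `seen` set; the sender never needs to know whether a lost round dropped the request or only the acknowledgment.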
Regional climate, federal land management, and the social-ecological resilience of southeastern Alaska
Thesis (Ph.D.) University of Alaska Fairbanks, 2007. Complex systems of humans and nature often experience rapid and unpredictable change that results in undesirable outcomes for both ecosystems and society. In circumpolar regions, where multiple converging drivers of change are reshaping both human and natural communities, there is uncertainty about future dynamics and the capacity to sustain the important interactions of social-ecological systems in the face of rapid change. This research addresses this uncertainty in the region of Southeast Alaska, where lessons learned from other circumpolar regions may not be applicable because of unique social and ecological conditions. Southeast Alaska contains the most productive and diverse ecosystems at high latitudes and a human population almost entirely isolated and embedded in National Forest lands; these qualities underscore the importance of the region's climate and federal management systems, respectively. This research presents a series of case studies of the drivers, dynamics, and outcomes of change in regional climate and federal management, and theoretically grounds these studies to understand the regional resilience to change. Climate change in Southeast Alaska is investigated with respect to impacts on temperate rainforest ecosystems. Findings suggest that warming is linked to emergence of declining cedar forests in the last century. Dynamics of federal management are investigated in several studies, concerning the origins and outcomes of national conservation policy, the boom-bust history of the regional timber economy, and the factors contributing to the current 'deadlock' in Tongass National Forest management. Synthesis of case study findings suggests both emergent phenomena (yellow-cedar decline) and cyclic dynamics (timber boom-bust) resulting from the convergence of ecological and social drivers of change.
Adaptive responses to emergent opportunities appear constrained by inertia in management philosophies. Resilience to timber industry collapse has been variable at local scales, but overall the regional economy has experienced transition while retaining many of its key social-ecological interactions (e.g., subsistence and commercial uses of fish and wildlife). An integrated assessment of regional datasets suggests a high integrity of these interactions, but also identifies critical areas of emergent vulnerability. Overall findings are synthesized to provide policy and management recommendations for supporting regional resilience to future change.
Contents: Introduction: Southeast Alaska as a social-ecological system -- Climate change and forest decline in Southeast Alaska -- Significance of wilderness conservation in Southeast Alaska: outcomes of the Alaska lands debate over the Tongass National Forest -- Dynamics of federal land management during the 20th century -- Factors influencing the reorganization of federal land management -- Conclusions: regional dynamics and social-ecological resilience of Southeast Alaska
Design and evaluation of the Hamal parallel computer
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2003. "December 2002." Includes bibliographical references (p. 145-152). This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. By J.B. Grossman. Ph.D.
The Life-Cycle Policy model
Our daily life activity leaves digital trails in an increasing number of databases (commercial web sites, internet service providers, search engines, location tracking systems, etc.). Personal digital trails are commonly exposed to accidental disclosures resulting from negligence or piracy, and to ill-intentioned scrutiny and abusive usage fostered by fuzzy privacy policies. No one is sheltered, because a single event (e.g., applying for a job or a credit) can suddenly make our history a precious asset. By definition, access control fails to prevent trail disclosures, motivating the integration of the Limited Data Retention principle into data-privacy legislation. Under this principle, data is withdrawn from a database after a predefined time period. However, the principle is difficult to apply in practice, leading databases to retain useless sensitive information for years. In this paper, we propose a simple and practical data degradation model where sensitive data undergoes a progressive and irreversible degradation from an accurate state at collection time, through intermediate but still informative degraded states, up to complete disappearance when the data becomes useless. The benefits of data degradation are twofold: (i) by reducing the amount of accurate data, the privacy offence resulting from a trail disclosure is drastically reduced, and (ii) degrading the data in line with the application's purposes offers a new compromise between privacy preservation and application reach. We introduce in this paper a data degradation model, analyze its impact on core database techniques like storage, indexing, and transaction management, and propose degradation-aware techniques.
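The progressive-degradation idea can be sketched as an age-driven ladder of states: a record starts accurate, is irreversibly coarsened at each stage boundary, and finally disappears. The ladder below (a location trail degrading from a GPS fix to city, to region, to nothing) is a hypothetical example; the stages, retention periods, and field names are invented for illustration and are not the paper's model.

```python
# Hypothetical degradation ladder: each tuple is
# (maximum age in days, degradation function applied at that stage).
LIFECYCLE = [
    (7,   lambda loc: loc),                        # accurate GPS fix
    (30,  lambda loc: {"city": loc["city"]}),      # keep the city only
    (365, lambda loc: {"region": loc["region"]}),  # keep the region only
]

def degrade(loc, age_days):
    """Return the state of a record at age_days, or None once it expires."""
    for max_age, step in LIFECYCLE:
        if age_days <= max_age:
            return step(loc)
    return None  # complete, irreversible disappearance

fix = {"lat": 48.80, "lon": 2.13, "city": "Versailles", "region": "IdF"}
print(degrade(fix, 3))    # full accuracy
print(degrade(fix, 20))   # {'city': 'Versailles'}
print(degrade(fix, 400))  # None
```

Each step discards information rather than encrypting or hiding it, which is what makes a later disclosure of the stored state strictly less harmful than a disclosure of the original trail.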