
    Principled Approaches to Last-Level Cache Management

    Memory is a critical component of all computing systems and represents a fundamental performance and energy bottleneck. Ideally, memory characteristics such as energy cost, performance, and the cost of implementing management techniques would scale together with the size of computing systems; unfortunately, this is not the case. With upcoming trends in applications and new memory technologies, scaling becomes an even bigger problem, aggravating the performance bottleneck that memory represents. The memory hierarchy was proposed to alleviate this problem: each level of the hierarchy tends to have a lower cost per bit, a larger capacity, and a higher access time than the level above it. Ideally, all data would be stored in the fastest level of memory, but faster memory technologies tend to have a higher manufacturing cost, which limits their capacity. The design challenge is to determine which data is frequently used and to store it in the faster levels of memory. A cache is a small, fast, on-chip block of memory; any data stored in main memory can also be stored in the cache. Many programs repeatedly access data they have accessed before. Taking advantage of this behavior, a copy of frequently accessed data is kept in the cache to provide a faster access time the next time it is requested. Due to capacity constraints, not all of the frequently reused data fits in the cache, so cache management policies decide which data is kept in the cache and which is kept in other levels of the memory hierarchy. Under an efficient cache management policy, a large fraction of memory requests is serviced from a fast on-chip cache. The disparity in access latency between the last-level cache and main memory motivates the search for efficient cache management policies. A large body of recent work strives to utilize cache capacity in the way most favorable to performance, optimizing caches along different dimensions, e.g., reducing miss rate, consuming less power, reducing storage overhead, or reducing access latency. Our work focuses on improving the performance of last-level caches by designing policies based on principles adapted from other areas. In this dissertation, we address several aspects of cache management. We first introduce a space-efficient placement and promotion policy whose goal is to minimize updates to the replacement-policy state on each cache access. We then introduce a mechanism that predicts whether a block in the cache will be reused, feeding different features of a block to the predictor in order to increase the correlation between a previous access and a future access. Finally, we introduce a technique that tweaks traditional cache indexing, providing fast access to the vast majority of requests in the presence of a slow-access memory technology such as DRAM
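    As an illustrative aside (not the dissertation's actual policies), the sketch below shows where placement, promotion, and eviction decisions sit in a simple set-associative cache with LRU-style replacement; the class, parameters, and block size are hypothetical.

```python
# Illustrative sketch only: a tiny set-associative cache with LRU-style
# replacement. Real last-level cache policies, including the space-efficient
# placement/promotion and reuse-prediction mechanisms described above, are far
# more sophisticated; this just shows where such decisions plug in.
from collections import OrderedDict

class SetAssociativeCache:
    def __init__(self, num_sets=1024, ways=16):
        self.num_sets = num_sets
        self.ways = ways
        # One ordered dict per set: keys are block addresses, order encodes recency.
        self.sets = [OrderedDict() for _ in range(num_sets)]

    def access(self, block_addr):
        s = self.sets[block_addr % self.num_sets]
        if block_addr in s:
            s.move_to_end(block_addr)   # promotion: mark as most recently used
            return True                 # hit
        if len(s) >= self.ways:
            s.popitem(last=False)       # eviction: drop least recently used block
        s[block_addr] = None            # placement: insert at MRU position
        return False                    # miss

cache = SetAssociativeCache()
trace = [0, 64, 0, 128, 64, 4096, 0]                # byte addresses
hits = sum(cache.access(a // 64) for a in trace)    # assumed 64-byte blocks
print(f"hits: {hits} / {len(trace)}")
```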

    Periodic activity migration for fast sequential execution in future heterogeneous multicore processors

    With each new technology generation, miniaturization permits putting twice as many computing cores on the same silicon area, potentially doubling processor performance. However, if sequential execution is not accelerated at the same time, Amdahl's law will eventually limit the actual performance gains. It will therefore be beneficial to have asymmetric multicores in which some cores are specialized for fast sequential execution. This specialization may be achieved by architectural means, but it may also be achieved by specializing transistors, voltage, and clock frequency. In the latter case, one of the main constraints is that the power consumption of fast cores must not increase across technology generations. Yet this implies that the instantaneous heat flux in fast cores potentially doubles with each new generation. A high instantaneous heat flux can be tolerated by performing periodic activity migration. This requires doubling the number of fast cores with each new generation, even though only a single fast core can be used at a given time. To keep the chip temperature below the limit, the migration interval must be divided by approximately four with each new generation. We show with an analytical model that this will eventually decrease the apparent level-2 cache size. We show that this problem can be tackled by preparing a certain number of cores before they become active
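    To make the scaling trend concrete, the back-of-the-envelope sketch below (not the paper's analytical model; the starting values are assumed) simply iterates the rule of thumb stated above: fast cores double, per-core heat flux roughly doubles, and the migration interval shrinks by about a factor of four per generation.

```python
# Back-of-the-envelope sketch of the scaling trend described above (not the
# paper's actual analytical model). Assumed starting points: a 10 ms migration
# interval and 2 fast cores at generation 0.
interval_ms = 10.0     # assumed initial migration interval
fast_cores = 2         # assumed initial number of fast cores
rel_heat_flux = 1.0    # relative instantaneous heat flux per fast core

for gen in range(4):
    print(f"gen {gen}: {fast_cores} fast cores, "
          f"heat flux x{rel_heat_flux:.0f}, interval {interval_ms:.3f} ms")
    fast_cores *= 2        # double the fast cores each generation
    rel_heat_flux *= 2     # per-core heat flux roughly doubles
    interval_ms /= 4       # migration interval divided by ~4 to cap temperature
```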

    Autonomously Reconfigurable Artificial Neural Network on a Chip

    Artificial neural networks (ANNs), an established bio-inspired computing paradigm, have proved very effective in a variety of real-world problems and are particularly useful for emerging biomedical applications using specialized ANN hardware. Unfortunately, these ANN-based systems are increasingly vulnerable to both transient and permanent faults due to unrelenting advances in CMOS technology scaling, and such faults can sometimes be catastrophic. The considerable resource and energy consumption and the lack of dynamic adaptability make conventional fault-tolerant techniques unsuitable for future portable medical solutions. Inspired by the self-healing and self-recovery mechanisms of the human nervous system, this research seeks to address reliability issues of ANN-based hardware by proposing an Autonomously Reconfigurable Artificial Neural Network (ARANN) architectural framework. Leveraging the homogeneous structural characteristics of neural networks, ARANN is capable of adapting its structure and operation, both algorithmically and microarchitecturally, to react to unexpected neuron failures. Specifically, we propose three key techniques --- Distributed ANN, Decoupled Virtual-to-Physical Neuron Mapping, and Dual-Layer Synchronization --- to achieve cost-effective structural adaptation and ensure accurate system recovery. Moreover, an ARANN-enabled self-optimizing workflow is presented to adaptively explore a "Pareto-optimal" neural network structure for a given application on the fly. Implemented and demonstrated on a Virtex-5 FPGA, ARANN can cover and adapt 93% of the chip area (neurons) with less than 1% chip overhead and O(n) reconfiguration latency. A detailed performance analysis has been completed based on various recovery scenarios
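    The toy sketch below illustrates only the general idea behind a decoupled virtual-to-physical neuron mapping, not ARANN's actual hardware mechanism; all names and sizes are hypothetical.

```python
# Minimal illustration of a decoupled virtual-to-physical neuron mapping
# (a software analogy, not ARANN's hardware implementation). Virtual neuron
# IDs used by the network stay fixed; a remap table redirects them to healthy
# physical neurons when a failure is detected.
class NeuronMapper:
    def __init__(self, num_virtual, num_physical):
        assert num_physical >= num_virtual, "need spare physical neurons"
        self.v2p = {v: v for v in range(num_virtual)}          # identity map at start
        self.spares = list(range(num_virtual, num_physical))   # unused physical neurons

    def physical(self, virtual_id):
        return self.v2p[virtual_id]

    def mark_failed(self, physical_id):
        # Find the virtual neuron mapped onto the failed physical neuron
        # and transparently remap it to a spare.
        for v, p in self.v2p.items():
            if p == physical_id:
                if not self.spares:
                    raise RuntimeError("no spare neurons left")
                self.v2p[v] = self.spares.pop()
                return self.v2p[v]
        return None   # the failed neuron was not in use

mapper = NeuronMapper(num_virtual=8, num_physical=10)
print(mapper.physical(3))   # -> 3
mapper.mark_failed(3)       # physical neuron 3 fails
print(mapper.physical(3))   # -> remapped to a spare (9)
```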

    Pit Features: A View From Grand Island, Michigan

    Serving a multitude of functions, from subterranean cavities for storage and basins for cooking to vessels that securely hold pounds of rice so the grains can be danced upon to thresh them, pit features are one of North America's most common archaeological features. These constructions are dug to fit a diversity of needs based on the people who manufacture them. By understanding the distinct function(s) a pit or group of pit features served at the site level, the needs of the people who inhabited that landscape are better understood. The nature of a pit feature is to store something or process a food resource of value; because the objects pits once contained were valuable, those materials were predominantly reclaimed while the pit was in use. This lack of associated material remains in the archaeological record makes it difficult to understand the activities associated with these features. Recorded pit features in the Lower Peninsula of Michigan have contained varying floral remains, charred wood, burned soils, fire-cracked rocks, and limited amounts of ceramics and lithics. A considerable number of regional ethnohistoric accounts demonstrates the importance of pit features in the subsistence and settlement patterns of native Upper Great Lakes groups. Despite these accounts, and the high frequency with which these features manifest throughout the region, there have been no formal archaeological investigations into pit feature use in the Upper Peninsula of Michigan. To address this regional gap in research, archaeological investigation of selected pit features at the Muskrat Point site (03-910) was conducted under the direction of the Grand Island Archaeological Project in the summer of 2017. Field survey identified 24 surface depressions likely to be pit features along the southern end of Grand Island's eastern lobe. Fifteen of these are located in the area of the Muskrat Point site; four of these surface depressions were excavated, and each was confirmed to be a pit feature. A performance-based approach is used to consider pit stratigraphy, macrobotanical remains, radiocarbon dating, and other contextual evidence in order to investigate pit feature function at this coastal Lake Superior site. This research acts as an initial step toward understanding the roles pit features played in Native American lifeways of the Upper Peninsula of Michigan

    The case for in-network computing on demand

    Programmable network hardware can run services traditionally deployed on servers, resulting in orders-of-magnitude improvements in performance. Yet, despite these performance improvements, network operators remain skeptical of in-network computing. The conventional wisdom is that the operational costs from increased power consumption outweigh any performance benefits. Unless in-network computing can justify its costs, it will be disregarded as yet another academic exercise. In this paper, we challenge that assumption by providing a detailed power analysis of several in-network computing use cases. Our experiments show that in-network computing can be extremely power-efficient: for a single watt, the performance of a software system on a commodity CPU can be improved by a factor of 100 using an FPGA, and by a factor of 1000 using an ASIC implementation. However, this efficiency depends on the system load. To address changing workloads, we propose In-Network Computing On Demand, where services can be dynamically moved between servers and the network. By shifting the placement of services on demand, data centers can optimize for both performance and power efficiency
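    The following toy sketch, with assumed power numbers rather than the paper's measurements, shows the kind of load-dependent placement decision this implies: serve low loads on the server and move the service into the network once the offload's fixed power draw pays for itself.

```python
# Toy placement decision in the spirit of on-demand in-network computing.
# All numbers are assumptions for illustration, not results from the paper:
# the server's power grows with load, while an in-network (FPGA) offload draws
# a roughly fixed amount of extra power regardless of load.
def server_power_watts(load_rps):
    return 15.0 + 0.002 * load_rps   # assumed idle power + per-request cost

def offload_power_watts(load_rps):
    return 45.0                      # assumed fixed draw for the offload

def choose_placement(load_rps):
    # Move the service on demand to whichever location is cheaper right now.
    return ("server"
            if server_power_watts(load_rps) <= offload_power_watts(load_rps)
            else "in-network")

for load in (1_000, 20_000, 500_000):
    print(f"{load:>7} req/s -> {choose_placement(load)}")
```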

    Working Notes from the 1992 AAAI Spring Symposium on Practical Approaches to Scheduling and Planning

    The symposium presented issues involved in the development of scheduling systems that can deal with resource and time limitations. To qualify, a system must be implemented and tested to some degree on non-trivial problems (ideally, on real-world problems). However, a system need not be fully deployed to qualify. Systems that schedule actions in terms of metric time constraints typically represent and reason about an external numeric clock or calendar and can be contrasted with those systems that represent time purely symbolically. The following topics are discussed: integrating planning and scheduling; integrating symbolic goals and numerical utilities; managing uncertainty; incremental rescheduling; managing limited computation time; anytime scheduling and planning algorithms and systems; dependency analysis and schedule reuse; management of schedule and plan execution; and incorporation of discrete event techniques

    Green Alternatives and National Energy Strategy

    It is no secret that the United States’ dependence on oil—mostly foreign—puts the country in a precarious position. The United States needs innovative ways not only to power millions of automobiles on its highways but also to secure sustainable sources of fuel for the future. This book presents the latest facts and figures about alternative energy to any physicist, engineer, policymaker, or concerned citizen who needs a reliable source of information on the nation’s looming energy crisis. Philip G. Gallman focuses especially on green vehicles and the interrelationship between their design and various energy sources. He explains simply and clearly the complex energy and automotive engineering issues involved in developing green vehicles, measures their likely effect on energy resource demand, and considers what they might mean for national energy strategy. Addressing problems associated with renewable resources often overlooked or ignored in the popular press, Gallman explains what replacing oil with alternative sources of energy realistically entails. Can the nation satisfy its energy demands with wind turbines, solar power, hydroelectric power, or geothermal power? Is biodiesel or electricity the answer to our gas-guzzling ways? Organized logically and with an accessible narrative, Green Alternatives and National Energy Strategy guides readers through the essential questions and hurdles the United States must answer and overcome to transition from a petroleum-dependent nation to one that runs on sustainable, renewable energy

    Quliaqtuavut Tuugaatigun (Our Stories in Ivory): Reconnecting Arctic Narratives with Engraved Drill Bows

    This dissertation explores complex representations of spiritual, social and cultural ways of knowing embedded within engraved ivory drill bows from the Bering Strait. During the nineteenth century, multi-faceted ivory drill bows formed an ideal surface on which to recount life events and indigenous epistemologies reflective of distinct environmental and socio-cultural relationships. Carvers added motifs over time, and the presence of multiple hands suggests a passing down of these objects as a form of familial history and cultural patrimony. Explorers, traders and field collectors to the Bering Strait eagerly acquired engraved drill bows as aesthetic manifestations of Arctic mores but recorded few details about the carvings, resulting in a disconnect between the objects and their multi-layered stories. However, continued practices of ivory carving and storytelling within Bering Strait communities hold potential for engraved drill bows to animate oral histories and foster discourse between researchers and communities. Thus, this collaborative project integrates stylistic analyses and ethno-historical accounts on drill bows with knowledge shared by Alaska Native community members and is based on the understanding that oral narratives can bring life and meaning to objects within museum collections

    The Murray Ledger and Times, September 7, 1982
