Energy-Efficient and Reliable Computing in Dark Silicon Era
Dark silicon denotes the phenomenon that, due to thermal and power constraints, the fraction of transistors that can operate at full frequency decreases with each technology generation. Moore's law and Dennard scaling were coupled for five decades, delivering commensurate exponential performance gains, first through single-core and later through multi-core designs. However, recalculating Dennard scaling for recent small technology nodes shows that the ongoing multi-core growth demands exponentially increasing thermal design power to achieve a linear performance increase. This trend hits a power wall that raises the amount of dark or dim silicon on future multi-/many-core chips. Furthermore, the growing number of transistors on a single chip, their susceptibility to internal defects, and aging phenomena, all exacerbated by high thermal density, make it necessary to monitor and manage chip reliability both before and after deployment. The proposed approaches and experimental investigations in this thesis follow two main tracks: 1) power awareness and 2) reliability awareness in the dark silicon era; the two tracks are subsequently combined. In the first track, the main goal is to increase the returns in terms of the most important features of chip design, such as performance and throughput, while honoring a maximum power limit. In fact, we show that by managing power in the presence of dark silicon, all the traditional benefits of proceeding along Moore's law can still be achieved in the dark silicon era, albeit to a lesser degree. In the reliability-awareness track, we show that dark silicon can be considered an opportunity to be exploited for several kinds of benefit, namely lifetime extension and online testing.
We discuss how dark silicon can be exploited to guarantee that the system lifetime stays above a given target value and, furthermore, how it can be exploited to apply low-cost, non-intrusive online testing to the cores. After demonstrating power and reliability awareness in the presence of dark silicon, two approaches are discussed as case studies in which the two are combined. The first approach demonstrates how chip reliability can be used as a supplementary metric for power-reliability management. The second approach provides a trade-off between workload performance and system reliability while simultaneously honoring the given power budget and target reliability.
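The power wall described above can be made concrete with a small sketch (an illustration of the dark/dim silicon concept, not the thesis' management method): given a fixed chip power budget, only so many cores can run at full frequency, some may run "dim" at a lower power level, and the rest stay dark. All numbers are assumed, round figures.

```python
# Illustrative sketch: split a many-core chip's cores into full-speed,
# dim, and dark sets under a fixed thermal design power (TDP) budget.
# Per-core power figures below are assumed for illustration only.

def active_core_split(tdp_watts, p_full, p_dim, n_cores):
    """Return (#full-speed cores, #dim cores, #dark cores) under the budget."""
    n_full = min(n_cores, int(tdp_watts // p_full))
    leftover = tdp_watts - n_full * p_full
    n_dim = min(n_cores - n_full, int(leftover // p_dim))
    n_dark = n_cores - n_full - n_dim
    return n_full, n_dim, n_dark

# 64-core chip, 100 W budget, 4 W per full-speed core, 1 W per dim core:
print(active_core_split(100, 4.0, 1.0, 64))  # -> (25, 0, 39)
```

The point of the sketch is the thesis' premise: transistor counts grow faster than the power budget, so the dark fraction grows with each generation even though the chip physically contains ever more cores.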
The Chemical Imprint Of Silicate Dust On The Most Metal-Poor Stars
We investigate the impact of dust-induced gas fragmentation on the formation of the first low-mass, metal-poor stars (<1 M⊙) in the early universe. Previous work has shown the existence of a critical dust-to-gas ratio, below which dust thermal cooling cannot cause gas fragmentation. Assuming that the first dust is silicon-based, we compute critical dust-to-gas ratios and associated critical silicon abundances ([Si/H]_crit). At the density and temperature associated with protostellar disks, we find that a standard Milky Way grain size distribution gives [Si/H]_crit = -4.5 +/- 0.1, while smaller grain sizes created in a supernova reverse shock give [Si/H]_crit = -5.3 +/- 0.1. Other environments are not dense enough to be influenced by dust cooling. We test the silicate dust cooling theory by comparing to silicon abundances observed in the most iron-poor stars ([Fe/H] < -4.0). Several stars have silicon abundances low enough to rule out dust-induced gas fragmentation with a standard grain size distribution. Moreover, two of these stars have such low silicon abundances that even dust with a shocked grain size distribution cannot explain their formation. Adding small amounts of carbon dust does not significantly change these conclusions. Additionally, we find that these stars exhibit either high carbon with low silicon abundances or the reverse. A silicate dust scenario thus suggests that the earliest low-mass star formation in the most metal-poor regime may have proceeded through two distinct cooling pathways: fine-structure line cooling and dust cooling. This naturally explains both the carbon-rich and carbon-normal stars at extremely low [Fe/H]. (NSF AST-1255160, AST-1009928; NASA ATFP NNX09-AJ33G)
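The bracket notation above is the standard logarithmic abundance relative to the Sun: [Si/H] = log10(n_Si/n_H) - log10(n_Si/n_H)_sun. Assuming a solar silicon abundance of A(Si) = 7.51 on the usual scale where A(X) = log10(n_X/n_H) + 12 (an assumed reference value, not from this abstract), the quoted critical values translate into absolute number ratios:

```python
# Convert a [Si/H] value into an absolute silicon-to-hydrogen number
# ratio, given an assumed solar silicon abundance A(Si) = 7.51.

A_SI_SUN = 7.51                      # assumed solar value, log scale
SOLAR_RATIO = 10 ** (A_SI_SUN - 12)  # n_Si / n_H in the Sun, ~3.2e-5

def si_to_h_ratio(si_over_h_bracket):
    """Absolute n_Si/n_H implied by a given [Si/H] value."""
    return SOLAR_RATIO * 10 ** si_over_h_bracket

print(f"{si_to_h_ratio(-4.5):.1e}")  # standard grains: 1.0e-09
print(f"{si_to_h_ratio(-5.3):.1e}")  # shocked grains:  1.6e-10
```

In other words, the critical thresholds correspond to roughly one silicon atom per 10^9-10^10 hydrogen atoms.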
DeSyRe: on-Demand System Reliability
The DeSyRe project builds on-demand adaptive and reliable Systems-on-Chips (SoCs). As fabrication technology scales down, chips are becoming less reliable, thereby incurring increased power and performance costs for fault tolerance. To make matters worse, power density is becoming a significant limiting factor in SoC design in general. In the face of such changes in the technological landscape, current solutions for fault tolerance are expected to introduce excessive overheads in future systems. Moreover, attempting to design and manufacture a totally defect- and fault-free system would impact heavily, even prohibitively, the design, manufacturing, and testing costs, as well as the system performance and power consumption. In this context, DeSyRe delivers a new generation of systems that are reliable by design at well-balanced power, performance, and design costs. In our attempt to reduce the overheads of fault tolerance, only a small fraction of the chip is built to be fault-free. This fault-free part is then employed to manage the remaining fault-prone resources of the SoC. The DeSyRe framework is applied to two medical systems with high safety requirements (measured using the IEC 61508 functional safety standard) and tight power and performance constraints.
Direct Detection of sub-GeV Dark Matter with Semiconductor Targets
Dark matter in the sub-GeV mass range is a theoretically motivated but largely unexplored paradigm. Such light masses are out of reach for conventional nuclear-recoil direct detection experiments, but may be detected through the small ionization signals caused by dark matter-electron scattering. Semiconductors are well studied and are particularly promising target materials because their band gaps allow for ionization signals from dark matter as light as a few hundred keV. Current direct detection technologies are being adapted for dark matter-electron scattering. In this paper, we provide the theoretical calculations for the dark matter-electron scattering rate in semiconductors, overcoming several complications that stem from the many-body nature of the problem. We use density functional theory to numerically calculate the rates for dark matter-electron scattering in silicon and germanium, and estimate the sensitivity for upcoming experiments such as DAMIC and SuperCDMS. We find that the reach of these upcoming experiments has the potential to be orders of magnitude beyond current direct detection constraints and that sub-GeV dark matter has a sizable modulation signal. We also give the first direct detection limits on sub-GeV dark matter from its scattering off electrons in a semiconductor target (silicon), based on published results from DAMIC. We make our code, QEdark, publicly available; with it we calculate our results. Our results can be used by experimental collaborations to calculate their own sensitivities based on their specific setups. The searches we propose will probe vast new regions of unexplored dark matter model and parameter space.
Comment: 30 pages + 22 pages appendices/references, 17 figures, website at http://ddldm.physics.sunysb.edu/, v2 added references, minor edits to text and Figs. 2 and 14, version to appear in JHE
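The band-gap argument above admits a back-of-envelope check (a sketch under assumed halo parameters, not a calculation from the paper): the energy a halo dark-matter particle can deposit is bounded by its kinetic energy, E ~ (1/2) m_chi v^2. Taking a maximum halo velocity of about 2.4e-3 c (an assumed figure combining the Galactic escape velocity and the Earth's motion), MeV-scale and even few-hundred-keV masses clear a ~1 eV semiconductor band gap, while nuclear-recoil thresholds sit far higher.

```python
# Maximum kinetic energy available from a halo dark-matter particle,
# using an assumed maximum velocity of ~2.4e-3 c. Natural units:
# masses in eV/c^2, energies in eV.

V_MAX_OVER_C = 2.4e-3  # assumed maximum halo velocity in units of c

def max_kinetic_energy_ev(m_chi_ev):
    """Maximum kinetic energy (eV) of a DM particle of mass m_chi (eV/c^2)."""
    return 0.5 * m_chi_ev * V_MAX_OVER_C ** 2

for m in (5e5, 1e6, 1e9):  # 500 keV, 1 MeV, 1 GeV
    print(f"m_chi = {m:.0e} eV -> E_max ~ {max_kinetic_energy_ev(m):.2f} eV")
```

A 500 keV particle yields about 1.4 eV at most, which is why eV-scale semiconductor band gaps, rather than keV-scale nuclear-recoil thresholds, set the reach for sub-GeV masses.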
Bubble budgeting: throughput optimization for dynamic workloads by exploiting dark cores in many core systems
All the cores of a many-core chip cannot be active at the same time, due to factors such as low CPU utilization in server systems and the limited power budget of the dark silicon era. These free cores (referred to as bubbles) can be placed near active cores to dissipate heat, so that the active cores can run at a higher frequency level, boosting the performance of the applications running on them. Budgeting inactive cores (bubbles) to applications to boost performance poses three challenges. First, the number of bubbles varies because of open workloads. Second, communication distance increases when a bubble is inserted between two communicating tasks (a task being a thread or process of a parallel application), leading to performance degradation. Third, budgeting too many bubbles as coolers to running applications leaves insufficient cores for future applications. To address these challenges, this paper proposes a bubble budgeting scheme that budgets free cores to each application so as to optimize the throughput of the whole system. System throughput depends on the execution time of each application and on the waiting time incurred by newly arrived applications. Essentially, the proposed algorithm determines the number and locations of bubbles to optimize the performance and waiting time of each application, after which the tasks of each application are mapped to a core region. A rollout algorithm is used to budget power to the cores as the last step. Experiments show that our approach achieves 50 percent higher throughput than state-of-the-art thermal-aware runtime task-mapping approaches. The runtime overhead of the proposed algorithm is on the order of 1M cycles, making it an efficient runtime task-management method for large-scale many-core systems.
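The core trade-off in the abstract, boosting running applications with bubbles versus reserving cores for future arrivals, can be sketched with a toy greedy allocator (an illustration of the idea only; the paper's actual scheme also places bubbles spatially and uses a rollout algorithm for power). The per-bubble speedup figures are assumed numbers.

```python
# Toy bubble-budgeting sketch: grant free cores ("bubbles") one at a
# time to the application with the best marginal speedup, with
# diminishing returns per extra bubble, while holding back a reserve
# of cores for newly arriving applications.

def budget_bubbles(free_cores, reserve, apps):
    """apps: {name: speedup_per_bubble}. Returns {name: bubbles granted}."""
    grants = {name: 0 for name in apps}
    budget = max(0, free_cores - reserve)  # keep cores for future arrivals
    for _ in range(budget):
        # Marginal gain shrinks as an app accumulates bubbles.
        best = max(apps, key=lambda a: apps[a] / (1 + grants[a]))
        grants[best] += 1
    return grants

apps = {"encoder": 0.30, "solver": 0.10, "web": 0.05}
print(budget_bubbles(free_cores=6, reserve=2, apps=apps))
# -> {'encoder': 3, 'solver': 1, 'web': 0}
```

The reserve parameter captures the third challenge named above: spending every free core as a cooler today starves the applications that arrive tomorrow.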
Adaptive Knobs for Resource Efficient Computing
Performance demands of emerging domains such as artificial intelligence, machine learning, vision, and the Internet of Things continue to grow. Meeting such requirements on modern multi-/many-core systems with higher power densities, fixed power and energy budgets, and thermal constraints exacerbates the run-time management challenge. This leaves an open problem: extracting the required performance within the power and energy limits while also ensuring thermal safety. Existing architectural solutions, including asymmetric and heterogeneous cores and custom acceleration, improve performance-per-watt in specific design-time and static scenarios. However, satisfying applications' performance requirements under dynamic and unknown workload scenarios, subject to varying system dynamics of power, temperature, and energy, requires intelligent run-time management.
Adaptive strategies are necessary for maximizing resource efficiency, considering i) the diverse requirements and characteristics of concurrent applications, ii) dynamic workload variation, iii) core-level heterogeneity, and iv) power, thermal, and energy constraints. This dissertation proposes such adaptive techniques for efficient run-time resource management to maximize performance within fixed budgets under unknown and dynamic workload scenarios. The resource management strategies proposed in this dissertation comprehensively consider application and workload characteristics, as well as the variable effect of power actuation on performance, to make proactive and appropriate allocation decisions. Specific contributions include i) a run-time mapping approach to improve power budgets for higher throughput, ii) thermal-aware performance boosting for efficient utilization of the power budget and higher performance, iii) approximation as a run-time knob exploiting accuracy-performance trade-offs to maximize performance under power caps at minimal loss of accuracy, and iv) coordinated approximation for heterogeneous systems through joint actuation of dynamic approximation and power knobs for performance guarantees with minimal power consumption.
The approaches presented in this dissertation focus on adapting existing mapping techniques, performance-boosting strategies, and software and dynamic approximations to meet performance requirements while simultaneously considering system constraints. The proposed strategies are compared against relevant state-of-the-art run-time management frameworks to qualitatively evaluate their efficacy.
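Contribution iii), approximation as a run-time knob, can be illustrated with a minimal sketch (an assumed, illustrative model, not the dissertation's framework): among discrete approximation levels, pick the most accurate one whose predicted power stays under the current cap.

```python
# Minimal approximation-knob sketch: each level trades accuracy for
# power. The (power, accuracy) numbers per level are assumed figures.

LEVELS = [  # (name, predicted power in W, output accuracy)
    ("exact",   12.0, 1.00),
    ("approx1",  9.0, 0.98),
    ("approx2",  6.5, 0.93),
    ("approx3",  4.0, 0.85),
]

def pick_level(power_cap):
    """Most accurate level fitting under the cap; cheapest as fallback."""
    feasible = [lv for lv in LEVELS if lv[1] <= power_cap]
    if not feasible:
        return min(LEVELS, key=lambda lv: lv[1])
    return max(feasible, key=lambda lv: lv[2])

print(pick_level(10.0)[0])  # -> approx1
print(pick_level(5.0)[0])   # -> approx3
```

A run-time manager would re-evaluate this choice as the power cap and workload change, which is exactly the kind of dynamic actuation the dissertation argues static design-time solutions cannot provide.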
The Boston University Photonics Center annual report 2015-2016
This repository item contains an annual report that summarizes activities of the Boston University Photonics Center in the 2015-2016 academic year. The report provides quantitative and descriptive information regarding photonics programs in education, interdisciplinary research, business innovation, and technology development. The Boston University Photonics Center (BUPC) is an interdisciplinary hub for education, research, scholarship, innovation, and technology development associated with practical uses of light. This has been a good year for the Photonics Center. In the following pages, you will see that this year the Center's faculty received prodigious honors and awards, generated more than 100 notable scholarly publications in the leading journals in our field, and attracted $18.9M in new research grants/contracts. Faculty and staff also expanded their efforts in education and training, and cooperated in supporting National Science Foundation sponsored Sites for Research Experiences for Undergraduates and for Research Experiences for Teachers. As a community, we emphasized the theme of "Frontiers in Plasmonics as Enabling Science in Photonics and Beyond" at our annual symposium, hosted by Bjoern Reinhard. We continued to support the National Photonics Initiative, and contributed as a cooperating site in the American Institute for Manufacturing Integrated Photonics (AIM Photonics), which began this year as a new photonics-themed node in the National Network of Manufacturing Institutes.
Highlights of our research achievements for the year include an ambitious new DoD-sponsored grant for Development of Less Toxic Treatment Strategies for Metastatic and Drug Resistant Breast Cancer Using Noninvasive Optical Monitoring led by Professor Darren Roblyer, continued support of our NIH-sponsored Center for Innovation in Point of Care Technologies for the Future of Cancer Care led by Professor Cathy Klapperich, and an exciting confluence of new grant awards in the area of Neurophotonics led by Professors Christopher Gabel, Timothy Gardner, Xue Han, Jerome Mertz, Siddharth Ramachandran, Jason Ritt, and John White. Neurophotonics is fast becoming a leading area of strength of the Photonics Center. The Industry/University Collaborative Research Center, which has become the centerpiece of our translational biophotonics program, continues to focus on advancing the health care and medical device industries, and has entered its sixth year of operation with a strong record of achievement and with the support of an enthusiastic industrial membership base.