27,490 research outputs found
Thermal analysis and modeling of embedded processors
This paper presents a complete modeling approach for analyzing the thermal behavior of microprocessor-based systems. While most compact modeling approaches require deep knowledge of the implementation details, our method defines a black-box technique that can be applied to different target processors when this detailed information is unknown. The results show high accuracy and broad applicability, and the method can be easily automated. The proposed methodology has been used to study the impact of code transformations on the thermal behavior of the chip. Finally, the analysis of the thermal effect of source code modifications can be incorporated into a temperature-aware compiler which, following these guidelines, minimizes the overall temperature of the chip as well as the temperature gradients.
Energy challenges for ICT
The energy consumption from the expanding use of information and communications technology (ICT) is unsustainable with present drivers, and it will have a heavy impact on future climate change. However, ICT devices have the potential to contribute significantly to the reduction of CO2 emissions and to enhance resource efficiency in other sectors, e.g., transportation (through intelligent transportation, advanced driver assistance systems, and self-driving vehicles), heating (through smart building control), and manufacturing (through digital automation based on smart autonomous sensors). To address the energy sustainability of ICT and capture the full potential of ICT in resource efficiency, a multidisciplinary ICT-energy community needs to be brought together, covering devices, microarchitectures, ultra large-scale integration (ULSI), high-performance computing (HPC), energy harvesting, energy storage, system design, embedded systems, efficient electronics, static analysis, and computation. In this chapter, we introduce challenges and opportunities in this emerging field and a common framework to strive towards energy-sustainable ICT.
Astrometric calibration and performance of the Dark Energy Camera
We characterize the ability of the Dark Energy Camera (DECam) to perform relative astrometry across its 500 Mpix, 3 deg^2 science field of view, and across 4 years of operation. This is done using internal comparisons of ~4x10^7 measurements of high-S/N stellar images obtained in repeat visits to fields of moderate stellar density, with the telescope dithered to move the sources around the array. An empirical astrometric model includes terms for: optical distortions; stray electric fields in the CCD detectors; chromatic terms in the instrumental and atmospheric optics; shifts in CCD relative positions of up to ~10 um when the DECam temperature cycles; and low-order distortions to each exposure from changes in atmospheric refraction and telescope alignment. Errors in this astrometric model are dominated by stochastic variations with typical amplitudes of 10-30 mas (in a 30 s exposure) and 5-10 arcmin coherence length, plausibly attributed to Kolmogorov-spectrum atmospheric turbulence. The size of these atmospheric distortions is not closely related to the seeing. Given an astrometric reference catalog at density ~0.7 arcmin^{-2}, e.g. from Gaia, the typical atmospheric distortions can be interpolated to 7 mas RMS accuracy (for 30 s exposures) with 1 arcmin coherence length for residual errors. Remaining detectable error contributors are 2-4 mas RMS from unmodelled stray electric fields in the devices, and another 2-4 mas RMS from focal plane shifts between camera thermal cycles. Thus the astrometric solution for a single DECam exposure is accurate to 3-6 mas (0.02 pixels, or 300 nm) on the focal plane, plus the stochastic atmospheric distortion.
Enabling stream processing for people-centric IoT based on the fog computing paradigm
The world of machine-to-machine (M2M) communication is gradually moving from vertical single-purpose solutions to multi-purpose and collaborative applications interacting across industry verticals, organizations, and people - a world of the Internet of Things (IoT). The dominant approach for delivering IoT applications relies on the development of cloud-based IoT platforms that collect all the data generated by the sensing elements and centrally process the information to create real business value. In this paper, we present a system that follows the Fog Computing paradigm, in which the sensor resources, as well as the intermediate layers between embedded devices and cloud computing datacenters, participate by providing computational, storage, and control capabilities. We discuss the design aspects of our system and present a pilot deployment for evaluating its performance in a real-world environment. Our findings indicate that Fog Computing can address the ever-increasing amount of data that is inherent in an IoT world through effective communication among all elements of the architecture.
Systematic energy characterization of CMP/SMT processor systems via automated micro-benchmarks
Microprocessor-based systems today are composed of multi-core, multi-threaded processors with complex cache hierarchies and gigabytes of main memory. Accurate characterization of such a system, through predictive pre-silicon modeling and/or diagnostic post-silicon measurement-based analysis, is increasingly cumbersome and error prone. This is especially true of energy-related characterization studies. In this paper, we take the position that automated micro-benchmarks generated with particular objectives in mind hold the key to obtaining accurate energy-related characterization. As such, we first present a flexible micro-benchmark generation framework (MicroProbe) that is used to probe complex multi-core/multi-threaded systems with a variety and range of energy-related queries in mind. We then present experimental results centered around an IBM POWER7 CMP/SMT system to demonstrate how the systematically generated micro-benchmarks can be used to answer three specific queries: (a) How to project application-specific (and if needed, phase-specific) power consumption with component-wise breakdowns? (b) How to measure energy-per-instruction (EPI) values for the target machine? (c) How to bound the worst-case (maximum) power consumption in order to determine safe, but practical (i.e. affordable), packaging or cooling solutions? The solution approaches to the above problems are all new. Hardware measurement-based analysis shows superior power projection accuracy (with error margins of less than 2.3% across SPEC CPU2006) as well as max-power stressing capability (with a 10.7% increase in processor power over the very worst-case power seen during the execution of SPEC CPU2006 applications).
Holographic Entanglement Entropy
We review the developments of the past decade in holographic entanglement entropy, a subject that has garnered much attention owing to its potential to teach us about the emergence of spacetime in holography. In the first two parts we provide an introduction to the concept of entanglement entropy in quantum field theories and review the holographic proposals for computing it, with some justification for where these proposals arise from. The final part addresses recent developments linking entanglement and geometry, providing an overview of the various arguments and technical developments that teach us how to use field theory entanglement to detect geometry. Our discussion is by design eclectic; we have chosen to focus on developments that appear to us most promising for further insights into the holographic map. This is a draft of a few chapters of a book to be published by Springer in the near future. The book in addition contains a discussion of the application of holographic ideas to the computation of entanglement entropy in strongly coupled field theories, and a discussion of tensor networks and holography, which we have chosen to exclude from the current manuscript.
Review of UK microgeneration. Part 1: policy and behavioural aspects
A critical review of the literature relating to government policy and behavioural aspects relevant to the uptake and application of microgeneration in the UK is presented. Given the current policy context aspiring to zero-carbon new homes by 2016 and a variety of minimum standards and financial policy instruments supporting microgeneration in existing dwellings, it appears that this class of technologies could make a significant contribution to UK energy supply and low-carbon buildings in the future. Indeed, achievement of a reduction in greenhouse gas emissions by 80% (the UK government's 2050 target) for the residential sector may entail substantial deployment of microgeneration. Realisation of the large potential market for microgeneration relies on a variety of inter-related factors such as microeconomics, behavioural aspects, the structure of supporting policy instruments, and well-informed technology development. This article explores these issues in terms of current and proposed policy instruments in the UK. Behavioural aspects associated with both the initial uptake of the technology and its use after purchase are also considered.