    Modeling Data Center Building Blocks for Energy-efficiency and Thermal Simulations

    In this paper we present a concept and specification of Data Center Efficiency Building Blocks (DEBBs), which represent hardware components of a data center complemented by descriptions of their energy efficiency. The proposed building blocks contain hardware and thermodynamic models that can be applied to simulate a data center and to evaluate its energy efficiency. DEBBs are available in an open repository being built by the CoolEmAll project. In the paper we illustrate the concept with an example of a DEBB defined for the RECS multi-server system, including models of its power usage and thermodynamic properties. We also show how these models are affected by the specific architecture of the modeled hardware and by differences between various classes of applications. The proposed models are verified by comparison with measurements on a real infrastructure. Finally, we demonstrate how DEBBs are used in data center simulations.
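
    The abstract does not reproduce the models themselves. As a rough sketch of the kind of component-level model a DEBB could encapsulate, the Python snippet below implements a simple linear power-versus-load curve; the class name, parameters, and the linear form are illustrative assumptions, not the CoolEmAll models.

```python
from dataclasses import dataclass

@dataclass
class NodePowerModel:
    """Hypothetical power model of the kind a DEBB could carry.

    Assumes power grows linearly between idle and full load; the real
    DEBB models in the CoolEmAll repository are more detailed
    (per-application profiles, thermodynamic properties).
    """
    p_idle_w: float   # power draw at 0% CPU load, in watts
    p_max_w: float    # power draw at 100% CPU load, in watts

    def power(self, cpu_load: float) -> float:
        """Estimate power draw (W) for a CPU load in [0, 1]."""
        cpu_load = min(max(cpu_load, 0.0), 1.0)
        return self.p_idle_w + cpu_load * (self.p_max_w - self.p_idle_w)

# Example: a single RECS-style node idling at 30 W and peaking at 75 W
# (numbers invented for illustration).
node = NodePowerModel(p_idle_w=30.0, p_max_w=75.0)
print(node.power(0.6))  # -> 57.0 W at 60% load
```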

    Composition in Differential Privacy for General Granularity Notions

    The composition theorems of differential privacy (DP) allow data curators to combine different algorithms to obtain a new algorithm that continues to satisfy DP. However, new granularity notions (i.e., neighborhood definitions), data domains, and composition settings have appeared in the literature that the classical composition theorems do not cover. For instance, the original parallel composition theorem does not translate well to general granularity notions. This complicates composing DP mechanisms in new settings and obtaining accurate estimates of the incurred privacy loss after composition. To overcome these limitations, we study the composability of DP in a general framework and for any kind of data domain or neighborhood definition. We give a general composition theorem in both independent and adaptive versions, and we provide analogous composition results for approximate, zero-concentrated, and Gaussian DP. In addition, we study the hypotheses needed to obtain the best composition bounds. Our theorems cover both parallel and sequential composition settings. Importantly, they also cover every setting in between, allowing us to compute the final privacy loss of a composition with greatly improved accuracy.
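
    For context, the classical bounds that the paper generalizes can be stated in a few lines of Python: sequential composition on the same data adds privacy budgets, while parallel composition on disjoint partitions takes their maximum. These are the textbook bounds under the standard neighborhood notion, not the paper's tighter, granularity-aware theorems.

```python
def sequential_composition(eps_deltas):
    """Basic sequential composition for (eps, delta)-DP: running k
    mechanisms on the same data yields (sum eps_i, sum delta_i)-DP."""
    eps = sum(e for e, _ in eps_deltas)
    delta = sum(d for _, d in eps_deltas)
    return eps, delta

def parallel_composition(eps_deltas):
    """Classical parallel composition: mechanisms run on disjoint
    partitions of the data compose to (max eps_i, max delta_i)-DP."""
    eps = max(e for e, _ in eps_deltas)
    delta = max(d for _, d in eps_deltas)
    return eps, delta

# Three mechanisms with budgets (0.5, 1e-6), (0.3, 0), (0.2, 1e-6):
mechs = [(0.5, 1e-6), (0.3, 0.0), (0.2, 1e-6)]
print(sequential_composition(mechs))  # (1.0, 2e-06) on the same data
print(parallel_composition(mechs))    # (0.5, 1e-06) on disjoint data
```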

    Scalable and flexible location-based services for ubiquitous information access

    In mobile distributed environments, applications often need to dynamically obtain information that is relevant to their current location. The current design of the Internet does not provide any conceptual models for addressing this issue. As a result, developing a system that requires this functionality becomes a challenging and costly task, leading to individual solutions that only address the requirements of specific application scenarios. In this paper we propose a more generic approach, based on a scalable and flexible concept of location-based services, and an architectural framework to support its application in the Internet environment. We describe a case study in which this architectural framework is used for developing a location-sensitive tourist guide. The realisation of this case study demonstrates the applicability of the framework, as well as the overall concept of location-based services, and highlights some of the issues involved.

    Thanks to the GUIDE team, and especially to Keith Mitchell and Matthias Franz, for their collaboration in the preparation of this case study; to Adrian Friday for his comments on a draft version of this paper; and to the anonymous reviewers for their attentive reading and valuable comments. This work was carried out as part of the PRAXIS-funded AROUND project (PRAXIS/P/EEI/14267/1998) and supported by grant PRAXIS XXI/BD/13853/97.

    Super-scalar RAM-CPU cache compression

    High-performance data-intensive query processing tasks like OLAP, data mining or scientific data analysis can be severely I/O bound, even when high-end RAID storage systems are used.
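
    The paper behind this abstract introduces super-scalar compression schemes such as PFOR (patched frame-of-reference). As a didactic illustration of the frame-of-reference idea only, the sketch below stores small offsets from a base value and patches values that do not fit as exceptions; the paper's actual implementation is branch-free and tuned for super-scalar CPU pipelines, which this sketch does not attempt.

```python
def for_encode(values, base=None, bits=8):
    """Didactic frame-of-reference encoding: store small offsets from a
    base value, and keep values whose offset does not fit in `bits` as
    patched exceptions (the core idea behind PFOR, greatly simplified).
    """
    if base is None:
        base = min(values)
    limit = 1 << bits
    codes, exceptions = [], {}
    for i, v in enumerate(values):
        off = v - base
        if 0 <= off < limit:
            codes.append(off)
        else:
            codes.append(0)          # placeholder slot
            exceptions[i] = v        # patched in afterwards
    return base, codes, exceptions

def for_decode(base, codes, exceptions):
    out = [base + c for c in codes]
    for i, v in exceptions.items():  # patch the exception slots
        out[i] = v
    return out

vals = [1000, 1003, 1001, 1500, 1002]   # 1500 becomes an exception
enc = for_encode(vals, bits=4)
assert for_decode(*enc) == vals
```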

    Data mining as a tool for environmental scientists

    Over recent years a huge library of data mining algorithms has been developed to tackle a variety of problems in fields such as medical imaging and network traffic analysis. Many of these techniques are far more flexible than more classical modelling approaches and could be usefully applied to data-rich environmental problems. Certain techniques such as Artificial Neural Networks, Clustering, Case-Based Reasoning and, more recently, Bayesian Decision Networks have found application in environmental modelling, while other methods, for example classification and association rule extraction, have not yet been taken up on any wide scale. We propose that these and other data mining techniques could be usefully applied to difficult problems in the field. This paper introduces several data mining concepts and briefly discusses their application to environmental modelling, where data may be sparse, incomplete, or heterogeneous.

    Autonomous Attitude Determination System (AADS). Volume 1: System description

    Information necessary to understand the Autonomous Attitude Determination System (AADS) is presented. Topics include AADS requirements, program structure, algorithms, and system generation and execution.

    Data Impact Analysis in Business Processes

    Business processes and their outcomes rely on data whose values are changed during process execution. When unexpected changes occur, e.g., due to last-minute changes of circumstances, human errors, or corrections of detected errors in data values, this may have consequences for various parts of the process. This challenges the process participants to understand the full impact of the changes and decide on responses or corrective actions. To tackle this challenge, the paper suggests a semi-automated approach for data impact analysis. The approach entails a transformation of business process models to a relational database representation, to which queries are applied in order to retrieve process elements that are related to a given data change. Specifically, the proposed method receives a data item (an attribute or an object) and information about the current state of process execution (in the form of a trace upon which an unexpected change has occurred). It analyzes the impact of the change in terms of activities, other data items, and gateways that are affected. When evaluating the usefulness of the approach through a case study, it was found to have the potential to assist experienced process participants, especially when the consequences of the change are extensive and its locus is in the middle of the process. The approach contributes both to practice, with tool-supported guidance on how to handle unexpected data changes, and to research, with a set of impact analysis primitives and queries.
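
    To make the "transform, then query" idea concrete, the sketch below encodes a toy process model in relational form and uses a recursive SQL query to follow read/write dependencies from a changed data item. The schema, table names, and example process are invented for illustration; the paper's actual representation and queries are richer, also covering gateways and execution traces.

```python
import sqlite3

# Hypothetical relational encoding of a process model: which activity
# writes a data item and which activities read it downstream.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE reads  (activity TEXT, data_item TEXT);
CREATE TABLE writes (activity TEXT, data_item TEXT);
INSERT INTO writes VALUES ('CheckOrder', 'order_total'),
                          ('ApplyDiscount', 'final_price');
INSERT INTO reads  VALUES ('ApplyDiscount', 'order_total'),
                          ('IssueInvoice', 'final_price');
""")

# Transitively follow read/write dependencies from a changed data item
# ('order_total') to find every activity its change may affect.
impacted = con.execute("""
WITH RECURSIVE impact(item) AS (
    VALUES ('order_total')
    UNION
    SELECT w.data_item
    FROM impact i
    JOIN reads  r ON r.data_item = i.item
    JOIN writes w ON w.activity = r.activity
)
SELECT DISTINCT r.activity FROM reads r
JOIN impact i ON r.data_item = i.item;
""").fetchall()
print(impacted)  # e.g. [('ApplyDiscount',), ('IssueInvoice',)]
```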

    Adaptive Probabilistic Forecasting of Electricity (Net-)Load

    Electricity load forecasting is a necessary capability for power system operators and electricity market participants. The proliferation of local generation, demand response, and electrification of heat and transport are changing the fundamental drivers of electricity load and increasing the complexity of load modelling and forecasting. We address this challenge in two ways. First, our setting is adaptive; our models take into account the most recent observations available, yielding a forecasting strategy able to automatically respond to changes in the underlying process. Second, we consider probabilistic rather than point forecasting; indeed, uncertainty quantification is required to operate electricity systems efficiently and reliably. Our methodology relies on the Kalman filter, previously used successfully for adaptive point load forecasting. The probabilistic forecasts are obtained by quantile regressions on the residuals of the point forecasting model. We achieve adaptive quantile regressions using online gradient descent; we avoid choosing the gradient step size by considering multiple learning rates and aggregating the resulting experts. We apply the method to two data sets: the regional net-load in Great Britain and the demand of seven large cities in the United States. Adaptive procedures improve forecast performance substantially in both use cases, for both point and probabilistic forecasting.
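
    The adaptive quantile step can be illustrated compactly. The sketch below tracks a quantile of point-forecast residuals by online gradient descent on the pinball (quantile) loss; it fixes a single learning rate for simplicity, whereas the paper aggregates several learning rates with expert advice, and it omits the Kalman-filter point-forecasting stage entirely.

```python
import numpy as np

def pinball_grad(pred, obs, tau):
    """Subgradient of the pinball loss w.r.t. the prediction:
    (1 - tau) if we over-predicted, -tau if we under-predicted."""
    return (1.0 - tau) if pred > obs else -tau

def online_quantile(residuals, tau=0.9, lr=0.05):
    """Track the tau-quantile of a residual stream by online gradient
    descent, in the spirit of the paper's adaptive quantile step."""
    q, track = 0.0, []
    for r in residuals:
        track.append(q)
        q -= lr * pinball_grad(q, r, tau)
    return np.array(track)

rng = np.random.default_rng(0)
res = rng.normal(0.0, 1.0, 5000)        # stand-in residual stream
est = online_quantile(res, tau=0.9)
print(est[-1], np.quantile(res, 0.9))   # online estimate vs empirical
```

    With a constant learning rate the estimate hovers around the true quantile rather than converging exactly, which is why the paper combines several learning rates through aggregation of experts instead of tuning a single step size.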