
    Computing server power modeling in a data center: survey, taxonomy and performance evaluation

    Data centers are large-scale, energy-hungry infrastructures serving the increasing computational demands of a world that is becoming more connected through smart cities. The emergence of advanced technologies such as cloud-based services, the Internet of Things (IoT) and big data analytics has accelerated the growth of global data centers, leading to high energy consumption. This upsurge in energy consumption not only drives up operational and maintenance costs but also has an adverse effect on the environment. Dynamic power management in a data center requires an understanding of the correlation between system- and hardware-level performance counters and power consumption. Power consumption models capture this correlation and are crucial for designing energy-efficient optimization strategies based on resource utilization. Many power models have been proposed in the literature; however, they have been evaluated using different benchmarking applications, power measurement techniques and error formulas on different machines. In this work, we present a taxonomy and evaluation of 24 software-based power models using a unified environment, benchmarking applications, power measurement technique and error formula, with the aim of achieving an objective comparison. We use different server architectures to assess the impact of heterogeneity on the comparison. A detailed performance analysis of these models is presented in the paper.
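
    To make the idea of a software-based, utilization-driven power model concrete, the sketch below fits a simple linear model P(u) = P_idle + (P_max - P_idle)·u to hypothetical utilization/power samples and scores it with the mean absolute percentage error; the model form, the sample values and the choice of MAPE are illustrative assumptions, not the specific models or unified error formula evaluated in the survey.

```python
# Minimal sketch of a utilization-based server power model (illustrative only;
# the surveyed models and their unified error formula may differ).
import numpy as np

# Hypothetical training samples: CPU utilization (0..1) and measured power (W).
util = np.array([0.05, 0.20, 0.40, 0.60, 0.80, 0.95])
power = np.array([112.0, 138.0, 171.0, 204.0, 236.0, 259.0])

# Fit P(u) = a*u + b by least squares; b approximates idle power,
# a approximates the dynamic range (P_max - P_idle).
a, b = np.polyfit(util, power, deg=1)

def predict(u):
    """Predict server power (W) from CPU utilization."""
    return a * u + b

# Mean absolute percentage error against the measurements.
mape = np.mean(np.abs((power - predict(util)) / power)) * 100
print(f"P_idle ~ {b:.1f} W, dynamic range ~ {a:.1f} W, MAPE = {mape:.2f}%")
```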

    Evaluating the Differences of Gridding Techniques for Digital Elevation Models Generation and Their Influence on the Modeling of Stony Debris Flows Routing: A Case Study From Rovina di Cancia Basin (North-Eastern Italian Alps)

    Debris flows are among the most hazardous phenomena in mountain areas. To cope with debris flow hazard, it is common to delineate the risk-prone areas through routing models. The most important input to debris flow routing models is the topographic data, usually in the form of Digital Elevation Models (DEMs). The quality of DEMs depends on the accuracy, density, and spatial distribution of the sampled points; on the characteristics of the surface; and on the applied gridding methodology. The choice of the interpolation method therefore affects how realistically the channel and fan morphology are represented, and thus potentially the outcomes of debris flow routing modeling. In this paper, we first investigate the performance of common interpolation methods (i.e., linear triangulation, natural neighbor, nearest neighbor, Inverse Distance to a Power, ANUDEM, Radial Basis Functions, and ordinary kriging) in building DEMs of the complex topography of a debris flow channel located in the Venetian Dolomites (North-eastern Italian Alps), using small-footprint full-waveform Light Detection And Ranging (LiDAR) data. The investigation combines statistical analysis of vertical accuracy, algorithm robustness, spatial clustering of vertical errors, and a multi-criteria shape reliability assessment. We then examine the influence of the tested interpolation algorithms on the performance of a Geographic Information System (GIS)-based cell model for simulating stony debris flow routing. In detail, we investigate both the correlation between the DEM height uncertainty resulting from the gridding procedure and the uncertainty of the corresponding simulated erosion/deposition depths, and the effect of the interpolation algorithms on the simulated areas, erosion and deposition volumes, solid-liquid discharges, and channel morphology after the event. The comparison among the tested interpolation methods highlights that the ANUDEM and ordinary kriging algorithms are not suitable for building DEMs of complex topography. Conversely, linear triangulation, the natural neighbor algorithm, and the thin-plate spline plus tension and completely regularized spline functions ensure the best trade-off between accuracy and shape reliability. Nevertheless, the evaluation of the effects of gridding techniques on debris flow routing modeling reveals that the choice of the interpolation algorithm does not significantly affect the model outcomes.
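
    As a small illustration of how gridding choices can be compared, the sketch below interpolates synthetic scattered elevation points with a few of the methods available in SciPy and reports the vertical RMSE on held-out check points; the data, the method subset and the accuracy metric are assumptions for illustration and do not reproduce the paper's LiDAR workflow.

```python
# Minimal sketch of comparing gridding methods on scattered elevation points
# (synthetic terrain; the paper uses LiDAR points and a wider set of algorithms).
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 100.0, size=(2000, 2))          # sampled point locations (m)
z = np.sin(xy[:, 0] / 15.0) * 10.0 + 0.05 * xy[:, 1]  # synthetic terrain heights (m)

# Hold out check points to estimate the vertical accuracy of each method.
train, check = xy[:1800], xy[1800:]
z_train, z_check = z[:1800], z[1800:]

for method in ("linear", "nearest", "cubic"):  # "linear" ~ triangulation with linear facets
    z_pred = griddata(train, z_train, check, method=method)
    rmse = np.sqrt(np.nanmean((z_pred - z_check) ** 2))
    print(f"{method:8s} RMSE = {rmse:.3f} m")
```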

    Cheetah Experimental Platform Web 1.0: Cleaning Pupillary Data

    Researchers have recently started using cognitive load in various settings, e.g., educational psychology, cognitive load theory, or human-computer interaction. Cognitive load characterizes a task's demand on the limited information processing capacity of the brain. The widespread adoption of eye-tracking devices has led to increased attention to objectively measuring cognitive load via pupil dilation. However, this approach requires a standardized data processing routine to measure cognitive load reliably. This technical report presents CEP-Web, an open source platform providing state-of-the-art data processing routines for cleaning pupillary data, combined with a graphical user interface that enables the management of studies and subjects. Future developments will include support for analyzing the cleaned data as well as support for Task-Evoked Pupillary Response (TEPR) studies.
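
    A typical pupillary cleaning pipeline of the kind such a platform automates removes implausible and blink-related samples, rejects dilation-speed outliers, interpolates short gaps and smooths the trace. The sketch below implements that generic pipeline; the thresholds and steps are common choices assumed for illustration, not CEP-Web's actual routines.

```python
# Minimal sketch of a typical pupillary cleaning routine (blink removal,
# interpolation, smoothing); CEP-Web's actual processing steps may differ.
import numpy as np
import pandas as pd

def clean_pupil(diameter: pd.Series, fs: float = 60.0) -> pd.Series:
    """Clean a pupil diameter trace (mm) sampled at fs Hz."""
    d = diameter.copy()
    # 1. Mark physiologically implausible samples (e.g., blinks reported as 0) as missing.
    d[(d < 1.5) | (d > 9.0)] = np.nan
    # 2. Remove dilation-speed outliers (large sample-to-sample jumps).
    speed = d.diff().abs() * fs
    d[speed > 10.0] = np.nan            # mm/s threshold, assumed for illustration
    # 3. Interpolate short gaps and smooth with a moving average.
    d = d.interpolate(limit=int(0.25 * fs), limit_direction="both")
    return d.rolling(window=5, center=True, min_periods=1).mean()

# Usage with a hypothetical recording containing a short blink:
trace = pd.Series(np.r_[np.full(30, 3.2), np.zeros(5), np.full(30, 3.4)])
print(clean_pupil(trace).round(2).tolist())
```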

    Fast Simulation of Gaussian-Mode Scattering for Precision Interferometry

    Understanding how laser light scatters from realistic mirror surfaces is crucial for the design, commissioning and operation of precision interferometers, such as the current and next generation of gravitational-wave detectors. Numerical simulations are indispensable tools for this task, but their utility can in practice be limited by the computational cost of describing the scattering process. In this paper we present an efficient method to significantly reduce the computational cost of optical simulations that incorporate scattering. This is accomplished by constructing a near-optimal representation of the complex, multi-parameter 2D overlap integrals that describe the scattering process (referred to as a reduced order quadrature). We demonstrate our technique by simulating a near-unstable Fabry-Perot cavity and its control signals using optics similar to those installed in one of the LIGO gravitational-wave detectors. We show that using reduced order quadrature reduces the computational time of the numerical simulation from days to minutes (a speed-up of approximately 2750×) whilst incurring negligible errors. This significantly increases the feasibility of modelling interferometers with realistic imperfections to overcome current limits in state-of-the-art optical systems. Whilst we focus on the Hermite-Gaussian basis for describing the scattering of the optical fields, our method is generic and could be applied with any suitable basis. An implementation of this reduced order quadrature method is provided in the open source interferometer simulation software Finesse. Comment: 15 pages, 11 figures
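
    The sketch below illustrates the general reduced order quadrature idea on a parametrized 1D overlap integral: build a reduced basis from snapshots, select empirical interpolation nodes, and precompute weights so the integral becomes a short weighted sum of samples. The Gaussian test functions, tolerances and node selection via pivoted QR are assumptions for illustration; the paper applies the technique to 2D Hermite-Gaussian overlap integrals inside Finesse.

```python
# Minimal sketch of a reduced order quadrature (ROQ) for a parametrized 1D
# overlap integral; illustrative only, not the paper's Finesse implementation.
import numpy as np
from scipy.linalg import qr

# Full quadrature grid and trapezoidal weights.
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
w_full = np.full_like(x, dx); w_full[[0, -1]] = dx / 2.0

g = np.exp(-x**2 / 2.0)                        # fixed "mode" in the overlap

def f(mu, sigma):
    """Parametrized field, standing in for a scattered mode."""
    return np.exp(-(x - mu) ** 2 / (2.0 * sigma**2))

# 1. Snapshots over the training parameter space.
rng = np.random.default_rng(1)
params = rng.uniform([-1.0, 0.5], [1.0, 1.5], size=(200, 2))
snapshots = np.array([f(mu, s) for mu, s in params])

# 2. Reduced basis from an SVD of the snapshots.
_, svals, vt = np.linalg.svd(snapshots, full_matrices=False)
r = int(np.sum(svals / svals[0] > 1e-10))      # keep the significant modes
basis = vt[:r]                                 # (r, N), rows span the snapshots

# 3. Empirical interpolation nodes from a pivoted QR, then ROQ weights w such
#    that sum_j w_j f(x_j; theta) ~= integral f(x; theta) g(x) dx.
_, _, piv = qr(basis, pivoting=True)
nodes = np.sort(piv[:r])
V = basis[:, nodes].T                          # V[j, k] = e_k(x_nodes[j])
b = basis @ (w_full * g)                       # projected full quadrature
w_roq = np.linalg.solve(V.T, b)

# 4. Validate on a parameter outside the training set.
mu, sigma = 0.37, 0.91
full = np.sum(w_full * f(mu, sigma) * g)
roq = np.sum(w_roq * f(mu, sigma)[nodes])
print(f"full = {full:.8f}, roq = {roq:.8f}, |error| = {abs(full - roq):.2e}")
```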

    Surrogate modeling based cognitive decision engine for optimization of WLAN performance

    Due to the rapid growth of wireless networks and the scarcity of the electromagnetic spectrum, wireless terminals are exposed to increasing interference, which constrains their performance. To mitigate this performance degradation, this paper proposes a novel, experimentally verified surrogate-model-based cognitive decision engine aimed at performance optimization of IEEE 802.11 links. The surrogate model takes the current state and configuration of the network as input and predicts the QoS parameter, which assists the decision engine in steering the network towards the optimal configuration. The decision engine was applied in two realistic interference scenarios; in both cases, the network using the cognitive decision engine significantly outperformed the network without it.
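
    The sketch below shows one way such a decision step can work: a regression surrogate is trained on observed (state, configuration, QoS) samples and then queried over the configuration space to pick the configuration with the best predicted QoS. The features, the throughput metric and the random forest surrogate are assumptions for illustration, not the paper's experimentally verified model.

```python
# Minimal sketch of a surrogate-model-based decision step: fit a regressor on
# observed (state, configuration) -> QoS samples, then pick the configuration
# with the best predicted QoS for the current network state.
import numpy as np
from itertools import product
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# Hypothetical samples: [interference level, channel, tx power] -> throughput (Mbps).
X = rng.uniform([0.0, 1.0, 1.0], [1.0, 11.0, 20.0], size=(500, 3))
y = 50.0 * (1.0 - X[:, 0]) + 0.5 * X[:, 2] + rng.normal(0.0, 2.0, 500)

surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

def best_configuration(interference: float):
    """Evaluate the surrogate over the configuration space for the current state."""
    candidates = np.array([[interference, ch, p]
                           for ch, p in product(range(1, 12), range(1, 21))])
    predicted = surrogate.predict(candidates)
    return candidates[np.argmax(predicted)], predicted.max()

config, qos = best_configuration(interference=0.3)
best = tuple(int(v) for v in config[1:])
print(f"suggested (channel, tx power) = {best}, predicted throughput = {qos:.1f} Mbps")
```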

    Surface Reconstruction from Scattered Points via RBF Interpolation on GPU

    In this paper we describe a parallel implicit method based on radial basis functions (RBFs) for surface reconstruction. The applicability of RBF methods is hindered by their computational demand, which requires the solution of linear systems whose size equals the number of data points. Our reconstruction implementation relies on parallel scientific libraries and targets massively multi-core architectures, namely Graphics Processing Units (GPUs). The performance of the proposed method, in terms of reconstruction accuracy and computing time, shows that the RBF interpolant can be very effective for this problem. Comment: arXiv admin note: text overlap with arXiv:0909.5413 by other authors
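
    The sketch below shows plain RBF interpolation of scattered data with NumPy, making explicit the dense linear system with one unknown per data point that motivates the GPU implementation; the Gaussian kernel, regularization and synthetic data are assumptions for illustration and do not reproduce the paper's implicit surface reconstruction pipeline.

```python
# Minimal sketch of RBF interpolation of scattered data: build the dense
# N x N kernel system (one coefficient per data point) and solve it.
import numpy as np

rng = np.random.default_rng(3)
pts = rng.uniform(-1.0, 1.0, size=(400, 2))                # scattered sample points
vals = np.sin(3.0 * pts[:, 0]) * np.cos(3.0 * pts[:, 1])   # sampled values

def kernel(a, b, eps=2.0):
    """Gaussian RBF kernel between two point sets."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-eps * d2)

# Dense N x N system: its size grows with the number of data points,
# which is what makes the method computationally demanding.
coeffs = np.linalg.solve(kernel(pts, pts) + 1e-10 * np.eye(len(pts)), vals)

def interpolate(query):
    """Evaluate the RBF interpolant at query points."""
    return kernel(query, pts) @ coeffs

test = rng.uniform(-1.0, 1.0, size=(5, 2))
exact = np.sin(3.0 * test[:, 0]) * np.cos(3.0 * test[:, 1])
print(np.abs(interpolate(test) - exact))
```

    Swapping NumPy for a GPU array library such as CuPy would be one way to move the dense solve onto a GPU, which is the kind of acceleration the paper targets.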

    Agroecological aspects of evaluating agricultural research and development:

    In this paper we describe how biophysical data can be used, in conjunction with agroecological concepts and multimarket economic models, to systematically evaluate the effects of agricultural R&D in ways that inform research priority setting and resource allocation decisions. Agroecological zones can be devised to help estimate the varying, site-specific responses to new agricultural technologies and to evaluate the potential for research to spill over from one agroecological zone to another. The application of agroecological zonation procedures in an international agricultural research context is given special attention. Keywords: Agricultural research, Technological innovations, Agricultural economics and policies.
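
    As a toy illustration of the zone-based evaluation the paper describes, the sketch below applies a spillover matrix to zone-level yield gains to estimate extra production by agroecological zone; all zone names, areas, yields and spillover shares are hypothetical.

```python
# Minimal sketch of zone-based spillover accounting: research generates a yield
# gain in an origin zone, and a spillover matrix scales how much of that gain
# is realized in other agroecological zones. All numbers are hypothetical.
import numpy as np

zones = ["humid lowland", "semi-arid", "highland"]
area_kha = np.array([1200.0, 800.0, 400.0])     # cropped area per zone (thousand ha)
base_yield = np.array([2.0, 1.2, 1.6])          # t/ha before the new technology

# spillover[i, j]: share of the origin-zone (j) yield gain realized in zone i.
spillover = np.array([
    [1.00, 0.30, 0.10],
    [0.25, 1.00, 0.15],
    [0.10, 0.20, 1.00],
])

gain_in_origin = np.array([0.15, 0.05, 0.00])   # proportional yield gain from R&D per origin zone

# Extra production in each zone = area * base yield * spillover-weighted gain.
extra_production = area_kha * base_yield * (spillover @ gain_in_origin)
for zone, extra in zip(zones, extra_production):
    print(f"{zone:14s}: +{extra:7.1f} kt")
```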

    PaPaS: A Portable, Lightweight, and Generic Framework for Parallel Parameter Studies

    The current landscape of scientific research is widely based on modeling and simulation, typically with complexity in the simulation's flow of execution and parameterization properties. Execution flows are not necessarily straightforward, since they may need multiple processing tasks and iterations. Furthermore, parameter and performance studies are common approaches used to characterize a simulation, often requiring the traversal of a large parameter space. High-performance computers offer practical resources at the expense of users having to handle the setup, submission, and management of jobs. This work presents the design of PaPaS, a portable, lightweight, and generic workflow framework for conducting parallel parameter and performance studies. Workflows are defined using parameter files based on a keyword-value pair syntax, thus removing from the user the overhead of creating complex scripts to manage the workflow. A parameter set consists of any combination of environment variables, files, partial file contents, and command line arguments. PaPaS is being developed in Python 3 with support for distributed parallelization using SSH, batch systems, and C++ MPI. The PaPaS framework runs as user processes and can be used in single-/multi-node and multi-tenant computing systems. An example simulation using the BehaviorSpace tool from NetLogo and a matrix multiply using OpenMP are presented as parameter and performance studies, respectively. The results demonstrate that the PaPaS framework offers a simple method for defining and managing parameter studies while increasing resource utilization. Comment: 8 pages, 6 figures, PEARC '18: Practice and Experience in Advanced Research Computing, July 22-26, 2018, Pittsburgh, PA, USA
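
    The sketch below illustrates the core idea of expanding a keyword-value parameter specification into the Cartesian product of parameter sets, one command per combination; the keywords, values and command template are hypothetical and do not follow PaPaS's actual parameter-file syntax.

```python
# Minimal sketch of expanding a keyword-value parameter specification into a
# parameter sweep; illustrative only, not PaPaS's actual parameter-file format.
import itertools

# Hypothetical study: each keyword lists the values to sweep over.
spec = {
    "OMP_NUM_THREADS": ["1", "2", "4", "8"],
    "SIZE": ["512", "1024"],
}
command_template = "OMP_NUM_THREADS={OMP_NUM_THREADS} ./matmul --size {SIZE}"

def parameter_sets(spec):
    """Yield one {keyword: value} dict per point in the Cartesian product."""
    keys = list(spec)
    for values in itertools.product(*(spec[k] for k in keys)):
        yield dict(zip(keys, values))

# In practice each command would be dispatched via SSH, a batch system, or MPI;
# here the expanded study is simply printed.
for params in parameter_sets(spec):
    print(command_template.format(**params))
```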