    Autonomic log/restore for advanced optimistic simulation systems

    In this paper we address state recoverability in optimistic simulation systems by presenting an autonomic log/restore architecture. Our proposal is unique in that it jointly provides the following features: (i) log/restore operations are carried out in a manner completely transparent to the application programmer, (ii) the simulation-object state can be scattered across dynamically allocated, non-contiguous memory chunks, (iii) two differentiated operating modes, incremental vs non-incremental, coexist via transparent, optimized run-time management of dual versions of the same application layer, with dynamic selection of the best-suited operating mode in different phases of the optimistic simulation run, and (iv) determination of the best-suited mode for any time frame is carried out on the basis of an innovative modeling/optimization approach that takes into account the stability of each operating mode vs variations of the model execution parameters. © 2010 IEEE
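    As an illustration of point (iii), the following minimal sketch (hypothetical names and cost model, not the paper's actual architecture) shows how a controller might pick between the incremental and non-incremental modes for each time frame based on observed log/restore costs and rollback frequency.

```python
# Hypothetical sketch of autonomic operating-mode selection for
# log/restore (checkpoint) management; names and the cost model are
# illustrative, not the paper's implementation.

from dataclasses import dataclass

@dataclass
class ModeStats:
    log_cost: float       # mean CPU cost of taking one log
    restore_cost: float   # mean CPU cost of one state restore
    rollback_freq: float  # observed rollbacks per committed event

def expected_overhead(stats: ModeStats, events_per_frame: int) -> float:
    """Expected per-frame overhead: logging work plus the restore work
    triggered by rollbacks."""
    return events_per_frame * (stats.log_cost
                               + stats.rollback_freq * stats.restore_cost)

def select_mode(incremental: ModeStats, full: ModeStats,
                events_per_frame: int, hysteresis: float = 0.1) -> str:
    """Pick the cheaper operating mode for the next time frame, with
    hysteresis so noisy measurements do not cause mode thrashing."""
    inc = expected_overhead(incremental, events_per_frame)
    ful = expected_overhead(full, events_per_frame)
    if inc < ful * (1 - hysteresis):
        return "incremental"
    if ful < inc * (1 - hysteresis):
        return "non-incremental"
    return "keep-current"
```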

    Towards a Taxonomy of Performance Evaluation of Commercial Cloud Services

    Cloud Computing, as one of the most promising computing paradigms, has become increasingly accepted in industry. Numerous commercial providers have started to supply public Cloud services, and corresponding performance evaluation is then inevitably required for Cloud provider selection or cost-benefit analysis. Unfortunately, inaccurate and confusing evaluation implementations can often be seen in the context of commercial Cloud Computing, which can severely interfere with and spoil evaluation-related comprehension and communication. This paper introduces a taxonomy to help profile and standardize the details of performance evaluation of commercial Cloud services. Through a systematic literature review, we constructed the taxonomy along two dimensions by arranging the atomic elements of Cloud-related performance evaluation. As such, this proposed taxonomy can be employed both to analyze existing evaluation practices through decomposition into elements and to design new experiments through composing elements for evaluating the performance of commercial Cloud services. Moreover, through smooth expansion, we can continually adapt this taxonomy to the more general area of evaluation of Cloud Computing.
    Comment: 8 pages, Proceedings of the 5th International Conference on Cloud Computing (IEEE CLOUD 2012), pp. 344-351, Honolulu, Hawaii, USA, June 24-29, 2012
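    As an illustration of how such a taxonomy can be used compositionally, the sketch below encodes an evaluation design as elements arranged along two dimensions; the element names are hypothetical stand-ins, not the taxonomy's actual vocabulary.

```python
# Illustrative only: an evaluation design expressed as a composition of
# taxonomy elements along two dimensions (what is evaluated vs. how it
# is evaluated). All names here are hypothetical placeholders.

experiment = {
    "performance_feature": {           # the "what" dimension
        "service": "a public IaaS offering",
        "property": "latency",
        "workload": "web browsing mix",
    },
    "experiment_setup": {              # the "how" dimension
        "benchmark": "TPC-W",
        "metric": "response time (ms)",
        "scale": {"instances": 8, "duration_h": 24},
    },
}

def decompose(design: dict) -> list[tuple[str, str]]:
    """Flatten a design into (dimension, element) pairs, mirroring how
    an existing evaluation practice would be analyzed by decomposition."""
    pairs = []
    for dimension, elements in design.items():
        for name in elements:
            pairs.append((dimension, name))
    return pairs

print(decompose(experiment))
```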

    The Child Penalty – A Compensating Wage Differential? ENEPRI Research Reports No. 22, 22 August 2006

    Many studies document that women with children tend to earn lower wages than women without children (a shortfall known as the ‘child penalty’ or ‘family gap’). Despite the existence of several hypotheses about the causes of the child penalty, much about the gap in wages remains unexplained. This study explores the premise that mothers might substitute income for advantageous, non-pecuniary job characteristics. More specifically, the hypothesis to be investigated is that if the labour market rewards working arrangements that involve disamenities, the child penalty might to some extent be a compensating wage differential for the disamenities avoided by mothers. In order to assess the impact of motherhood on the choice between pecuniary and non-pecuniary job features in Germany, data from the German Socio-Economic Panel (GSOEP) are used. The longitudinal nature of the data allows a comparison of working women before and after the birth of their first child. Furthermore, the GSOEP provides detailed information on personal attributes, job characteristics and job satisfaction, which enables the application of the following three steps to test the hypothesis. First, an event study is used to analyse the changes in the characteristics of a woman’s job around the birth of her first child; the features of interest are time, workload and flexibility. Second, job characteristics are evaluated in terms of their utility (proxied by job satisfaction) for a mother. Third, following the approach of hedonic wage regressions, these (dis)amenities are included in the wage regression in order to see whether a trade-off exists between pecuniary and non-pecuniary job characteristics. The results suggest that to some degree the child penalty can be interpreted as a compensating wage differential.
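    A minimal sketch of the third step, under assumed column names in a hypothetical GSOEP extract (not the study's actual specification): compare the motherhood coefficient with and without (dis)amenity controls.

```python
# Hedonic wage regression sketch: if the 'mother' coefficient shrinks
# toward zero once (dis)amenity controls enter, part of the child
# penalty reads as a compensating wage differential. Column names and
# the data file are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("gsoep_extract.csv")  # hypothetical panel extract

baseline = smf.ols("log_wage ~ mother + experience + tenure + education",
                   data=df).fit()
hedonic = smf.ols("log_wage ~ mother + experience + tenure + education"
                  " + flexible_hours + low_workload + commute_time",
                  data=df).fit()

print(baseline.params["mother"], hedonic.params["mother"])
```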

    Breast cancer risk is increased in the years following false-positive breast cancer screening

    A small number of studies have investigated breast cancer (BC) risk among women with a history of false-positive recall (FPR) in BC screening, but none of them has used time-to-event analysis while at the same time quantifying the effect of false-negative diagnostic assessment (FNDA). FNDA occurs when screening detects BC, but this BC is missed on diagnostic assessment (DA). As a result of FNDA, screenings that detected cancer are incorrectly classified as FPR. Our study linked data recorded in the Flemish BC screening program (women aged 50-69 years) to data from the national cancer registry. We used Cox proportional hazards models on a retrospective cohort of 298 738 women to assess the association between FPR and subsequent BC, while adjusting for potential confounders. The mean follow-up was 6.9 years. Compared with women without recall, women with a history of FPR were at an increased risk of developing BC [hazard ratio = 2.10 (95% confidence interval: 1.92-2.31)]. However, 22% of BC after FPR was due to FNDA. The hazard ratio dropped to 1.69 (95% confidence interval: 1.52-1.87) when FNDA was excluded. Women with FPR have a subsequently increased BC risk compared with women without recall. The risk is higher for women who have a FPR BI-RADS 4 or 5 compared with FPR BI-RADS 3. There is room for improvement of diagnostic assessment: 41% of the excess risk is explained by FNDA after baseline screening.
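    A minimal sketch of the kind of time-to-event model used, via the lifelines library; the data file and covariate names are hypothetical placeholders, not the Flemish registry data.

```python
# Cox proportional hazards sketch: estimate the hazard ratio of breast
# cancer associated with a false-positive-recall flag, adjusting for
# confounders. Schema below is assumed for illustration.

import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("screening_cohort.csv")
# Assumed columns: follow-up time in years, BC event indicator (0/1),
# false-positive-recall flag (0/1), and example confounders.
cph = CoxPHFitter()
cph.fit(df[["followup_years", "bc_event", "fpr", "age", "screen_round"]],
        duration_col="followup_years", event_col="bc_event")

cph.print_summary()  # exp(coef) for 'fpr' is the hazard ratio of interest
```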

    Optimizing memory management for optimistic simulation with reinforcement learning

    Simulation is a powerful technique to explore complex scenarios and analyze systems related to a wide range of disciplines. To allow for an efficient exploitation of the available computing power, speculative Time Warp-based Parallel Discrete Event Simulation is universally recognized as a viable solution. In this context, the rollback operation is a fundamental building block to support a correct execution even when causality inconsistencies materialize a posteriori. If this operation is supported via checkpoint/restore strategies, memory management plays a fundamental role in ensuring high performance of the simulation run. With few exceptions, adaptive protocols targeting memory management for Time Warp-based simulations have mostly been based on pre-defined analytic models of the system, expressed as closed-form functions that map the system's state to control parameters. The underlying assumption is that the model itself is optimal. In this paper, we present an approach that exploits reinforcement learning techniques. Rather than assuming an optimal control strategy, we seek to find the optimal strategy through parameter exploration. A value function that captures the history of system feedback is used, and no a-priori knowledge of the system is required. An experimental assessment of the viability of our proposal is also provided for a mobile cellular system simulation.
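    An illustrative Q-learning sketch of the idea (state discretization, actions and reward are hypothetical simplifications, not the paper's design): learn a value function over system states and memory-management actions from runtime feedback instead of assuming a closed-form analytic model.

```python
# Q-learning over checkpoint-interval choices: the value function is
# updated from observed feedback (e.g. negative rollback/memory cost),
# requiring no a-priori model of the system.

import random
from collections import defaultdict

ACTIONS = [1, 2, 4, 8, 16]   # candidate checkpoint intervals (events)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = defaultdict(float)       # (state, action) -> estimated value

def choose_interval(state):
    """Epsilon-greedy exploration over checkpoint intervals."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning update; reward is, e.g., the negated
    checkpointing-plus-rollback cost observed over the last frame."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                   - Q[(state, action)])
```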

    Inside Dropbox: Understanding Personal Cloud Storage Services

    Personal cloud storage services are gaining popularity. With a rush of providers to enter the market and an increasing offer of cheap storage space, it is to be expected that cloud storage will soon generate a high amount of Internet traffic. Very little is known about the architecture and the performance of such systems, and the workload they have to face. This understanding is essential for designing efficient cloud storage systems and predicting their impact on the network. This paper presents a characterization of Dropbox, the leading solution in personal cloud storage in our datasets. By means of passive measurements, we analyze data from four vantage points in Europe, collected during 42 consecutive days. Our contributions are threefold: Firstly, we are the first to study Dropbox, which we show to be the most widely-used cloud storage system, already accounting for a volume equivalent to around one third of the YouTube traffic at campus networks on some days. Secondly, we characterize the workload typical users in different environments generate to the system, highlighting how this reflects on network traffic. Lastly, our results show possible performance bottlenecks caused by both the current system architecture and the storage protocol. This is exacerbated for users connected far from control and storage data-centers.
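    A back-of-the-envelope sketch (not the paper's measurement pipeline) of the kind of comparison reported, assuming a pre-classified flow log: per-day Dropbox traffic volume versus YouTube volume at one vantage point.

```python
# Aggregate per-day traffic volume by service from an assumed flow log
# and compute Dropbox's share relative to YouTube.

import pandas as pd

flows = pd.read_csv("flow_log.csv",        # assumed columns:
                    parse_dates=["day"])   # day, service, bytes

daily = flows.pivot_table(index="day", columns="service",
                          values="bytes", aggfunc="sum")
share = daily["dropbox"] / daily["youtube"]
print(share.describe())  # ~1/3 on some campus-network days, per the paper
```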

    Per-task energy metering and accounting in the multicore era

    Chip multi-core processors (CMPs) are the preferred processing platform across domains such as data centers, real-time systems and mobile devices. In all of those domains, energy is arguably the most expensive resource in a computing system and, in particular, the one growing fastest; measuring energy usage has therefore drawn vast attention. Current studies mostly focus on obtaining finer-granularity energy measurement, such as measuring power over smaller time intervals or distributing energy to hardware or software components. Such studies focus on scenarios where system energy is measured under the assumption that only one program is running in the system. So far, no hardware-level mechanism has been proposed to distribute the system energy among multiple running programs in a resource-sharing multi-core system in an exact way. For the first time, we have formalized the need for per-task energy measurement in multicores by establishing a two-fold concept: Per-Task Energy Metering (PTEM) and Sensible Energy Accounting (SEA). In a scenario where many tasks run in parallel in a multicore system, the target of PTEM is to provide, for each task, an estimate of its actual energy consumption at runtime based on its resource usage during execution; SEA aims at estimating the energy a task would have consumed when running in isolation with a particular fraction of the system's resources. Accurately determining the energy consumed by each task in a system will become of prominent importance in future multi-core based systems, as it offers several benefits, including (i) selection of appropriate co-runners, (ii) improved energy-aware task scheduling and (iii) energy-aware billing in data centers. We have shown how these two concepts can be applied to the main components of a computing system: the processor and the memory system. First, we applied PTEM to the processor by tracking the activities and occupancy of all resources on a per-task basis. Second, we applied PTEM to the memory system by tracking the activities and the state switches of memory banks. Then, we applied SEA to the processor by predicting the activities and execution time of each task when running alone with a fraction of the chip's resources. Last, we applied SEA to the memory system by predicting the activities, the execution time and the time spent invoking the memory system for each task. Across all of these works, by trading off hardware cost against estimation accuracy, we obtained implementable, affordable mechanisms with high accuracy. We have also shown how these techniques can be applied in different scenarios, such as detecting significant energy-usage variations for a particular task and developing more energy-efficient scheduling policies for multi-core systems. The work in this thesis has been published in IEEE/ACM journals and conference proceedings that can be found in the publication chapter of this thesis.
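    A schematic sketch of the PTEM idea (the resource list and per-event energy costs are illustrative, not the thesis's calibrated values): attribute each shared resource's activity to the task that caused it, and convert activity counts into energy, adding a share of static energy proportional to occupancy.

```python
# Per-task energy estimate from per-task activity counters plus an
# occupancy-proportional share of static energy. All constants are
# hypothetical placeholders.

PER_EVENT_ENERGY_NJ = {      # assumed per-activity energy costs (nJ)
    "l2_access": 0.5,
    "dram_row_activate": 2.0,
    "alu_op": 0.1,
}

def task_energy_nj(activity: dict[str, int],
                   static_power_w: float,
                   occupancy: float,
                   runtime_s: float) -> float:
    """Dynamic energy from per-task activity counts, plus a static-energy
    share proportional to the task's resource occupancy."""
    dynamic = sum(PER_EVENT_ENERGY_NJ[e] * n for e, n in activity.items())
    static = static_power_w * runtime_s * occupancy * 1e9  # J -> nJ
    return dynamic + static

print(task_energy_nj({"l2_access": 10_000, "alu_op": 500_000},
                     static_power_w=5.0, occupancy=0.25, runtime_s=0.01))
```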