
    Fluid Petri Nets for the Performance Evaluation of MapReduce Applications

    Big Data applications make it possible to analyze large amounts of data, not necessarily structured, though at the same time they present new challenges. For example, predicting the performance of frameworks such as Hadoop can be a costly task, hence the need for models that can provide valuable support to designers and developers. This paper contributes a novel modeling approach based on fluid Petri nets to predict the execution time of MapReduce jobs. The experiments we performed at CINECA, the Italian supercomputing center, show that the achieved accuracy is within 16% of the actual measurements on average.
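The core idea of a fluid approximation can be illustrated with a minimal sketch: a fluid place holds the remaining work of a MapReduce stage as a continuous level, drained at a rate set by the available task slots. All names, rates, and the constant-rate assumption below are illustrative, not taken from the paper.

```python
def fluid_stage_time(total_work, n_slots, slot_rate, dt=0.01):
    """Integrate the fluid level until the stage's work is exhausted."""
    level, t = total_work, 0.0
    while level > 0:
        level -= n_slots * slot_rate * dt  # outflow of the fluid transition
        t += dt
    return t

# A map stage with 100 units of work and 10 slots at 2 units/s each
# drains in roughly 100 / (10 * 2) = 5 seconds.
t_map = fluid_stage_time(100.0, n_slots=10, slot_rate=2.0)
print(round(t_map, 1))  # -> 5.0
```

In an actual fluid Petri net the drain rate would depend on the current marking (e.g. contention between map and reduce phases), which is what makes the formalism more expressive than this constant-rate toy.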

    A comparison study of co-simulation frameworks for multi-energy systems: the scalability problem

    The transition to a low-carbon society will completely change the structure of energy systems, from a standalone hierarchical centralised vision to cooperative and distributed Multi-Energy Systems. The analysis of these complex systems requires the collaboration of researchers from different disciplines in the energy, ICT, social, economic, and political sectors. Combining such disparate disciplines into a single tool for modeling and analyzing an environment as complex as a Multi-Energy System requires tremendous effort. Researchers have addressed this challenge with co-simulation techniques, which make it possible to integrate existing domain-specific simulators in a single environment. Co-simulation frameworks, such as Mosaik and HELICS, have been developed to ease such integration. In this context, an additional challenge is the different temporal and spatial scales that are involved in the real world and that must be addressed during co-simulation. In particular, the huge number of heterogeneous actors populating the system makes it difficult to represent the system as a whole. In this paper, we propose a comparison of the scalability performance of two major co-simulation frameworks (i.e. HELICS and Mosaik) and a particular implementation of a well-known multi-agent systems library (i.e. AIOMAS). After describing a generic co-simulation framework infrastructure and its related challenges in managing a distributed co-simulation environment, the three selected frameworks are introduced and compared with each other to highlight their principal structure. Then, the scalability problem of co-simulation frameworks is introduced, presenting four benchmark configurations to test their ability to scale in terms of the number of running instances. To carry out this comparison, a simplified multi-model energy scenario was used as a common testing environment.
This work helps to understand which of the three frameworks and four configurations to select depending on the scenario to analyse. Experimental results show that a multi-processing configuration of HELICS reaches the best performance in terms of the KPIs defined to assess scalability across the co-simulation frameworks.
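The scalability question being benchmarked reduces to how the wall-clock cost of a synchronized step loop grows with the number of simulator instances. The sketch below is purely illustrative (it uses none of the HELICS, Mosaik, or AIOMAS APIs): each "simulator" is a trivial worker, and the coordinator steps all of them either sequentially or through a process pool, mirroring single- vs. multi-processing configurations.

```python
import time
from multiprocessing import Pool

def step(sim_id):
    """Stand-in for one simulator instance advancing a single time step."""
    total = sum(i * i for i in range(20_000))  # synthetic per-step work
    return sim_id, total

def run_sequential(n_sims, n_steps=3):
    """Coordinator steps every instance in turn, in one process."""
    for _ in range(n_steps):
        results = [step(i) for i in range(n_sims)]
    return results

def run_pool(n_sims, n_steps=3):
    """Coordinator fans each synchronized step out to a process pool."""
    with Pool() as pool:
        for _ in range(n_steps):
            results = pool.map(step, range(n_sims))
    return results

if __name__ == "__main__":
    t0 = time.perf_counter(); run_sequential(16); t_seq = time.perf_counter() - t0
    t0 = time.perf_counter(); run_pool(16); t_par = time.perf_counter() - t0
    print(f"sequential: {t_seq:.3f}s, pooled: {t_par:.3f}s")
```

A real benchmark would also vary the per-step work and the instance count, since pool dispatch overhead can dominate when each step is cheap.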

    Design and accuracy analysis of multi-level state estimation based on smart metering infrastructure

    While the first aim of smart meters is to provide energy readings for billing purposes, the availability of these measurements could open new opportunities for the management of future distribution grids. This paper presents a multi-level state estimator that exploits smart meter measurements for monitoring both low and medium voltage grids. The goal of the paper is to present an architecture able to efficiently integrate smart meter measurements and to show the accuracy performance achievable if the use of real-time smart meter measurements for state estimation purposes were enabled. The design of the state estimator applies uncertainty propagation theory for the integration of the data at the different hierarchical levels. The coordination of the estimation levels is realized through a cloud-based infrastructure, which also provides the interface to auxiliary functions and access to the estimation results for other distribution grid management applications. A mathematical analysis is performed to characterize the estimation algorithm in terms of accuracy and to show the performance achievable at the different levels of the distribution grid when using the smart meter data. Simulations are presented which validate the analytical results and demonstrate the operation of the multi-level estimator in coordination with the cloud-based platform.
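The uncertainty-propagation step can be sketched in its simplest form (an assumed linear case, not the paper's estimator): when an upper-level quantity is a linear combination y = Σ aᵢxᵢ of lower-level estimates with known standard uncertainties uᵢ, the propagated variance for uncorrelated inputs is u_y² = Σ aᵢ²uᵢ².

```python
import math

def propagate_uncertainty(coeffs, uncertainties):
    """Standard uncertainty of a linear combination of uncorrelated inputs,
    per the law of propagation of uncertainty: u_y = sqrt(sum((a_i*u_i)^2))."""
    return math.sqrt(sum((a * u) ** 2 for a, u in zip(coeffs, uncertainties)))

# Hypothetical example: aggregating two low-voltage feeder estimates
# (1% and 2% standard uncertainty) with equal weights at the MV level.
u = propagate_uncertainty([0.5, 0.5], [0.01, 0.02])
print(round(u, 4))  # -> 0.0112
```

Correlated inputs would add cross-covariance terms, which is precisely what a hierarchical estimator must track when the same measurements feed multiple levels.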

    Performance Prediction of Cloud-Based Big Data Applications

    Big data analytics have become widespread as a means to extract knowledge from large datasets. Yet, the heterogeneity and irregularity usually associated with big data applications often overwhelm the existing software and hardware infrastructures. In such a context, the flexibility and elasticity provided by the cloud computing paradigm offer a natural approach to cost-effectively adapting the allocated resources to the application’s current needs. However, these same characteristics impose extra challenges to predicting the performance of cloud-based big data applications, a key step to proper management and planning. This paper explores three modeling approaches for performance prediction of cloud-based big data applications. We evaluate two queuing-based analytical models and a novel fast ad hoc simulator in various scenarios based on different applications and infrastructure setups. The three approaches are compared in terms of prediction accuracy, finding that our best approaches can predict average application execution times with 26% relative error in the very worst case and about 7% on average.
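A hedged illustration of the queuing-based style of prediction (the simplest such model, not one of the paper's): an M/M/1 station predicts mean response time as R = 1/(μ − λ), and prediction accuracy is then reported as relative error against a measured value.

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue, R = 1 / (mu - lambda)."""
    assert arrival_rate < service_rate, "queue must be stable (lambda < mu)"
    return 1.0 / (service_rate - arrival_rate)

def relative_error(predicted, measured):
    """Relative prediction error, as used to report accuracy figures."""
    return abs(predicted - measured) / measured

predicted = mm1_response_time(arrival_rate=8.0, service_rate=10.0)  # 0.5 s
measured = 0.47                                                     # hypothetical
print(f"{relative_error(predicted, measured):.1%}")  # -> 6.4%
```

Realistic models for big data jobs replace the single station with networks of queues capturing map/reduce phases and resource contention, but the accuracy metric is the same.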

    Intracoronary physiology-guided percutaneous coronary intervention in patients with diabetes

    Objective: The risk of vessel-oriented cardiac adverse events (VOCE) in patients with diabetes mellitus (DM) undergoing intracoronary physiology-guided coronary revascularization is poorly defined. The purpose of this work is to evaluate the risk of VOCE in patients with and without DM in whom percutaneous coronary intervention (PCI) was performed or deferred based on pressure-wire functional assessment. Methods: This is a retrospective analysis of a multicenter registry of patients evaluated with fractional flow reserve (FFR) and/or non-hyperaemic pressure ratio (NHPR). The primary endpoint was a composite of VOCE including cardiac death, vessel-related myocardial infarction (MI), and ischemia-driven target vessel revascularization (TVR). Results: A large cohort of 2828 patients with 3353 coronary lesions was analysed to assess the risk of VOCE at long-term follow-up (23 [14-36] months). Non-insulin-dependent DM (NIDDM) was not associated with the primary endpoint in the overall cohort (adjusted Hazard Ratio [aHR] 1.18, 95% CI 0.87-1.59, P = 0.276) or in patients with coronary lesions treated with PCI (aHR 1.30, 95% CI 0.78-2.16, P = 0.314). Conversely, insulin-dependent diabetes mellitus (IDDM) demonstrated an increased risk of VOCE in the overall cohort (aHR 1.76, 95% CI 1.07-2.91, P = 0.027), but not in coronary lesions undergoing PCI (aHR 1.26, 95% CI 0.50-3.16, P = 0.621). Importantly, in coronary lesions deferred after functional assessment, IDDM (aHR 2.77, 95% CI 1.11-6.93, P = 0.029) but not NIDDM (aHR 0.94, 95% CI 0.61-1.44, P = 0.776) was significantly associated with the risk of VOCE. IDDM caused a significant effect modification of FFR-based risk stratification (P for interaction < 0.001). Conclusion: Overall, DM was not associated with an increased risk of VOCE in patients undergoing physiology-guided coronary revascularization. However, IDDM represents a phenotype at high risk of VOCE.

    I codici miniati e decorati della Biblioteca Statale di Cremona

    The catalogue contains palaeographic-codicological essays and essays on the history of manuscript illumination relating to the illuminated and decorated codices of the Biblioteca Statale di Cremona. The essays are followed by descriptive entries for the codices on display.

    A Performance Modeling Language For Big Data Architectures

    Big Data applications represent an emerging field, which has proved to be crucial in business intelligence and in massive data management. Big Data promises to be the next big thing in the development of strategic computer applications, even if it requires considerable investment and accurate resource planning, as the architectures needed to perform at the requisite speed must scale easily to a large number of computing nodes. Appropriate management of such architectures benefits from the availability of performance models, allowing developers and administrators to take informed decisions, saving time and experimental work. This paper presents a dedicated modeling language, showing first how the modeling process can be eased and second how the semantic gap between the modeling logic and the domain can be reduced.

    Modeling Apache Hive based applications in Big Data architectures

    Performance prediction for Big Data applications is a powerful tool supporting designers and administrators in achieving a better exploitation of their computing resources. Big Data architectures are complex, continuously evolving and adaptive, so a rapid design-and-verification modeling approach is well suited to their needs. Moreover, a minimal semantic gap between models and applications would enable a greater number of designers to benefit directly from the results. The paper presents a multiformalism modeling approach based on a one-to-one mapping of Apache Hive querying primitives to modeling primitives. This approach exploits a combination of proper Big Data specific submodels and Petri nets to enable modeling of conventional application logic.
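The one-to-one mapping idea can be sketched as a lookup from query operators to submodel templates; the primitive and submodel names below are assumptions for illustration, not the paper's vocabulary, and real HiveQL plans are trees rather than flat lists.

```python
# Hive querying primitive -> performance-model building block (illustrative).
SUBMODEL_FOR = {
    "TableScan": "MapPhaseSubmodel",
    "Filter":    "MapPhaseSubmodel",
    "GroupBy":   "ShuffleReduceSubmodel",
    "Join":      "ShuffleReduceSubmodel",
    "OrderBy":   "TotalOrderSubmodel",
}

def compose_model(query_plan):
    """Translate an operator sequence into the corresponding submodel chain,
    mirroring a one-to-one mapping of querying primitives to modeling primitives."""
    return [SUBMODEL_FOR[op] for op in query_plan]

plan = ["TableScan", "Filter", "GroupBy"]
print(compose_model(plan))
# -> ['MapPhaseSubmodel', 'MapPhaseSubmodel', 'ShuffleReduceSubmodel']
```

The appeal of a one-to-one mapping is that anyone who can read the query plan can read the performance model, which is exactly the semantic-gap argument made above.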

    Modeling Hybrid Systems in SIMTHESys

    Get PDF
    Hybrid systems (HS) have been proven a valid formalism to study and analyze specific issues in a variety of fields. However, most of the analysis techniques for HS are based on low-level descriptions, where single states of the system have to be defined and enumerated by the modeler. Some high-level modeling formalisms, such as Fluid Stochastic Petri Nets, have been introduced to overcome such difficulties, but simple procedures allowing the definition of domain-specific languages for HS could simplify the analysis of such systems. This paper presents a stochastic HS language consisting of a subset of piecewise deterministic Markov processes, and shows how SIMTHESys – a compositional, metamodeling-based framework for describing and extending formalisms – can be used to convert a wide range of high-level HS description languages into this paradigm. A simple example applying the technique to solve a model of the energy consumption of a data center, specified using Queuing Networks and Hybrid Petri Nets, is presented to show the effectiveness of the proposal.

    Simulating hybrid systems within SIMTHESys multi-formalism models

    As many real-world systems evolve according to phenomena characterized by a continuous time dependency, the literature has studied several approaches to correctly capture all their aspects. Since their analysis is not trivial, different high-level approaches have been proposed, such as classical pure mathematical analysis or tool-oriented frameworks like Fluid Stochastic Petri Nets. Each approach has its specific purposes and naturally addresses some application field. This paper instead focuses on the simulation of models written in a custom Hybrid Systems (HS) formalism. The key aspect of this work is the use, within a framework called SIMTHESys, of a function describing how the fluid variables evolve, providing more efficient simulation than traditional approaches.
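The efficiency argument can be sketched under a simplifying assumption (this is an illustration of the general idea, not SIMTHESys internals): if a fluid variable's evolution between discrete events is known in closed form, the simulator can jump directly to the next event time instead of integrating step by step. Here the fluid level drains at a constant rate r, so x(t) = x0 − r·t and the hitting time of level 0 is x0/r.

```python
def next_event_time(x0, rate):
    """Closed-form hitting time of zero for x(t) = x0 - rate*t:
    the simulator can jump straight to this instant."""
    return x0 / rate

def simulate(x0, rate, dt=0.001):
    """Naive fixed-step integration of the same dynamics, for comparison:
    thousands of iterations to reach the same event."""
    x, t = x0, 0.0
    while x > 0:
        x -= rate * dt
        t += dt
    return t

closed = next_event_time(10.0, 2.0)   # exact: 5.0, in one evaluation
stepped = simulate(10.0, 2.0)         # ~5.0, after ~5000 steps
print(closed, round(stepped, 2))
```

With marking-dependent or piecewise rates, the evolution function would be evaluated piece by piece, but the event-jumping principle, and the speedup over fixed-step integration, is the same.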