
    Application Driven IT Service Management for Energy Efficiency

    Considering the ever-increasing use of information technology in our everyday life and the huge concentration of computational resources in remote service centers, energy costs have become one of the most challenging issues for IT managers. Mechanisms to improve energy efficiency in service centers operate at different levels, ranging from single components to the whole facility, and consider both equipment and application issues. In this paper we analyze energy efficiency issues at the application level, focusing on e-business processes. Our approach proposes a new method to evaluate and apply green adaptation strategies based on the characteristics of the service application with respect to the business process, taking into account non-functional requirements.

    A Survey on Service Quality Description

    Quality of service (QoS) can be a critical element for achieving the business goals of a service provider, for the acceptance of a service by the user, or for guaranteeing service characteristics in a composition of services, where a service is defined as either a software or a software-support (i.e., infrastructural) service available on any type of network or electronic channel. The goal of this article is to compare the approaches to QoS description proposed in the literature, including several models and metamodels. We consider a large spectrum of models and metamodels to describe service quality, ranging from ontological approaches that define quality measures, metrics, and dimensions, to metamodels enabling the specification of quality-based service requirements and capabilities as well as of SLAs (Service-Level Agreements) and SLA templates for service provisioning. Our survey inspects the characteristics of the available approaches to reveal which are consolidated and which are specific to given aspects, and to analyze where the need for further research and investigation lies. The approaches illustrated here have been selected through a systematic review of conference proceedings and journals spanning various research areas in computer science.
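    The idea of matching quality-based requirements against a provider's capabilities, which the surveyed metamodels formalize, can be illustrated with a minimal sketch. All names and the flat metric/threshold representation are our own simplification, not taken from any surveyed model:

    ```python
    from dataclasses import dataclass

    @dataclass
    class QoSRequirement:
        metric: str       # e.g. "response_time_ms" (illustrative metric name)
        operator: str     # "<=" for upper bounds, ">=" for lower bounds
        threshold: float

    def satisfies(offered: dict, requirements: list) -> bool:
        """Check whether a provider's offered QoS values meet every requirement."""
        for req in requirements:
            value = offered.get(req.metric)
            if value is None:
                return False  # unspecified metric: the requirement cannot be verified
            if req.operator == "<=" and not value <= req.threshold:
                return False
            if req.operator == ">=" and not value >= req.threshold:
                return False
        return True

    # A toy SLA-style check: two requirements against one provider's offer.
    requirements = [QoSRequirement("response_time_ms", "<=", 200),
                    QoSRequirement("availability", ">=", 0.99)]
    offered = {"response_time_ms": 150, "availability": 0.995}
    print(satisfies(offered, requirements))  # True
    ```

    Real metamodels in the survey are far richer (units, statistical aggregation, SLA templates); the point here is only the requirement/capability matching pattern.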

    Information logistics and fog computing: The DITAS approach

    Data-intensive applications are usually developed on Cloud resources, whose service delivery model helps build reliable and scalable solutions. However, especially in the context of Internet of Things-based applications, Cloud Computing comes with some limitations: data generated at the edge of the network are processed at the core of the network, producing security, privacy, and latency issues. On the other hand, Fog Computing is emerging as an extension of Cloud Computing, where resources located at the edge of the network are used in combination with cloud services. The goal of this paper is to present the approach adopted in the recently started DITAS project: the design of a Cloud platform that optimizes the development of data-intensive applications by providing information logistics tools able to deliver information and computation resources at the right time, in the right place, and with the right quality. Applications developed with DITAS tools live in a Fog Computing environment, where data move from the cloud to the edge and vice versa to provide secure, reliable, and scalable solutions with excellent performance.

    Native Study of the Behaviour of Magnetite Nanoparticles for Hyperthermia Treatment during the Initial Moments of Intravenous Administration

    Magnetic nanoparticles (MNPs) present outstanding properties that make them suitable as therapeutic agents for hyperthermia treatments. Since the main safety concerns of MNPs stem from their inherent instability in a biological medium, strategies both to achieve long-term stability and to monitor hazardous MNP degradation are needed. We combined a dynamic approach relying on flow field-flow fractionation (FFF) with multidetection and conventional techniques to explore frame-by-frame changes of MNPs injected into a simulated biological medium, to hypothesize the interaction mechanism they undergo when surrounded by a saline, protein-rich environment, and to understand their behaviour at the most critical point of intravenous administration. In the first moments of MNP administration in the patient, MNPs change their surroundings from a favorable to an unfavorable medium, i.e., a complex biological fluid such as blood; the particles evolve from a synthetic identity to a biological identity, a transition that needs to be carefully monitored. The dynamic approach presented herein represents an optimal alternative to conventional batch techniques, which can monitor only size, shape, surface charge, and aggregation phenomena as averaged information, since they cannot resolve the different populations present in the sample or give accurate information about the evolution or temporary instability of MNPs. The designed FFF method, equipped with a multidetection system, enabled the separation of the particle populations, providing selective information on their morphological evolution and on nanoparticle–protein interactions in the very first steps of infusion. Results showed that, in a dynamic biological setting and following interaction with serum albumin, PP-MNPs retain their colloidal properties, supporting their safety profile for intravenous administration.

    Energy-Aware Process Design Optimization (2013 International Conference on Cloud and Green Computing)

    Cloud computing has a big impact on the environment, since the energy consumption and resulting CO2 emissions of data centers are comparable to those of worldwide airline traffic. Many researchers are addressing this issue by proposing methods and techniques to increase data center energy efficiency. Focusing on the application level, this paper proposes a method to support process design by optimizing configuration and deployment. In particular, by measuring and monitoring suitable metrics, the presented approach supports the designer in selecting how to modify the process deployment so as to continuously guarantee good performance and energy efficiency. Process adaptation can be required when inefficiencies occur or when, although the system is efficient, there is still room for improvement.
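    The kind of metric-driven selection among deployment alternatives described above can be sketched as a weighted scoring of candidate configurations. This is a toy model under our own assumptions (two metrics, linear normalisation, equal weights), not the paper's actual method:

    ```python
    def pick_deployment(configs, perf_weight=0.5, energy_weight=0.5):
        """Select the deployment configuration with the best weighted score.

        Each config is a dict with 'name', 'throughput' (higher is better),
        and 'energy_kwh' (lower is better); metrics are normalised to [0, 1]
        against the maximum observed value.
        """
        max_tp = max(c["throughput"] for c in configs)
        max_en = max(c["energy_kwh"] for c in configs)

        def score(c):
            # Reward normalised throughput, penalise normalised energy use.
            return (perf_weight * c["throughput"] / max_tp
                    + energy_weight * (1 - c["energy_kwh"] / max_en))

        return max(configs, key=score)

    # Hypothetical alternatives for the same business process.
    configs = [
        {"name": "all-on-premise", "throughput": 900, "energy_kwh": 12.0},
        {"name": "hybrid-cloud",   "throughput": 800, "energy_kwh": 7.0},
        {"name": "all-cloud",      "throughput": 600, "energy_kwh": 5.0},
    ]
    print(pick_deployment(configs)["name"])  # hybrid-cloud
    ```

    Re-running the selection as monitored metrics change captures the continuous-adaptation loop the paper argues for: a configuration that is optimal today may lose to another once its measured energy or performance drifts.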

    CO2-Aware Adaptation Strategies for Cloud Applications

    The increasing utilization of cloud resources raises several issues related to their environmental impact and, more generally, their sustainability. Recently, most contributions have focused on energy efficiency achieved through better physical and virtual resource management. The present paper instead considers the application level, extending the focus to the reduction of CO2 emissions related to the execution of applications. We aim to exploit adaptivity through the design of an Application Controller that, by enacting the right adaptation strategy for a given context, improves the trade-off between QoS and CO2 emission reduction. The effectiveness of the approach has been shown by running an HPC application in a federated cloud infrastructure.

    EUCIP Core Level: A Certification Guide for the ICT Professional

    The EUCIP (European Certification of Informatics Professionals) certification attests the core competencies of the ICT professional in planning, building, and operating information systems. The book introduces, in an organised and detailed way, all the topics covered by the EUCIP Core Level certification exams and, at the same time, provides a set of references useful for studying them in depth. The Core level requires passing a multiple-choice test on various topics related to information and communication technologies; for example, questions cover IT project management, algorithms, database design, and wireless communication systems. The manual follows Syllabus 3.0 (the latest version), the document that lists in detail the competencies required to pass the certification tests, and is divided into three sections covering the fundamental phases of the ICT system life cycle: Plan, Build, and Operate. The text has been approved by AICA, the only association in Italy entitled to verify full coverage of the exam programme (Syllabus) and the overall correctness of the work.

    Assessing and improving measurability of process performance indicators based on quality of logs

    The efficiency and effectiveness of business processes are usually evaluated by Process Performance Indicators (PPIs), which are computed from process event logs. PPIs can be insightful only when they are measurable, i.e., reliable. This paper proposes to define PPI measurability on the basis of the quality of the data in the process logs. Then, based on this definition, a framework for PPI measurability assessment and improvement is presented. For the assessment, we propose novel definitions of PPI accuracy, completeness, consistency, timeliness, and volume that contextualise the traditional definitions in the data quality literature to the case of process logs. For the improvement, we define a set of guidelines for improving the measurability of a PPI. These guidelines may concern improving existing event logs, for instance through data imputation, implementing or enhancing process monitoring systems, or updating the PPI definitions. A case study in a large institution is discussed to show the feasibility and practical value of the proposed framework.
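    As a toy illustration of assessing one such dimension from log data, the sketch below computes a simple completeness ratio over event records. The definition (fraction of events with all required fields populated) is our own simplification, not the paper's contextualised definition:

    ```python
    def log_completeness(events, required_fields):
        """Fraction of events in which every required field is present and non-empty."""
        if not events:
            return 0.0
        complete = sum(
            1 for e in events
            if all(e.get(f) not in (None, "") for f in required_fields)
        )
        return complete / len(events)

    # Hypothetical event log: one event is missing its timestamp.
    events = [
        {"case_id": "1", "activity": "register", "timestamp": "2021-01-01T09:00"},
        {"case_id": "1", "activity": "approve",  "timestamp": ""},
        {"case_id": "2", "activity": "register", "timestamp": "2021-01-02T10:30"},
    ]
    required = ["case_id", "activity", "timestamp"]
    print(log_completeness(events, required))  # 2 of 3 events are complete
    ```

    A PPI over activity durations would depend on the timestamp field, so a low completeness score for that field would flag the PPI as unreliable and suggest an improvement action such as data imputation or enhanced monitoring.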