
    Performance Improvements Using Dynamic Performance Stubs

    This thesis proposes a new methodology to extend the software performance engineering process. Common performance measurement and tuning principles mainly aim to improve the software function itself: the application source code is studied and improved independently of the overall system performance behavior. Moreover, the optimization of a software function has to be done without an estimate of the expected optimization gain. This often leads to under- or over-optimization and hence fails to utilize the system sufficiently. The proposed performance improvement methodology and framework, called dynamic performance stubs, addresses these shortcomings by evaluating the overall system performance improvement. This is achieved by simulating the performance behavior of the original software functionality at an adjustable optimization level, prior to the real optimization. It thus enables the software performance analyst to determine the system's overall performance behavior under the possible outcomes of different improvement approaches. Moreover, using the dynamic performance stubs methodology, a cost-benefit analysis of different optimizations with respect to performance behavior can be carried out. The approach is to replace the software bottleneck by a stub that combines a simulation of the software functionality with the ability to adjust the performance behavior in one or more performance aspects of the replaced software function. A general methodology for using dynamic performance stubs, as well as several methodologies for simulating different performance aspects, is discussed. Finally, several case studies show the application and usability of the dynamic performance stubs approach.
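As a rough illustration of the idea, the sketch below (Python, with invented numbers and function names) replaces a hypothetical bottleneck with a stub that reproduces its functional result while burning an adjustable fraction of its measured CPU time; the thesis itself covers further performance aspects beyond CPU time.

```python
import time

def cpu_stub(base_cost_s, optimization_level):
    """Burn CPU for the bottleneck's measured cost, scaled down by the
    assumed optimization level (0.0 = no gain, 0.9 = 90% faster)."""
    deadline = time.perf_counter() + base_cost_s * (1.0 - optimization_level)
    while time.perf_counter() < deadline:
        pass  # busy-wait to mimic CPU-bound work

def simulated_result(x):
    return x  # placeholder: real stubs replay recorded outputs

def bottleneck_stub(x, optimization_level=0.0):
    """Stand-in for the real bottleneck: returns a simulated result while
    reproducing a tunable fraction of the original runtime."""
    cpu_stub(base_cost_s=0.050, optimization_level=optimization_level)  # assumed: 50 ms measured
    return simulated_result(x)

if __name__ == "__main__":
    # Sweep the assumed optimization level to estimate system-level gain.
    for level in (0.0, 0.5, 0.9):
        t0 = time.perf_counter()
        bottleneck_stub(42, optimization_level=level)
        print(f"level={level:.1f} took {time.perf_counter() - t0:.3f}s")
```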

    Autonomic Management And Performance Optimization For Cloud Computing Services

    Cloud computing has become an increasingly important computing paradigm. It offers three levels of on-demand services to cloud users: software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). The success of cloud services heavily depends on the effectiveness of cloud management strategies. In this dissertation work, we aim to design and implement an automatic cloud management system to improve application performance, increase platform efficiency, and optimize resource allocation. For large-scale multi-component applications, especially web-based cloud applications, parameter setting is crucial to service availability and quality. The increasing system complexity requires an automatic and efficient application configuration strategy. To improve the quality of application services, we propose a reinforcement learning (RL)-based autonomic configuration framework. It is able to adapt application parameter settings not only to variations in workload but also to changes in virtual resource allocation. The RL approach is enhanced with an efficient initialization policy to reduce the learning time for online decisions. Experiments on a Xen-based virtual cluster with TPC-W benchmarks show that the framework can drive applications into an optimal configuration in fewer than 25 iterations. For cloud platform services, one of the key challenges is to efficiently adapt the offered platforms to the virtualized environment while maintaining their service features. MapReduce has become an important distributed parallel programming paradigm, and offering MapReduce as a cloud service presents an attractive usage model for enterprises. In a virtual MapReduce cluster, interference between virtual machines (VMs) degrades the performance of map and reduce tasks and renders existing data-locality-aware task scheduling policies, such as delay scheduling, ineffective. On the other hand, virtualization offers an extra opportunity for data locality between co-hosted VMs. To address these issues, we present a task scheduling strategy that mitigates interference while preserving task data locality for MapReduce applications. The strategy combines an interference-aware scheduling policy, based on a task performance prediction model, with an adaptive delay scheduling algorithm for data locality improvement. Experimental results on a 72-node Xen-based virtual cluster show that the scheduler achieves a speedup of 1.5 to 6.5 times for individual jobs and improves system throughput by up to 1.9 times in comparison with four other MapReduce schedulers. Cloud computing also requires resource configuration in a real-time manner. In such virtualized environments, both VMs and hosted applications need to be configured on the fly to adapt to system dynamics. The interplay between the VM and application layers further complicates the cloud configuration problem: independent tuning of each aspect may not lead to optimal system-wide performance. In this work, we propose a framework for coordinated configuration of VMs and resident applications. At the heart of the framework is a model-free hybrid reinforcement learning (RL) approach, which combines the advantages of the Simplex method and RL, and is further enhanced by system-knowledge-guided exploration policies. Experimental results on Xen-based virtualized environments with TPC-W and TPC-C benchmarks demonstrate that the framework is able to drive a virtual server cluster into an optimal or near-optimal configuration state on the fly, in response to workload changes. It improves system throughput by more than 30% over independent tuning strategies. In comparison with coordinated tuning strategies based on basic RL or the Simplex algorithm alone, the hybrid RL algorithm gains 25% to 40% in throughput.
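The following is a minimal sketch of the kind of RL-driven configuration loop described above, using tabular Q-learning over two hypothetical server knobs (max_clients, keep_alive_timeout) and a simulated reward standing in for measured throughput; the dissertation's actual framework, initialization policy, and parameter set are not reproduced here.

```python
import random
from collections import defaultdict

# Discrete configuration actions: nudge one knob up or down, or keep it.
ACTIONS = [("max_clients", +25), ("max_clients", -25),
           ("keep_alive_timeout", +5), ("keep_alive_timeout", -5),
           (None, 0)]

def measure_reward(config):
    """Placeholder for a real measurement of application performance under
    the current workload; here, distance to a hypothetical sweet spot."""
    target = {"max_clients": 150, "keep_alive_timeout": 15}
    return -sum(abs(config[k] - v) for k, v in target.items())

def tune(episodes=25, alpha=0.5, gamma=0.8, epsilon=0.2):
    q = defaultdict(float)
    config = {"max_clients": 50, "keep_alive_timeout": 60}
    for _ in range(episodes):
        state = tuple(sorted(config.items()))
        if random.random() < epsilon:          # explore
            action = random.choice(ACTIONS)
        else:                                  # exploit learned Q-values
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        knob, delta = action
        if knob:
            config[knob] = max(1, config[knob] + delta)
        reward = measure_reward(config)
        next_state = tuple(sorted(config.items()))
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    return config

print(tune())
```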

    Architectural stability of self-adaptive software systems

    This thesis studies the notion of stability in software engineering, with the aim of understanding its dimensions, facets, and aspects, as well as characterising it. The thesis further investigates behavioural stability at the architectural level, as a property concerned with the architecture's capability to maintain the expected quality of service and accommodate runtime changes, in order to delay architectural drift and phase-out caused by continued failure to meet quality requirements. The research aims to provide systematic and methodological support for analysing, modelling, designing, and evaluating architectural stability. The novelty of this research is the consideration of stability during runtime operation, focusing on the stable provision of quality of service without violations. As the runtime dimension is associated with adaptations, the research investigates stability in the context of self-adaptive software architectures, where runtime stability is challenged by the quality of adaptation, which in turn affects the quality of service. The evaluation focuses on effectiveness, scale, and accuracy in handling runtime dynamics, using self-adaptive cloud architectures.
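The thesis characterises stability conceptually; as one plausible operationalization of "stable provision of quality of service without violations", the toy metric below (an assumption, not the thesis's definition) reports the fraction of observation intervals in which a QoS sample met its SLO.

```python
def stability(qos_samples, slo=200.0):
    """Fraction of observation intervals in which the measured QoS metric
    (e.g. response time in ms) met the SLO, i.e. provision without violations."""
    met = sum(1 for s in qos_samples if s <= slo)
    return met / len(qos_samples)

# e.g. response times sampled across adaptation cycles
print(stability([120, 180, 240, 150, 90], slo=200.0))  # -> 0.8
```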

    Characterizing and exploiting application behavior under data corruption

    Shrinking semiconductor technologies come at the cost of higher susceptibility to hardware faults that render systems unreliable. Traditionally, reliability solutions aim to protect all hardware parts of a system equally and exhaustively, in order to maintain the illusion of correctly operating hardware. With increasing error rates driving reliability costs up, this approach is no longer sustainable. Hardware faults can be masked at various levels by fault-masking effects, so not all hardware faults manifest as the same outcome in an application's execution. Motivated by this fact, we propose a shift to vulnerability-driven unequal protection of a given structure (or same-level structures), where the less-vulnerable parts of a structure are protected less than their more-vulnerable counterparts. For that purpose, in this thesis, we quantitatively investigate how the effect of hardware-induced data corruption on application behavior varies. We develop a portable software-implemented fault-injection (SWIFI) tool. On top of performing single-bit fault injections to capture their effects on application behavior, our tool is data-level aware and tracks the corrupted data to obtain more of their characteristics. This enables an analysis of the effects of single-bit data corruptions in relation to the corrupted data's characteristics and the executing workload. Through an extensive set of fault-injection experiments on programs from the NAS Parallel Benchmarks suite, we obtain detailed insight into how vulnerability varies, among other factors, across application data types and bit locations within the data. The results show that the vulnerability of data can be characterized by their high-level characteristics (e.g. usage type, size, user- and memory-space location). Moreover, we conclude that application data are vulnerable in parts. All of this shows that there is potential in exploiting application behavior under data corruption: exhaustive equal protection can be avoided by safely shifting to vulnerability-driven unequal protection within given structures, reducing reliability overheads without a significant impact on fault coverage. To that end, we demonstrate the potential benefits of exploiting the varying vulnerability characteristics of application data in the case of a data cache.
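A minimal sketch of the single-bit SWIFI idea, assuming Python and double-precision data; the tool described in the thesis is far more capable (data-level tracking, workload awareness), and the kernel, trial count, and outcome classes here are illustrative only.

```python
import random
import struct

def flip_bit(value: float, bit: int) -> float:
    """Inject a single-bit fault into the 64-bit representation of a float."""
    (raw,) = struct.unpack("<Q", struct.pack("<d", value))
    (out,) = struct.unpack("<d", struct.pack("<Q", raw ^ (1 << bit)))
    return out

def inject_and_run(kernel, data, trials=1000):
    """Run the kernel repeatedly, corrupting one random bit of one random
    element per trial, and classify the outcome against a golden run."""
    golden = kernel(list(data))
    outcomes = {"masked": 0, "sdc": 0, "crash": 0}
    for _ in range(trials):
        faulty = list(data)
        i = random.randrange(len(faulty))
        faulty[i] = flip_bit(faulty[i], random.randrange(64))
        try:
            result = kernel(faulty)
            outcomes["masked" if result == golden else "sdc"] += 1
        except (OverflowError, ValueError, ZeroDivisionError):
            outcomes["crash"] += 1
    return outcomes

# Toy kernel: mean of a small array; real targets are NPB programs.
print(inject_and_run(lambda xs: sum(xs) / len(xs), [1.0, 2.0, 3.0, 4.0]))
```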

    Data-Intensive Computing in Smart Microgrids

    Microgrids have recently emerged as the building block of a smart grid, combining distributed renewable energy sources, energy storage devices, and load management in order to improve power system reliability, enhance sustainable development, and reduce carbon emissions. At the same time, rapid advancements in sensor and metering technologies, wireless and network communication, as well as cloud and fog computing are leading to the collection and accumulation of large amounts of data (e.g., device status data, energy generation data, consumption data). The application of big data analysis techniques (e.g., forecasting, classification, clustering) on such data can optimize power generation and operation in real time by accurately predicting electricity demand, discovering electricity consumption patterns, and developing dynamic pricing mechanisms. An efficient and intelligent analysis of the data will enable smart microgrids to detect and recover from failures quickly, respond to electricity demand swiftly, supply more reliable and economical energy, and enable customers to have more control over their energy use. Overall, data-intensive analytics can provide effective and efficient decision support for all of the producers, operators, customers, and regulators in smart microgrids, in order to achieve holistic smart energy management, including energy generation, transmission, distribution, and demand-side management. This book contains an assortment of relevant novel research contributions that provide real-world applications of data-intensive analytics in smart grids and contribute to the dissemination of new ideas in this area.
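As a small example of the forecasting techniques mentioned above, here is a seasonal-naive demand-forecast baseline in Python on synthetic hourly load data; the data, horizon, and method are illustrative assumptions, not drawn from any chapter of the book.

```python
import numpy as np

def forecast_next_hour(hourly_load):
    """Seasonal-naive baseline: predict the next hour's demand as the mean
    of the same hour of day across the available history."""
    hours = len(hourly_load)
    days = hours // 24
    history = np.asarray(hourly_load[:days * 24]).reshape(days, 24)
    next_hour_of_day = hours % 24
    return float(history[:, next_hour_of_day].mean())

# Two days of synthetic demand (kW), higher in the evening hours
demand = [30 + 20 * (12 <= h % 24 <= 20) for h in range(48)]
print(forecast_next_hour(demand))  # forecast for hour 0 of day 3 -> 30.0
```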

    Energy-Efficient Computing for Mobile Signal Processing

    Mobile devices have proliferated rapidly, and the deployment of handheld devices continues to increase at a spectacular rate. As today's devices not only support advanced signal processing of wireless communication data but also provide rich sets of applications, contemporary mobile computing requires both demanding computation and efficiency. Most mobile processors combine general-purpose processors, digital signal processors, and hardwired application-specific integrated circuits to satisfy their high-performance and low-power requirements. However, such a heterogeneous platform is inefficient in area, power, and programmability. Improving the efficiency of programmable mobile systems is a critical challenge and an active area of computer systems research. SIMD (single instruction, multiple data) architectures are very effective for the data-level-parallelism-intensive algorithms of mobile signal processing. However, new characteristics of advanced wireless and multimedia algorithms require architectural re-evaluation to achieve better energy efficiency. Therefore, fourth-generation wireless protocols and high-definition mobile video algorithms are analyzed to enhance a wide-SIMD architecture. The key enhancements are 1) a programmable crossbar to support complex data alignment, 2) SIMD partitioning to support fine-grain SIMD computation, and 3) fused operations to accelerate frequently used instruction pairs. Near-threshold computation has been attractive in low-power architecture research because it balances performance and power. To further improve energy efficiency in mobile computing, near-threshold computation is applied to a wide-SIMD architecture. The proposed near-threshold wide-SIMD architecture, Diet SODA, presents interesting architectural design decisions, such as 1) a very wide SIMD datapath to compensate for the performance degradation induced by near-threshold computation and 2) a scatter-gather data prefetcher to exploit the large latency gap between memory and the SIMD datapath. Although near-threshold computation provides excellent energy efficiency, it suffers from increased delay variations. A systematic study of delay variations in near-threshold computing is performed, and simple techniques, structural duplication and voltage/frequency margining, are explored to tolerate and mitigate delay variations in near-threshold wide-SIMD architectures. This dissertation analyzes representative wireless and multimedia mobile signal processing algorithms, proposes an energy-efficient programmable platform, and evaluates its performance and power. A main theme of this dissertation is that the performance and efficiency of programmable embedded systems can be significantly improved with a combination of parallel SIMD and near-threshold computation.
Ph.D., Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/86356/1/swseo_1.pd
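The enhancements above are hardware features, but the flavor of wide-SIMD execution and crossbar-style data alignment can be suggested in software; the NumPy sketch below is a loose analogy under that assumption, not a model of the SODA/Diet SODA datapath.

```python
import numpy as np

# Wide-SIMD style data-level parallelism: one vector op replaces a scalar loop.
x = np.arange(64, dtype=np.float32)
coeff = np.float32(0.5)

scalar = np.empty_like(x)
for i in range(len(x)):          # scalar datapath: one lane at a time
    scalar[i] = coeff * x[i] + 1.0

simd = coeff * x + 1.0           # multiply-add applied to all 64 lanes at once
assert np.allclose(scalar, simd)

# Crossbar-style data alignment: an arbitrary lane permutation before compute;
# here, deinterleaving complex samples (r0,i0,r1,i1,...) into real/imag halves.
interleaved = np.arange(16, dtype=np.float32)
perm = np.concatenate([np.arange(0, 16, 2), np.arange(1, 16, 2)])
aligned = interleaved[perm]      # a programmable shuffle of SIMD lanes
print(aligned)
```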

    Control over the Cloud: Offloading, Elastic Computing, and Predictive Control

    The thesis studies the use of cloud-native software and platforms to implement critical closed-loop control. It considers technologies that provide low-latency and reliable wireless communication, in terms of edge clouds and massive MIMO, but also approaches industrial IoT and the services of a distributed cloud as an extension of commercial off-the-shelf software and systems. First, the thesis defines the cloud control challenge, as control over the cloud and controller offloading. This is followed by a demonstration of closed-loop control, using MPC, running on a testbed representing the distributed cloud. The testbed is implemented using an IoT device, clouds, next-generation wireless technology, and a distributed execution platform. Platform details are provided and the feasibility of the approach is shown. The evaluation includes relocating an online MPC to various locations in the distributed cloud. Offloaded control is examined next, through further evaluation of cloud-native software and frameworks. This is followed by three controller designs tailored for use with the cloud. The first controller solves MPC problems in parallel to implement a variable-horizon controller. The second is a hierarchical design, in which rate switching is used to implement constrained control, with a local and a remote mode. The third design focuses on reliability: the MPC problem is extended to include recovery paths that represent a fallback mode, used by a control client if it experiences connectivity issues. An implementation is detailed and examined. The final part of the thesis focuses on latency and congestion. A cloud control client can experience long and variable delays, from the network and from computation, and the services it uses can become overloaded. These problems are approached by using predicted control inputs, dynamically adjusting the control frequency, and horizontally scaling the cloud service. Several examples are shown through simulation and on real clouds, including admitting control clients into a cluster that becomes temporarily overloaded.
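A toy sketch of the latency-masking idea from the final part, in Python: the client buffers the predicted input sequence returned by a (here faked) remote MPC solve and consumes it while the next reply is delayed. The plant model, gains, and service interface are invented for illustration.

```python
import collections

class CloudMPCClient:
    """Control client that buffers the predicted input sequence returned by
    a remote MPC solver and falls back on it while the next solution is
    delayed by network or computation latency."""

    def __init__(self, horizon=10):
        self.horizon = horizon
        self.plan = collections.deque()    # buffered predicted inputs

    def control(self, response):
        if response is not None:           # fresh plan arrived from the cloud
            self.plan = collections.deque(response[:self.horizon])
        if self.plan:
            return self.plan.popleft()     # consume the prediction buffer
        return 0.0                         # safe fallback when buffer runs dry

def fake_remote_solve(state):              # stand-in for the cloud MPC service
    return [-0.5 * state * 0.9 ** k for k in range(10)]

client = CloudMPCClient()
state = 1.0
reply = fake_remote_solve(state)           # only the first solve arrives in time
for step in range(5):
    u = client.control(reply if step == 0 else None)  # later replies delayed
    state = 0.9 * state + u                # simple scalar plant
    print(f"step={step} u={u:+.3f} state={state:+.3f}")
```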

    Enabling Local Governments to Design and Implement Anti-Corruption Strategies through Dynamic Performance Management and Governance: A Case Study of an Italian Municipality

    The main purpose of this research is to frame the possible causal relationships between corruption in public procurement and the performance of local governments. To this end, a fully integrated research design is adopted that dynamically mixes quantitative and qualitative methods at every phase of the research process. The Dynamic Performance Management (DPM) and Dynamic Performance Governance (DPG) approaches, supported by in-depth unstructured interviews, formal modelling, and quantitative simulations, are adopted to analyse a representative case study of a small Italian municipality where several corruption episodes in procurement activities occurred in the early 2000s. In particular, the local authority in question was disbanded twice for mafia-related infiltration and is currently in financial distress. The work studies the possible outcomes of those facts on the organisational performance of the municipality as a whole, drawing on three sources: qualitative primary data generated by convergent face-to-face interviews; secondary data retrieved from final court judgments and open-access repositories; and an extensive literature review. At the beginning, a broad literature overview helps the reader become aware of the contents, theories, and boundaries of corruption. Thereafter, an examination of the most widespread measurement strategies and of measures to prevent or repress corruption is proposed.
Overall, a special focus is placed on procurement in local public contexts. Following a discussion of the possible advantages and disadvantages of the most common public-sector governance paradigms, in terms of the opportunities for and constraints on corruption they create, the DPM and DPG views are explored to understand their theoretical contribution in supporting policy- and decision-makers to curb corruption phenomena. Afterwards, based on coding techniques applied to unstructured face-to-face interviews held with Public Officials in 2019, an exploratory-descriptive analysis of the selected case study clarifies the extent to which the investigated corruption events affected the overall performance of the municipality over time. A systemic, dynamic performance management perspective is then adopted to frame the cause-and-effect relationships emerging from the case study. Adopting a DPM approach allows politicians and public managers to design, implement, and assess feasible, effective, and efficient anti-corruption strategies at the local government level. More precisely, using in a DPM chart performance drivers adjusted for the risk of corruption linked to its structural and individualistic causes may remedy not only the recognised ambiguities and flaws of 'red-flag' indicators in public procurement, but also the failures of mechanistic management controls in detecting the actual presence of corruption, providing decision-makers with timely signals of the harmful effects produced by such clandestine practices. In addition, emphasising the role of community civic morality at the system level may support the understanding of some counterintuitive results of past research on corruption in public procurement, and help deduce to what extent investments in Information and Communication Technologies (ICTs) and personnel training may enhance local government accountability and expertise. With regard to managers' and staff professionalism as an individualistic cause of corruption, political patronage stemming from legal opportunities proves significant, in this case study, in explaining poor procurement performance over time. Within this framework, the DPM view also allows corruption to be distinguished more clearly from resource waste due to non-corrupt actions. In summary, a DPM approach may help public managers keep their 'cognitive radars' constantly (re)active, so as to identify and suppress unlawful practices in procurement, detect emerging malfeasance that traditional static diagnostic and interactive management controls might overlook, foster ethical learning, and enhance the public value generated. Finally, the DPG perspective may be effective in supporting the formulation and fine-tuning of collaboration-based anti-corruption strategies, and in probing their feasibility and impacts over time within local areas characterised by weak governance structures. Hence, building on previous successful projects by local governments around the world, the last part of this thesis uses a DPG approach to outline and evaluate an anti-corruption strategy for the case under scrutiny in a collaborative-governance setting, framing possible synergies and interdependencies among relevant participants as critical levers against the systemic risk of corruption at the local level.
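To make the DPM modelling style concrete, below is a deliberately simplified stock-and-flow simulation in Python, in the spirit of a DPM chart; every variable name, coefficient, and equation is an illustrative assumption, not the calibrated model developed in the thesis.

```python
# Minimal system-dynamics sketch (Euler integration): a "community civic
# morality" stock dampens a corruption-risk driver, which in turn erodes
# procurement performance; ICT investment and training act as policy levers.

def simulate(years=10, dt=0.25, ict_investment=0.1, training=0.1):
    morality, performance = 0.5, 0.7        # stocks, normalised to [0, 1]
    for _ in range(int(years / dt)):
        # performance driver adjusted for corruption risk (assumed form)
        corruption_risk = max(0.0, 0.6 - 0.5 * morality - 0.3 * training)
        morality += dt * (0.05 * ict_investment - 0.02 * corruption_risk)
        performance += dt * (0.04 * (1 - corruption_risk) - 0.08 * corruption_risk)
        morality = min(max(morality, 0.0), 1.0)
        performance = min(max(performance, 0.0), 1.0)
    return corruption_risk, morality, performance

print(simulate())                # baseline scenario
print(simulate(training=0.4))    # more personnel training lowers the risk
```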