225 research outputs found

    Energy Efficient Servers

    Get PDF
    Computer science

    Multiprocessing techniques for unmanned multifunctional satellites: Final report

    Get PDF
    Simulation of an on-board multiprocessor for long-lived unmanned space satellite control

    Autonomic Management And Performance Optimization For Cloud Computing Services

    Get PDF
    Cloud computing has become an increasingly important computing paradigm. It offers three levels of on-demand services to cloud users: software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). The success of cloud services heavily depends on the effectiveness of cloud management strategies. In this dissertation work, we aim to design and implement an automatic cloud management system to improve application performance, increase platform efficiency, and optimize resource allocation.

    For large-scale multi-component applications, especially web-based cloud applications, parameter setting is crucial to service availability and quality. The increasing system complexity requires an automatic and efficient application configuration strategy. To improve the quality of application services, we propose a reinforcement learning (RL)-based autonomic configuration framework. It is able to adapt application parameter settings not only to variations in workload, but also to changes in virtual resource allocation. The RL approach is enhanced with an efficient initialization policy to reduce the learning time for online decisions. Experiments on a Xen-based virtual cluster with TPC-W benchmarks show that the framework can drive applications into an optimal configuration in fewer than 25 iterations.

    For cloud platform services, one of the key challenges is to efficiently adapt the offered platforms to the virtualized environment while maintaining their service features. MapReduce has become an important distributed parallel programming paradigm, and offering a MapReduce cloud service presents an attractive usage model for enterprises. In a virtual MapReduce cluster, interference between virtual machines (VMs) degrades the performance of map and reduce tasks and renders existing data locality-aware task scheduling policies, such as delay scheduling, no longer effective. On the other hand, virtualization offers an extra opportunity for data locality between co-hosted VMs. To address these issues, we present a task scheduling strategy that mitigates interference while preserving task data locality for MapReduce applications. The strategy includes an interference-aware scheduling policy, based on a task performance prediction model, and an adaptive delay scheduling algorithm for data locality improvement. Experimental results on a 72-node Xen-based virtual cluster show that the scheduler achieves a speedup of 1.5 to 6.5 times for individual jobs and improves system throughput by up to 1.9 times in comparison with four other MapReduce schedulers.

    Cloud computing also requires resource configuration in a real-time manner. In such virtualized environments, both virtual machines (VMs) and hosted applications need to be configured on the fly to adapt to system dynamics. The interplay between the VM and application layers further complicates the problem of cloud configuration, and independent tuning of each aspect may not lead to optimal system-wide performance. In this work, we propose a framework for coordinated configuration of VMs and resident applications. At the heart of the framework is a model-free hybrid reinforcement learning (RL) approach, which combines the advantages of the Simplex method and the RL method and is further enhanced by system-knowledge-guided exploration policies. Experimental results on Xen-based virtualized environments with TPC-W and TPC-C benchmarks demonstrate that the framework is able to drive a virtual server cluster into an optimal or near-optimal configuration state on the fly, in response to changes in workload. It improves system throughput by more than 30% over independent tuning strategies. In comparison with coordinated tuning strategies based on the basic RL or Simplex algorithm, the hybrid RL algorithm gains a 25% to 40% throughput improvement.
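    To make the flavor of the approach concrete, the sketch below shows a generic epsilon-greedy Q-learning loop that tunes a single discretized configuration parameter toward higher throughput. It is an illustration only, not the dissertation's implementation: the state/action discretization, the learning constants, and the measure_throughput() reward are assumptions standing in for the live throughput measurements used in the actual experiments.

```cpp
// Minimal sketch (not the dissertation's implementation) of an epsilon-greedy
// Q-learning loop that tunes one discretized application parameter, e.g. a
// pool size. measure_throughput() is a hypothetical stand-in for the reward
// signal obtained from live throughput measurements.
#include <algorithm>
#include <array>
#include <cstdio>
#include <random>

constexpr int kLevels = 8;      // discretized settings of the parameter
constexpr int kActions = 3;     // 0 = decrease, 1 = keep, 2 = increase

// Hypothetical reward: throughput peaks when the parameter sits at level 5.
double measure_throughput(int level) {
    return 100.0 - 12.0 * (level - 5) * (level - 5);
}

int main() {
    std::array<std::array<double, kActions>, kLevels> q{};   // Q-table, zero-initialized
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> coin(0.0, 1.0);
    std::uniform_int_distribution<int> random_action(0, kActions - 1);

    const double alpha = 0.5, gamma = 0.8, epsilon = 0.2;
    int level = 0;                                            // initial configuration

    for (int iter = 0; iter < 200; ++iter) {
        // Epsilon-greedy action selection over the current state (parameter level).
        int action = random_action(rng);
        if (coin(rng) >= epsilon) {
            action = 0;
            for (int a = 1; a < kActions; ++a)
                if (q[level][a] > q[level][action]) action = a;
        }

        // Apply the action and observe the reward for the new configuration.
        int next = std::clamp(level + (action - 1), 0, kLevels - 1);
        double reward = measure_throughput(next);

        // Standard Q-learning update.
        double best_next = *std::max_element(q[next].begin(), q[next].end());
        q[level][action] += alpha * (reward + gamma * best_next - q[level][action]);

        level = next;
    }
    std::printf("final configuration level: %d\n", level);
    return 0;
}
```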

    HPC memory systems: Implications of system simulation and checkpointing

    Get PDF
    The memory system is a significant contributor to most of the current challenges in computer architecture: application performance bottlenecks and operational costs in large data centers such as HPC supercomputers. With the advent of emerging memory technologies, exploring novel memory-hierarchy designs for HPC systems is an open invitation for computer architecture researchers to improve and optimize current designs and deployments. System simulation is the preferred approach for architectural exploration because of its low cost compared to prototyping hardware, acceptable performance estimates, and accurate energy consumption predictions. Despite the broad presence and extensive usage of system simulators, their validation is not standardized, either because the main purpose of the simulator is not to mimic real hardware, or because the design assumptions are too narrow for a particular computer architecture topic.

    This thesis provides the first steps toward a systematic methodology for validating system simulators against real systems. We unveil the real machine's micro-architectural parameters through a set of specially crafted micro-benchmarks. The unveiled parameters are used to upgrade the simulation infrastructure in order to obtain higher accuracy in the simulation domain. To evaluate that accuracy, we propose the retirement factor, an extension of a well-known application performance methodology, which provides a new metric to measure the impact of simulator parameter tuning when searching for the most accurate configuration. We further present the delay queue, a modification to the memory controller that imposes a configurable delay on all memory transactions that reach the main memory devices; evaluated using the retirement factor, the delay queue allows us to identify the sources of deviation between the simulation infrastructure and the real system.

    Memory accesses directly affect application performance, both on the real machine and in simulation accuracy. From single-read accesses to a unique memory location up to simultaneous read/write operations on single or multiple memory locations, HPC applications' memory usage differs from workload to workload. A property that gives a glimpse of an application's memory usage is the workload's memory footprint. In this work, we found a link between the memory footprint of HPC workloads and simulation performance.

    Current trends in HPC data-center memory deployments and the memory footprints of current HPC applications led us to envision an opportunity to include emerging memory technologies as part of the reliability support of HPC systems. Emerging memory technologies such as 3D-stacked DRAM are being deployed in current HPC systems, but in limited quantities compared with standard DRAM storage, making them suitable for low-memory-footprint HPC applications. We exploit and evaluate this characteristic by enabling a Checkpoint-Restart library to support a heterogeneous memory system deployed with an emerging memory technology. Our implementation imposes negligible overhead while offering a simple interface to allocate, manage, and migrate data sets between heterogeneous memory systems. Moreover, we show that using an emerging memory technology is not by itself a solution to performance bottlenecks; correct data placement and careful code implementation are critical to obtaining the best computing performance.
    Overall, this thesis provides a technique for validating main memory system simulators when integrated into a simulation infrastructure and compared to real systems. In addition, we explored a link between a workload's memory footprint and simulation performance on current HPC workloads. Finally, we enabled low-memory-footprint HPC applications with resilience support while transparently benefiting from emerging memory deployments.
    Postprint (published version)
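    As an illustration of the kind of specially crafted micro-benchmark that can unveil memory-hierarchy parameters on a real machine, the sketch below runs a classic pointer-chasing loop over working sets of increasing size; latency jumps in its output roughly mark the cache-level boundaries. It is a generic example, not the thesis' benchmark suite, and the buffer sizes and iteration counts are arbitrary assumptions.

```cpp
// A generic pointer-chasing micro-benchmark, sketched as an illustration of how
// memory-hierarchy latencies can be measured on a real machine; it is not the
// thesis' own micro-benchmark suite.
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

// Build a single random cycle over [0, n) with Sattolo's algorithm, so that the
// chase visits every element before repeating.
std::vector<std::size_t> build_cycle(std::size_t n) {
    std::vector<std::size_t> perm(n);
    std::iota(perm.begin(), perm.end(), std::size_t{0});
    std::mt19937_64 rng{1};
    for (std::size_t i = n - 1; i > 0; --i) {
        std::uniform_int_distribution<std::size_t> pick(0, i - 1);
        std::swap(perm[i], perm[pick(rng)]);
    }
    return perm;
}

// Average latency per dependent load when chasing the cycle for `steps` loads.
double chase_latency_ns(std::size_t n, std::size_t steps = std::size_t{1} << 22) {
    std::vector<std::size_t> next = build_cycle(n);
    std::size_t idx = 0;
    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < steps; ++i)
        idx = next[idx];                      // each load depends on the previous one
    auto t1 = std::chrono::steady_clock::now();
    volatile std::size_t sink = idx;          // keep the chain from being optimized away
    (void)sink;
    return std::chrono::duration<double, std::nano>(t1 - t0).count() / steps;
}

int main() {
    // Sweep the working set from 4 KiB to 64 MiB; latency jumps roughly mark the
    // L1 / L2 / L3 / DRAM boundaries of the machine under test.
    for (std::size_t kb = 4; kb <= 64 * 1024; kb *= 2) {
        std::size_t n = kb * 1024 / sizeof(std::size_t);
        std::printf("%8zu KiB : %6.2f ns per load\n", kb, chase_latency_ns(n));
    }
    return 0;
}
```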

    Simulating Nonlinear Neutrino Oscillations on Next-Generation Many-Core Architectures

    Get PDF
    In this work, an astrophysical simulation code, XFLAT, is developed to study neutrino oscillations in supernovae. XFLAT is a hybrid modular code designed to exploit multiple levels of parallelism through MPI, OpenMP, and SIMD instructions (vectorization). It can run on both the CPU and the Xeon Phi co-processor, the latter of which is based on the Intel Many Integrated Core Architecture (MIC). The performance of XFLAT on various system configurations and physics scenarios has been analyzed. In addition, the impact of I/O and of multi-node configurations on Xeon Phi-equipped heterogeneous supercomputers, such as Stampede at the Texas Advanced Computing Center (TACC), was investigated.
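    The abstract names three levels of parallelism; the skeleton below illustrates how they typically compose in a hybrid code: MPI ranks across nodes, OpenMP threads within a rank, and a vectorizable inner loop for SIMD. It is a generic sketch with a made-up state update, not XFLAT itself.

```cpp
// Generic skeleton (not XFLAT itself) of the three levels of parallelism named
// above: MPI across ranks, OpenMP threads within a rank, and a vectorizable
// inner loop for SIMD. Compile e.g. with: mpicxx -fopenmp -O3 hybrid.cpp
#include <mpi.h>
#include <omp.h>
#include <cmath>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, nranks = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    // Each rank owns a slice of a hypothetical global state vector.
    const std::size_t local_n = 1 << 20;
    std::vector<double> state(local_n, 1.0 + rank);

    double local_sum = 0.0;

    // OpenMP threads split the slice; the simple inner loop body lets the
    // compiler apply SIMD vectorization on each thread.
    #pragma omp parallel for simd reduction(+:local_sum)
    for (std::size_t i = 0; i < local_n; ++i) {
        state[i] = std::sin(state[i]) * std::cos(state[i]);   // made-up update
        local_sum += state[i];
    }

    // Combine the partial results across ranks.
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        std::printf("global sum over %d ranks: %f\n", nranks, global_sum);

    MPI_Finalize();
    return 0;
}
```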

    Enabling Fairness in Cloud Computing Infrastructures

    Full text link
    Cloud computing has emerged as a key technology over the past few years, evidenced by the fact that 93% of organizations are either running applications in or experimenting with Infrastructure-as-a-Service (IaaS) clouds. Hence, to meet the demands of a large target audience, IaaS cloud service providers consolidate applications belonging to multiple tenants. However, consolidation leads to performance interference between applications, as they end up competing for shared resources and violating the QoS of the executing tenants. This dissertation investigates the implications of interference in consolidated cloud computing environments in order to enable fairness in the execution of applications across tenants.

    In this context, the dissertation identifies three key issues in cloud computing infrastructures. First, tenants using IaaS public clouds share multi-core datacenter servers. In such a situation, applications belonging to different tenants may compete for shared architectural resources such as the last-level cache (LLC) and memory bandwidth, slowing down application execution. This necessitates a technique that can accurately estimate the slowdown caused by multi-tenant execution; such slowdown estimates can be used to bill tenants appropriately, enabling fairness among tenants. Second, in private datacenters, where performance degradation cannot be tolerated, it becomes critical to detect interference and investigate its root cause. Under such circumstances, there is a need for a real-time, lightweight, and scalable mechanism that can detect performance degradation and identify the contended resource (I/O, network, CPU, or shared cache). Finally, the advent of microservice computing environments calls for rethinking resource management strategies in multi-tenant execution scenarios. Specifically, the visibility provided by the microservice execution framework can be exploited to achieve high throughput and resource utilization while still meeting Service Level Agreements (SLAs). To enable this, we propose techniques that dynamically batch and reorder requests propagating through the individual microservice stages within an application.

    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/149844/1/ramsri_1.pd
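    As a concrete illustration of batching and reordering requests at a microservice stage, the sketch below keeps queued requests in a priority queue ordered by remaining SLA slack and dispatches a small batch at a time. The Request fields, the slack heuristic, and the batch size are assumptions for illustration, not the dissertation's actual policy.

```cpp
// Minimal sketch of reordering queued requests at one microservice stage by
// remaining SLA slack; the Request fields and the slack heuristic are assumptions.
#include <cstdio>
#include <queue>
#include <vector>

struct Request {
    int id;
    double deadline_ms;        // absolute SLA deadline
    double est_service_ms;     // estimated time this stage needs
};

// Order requests so that the one with the least slack (deadline minus
// estimated service time) is dispatched first.
struct BySlack {
    double now_ms;
    bool operator()(const Request& a, const Request& b) const {
        double slack_a = (a.deadline_ms - now_ms) - a.est_service_ms;
        double slack_b = (b.deadline_ms - now_ms) - b.est_service_ms;
        return slack_a > slack_b;   // priority_queue pops the smallest slack first
    }
};

int main() {
    double now = 0.0;
    std::priority_queue<Request, std::vector<Request>, BySlack> queue{BySlack{now}};
    queue.push({1, 120.0, 10.0});
    queue.push({2, 40.0, 15.0});   // tight deadline: should run first
    queue.push({3, 80.0, 5.0});

    // Dispatch a batch of up to 2 requests, least slack first.
    for (int k = 0; k < 2 && !queue.empty(); ++k) {
        Request r = queue.top();
        queue.pop();
        std::printf("dispatching request %d (slack %.1f ms)\n",
                    r.id, (r.deadline_ms - now) - r.est_service_ms);
    }
    return 0;
}
```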

    Parallel programming in biomedical signal processing

    Get PDF
    Dissertation for obtaining the degree of Master in Biomedical Engineering.

    Patients with neuromuscular and cardiorespiratory diseases need to be monitored continuously. This constant monitoring gives rise to huge amounts of multivariate data, which need to be processed as soon as possible so that their most relevant features can be extracted. Parallel processing, an area of computer science, comes naturally as a way to address this problem. For parallel processing to succeed, pre-existing signal processing algorithms must be adapted to the modern architectures of computer systems with several processing units.

    In this work, parallel processing techniques are applied to biosignals, connecting computer science to the biomedical domain. Several considerations are made on how to design parallel algorithms for signal processing, following the data-parallel paradigm. The emphasis is on algorithm design rather than on the computing systems that execute these algorithms; nonetheless, shared-memory and distributed-memory systems are discussed.

    Two signal processing tools integrating some of the parallel programming concepts discussed in this work were developed. These tools allow fast and efficient analysis of long-term biosignals. The two kinds of analysis focus on heart rate variability and breathing frequency, targeting electrocardiograms and respiratory signals, respectively. The proposed tools make use of the several processing units that most current computers include, giving the clinician a fast tool without requiring a system specifically set up to run parallel programs.
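    A minimal sketch of the data-parallel idea described above is given below: a long ECG record is split across OpenMP threads, each of which scans its own chunk for R-peaks, and the partial beat counts are reduced into a mean heart rate. The synthetic signal and the threshold-based peak detector are deliberate simplifications and are not the algorithms used by the developed tools.

```cpp
// Illustrative data-parallel sketch (not the dissertation's tool): a long ECG
// record is split into chunks, each scanned for R-peaks by a separate OpenMP
// thread; the synthetic signal and the threshold detector are simplifications.
#include <omp.h>
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t fs = 250;                    // sampling rate in Hz (assumed)
    const std::size_t n = fs * 60 * 60;            // one hour of samples
    std::vector<float> ecg(n);

    // Synthetic signal: a sharp "beat" once per second plus a small baseline wave.
    for (std::size_t i = 0; i < n; ++i) {
        float beat = (i % fs == 0) ? 1.0f : 0.0f;
        ecg[i] = beat + 0.05f * std::sin(2.0f * 3.14159f * i / fs);
    }

    std::size_t beats = 0;

    // Data-parallel step: each thread scans its own contiguous chunk of samples.
    #pragma omp parallel for reduction(+:beats) schedule(static)
    for (std::size_t i = 1; i < n - 1; ++i) {
        // Count local maxima above a fixed threshold as R-peaks.
        if (ecg[i] > 0.5f && ecg[i] >= ecg[i - 1] && ecg[i] > ecg[i + 1])
            ++beats;
    }

    double mean_hr = 60.0 * beats / (static_cast<double>(n) / fs);   // beats per minute
    std::printf("detected %zu beats, mean heart rate %.1f bpm\n", beats, mean_hr);
    return 0;
}
```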

    Autonomic Performance-Aware Resource Management in Dynamic IT Service Infrastructures

    Get PDF
    Model-based techniques are a powerful approach to engineering autonomic and self-adaptive systems. This thesis presents a model-based approach for proactive and autonomic performance-aware resource management in dynamic IT infrastructures. At the core of the approach is an architecture-level modeling language for describing aspects related to performance and resource management in such environments. With this approach, it is possible to autonomically find suitable system configurations at the model level.