11 research outputs found
Media gateway using a GPU
Master's degree in Computer and Telematics Engineering
Towards resource-aware computing for task-based runtimes and parallel architectures
Current large-scale systems show increasing power demands, to the point that power has become a huge strain on facilities and budgets. The increasing restrictions on the power consumption of High Performance Computing (HPC) systems and data centers have forced hardware vendors to include power-capping capabilities in their commodity processors. Power capping opens up new opportunities for applications to directly manage their power behavior at the user level. However, constraining power consumption causes the individual sockets of a parallel system to deliver different performance levels under the same power cap, even when they are identically designed. Modern chips suffer from heterogeneous power consumption due to manufacturing issues, a problem known as manufacturing or process variability. As a result, systems that do not account for this variability suffer performance degradation and wasted power. To avoid such a negative impact, users and system administrators must actively counteract manufacturing variability.
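On Linux, the power-capping capability the abstract refers to is commonly exposed through Intel RAPL via the powercap sysfs tree (the exact path and counter range vary by platform and are assumptions here). A minimal sketch of deriving average socket power from two energy-counter readings:

```python
# Sketch: average socket power from Intel RAPL energy counters as exposed
# by the Linux powercap sysfs interface. The path below is the conventional
# one for package 0 and may differ per kernel/platform (an assumption).

RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"

def avg_power_watts(e_start_uj, e_end_uj, seconds, max_range_uj=2**32):
    """Average power between two microjoule readings, handling wraparound."""
    delta_uj = e_end_uj - e_start_uj
    if delta_uj < 0:                 # the energy counter wrapped around
        delta_uj += max_range_uj
    return delta_uj / seconds / 1e6  # microjoules per second -> watts

# Example: 50 J consumed over 2 s -> 25 W
print(avg_power_watts(1_000_000, 51_000_000, 2.0))  # 25.0
```

Reading `RAPL_ENERGY` twice with a sleep in between gives a live measurement; setting a cap works analogously through the sibling `constraint_0_power_limit_uw` file.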
In this thesis we show that parallel systems benefit, in terms of both performance and energy efficiency, from taking the consequences of manufacturing variability into account. To evaluate our work we have also implemented our own task-based version of the PARSEC benchmark suite, which allows us to test our methodology using state-of-the-art parallelization techniques and real-world workloads. We present two approaches to mitigating manufacturing variability: power redistribution at the runtime level, and power- and variability-aware job scheduling at the system-wide level. A parallel runtime system can effectively deal with this new kind of performance heterogeneity by compensating for the uneven effects of power capping. In the context of a NUMA node composed of several multi-core sockets, our system optimizes the power and concurrency levels assigned to each socket to maximize performance. Applied transparently within the parallel runtime system, it requires no programmer interaction such as changing the application source code or manually reconfiguring the parallel system. We compare our novel runtime analysis with an offline approach and demonstrate that it achieves equal performance at a fraction of the cost. In the second approach presented in this thesis, we show that it is possible to predict the impact of this variability on specific applications by using variability-aware power prediction models. Based on these power models, we propose two job scheduling policies that consider the effects of manufacturing variability for each application and that ensure that power consumption stays under a system-wide power budget. We evaluate our policies under different power budgets and traffic scenarios, consisting of both single- and multi-node parallel applications.
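The runtime-level power redistribution described above can be caricatured as a feedback loop: periodically move a watt of budget from the socket making the most progress to the one making the least, keeping the node total constant. This is an illustrative sketch under assumed names and step sizes, not the thesis's actual policy:

```python
# Hedged sketch of runtime power rebalancing between sockets that deliver
# uneven performance under the same cap (manufacturing variability).
# `step` and `floor` values are illustrative assumptions.

def rebalance(caps, throughput, step=1.0, floor=40.0):
    """Move `step` watts from the fastest socket to the slowest one,
    conserving the total budget and keeping every cap above `floor`."""
    fast = max(range(len(caps)), key=lambda i: throughput[i])
    slow = min(range(len(caps)), key=lambda i: throughput[i])
    if fast != slow and caps[fast] - step >= floor:
        caps[fast] -= step
        caps[slow] += step
    return caps

# Socket 0 is faster under an equal 60 W cap, so it donates a watt:
caps = rebalance([60.0, 60.0], throughput=[110.0, 95.0])
print(caps)  # [59.0, 61.0]
```

Iterating this loop drives the sockets toward equal throughput, which is the intuition behind compensating the uneven effects of power capping at the runtime level.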
Transferring Pareto Frontiers across Heterogeneous Hardware Environments
Configurable software systems provide options that affect functional and non-functional system properties; selecting options forms system configurations with corresponding property values. Configurable software systems are now ubiquitously used by software developers, system administrators, and end users due to their inherent adaptability. However, this flexibility comes at a cost: the configuration space explodes exponentially with the addition of new options, making exhaustive system analysis impossible.
Nevertheless, many use cases require a deep understanding of a configurable system's behavior, especially if the system is redeployed across heterogeneous hardware. This leads to the following research questions: (1) Is it possible to transfer information about a configurable system's properties across a collection of heterogeneous hardware? (2) Is it possible to transfer information about the relative optimality of configurations across heterogeneous hardware?
We address the first question by proposing an approach for transferring performance prediction models of configurable systems across heterogeneous hardware. This approach builds a predictor model to approximate performance on the source hardware, and a separate transferrer model to transfer the approximated performance values to the destination hardware. Experiments on three configurable software systems across 23 heterogeneous hardware environments demonstrated high accuracy (less than 10% error) for all studied combinations.
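The two-model structure can be sketched as follows; a plain least-squares line stands in for whatever learners the thesis actually uses, and all configuration options and measurements below are made-up numbers:

```python
# Sketch: a predictor approximates performance on the source hardware from
# a configuration option; a separate linear transferrer maps source
# predictions to the destination hardware. All data here is illustrative.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Source-hardware measurements for a numeric configuration option:
configs  = [1, 2, 3, 4]
perf_src = [10.0, 19.8, 30.1, 40.2]     # predictor training data
a, b = fit_line(configs, perf_src)      # predictor model

# Destination hardware is roughly 2x slower plus a constant overhead;
# only a small transfer sample is measured there:
perf_dst = [21.0, 41.0, 61.5, 81.0]
t, c = fit_line(perf_src, perf_dst)     # transferrer model

# Predict an unmeasured configuration (5) on the destination hardware:
pred_dst = t * (a * 5 + b) + c
```

The appeal of the split is that the expensive predictor is trained once on the source hardware, while only the cheap transferrer needs data from each new destination.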
We address the second question by proposing an approach for approximating and transferring Pareto frontiers of optimal configurations (based on multiple system properties) across heterogeneous hardware. This approach builds an individual predictor and transferrer for each analyzed system property. Using the trained models, we can build an approximated Pareto frontier on the source hardware and transfer this frontier to the destination hardware. Experiments on five configurable systems across 34 heterogeneous hardware environments demonstrated the feasibility of the approach, and showed that the accuracy of a transferred frontier depends mainly on the approximation rather than on the transfer process, while being linearly proportional to the training sample size.
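For concreteness, a Pareto frontier over two properties to be minimized (say, runtime and energy) can be extracted from measured configurations as below; the thesis's contribution is transferring such frontiers across hardware, which this small sketch does not attempt:

```python
# Minimal Pareto-frontier extraction for two minimized properties,
# e.g. (runtime, energy) per configuration. Illustrative data only.

def pareto_frontier(points):
    """Keep the points not dominated by any other point
    (dominated = some other point is <= in both coordinates)."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points)]

configs = [(10, 5.0), (8, 6.0), (12, 4.5), (9, 7.0), (8, 8.0)]
print(sorted(pareto_frontier(configs)))  # [(8, 6.0), (10, 5.0), (12, 4.5)]
```

With one predictor/transferrer per property, the same routine can be run on predicted values for the destination hardware to obtain the transferred frontier.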
Algorithms for fast implementation of high efficiency video coding
Recently, demand for video content in multimedia communication has grown, increasing the storage and bandwidth requirements posed to internet service providers. This made it necessary for the telecommunication standardization sector of the International Telecommunication Union (ITU-T) to launch a new video compression standard addressing the twin challenges of lowering both digital file sizes on storage media and transmission bandwidths in networks. The High Efficiency Video Coding (HEVC) standard, also known as H.265, was launched in November 2013 to address these challenges. The new standard was able to halve existing media file sizes and bandwidths, but its computational complexity increases HEVC encoding time by about 400%. This study proposes a solution to this problem based on three key areas of the HEVC. Firstly, two fast motion estimation algorithms based on triangle and pentagon structures are proposed to implement motion estimation and compensation in a shorter time. Secondly, an enhanced and optimized inter-prediction mode selection is proposed. Thirdly, an enhanced intra-prediction mode scheme with reduced latency is suggested. Based on the test model of the HEVC reference software, each individual algorithm reduces the encoding time across all video classes by an average of 20-30%, with a best-case reduction of 70%, at a negligible loss in coding efficiency and video quality. In practice, these algorithms can enhance the performance of the HEVC compression standard and enable higher-resolution and higher-frame-rate video encoding compared to state-of-the-art techniques.
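Pattern-based fast motion estimation of the kind described evaluates the sum of absolute differences (SAD) at a small set of candidate offsets around the current best position and recenters until the center wins. The 3-point "triangle" below is illustrative; the thesis's exact triangle and pentagon geometries are not reproduced here:

```python
# Hedged sketch of pattern-based block motion estimation. The pattern
# offsets are an assumption standing in for the thesis's structures.
import numpy as np

TRIANGLE = [(-1, 1), (1, 1), (0, -1)]  # assumed 3-point pattern

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def pattern_search(cur, ref, bx, by, bs=8, iters=8):
    """Motion vector for the bs x bs block of `cur` at (bx, by) in `ref`."""
    mvx = mvy = 0
    block = cur[by:by+bs, bx:bx+bs]
    for _ in range(iters):
        # Cost at the current center, then at each pattern offset.
        best = (sad(block, ref[by+mvy:by+mvy+bs, bx+mvx:bx+mvx+bs]), 0, 0)
        for dx, dy in TRIANGLE:
            x, y = bx + mvx + dx, by + mvy + dy
            if 0 <= x <= ref.shape[1] - bs and 0 <= y <= ref.shape[0] - bs:
                best = min(best, (sad(block, ref[y:y+bs, x:x+bs]), dx, dy))
        if (best[1], best[2]) == (0, 0):
            break                     # center already best: converged
        mvx, mvy = mvx + best[1], mvy + best[2]
    return mvx, mvy
```

Such searches touch only a handful of candidates per step instead of exhaustively scanning a search window, which is where the encoding-time reduction comes from.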
Towards Computational Efficiency of Next Generation Multimedia Systems
To address the throughput demands of complex applications (such as multimedia), a next-generation system designer needs to co-design and co-optimize the hardware and software layers. Hardware/software knobs must be tuned in synergy to increase throughput efficiency. This thesis provides such algorithmic and architectural solutions while considering new technology challenges (power caps and memory aging). The goal is to maximize throughput efficiency under timing and hardware constraints.
Design Space Exploration and Resource Management of Multi/Many-Core Systems
The increasing demand for processing more applications and related data on computing platforms has resulted in reliance on multi-/many-core chips, as they facilitate parallel processing. However, these platforms also need to be energy-efficient and reliable, and to perform secure computations in the interest of the whole community. This book provides perspectives on the aforementioned aspects from leading researchers, in terms of state-of-the-art contributions and upcoming trends.
Dynamic and Dual Streaming Methods for H.264 Video and Parallel Performance Modeling
Traditional approaches to streaming H.264 video over a network typically rely on a single method of transport (i.e., reliable or unreliable) and/or use static parameter values that can significantly degrade the perceptual quality of the received video. This dissertation presents a dynamic method for wireless channel selection during video streaming, and explores the latency and QoE improvements yielded by the FDSP dual streaming method.
The increased workload that results from these dynamic methods can counterproductively impair streaming performance, and therefore requires efficient use of the multiple cores typically present in both sender and receiver (or server and client). This dissertation therefore presents a performance cost model that can guide the parallelization of specific types of client- or server-side streaming components -- specifically, programs containing non-DOALL loops whose inter-iteration data dependences constrain their parallelism.
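The dissertation's actual cost model is not reproduced here; as a generic stand-in, an Amdahl-style model with a per-core synchronization overhead already captures why parallelizing a dependence-constrained (non-DOALL) loop can become counterproductive beyond some core count:

```python
# Hedged, generic cost model (not the dissertation's formulation): a
# fraction of each iteration is serialized by the dependence chain, and
# every extra core adds a fixed synchronization cost. Values are assumed.

def est_speedup(p, serial_frac, overhead_per_core=0.0):
    """Estimated speedup on p cores for a loop whose cross-iteration
    dependences force `serial_frac` of the work to run serially."""
    t_parallel = serial_frac + (1.0 - serial_frac) / p + overhead_per_core * p
    return 1.0 / t_parallel

# With nonzero sync overhead, adding cores eventually hurts:
best_p = max(range(1, 33), key=lambda p: est_speedup(p, 0.1, 0.005))
print(best_p)  # 13
```

A model of this shape lets a streaming component pick a core count near the knee of the curve instead of greedily using every available core.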
An innovative vision system for industrial applications
Unpublished doctoral thesis, defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Tecnología Electrónica y de las Comunicaciones. Defense date: 20-11-2015.
Despite the fact that computer vision systems play an important role in our society, their structure does not follow any standard. The implementation of computer vision applications requires high-performance platforms, such as GPUs or FPGAs, and very specialized image sensors. Nowadays, each manufacturer and research lab develops its own vision platform independently, without considering any inter-compatibility. This thesis introduces a new computer vision platform that can be used in a wide spectrum of applications. The characteristics of the platform were defined after the implementation of three different computer vision applications, based on a SoC, an FPGA, and a GPU, respectively. As a result, a new modular platform has been defined with the following interchangeable elements: sensor, image processing pipeline, processing unit, acceleration unit, and computer vision stack. This thesis also presents an FPGA-synthesizable algorithm for performing geometric transformations on the fly, with a latency under 90 horizontal lines. All the software elements of this platform have an Open Source license; over the course of this thesis, more than 200 patches were contributed to and accepted into different Open Source projects, such as the Linux kernel, the Yocto Project, and U-Boot, among others, promoting the ecosystem required to create a community around this novel system. The platform has been validated in an industrial product, the Qtechnology QT5022, used in diverse industrial applications, demonstrating the great advantages of a generic computer vision system as a platform for reusing elements and comparing results objectively.
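The claim of a latency under 90 horizontal lines suggests a streaming design: if every output pixel's source lies at most K lines behind the current input line, a K-line rolling buffer replaces full-frame storage. The vertical remap below is a hedged software sketch of that idea, not the thesis's actual FPGA algorithm:

```python
# Sketch of a streaming geometric transform with a K-line window.
# The remap (per-row vertical offsets) is an illustrative assumption.
import numpy as np

K = 90  # assumed maximum vertical displacement, in lines

def stream_transform(frame, offset):
    """Each output row y is copied from input row y - offset[y].
    Because 0 <= offset[y] < K, only the last K input lines must be
    buffered -- the frame never needs to be stored in full."""
    h, w = frame.shape
    out = np.zeros_like(frame)
    buf = {}
    for y in range(h):
        buf[y] = frame[y]        # a new input line arrives
        buf.pop(y - K, None)     # the oldest line leaves the window
        src = y - offset[y]
        if src in buf:           # source still inside the K-line window
            out[y] = buf[src]
    return out
```

On an FPGA the dictionary becomes K line buffers in block RAM, and output can start as soon as the window is deep enough, rather than after a full frame.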