The dynamic simultaneous multithreaded processor
This dissertation investigates diverse techniques to support multithreading in modern high-performance processors. The mechanisms studied expand the architecture of a high-performance superscalar processor to efficiently control the interaction between software-controlled and hardware-controlled multithreading. Additionally, dynamic speculative mechanisms are proposed to exploit thread-level parallelism (TLP) and instruction-level parallelism (ILP) on a Simultaneous Multithreading (SMT) architecture. First, the hybrid multithreaded execution model is discussed. This model combines software-controlled multithreading with hardware support for efficient context switching and thread scheduling. A thread scheduling technique called set scheduling is introduced and its impact on overall performance is described. An analytical model of hybrid multithreaded execution is developed and validated by simulation. Through stochastic simulation, we find that applying the hybrid multithreaded execution model results in higher processor utilization than traditional software-controlled multithreading. Next, in the main part of this dissertation, a new architecture is proposed: the Dynamic Simultaneous Multithreading (DSMT) processor. In this architecture, multiple threads are identified and created speculatively at runtime without compiler help. An SMT processor core then executes those threads. The performance of a DSMT processor was evaluated with a new execution-driven simulator developed specifically for this purpose. Our simulation-based experimental results show that the DSMT architecture has strong potential to improve an SMT processor's performance when only a single task is available for execution.
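As an illustration of the kind of analytical model referred to above, here is a textbook-style latency-hiding sketch, not the dissertation's actual model; the parameter names and example values are assumptions. Utilization is estimated from the number of resident threads, the run length between long-latency events, the stall latency, and the context-switch cost.

```python
def utilization(n_threads: int, run_len: float, latency: float, switch_cost: float) -> float:
    """Textbook-style utilization of a multithreaded core that switches
    context on long-latency events (illustrative, not the thesis's model).

    n_threads   -- threads resident in hardware contexts
    run_len     -- average cycles of useful work between long-latency events
    latency     -- stall cycles per event if nothing hides it
    switch_cost -- cycles spent switching to another ready thread
    """
    # Linear regime: the other threads' work cannot cover the stall latency.
    if (n_threads - 1) * (run_len + switch_cost) < latency:
        return n_threads * run_len / (run_len + latency + switch_cost)
    # Saturation regime: latency is fully hidden; only switch overhead remains.
    return run_len / (run_len + switch_cost)

for n in (1, 2, 4, 8):
    print(n, round(utilization(n, run_len=40, latency=200, switch_cost=4), 3))
```

With these assumed numbers, utilization rises roughly linearly with thread count until the stall latency is fully hidden, then saturates near run_len / (run_len + switch_cost).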
Techniques of design optimisation for algorithms implemented in software
The overarching objective of this thesis was to develop tools for parallelising, optimising,
and implementing algorithms on parallel architectures, in particular General Purpose
Graphics Processors (GPGPUs). Two projects were chosen from different application areas
in which GPGPUs are used: a defence application involving image compression, and a
modelling application in bioinformatics (computational immunology). Each project had its
own specific objectives, as well as supporting the overall research goal.
The defence / image compression project was carried out in collaboration with the Jet
Propulsion Laboratory. The specific questions were: to what extent an algorithm designed for bit-serial hardware, for the lossless compression of hyperspectral images on board unmanned aerial vehicles (UAVs), could be parallelised; whether GPGPUs could be used to implement that algorithm; and whether a software implementation, with or without GPGPU acceleration, could match the throughput of a dedicated hardware (FPGA) implementation.
The dependencies within the algorithm were analysed, and the algorithm parallelised. The
algorithm was implemented in software for GPGPU, and optimised. During the optimisation
process, profiling revealed less than optimal device utilisation, but no further optimisations
resulted in an improvement in speed: the design had hit a local maximum of performance.
Analysis of the arithmetic intensity and data-flow exposed flaws in the standard optimisation
metric of kernel occupancy used for GPU optimisation. Redesigning the implementation
with revised criteria (fused kernels, lower occupancy, and greater data locality) led to a new
implementation with 10x higher throughput. GPGPUs were shown to be viable for on-board
implementation of the CCSDS lossless hyperspectral image compression algorithm,
exceeding the performance of the hardware reference implementation, and providing
sufficient throughput for the next generation of image sensors as well.
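As a back-of-the-envelope illustration of why kernel occupancy alone can mislead, here is a roofline-style sketch under hypothetical device numbers, not the thesis's actual analysis: attainable throughput is bounded by min(peak compute, arithmetic intensity × memory bandwidth), so fusing kernels to remove intermediate global-memory traffic raises the bound even if occupancy falls.

```python
def roofline_gflops(intensity_flop_per_byte: float, peak_gflops: float, mem_bw_gbps: float) -> float:
    """Attainable GFLOP/s under the roofline model."""
    return min(peak_gflops, intensity_flop_per_byte * mem_bw_gbps)

# Hypothetical device: 4000 GFLOP/s peak, 320 GB/s memory bandwidth.
peak, bw = 4000.0, 320.0

# Two separate kernels: the intermediate array is written out and re-read,
# so each pass performs about 1 FLOP per 8 bytes moved (intensity = 0.125).
separate = roofline_gflops(0.125, peak, bw)

# Fused kernel: the intermediate stays in registers, halving memory traffic,
# so arithmetic intensity doubles to 0.25 FLOP/byte.
fused = roofline_gflops(0.25, peak, bw)

print(f"separate kernels: {separate:.0f} GFLOP/s, fused: {fused:.0f} GFLOP/s")
# Both are far below peak: the kernel is bandwidth-bound, so raising occupancy
# adds parallelism but cannot raise the bandwidth ceiling.
```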
The second project was carried out in collaboration with biologists at the University of
Arizona and involved modelling a complex biological system – VDJ recombination involved
in the formation of T-cell receptors (TCRs). The generation of immune receptors (T-cell receptors
and antibodies) by VDJ recombination is an enormously complex process, which can
theoretically synthesize greater than 10^18 variants. Originally thought to be a random
process, the underlying mechanisms clearly have a non-random nature that preferentially
creates a small subset of immune receptors in many individuals. Understanding this bias is a
longstanding problem in the field of immunology. Modelling the process of VDJ
recombination to determine the number of ways each immune receptor can be synthesized,
previously thought to be untenable, is a key first step in determining how this special
population is made. The computational tools developed in this thesis have allowed
immunologists for the first time to comprehensively test and invalidate a longstanding theory
(convergent recombination) for how this special population is created, while generating the
data needed to develop novel hypotheses.
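The counting step can be illustrated with a toy enumeration; this is a sketch under heavily simplified assumptions, and the gene segments, receptor sequence, and recombination rules below are hypothetical rather than the thesis's model. It counts how many (V segment, J segment, trimming) combinations could produce a given receptor sequence when the middle bases are treated as untemplated insertions.

```python
from itertools import product

def ways_to_make(target: str, v_genes: list[str], j_genes: list[str]) -> int:
    """Count (V, J, trimming) combinations that can yield `target`, where a
    receptor is modelled as a possibly trimmed V prefix, then untemplated
    inserted bases, then a possibly trimmed J suffix (toy model only)."""
    count = 0
    for v, j in product(v_genes, j_genes):
        for v_keep in range(len(v) + 1):            # bases kept from the V segment
            if not target.startswith(v[:v_keep]):
                continue
            for j_keep in range(len(j) + 1):        # bases kept from the J segment
                if v_keep + j_keep > len(target):
                    continue
                if j_keep and not target.endswith(j[-j_keep:]):
                    continue
                count += 1                          # the middle bases are insertions
    return count

# Hypothetical gene segments and a hypothetical receptor sequence.
v_genes = ["TGTGCC", "TGTGCT"]
j_genes = ["AACTTC", "TTCGGC"]
print(ways_to_make("TGTGCCAAAACTTC", v_genes, j_genes))
```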
Accelerating Transactional Memory by Exploiting Platform Specificity
Transactional Memory (TM) is one of the most promising alternatives to lock-based concurrency, but there still remain obstacles that keep TM from being utilized in the real world. Performance, in terms of high scalability and low latency, is always one of the most important keys to general-purpose usage. While most of the research in this area focuses on improving a specific single TM implementation on some default platform (a certain operating system, compiler and/or processor), little has been done on improving performance more generally and across platforms. We found that by exploiting platform specificity we could gain tremendous performance improvements and avoid unnecessary costs due to false assumptions about platform properties, not only on a single TM implementation but on many. In this dissertation, we present our findings in four sections: 1) we discover and quantify hidden costs from inappropriate compiler instrumentation, and provide suggestions and solutions; 2) we boost a set of mainstream timestamp-based TM implementations with the x86-specific hardware cycle counter; 3) we explore compiler opportunities to reduce the transaction abort rate by reordering read-modify-write operations; the technique can be applied to all TM implementations and could be more effective with some help from compilers; and 4) we coordinate the state-of-the-art Intel Haswell TSX hardware TM with a software TM, "Cohorts", and develop a safe and flexible hybrid TM, "HyCo", as the final performance boost in this dissertation. The impact of our research extends beyond Transactional Memory to broad areas of concurrent programming. Some of our solutions and discussions, such as the synchronization between accesses of the hardware cycle counter and memory loads and stores, can be used to boost concurrent data structures and many timestamp-based systems and applications. Others, such as the discussions of compiler instrumentation costs and reordering opportunities, provide additional insights to compiler designers. Our findings show that platform specificity must be taken into consideration to achieve peak performance.
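Below is a minimal sketch of the kind of timestamp-based validation used by global-version-clock software TMs; it is an illustrative toy, not the dissertation's implementation, and the class and function names are hypothetical. The dissertation's cycle-counter technique replaces a shared software clock like the one below with the x86 TSC, which is not shown here.

```python
import threading

class ConflictAbort(Exception):
    """Raised when validation detects a conflicting update."""

class VersionedCell:
    """One transactional memory word: a value, a version, and a write lock."""
    def __init__(self, value):
        self.value, self.version = value, 0
        self.lock = threading.Lock()

_clock = 0                       # shared software version clock (TSC stand-in)
_clock_lock = threading.Lock()

def _tick() -> int:
    global _clock
    with _clock_lock:
        _clock += 1
        return _clock

class Transaction:
    def __init__(self):
        self.start = _clock                   # clock snapshot at transaction begin
        self.reads, self.writes = {}, {}

    def read(self, cell):
        if cell in self.writes:               # read-own-write
            return self.writes[cell]
        if cell.version > self.start:         # cell updated since we began
            raise ConflictAbort()
        self.reads[cell] = cell.version
        return cell.value

    def write(self, cell, value):
        self.writes[cell] = value             # buffered until commit

    def commit(self):
        locked = sorted(self.writes, key=id)  # fixed lock order avoids deadlock
        for c in locked:
            c.lock.acquire()
        try:
            for c, seen in self.reads.items():   # validate the read set
                if c.version != seen:
                    raise ConflictAbort()
            ts = _tick()
            for c, value in self.writes.items():
                c.value, c.version = value, ts   # publish with a new timestamp
        finally:
            for c in locked:
                c.lock.release()

# Usage: increment a shared counter transactionally.
x = VersionedCell(0)
tx = Transaction()
tx.write(x, tx.read(x) + 1)
tx.commit()
print(x.value, x.version)        # 1 1
```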
Enabling Hyperscale Web Services
Modern web services such as social media, online messaging, web search, video streaming, and online banking often support billions of users, requiring data centers that scale to hundreds of thousands of servers, i.e., hyperscale. In fact, the world continues to expect hyperscale computing to drive more futuristic applications such as virtual reality, self-driving cars, conversational AI, and the Internet of Things. This dissertation presents technologies that will enable tomorrow’s web services to meet the world’s expectations.
The key challenge in enabling hyperscale web services arises from two important trends. First, over the past few years, there has been a radical shift in hyperscale computing due to an unprecedented growth in data, users, and web service software functionality. Second, modern hardware can no longer support this growth in hyperscale trends due to a decline in hardware performance scaling. To enable this new hyperscale era, hardware architects must become more aware of hyperscale software needs and software researchers can no longer expect unlimited hardware performance scaling. In short, systems researchers can no longer follow the traditional approach of building each layer of the systems stack separately. Instead, they must rethink the synergy between the software and hardware worlds from the ground up. This dissertation establishes such a synergy to enable futuristic hyperscale web services.
This dissertation bridges the software and hardware worlds, demonstrating the importance of that bridge in realizing efficient hyperscale web services via solutions that span the systems stack. The specific goal is to design software that is aware of new hardware constraints and architect hardware that efficiently supports new hyperscale software requirements. This dissertation spans two broad thrusts: (1) a software and (2) a hardware thrust to analyze the complex hyperscale design space and use insights from these analyses to design efficient cross-stack solutions for hyperscale computation.
In the software thrust, this dissertation contributes uSuite, the first open-source benchmark suite of web services built with a new hyperscale software paradigm, that is used in academia and industry to study hyperscale behaviors. Next, this dissertation uses uSuite to study software threading implications in light of today’s hardware reality, identifying new insights in the age-old research area of software threading. Driven by these insights, this dissertation demonstrates how threading models must be redesigned at hyperscale by presenting an automated approach and tool, uTune, that makes intelligent run-time threading decisions.
In the hardware thrust, this dissertation architects both commodity and custom hardware to efficiently support hyperscale software requirements. First, this dissertation characterizes commodity hardware’s shortcomings, revealing insights that influenced commercial CPU designs. Based on these insights, this dissertation presents an approach and tool, SoftSKU, that enables cheap commodity hardware to efficiently support new hyperscale software paradigms, improving the efficiency of real-world web services that serve billions of users, saving millions of dollars, and meaningfully reducing the global carbon footprint. This dissertation also presents a hardware-software co-design, uNotify, that redesigns commodity hardware with minimal modifications by using existing hardware mechanisms more intelligently to overcome new hyperscale overheads.
Next, this dissertation characterizes how custom hardware must be designed at hyperscale, resulting in industry-academia benchmarking efforts, commercial hardware changes, and improved software development. Based on this characterization's insights, this dissertation presents Accelerometer, an analytical model that estimates gains from hardware customization. Multiple hyperscale enterprises and hardware vendors use Accelerometer to make well-informed hardware decisions.
PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/169802/1/akshitha_1.pd
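As an illustration of the kind of first-order estimate such an analytical model enables, here is a generic Amdahl-style sketch with hypothetical parameters, not Accelerometer's actual formulation: the end-to-end gain from offloading work to an accelerator is limited both by the non-offloaded fraction and by per-offload overheads such as dispatch and data movement.

```python
def offload_speedup(offload_frac: float, accel_speedup: float, overhead_frac: float) -> float:
    """End-to-end speedup when `offload_frac` of the original runtime runs
    `accel_speedup`x faster on an accelerator, while dispatch and data-movement
    overheads add `overhead_frac` of the original runtime (illustrative only,
    not Accelerometer's actual model)."""
    new_time = (1 - offload_frac) + offload_frac / accel_speedup + overhead_frac
    return 1 / new_time

# Offloading 60% of a service's cycles to a 10x-faster accelerator while
# paying 5% of the original runtime in offload overheads:
print(round(offload_speedup(0.60, 10.0, 0.05), 2))   # ~1.96x, far from 10x
```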
Designs for increasing reliability while reducing energy and increasing lifetime
In recent decades, computing technology has experienced tremendous developments. For instance, transistor feature sizes have halved roughly every two years, consistently since Moore first stated his law. Consequently, the number of transistors and the core count per chip double with each generation. Similarly, petascale systems, capable of performing more than one quadrillion calculations per second, have been developed, and exascale systems are predicted to become available around the year 2020.
However, these developments face a reliability wall. For instance, transistor feature sizes are now so small that it becomes easier for high-energy particles to temporarily flip the state of a memory cell from 1 to 0 or from 0 to 1. Also, even if the fault rate per transistor stays constant with scaling, the increase in total transistor and core counts per chip will significantly increase the number of faults in future desktop and exascale systems. Moreover, circuit ageing is exacerbated by increased manufacturing variability and thermal stress, so the lifetimes of processor structures are becoming shorter.
On the other hand, given the limited power budget of computer systems such as mobile devices, it is attractive to scale down the supply voltage. However, when the voltage is scaled beyond the safe margin, especially to ultra-low levels, the error rate increases drastically.
In addition, new memory technologies such as NAND flash offer only a limited nominal lifetime; once this lifetime is exceeded, they can no longer guarantee that data is stored correctly, leading to data retention problems.
Due to these issues, reliability has become a first-class design constraint for contemporary computing, alongside power and performance. Reliability plays an even more important role when computer systems process sensitive and life-critical information such as health records, financial data, power regulation, and transportation.
In this thesis, we present several reliability designs for detecting and correcting errors that occur, for various reasons, in processor pipelines, L1 caches, and non-volatile NAND flash memories. We design reliability solutions to serve three main purposes. Our first goal is to improve the reliability of computer systems by detecting and correcting random, unpredictable errors such as bit flips or ageing errors. Second, we aim to reduce the energy consumption of computer systems by allowing them to operate reliably at ultra-low voltage levels. Third, we aim to increase the lifetime of new memory technologies by implementing efficient and low-cost reliability schemes.
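As a concrete illustration of the kind of detect-and-correct machinery such designs build on, here is a textbook single-error-correcting Hamming(7,4) code, not one of the specific schemes proposed in this thesis: a single bit flip in a codeword can be located and repaired from the parity syndrome.

```python
def hamming74_encode(data_bits):
    """Encode 4 data bits into a 7-bit single-error-correcting codeword."""
    code = [0] * 8                      # positions 1..7 used, even parity
    code[3], code[5], code[6], code[7] = data_bits
    for p in (1, 2, 4):                 # each parity bit covers positions whose index has that bit set
        code[p] = sum(code[i] for i in range(1, 8) if i & p and i != p) % 2
    return code[1:]

def hamming74_correct(codeword):
    """Return (corrected data bits, flipped position or 0 if clean)."""
    code = [0] + list(codeword)
    syndrome = 0
    for p in (1, 2, 4):
        if sum(code[i] for i in range(1, 8) if i & p) % 2:
            syndrome += p
    if syndrome:                        # the syndrome points at the erroneous bit
        code[syndrome] ^= 1
    return [code[3], code[5], code[6], code[7]], syndrome

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                            # inject a single bit flip at position 5
print(hamming74_correct(word))          # ([1, 0, 1, 1], 5)
```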
The readying of applications for heterogeneous computing
High performance computing is approaching a potentially significant change in architectural design. With pressure on the cost and sheer amount of power consumed, additional architectural features are emerging which require a rethink of the programming models deployed over the last two decades.
Today's emerging high performance computing (HPC) systems maximise performance per unit of power consumed, with the result that the constituent parts of the system are made up of a range of different specialised building blocks, each with its own purpose. This heterogeneity is not limited to the hardware components but extends to the mechanisms that exploit them. These multiple levels of parallelism, instruction sets and memory hierarchies result in truly heterogeneous computing in all aspects of the global system.
These emerging architectural solutions will require the software to exploit tremendous amounts of on-node parallelism and indeed programming models to address this are emerging. In theory, the application developer can design new software using these models to exploit emerging low power architectures. However, in practice, real industrial scale applications last the lifetimes of many architectural generations and therefore require a migration path to these next generation supercomputing platforms.
Identifying that migration path is non-trivial: with applications spanning many decades, consisting of many millions of lines of code and multiple scientific algorithms, any changes to the programming model will be extensive and invasive, and may turn out to be the incorrect model for the application in question.
This makes exploration of these emerging architectures and programming models using the applications themselves problematic. Additionally, the source code of many industrial applications is not available, due to either commercial or security sensitivity constraints.
This thesis highlights this problem by assessing current and emerging hardware with an industrial-strength code, and demonstrating the issues described. In turn, it looks at the methodology of using proxy applications in place of real industry applications to assess their suitability on the next generation of low-power HPC offerings. It shows there are significant benefits to be realised in using proxy applications, in that fundamental issues inhibiting exploration of a particular architecture are easier to identify and hence address.
Evaluations of maturity and performance portability are presented for a number of alternative programming methodologies on a number of architectures, highlighting the broader adoption of these proxy applications both within the author's own organisation and across the industry as a whole.
A new combination of hardware and software for motorsport vehicles based on the direct processing of graphical functions
Doctorate in Electronic Engineering.
The main motivation for the work presented here began with previously
conducted experiments with a programming concept at the time named
"Macro". These experiments led to the conviction that it would be possible to
build a system of engine control from scratch, which could eliminate many of
the current problems of engine management systems in a direct and intrinsic
way. It was also hoped that it would minimize the full range of software and
hardware needed to make a final and fully functional system.
Initially, this thesis proposes to make a comprehensive survey of the state of
the art in the specific area of software and corresponding hardware of
automotive tools and automotive ECUs. Problems arising from such software
will be identified, and it will be clear that practically all of these problems stem
directly or indirectly from the fact that we continue to make comprehensive use
of extremely long and complex "tool chains". Similarly, in the hardware, it will
be argued that the problems stem from the extreme complexity and
inter-dependency inside processor architectures. The conclusions are
presented through an extensive list of "pitfalls" which will be thoroughly
enumerated, identified and characterized.
Solutions will also be proposed for the various current issues and for the
implementation of these same solutions. All this final work will be part of a
"proof-of-concept" system called "ECU2010". The central element of this
system is the aforementioned "Macro" concept: a graphical block representing one of the many operations required in an automotive system, such as arithmetic, logic, filtering, integration and multiplexing functions, among others. The
end result of the proposed work is a single tool, fully integrated, enabling the
development and management of the entire system in one simple visual
interface. Part of the presented result relies on a hardware platform fully
adapted to the software, as well as enabling high flexibility and scalability in
addition to using exactly the same technology for ECU, data logger and
peripherals alike.
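To make the "Macro" idea concrete, here is a minimal sketch of a block-based dataflow evaluator of the kind such a visual tool might generate; the class, block names, and the toy injection map are hypothetical, not ECU2010's actual implementation.

```python
from typing import Callable, Dict, List, Optional

class Macro:
    """One graphical block: a named operation wired to upstream blocks."""
    def __init__(self, name: str, op: Optional[Callable[..., float]] = None,
                 inputs: Optional[List["Macro"]] = None):
        self.name, self.op, self.inputs = name, op, inputs or []

    def evaluate(self, sensors: Dict[str, float]) -> float:
        if not self.inputs:                     # source block: read a sensor value
            return sensors[self.name]
        return self.op(*(m.evaluate(sensors) for m in self.inputs))

# A toy injection-time map: base pulse scaled by load, corrected by lambda error.
rpm  = Macro("rpm")
load = Macro("load")
lam  = Macro("lambda")
base = Macro("base_pulse", lambda r, l: 2.0 + 0.001 * r * l, [rpm, load])
corr = Macro("lambda_corr", lambda p, x: p * (1.0 + 0.1 * (1.0 - x)), [base, lam])

print(corr.evaluate({"rpm": 3000.0, "load": 0.5, "lambda": 0.95}))  # injection pulse, ms
```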
Current systems rely on a mostly evolutionary path, only allowing online
calibration of parameters, but never the online alteration of their own
automotive functionality algorithms. By contrast, the system developed and
described in this thesis had the advantage of following a "clean-slate"
approach, whereby everything could be rethought globally. In the end, out of all
the system characteristics, "LIVE-Prototyping" is the most relevant feature,
allowing the adjustment of automotive algorithms (e.g. injection, ignition, lambda control, etc.) 100% online, keeping the engine constantly running, without ever having to stop or restart it to make such changes. This consequently
eliminates any "turnaround delay" typically present in current automotive
systems, thereby enhancing the efficiency and handling of such systems.A principal motivação para o trabalho que conduziu a esta tese residiu na
constatação de que os actuais métodos de modelação de centralinas
automóveis conduzem a significativos problemas de desenvolvimento e
manutenção. Como resultado dessa constatação, o objectivo deste trabalho
centrou-se no desenvolvimento de um conceito de arquitectura que rompe
radicalmente com os modelos state-of-the-art e que assenta num conjunto de
conceitos que vieram a ser designados de "Macro" e "Celular ECU". Com este
modelo pretendeu-se simultaneamente minimizar a panóplia de software e de
hardware necessários à obtenção de uma sistema funcional final.
Inicialmente, esta tese propõem-se fazer um levantamento exaustivo do
estado da arte na área específica do software e correspondente hardware das
ferramentas e centralinas automóveis. Os problemas decorrentes de tal
software serão identificados e, dessa identificação deverá ficar claro, que
praticamente todos esses problemas têm origem directa ou indirecta no facto
de se continuar a fazer um uso exaustivo de "tool chains" extremamente
compridas e complexas. De forma semelhante, no hardware, os problemas
têm origem na extrema complexidade e inter-dependência das arquitecturas
dos processadores. As consequências distribuem-se por uma extensa lista de
"pitfalls" que também serão exaustivamente enumeradas, identificadas e
caracterizadas.
São ainda propostas soluções para os diversos problemas actuais e
correspondentes implementações dessas mesmas soluções. Todo este
trabalho final faz parte de um sistema "proof-of-concept" designado
"ECU2010". O elemento central deste sistema é o já referido conceito de
“Macro”, que consiste num bloco gráfico que representa uma de muitas
operações necessárias num sistema automóvel, como sejam funções
aritméticas, lógicas, de filtragem, de integração, de multiplexagem, entre
outras. O resultado final do trabalho proposto assenta numa única ferramenta,
totalmente integrada que permite o desenvolvimento e gestão de todo o
sistema de forma simples numa única interface visual. Parte do resultado
apresentado assenta numa plataforma hardware totalmente adaptada ao
software, bem como na elevada flexibilidade e escalabilidade, para além de
permitir a utilização de exactamente a mesma tecnologia quer para a
centralina, como para o datalogger e para os periféricos.
Os sistemas actuais assentam num percurso maioritariamente evolutivo,
apenas permitindo a calibração online de parâmetros, mas nunca a alteração
online dos próprios algoritmos das funcionalidades automóveis. Pelo contrário,
o sistema desenvolvido e descrito nesta tese apresenta a vantagem de seguir
um "clean-slate approach", pelo que tudo pode ser globalmente repensado. No
final e para além de todas as restantes características, o
“LIVE-PROTOTYPING” é a funcionalidade mais relevante, ao permitir alterar
algoritmos automóveis (ex: injecção, ignição, controlo lambda, etc.) de forma
100% online, mantendo o motor constantemente a trabalhar e sem nunca ter
de o parar ou re-arrancar para efectuar tais alterações. Isto elimina
consequentemente qualquer "turnaround delay" tipicamente presente em
qualquer sistema automóvel actual, aumentando de forma significativa a
eficiência global do sistema e da sua utilização
Proceedings of the XIII Argentine Congress on Computer Science (Congreso Argentino de Ciencias de la Computación, CACIC)
Contents:
Computer architectures
Embedded systems
Service-oriented architectures (SOA)
Communication networks
Heterogeneous networks
Advanced networks
Wireless networks
Mobile networks
Active networks
Administration and monitoring of networks and services
Quality of Service (QoS, SLAs)
Computer security, authentication and privacy
Infrastructure for digital signatures and digital certificates
Vulnerability analysis and detection
Operating systems
P2P systems
Middleware
Grid infrastructure
Integration services (Web Services or .Net)
Red de Universidades con Carreras en Informática (RedUNCI)