Energy autonomous systems: future trends in devices, technology, and systems
The rapid evolution of electronic devices since the beginning of the nanoelectronics era has brought exceptional computational power to an ever-shrinking system footprint. Among other things, this has enabled the wealth of nomadic, battery-powered wireless systems (smartphones, MP3 players, GPS receivers, …) that society currently enjoys. Emerging integration technologies that enable even smaller volumes, with the associated increase in functional density, may bring about a new revolution in systems targeting wearable healthcare, wellness, lifestyle, and industrial monitoring applications.
PALS: Distributed Gradient Clocking on Chip
Consider an arbitrary network of communicating modules on a chip, each
requiring a local signal telling it when to execute a computational step. There
are three common solutions to generating such a local clock signal: (i) by
deriving it from a single, central clock source, (ii) by local, free-running
oscillators, or (iii) by handshaking between neighboring modules. Conceptually,
each of these solutions is the result of a perceived dichotomy in which
(sub)systems are either clocked or asynchronous. We present a solution and its
implementation that lies between these extremes. Based on a distributed
gradient clock synchronization algorithm, we show a novel design providing
modules with local clocks, the frequency bounds of which are almost as good as
those of free-running oscillators, yet neighboring modules are guaranteed to
have a phase offset substantially smaller than one clock cycle. Concretely,
parameters obtained from a 15 nm ASIC simulation running at 2 GHz yield
mathematical worst-case bounds of 20 ps on the phase offset for a
grid network of nodes.
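The behavior described above can be illustrated with a toy software model. This is not the paper's algorithm or its ASIC parameters: each node here simply runs a drifting local oscillator and applies a bounded correction toward the average phase of its neighbors, so local frequencies stay close to the free-running ones while neighbor phase offsets settle well below one cycle. All constants are illustrative.

```python
import random

random.seed(0)

N = 8            # nodes on a line (a simplification of the grid topology)
F_NOM = 1.0      # nominal frequency, in cycles per time unit
RHO = 0.01       # oscillator drift bound (+/- 1%)
GAIN = 0.05      # clip on the frequency correction, keeping clocks near free-running
STEPS = 2000
DT = 0.01

phase = [0.0] * N
drift = [random.uniform(-RHO, RHO) for _ in range(N)]

for _ in range(STEPS):
    new_phase = phase[:]
    for i in range(N):
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < N]
        # steer toward the average neighbor phase, but only by a bounded amount
        err = sum(phase[j] - phase[i] for j in nbrs) / len(nbrs)
        corr = max(-GAIN, min(GAIN, err))
        new_phase[i] = phase[i] + (F_NOM + drift[i] + corr) * DT
    phase = new_phase

max_nbr_offset = max(abs(phase[i] - phase[i + 1]) for i in range(N - 1))
print(f"max neighbor phase offset: {max_nbr_offset:.4f} cycles")
```

Because the correction bound exceeds the worst-case drift difference between neighbors, the feedback always dominates the drift once an offset builds up, which is why neighboring offsets stay bounded far below a full cycle even though no node ever leaves its narrow frequency band.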
Pipelined Asynchronous High Level Synthesis for General Programs
High-level synthesis (HLS) translates algorithms from a software programming language into hardware. We use the dataflow HLS methodology to translate programs into asynchronous circuits, implementing programs with asynchronous dataflow elements as hardware building blocks. We extend prior work in dataflow synthesis in the following aspects: (i) we propose Fluid to synthesize pipelined dataflow circuits for real-world programs with complex control flow, which are not supported in previous work; (ii) we propose PipeLink to permit pipelined access to shared resources in the dataflow circuit. A dataflow circuit results in distributed control and an implicitly pipelined implementation; however, resource sharing in the presence of pipelining is challenging in this context due to the absence of a global scheduler. Traditional solutions impose restrictions on pipelining to guarantee mutually exclusive access to the shared resource, but PipeLink removes such restrictions and can generate pipelined asynchronous dataflow circuits for shared function calls, pipelined memory accesses, and function pointers; (iii) we apply several dataflow optimizations to improve the quality of the synthesized dataflow circuits; (iv) we implement our system (Fluid + PipeLink) on the LLVM compiler framework, which allows us to take advantage of the optimization efforts of the compiler community; (v) we compare our system with a widely used academic HLS tool and two commercial HLS tools. Compared to the commercial (academic) HLS tools, our system achieves a 12X (20X) reduction in energy, a 1.29X (1.64X) improvement in throughput, and a 1.27X (1.61X) improvement in latency, at the cost of a 2.4X (1.61X) increase in area.
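The element-and-channel style of a dataflow circuit can be sketched in software: each stage is an independent process that fires when a token arrives on its input channel, and bounded channels provide backpressure, much like handshaking between asynchronous pipeline stages. This is an illustrative model only; the stage names (`source`, `mul_add`, `sink`) are invented for the sketch and are not Fluid/PipeLink primitives.

```python
from queue import Queue
from threading import Thread

# Tokens flow through bounded queues; each element fires when an input
# token is available, mimicking handshake-based asynchronous dataflow.

def source(out, values):
    for v in values:
        out.put(v)          # blocks when the channel is full (backpressure)
    out.put(None)           # end-of-stream token

def mul_add(inp, out, scale, bias):
    while True:
        v = inp.get()       # blocks until a token arrives (handshake)
        if v is None:
            out.put(None)
            break
        out.put(v * scale + bias)

def sink(inp, results):
    while True:
        v = inp.get()
        if v is None:
            break
        results.append(v)

c1, c2 = Queue(maxsize=2), Queue(maxsize=2)   # bounded channels
results = []
stages = [
    Thread(target=source, args=(c1, range(5))),
    Thread(target=mul_add, args=(c1, c2, 3, 1)),
    Thread(target=sink, args=(c2, results)),
]
for t in stages:
    t.start()
for t in stages:
    t.join()
print(results)  # [1, 4, 7, 10, 13]
```

Note that there is no global scheduler anywhere in this model: each stage advances purely on the availability of its input token and the space in its output channel, which is the property that makes shared resources (as addressed by PipeLink) nontrivial.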
Interim research assessment 2003-2005 - Computer Science
This report primarily serves as a source of information for the 2007 Interim Research Assessment Committee for Computer Science at the three technical universities in the Netherlands. The report also provides information for others interested in our research activities.
Design and Evaluation of Low-Latency Communication Middleware on High Performance Computing Systems
[Abstract] The use of Java for parallel computing is becoming more promising owing to
its appealing features, particularly its multithreading support, portability, easy-to-learn properties, high programming productivity and the noticeable improvement in its computational performance. However, parallel Java applications generally suffer from inefficient communication middleware, most of which use socket-based protocols that are unable to take full advantage of high-speed networks, hindering the adoption of Java in the High Performance Computing (HPC) area. This PhD Thesis presents the design, development and evaluation of scalable Java communication solutions that overcome these constraints. Hence, we have implemented several low-level message-passing devices that fully exploit the underlying network hardware while taking advantage of Remote Direct Memory Access (RDMA) operations to provide low-latency communications. Moreover, we have developed a production-quality Java message-passing middleware, FastMPJ, in which the devices have been integrated seamlessly, thus allowing the productive development of Message-Passing in Java (MPJ) applications. The performance evaluation has shown that FastMPJ communication primitives are competitive with native message-passing libraries, significantly improving the scalability of MPJ applications. Furthermore, this Thesis
has analyzed the potential of cloud computing towards spreading the outreach of
HPC, where Infrastructure as a Service (IaaS) offerings have emerged as a feasible
alternative to traditional HPC systems. Several cloud resources from the leading
IaaS provider, Amazon EC2, which specifically target HPC workloads, have been
thoroughly assessed. The experimental results have shown the significant impact
that virtualized environments still have on network performance, which hampers
porting communication-intensive codes to the cloud. The key is the availability of
the proper virtualization support, such as direct access to the network hardware,
along with the guidelines for performance optimization suggested in this Thesis.
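The kind of point-to-point latency measurement used to evaluate such communication primitives can be sketched with an ordinary ping-pong microbenchmark. The sketch below uses loopback TCP purely for illustration; FastMPJ's devices target RDMA-capable hardware, and none of the names here come from its API.

```python
import socket
import threading
import time

MSG = b"x" * 64     # small payload, as in typical latency benchmarks
ROUNDS = 1000

def recv_exact(conn, n):
    # TCP is a byte stream: recv may return fewer than n bytes, so loop
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed")
        buf += chunk
    return buf

def echo_server(srv):
    conn, _ = srv.accept()
    with conn:
        for _ in range(ROUNDS):
            conn.sendall(recv_exact(conn, len(MSG)))

srv = socket.socket()
srv.bind(("127.0.0.1", 0))    # port 0: let the OS pick a free port
srv.listen(1)
t = threading.Thread(target=echo_server, args=(srv,))
t.start()

cli = socket.create_connection(("127.0.0.1", srv.getsockname()[1]))
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable batching

start = time.perf_counter()
for _ in range(ROUNDS):
    cli.sendall(MSG)
    recv_exact(cli, len(MSG))
elapsed = time.perf_counter() - start

half_rtt_us = elapsed / ROUNDS / 2 * 1e6   # half round-trip ~ one-way latency
print(f"one-way latency estimate: {half_rtt_us:.1f} us")
cli.close()
t.join()
srv.close()
```

The measurement structure (many timed round trips, halved to estimate one-way latency) is the standard one; what an RDMA-based device changes is the transport underneath, bypassing the kernel socket path that dominates the numbers this sketch produces.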
Asynchronous spike event coding scheme for programmable analogue arrays and its computational applications
This work is the result of the definition, design and evaluation of a novel method to interconnect
the computational elements - commonly known as Configurable Analogue Blocks (CABs) - of
a programmable analogue array. The method is proposed as a total or partial replacement for
conventional interconnection methods, which suffer from serious scalability limitations.
With this method, named the Asynchronous Spike Event Coding (ASEC) scheme, analogue signals
from CAB outputs are encoded as time instants (spike events) dependent upon those signals'
activity, and are transmitted asynchronously by employing the Address Event Representation
(AER) protocol. Power dissipation depends upon input signal activity, and no spike events
are generated when the input signal is constant.
On-line, programmable computation is intrinsic to the ASEC scheme and is performed without
additional hardware. The ability of the communication scheme to perform computation enhances
the computational power of the programmable analogue array. The design methodology and a
CMOS implementation of the scheme are presented, together with test results from prototype
integrated circuits (ICs).
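The activity-dependent encoding can be illustrated with a simple send-on-delta model: a signed spike event is emitted only when the input has moved by a fixed step since the last event, so a constant input generates no events and the event rate tracks signal activity. This is a software caricature of the idea, not the ASEC circuit; the function names and step size are invented for the sketch.

```python
def encode(samples, delta):
    # emit (sample_index, +1/-1) spike events whenever the signal has
    # moved by at least `delta` since the last event
    events = []
    ref = samples[0]
    for i, x in enumerate(samples[1:], start=1):
        while x - ref >= delta:
            ref += delta
            events.append((i, +1))
        while ref - x >= delta:
            ref -= delta
            events.append((i, -1))
    return events

def decode(events, start, delta, length):
    # reconstruct a staircase approximation from the spike train
    level, j, rec = start, 0, []
    for i in range(length):
        while j < len(events) and events[j][0] == i:
            level += events[j][1] * delta
            j += 1
        rec.append(level)
    return rec

sig = [0.0, 0.3, 0.7, 1.2, 1.2, 1.2, 0.4]
events = encode(sig, delta=0.5)
print(events)  # [(2, 1), (3, 1), (6, -1)] -- the constant stretch emits nothing
rec = decode(events, sig[0], 0.5, len(sig))
```

The reconstruction is always within one step of the input, and, as in the hardware scheme, the number of events (and hence the communication activity) is governed by how much the signal moves, not by a fixed sampling clock.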
Programming Languages and Systems
This open access book constitutes the proceedings of the 30th European Symposium on Programming, ESOP 2021, which was held from March 27 to April 1, 2021, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg but changed to an online format due to the COVID-19 pandemic. The 24 papers included in this volume were carefully reviewed and selected from 79 submissions. They deal with fundamental issues in the specification, design, analysis, and implementation of programming languages and systems.