Implementation of a Wiki-based Information and Communication System for Academia Europaea
In recent years, online collaboration tools such as wikis have expanded tremendously. Their success and popularity are due to their simple and efficient content-creation facilities. Moreover, the formatting syntax is easy to learn, allowing users to create and edit web content and thus share knowledge with one another. Despite these advantages, the content-creation process can still be obscure to authors with little computer experience. In this paper we present our experiences, challenges, and problems with a specific application of a wiki-based system, the Academia Europaea Information and Communication System. Furthermore, key solutions for bypassing user-interaction barriers are discussed throughout the paper.
Editorial
Welcome to our regular edition of CIT devoted to the 2011 ITI (Information Technology Interfaces) Conference. The following 10 papers (out of 100) were selected from those originally published in the ITI 2011 Conference Proceedings (IEEE Catalog Number CFP11498-PRT). As in previous years, papers were selected after receiving high ratings from their respective international reviewers and as being representative of the broad scope of the ITI Conference in terms of the topics solicited over the last 33 years. Out of 11 conference topics, eight are represented by at least one paper.
Developing a distributed electronic health-record store for India
The DIGHT project addresses the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the over one billion citizens of India.
Self-organizing distributed digital library supporting audio-video
The StreamOnTheFly network combines peer-to-peer networking and open-archive principles for community radio channels and TV stations in Europe. StreamOnTheFly demonstrates new methods of archive management and personalization technologies for both audio and video. It also provides a collaboration platform for community purposes that suits the flexible activity patterns of these kinds of broadcaster communities.
A hardware runtime for task-based programming models
© 2019 IEEE. Task-based programming models such as OpenMP 5.0 and OmpSs are simple to use and powerful enough to exploit the task parallelism of applications on multicore, manycore, and heterogeneous systems. However, their software-only runtimes introduce significant overhead when targeting fine-grained tasks, resulting in performance losses. To overcome this drawback, we present Picos++, a hardware runtime that accelerates critical runtime functions such as task-dependence analysis, nested-task support, and heterogeneous task scheduling. As a proof of concept, the Picos++ hardware runtime has been integrated with a compiler infrastructure that supports parallel task-based programming models. An FPGA SoC running Linux has been used to implement the hardware-accelerated part of Picos++, integrated into a heterogeneous system composed of 4 symmetric multiprocessor (SMP) cores and several hardware functional accelerators (HwAccs) for task execution. Results show significant improvements in energy and performance compared to state-of-the-art parallel software-only runtimes. With Picos++, applications achieve up to 7.6x speedup and save up to 90 percent of energy when using 4 threads and up to 4 HwAccs, and reach a 16x speedup over the software alternative when using 12 HwAccs and small tasks.
Acceleration for Timing-Aware Gate-Level Logic Simulation with One-Pass GPU Parallelism
Witnessing the advancing scale and complexity of chip design, and benefiting from high-performance computation technologies, the simulation of Very Large Scale Integration (VLSI) circuits imposes an increasing requirement for acceleration through parallel computing with GPU devices. However, conventional parallel strategies do not fully align with modern GPU capabilities, leading to new challenges in parallelizing VLSI simulation on GPUs, despite some previous successful demonstrations of significant acceleration. In this paper, we propose a novel approach to accelerate 4-value timing-aware gate-level logic simulation using waveform-based GPU parallelism. Our approach uses a new strategy that effectively handles the dependencies between tasks during parallel execution, reducing the synchronization required between CPU and GPU when parallelizing the simulation of combinational circuits. This approach requires only one round of data transfer and hence achieves one-pass parallelism. Moreover, to overcome the difficulty of adopting our strategy on GPU devices, we design a series of data structures and tune them to dynamically allocate and store newly generated output of uncertain scale. Finally, experiments are carried out on industrial-scale open-source benchmarks to demonstrate the performance gain of our approach compared to several state-of-the-art baselines.
Chapter One – An Overview of Architecture-Level Power- and Energy-Efficient Design Techniques
Power dissipation and energy consumption have become the primary design constraints for almost all computer systems over the last 15 years. Both computer architects and circuit designers aim to reduce power and energy (without performance degradation) at all design levels, as power is currently the main obstacle to continued scaling according to Moore's law. The aim of this survey is to provide a comprehensive overview of state-of-the-art power- and energy-efficient techniques. We classify techniques by the component to which they apply, which is the most natural way from a designer's point of view. We further divide the techniques by the component of power/energy they optimize (static or dynamic), thereby covering the complete low-power design flow at the architectural level. We conclude that only a holistic approach, one that assumes optimizations at all design levels, can lead to significant savings.
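The static/dynamic split that organizes the survey corresponds to the standard first-order CMOS power model (a textbook identity, not a formula from the survey itself):

```latex
P_{\text{total}} \;\approx\;
\underbrace{\alpha\, C\, V_{dd}^{2}\, f}_{\text{dynamic (switching)}}
\;+\;
\underbrace{V_{dd}\, I_{\text{leak}}}_{\text{static (leakage)}}
```

where \(\alpha\) is the switching activity factor, \(C\) the switched capacitance, \(V_{dd}\) the supply voltage, \(f\) the clock frequency, and \(I_{\text{leak}}\) the leakage current. Dynamic-power techniques target \(\alpha\), \(C\), \(V_{dd}\), or \(f\) (e.g. clock gating, DVFS), while static-power techniques target \(I_{\text{leak}}\) (e.g. power gating).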