An intelligent processing environment for real-time simulation
The development of a highly efficient, and thus truly intelligent, processing environment for real-time general-purpose simulation of continuous systems is described. Such an environment can be created by mapping the simulation process directly onto the University of Alabama's OPERA architecture. To facilitate this effort, the field of continuous simulation is explored, highlighting areas in which efficiency can be improved. Areas in which parallel processing can be applied are also identified, and several general OPERA-type hardware configurations that support improved simulation are investigated. Three direct-execution parallel processing environments are introduced, each of which greatly improves efficiency by exploiting distinct areas of the simulation process. These suggested environments are candidate architectures around which a highly intelligent real-time simulation configuration can be developed.
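The abstract does not describe the OPERA environments at code level; as background, the core workload being accelerated is fixed-step numerical integration of a continuous system, which in a real-time setting must complete each step within its wall-clock budget. A minimal illustrative sketch (the integrator, system, and step size are invented here, not taken from the paper):

```python
# Illustrative only: continuous-system simulation advances the state of
# x' = f(x, t) in fixed time steps; a real-time environment must finish
# each step before the corresponding wall-clock deadline.

def simulate(f, x0, t0, t1, dt):
    """Fixed-step explicit Euler integration of x' = f(x, t)."""
    x, t = list(x0), t0
    while t < t1:
        dx = f(x, t)
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
        t += dt
    return x

# Example system: a damped harmonic oscillator, x'' = -x - 0.1 x'.
def oscillator(state, t):
    pos, vel = state
    return [vel, -pos - 0.1 * vel]

final = simulate(oscillator, [1.0, 0.0], 0.0, 10.0, 0.001)
```

The per-step work (one evaluation of `f` plus a state update) is what a parallel environment would distribute across processing elements.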
Efficient resources assignment schemes for clustered multithreaded processors
New feature sizes provide a larger number of transistors per chip, which architects can use to further exploit instruction-level parallelism. However, these technologies also bring new challenges that complicate conventional monolithic processor designs. On the one hand, exploiting instruction-level parallelism is leading to diminishing returns, so other sources of parallelism, such as thread-level parallelism, must be exploited to keep raising performance at a reasonable hardware complexity. On the other hand, clustered architectures have been widely studied as a way to reduce the inherent complexity of current monolithic processors. This paper studies the synergies and trade-offs between two concepts, clustering and simultaneous multithreading (SMT), in order to understand why conventional SMT resource assignment schemes are not as effective in clustered processors. These trade-offs are used to propose a novel resource assignment scheme that achieves an average speedup of 17.6% over Icount while improving fairness by 24%.
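The Icount baseline mentioned above is the classic SMT fetch heuristic: each cycle, fetch priority goes to the thread with the fewest instructions in the front-end and issue queues. A hedged one-function sketch (the counts below are invented for illustration; the paper's proposed scheme is not reproduced here):

```python
# Sketch of the Icount fetch heuristic used as the baseline: prioritise the
# thread with the fewest in-flight instructions, on the assumption that it is
# making the fastest progress and will not clog shared resources.

def icount_pick(inflight):
    """Return the thread id with the fewest in-flight instructions."""
    return min(inflight, key=inflight.get)

# Hypothetical per-thread counts of instructions in decode/rename/issue:
inflight = {0: 12, 1: 4, 2: 9}
chosen = icount_pick(inflight)
```

A clustered design complicates this picture because the relevant queues are per-cluster rather than global, which is the mismatch the paper investigates.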
Green inter-cluster interference management in uplink of multi-cell processing systems
This paper examines the uplink of cellular systems employing base station cooperation for joint signal processing. We consider clustered cooperation and investigate effective techniques for managing inter-cluster interference to improve users' performance in terms of both spectral and energy efficiency. We use information-theoretic analysis to establish general closed-form expressions for the system's achievable sum rate and the users' bit-per-Joule capacity while adopting a realistic user-device power consumption model. Two main inter-cluster interference management approaches are identified and studied: 1) spectrum re-use; and 2) users' power control. For the former, we show that isolating clusters by orthogonal resource allocation is the best strategy. For the latter, we introduce a mathematically tractable user power control scheme and observe that a green opportunistic transmission strategy can significantly reduce the adverse effects of inter-cluster interference while exploiting the benefits of cooperation. To compare the different approaches in the context of real-world systems and evaluate the effect of key design parameters on the users' energy-spectral efficiency relationship, we apply the analytical expressions to a practical macrocell scenario. Our results demonstrate that significant improvements in both energy and spectral efficiency can be achieved by energy-aware interference management.
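The paper's closed-form expressions are not reproduced here, but the textbook quantities they build on can be evaluated directly: Shannon spectral efficiency and a bit-per-Joule measure under a device power model with a fixed circuit component. All parameter values below are assumptions chosen for illustration:

```python
import math

def spectral_eff(snr):
    """Shannon spectral efficiency in bit/s/Hz."""
    return math.log2(1 + snr)

def bits_per_joule(rate_bps, p_tx, p_circuit):
    """Bit-per-Joule capacity under a simple device power model P = p_tx + p_circuit."""
    return rate_bps / (p_tx + p_circuit)

# Assumed values: 10 MHz bandwidth, 1 mW noise power, 0.5 W circuit power.
B, noise, p_circ = 10e6, 1e-3, 0.5
ee = {p: bits_per_joule(B * spectral_eff(p / noise), p, p_circ) for p in (0.1, 0.2)}
```

Even this toy model shows the energy-spectral efficiency trade-off that motivates power control: doubling transmit power raises the rate only logarithmically, so the bit-per-Joule figure drops.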
A C-DAG task model for scheduling complex real-time tasks on heterogeneous platforms: preemption matters
Recent commercial hardware platforms for embedded real-time systems feature heterogeneous processing units and computing accelerators on the same System-on-Chip. When designing complex real-time applications for such architectures, the designer needs to make a number of difficult choices: on which processor should a certain task be implemented? Should a component be implemented in parallel or sequentially? These choices may have a great impact on feasibility, as differences in the processors' internal architectures affect the tasks' execution times and preemption costs. To help the designer explore the wide space of design choices and tune the scheduling parameters, in this paper we propose a novel real-time application model, called C-DAG, specifically conceived for heterogeneous platforms. A C-DAG allows the designer to specify alternative implementations of the same component of an application for different processing engines, to be selected off-line, as well as conditional branches modelling if-then-else statements, to be selected at run-time. We also propose a schedulability analysis for the C-DAG model and a heuristic allocation algorithm that ensures all deadlines are respected. Our analysis takes into account the cost of preempting a task, which can be non-negligible on certain processors. We demonstrate the effectiveness of our approach on a large set of synthetic experiments, comparing against state-of-the-art algorithms from the literature.
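The key modelling idea, one WCET per candidate engine type for each node, can be sketched with a simple allocation heuristic. This is an invented worst-fit-style placement for illustration only, not the paper's algorithm or analysis; all names and numbers are hypothetical:

```python
# Hypothetical sketch: each node carries a WCET per engine type it has an
# implementation for; a greedy heuristic places each node on the compatible
# engine that ends up least loaded, using WCET/period as utilisation.

def allocate(nodes, engines):
    """nodes: [(name, period, {engine_type: wcet})]; engines: {engine_id: type}."""
    load = {e: 0.0 for e in engines}
    plan = {}
    for name, period, wcet_by_type in nodes:
        feasible = [e for e, t in engines.items() if t in wcet_by_type]
        best = min(feasible, key=lambda e: load[e] + wcet_by_type[engines[e]] / period)
        load[best] += wcet_by_type[engines[best]] / period
        plan[name] = best
    return plan, load

# Invented example: two engines, three nodes with alternative implementations.
engines = {"cpu0": "cpu", "gpu0": "gpu"}
nodes = [("fft", 10, {"cpu": 4, "gpu": 1}),
         ("ctl", 5, {"cpu": 1}),
         ("img", 20, {"gpu": 6})]
plan, load = allocate(nodes, engines)
```

A real allocator, as in the paper, would additionally check a schedulability test, including preemption costs, before committing each placement.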
Better than $1/Mflops sustained: a scalable PC-based parallel computer for lattice QCD
We study the feasibility of a PC-based parallel computer for medium- to large-scale lattice QCD simulations. The Eötvös Univ., Inst. Theor. Phys. cluster consists of 137 Intel P4-1.7GHz nodes with 512 MB RDRAM. The 32-bit, single-precision sustained performance for dynamical QCD without communication is 1510 Mflops/node with Wilson and 970 Mflops/node with staggered fermions. This gives a total performance of 208 Gflops for Wilson and 133 Gflops for staggered QCD, respectively (for 64-bit applications the performance is approximately halved). The novel feature of our system is its communication architecture. In order to have a scalable, cost-effective machine we use Gigabit Ethernet cards for nearest-neighbor communications in a two-dimensional mesh. This type of communication is cost-effective (only 30% of the hardware cost is spent on communication). According to our benchmark measurements, it results in a communication time fraction of around 40% for lattices up to 48^3·96 in full QCD simulations. The price/sustained-performance ratio for full QCD is better than $1/Mflops for Wilson (and $1.5/Mflops for staggered) quarks for practically any lattice size that fits in our parallel computer. The communication software is freely available upon request for non-profit organizations. Comment: 14 pages, 3 figures, final version to appear in Comp. Phys. Commun.
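The quoted aggregate figures follow directly from the per-node rates, and are worth sanity-checking: 137 nodes at the stated sustained per-node performance should reproduce the cluster totals to within rounding.

```python
# Arithmetic check of the abstract's aggregate figures (no assumptions beyond
# the numbers quoted there): 137 nodes at 1510 / 970 Mflops per node.
nodes = 137
wilson_total = nodes * 1.510   # Gflops, Wilson fermions
stag_total = nodes * 0.970     # Gflops, staggered fermions
```

The staggered total matches 133 Gflops exactly after rounding; the Wilson product is 206.9 Gflops, consistent with the quoted 208 Gflops to within about half a percent.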
The Design of a System Architecture for Mobile Multimedia Computers
This chapter discusses the system architecture of a portable computer, called Mobile Digital Companion, which provides support for handling multimedia applications energy-efficiently. Because battery life is limited and battery weight is an important factor in the size and weight of the Mobile Digital Companion, energy management plays a crucial role in the architecture. As the Companion must remain usable in a variety of environments, it has to be flexible and adaptable to various operating conditions. The Mobile Digital Companion has an unconventional architecture that saves energy by using system decomposition at different levels of the architecture and exploits locality of reference with dedicated, optimised modules. The approach is based on dedicated functionality and the extensive use of energy reduction techniques at all levels of system design. The system has an architecture with a general-purpose processor accompanied by a set of heterogeneous, autonomous, programmable modules, each providing an energy-efficient implementation of dedicated tasks. A reconfigurable internal communication network switch exploits locality of reference and eliminates wasteful data copies.
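The chapter does not give the Companion's energy management policy in code, but a common building block for per-module energy management is an idle-timeout controller that demotes an unused module through power states. A hedged sketch with invented state names and thresholds:

```python
# Illustrative only (not the Companion's actual policy): demote a module to
# progressively deeper power states the longer it stays idle, and return it
# to the active state as soon as work arrives.

ACTIVE, IDLE, SLEEP = "active", "idle", "sleep"

class ModulePowerManager:
    def __init__(self, idle_after=2, sleep_after=4):
        self.idle_after = idle_after    # ticks of inactivity before IDLE
        self.sleep_after = sleep_after  # ticks of inactivity before SLEEP
        self.state, self.idle_ticks = ACTIVE, 0

    def tick(self, has_work):
        if has_work:
            self.state, self.idle_ticks = ACTIVE, 0
        else:
            self.idle_ticks += 1
            if self.idle_ticks >= self.sleep_after:
                self.state = SLEEP
            elif self.idle_ticks >= self.idle_after:
                self.state = IDLE
        return self.state

m = ModulePowerManager(idle_after=2, sleep_after=4)
states = [m.tick(has_work=False) for _ in range(4)]
```

Deeper states save more energy but cost more to wake from, which is why the thresholds would be tuned per module in a design like the Companion's.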