The Design of a System Architecture for Mobile Multimedia Computers
This chapter discusses the system architecture of a portable computer, called Mobile Digital Companion, which provides support for handling multimedia applications in an energy-efficient way. Because battery life is limited and battery weight is an important factor for the size and the weight of the Mobile Digital Companion, energy management plays a crucial role in the architecture. As the Companion must remain usable in a variety of environments, it has to be flexible and adaptable to various operating conditions. The Mobile Digital Companion has an unconventional architecture that saves energy by using system decomposition at different levels of the architecture and by exploiting locality of reference with dedicated, optimised modules. The approach is based on dedicated functionality and the extensive use of energy reduction techniques at all levels of system design. The system has an architecture with a general-purpose processor accompanied by a set of heterogeneous autonomous programmable modules, each providing an energy-efficient implementation of dedicated tasks. A reconfigurable internal communication network switch exploits locality of reference and eliminates wasteful data copies.
Building Programmable Wireless Networks: An Architectural Survey
In recent times, there have been many efforts to improve the ossified Internet architecture in a bid to sustain unstinted growth and innovation. A major reason for the perceived architectural ossification is the lack of ability to program the network as a system. This situation has resulted partly from historical decisions in the original Internet design, which emphasized decentralized network operations through co-located data and control planes on each network device. The situation for wireless networks is no different, resulting in a lot of complexity and a plethora of largely incompatible wireless technologies. The emergence of "programmable wireless networks", which allow greater flexibility, ease of management and configurability, is a step in the right direction towards overcoming these shortcomings of wireless networks. In this paper, we provide a broad overview of the architectures proposed in the literature for building programmable wireless networks, focusing primarily on three popular techniques: software-defined networks, cognitive radio networks, and virtualized networks. This survey is a self-contained tutorial on these techniques and their applications. We also discuss the opportunities and challenges in building next-generation programmable wireless networks and identify open research issues and future research directions.
Performance modelling of wormhole-routed hypercubes with bursty traffic and finite buffers
An open queueing network model (QNM) is proposed for wormhole-routed hypercubes with finite buffers and deterministic routing, subject to a compound Poisson arrival process (CPP) with geometrically distributed batches or, equivalently, a generalised exponential (GE) interarrival time distribution. The GE/G/1/K queue and appropriate GE-type flow formulae are adopted, as cost-effective building blocks, in a queue-by-queue decomposition of the entire network. Consequently, analytic expressions for the channel holding time, buffering delay, contention blocking and mean message latency are determined. The validity of the analytic approximations is demonstrated against results obtained through simulation experiments. Moreover, it is shown that wormhole-routed hypercubes suffer progressive performance degradation with increasing traffic variability (burstiness).
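The GE interarrival distribution mentioned in this abstract has a simple sampling interpretation: an arrival gap is zero with some probability (modelling a batched arrival) and exponential otherwise. The sketch below is not from the paper; it is a minimal illustration of the standard two-branch parameterisation, in which `tau = 2 / (scv + 1)` and the exponential branch has rate `tau * lam`, so the mean is `1/lam` and the squared coefficient of variation is `scv`.

```python
import random

def ge_interarrival(lam, scv, rng=random):
    """Sample an interarrival time from a generalised exponential (GE)
    distribution with mean 1/lam and squared coefficient of variation scv.
    With probability 1 - tau the gap is zero (a batched arrival); otherwise
    it is exponential with rate tau * lam, where tau = 2 / (scv + 1)."""
    tau = 2.0 / (scv + 1.0)
    if rng.random() >= tau:
        return 0.0                       # bulk arrival: zero gap
    return rng.expovariate(tau * lam)    # exponential branch

# Sanity check on the first two moments (lam = 1, scv = 4).
rng = random.Random(42)
samples = [ge_interarrival(1.0, 4.0, rng) for _ in range(200_000)]
mean = sum(samples) / len(samples)                          # close to 1/lam
var = sum((x - mean) ** 2 for x in samples) / len(samples)  # close to scv/lam**2
```

Feeding such bursty streams into a finite-buffer queue is how burstiness degrades latency in models of this kind: the zero-gap branch produces clumps of arrivals that overflow small buffers far more often than a smooth Poisson stream with the same mean rate.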
A neural network-based framework for financial model calibration
A data-driven approach called CaNN (Calibration Neural Network) is proposed to calibrate financial asset price models using an Artificial Neural Network (ANN). Determining optimal values of the model parameters is formulated as training hidden neurons within a machine learning framework, based on available financial option prices. The framework consists of two parts: a forward pass, in which we train the weights of the ANN off-line, valuing options under many different asset model parameter settings; and a backward pass, in which we evaluate the trained ANN-solver on-line, aiming to find the weights of the neurons in the input layer. The rapid on-line learning of implied volatility by ANNs, in combination with the use of an adapted parallel global optimization method, tackles the computational bottleneck and provides a fast and reliable technique for calibrating model parameters while avoiding, as much as possible, getting stuck in local minima. Numerical experiments confirm that this machine-learning framework can be employed to calibrate parameters of high-dimensional stochastic volatility models efficiently and accurately.
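The two-phase idea in this abstract (learn a fast surrogate pricer off-line, then search over model parameters on-line to match market quotes) can be shown in miniature. The sketch below is not the paper's CaNN: the "model" is a Black-Scholes call with a single volatility parameter, the "trained ANN" is replaced by a precomputed interpolation table, and the "global optimizer" is plain random search. All names and contract values here are hypothetical.

```python
import math
import random

def model_price(sigma, s=100.0, k=100.0, r=0.0, t=1.0):
    """Stand-in slow pricer: Black-Scholes call price as a function of
    volatility sigma, with all other contract inputs fixed."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return s * cdf(d1) - k * math.exp(-r * t) * cdf(d2)

# "Forward pass" (off-line): tabulate prices on a sigma grid, playing the
# role of the trained surrogate that maps parameters to option values.
grid = [0.05 + 0.01 * i for i in range(96)]      # sigma in [0.05, 1.00]
table = [model_price(sig) for sig in grid]

def surrogate(sigma):
    """Linear interpolation in the precomputed table (the 'ANN')."""
    i = min(max(int((sigma - 0.05) / 0.01), 0), len(grid) - 2)
    w = (sigma - grid[i]) / 0.01
    return table[i] * (1 - w) + table[i + 1] * w

# "Backward pass" (on-line): global random search for the sigma whose
# surrogate price best matches a synthetic market quote.
target = model_price(0.23)                       # quote generated at sigma = 0.23
rng = random.Random(0)
best = min((rng.uniform(0.05, 1.0) for _ in range(5000)),
           key=lambda sig: (surrogate(sig) - target) ** 2)
```

The design point carried over from the abstract is that the expensive pricer is evaluated only off-line, so the on-line calibration loop touches nothing slower than a table lookup; the paper's contribution is doing this with an ANN and a parallel global optimizer for high-dimensional stochastic volatility models rather than one parameter.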
The Octopus switch
This chapter discusses the interconnection architecture of the Mobile Digital Companion. The approach to building a low-power handheld multimedia computer presented here is to have autonomous, reconfigurable modules such as network, video and audio devices, interconnected by a switch rather than by a bus, and to offload as much work as possible from the CPU to programmable modules placed in the data streams. Thus, communication between components is not broadcast over a bus but delivered exactly where it is needed, and work is carried out where the data passes through, bypassing the memory. The amount of buffering is minimised, and if it is required at all, it is placed right on the data path, where it is needed. A reconfigurable internal communication network switch called Octopus exploits locality of reference and eliminates wasteful data copies. The switch is implemented as a simplified ATM switch and provides Quality of Service guarantees and enough bandwidth for multimedia applications. We have built a testbed of the architecture, of which we will present performance and energy consumption characteristics.
Object Database Scalability for Scientific Workloads
We describe the PetaByte-scale computing challenges posed by the next generation of particle physics experiments, due to start operation in 2005. The computing models adopted by the experiments call for systems capable of handling sustained data acquisition rates of at least 100 MBytes/second into an Object Database, which will have to handle several PetaBytes of accumulated data per year. The systems will be used to schedule CPU-intensive reconstruction and analysis tasks on the highly complex physics Object data, which then need to be served to clients located at universities and laboratories worldwide. We report on measurements with a prototype system that makes use of a 256-CPU HP Exemplar X Class machine running the Objectivity/DB database. Our results show excellent scalability for up to 240 simultaneous database clients, and aggregate I/O rates exceeding 150 MBytes/second, indicating the viability of the computing models.