60 GHz High Data Rate Wireless Communication System
This paper presents the design and realization of a 60 GHz wireless
Gigabit Ethernet communication system. A differentially encoded binary
phase-shift keying (DBPSK) modulation and a differential demodulation
scheme are adopted for the IF blocks. The Gigabit Ethernet interface
allows high-speed transfer of multimedia files over a 60 GHz wireless
link. First measurement results are shown for an 875 Mbps data rate.
Comment: 5 pages
Challenging the challenge: handling data in the Gigabit/s range
The ALICE experiment at CERN will pose unprecedented requirements for
event building and data recording. New technologies will be adopted, as
well as ad-hoc frameworks, from the acquisition of experimental data up to
its transfer onto permanent media and its later access. These issues
justify careful, in-depth planning and preparation. The ALICE Data
Challenge is a very important step of this development process, in which
simulated detector data is moved from dummy data sources to the recording
media using processing elements and data paths that are as realistic as
possible. We review herein the current status of past, present and future
ALICE Data Challenges, with particular reference to the sessions held in
2002 when, for the first time, streams worth one week of ALICE data were
recorded onto tape media at sustained rates exceeding 300 MB/s.
Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics
(CHEP03), La Jolla, CA, USA, March 2003, 9 pages, PDF. PSN MOGT00
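To put the sustained-rate figure in perspective, a week of streaming at 300 MB/s implies a substantial tape volume. The arithmetic below is our own back-of-the-envelope check, not a number from the paper:

```python
# Total data volume for one week of sustained recording at 300 MB/s
# (decimal units: 1 TB = 1e6 MB).

SECONDS_PER_WEEK = 7 * 24 * 3600        # 604800 s
rate_mb_s = 300                          # sustained recording rate, MB/s

total_mb = rate_mb_s * SECONDS_PER_WEEK
total_tb = total_mb / 1e6                # MB -> TB
print(f"{total_tb:.2f} TB")              # 181.44 TB over the week
```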
When Should I Use Network Emulation?
The design and development of a complex system requires an adequate
methodology and efficient instrumental support in order to detect and
correct, early on, anomalies in the functional and non-functional
properties of the protocols under test. Among the various tools used to
provide experimental support for such developments, network emulation
relies on the real-time production of impairments on real traffic
according to a communication model, whether realistic or not.
This paper aims to present to newcomers to network emulation (students,
engineers, ...) the basic principles and practices, illustrated with a few
commonly used tools. The motivation is to fill a gap in terms of
introductory and pragmatic papers in this domain.
The study particularly considers centralized approaches, which allow
cheap and easy implementation in the context of research labs or
industrial developments. In addition, an architectural model for emulation
systems is proposed, defining three complementary levels, namely the
hardware, impairment and model levels. With the help of this architectural
framework, various existing tools are situated and described. Various
approaches for modeling the emulation actions are studied, such as
impairment-based scenarios and virtual architectures, real-time discrete
simulation, and trace-based systems. These modeling approaches are
described and compared in terms of services, and we study their ability to
respond to various designer needs in order to assess when emulation is
appropriate.
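The "impairment level" of the architecture described above can be illustrated with a toy centralized emulator that applies a delay/jitter/loss model to a packet stream. The class and parameter names here are hypothetical, chosen purely for illustration:

```python
# A minimal impairment model: each packet is either dropped (with a fixed
# loss probability) or delivered after a base delay plus uniform jitter.
import random

class ImpairmentModel:
    def __init__(self, delay_ms=50.0, jitter_ms=10.0, loss_rate=0.01, seed=None):
        self.delay_ms = delay_ms
        self.jitter_ms = jitter_ms
        self.loss_rate = loss_rate
        self.rng = random.Random(seed)

    def apply(self, packet):
        """Return (delivered, delay_ms) for one packet."""
        if self.rng.random() < self.loss_rate:
            return False, None                    # packet dropped
        delay = self.delay_ms + self.rng.uniform(-self.jitter_ms, self.jitter_ms)
        return True, max(delay, 0.0)

model = ImpairmentModel(delay_ms=100, jitter_ms=20, loss_rate=0.1, seed=42)
results = [model.apply(p) for p in range(1000)]
delivered = [d for ok, d in results if ok]
print(len(delivered), "of 1000 packets delivered")
```

In real centralized deployments, this role is usually played by kernel facilities such as Linux netem (configured through `tc`) rather than by a user-space loop; the sketch only shows the shape of the model that such tools parameterize.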
A High Speed Networked Signal Processing Platform for Multi-element Radio Telescopes
A new architecture is presented for a Networked Signal Processing System
(NSPS) suitable for handling the real-time signal processing of multi-element
radio telescopes. In this system, a multi-element radio telescope is viewed as
an application of a multi-sensor, data fusion problem which can be decomposed
into a general set of computing and network components for which a practical
and scalable architecture is enabled by current technology. The need for such a
system arose in the context of an ongoing program for reconfiguring the Ooty
Radio Telescope (ORT) as a programmable 264-element array, which will enable
several new observing capabilities for large scale surveys on this mature
telescope. For this application, it is necessary to manage, route and combine
large volumes of data whose real-time collation requires large I/O bandwidths
to be sustained. Since these are general requirements of many multi-sensor
fusion applications, we first describe the basic architecture of the NSPS in
terms of a Fusion Tree before elaborating on its application for the ORT. The
paper addresses issues relating to high speed distributed data acquisition,
Field Programmable Gate Array (FPGA) based peer-to-peer networks supporting
significant on-the-fly processing while routing, and a last-mile interface
to a typical commodity network such as Gigabit Ethernet. The system is
fundamentally a pair of co-operative networks, one of which is part of a
commodity high-performance computer cluster while the other is based on
Commercial-Off-The-Shelf (COTS) technology with support from
software/firmware components in the public domain.
Comment: 19 pages, 4 EPS figures, to be published in Experimental
Astronomy (Springer)
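The Fusion Tree organization described above can be sketched abstractly: leaf nodes acquire per-element data and each internal node combines its children's streams, so the aggregate I/O bandwidth shrinks stage by stage toward the root. The class below is our own hypothetical illustration, not the NSPS firmware:

```python
# A toy fusion tree: leaves produce sample blocks, internal nodes perform
# an element-wise combination (here, a sum) of their children's outputs.

class FusionNode:
    def __init__(self, children=None, source=None):
        self.children = children or []
        self.source = source              # leaf: callable yielding samples

    def collect(self):
        if self.source is not None:       # leaf: acquire sensor data
            return self.source()
        # internal node: element-wise combination of child streams
        streams = [child.collect() for child in self.children]
        return [sum(vals) for vals in zip(*streams)]

# Four "antenna elements", each delivering a block of four samples.
leaves = [FusionNode(source=lambda i=i: [i] * 4) for i in range(4)]
# Two-stage tree: pairs are combined first, then the partial sums.
root = FusionNode(children=[FusionNode(children=leaves[:2]),
                            FusionNode(children=leaves[2:])])
print(root.collect())   # element-wise sum across all four elements
```

In hardware, each internal node would correspond to an FPGA stage that combines and routes data in real time rather than a recursive function call, but the dataflow topology is the same.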
Digital Frequency Domain Multiplexer for mm-Wavelength Telescopes
An FPGA based digital signal processing (DSP) system for biasing and reading
out multiplexed bolometric detectors for mm-wavelength telescopes is presented.
This readout system is being deployed for balloon-borne and ground based
cosmology experiments with the primary goal of measuring the signature of
inflation with the Cosmic Microwave Background Radiation. The system consists
of analog superconducting electronics running at 250 mK and 4 K, coupled to
digital room temperature backend electronics described here. The digital
electronics perform the real time functionality with DSP algorithms implemented
in firmware. A soft embedded processor provides all of the slow housekeeping
control and communications. Each board in the system synthesizes
multi-frequency combs of 8 to 32 carriers in the MHz band to bias the
detectors. After the carriers have been modulated with the sky-signal by the
detectors, the same boards digitize the comb directly. The carriers are mixed
down to base-band and low pass filtered. The signal bandwidth of 0.050 Hz - 100
Hz places extreme requirements on stability and requires powerful filtering
techniques to recover the sky-signal from the MHz carriers.
Comment: 6 pages, 6 figures, submitted May 2007 to IEEE Transactions on
Nuclear Science (TNS)
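The signal chain described above (synthesize a carrier comb, let the detectors modulate it, mix each carrier down to baseband, low-pass filter) can be sketched numerically. The frequencies and window sizes below are illustrative toy values, not the system's real parameters:

```python
# Toy frequency-domain multiplexing chain: build a 3-tone comb, then
# recover one channel by mixing with a local oscillator and averaging.
import math

FS = 1000.0                        # sample rate (arbitrary units)
carriers = [100.0, 150.0, 200.0]   # three-tone carrier comb
N = 2000

t = [n / FS for n in range(N)]
comb = [sum(math.cos(2 * math.pi * f * tn) for f in carriers) for tn in t]

# Mix the 150-unit carrier down to baseband (multiply by a local
# oscillator at the carrier frequency)...
f_lo = 150.0
mixed = [s * math.cos(2 * math.pi * f_lo * tn) for s, tn in zip(comb, t)]

# ...then low-pass filter with a simple moving average to reject the
# other carriers and the 2*f_lo image, leaving the DC (baseband) term.
W = 100
baseband = [sum(mixed[i:i + W]) / W for i in range(N - W)]

avg = sum(baseband) / len(baseband)
print(f"recovered baseband level ~ {avg:.2f}")  # cos^2 average -> 0.50
```

With a flat (unmodulated) comb, the recovered channel sits at the cos-squared average of 0.5; a slowly varying sky signal would appear as a modulation of that level, which is why the 0.050 Hz lower band edge puts such strong demands on filter stability.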
COMPSs-Mobile: parallel programming for mobile-cloud computing
The advent of the Cloud and the popularization of mobile devices have led to a shift in how computing is accessed: users interact through a local display while the real computation is performed remotely, in the Cloud. COMPSs-Mobile is a framework that aims to ease the development of energy-efficient and high-performing applications for this environment. The framework provides an infrastructure-unaware programming model that allows developers to code regular Android applications that are transparently parallelized and partially offloaded to remote resources. This paper gives an overview of the programming model and describes the internal components of the toolkit that supports it, focusing on the offloading and checkpointing mechanisms. It also presents the results of tests conducted to evaluate the behavior of the solution and to measure the potential benefits in Android applications.