Optimality Properties, Distributed Strategies, and Measurement-Based Evaluation of Coordinated Multicell OFDMA Transmission
The throughput of multicell systems is inherently limited by interference and
the available communication resources. Coordinated resource allocation is the
key to efficient performance, but the demand on backhaul signaling and
computational resources grows rapidly with number of cells, terminals, and
subcarriers. To handle this, we propose a novel multicell framework with
dynamic cooperation clusters where each terminal is jointly served by a small
set of base stations. Each base station coordinates interference to neighboring
terminals only, thus limiting backhaul signalling and making the framework
scalable. This framework can describe anything from interference channels to
ideal joint multicell transmission.
The resource allocation (i.e., precoding and scheduling) is formulated as an
optimization problem (P1) with performance described by arbitrary monotonic
functions of the signal-to-interference-and-noise ratios (SINRs) and arbitrary
linear power constraints. Although (P1) is non-convex and difficult to solve
optimally, we are able to prove: 1) Optimality of single-stream beamforming; 2)
Conditions for full power usage; and 3) A precoding parametrization based on a
few parameters between zero and one. These optimality properties are used to
propose low-complexity strategies: both a centralized scheme and a distributed
version that only requires local channel knowledge and processing. We evaluate
the performance on measured multicell channels and observe that the proposed
strategies achieve close-to-optimal performance among centralized and
distributed solutions, respectively. In addition, we show that multicell
interference coordination can give substantial improvements in sum performance,
but that joint transmission is very sensitive to synchronization errors and
that some terminals can experience performance degradations.
Comment: Published in IEEE Transactions on Signal Processing, 15 pages, 7
figures. This version corrects typos related to Eq. (4) and Eq. (28).
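The SINR-based utilities underlying problem (P1) can be illustrated with a toy computation. The sketch below uses synthetic channels and maximum ratio transmission (MRT) as the beamforming choice; MRT and all variable names are illustrative assumptions, not the paper's optimal precoding parametrization:

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 4, 2      # transmit antennas, terminals (toy sizes)
noise = 1.0

# Synthetic flat-fading channels: H[k] is the channel to terminal k.
H = rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))

# Single-stream beamforming: one unit-norm vector per terminal.
# MRT (matched filtering) is used here purely as a simple baseline.
W = np.array([h / np.linalg.norm(h) for h in H])

def sinr(k, W, H, power=1.0):
    """Signal-to-interference-and-noise ratio of terminal k."""
    signal = power * abs(H[k] @ W[k].conj()) ** 2
    interference = sum(power * abs(H[k] @ W[j].conj()) ** 2
                       for j in range(len(W)) if j != k)
    return signal / (interference + noise)

# Any monotonic function of the SINRs can serve as the utility;
# the sum of achievable rates is one common choice.
rates = [np.log2(1 + sinr(k, W, H)) for k in range(K)]
print(sum(rates))
```

The point of the sketch is the structure of the objective: each terminal's utility depends on its own beamformer through the signal term and on all other beamformers through the interference sum, which is what makes (P1) non-convex in general.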
SQPR: Stream Query Planning with Reuse
When users submit new queries to a distributed stream processing system
(DSPS), a query planner must allocate physical resources, such as CPU cores,
memory and network bandwidth, from a set of hosts to queries. Allocation
decisions must provide the correct mix of resources required by queries,
while achieving an efficient overall allocation that scales in the number of
admitted queries. By exploiting overlap between queries and reusing partial
results, a query planner can conserve resources but has to carry out more
complex planning decisions. In this paper, we describe SQPR, a query planner
that targets DSPSs in data centre environments with heterogeneous resources.
SQPR models query admission, allocation and reuse as a single constrained
optimisation problem and solves an approximate version to achieve
scalability. It prevents individual resources from becoming bottlenecks by
re-planning past allocation decisions and supports different allocation
objectives. As our experimental evaluation in comparison with a
state-of-the-art planner shows, SQPR makes efficient resource allocation
decisions, even with a high utilisation of resources, with acceptable
overheads.
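The reuse idea behind SQPR's planning can be sketched with a toy greedy allocator. This is not SQPR's actual constrained-optimisation formulation; the function, host names, and CPU-only resource model below are invented for illustration:

```python
# Toy allocator: queries declare the operators they need and each operator's
# CPU demand; an operator already placed by a past query is reused instead of
# being allocated again.
def plan(queries, hosts, placed=None):
    placed = dict(placed or {})          # operator -> host
    for q in queries:
        for op, cpu in q["ops"].items():
            if op in placed:             # reuse of partial results
                continue
            host = max(hosts, key=lambda h: hosts[h])  # host with most free CPU
            if hosts[host] < cpu:
                raise RuntimeError(f"cannot admit query {q['name']}")
            hosts[host] -= cpu
            placed[op] = host
    return placed

hosts = {"h1": 4.0, "h2": 4.0}
q1 = {"name": "q1", "ops": {"src": 1.0, "filterA": 1.0}}
q2 = {"name": "q2", "ops": {"src": 1.0, "filterB": 2.0}}  # shares "src" with q1
print(plan([q1, q2], hosts))
```

In this sketch the shared "src" operator consumes CPU only once, which is the resource saving that reuse buys at the price of more coupled planning decisions.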
Model-driven Scheduling for Distributed Stream Processing Systems
Distributed Stream Processing frameworks are commonly used with the
evolution of the Internet of Things (IoT). These frameworks are designed to
adapt to dynamic input message rates by scaling in/out. Apache Storm,
originally developed by Twitter, is a widely used stream processing engine,
while others include Flink and Spark Streaming. To run streaming
applications successfully, there is a need to know the optimal resource
requirement, as over-estimation of resources adds extra cost. So we need a
strategy to come up with the optimal resource requirement for a given
streaming application. In
this article, we propose a model-driven approach for scheduling streaming
applications that effectively utilizes a priori knowledge of the applications
to provide predictable scheduling behavior. Specifically, we use application
performance models to offer reliable estimates of the resource allocation
required. Further, this intuition also drives resource mapping and helps
narrow the gap between estimated and actual dataflow performance and
resource utilization.
Together, this model-driven scheduling approach gives predictable application
performance and resource utilization behavior for executing a given DSPS
application at a target input stream rate on distributed resources.
Comment: 54 pages
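A model-driven estimate of the kind described above can be sketched as follows. The task names, per-instance peak rates, and selectivities are invented for illustration and are not the paper's measured models:

```python
import math

# A priori performance model: measured peak rate (msgs/s) of a single
# instance of each dataflow task, plus its selectivity (output/input ratio).
peak_rate = {"parse": 5000, "aggregate": 2000, "sink": 8000}
selectivity = {"parse": 1.0, "aggregate": 0.5, "sink": 1.0}

def required_instances(target_rate, order=("parse", "aggregate", "sink")):
    """Estimate the parallelism each task needs for a target input rate."""
    plan, rate = {}, target_rate
    for task in order:
        plan[task] = math.ceil(rate / peak_rate[task])
        rate *= selectivity[task]   # downstream tasks see the output rate
    return plan

print(required_instances(10000))  # {'parse': 2, 'aggregate': 5, 'sink': 1}
```

Because the estimate comes from the model rather than trial-and-error scaling, the resulting allocation (and hence the schedule built on it) is predictable before the dataflow is deployed, which is the behavior the abstract describes.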
A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing
Data Grids have been adopted as the platform for scientific communities that
need to share, access, transport, process and manage large data collections
distributed worldwide. They combine high-end computing technologies with
high-performance networking and wide-area storage management techniques. In
this paper, we discuss the key concepts behind Data Grids and compare them with
other data sharing and distribution paradigms such as content delivery
networks, peer-to-peer networks and distributed databases. We then provide
comprehensive taxonomies that cover various aspects of architecture, data
transportation, data replication and resource allocation and scheduling.
Finally, we map the proposed taxonomy to various Data Grid systems not only to
validate the taxonomy but also to identify areas for future exploration.
Through this taxonomy, we aim to categorise existing systems to better
understand their goals and their methodology. This would help evaluate their
applicability for solving similar problems. This taxonomy also provides a "gap
analysis" of this area through which researchers can potentially identify new
issues for investigation. Finally, we hope that the proposed taxonomy and
mapping also helps to provide an easy way for new practitioners to understand
this complex area of research.
Comment: 46 pages, 16 figures, Technical Report
Receive Combining vs. Multi-Stream Multiplexing in Downlink Systems with Multi-Antenna Users
In downlink multi-antenna systems with many users, the multiplexing gain is
strictly limited by the number of transmit antennas and the use of these
antennas. Assuming that the total number of receive antennas at the
multi-antenna users is much larger than the number of transmit antennas, the
maximal multiplexing gain can
be achieved with many different transmission/reception strategies. For example,
the excess number of receive antennas can be utilized to schedule users with
effective channels that are near-orthogonal, for multi-stream multiplexing to
users with well-conditioned channels, and/or to enable interference-aware
receive combining. In this paper, we address the question of whether the
data streams should be divided among a few users (many streams per user) or
many users (few streams per user, enabling receive combining). Analytic
results are
derived to show how user selection, spatial correlation, heterogeneous user
conditions, and imperfect channel acquisition (quantization or estimation
errors) affect the performance when sending the maximal number of streams or
one stream per scheduled user---the two extremes in data stream allocation.
While contradicting observations on this topic have been reported in prior
works, we show that selecting many users and allocating one stream per user
(i.e., exploiting receive combining) is the best candidate under realistic
conditions. This is explained by the provably stronger resilience towards
spatial correlation and the larger benefit from multi-user diversity. This
fundamental result has positive implications for the design of downlink systems
as it reduces the hardware requirements at the user devices and simplifies the
throughput optimization.
Comment: Published in IEEE Transactions on Signal Processing, 16 pages, 11
figures. The results can be reproduced using the following Matlab code:
https://github.com/emilbjornson/one-or-multiple-stream
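The trade-off between one combined stream and multiple parallel streams can be seen from the singular values of a user's channel matrix. The sketch below (synthetic channels, illustrative sizes, and not the paper's Matlab code linked above) contrasts the effective channel gain after receive combining with the gains of two parallel streams:

```python
import numpy as np

rng = np.random.default_rng(1)
Nt, Nr = 4, 2      # transmit antennas, receive antennas at one user
H = rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))

# One stream + receive combining: project onto the strongest left singular
# vector, turning the matrix channel into a single strong vector channel.
U, s, Vh = np.linalg.svd(H)
h_eff = U[:, 0].conj() @ H          # effective 1 x Nt channel after combining

# The effective gain equals the largest squared singular value ...
print(np.linalg.norm(h_eff) ** 2, s[0] ** 2)

# ... whereas two parallel streams split the channel into gains s[0]^2 and
# s[1]^2, the second of which is weaker and thus more sensitive to spatial
# correlation and imperfect channel acquisition.
print(s ** 2)
```

This is the intuition behind the abstract's conclusion: with many users to choose from, scheduling one combined stream per user keeps every active stream on a strong effective channel, while multi-stream multiplexing necessarily spends power on the weaker singular directions.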