The Role of Inter-Controller Traffic for Placement of Distributed SDN Controllers
We consider a distributed Software Defined Networking (SDN) architecture
adopting a cluster of multiple controllers to improve network performance and
reliability. Besides the OpenFlow control traffic exchanged between controllers
and switches, we focus on the control traffic exchanged among the controllers
in the cluster, needed to run coordination and consensus algorithms to keep the
controllers synchronized. We estimate the effect of the inter-controller
communications on the reaction time perceived by the switches depending on the
data-ownership model adopted in the cluster. The model is accurately validated
in an operational Software Defined WAN (SDWAN). We advocate a careful placement
of the controllers, which should take into account both of the above kinds of
control traffic. We evaluate, for some real ISP network topologies, the delay
tradeoffs in the controller placement problem, and we propose a novel
evolutionary algorithm to find the corresponding Pareto frontier. Our work
provides novel quantitative tools to optimize the planning and the design of
the network supporting the control plane of SDN networks, especially when the
network is very large and an in-band control plane is adopted. We also show that
for operational distributed controllers (e.g. OpenDaylight and ONOS), the
location of the controller which acts as a leader in the consensus algorithm
has a strong impact on the reactivity perceived by the switches.
Comment: 14 pages
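As a toy illustration of the bi-objective tradeoff described in this abstract, the sketch below filters candidate controller placements down to their Pareto frontier over switch-to-controller delay and inter-controller delay. The placement names and delay values are invented for illustration; this is a plain non-dominated filter, not the paper's evolutionary algorithm.

```python
# Hypothetical sketch: Pareto frontier of candidate controller placements
# over two delay objectives (lower is better on both). Names and numbers
# are made up for illustration.

def pareto_frontier(placements):
    """Return the non-dominated placements.

    Each placement is (name, switch_to_ctrl_delay, inter_ctrl_delay).
    """
    front = []
    for name, d_sc, d_ic in placements:
        dominated = any(
            o_sc <= d_sc and o_ic <= d_ic and (o_sc < d_sc or o_ic < d_ic)
            for _, o_sc, o_ic in placements
        )
        if not dominated:
            front.append((name, d_sc, d_ic))
    return front

candidates = [
    ("A", 10.0, 30.0),
    ("B", 12.0, 25.0),  # trades switch delay for inter-controller delay
    ("C", 11.0, 31.0),  # dominated by A on both objectives
    ("D", 20.0, 24.0),
]
print(pareto_frontier(candidates))
```

An evolutionary algorithm such as the one the paper proposes would generate candidate placements and repeatedly apply a filter like this one to keep the non-dominated set.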
Deterministic Time-Space Tradeoffs for k-SUM
Given a set of numbers, the k-SUM problem asks for a subset of k numbers
that sums to zero. When the numbers are integers, the time and space complexity
of k-SUM is generally studied in the word-RAM model; when the numbers are
reals, the complexity is studied in the real-RAM model, and space is measured
by the number of reals held in memory at any point.
We present a time- and space-efficient deterministic self-reduction for the
k-SUM problem which holds for both models, and has many interesting
consequences. To illustrate:
* 3-SUM is in deterministic O(n^2 lglg n / lg n) time and
O(sqrt(n lg n / lglg n)) space. In general, any
polylogarithmic-time improvement over quadratic time for 3-SUM can be
converted into an algorithm with an identical time improvement but low space
complexity as well.
* 3-SUM is in deterministic O(n^2) time and O(sqrt(n)) space,
derandomizing an algorithm of Wang.
* A popular conjecture states that 3-SUM requires n^{2-o(1)} time on the
word-RAM. We show that the 3-SUM Conjecture is in fact equivalent to the
(seemingly weaker) conjecture that every O(n^{0.51})-space algorithm for
3-SUM requires at least n^{2-o(1)} time on the word-RAM.
* For k >= 4, k-SUM is in deterministic O(n^{k-2+2/k}) time and O(sqrt(n))
space.
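For readers new to the problem itself, here is a minimal brute-force k-SUM solver. It runs in O(n^k) time and is nothing like the time- and space-efficient algorithms above; it only pins down the definition the abstract relies on.

```python
# Naive k-SUM: try every k-subset and report one that sums to zero.
# Purely illustrative; runs in O(n^k) time.
from itertools import combinations

def k_sum(nums, k):
    """Return a k-subset of nums summing to zero, or None if none exists."""
    for combo in combinations(nums, k):
        if sum(combo) == 0:
            return combo
    return None

print(k_sum([-7, 1, 2, 5, 9], 3))  # (-7, 2, 5)
```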
Using Hashing to Solve the Dictionary Problem (In External Memory)
We consider the dictionary problem in external memory and improve the update
time of the well-known buffer tree by roughly a logarithmic factor. For any
\lambda >= max {lg lg n, log_{M/B} (n/B)}, we can support updates in time
O(\lambda / B) and queries in sublogarithmic time, O(log_\lambda n). We also
present a lower bound in the cell-probe model showing that our data structure
is optimal.
In the RAM, hash tables have been used to solve the dictionary problem faster
than binary search for more than half a century. By contrast, our data
structure is the first to beat the comparison barrier in external memory. Ours
is also the first data structure to depart convincingly from the indivisibility
paradigm.
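The buffer-tree idea this abstract builds on, batching updates so each one pays only a fraction of an I/O, can be caricatured in a few lines. The toy sketch below is not the paper's hash-based structure: a Python dict stands in for external memory, and the buffer size plays the role of the block size B.

```python
# Toy illustration of write buffering (the idea behind buffer trees):
# updates accumulate in a small in-memory buffer and are flushed to the
# "external" store in batches, so each update pays O(1/B) amortized flushes.
# All names and sizes here are illustrative assumptions.
class BufferedDict:
    def __init__(self, buffer_size=4):
        self.store = {}               # stands in for external memory
        self.buffer = {}              # pending updates (fits in RAM)
        self.buffer_size = buffer_size
        self.flushes = 0              # counts batched "I/Os"

    def insert(self, key, value):
        self.buffer[key] = value
        if len(self.buffer) >= self.buffer_size:
            self.flush()

    def flush(self):
        self.store.update(self.buffer)  # one batched write of B updates
        self.buffer.clear()
        self.flushes += 1

    def query(self, key):
        # Check pending updates first, then the external store.
        return self.buffer.get(key, self.store.get(key))

bd = BufferedDict()
for i in range(8):
    bd.insert(i, i * i)
print(bd.query(5), bd.flushes)  # 25 2
```

Eight inserts with a buffer of four trigger only two flushes, which is the amortization the real data structure pushes much further down the I/O hierarchy.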
Complex Block Floating-Point Format with Box Encoding For Wordlength Reduction in Communication Systems
We propose a new complex block floating-point format to reduce implementation
complexity. The new format achieves wordlength reduction by sharing an exponent
across the block of samples, and uses box encoding for the shared exponent to
reduce quantization error. Arithmetic operations are performed on blocks of
samples at a time, which can also reduce implementation complexity. For a case
study of a baseband quadrature amplitude modulation (QAM) transmitter and
receiver, we quantify the tradeoffs in signal quality vs. implementation
complexity using the new approach to represent IQ samples. Signal quality is
measured using error vector magnitude (EVM) in the receiver, and implementation
complexity is measured in terms of arithmetic complexity as well as memory
allocation and memory input/output rates. The primary contributions of this
paper are (1) a complex block floating-point format with box encoding of the
shared exponent to reduce quantization error, (2) arithmetic operations using
the new complex block floating-point format, and (3) a QAM transceiver case
study to quantify signal quality vs. implementation complexity tradeoffs using
the new format and arithmetic operations.
Comment: 6 pages, 9 figures, submitted to Asilomar Conference on Signals,
Systems, and Computers 201
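A plain block floating-point scheme, without the paper's box encoding of the shared exponent, can be sketched as follows. The mantissa width and helper names are illustrative assumptions: one power-of-two exponent is chosen per block from the peak sample magnitude, and each real and imaginary part keeps only a fixed-width integer mantissa.

```python
# Hedged sketch of plain block floating-point for complex IQ samples:
# one shared exponent per block, fixed-width mantissas per component.
# This omits the paper's box encoding; names/widths are assumptions.
import math

def bfp_quantize(block, mantissa_bits=8):
    """Quantize complex samples using one shared power-of-two exponent."""
    peak = max(max(abs(s.real), abs(s.imag)) for s in block)
    if peak == 0.0:
        return 0, [(0, 0) for _ in block]
    shared_exp = math.ceil(math.log2(peak))       # one exponent per block
    scale = (2 ** (mantissa_bits - 1) - 1) / 2.0 ** shared_exp
    return shared_exp, [(round(s.real * scale), round(s.imag * scale))
                        for s in block]

def bfp_dequantize(shared_exp, mantissas, mantissa_bits=8):
    """Reconstruct complex samples from mantissas and the shared exponent."""
    scale = (2 ** (mantissa_bits - 1) - 1) / 2.0 ** shared_exp
    return [complex(re / scale, im / scale) for re, im in mantissas]

samples = [0.5 + 0.25j, -1.0 + 0.0j, 0.125 - 0.75j]
exp, mants = bfp_quantize(samples)
recon = bfp_dequantize(exp, mants)
print(max(abs(r - s) for r, s in zip(recon, samples)))
```

The quantization error scales with 2^shared_exp / 2^(mantissa_bits-1), which is exactly the error the paper's box encoding of the exponent aims to tighten.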
Progress-Space Tradeoffs in Single-Writer Memory Implementations
Many algorithms designed for shared-memory distributed systems assume the single-writer multi-reader (SWMR) setting where each process is provided with a unique register that can only be written by the process and read by all. In a system where computation is performed by a bounded number n of processes coming from a large (possibly unbounded) set of potential participants, the assumption of an SWMR memory is no longer reasonable. If only a bounded number of multi-writer multi-reader (MWMR) registers are provided, we cannot rely on an a priori assignment of processes to registers. In this setting, implementing an SWMR memory, or equivalently, ensuring stable writes (i.e., every written value persists in the memory), is desirable.
In this paper, we propose an SWMR implementation that adapts the number of MWMR registers used to the desired progress condition. For any given k from 1 to n, we present an algorithm that uses n + k - 1 registers to implement a k-lock-free SWMR memory. In the special case of 2-lock-freedom, we also give a matching lower bound of n + 1 registers, which supports our conjecture that the algorithm is space-optimal. Our lower bound holds for the strictly weaker progress condition of 2-obstruction-freedom, which suggests that the space complexity of k-obstruction-free and k-lock-free SWMR implementations might coincide.
Middleware-based Database Replication: The Gaps between Theory and Practice
The need for high availability and performance in data management systems has
been fueling a long running interest in database replication from both academia
and industry. However, academic groups often attack replication problems in
isolation, overlooking the need for completeness in their solutions, while
commercial teams take a holistic approach that often misses opportunities for
fundamental innovation. This has created over time a gap between academic
research and industrial practice.
This paper aims to characterize the gap along three axes: performance,
availability, and administration. We build on our own experience developing and
deploying replication systems in commercial and academic settings, as well as
on a large body of prior related work. We sift through representative examples
from the last decade of open-source, academic, and commercial database
replication systems and combine this material with case studies from real
systems deployed at Fortune 500 customers. We propose two agendas, one for
academic research and one for industrial R&D, which we believe can bridge the
gap within 5-10 years. This way, we hope to both motivate and help researchers
in making the theory and practice of middleware-based database replication more
relevant to each other.
Comment: 14 pages. Appears in Proc. ACM SIGMOD International Conference on
Management of Data, Vancouver, Canada, June 200
CATS: linearizability and partition tolerance in scalable and self-organizing key-value stores
Distributed key-value stores provide scalable, fault-tolerant, and self-organizing
storage services, but fall short of guaranteeing linearizable consistency
in partially synchronous, lossy, partitionable, and dynamic networks, when data
is distributed and replicated automatically by the principle of consistent hashing.
This paper introduces consistent quorums as a solution for achieving atomic
consistency. We present the design and implementation of CATS, a distributed
key-value store which uses consistent quorums to guarantee linearizability and partition tolerance in such adverse and dynamic network conditions. CATS is
scalable, elastic, and self-organizing; key properties for modern cloud storage
middleware. Our system shows that consistency can be achieved with practical
performance and modest throughput overhead (5%) for read-intensive workloads.
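The basic property underlying quorum-based reads and writes (any read quorum must intersect any write quorum, which majority-sized quorums over N replicas guarantee) can be checked exhaustively for small replica counts. This sketch illustrates only that intersection property, not CATS's consistent-quorum protocol, and the function name is invented for illustration.

```python
# Illustrative check: majority quorums of n replicas always intersect,
# the baseline property that quorum-based key-value stores rely on so
# that a read observes the latest completed write. Not CATS's protocol.
from itertools import combinations

def quorums_intersect(n):
    """Exhaustively verify that every pair of majority quorums overlaps."""
    q = n // 2 + 1                      # majority quorum size
    replicas = range(n)
    return all(
        set(r) & set(w)                 # non-empty intersection
        for r in combinations(replicas, q)
        for w in combinations(replicas, q)
    )

print(all(quorums_intersect(n) for n in range(1, 6)))  # True
```

Plain majority quorums alone do not give linearizability when the replica set itself changes under churn; handling that reconfiguration is exactly what CATS's consistent quorums add on top of this baseline property.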
Early Quantitative Assessment of Non-Functional Requirements
Non-functional requirements (NFRs) of software systems are a well-known source of uncertainty in effort estimation. Yet, quantitatively approaching NFRs early in a project is hard. This paper makes a step towards reducing the impact of uncertainty due to NFRs. It offers a solution that incorporates NFRs into the functional size quantification process. The merits of our solution are twofold: first, it lets us quantitatively assess the NFR modeling process early in the project, and second, it lets us generate test cases for NFR verification purposes. We chose the NFR framework as a vehicle to integrate NFRs into the requirements modeling process and to apply quantitative assessment procedures. Our solution proposal also rests on the functional size measurement method COSMIC-FFP, adopted in 2003 as the ISO/IEC 19761 standard. We extend its use for NFR testing purposes, which is an essential step for improving NFR development and testing effort estimates, and consequently for managing the scope of NFRs. We discuss the advantages of our approach and the open questions related to its design as well.
A New Quantum Lower Bound Method, with Applications to Direct Product Theorems and Time-Space Tradeoffs
We give a new version of the adversary method for proving lower bounds on
quantum query algorithms. The new method is based on analyzing the eigenspace
structure of the problem at hand. We use it to prove a new and optimal strong
direct product theorem for 2-sided error quantum algorithms computing k
independent instances of a symmetric Boolean function: if the algorithm uses
significantly less than k times the number of queries needed for one instance
of the function, then its success probability is exponentially small in k. We
also use the polynomial method to prove a direct product theorem for 1-sided
error algorithms for k threshold functions with a stronger bound on the success
probability. Finally, we present a quantum algorithm for evaluating solutions
to systems of linear inequalities, and use our direct product theorems to show
that the time-space tradeoff of this algorithm is close to optimal.
Comment: 16 pages, LaTeX. Version 2: title changed, proofs significantly
cleaned up and made self-contained. This version to appear in the proceedings
of the STOC 06 conference.