Distributed design of network codes for wireless multiple unicasts
Previous results on network coding for low-power
wireless transmissions of multiple unicasts rely on opportunistic
coding or centralized optimization to reduce the power
consumption. This paper proposes a distributed strategy for
reducing the power consumption in a network coded wireless
network with multiple unicasts. We apply a simple network
coding strategy called “reverse carpooling,” which uses only
XOR and forwarding operations. In this paper, we use the
rectangular grid as a simple network model and attempt to
increase network coding opportunities without the overhead
required for centralized design or coordination. The proposed
technique designates “reverse carpooling lines” analogous to
a collection of bus routes in a crowded city. Each individual
unicast then chooses a route from its source to its destination
independently but in a manner that maximizes the fraction
of its path spent on reverse carpooling lines. Intermediate
nodes apply reverse carpooling opportunistically along these
routes. Our network optimization chooses the reverse carpooling lines so as to maximize the expected power savings with respect to the random choice of sources and sinks.
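To make the coding operation concrete, here is a minimal Python sketch of the XOR relay step (packet contents and the equal-length padding are illustrative assumptions, not from the paper):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings (real schemes must pad unequal packets)."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two unicasts traverse a shared relay R in opposite directions:
# A -> R -> C carries pkt_a, and C -> R -> A carries pkt_c.
pkt_a = b"payload from A.."
pkt_c = b"payload from C.."

# Without coding, R transmits twice. With reverse carpooling, R broadcasts
# a single coded packet, and each endpoint XORs out the packet it sent itself.
coded = xor_bytes(pkt_a, pkt_c)

assert xor_bytes(coded, pkt_a) == pkt_c  # node A recovers C's packet
assert xor_bytes(coded, pkt_c) == pkt_a  # node C recovers A's packet
```

The power saving comes from the relay broadcasting one coded packet instead of two plain forwards, which is exactly the opportunity the designated lines try to maximize.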
Peer to Peer Optimistic Collaborative Editing on XML-like trees
Collaborative editing consists in editing a common document shared by several independent sites. This may give rise to conflicts when two different users perform simultaneous incompatible operations. Centralized systems solve this problem by using locks that prevent some modifications from occurring and leave the resolution of conflicts to users. On the contrary, peer-to-peer (P2P) editing doesn't allow locks, and the optimistic approach uses an Integration Transformation (IT) that reconciles the conflicting operations and ensures convergence (all copies are identical on each site). Two properties, TP1 and TP2, relating the set of allowed operations Op and the transformation IT, have been shown to ensure the correctness of the process. The choice of the set Op is crucial to define an integration operation that satisfies TP1 and TP2. Many existing algorithms don't satisfy these properties and are indeed incorrect, i.e. convergence is not guaranteed. No algorithm enjoying both properties is known for strings, and little work has been done for XML trees in a pure P2P framework (one that doesn't use time-stamps, for instance). We focus on editing unranked, unordered, labeled trees, so-called XML-like trees, as considered for instance in the Harmony project. We show that no transformation satisfying TP1 and TP2 can exist for a first set of operations, but that TP1 and TP2 hold for a richer set of operations. We show how to combine our approach with any convergent editing process on strings (not necessarily based on an integration transformation) to get a convergent process.
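For reference, in one standard operational-transformation notation, where $op_1 \cdot op_2$ means "apply $op_1$, then $op_2$" (the paper's exact formulation may differ), the two properties read:

\[
\mathrm{TP1}:\quad op_1 \cdot IT(op_2, op_1) \;\equiv\; op_2 \cdot IT(op_1, op_2)
\]
\[
\mathrm{TP2}:\quad IT(IT(op_3, op_1), IT(op_2, op_1)) \;=\; IT(IT(op_3, op_2), IT(op_1, op_2))
\]

TP1 ensures two sites converge after exchanging and transforming each other's concurrent operations; TP2 ensures that transforming a third operation against either of the two equivalent histories yields the same operation, which is what convergence with more than two sites requires.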
Optimal measurements to access classical correlations of two-qubit states
We analyze the optimal measurements accessing classical correlations in
arbitrary two-qubit states. Two-qubit states can be transformed into the
canonical forms via local unitary operations. For the canonical forms, we
investigate the probability distribution of the optimal measurements. The
probability distribution of the optimal measurement is found to be centralized
in the vicinity of a specific von Neumann measurement, which we call the
maximal-correlation-direction measurement (MCDM). We prove that for the states
with zero discord and maximally mixed marginals, the MCDM is precisely the optimal measurement. Furthermore, we give an upper bound on quantum discord based on the MCDM, and investigate its performance for approximating the quantum discord.

Comment: 8 pages, 3 figures, version accepted by Phys. Rev.
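To illustrate what accessing classical correlations involves operationally, the following is a minimal numerical sketch (assuming numpy, a brute-force grid over measurement directions, and a Werner state as the example input; it is not the paper's analytic MCDM construction):

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def entropy(rho):
    """von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def cond_entropy(rho, theta, phi):
    """Average entropy of qubit B after measuring qubit A
    along the Bloch direction (theta, phi)."""
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    P_plus = 0.5 * (I2 + n[0] * SX + n[1] * SY + n[2] * SZ)
    total = 0.0
    for P in (P_plus, I2 - P_plus):
        M = np.kron(P, I2)                  # measure A, leave B untouched
        p = np.real(np.trace(M @ rho))
        if p > 1e-12:
            post = (M @ rho @ M) / p
            # partial trace over A: axes (a, b, a', b'), trace a against a'
            rho_b = np.trace(post.reshape(2, 2, 2, 2), axis1=0, axis2=2)
            total += p * entropy(rho_b)
    return total

# Hypothetical input: a Werner state w|Phi+><Phi+| + (1-w) I/4
w = 0.7
phi = np.zeros((4, 1), dtype=complex)
phi[[0, 3]] = 1 / np.sqrt(2)
rho = w * (phi @ phi.conj().T) + (1 - w) * np.eye(4) / 4

best = min(cond_entropy(rho, t, f)
           for t in np.linspace(0, np.pi, 40)
           for f in np.linspace(0, 2 * np.pi, 80))
rho_b = np.trace(rho.reshape(2, 2, 2, 2), axis1=0, axis2=2)
print("classical correlation ~", entropy(rho_b) - best)
```

The classical correlation is the entropy of B's marginal minus the minimum average post-measurement entropy over all measurement directions; the MCDM of the paper identifies where that minimum sits without a search.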
Using a Cray Y-MP as an array processor for a RISC Workstation
As microprocessors increase in power, the economics of centralized computing has changed dramatically. At the beginning of the 1980s, mainframes and supercomputers were often considered to be cost-effective machines for scalar computing. Today, microprocessor-based RISC (reduced-instruction-set computer) systems have displaced many uses of mainframes and supercomputers. Supercomputers are still cost competitive when processing jobs that require both large memory size and high memory bandwidth. One such application is array processing. Certain numerical operations are appropriate for use in a Remote Procedure Call (RPC)-based environment. Matrix multiplication is an example of an operation with enough arithmetic work to amortize the cost of an RPC call. We describe an experiment demonstrating that matrix multiplication can be executed remotely on a large system faster than it runs locally on a workstation.
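A rough cost model makes the amortization argument concrete. The sketch below uses hypothetical figures (the workstation and vector-machine speeds, network bandwidth, and RPC latency are illustrative assumptions, not measurements from the paper):

```python
# Back-of-the-envelope offload model with assumed figures:
# ~10 MFLOPS workstation, ~300 MFLOPS remote vector machine,
# 1 MB/s network, 5 ms RPC round-trip latency.

def local_time(n, local_flops=10e6):
    return 2 * n**3 / local_flops                 # ~2n^3 flops for C = A*B

def remote_time(n, remote_flops=300e6, bandwidth=1e6, latency=5e-3):
    bytes_moved = 3 * n * n * 8                   # send A and B, receive C
    return latency + bytes_moved / bandwidth + 2 * n**3 / remote_flops

for n in (64, 128, 256, 512, 1024):
    print(f"n={n:5d}  local={local_time(n):8.3f}s  "
          f"remote={remote_time(n):8.3f}s  "
          f"offload pays: {remote_time(n) < local_time(n)}")
```

Because compute grows as n^3 while data transfer grows only as n^2, there is always a crossover matrix size beyond which the RPC overhead is amortized; with these assumed figures it falls near n = 128.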
Centralized vs. decentralized computing : organizational considerations and management options
The long-standing debate over whether to centralize or decentralize computing is examined in terms of the fundamental organizational and economic factors at stake. The traditional debate is examined and found to focus predominantly on issues of efficiency vs. effectiveness, with solutions based on a rationalistic strategy of optimizing this tradeoff. A more behavioral assessment suggests that the driving issues in the debate are the politics of organization and resources, centering on the issue of control. The economics of computing deployment decisions is presented as an important issue, but one that often serves as a field of argument grounded in more political concerns. The current situation facing managers of computing, given the advent of small and comparatively inexpensive computers, is examined in detail, and a set of management options for dealing with this persistent issue is presented.
Data Structures for Task-based Priority Scheduling
Many task-parallel applications can benefit from attempting to execute tasks
in a specific order, as for instance indicated by priorities associated with
the tasks. We present three lock-free data structures for priority scheduling
with different trade-offs on scalability and ordering guarantees. First we
propose a basic extension to work-stealing that provides good scalability, but
cannot provide any guarantees for task-ordering in-between threads. Next, we
present a centralized priority data structure based on k-fifo queues, which provides strong (but still relaxed with regard to a sequential specification) guarantees. The parameter k allows the trade-off between scalability and the required ordering guarantee to be configured dynamically. Third, and finally, we combine both data structures into a hybrid, k-priority data structure, which provides scalability similar to the work-stealing based approach for larger k, while giving strong ordering guarantees for smaller k. We argue for using the hybrid data structure as the best compromise for generic, priority-based task scheduling.
We analyze the behavior and trade-offs of our data structures in the context
of a simple parallelization of Dijkstra's single-source shortest path
algorithm. Our theoretical analysis and simulations show that both the
centralized and the hybrid k-priority based data structures can give strong guarantees on the useful work performed by the parallel Dijkstra algorithm. We support our results with experimental evidence on an 80-core Intel Xeon system.
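To make the relaxation concrete, here is a toy sequential Python model (an illustration of k-relaxed semantics only, not the paper's lock-free implementation): pop() may return any of the k smallest entries, which is the kind of reordering a parallel Dijkstra must tolerate.

```python
import heapq
import random

class KRelaxedPQ:
    """Toy model of a k-relaxed priority queue: pop() returns one of the
    k smallest entries, not necessarily the minimum."""
    def __init__(self, k):
        self.k = k
        self.items = []

    def push(self, prio, val):
        heapq.heappush(self.items, (prio, val))

    def pop(self):
        window = heapq.nsmallest(self.k, self.items)
        choice = random.choice(window)   # models out-of-order removal by peers
        self.items.remove(choice)
        heapq.heapify(self.items)
        return choice

    def __bool__(self):
        return bool(self.items)

def dijkstra(graph, src, k):
    """Dijkstra with a k-relaxed queue; counts pops that were wasted work."""
    dist, wasted = {src: 0}, 0
    pq = KRelaxedPQ(k)
    pq.push(0, src)
    while pq:
        d, u = pq.pop()
        if d > dist.get(u, float("inf")):
            wasted += 1                  # stale entry: a shorter path was found
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                pq.push(d + w, v)
    return dist, wasted

g = {0: [(1, 2), (2, 5)], 1: [(2, 1)], 2: []}
dist, wasted = dijkstra(g, 0, k=4)       # dist == {0: 0, 1: 2, 2: 3}
```

With k = 1 this reduces to ordinary Dijkstra; larger k trades some wasted relaxations (the "useful work" bound analyzed in the paper) for less contention on the shared structure.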
Regional Data Archiving and Management for Northeast Illinois
This project studies the feasibility and implementation options for establishing a regional data archiving system to help monitor
and manage traffic operations and planning for the northeastern Illinois region. It aims to provide clear guidance to the
regional transportation agencies, from both technical and business perspectives, about building such a comprehensive
transportation information system. Several implementation alternatives are identified and analyzed. This research is carried
out in three phases.
In the first phase, existing documents related to ITS deployments in the broader Chicago area are summarized, and a
thorough review is conducted of similar systems across the country. Various stakeholders are interviewed to collect
information on all data elements that they store, including the format, system, and granularity. Their perception of a data
archive system, such as potential benefits and costs, is also surveyed. In the second phase, a conceptual design of the
database is developed. This conceptual design includes system architecture, functional modules, user interfaces, and
examples of usage. In the last phase, the possible business models for the archive system to sustain itself are reviewed. We
estimate initial capital and recurring operational/maintenance costs for the system based on realistic information on the
hardware, software, labor, and resource requirements. We also identify possible revenue opportunities.
A few implementation options for the archive system are summarized in this report, namely:
1. System hosted by a partnering agency
2. System contracted to a university
3. System contracted to a national laboratory
4. System outsourced to a service provider
The costs, advantages, and disadvantages for each of these recommended options are also provided.
