944 research outputs found
Dynamic Studies of Multiterminal DC-AC Systems
In transient stability programs that use a static dc network representation, the procedure for determining the control mode of operation and solving the multiterminal dc system is complex and time consuming. A systematic approach based on a linear programming formulation is presented in this thesis. The constraints incorporated in the LP formulation automatically ensure that the solution obtained is feasible. It is shown that the method is not only computationally efficient but also versatile in its ability to handle many of the common control characteristics, such as constant-angle (extinction and ignition), constant-voltage, constant-power and constant-current controls, the voltage dependent current order limiter (VDCOL) and end-stops, and can also simulate the dynamics of power modulation and restart. As some applications require a detailed three-phase representation of the ac/dc system, a technique for detailed simulation of the dc converter and its controls is also presented. The developed dynamic simulation program is used to investigate the problem of on-line network flow control using the converter controls of a multiterminal dc system. In view of the fast response of the dc powers to converter controls, a control method is proposed that extends the application of ac network flow control to dynamic situations. Possible applications of the method are to regulate power flows in a select group of ac lines, to smoothly steer the ac/dc system from its present state to some desired state, and to enhance the dynamic performance of the ac system by controlling the transient changes in key or "backbone" ac lines.
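The control-mode resolution the abstract describes can be illustrated with a much-simplified current-balance sketch (not the thesis's LP formulation): ordered converter currents are clamped to their VDCOL-reduced limits, and the converter on constant-voltage control absorbs the mismatch so that the dc network current balance holds. All function names, the VDCOL breakpoints, and the per-unit values are illustrative assumptions.

```python
def vdcol_limit(i_max, v_dc, v_low=0.4, v_high=0.9):
    """Voltage dependent current order limiter (illustrative shape):
    above v_high the full ceiling applies; below v_low a 30% floor;
    the ceiling ramps linearly in between."""
    if v_dc >= v_high:
        return i_max
    if v_dc <= v_low:
        return 0.3 * i_max                      # assumed floor fraction
    frac = 0.3 + 0.7 * (v_dc - v_low) / (v_high - v_low)
    return frac * i_max

def balance_currents(orders, limits, slack):
    """Clamp each ordered current (rectifiers > 0, inverters < 0) to its
    limit, then let converter `slack` (on constant-voltage control) pick
    up the residual so that sum(I_k) = 0.  Raises if the slack converter
    would exceed its own limit, i.e. the dispatch is infeasible."""
    currents = []
    for k, (order, i_max) in enumerate(zip(orders, limits)):
        if k == slack:
            currents.append(0.0)                # filled in below
        else:
            currents.append(max(-i_max, min(i_max, order)))
    residual = -sum(currents)
    if abs(residual) > limits[slack]:
        raise ValueError("no feasible operating point for this slack")
    currents[slack] = residual
    return currents

# Three-terminal example: two rectifiers at rated voltage, one inverter
# on constant-voltage control acting as the slack converter.
limits = [vdcol_limit(1.0, 1.0), vdcol_limit(1.0, 1.0), 2.0]
I = balance_currents([0.8, 0.5, 0.0], limits, slack=2)
print(I)  # the slack converter carries the negative of the rectifier sum
```

The thesis's LP goes further: feasibility is guaranteed by the constraints themselves rather than checked after the fact, and all control characteristics enter as linear constraints.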
Optimal Embedding of Functions for In-Network Computation: Complexity Analysis and Algorithms
We consider optimal distributed computation of a given function of
distributed data. The input (data) nodes and the sink node that receives the
function form a connected network that is described by an undirected weighted
network graph. The algorithm to compute the given function is described by a
weighted directed acyclic graph and is called the computation graph. An
embedding defines the computation-communication sequence that obtains the
function at the sink. Two kinds of optimal embeddings are sought: the
embedding that (1) minimizes the delay in obtaining the function at the
sink, and (2) minimizes the cost of one instance of computation of the
function. This abstraction is motivated by three applications: in-network
computation over sensor networks, operator placement in distributed
databases, and module placement in distributed computing.
We first show that obtaining minimum-delay and minimum-cost embeddings are
both NP-complete problems and that cost minimization is in fact MAX SNP-hard.
Next, we consider specific forms of the computation graph for which
polynomial-time solutions are possible. When the computation graph is a tree,
a polynomial-time algorithm to obtain the minimum-delay embedding is
described. Next, for the case when the function is described by a layered
graph, we describe an algorithm that obtains the minimum-cost embedding in
polynomial time. This algorithm can also be used to obtain an approximation
for delay minimization. We then consider bounded-treewidth computation graphs
and give an algorithm to obtain the minimum-cost embedding in polynomial time.
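The cost objective can be made concrete with a brute-force toy (an assumed formulation for illustration, not one of the paper's polynomial-time algorithms): an embedding maps each computation-graph node to a network node, with input nodes and the sink pinned, and its cost is the sum over computation edges of the edge weight times the shortest-path distance between the images of its endpoints. Exhaustive search is exponential in the number of free nodes, which is exactly why the NP-completeness result and the tree/layered/bounded-treewidth special cases matter.

```python
from itertools import product

def shortest_paths(n, edges):
    """All-pairs shortest paths (Floyd-Warshall) on an undirected
    weighted network graph with nodes 0..n-1."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

def min_cost_embedding(n_net, net_edges, comp_edges, pinned):
    """comp_edges: (u, v, weight) over computation-graph nodes;
    pinned: {comp_node: net_node} fixing the inputs and the sink.
    Tries every placement of the remaining computation nodes."""
    dist = shortest_paths(n_net, net_edges)
    comp_nodes = {u for u, v, _ in comp_edges} | {v for _, v, _ in comp_edges}
    free = sorted(comp_nodes - pinned.keys())
    best = float("inf")
    for placement in product(range(n_net), repeat=len(free)):
        emb = dict(pinned, **dict(zip(free, placement)))
        cost = sum(w * dist[emb[u]][emb[v]] for u, v, w in comp_edges)
        best = min(best, cost)
    return best

# Network: path 0-1-2 with unit edge weights.  Computation graph: inputs
# a and b feed an operator node c whose result goes to the sink s.
net = [(0, 1, 1.0), (1, 2, 1.0)]
comp = [("a", "c", 1.0), ("b", "c", 1.0), ("c", "s", 1.0)]
cost = min_cost_embedding(3, net, comp, {"a": 0, "b": 2, "s": 1})
print(cost)  # → 2.0: placing c at the middle node (with the sink) is best
```

Delay minimization replaces the sum over computation edges with the longest weighted path through the embedded computation graph, which is why the two objectives need different algorithms.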
A Linear-Time Algorithm for Integral Multiterminal Flows in Trees
In this paper, we study the problem of finding an integral multiflow that maximizes the sum of flow values between every pair of terminals in an undirected tree with nonnegative integer edge capacities and a set of terminals. In general, it is known that the flow value of an integral multiflow is bounded by the cut value of a cut-system, which consists of disjoint subsets each of which contains exactly one terminal or has an odd cut value, and that there exists a pair of an integral multiflow and a cut-system whose flow value and cut value are equal; i.e., a pair of a maximum integral multiflow and a minimum cut. In this paper, we propose an O(n)-time algorithm that finds such a pair of an integral multiflow and a cut-system in a given tree instance with n vertices. This improves the best previous results by a factor of Omega(n). Regarding the given tree as a rooted tree, we define O(n) rooted tree instances, taking each vertex as a root, and establish a recursive formula on the maximum integral multiflow values of these instances to design a dynamic programming algorithm that computes the maximum integral multiflow values of all O(n) rooted instances in linear time. We can prove that the algorithm implicitly maintains a cut-system, so that not only a maximum integral multiflow but also a minimum cut-system can be constructed in linear time for any rooted instance whenever necessary. The resulting algorithm is rather compact and succinct.
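The problem structure can be seen on a tiny instance with a brute-force check (illustration only; the paper's algorithm runs in O(n)): in a tree the path between two terminals is unique, so an integral multiflow is just a nonnegative integer per terminal pair, subject to edge capacities. On a star with three unit-capacity edges the fractional optimum routes 1/2 per pair (total 3/2), but the best integral multiflow has value 1, matching the odd-cut-value phenomenon the abstract mentions.

```python
from itertools import combinations, product

def tree_path(adj, s, t):
    """Unique s-t path in a tree, returned as a list of sorted edge tuples."""
    stack = [(s, None, [])]
    while stack:
        node, parent, path = stack.pop()
        if node == t:
            return path
        for nxt in adj[node]:
            if nxt != parent:
                e = tuple(sorted((node, nxt)))
                stack.append((nxt, node, path + [e]))

def max_integral_multiflow(adj, cap, terminals):
    """Exhaustively search integer flow values for every terminal pair;
    exponential in the number of pairs, so only usable on toy instances."""
    pairs = list(combinations(terminals, 2))
    paths = [tree_path(adj, s, t) for s, t in pairs]
    bound = max(cap.values())
    best = 0
    for f in product(range(bound + 1), repeat=len(pairs)):
        load = {e: 0 for e in cap}
        for fi, path in zip(f, paths):
            for e in path:
                load[e] += fi
        if all(load[e] <= cap[e] for e in cap):
            best = max(best, sum(f))
    return best

# Star: center 0, terminal leaves 1, 2, 3, each edge with capacity 1.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
cap = {(0, 1): 1, (0, 2): 1, (0, 3): 1}
print(max_integral_multiflow(adj, cap, [1, 2, 3]))  # → 1
```

Doubling every capacity to 2 makes all three singleton cuts even and the integral optimum jumps to 3, i.e. one unit per pair.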
Network Information Flow with Correlated Sources
In this paper, we consider a network communications problem in which multiple
correlated sources must be delivered to a single data collector node, over a
network of noisy independent point-to-point channels. We prove that perfect
reconstruction of all the sources at the sink is possible if and only if, for
all partitions of the network nodes into two subsets S and S^c such that the
sink is always in S^c, we have that H(U_S|U_{S^c}) < \sum_{i\in S,j\in S^c}
C_{ij}. Our main finding is that in this setup a general source/channel
separation theorem holds, and that Shannon information behaves as a classical
network flow, identical in nature to the flow of water in pipes. At first
glance, it might seem surprising that separation holds in a fairly general
network situation like the one we study. A closer look, however, reveals that
the reason for this is that our model allows only for independent
point-to-point channels between pairs of nodes, and not multiple-access and/or
broadcast channels, for which separation is well known not to hold. This
``information as flow'' view provides an algorithmic interpretation for our
results, among which perhaps the most important one is the optimality of
implementing codes using a layered protocol stack.
Comment: Final version, to appear in the IEEE Transactions on Information
Theory; contains (very) minor changes based on the last round of review.
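The cut condition can be checked numerically on a toy setup (an illustrative instance, not taken from the paper): two correlated binary sources at nodes 0 and 1 feed the sink at node 2 over independent point-to-point links. For each source subset S with the sink in S^c, the condition requires H(U_S | U_{S^c}) < sum of the capacities crossing from S to S^c. The joint pmf and the capacity values below are assumptions.

```python
from itertools import combinations
from math import log2

# Joint pmf of (U_0, U_1): correlated bits that agree with probability 0.9.
pmf = {(0, 0): 0.45, (1, 1): 0.45, (0, 1): 0.05, (1, 0): 0.05}

def cond_entropy(pmf, S, comp):
    """H(U_S | U_comp) in bits; comp lists the conditioning source indices."""
    marg = {}
    for u, p in pmf.items():
        key = tuple(u[i] for i in comp)
        marg[key] = marg.get(key, 0.0) + p
    h = 0.0
    for u, p in pmf.items():
        if p > 0:
            h -= p * log2(p / marg[tuple(u[i] for i in comp)])
    return h

# Link capacities C[(i, j)] in bits per channel use (assumed values);
# each link is an independent point-to-point channel, as in the paper.
C = {(0, 2): 0.8, (1, 2): 0.8, (0, 1): 0.0, (1, 0): 0.0}
sources, sink = [0, 1], 2

ok = True
for r in range(1, len(sources) + 1):
    for S in combinations(sources, r):
        comp = [i for i in sources if i not in S]     # sink is in S^c too
        crossing = sum(c for (i, j), c in C.items()
                       if i in S and (j == sink or j in comp))
        needed = cond_entropy(pmf, S, comp)
        print(f"S={S}: need {needed:.3f} < {crossing:.2f} -> {needed < crossing}")
        ok = ok and needed < crossing
print("perfect reconstruction possible:", ok)
```

With these numbers the binding cut is S = {0, 1}: the joint entropy H(U_0, U_1) ≈ 1.469 bits must fit through the 1.6 bits of total sink-facing capacity; dropping each link to 0.7 would violate it, matching the Slepian-Wolf sum-rate requirement that the separation theorem recovers.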
Security in the economic operation of power systems