Multi-Stage Robust Transmission Constrained Unit Commitment: A Decomposition Framework with Implicit Decision Rules
With the integration of large-scale renewable energy sources to power
systems, many optimization methods have been applied to solve the
stochastic/uncertain transmission-constrained unit commitment (TCUC) problem.
Among all methods, two-stage and multi-stage robust optimization-based methods
are the most widely adopted. Two-stage methods do not enforce nonanticipativity
of the economic dispatch (ED) decisions, whereas multi-stage methods usually
adopt explicit decision rules (for example, affine decision rules) to guarantee
it. With explicit decision rules, however, the computational burden can be
heavy and the optimality of the solution suffers. In this paper, a multi-stage
robust TCUC formulation with
implicit decision rules is proposed, as well as a decomposition framework to
solve it. The solutions are proved to be multi-stage robust and
nonanticipativity of ED decisions is guaranteed. Meanwhile, a computationally
efficient time-decoupled solution method for the feasibility check subproblems
is also proposed such that the method is suitable for large-scale TCUC problems
with uncertain loads/renewable injections. Numerical tests are conducted on the
IEEE 118-bus system and the Polish 2383-bus system, and the performance of
several state-of-the-art methods is compared.
Robust Transmission Constrained Unit Commitment: A Column Merging Method
With rapid integration of power sources with uncertainty, robustness must be
carefully considered in the transmission constrained unit commitment (TCUC)
problem. The overall computational complexity of the robust TCUC methods is
closely related to the vertex number of the uncertainty set. The vertex number
is further associated with 1) the period number in the scheduling horizon as
well as 2) the number of nodes with uncertain injections. In this paper, a
column merging method (CMM) is proposed to reduce the computational burden by
merging the uncertain nodes while still guaranteeing the robustness of the
solution. The CMM modifies the transmission constraints, with the parameters
obtained from an analytical solution of a uniform approximation problem, so
that the extra computational time is negligible. The CMM is applied under a
greedy-algorithm-based framework, in which the number of merged nodes and the
approximation error are well balanced. The CMM is designed as a preprocessing
tool to improve the solution efficiency of robust TCUC problems and is
compatible with many solution methods (such as two-stage and multi-stage
robust optimization methods). Numerical tests show the method is effective.
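The uniform-approximation step behind the merging can be illustrated with a toy sketch (the function name and data layout are our own assumptions, not the paper's; the actual CMM operates on the network's shift-factor matrix): the single coefficient that best approximates a set of merged column entries in the uniform (Chebyshev) sense is the midpoint of their range, and the worst-case error is half the range.

```python
def merge_columns(cols):
    """Merge several uncertain-node columns (per-line sensitivities) into
    one conservative column. For each transmission line (row), the best
    uniform (Chebyshev) approximation of the merged entries by a single
    value is the midpoint of their range; the worst-case approximation
    error is half the range."""
    merged, errors = [], []
    for row in zip(*cols):          # iterate over lines
        lo, hi = min(row), max(row)
        merged.append((lo + hi) / 2.0)
        errors.append((hi - lo) / 2.0)
    return merged, errors
```

In a preprocessing role, the returned per-line errors would then be used to tighten the transmission limits, so robustness is preserved after the merge.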
Meta Reinforcement Learning with Task Embedding and Shared Policy
Despite significant progress, deep reinforcement learning (RL) suffers from
data-inefficiency and limited generalization. Recent efforts apply
meta-learning to learn a meta-learner from a set of RL tasks such that a novel
but related task can be solved quickly. Although tasks in meta-RL differ in
their specifics, they are generally similar at a high level. However, most
meta-RL methods do not explicitly and adequately model the specific and shared
information among different tasks, which limits their ability to learn training
tasks and to generalize to novel tasks. In this paper, we propose to capture
the shared information on the one hand and meta-learn how to quickly abstract
the specific information about a task on the other hand. Methodologically, we
train an SGD meta-learner to quickly optimize a task encoder for each task,
which generates a task embedding based on past experience. Meanwhile, we learn
a policy which is shared across all tasks and conditioned on task embeddings.
Empirical results on four simulated tasks demonstrate that our method has
better learning capacity on both training and novel tasks and attains up to 3
to 4 times higher returns compared to baselines.
Comment: Accepted to IJCAI 201
Moss: A Scalable Tool for Efficiently Sampling and Counting 4- and 5-Node Graphlets
Counting the frequencies of 3-, 4-, and 5-node undirected motifs (also known
as graphlets) is widely used for understanding complex networks such as social
and biological networks. However, it is a great challenge to compute these
metrics for a large graph due to the intensive computation. Despite recent
efforts to
count triangles (i.e., 3-node undirected motif counting), little attention has
been given to developing scalable tools that can be used to characterize 4- and
5-node motifs. In this paper, we develop computationally efficient methods to
sample and count 4- and 5-node undirected motifs. Our methods provide unbiased
estimators of motif frequencies, and we derive simple and exact formulas for
the variances of the estimators. Moreover, our methods are designed to fit
vertex centric programming models, so they can be easily applied to current
graph computing systems such as Pregel and GraphLab. We conduct experiments on
a variety of real-world datasets, and experimental results show that our
methods are several orders of magnitude faster than the state-of-the-art
methods under the same estimation errors.
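The unbiased-estimator idea can be sketched for the simplest motif, the triangle, rather than the paper's 4- and 5-node graphlets (a minimal Horvitz-Thompson illustration under our own assumptions, not the Moss algorithm itself): keep each edge independently with probability p, count surviving motifs, and rescale by the inverse retention probability.

```python
import random
from itertools import combinations

def triangle_estimate(edges, p, seed=0):
    """Horvitz-Thompson triangle-count estimator: keep each edge with
    probability p; a triangle survives with probability p**3, so the
    sampled count divided by p**3 is an unbiased estimate."""
    rng = random.Random(seed)
    kept = [e for e in edges if rng.random() < p]
    adj = {}
    for u, v in kept:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    count = 0
    for u in adj:
        # count each triangle exactly once, at its smallest vertex
        for v, w in combinations(sorted(adj[u]), 2):
            if u < v and w in adj.get(v, set()):
                count += 1
    return count / p ** 3
```

The same keep-and-rescale logic extends to larger motifs, with the survival probability raised to the motif's edge count; deriving the estimator's variance in closed form is what makes the error analysis tractable.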
A Variable Reduction Method for Large-Scale Security Constrained Unit Commitment
Efficient methods for large-scale security constrained unit commitment (SCUC)
problems have long been an important research topic and a challenge especially
in market clearing computation. For large-scale SCUC, Lagrangian relaxation
(LR) and mixed integer programming (MIP) methods are the most widely adopted.
However, LR usually suffers from slow convergence, and the computational
burden of MIP is heavy when the number of binary variables is large.
In this paper, a variable reduction method is proposed: First, the time-coupled
constraints in the original SCUC problem are relaxed and many single-period
SCUC problems (s-UC) are obtained. Second, LR is used to solve the s-UCs.
Different from traditional LR with an iterative subgradient method, it is
found that the optimal multipliers and the approximate UC solutions of the
s-UCs can be obtained by solving linear programs. Third, a criterion for
choosing and fixing
the UC variables in the SCUC problem is established, hence the number of binary
variables is reduced. Last, the SCUC with reduced binary variables is solved by
an MIP solver to obtain the final UC solution. The proposed method is tested
on the IEEE 118-bus system and a 6484-bus system. The results show the method
is very efficient and effective.
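One plausible reading of such a fixing criterion (a sketch under our own assumptions, not the paper's exact rule) is to fix a unit's binary variables only when its approximate single-period solutions are integral and consistent across the whole horizon:

```python
def screen_uc_variables(s_uc, tol=1e-6):
    """s_uc maps each unit to its per-period approximate commitment
    values from the relaxed single-period problems (s-UC). A unit is
    fixed only when every period agrees on the same integral value;
    otherwise its binaries are left free for the final MIP."""
    fixed, free = {}, []
    for unit, vals in s_uc.items():
        if all(abs(v) < tol for v in vals):
            fixed[unit] = 0          # off in every period
        elif all(abs(v - 1) < tol for v in vals):
            fixed[unit] = 1          # on in every period
        else:
            free.append(unit)        # ambiguous: keep binary
    return fixed, free
```

Only the units in `free` then contribute binary variables to the final MIP, which is what shrinks the search space.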
A Fast Sketch Method for Mining User Similarities over Fully Dynamic Graph Streams
Many real-world networks such as Twitter and YouTube are given as fully
dynamic graph streams, represented as sequences of edge insertions and
deletions (e.g., users can subscribe to and unsubscribe from channels on
YouTube).
Existing similarity estimation methods such as MinHash and OPH are customized
to static graphs. We observe that they are indeed sampling methods and exhibit
a sampling bias when applied to fully dynamic graph streams, which results in
large estimation errors. To address this challenge, we develop a fast and
accurate sketch method, VOS. VOS processes each edge in the graph stream of
interest with small time complexity O(1) and uses small memory space to build a
compact sketch of the dynamic graph stream over time. Based on the sketch built
on-the-fly, we develop a method to estimate user similarities over time. We
conduct extensive experiments and the experimental results demonstrate the
efficiency and efficacy of our method.
Comment: Accepted in ICDE 2019 (4-page short paper)
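To see why one-pass min-based sketches are tied to static data, consider a textbook MinHash signature (this is the baseline the abstract contrasts against, not the VOS algorithm; the salted-SHA-1 hash family is our own choice for the sketch):

```python
import hashlib

def minhash_signature(items, k=64):
    """k independent min-hash values via a salted hash family."""
    def h(salt, x):
        return int(hashlib.sha1(f"{salt}:{x}".encode()).hexdigest(), 16)
    return [min(h(salt, x) for x in items) for salt in range(k)]

def jaccard_estimate(sig_a, sig_b):
    """Collision fraction of two signatures estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

If the element attaining one of the minima is later deleted, the sketch cannot recover the new minimum without revisiting all surviving elements; this is the kind of bias on fully dynamic streams that motivates a purpose-built sketch.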
Social Sensor Placement in Large Scale Networks: A Graph Sampling Perspective
Sensor placement for the purpose of detecting/tracking news outbreak and
preventing rumor spreading is a challenging problem in a large scale online
social network (OSN). This problem is a kind of subset selection problem:
choosing a small set of items from a large population so as to maximize some
prespecified set function. However, it is known to be NP-complete. Existing
heuristics are very costly especially for modern OSNs which usually contain
hundreds of millions of users. This paper aims to design methods to find
\emph{good solutions} that can well trade off efficiency and accuracy. We first
show that it is possible to obtain a high quality solution with a probabilistic
guarantee from a "{\em candidate set}" of the underlying social network. By
exploring this candidate set, one can increase the efficiency of placing social
sensors. We also present how this candidate set can be obtained using "{\em
graph sampling}", which has an advantage over previous methods of not requiring
the prior knowledge of the complete network topology. Experiments carried out
on two real datasets demonstrate not only the accuracy and efficiency of our
approach but also its effectiveness in detecting and predicting news outbreaks.
Comment: 10 pages, 8 figures
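The subset-selection baseline being accelerated can be sketched as plain greedy maximization of a coverage-style set function (a minimal sketch with hypothetical candidate sets; the paper's contribution is restricting this search to a sampled candidate set):

```python
def greedy_placement(coverage, budget):
    """coverage maps each candidate node to the set of users it can
    monitor. Greedily pick the node with the largest marginal gain;
    for monotone submodular objectives this achieves a (1 - 1/e)
    approximation of the optimal placement."""
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(coverage, key=lambda n: len(coverage[n] - covered))
        if not coverage[best] - covered:
            break                    # no remaining marginal gain
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered
```

Each iteration scans every candidate, which is exactly what becomes prohibitive on OSNs with hundreds of millions of users and why shrinking the candidate set first pays off.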
Design of Efficient Sampling Methods on Hybrid Social-Affiliation Networks
Graph sampling via crawling has become increasingly popular and important in
the study of measuring various characteristics of large scale complex networks.
While powerful, it is known to be challenging when the graph is loosely
connected or disconnected, which slows down the convergence of random walks
and can cause poor estimation accuracy.
In this work, we observe that the graph under study, called the target graph,
usually does not exist in isolation. In many situations, the target graph is
related to an auxiliary graph and an affiliation graph, and the target graph
becomes well connected when viewed from the perspective of these three graphs
together, which we call a hybrid social-affiliation graph in this paper.
When directly sampling the target graph is difficult or inefficient, we can
indirectly sample it efficiently with the assistance of the other two graphs.
We design three sampling methods on such a hybrid social-affiliation network.
Experiments conducted on both synthetic and real datasets demonstrate the
effectiveness of our proposed methods.
Comment: 11 pages, 13 figures, technical report
Sampling Online Social Networks by Random Walk with Indirect Jumps
Random walk-based sampling methods are gaining popularity and importance in
characterizing large networks. While powerful, they suffer from the slow mixing
problem when the graph is loosely connected, which results in poor estimation
accuracy. Random walk with jumps (RWwJ) can address the slow mixing problem but
it is inapplicable if the graph does not support uniform vertex sampling (UNI).
In this work, we develop methods that can efficiently sample a graph without
requiring UNI but still enjoy similar benefits to RWwJ. We observe
that many graphs under study, called target graphs, do not exist in isolation.
In many situations, a target graph is related to an auxiliary graph and a
bipartite graph, and they together form a better connected {\em two-layered
network structure}. This new viewpoint brings extra benefits to graph sampling:
if directly sampling a target graph is difficult, we can sample it indirectly
with the assistance of the other two graphs. We propose a series of new graph
sampling techniques by exploiting such a two-layered network structure to
estimate target graph characteristics. Experiments conducted on both synthetic
and real-world networks demonstrate the effectiveness and usefulness of these
new techniques.
Comment: 14 pages, 17 figures, extended version
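The degree bias that all such walk-based estimators must correct can be shown in a few lines (a generic importance-weighted random-walk estimator under our own assumptions, not the paper's indirect-jump technique):

```python
import random

def rw_node_average(adj, f, steps, seed=0):
    """Estimate the node-level average of f by a plain random walk.
    The walk visits v proportionally to deg(v), so weighting each
    visit by 1/deg(v) removes the degree bias asymptotically."""
    rng = random.Random(seed)
    v = next(iter(adj))              # arbitrary start vertex
    num = den = 0.0
    for _ in range(steps):
        v = rng.choice(adj[v])       # one walk step
        w = 1.0 / len(adj[v])        # importance weight
        num += w * f(v)
        den += w
    return num / den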
A Feasible Graph Partition Framework for Random Walks Implemented by Parallel Computing in Big Graph
Graph partition is a fundamental problem in parallel computing for big graph
data. Many graph partition algorithms have been proposed for various
applications, such as matrix computations and PageRank, but none has paid
attention to random walks. Random walks are a widely used method for exploring
graph structure in many fields. The challenges of graph partition for random
walks include heavy communication between partitions, extensive replication of
vertices, and unbalanced partitions. In this paper, we propose a feasible
graph partition framework for random walks implemented by parallel computing
on big graphs. The framework is based on two optimization functions that
reduce the bandwidth, memory, and storage cost while guaranteeing load
balance. Within this framework, several greedy graph partition algorithms are
proposed. We also propose five metrics from different perspectives to evaluate
the performance of these algorithms. Experiments on a real-world big graph
dataset show that the algorithms in the framework can solve the graph
partition problem for random walks under different needs; in the best case,
the number of communications is reduced by more than a factor of 70.
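A greedy strategy of the kind described above might look like the following sketch (our own minimal variant, not one of the paper's algorithms: place each vertex in the part holding most of its already-placed neighbors, under a balance cap, and measure the cut as the communication cost of a walk step):

```python
def greedy_partition(adj, k):
    """Greedy balanced partition that keeps random walks local: each
    vertex joins the part containing most of its already-placed
    neighbors, subject to a ceil(n/k) balance cap."""
    cap = -(-len(adj) // k)                  # ceiling division
    part, sizes = {}, [0] * k
    for v in adj:
        scores = [0] * k
        for u in adj[v]:
            if u in part:
                scores[part[u]] += 1
        # best eligible part: most placed neighbors, room remaining
        best = max((p for p in range(k) if sizes[p] < cap),
                   key=lambda p: scores[p])
        part[v] = best
        sizes[best] += 1
    return part

def cut_size(adj, part):
    """Inter-partition edges, i.e. walk steps needing communication."""
    return sum(1 for v in adj for u in adj[v]
               if v < u and part[v] != part[u])
```

A small cut means a walk rarely crosses partition boundaries, which is precisely the communication count the evaluation metrics above are measuring.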