680 research outputs found
Minimum Sparsity of Unobservable Power Network Attacks
Physical security of power networks under power injection attacks that alter
generation and loads is studied. The system operator employs Phasor Measurement
Units (PMUs) for detecting such attacks, while attackers devise attacks that
are unobservable by such PMU networks. It is shown that, given the PMU
locations, the solution to finding the sparsest unobservable attacks has a
simple form with probability one, namely κ + 1, where κ
is defined as the vulnerable vertex connectivity of an augmented
graph. The constructive proof allows one to find the entire set of the sparsest
unobservable attacks in polynomial time. Furthermore, a notion of the potential
impact of unobservable attacks is introduced. With optimized PMU deployment,
the sparsest unobservable attacks and their potential impact as functions of
the number of PMUs are evaluated numerically for the IEEE 30, 57, 118 and
300-bus systems and the Polish 2383, 2737 and 3012-bus systems. It is observed
that, as more PMUs are added, the maximum potential impact among all the
sparsest unobservable attacks drops quickly until it reaches the minimum
sparsity. Comment: submitted to IEEE Transactions on Automatic Control
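The key quantity in the abstract above is a vertex-connectivity-style parameter. As a rough illustration of the classical notion it builds on, here is a brute-force computation of the vertex connectivity of a small graph; note that the paper's "vulnerable vertex connectivity" is a specialized variant defined on an augmented graph, so this sketch only conveys the base concept.

```python
from itertools import combinations
from collections import deque

def is_connected(adj, nodes):
    """BFS connectivity check restricted to the given node set."""
    nodes = set(nodes)
    start = next(iter(nodes))
    seen, q = {start}, deque([start])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w in nodes and w not in seen:
                seen.add(w)
                q.append(w)
    return seen == nodes

def vertex_connectivity(n, edges):
    """Classical vertex connectivity: the size of the smallest vertex set
    whose removal disconnects the graph (n - 1 for a complete graph).
    Exponential brute force; tiny graphs only."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for k in range(n - 1):
        for cut in combinations(range(n), k):
            rest = set(range(n)) - set(cut)
            if len(rest) > 1 and not is_connected(adj, rest):
                return k
    return n - 1
```

For example, a 4-cycle has vertex connectivity 2: removing two opposite vertices disconnects it, while removing any single vertex leaves a path.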
An Empirical Analysis of Approximation Algorithms for the Unweighted Tree Augmentation Problem
In this thesis, we perform an experimental study of approximation algorithms for the tree augmentation problem (TAP). TAP is a fundamental problem in network design. The goal of TAP is to add the minimum number of edges from a given edge set to a tree so that it becomes 2-edge-connected. Formally, given a tree T = (V, E), where V denotes the set of vertices and E denotes the set of edges in the tree, and a set of edges (or links) L ⊆ V × V disjoint from E, the objective is to find a set of edges F ⊆ L to add to the tree such that the augmented tree (V, E ∪ F) is 2-edge-connected. Our goal is to establish a baseline performance for each approximation algorithm on actual instances rather than worst-case instances. In particular, we are interested in whether the algorithms' ranking on practical instances is consistent with their worst-case-guarantee ranking. We are also interested in whether preprocessing times, implementation difficulty, and running times justify the use of an algorithm in practice. We profiled and analyzed five approximation algorithms, viz., the Frederickson algorithm, the Nagamochi algorithm, the Even algorithm, the Adjiashvili algorithm, and the Grandoni algorithm. Additionally, we used an integer program and a simple randomized algorithm as benchmarks. The performance of each algorithm was measured using space, time, and quality metrics. We found that the simple randomized algorithm is competitive with the approximation algorithms and that the algorithms rank according to their theoretical guarantees. The randomized algorithm is simpler to implement and understand. Furthermore, it runs faster and uses less space than any of the more sophisticated approximation algorithms.
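The thesis does not spell out its "simple randomized algorithm", but a natural baseline of this kind rests on the standard TAP fact that (V, E ∪ F) is 2-edge-connected exactly when every tree edge lies on the tree path of some link in F. A minimal sketch under that assumption (function names are mine, not the thesis's):

```python
import random
from collections import defaultdict

def tree_path_edges(parent, depth, a, b):
    """Edges on the unique tree path between a and b, as frozensets."""
    path = set()
    while a != b:
        if depth[a] < depth[b]:
            a, b = b, a          # always climb from the deeper endpoint
        path.add(frozenset((a, parent[a])))
        a = parent[a]
    return path

def randomized_tap(tree_edges, links, rng=random):
    """Hypothetical simple randomized TAP heuristic: scan the links in
    random order and keep a link only if its tree path contains a tree
    edge not yet covered. If every tree edge ends up covered, the
    augmented tree is 2-edge-connected."""
    adj = defaultdict(list)
    for u, v in tree_edges:
        adj[u].append(v)
        adj[v].append(u)
    root = tree_edges[0][0]
    parent, depth = {root: root}, {root: 0}
    stack = [root]
    while stack:                 # iterative DFS to orient the tree
        u = stack.pop()
        for w in adj[u]:
            if w not in parent:
                parent[w], depth[w] = u, depth[u] + 1
                stack.append(w)
    uncovered = {frozenset(e) for e in tree_edges}
    order = list(links)
    rng.shuffle(order)
    chosen = []
    for a, b in order:
        path = tree_path_edges(parent, depth, a, b)
        if path & uncovered:     # link covers a still-uncovered tree edge
            chosen.append((a, b))
            uncovered -= path
    return chosen, uncovered     # empty `uncovered` means success
```

On the path 0-1-2-3 with the single link (0, 3), the heuristic returns that link and covers all three tree edges.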
On the König deficiency of zero-reducible graphs
A confluent and terminating reduction system is introduced for graphs, which preserves the number of their perfect matchings. A union-find algorithm is presented to carry out reduction in almost linear time. The König property is investigated in the context of reduction by introducing the König deficiency of a graph G as the difference between the vertex covering number and the matching number of G. It is shown that the problem of finding the König deficiency of a graph is NP-complete even if we know that the graph reduces to the empty graph. Finally, the König deficiency of graphs G having a vertex v such that G − v has a unique perfect matching is studied in connection with reduction.
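The König deficiency defined above is simple to state computationally: the vertex covering number minus the matching number. A brute-force sketch for tiny graphs (exponential, illustration only; computing it efficiently is exactly what the paper shows to be NP-complete in general):

```python
from itertools import combinations

def konig_deficiency(vertices, edges):
    """König deficiency = vertex covering number - matching number,
    both found by exhaustive search over subsets."""
    # Minimum vertex cover: smallest S touching every edge.
    tau = len(vertices)
    for k in range(len(vertices) + 1):
        if any(all(u in S or v in S for u, v in edges)
               for S in map(set, combinations(vertices, k))):
            tau = k
            break
    # Maximum matching: largest set of pairwise disjoint edges.
    nu = 0
    for k in range(len(edges), 0, -1):
        for M in combinations(edges, k):
            used = [x for e in M for x in e]
            if len(used) == len(set(used)):
                nu = k
                break
        if nu:
            break
    return tau - nu
```

A triangle has deficiency 2 − 1 = 1, while any bipartite graph, such as a 4-cycle, has deficiency 0 by König's theorem.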
Diameter Minimization by Shortcutting with Degree Constraints
We consider the problem of adding a fixed number of new edges to an
undirected graph in order to minimize the diameter of the augmented graph, and
under the constraint that the number of edges added for each vertex is bounded
by an integer. The problem is motivated by network-design applications, where
we want to minimize the worst case communication in the network without
excessively increasing the degree of any single vertex, so as to avoid
additional overload. We present three algorithms for this task, each with their
own merits. The special case of a matching augmentation, when every vertex can
be incident to at most one new edge, is of particular interest, for which we
show an inapproximability result, and provide bounds on the smallest achievable
diameter when these edges are added to a path. Finally, we empirically evaluate
and compare our algorithms on several real-life networks of varying types. Comment: A shorter version of this work has been accepted at the IEEE ICDM 2022 conference.
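As a rough illustration of the problem setting (not one of the paper's three algorithms), a naive greedy baseline adds one shortcut at a time, always choosing the pair that most reduces the diameter while respecting a per-vertex budget on new edges:

```python
from collections import deque
from itertools import combinations

def diameter(adj, n):
    """Exact diameter via BFS from every vertex (small graphs only)."""
    best = 0
    for s in range(n):
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        best = max(best, max(dist.values()))
    return best

def greedy_shortcut(n, edges, k, cap):
    """Greedy baseline: add k shortcuts, each time the pair giving the
    smallest resulting diameter, with at most `cap` new edges per vertex."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    added = {v: 0 for v in range(n)}
    for _ in range(k):
        best = None
        for u, v in combinations(range(n), 2):
            if v in adj[u] or added[u] >= cap or added[v] >= cap:
                continue
            adj[u].add(v); adj[v].add(u)      # try the shortcut
            d = diameter(adj, n)
            adj[u].remove(v); adj[v].remove(u)
            if best is None or d < best[0]:
                best = (d, u, v)
        if best is None:
            break
        _, u, v = best
        adj[u].add(v); adj[v].add(u)
        added[u] += 1
        added[v] += 1
    return adj, diameter(adj, n)
```

On a 5-vertex path (diameter 4), a matching-style budget of one new edge per vertex and k = 1 yields the shortcut between the endpoints, turning the path into a 5-cycle of diameter 2.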
LAD models, trees and an analog of the fundamental theorem of arithmetic
Motivated by applications of Logical Analysis of Data (LAD) in medical contexts, original discrete optimization problems are proposed. When a patient arrives with a presumption of a disease, they undergo a sequence of tests. From one patient to another, the tests allowing to detect the disease may vary. A subset of tests whose results detect the disease in a given part of the population is called a pattern, which has its own prevalence in the population. If only a limited number of tests can be done, which ones should be selected in order to maximize the number of detected patients? Or, if each test has a cost, in which order should the tests be performed in order to minimize the cost? These are the kinds of questions investigated in this paper. For various special cases, polynomial algorithms are proposed, especially when the hypergraph whose vertices are the tests and whose edges are the patterns is a tree. One of these questions involves a criterion which is not a number but a sequence of numbers. The objective is then to find the best sequence for the lexicographic order. To solve this question, a new product on finite sequences is defined, namely the maximum shuffle product, which maps two sequences to their shuffle that is maximal for the lexicographic order. Surprisingly, this product leads to a theorem similar to the fundamental theorem of arithmetic: every sequence can be written uniquely as the product of prime sequences, with a suitable definition of prime sequences.
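The maximum shuffle product described above admits a short greedy construction: repeatedly emit the head of whichever remaining suffix is lexicographically larger. A sketch under that assumption (the function name is mine, not the paper's):

```python
def max_shuffle(s, t):
    """Lexicographically maximal shuffle (interleaving) of two sequences,
    built by the classic greedy merge: compare the two remaining
    suffixes as wholes and take the head of the larger one."""
    s, t = list(s), list(t)
    out = []
    while s and t:
        if s > t:                # compare whole suffixes, not just heads
            out.append(s.pop(0))
        else:
            out.append(t.pop(0))
    return out + s + t           # append whichever suffix is left over
```

Comparing whole suffixes matters when the heads tie: merging [2, 1] with [2, 0] must take the 2 from [2, 0] second, giving [2, 2, 1, 0] rather than [2, 2, 0, 1].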
Multi-agent pathfinding for unmanned aerial vehicles
Unmanned aerial vehicles (UAVs), commonly known as drones, have become more and
more prevalent in recent years. In particular, governmental organizations and companies
around the world are starting to research how UAVs can be used to perform tasks such
as package delivery, disaster investigation and surveillance of key assets such as pipelines,
railroads and bridges. NASA is currently in the early stages of developing an air traffic
control system specifically designed to manage UAV operations in low-altitude airspace.
Companies such as Amazon and Rakuten are testing large-scale drone delivery services in
the USA and Japan.
To perform these tasks, safe and conflict-free routes for concurrently operating UAVs must
be found. This can be done using multi-agent pathfinding (mapf) algorithms, although
the correct choice of algorithms is not clear. This is because many state-of-the-art mapf
algorithms have only been tested in 2D space in maps with many obstacles, while UAVs
operate in 3D space in open maps with few obstacles. In addition, when an unexpected
event occurs in the airspace and UAVs are forced to deviate from their original routes
while in flight, new conflict-free routes must be found. Planning for these unexpected
events is commonly known as contingency planning. With manned aircraft, contingency
plans can be created in advance or on a case-by-case basis while in flight. The scale at
which UAVs operate, combined with the fact that unexpected events may occur anywhere
at any time, makes both advance planning and case-by-case planning impossible.
Thus, a new approach is needed. Online multi-agent pathfinding (online mapf) looks to
be a promising solution. Online mapf utilizes traditional mapf algorithms to perform path
The primary contribution of this thesis is to present one possible approach to UAV
contingency planning using online multi-agent pathfinding algorithms, which can be used
as a baseline for future research and development. It also provides an in-depth overview
and analysis of offline mapf algorithms with the goal of determining which ones are likely
to perform best when applied to UAVs. Finally, to further this same goal, a few different
mapf algorithms are experimentally tested and analyzed.
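As a flavor of the mapf planning discussed above, here is a minimal prioritized-planning sketch: each agent is routed by BFS in space-time around a reservation table built from previously planned agents. This illustrates the general technique, not an algorithm from the thesis, and it checks vertex collisions only (no edge-swap check), as a simplification:

```python
from collections import deque

def plan_route(grid, start, goal, reserved, horizon=100):
    """Space-time BFS on a 2D grid. `reserved[t]` is the set of cells
    occupied at timestep t by agents planned earlier; 0 in `grid`
    marks a free cell. Returns a timestep-indexed path or None."""
    rows, cols = len(grid), len(grid[0])
    q = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while q:
        (r, c), t, path = q.popleft()
        if (r, c) == goal:
            return path
        if t + 1 > horizon:
            continue
        for dr, dc in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):  # wait or move
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0
                    and (nr, nc) not in reserved.get(t + 1, set())
                    and ((nr, nc), t + 1) not in seen):
                seen.add(((nr, nc), t + 1))
                q.append(((nr, nc), t + 1, path + [(nr, nc)]))
    return None
```

Because waiting in place is a move, an agent blocked by a reservation can pause for a timestep and then proceed, which is exactly the behavior prioritized planners rely on.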