Attacking Shortest Paths by Cutting Edges
Identifying shortest paths between nodes in a network is a common graph
analysis problem that is important for many applications involving routing of
resources. An adversary that can manipulate the graph structure could alter
traffic patterns to gain some benefit (e.g., make more money by directing
traffic to a toll road). This paper presents the Force Path Cut problem, in
which an adversary removes edges from a graph to make a particular path the
shortest between its terminal nodes. We prove that this problem is APX-hard,
but introduce PATHATTACK, a polynomial-time approximation algorithm that
guarantees a solution within a logarithmic factor of the optimal value. In
addition, we introduce the Force Edge Cut and Force Node Cut problems, in which
the adversary targets a particular edge or node, respectively, rather than an
entire path. We derive a nonconvex optimization formulation for these problems,
and develop a heuristic algorithm that uses PATHATTACK as a subroutine. We
demonstrate all of these algorithms on a diverse set of real and synthetic
networks, illustrating the network types that benefit most from the proposed
algorithms.
Comment: 37 pages, 11 figures; extended version of arXiv:2104.0376
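As an illustration of the Force Path Cut setting only (this greedy loop is not the paper's PATHATTACK and carries no approximation guarantee; the graph, the `weight` attribute, and the `removal_cost` attribute are assumptions), one can repeatedly cut the cheapest off-target edge of whichever competing path is currently shortest:

```python
import networkx as nx

def greedy_force_path_cut(G, target_path, weight="weight", cost="removal_cost"):
    """Illustrative greedy heuristic: remove edges until target_path is a
    shortest path between its endpoints. Not the paper's PATHATTACK."""
    s, t = target_path[0], target_path[-1]
    target_edges = {frozenset(e) for e in zip(target_path, target_path[1:])}
    target_len = nx.path_weight(G, target_path, weight=weight)
    cut = []
    while True:
        competitor = nx.shortest_path(G, s, t, weight=weight)
        if nx.path_weight(G, competitor, weight=weight) >= target_len:
            return cut  # target path is now (tied for) shortest
        # among the competitor's edges not on the target path,
        # cut the one with the lowest removal cost
        off_target = [e for e in zip(competitor, competitor[1:])
                      if frozenset(e) not in target_edges]
        u, v = min(off_target, key=lambda e: G[e[0]][e[1]].get(cost, 1))
        G.remove_edge(u, v)
        cut.append((u, v))
```

Since the target path itself is never cut, the terminals stay connected, and every strictly shorter competitor must contain at least one off-target edge, so the loop terminates.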
Concentration of Measure Inequalities in Information Theory, Communications and Coding (Second Edition)
During the last two decades, concentration inequalities have been the subject
of exciting developments in various areas, including convex geometry,
functional analysis, statistical physics, high-dimensional statistics, pure and
applied probability theory, information theory, theoretical computer science,
and learning theory. This monograph focuses on some of the key modern
mathematical tools that are used for the derivation of concentration
inequalities, on their links to information theory, and on their various
applications to communications and coding. In addition to being a survey, this
monograph also includes several new results derived by the authors. The
first part of the monograph introduces classical concentration inequalities for
martingales, as well as some recent refinements and extensions. The power and
versatility of the martingale approach are exemplified in the context of codes
defined on graphs and iterative decoding algorithms, as well as codes for
wireless communication. The second part of the monograph introduces the entropy
method, an information-theoretic technique for deriving concentration
inequalities. The basic ingredients of the entropy method are discussed first
in the context of logarithmic Sobolev inequalities, which underlie the
so-called functional approach to concentration of measure, and then from a
complementary information-theoretic viewpoint based on transportation-cost
inequalities and probability in metric spaces. Some representative results on
concentration for dependent random variables are briefly summarized, with
emphasis on their connections to the entropy method. Finally, we discuss
several applications of the entropy method to problems in communications and
coding, including strong converses, empirical distributions of good channel
codes, and an information-theoretic converse for concentration of measure.
Comment: Foundations and Trends in Communications and Information Theory, vol.
10, no. 1-2, pp. 1-248, 2013. The second edition was published in October 2014.
ISBN of the printed book: 978-1-60198-906-
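To make the martingale part concrete, the prototypical classical inequality of this type is the Azuma-Hoeffding inequality, stated here in a standard form (the monograph's precise notation may differ):

```latex
% Azuma-Hoeffding: if (X_k)_{k=0}^n is a martingale with bounded
% differences |X_k - X_{k-1}| <= d_k for k = 1, ..., n, then
\[
  \Pr\bigl(|X_n - X_0| \ge r\bigr)
    \le 2 \exp\!\left( - \frac{r^2}{2 \sum_{k=1}^{n} d_k^2} \right),
  \qquad r > 0 .
\]
```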
Structural Agnostic Modeling: Adversarial Learning of Causal Graphs
A new causal discovery method, Structural Agnostic Modeling (SAM), is
presented in this paper. Leveraging both conditional independencies and
distributional asymmetries in the data, SAM aims at recovering full causal
models from continuous observational data in a multivariate non-parametric
setting. The approach is based on a game between players, each estimating one
variable's distribution conditionally on the others as a neural net, and an
adversary that aims to discriminate between the overall joint conditional
distribution and that of the original data. An original learning criterion
combining distribution estimation with sparsity and acyclicity constraints
enables end-to-end optimization of the graph structure and parameters through
stochastic gradient descent. Beyond a theoretical analysis of the approach in
the large-sample limit, SAM is extensively validated experimentally on
synthetic and real data.
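The acyclicity constraint mentioned above can be made differentiable. One standard device for this, used by NOTEARS-style structure learners (Zheng et al.) and offered here only as an illustrative sketch rather than SAM's exact criterion, scores a weighted adjacency matrix A with h(A) = tr(exp(A∘A)) − d, which vanishes exactly when A encodes a DAG:

```python
import numpy as np
from scipy.linalg import expm

def acyclicity_penalty(A):
    """Differentiable DAG penalty h(A) = tr(exp(A*A)) - d.
    h(A) == 0 iff the weighted adjacency matrix A is acyclic."""
    d = A.shape[0]
    return np.trace(expm(A * A)) - d  # A * A is elementwise

# toy usage: a 2-cycle is penalized, a 2-node chain is not
A_cycle = np.array([[0.0, 1.0], [1.0, 0.0]])
A_chain = np.array([[0.0, 1.0], [0.0, 0.0]])
print(acyclicity_penalty(A_cycle))  # ~ 1.09 > 0
print(acyclicity_penalty(A_chain))  # ~ 0.0
```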
Beurteilung der Resttragfähigkeit von Bauwerken mit Hilfe der Fuzzy-Logik und Entscheidungstheorie (Assessment of the Residual Load-Bearing Capacity of Structures Using Fuzzy Logic and Decision Theory)
Whereas the design of new structures is almost completely regulated by codes, there are no objective ways to evaluate existing facilities. Experts are often unfamiliar with the new tasks of system identification and try to retrieve at least some information from available documents. They therefore make compromises which, for many stakeholders, are not satisfying. Consequently, this publication presents a more objective and more realistic method for condition assessment. The necessary basics for this task are fracture mechanics combined with computational analysis, methods and techniques for geometry recording and material investigation, ductility and energy dissipation, and risk analysis with explicit treatment of uncertainty.

Present evaluation tools investigate how to analytically conceptualize a structure directly from given loads and measured response. Since defects are not necessarily visible or directly detectable, several damage indices are combined and integrated into a model of the real system. Fuzzy sets are ideally suited to represent parametric (data) uncertainty as well as system or model uncertainty; trapezoidal membership functions can represent the condition state of structural components as a function of damage extent or performance.

The residual load-bearing capacity can be determined by successively performing analyses in three steps. The "screening assessment" eliminates a large majority of structures from detailed consideration and advises on immediate precautions to save lives and high economic values; here, the defects have to be explicitly defined and located. If this is impossible, an "approximate evaluation" should follow, describing system geometry, material properties and failure modes in detail; here, a fault tree helps investigate defects systematically, avoiding random search or the neglect of important features or damage indices. This step is essential not only for its conceptual clarity but also for its simplicity of application, and it is an important prerequisite for condition assessment, though special circumstances might require "further investigations" to account for the actual material parameters and for unaccounted reserves due to spatial or other secondary contributions. Here, uncertainties with respect to geometry, material, loading or modeling should in no case be neglected, but explicitly quantified.

Postulating a limited set of expected failure modes is not always sufficient, since detectable signature changes are seldom directly attributable, and every defect might, together with other unforeseen situations, become decisive. A determination of all possible scenarios would thus be required to consider every imaginable influence. Risk is produced by a combination of various ill-defined failure modes, and owing to the interaction of many variables there is no simple and reliable way to predict which failure mode is dominant. Risk evaluation therefore comprises the estimation of the prognostic factor with respect to undesirable events, the component importance, and the expected damage extent.

While the design of structures is generally regulated by codes, there are as yet no objective guidelines for the condition assessment of existing structures. Many experts are not yet familiar with the new problem (system identification from loading and the resulting structural response) and therefore settle for compromise solutions. For many owners this is unsatisfactory, which is why a more objective and more realistic condition assessment is presented here. Important foundations for this are the theory of damage analysis, methods and techniques for geometry and material investigation, ductility and energy absorption, risk analysis, and the description of uncertainties. Since not all damage is obvious, several condition indicators are currently combined, the recorded data are purposefully processed, and the results are integrated into a validated model before a final assessment. If deterministic verification methods are combined with probabilistic ones, only random errors can be minimized without difficulty; systematic errors due to inaccurate modeling or vague knowledge remain. It is therefore unavoidable that decision makers judge subjectively on the basis of uncertain, often even contradictory information.

This work shows how structural members can be classified into quality classes by means of a three-stage assessment procedure. Their failure risk follows from their mean damage extent, their structural importance I (which in turn depends on their significance and on the consequences of their damage), and their prognostic factor L; the failure risk of the overall structure is determined from the topology. If the mean damage extent cannot be determined unambiguously, or if the material, geometry or load data are vague, a mathematical procedure based on fuzzy logic is proposed within the scope of "further investigations". Even for complex cause-effect relationships it filters out the dominant damage cause and prevents parameters afflicted with uncertainty from being mistaken for reliable absolute values. To compute the mean damage index, and from it the risk, the individual damage indices (one per failure mode) are given weighting factors according to their importance and are additionally divided by a factor gamma according to the type, importance and reliability of the information obtained. The result is a new procedure for the analysis of complex failure mechanisms that enables traceable conclusions.
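A minimal sketch of the quantities described above, with breakpoints, weights and the gamma factor as illustrative assumptions rather than the thesis's calibrated values: a trapezoidal membership function for a component's condition state, and a weighted mean damage index discounted by gamma.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 below a, rising to 1 on [b, c], falling to 0 at d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def mean_damage_index(indices, weights, gamma):
    """Weighted mean of per-failure-mode damage indices, divided by gamma,
    which reflects the type, importance and reliability of the information."""
    return sum(w * i for w, i in zip(weights, indices)) / (sum(weights) * gamma)

# illustrative: membership of damage extent 0.35 in a "moderate damage" state
mu = trapezoid(0.35, 0.2, 0.3, 0.5, 0.6)          # -> 1.0
risk_input = mean_damage_index([0.4, 0.7], [2.0, 1.0], gamma=1.2)
```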
Intelligent genetic algorithms for next-generation broadband multi-carrier CDMA wireless networks
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University.
This dissertation proposes a novel intelligent system architecture for next-generation broadband multi-carrier CDMA wireless networks. In our system, two similar intelligent genetic algorithms, Minimum Distance guided GAs (MDGAs), are developed for peak-to-average power ratio (PAPR) reduction at the transmitter side and multi-user detection (MUD) at the receiver side. We also derive a theoretical BER performance analysis for the proposed MC-CDMA system in an AWGN channel. Our analytical results show that the theoretical BER performance of the synchronized MC-CDMA system is the same as that of the synchronized DS-CDMA system, which also serves as theoretical guidance for our novel MUD receiver design. In contrast to traditional GAs, our MDGAs start with a balanced ratio of exploration to exploitation, which is maintained throughout the process. A new replacement strategy significantly increases the convergence rate and dramatically reduces computational complexity compared to conventional GAs. Simulation results demonstrate that, compared to schemes using exhaustive search and traditional GAs, (1) our MDGA-based PAPR reduction scheme achieves 99.52% and over 50% reductions in computational complexity, respectively, and (2) our MDGA-based MUD scheme achieves 99.54% and over 50% reductions, respectively. The use of one core MDGA solution for both problems eases hardware design and dramatically reduces implementation cost in practice.
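To ground the PAPR objective the GA searches over, here is the standard multicarrier arithmetic (this is not the thesis's MDGA; the subcarrier count, QPSK modulation and the sign-flip candidate are illustrative assumptions):

```python
import numpy as np

def papr_db(freq_symbols):
    """PAPR (dB) of the time-domain signal obtained by an IFFT of the
    frequency-domain multicarrier symbols (Nyquist-rate approximation)."""
    x = np.fft.ifft(freq_symbols)
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# illustrative: 64 random QPSK subcarriers, with and without one candidate
# sign-flip pattern of the kind a GA would search over
rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
flips = rng.choice([-1, 1], 64)
print(papr_db(qpsk), papr_db(qpsk * flips))
```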
Enabling Resilience in Cyber-Physical-Human Water Infrastructures
Rapid urbanization and growth in urban populations have forced community-scale infrastructures (e.g., water, power and natural gas distribution systems, and transportation networks) to operate at their limits. Aging (and failing) infrastructures around the world are becoming increasingly vulnerable to operational degradation, extreme weather, natural disasters and cyber attacks/failures. These trends have wide-ranging socioeconomic consequences and raise public safety concerns. In this thesis, we introduce the notion of cyber-physical-human infrastructures (CPHIs): smart community-scale infrastructures that bridge technologies with physical infrastructures and people. CPHIs are highly dynamic stochastic systems characterized by complex physical models that exhibit region-wide variability and uncertainty under disruptions. Failures in these distributed settings tend to be difficult to predict and estimate, and expensive to repair. Real-time fault identification is crucial to ensure continuity of lifeline services to customers at adequate levels of quality. Emerging smart community technologies have the potential to transform our failing infrastructures into robust and resilient future CPHIs.
In this thesis, we explore one such CPHI: community water infrastructures. Current urban water infrastructures, which are decades (sometimes over 100 years) old, encompass diverse geophysical regimes. Water stress concerns include the scarcity of supply and an increase in demand due to urbanization. Deterioration and damage to the infrastructure can disrupt water service, and contamination events can have economic and public health consequences. Unfortunately, little investment has gone into modernizing this key lifeline.
To enhance the resilience of water systems, we propose an integrated middleware framework for quick and accurate identification of failures in complex water networks that exhibit uncertain behavior. Our approach integrates IoT-based sensing and domain-specific models and simulations with machine learning methods to identify failures (pipe breaks, contamination events). The composition of techniques results in cost-accuracy-latency tradeoffs in fault identification, inherent in CPHIs due to the constraints imposed by cyber components, physical mechanics and human operators. Three key resilience problems are addressed in this thesis: isolation of multiple faults under a small number of failures, state estimation of water systems under extreme events such as earthquakes, and contaminant source identification in water networks using human-in-the-loop sensing.
By working with real-world water agencies (WSSC, DC and LADWP, LA), we first develop an understanding of the operation of water CPHI systems. We design and implement AquaSCALE, a sensor-simulation-data integration framework, and apply it to localize multiple concurrent pipe failures. We use a mixture of infrastructure measurements (historical and live water pressure/flow), environmental data (weather) and human inputs (Twitter feeds), combined and enhanced with the domain model and supervised learning techniques, to locate multiple failures at fine granularity (the individual pipeline level) with detection time reduced by orders of magnitude (from hours or days to minutes). We next consider the resilience of water infrastructures under extreme events (earthquakes); the challenge here is the lack of a priori knowledge and the increased number and severity of damages to infrastructures.
We present a graphical-model-based approach for efficient online state estimation, where an offline graph factorization partitions a given network into disjoint subgraphs, and belief-propagation-based inference is executed on the fly, in a distributed manner, on those subgraphs. Our approach can isolate 80% of broken pipes and 99% of loss-of-service to end users during an earthquake.
Finally, we address issues of water quality; today this is a human-in-the-loop process in which operators need to gather water samples for lab tests. We incorporate the necessary abstractions, together with event-processing methods, into a workflow that iteratively selects and refines the set of potential failure points via human-driven grab sampling. Our approach utilizes Hidden Markov Model representations for event inference, along with reinforcement learning methods for further refining event locations and reducing the cost of human effort.
The proposed techniques are integrated into a middleware architecture that enables components to communicate and collaborate with one another. We validate our approaches through a prototype implementation with multiple real-world water networks, supply-demand patterns from water utilities, and policies set by the U.S. EPA. While our focus here is on water infrastructures in a community, the developed end-to-end solution is applicable to other infrastructures and community services that operate in disruptive and resource-constrained environments.
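As a minimal sketch of the HMM-based event-inference step (the state and observation spaces and all probabilities below are illustrative assumptions, not the thesis's calibrated models), the scaled forward algorithm scores a sequence of grab-sample results under a candidate contamination hypothesis; comparing log-likelihoods across hypotheses then ranks potential source locations:

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of an observation sequence under an HMM
    (pi: initial state probs, A: transition matrix, B: emission matrix)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()          # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return loglik

# illustrative: 2 hidden states (contaminated, clean), 2 observations
# (positive test, negative test), and one grab-sample sequence
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.8, 0.2], [0.1, 0.9]])
print(forward_loglik(pi, A, B, obs=[0, 0, 1, 0]))
```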