Entangled networks, synchronization, and optimal network topology
A new family of graphs, {\it entangled networks}, with optimal properties in
many respects, is introduced. By definition, their topology is one that
optimizes synchronizability for many dynamical processes. These networks are
shown to have an extremely homogeneous structure: the degree, node-distance,
betweenness, and loop distributions are all very narrow. They are also
characterized by a deeply interwoven (entangled) structure with short average
distances, large loops, and no well-defined community structure. This family of
networks exhibits excellent performance with respect to other flow properties
such as robustness against errors and attacks, minimal first-passage time of
random walks, and efficient communication. These remarkable features make
entangled networks a useful concept, optimal or almost optimal in many
senses, with plenty of potential applications in computer science and
neuroscience.
Comment: Slightly modified version, as accepted in Phys. Rev. Lett.
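The synchronizability criterion behind this construction is spectral: for many coupled dynamical systems, a network synchronizes more easily the smaller the eigenratio λ_N/λ_2 of its Laplacian. A minimal pure-Python sketch of that quantity (the cycle graph and its closed-form Fourier eigenvectors are illustrative choices, not the paper's construction):

```python
import math

def cycle_laplacian(n):
    """Laplacian L = D - A of the cycle graph C_n."""
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 2.0
        L[i][(i + 1) % n] -= 1.0
        L[i][(i - 1) % n] -= 1.0
    return L

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

n = 8
L = cycle_laplacian(n)

# For a circulant Laplacian the Fourier modes are exact eigenvectors,
# with eigenvalues 2 - 2*cos(2*pi*k/n), k = 0..n-1.
lams = [2 - 2 * math.cos(2 * math.pi * k / n) for k in range(n)]
lam2 = min(l for l in lams if l > 1e-12)   # smallest nonzero eigenvalue
lam_max = max(lams)

# Verify the k = 1 mode numerically: L v = lambda_2 v.
v = [math.cos(2 * math.pi * j / n) for j in range(n)]
Lv = matvec(L, v)
assert all(abs(Lv[j] - lam2 * v[j]) < 1e-9 for j in range(n))

# The eigenratio lambda_max / lambda_2 governs synchronizability:
# smaller is better, and it is large for the poorly-synchronizing ring.
print(lam_max / lam2)
```

For the ring this ratio grows with n, which is one way to see why homogeneous but highly interwoven topologies (rather than simple rings) come out of the optimization.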
Optimal network topologies: Expanders, Cages, Ramanujan graphs, Entangled networks and all that
We report on some recent developments in the search for optimal network
topologies. First we review some basic concepts of spectral graph theory,
including adjacency and Laplacian matrices, paying special attention to the
topological implications of having large spectral gaps. We also introduce
related concepts such as ``expander'', Ramanujan, and cage graphs. Afterwards, we
discuss two different dynamical features of networks, synchronizability and
the flow of random walkers, and show that both are optimized if the corresponding
Laplacian matrix has a large spectral gap. From this, by developing a
numerical optimization algorithm, we show that maximum synchronizability and fast
random-walk spreading are obtained for a particular type of extremely homogeneous
regular network, with long loops and poor modular structure, that we call
entangled networks. These turn out to be related to Ramanujan and cage graphs.
We also argue that these graphs are very good finite-size approximations to
Bethe lattices, and provide optimal or almost optimal solutions to many other
problems, for instance, searchability in the presence of congestion or the
performance of neural networks. We then study how these results are
modified for dynamical processes controlled by a normalized (weighted
and directed) dynamics; much more heterogeneous graphs are optimal in this
case. Finally, a critical discussion of the limitations and possible extensions
of this work is presented.
Comment: 17 pages, 11 figures. Small corrections and a new reference. Accepted
for pub. in JSTA
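The numerical optimization described above can be sketched as a degree-preserving rewiring search. The paper's actual procedure anneals the Laplacian eigenratio; the hedged variant below instead greedily maximizes the spectral gap λ_2, estimated with a plain power iteration, and the graph size and step counts are arbitrary illustration values:

```python
import math
import random

def lambda2(adj, iters=300):
    """Smallest nonzero Laplacian eigenvalue (the spectral gap for regular
    graphs), via power iteration on M = c*I - L restricted to the subspace
    orthogonal to the all-ones eigenvector."""
    n = len(adj)
    deg = [len(adj[i]) for i in range(n)]
    c = 2.0 * max(deg)                      # shift so that M is PSD
    v = [random.random() for _ in range(n)]
    for _ in range(iters):
        mean = sum(v) / n
        v = [x - mean for x in v]           # project out the ones vector
        w = [c * v[i] - (deg[i] * v[i] - sum(v[j] for j in adj[i]))
             for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    mean = sum(v) / n
    v = [x - mean for x in v]
    Lv = [deg[i] * v[i] - sum(v[j] for j in adj[i]) for i in range(n)]
    num = sum(v[i] * Lv[i] for i in range(n))
    den = sum(x * x for x in v) or 1.0
    return num / den                        # Rayleigh quotient

def greedy_rewire(adj, steps=100):
    """Degree-preserving double edge swaps, keeping only swaps that
    increase lambda_2 (a greedy stand-in for simulated annealing)."""
    best = lambda2(adj)
    for _ in range(steps):
        edges = [(i, j) for i in adj for j in adj[i] if i < j]
        (a, b), (c, d) = random.sample(edges, 2)
        # propose (a,b),(c,d) -> (a,d),(c,b); skip degenerate swaps
        if len({a, b, c, d}) < 4 or d in adj[a] or b in adj[c]:
            continue
        adj[a].remove(b); adj[b].remove(a)
        adj[c].remove(d); adj[d].remove(c)
        adj[a].add(d); adj[d].add(a)
        adj[c].add(b); adj[b].add(c)
        cand = lambda2(adj)
        if cand > best:
            best = cand                     # keep the improvement
        else:                               # undo the swap
            adj[a].remove(d); adj[d].remove(a)
            adj[c].remove(b); adj[b].remove(c)
            adj[a].add(b); adj[b].add(a)
            adj[c].add(d); adj[d].add(c)
    return best

random.seed(0)
ring = {i: {(i - 1) % 12, (i + 1) % 12} for i in range(12)}
start = lambda2(ring)
end = greedy_rewire(ring)
print(start, end)
```

A disconnecting swap yields λ_2 ≈ 0 and is always undone, so the search stays on connected regular graphs, which is the family in which the entangled optima live.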
A Hybrid Approach to Network Robustness Optimization using Edge Rewiring and Edge Addition
Networks are ubiquitous in the modern world. From computer and telecommunication networks to road networks and power grids, networks make up many crucial pieces of infrastructure that we interact with on a daily basis. These networks can be subjected to damage from many different sources, both random and targeted. If one of these networks receives too much damage, it may be rendered inoperable, which can have disastrous consequences. For this reason, it is in the best interests of those responsible for these networks to ensure that they are highly robust to failure. Since it is not usually feasible to rebuild existing networks from scratch to make them more resilient, an approach is needed that can modify an existing network to make it more robust to failure. Previous work has established several methods of accomplishing this task, including edge rewiring and edge addition. Both methods can be very useful for optimizing network robustness, but each comes with its own limitations. This thesis proposes a new hybrid approach to network robustness optimization that combines the two. Four edge-rewiring-based metaheuristic approaches were modified to incorporate one of three different edge-addition strategies. A comparative study was performed on these new hybrid optimizers, comparing them to each other and to the vanilla edge-rewiring-only approach on both synthetic and real-world networks. Experiments showed that this new hybrid approach to network robustness optimization leads to much more robust networks than an edge-rewiring-only approach.
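One common way to make the robustness objective concrete is the Schneider et al. R measure: the mean fraction of nodes left in the largest connected component while nodes are removed adaptively in decreasing-degree order. The sketch below does not reproduce the thesis's metaheuristics; the ring and its chord edges are toy inputs used only to show edge addition raising R:

```python
from collections import deque

def largest_component(adj, removed):
    """Size of the largest connected component after deleting `removed`."""
    seen = set(removed)
    best = 0
    for start in adj:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        best = max(best, size)
    return best

def robustness(adj):
    """Schneider-style R: mean fraction of nodes in the largest component
    while nodes are removed adaptively in decreasing-degree order."""
    n = len(adj)
    removed = set()
    total = 0.0
    for _ in range(n):
        # recompute degrees on the surviving subgraph; break ties by id
        target = max((v for v in adj if v not in removed),
                     key=lambda v: (len(adj[v] - removed), -v))
        removed.add(target)
        total += largest_component(adj, removed) / n
    return total / n

ring = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
# the hybrid idea in miniature: add a few chord edges to the same ring
chords = {i: set(ring[i]) | {(i + 3) % 6} for i in range(6)}
print(robustness(ring), robustness(chords))
```

A full optimizer would search over such modifications (rewirings plus a budget of added edges) with R as the fitness function.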
Cyber Network Resilience against Self-Propagating Malware Attacks
Self-propagating malware (SPM) has led to huge financial losses, major data
breaches, and widespread service disruptions in recent years. In this paper, we
explore the problem of developing cyber resilient systems capable of mitigating
the spread of SPM attacks. We begin with an in-depth study of a well-known
self-propagating malware, WannaCry, and present a compartmental model called
SIIDR that accurately captures the behavior observed in real-world attack
traces. Next, we investigate ten cyber defense techniques, including existing
edge and node hardening strategies, as well as newly developed methods based on
reconfiguring network communication (NodeSplit) and isolating communities. We
evaluate all defense strategies in detail using six real-world communication
graphs collected from a large retail network and compare their performance
across a wide range of attacks and network topologies. We show that several of
these defenses are able to efficiently reduce the spread of SPM attacks modeled
with SIIDR. For instance, given a strong attack that infects 97% of nodes when
no defense is employed, strategically securing a small number of nodes (0.08%)
reduces the infection footprint in one of the networks down to 1%.
Comment: 20 pages
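The abstract does not spell out SIIDR's compartments, so as a hedged stand-in the sketch below integrates the classic SIR model from the same compartmental family with a forward-Euler step; β, γ, and the initial condition are arbitrary illustration values, not parameters fitted to WannaCry traces:

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One forward-Euler step of the SIR equations
    ds/dt = -beta*s*i,  di/dt = beta*s*i - gamma*i,  dr/dt = gamma*i."""
    new_inf = beta * s * i * dt
    rec = gamma * i * dt
    return s - new_inf, i + new_inf - rec, r + rec

# illustrative parameters only (basic reproduction number beta/gamma = 5)
s, i, r = 0.99, 0.01, 0.0
beta, gamma, dt = 0.5, 0.1, 0.1
for _ in range(2000):
    s, i, r = sir_step(s, i, r, beta, gamma, dt)

print(s, i, r)  # the final epidemic size is 1 - s
```

Defenses such as node hardening or NodeSplit-style reconfiguration act on such a model by lowering the effective contact rate β on the communication graph until the outbreak cannot take off.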
Effect of edge removal on topological and functional robustness of complex networks
We study the robustness of complex networks subject to edge removal. Several
network models and removal strategies are simulated. Rather than the existence
of a giant component, we use total connectedness as the criterion of
breakdown. A simple traffic dynamics is introduced on the network topologies,
and total connectedness is interpreted not only in the topological sense but
also in the functional sense. We define topological robustness and
functional robustness, investigate their combined effect, and compare their
relative importance. The results of our study provide an
alternative view of overall robustness and highlight efficient ways to
improve the robustness of the network models.
Comment: 21 pages, 9 figures
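Total connectedness, the breakdown criterion used here, can be checked with a single BFS: the network survives only while one component spans every node, a stricter test than the existence of a giant component. A toy sketch (the 4-cycle and the removal order are illustrative choices):

```python
from collections import deque

def totally_connected(adj):
    """True iff every node can reach every other node, i.e. a single
    connected component spans the whole graph."""
    nodes = list(adj)
    if not nodes:
        return True
    seen = {nodes[0]}
    queue = deque([nodes[0]])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(nodes)

def remove_edge(adj, u, w):
    adj[u].discard(w)
    adj[w].discard(u)

# a 4-cycle survives one edge removal but not a second, well-chosen one
ring = {i: {(i - 1) % 4, (i + 1) % 4} for i in range(4)}
remove_edge(ring, 0, 1)
after_one = totally_connected(ring)   # still the path 1-2-3-0
remove_edge(ring, 2, 3)
after_two = totally_connected(ring)   # splits into {0,3} and {1,2}
print(after_one, after_two)
```

Under this criterion the first isolated node already counts as breakdown, even though a giant component may persist much longer.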
Achieving Small World Properties using Bio-Inspired Techniques in Wireless Networks
It is highly desirable, and challenging, for a wireless ad hoc network to have
self-organization properties in order to achieve network-wide characteristics.
Studies have shown that small-world properties, primarily low average path
length and high clustering coefficient, are desirable for networks in
general. However, due to the spatial nature of wireless networks, achieving
small-world properties remains highly challenging. Studies also show that
wireless ad hoc networks with small-world properties exhibit a degree distribution
that lies between geometric and power law. In this paper, we show that in a
wireless ad hoc network with non-uniform node density, using only local
information, we can significantly reduce the average path length while retaining the
clustering coefficient. To achieve our goal, our algorithm first identifies
logical regions using the Lateral Inhibition technique, then identifies the nodes
that beamform, and finally determines the beam properties using Flocking. We use Lateral
Inhibition and Flocking because they enable us to use local state information,
as opposed to other techniques. We support our work with simulation results and
analysis, which show that a reduction of up to 40% in average path length can be achieved for a
high-density network. We also show the effect of the hop count used to create
regions on average path length, clustering coefficient, and connectivity.
Comment: Accepted for publication: Special Issue on Security and Performance
of Networks and Clouds (The Computer Journal)
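The two small-world quantities targeted above, average path length and clustering coefficient, can be computed directly with BFS and neighbour-pair counting. A self-contained sketch (K4 is just a convenient test graph, not a wireless topology):

```python
from collections import deque
from itertools import combinations

def average_path_length(adj):
    """Mean shortest-path length over all node pairs (BFS from each
    node; assumes a connected graph)."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def clustering_coefficient(adj):
    """Mean local clustering: the fraction of each node's neighbour
    pairs that are themselves connected."""
    coeffs = []
    for v in adj:
        nbrs = adj[v]
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        coeffs.append(2.0 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

# complete graph K4: every pair adjacent, every triangle closed
k4 = {v: {u for u in range(4) if u != v} for v in range(4)}
print(average_path_length(k4), clustering_coefficient(k4))
```

A small-world rewiring (or, here, beamforming links between regions) aims to shrink the first number while leaving the second nearly unchanged.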
TOWARDS RELIABLE CIRCUMVENTION OF INTERNET CENSORSHIP
The Internet plays a crucial role in today's social and political movements by facilitating the free circulation of speech, information, and ideas; democracy and human rights throughout the world depend critically on preserving and bolstering the Internet's openness. Consequently, repressive regimes, totalitarian governments, and corrupt corporations regulate, monitor, and restrict access to the Internet, a practice broadly known as Internet \emph{censorship}. Most countries are improving their Internet infrastructure, and as a result they can implement more advanced censoring techniques. Advances in the application of machine learning to network traffic analysis have also enabled more sophisticated Internet censorship. In this thesis, we take a close look at the main pillars of Internet censorship and introduce new defenses and attacks in the Internet censorship literature.
Internet censorship techniques inspect users' communications and can decide to interrupt a connection to prevent a user from communicating with a specific entity. Traffic analysis is one of the main techniques used to infer information from Internet communications. One of the major challenges for traffic analysis mechanisms is scaling to today's exploding volumes of network traffic: they impose high storage, communication, and computation overheads. We address this scalability issue by introducing a new direction for traffic analysis, which we call \emph{compressive traffic analysis}. Moreover, we show that, unfortunately, traffic analysis attacks can be conducted on anonymity systems with drastically higher accuracy than before by leveraging emerging learning mechanisms. In particular, we design a system, called \deepcorr, that outperforms the state of the art by significant margins in correlating network connections. \deepcorr leverages an advanced deep learning architecture to \emph{learn} a flow correlation function tailored to complex networks. To analyze the weaknesses of such approaches, we also show that an adversary can defeat deep-neural-network-based traffic analysis techniques by applying statistically undetectable \emph{adversarial perturbations} to the patterns of live network traffic.
We also design techniques to circumvent Internet censorship. Decoy routing is an emerging approach to censorship circumvention in which circumvention is implemented with help from a number of volunteer Internet autonomous systems, called decoy ASes. We propose a new architecture for decoy routing that, by design, is significantly more resistant to rerouting attacks than \emph{all} previous designs. Unlike previous designs, our new architecture operates decoy routers only on the downstream traffic of the censored users; we therefore call it \emph{downstream-only} decoy routing. As we demonstrate through Internet-scale BGP simulations, downstream-only decoy routing offers significantly stronger resistance to rerouting attacks, intuitively because a (censoring) ISP has much less control over the downstream BGP routes of its traffic. We then propose game-theoretic approaches to model the arms race between censors and censorship circumvention tools. This allows us to analyze the effect of different parameters and censoring behaviors on the performance of censorship circumvention tools. We apply our methods to two fundamental problems in Internet censorship.
Finally, to bring our ideas to practice, we design a new censorship circumvention tool called \name. \name aims to increase the collateral damage of censorship by employing a ``mass'' of normal Internet users, from both censored and uncensored areas, to serve as circumvention proxies.
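As a point of reference for what \deepcorr learns, classical flow-correlation attacks simply compare timing features of two flows with a fixed statistic. The toy below scores candidate flow pairs by the Pearson correlation of their inter-packet delays; all delay values are synthetic, and this baseline is a hedged illustration of the technique family, not the thesis's system:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# synthetic inter-packet delays: the "exit" flow is the "entry" flow
# plus small deterministic jitter; the other flow is unrelated to it
entry = [0.10, 0.32, 0.05, 0.44, 0.21, 0.08, 0.37, 0.15]
exit_flow = [d + 0.01 * ((i % 3) - 1) for i, d in enumerate(entry)]
unrelated = [0.39, 0.41, 0.29, 0.25, 0.06, 0.07, 0.12, 0.33]

match = pearson(entry, exit_flow)        # the true pair scores near 1
mismatch = pearson(entry, unrelated)     # the wrong pair scores near 0
print(match, mismatch)
```

A learned correlation function plays the same role as `pearson` here but is trained to remain discriminative under the heavy noise of real network paths, which is where a fixed statistic breaks down.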
Provider and peer selection in the evolving internet ecosystem
The Internet consists of thousands of autonomous networks connected together to provide end-to-end reachability. Networks of different sizes, with different functions and business objectives, interact and co-exist in the evolving "Internet ecosystem". The Internet ecosystem is highly dynamic, experiencing growth (the birth of new networks), rewiring (changes in the connectivity of existing networks), and deaths (of existing networks). The dynamics of the Internet ecosystem are determined both by external "environmental" factors (such as the state of the global economy or the popularity of new Internet applications) and by the complex incentives and objectives of each network. These dynamics have major implications for what the future Internet will look like. How does the Internet evolve? What is the Internet heading towards, in terms of topological, performance, and economic organization? How do given optimization strategies affect the profitability of different networks? How do these strategies affect the Internet in terms of topology, economics, and performance?
In this thesis, we take steps towards answering the above questions using a combination of measurement and modeling approaches. We first study the evolution of the Autonomous System (AS) topology over the last decade. In particular, we classify ASes and inter-AS links according to their business function, and study separately their evolution over the last 10 years. Next, we focus on enterprise customers and content providers at the edge of the Internet, and propose algorithms for a stub network to choose its upstream providers to maximize its utility (in terms of monetary cost, reliability, or performance). Third, we develop a model of interdomain network formation, incorporating the effects of economics, geography, and the provider/peer selection strategies of different types of networks. We use this model to examine the "outcome" of these strategies, in terms of the topology, economics, and performance of the resulting internetwork. We also investigate the effect of external factors, such as the nature of the interdomain traffic matrix, customer preferences in provider selection, and pricing/cost structures. Finally, we focus on a recent trend due to the increasing amount of traffic flowing from content providers (who generate content) to access providers (who serve end users). This has led to a tussle between content providers and access providers, who have threatened to prioritize certain types of traffic, or to charge content providers directly -- strategies that are viewed as violations of "network neutrality". In our work, we evaluate various pricing and connection strategies that access providers can use to remain profitable without violating network neutrality.
Ph.D.
Committee Chair: Dovrolis, Constantine; Committee Member: Ammar, Mostafa; Committee Member: Feamster, Nick; Committee Member: Willinger, Walter; Committee Member: Zegura, Elle
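The provider-selection step can be illustrated as a weighted utility ranking over candidate upstreams. The sketch below is purely hypothetical: the offer fields, the weights, and the linear utility are illustrative assumptions, not the thesis's algorithms:

```python
def choose_providers(offers, weights, k=2):
    """Rank candidate upstream providers by a weighted utility of
    (negative) monthly price, reliability, and performance, then pick
    the top k. All fields and weights are illustrative assumptions."""
    def utility(offer):
        return (-weights["price"] * offer["price"]
                + weights["reliability"] * offer["reliability"]
                + weights["performance"] * offer["performance"])
    return sorted(offers, key=utility, reverse=True)[:k]

# hypothetical offers from three candidate upstream providers
offers = [
    {"name": "ProviderA", "price": 10.0, "reliability": 0.99,  "performance": 0.7},
    {"name": "ProviderB", "price": 4.0,  "reliability": 0.95,  "performance": 0.6},
    {"name": "ProviderC", "price": 12.0, "reliability": 0.999, "performance": 0.9},
]
weights = {"price": 0.1, "reliability": 5.0, "performance": 2.0}
picked = choose_providers(offers, weights, k=2)
print([p["name"] for p in picked])
```

Shifting the weights reweights the trade-off a stub network faces: a cost-sensitive enterprise raises the price weight, while a content provider chasing end-user performance raises the performance weight.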