Dynamic Virtual Network Restoration with Optimal Standby Virtual Router Selection
Title from PDF of title page, viewed on September 4, 2015. Dissertation advisor: Deep Medhi. Vita. Includes bibliographic references (pages 141-157). Thesis (Ph.D.)--School of Computing and Engineering and Department of Mathematics and Statistics, University of Missouri--Kansas City, 2015.
Network virtualization technologies allow service providers to request partitioned,
QoS guaranteed and fault-tolerant virtual networks provisioned by the substrate network
provider (i.e., the physical infrastructure provider). A virtualized networking environment
(VNE) offers features such as partitioning and flexibility, but fault tolerance requires
additional effort to provide survivability against failures in either the virtual networks or the
substrate network.
Two common survivability paradigms are protection (proactive) and restoration
(reactive). In the protection scheme, the substrate network provider (SNP) allocates redundant
resources (e.g., nodes, paths, bandwidth) to protect against potential failures
in the VNE. In the restoration scheme, the SNP dynamically allocates resources to restore
the networks, usually after a failure has been detected.
In this dissertation, we design a restoration scheme that can be dynamically implemented
in a centralized manner by an SNP to achieve survivability against node failures
in the VNE. The proposed restoration scheme is designed to be integrated with a protection
scheme, where the SNP allocates spare virtual routers (VRs) as standbys for the
virtual networks (VN) and they are ready to serve in the restoration scheme after a node
failure has been identified. These standby virtual routers (S-VRs) are reserved as a shared
backup for any single node failure, and during the restoration procedure, one of the S-VRs
will be selected to replace the failed VR. In this work, we present an optimal S-VR selection
approach to simultaneously restore multiple VNs affected by failed VRs, where
these VRs may be affected by failures within themselves or at their substrate host (i.e.,
power outage, hardware failures, maintenance, etc.). Furthermore, the restoration scheme
is embedded into a dynamic reconfiguration scheme (DRS), so that the affected VNs can
be dynamically restored by a centralized virtual network manager (VNM).
We first introduce a dynamic reconfiguration scheme (DRS) against node failures
in a VNE, and then present an experimental study by implementing this DRS over a
realistic VNE using the GpENI testbed. For this experimental study, we ran the DRS to
restore one VN with a single-VR failure, and the results showed that with a proper S-VR
selection, the performance of the affected VN could be well restored. Next, we propose
a Mixed-Integer Linear Programming (MILP) model with dual goals: optimally selecting
S-VRs to restore all VNs affected by VR failures while balancing load. We also present a
heuristic algorithm based on the model. By considering a number of factors, we present
numerical studies to show how the optimal selection is affected. The results show that
the proposed heuristic’s performance is close to that of the optimization model when there are
sufficient standby virtual routers for each virtual network and the substrate nodes have
the capability to support multiple standby virtual routers in service simultaneously.
Finally, we present the design of a software-defined resilient VNE with the optimal S-VR
selection model, and discuss a prototype implementation on the GENI testbed.
Introduction -- Literature survey -- Dynamic reconfiguration scheme in a VNE -- An experimental study on GpENI-VNI -- Optimal standby virtual router selection model -- Prototype design and implementation on GENI -- Conclusion and future work -- Appendix A. Resource Specification (RSpec) in GENI -- Appendix B. Optimal S-VR Selection Model in AMP
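The joint selection-and-load-balancing idea can be sketched with a small greedy heuristic (an illustration only, not the dissertation's MILP model or its heuristic; the data model with per-VR candidate lists and per-host capacities is an assumption for the example):

```python
def select_svrs(failed_vrs, standbys, host_capacity):
    """Greedy standby-VR selection sketch: for each failed VR, pick a
    compatible standby whose substrate host has the most spare capacity,
    mimicking the load-balancing goal. `standbys` maps each failed VR to
    a list of (svr, host) candidates; hypothetical data model."""
    load = {h: 0 for h in host_capacity}
    assignment = {}
    for vr in failed_vrs:
        candidates = [(svr, h) for svr, h in standbys[vr]
                      if load[h] < host_capacity[h]]
        if not candidates:
            return None  # no feasible restoration for this VR
        # prefer the host with the most remaining capacity
        svr, host = min(candidates, key=lambda c: load[c[1]] - host_capacity[c[1]])
        assignment[vr] = svr
        load[host] += 1
    return assignment
```

Unlike the MILP, a greedy pass gives no optimality guarantee when standbys are scarce, which is consistent with the abstract's observation that the heuristic matches the model only when sufficient S-VRs and host capacity are available.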
Performance Improvement of Multicommodity Flow of Tactile and Best Effort Packet in Internet Network
Fault diagnosis for IP-based network with real-time conditions
BACKGROUND:
Fault diagnosis techniques have been based on many paradigms, which derive from diverse areas
and have different purposes: obtaining a representation model of the network for fault localization,
selecting optimal probe sets for monitoring network devices, reducing fault detection time, and
detecting faulty components in the network. Although there are several solutions for diagnosing
network faults, there are still challenges to be faced: a fault diagnosis solution needs to be always
available and capable of processing data in a timely manner, because stale results degrade the quality
and speed of informed decision-making. Also, there is no non-invasive technique for continuously diagnosing
network symptoms without leaving the system vulnerable to failures, nor a technique resilient
to the network's dynamic changes, which can cause new failures with different symptoms.
AIMS:
This thesis aims to propose a model for the continuous and timely diagnosis of IP-based network
faults, independent of the network structure, and based on data analytics techniques.
METHOD(S):
This research's point of departure was the hypothesis of a fault propagation phenomenon that
allows the observation of failure symptoms at a higher network level than the fault origin. Thus, for
the model's construction, monitoring data was collected from an extensive campus network in
which impact link failures were induced at different instants of time and with different durations.
These data correspond to parameters widely used in the actual management of a network. The
collected data allowed us to understand the faults' behavior and how they are manifested at a
peripheral level.
Based on this understanding and a data analytics process, the first three modules of our model,
named PALADIN, were proposed (Identify, Collection and Structuring); these define peripheral data
collection and the data pre-processing needed to obtain a description of the
network's state at a given moment. These modules give the model the ability to structure the data
considering the delays of the multiple responses that the network delivers to a single monitoring
probe and the multiple network interfaces that a peripheral device may have.
Thus, a structured data stream is obtained, and it is ready to be analyzed. For this analysis, it was
necessary to implement an incremental learning framework that respects networks' dynamic
nature. It comprises three elements, an incremental learning algorithm, a data rebalancing strategy,
and a concept drift detector. This framework is the fourth module of the PALADIN model named
Diagnosis.
In order to evaluate the PALADIN model, the Diagnosis module was implemented with 25 different
incremental algorithms, with ADWIN as the concept-drift detector and SMOTE (adapted to the streaming scenario) as the rebalancing strategy. In addition, a dataset was built through the first
modules of the PALADIN model (SOFI dataset), which means that these data are the incoming data
stream of the Diagnosis module used to evaluate its performance.
The PALADIN Diagnosis module performs an online classification of network failures, so it is a
learning model that must be evaluated in a stream context. Prequential evaluation is the most used
method to perform this task, so we adopt this process to evaluate the model's performance over
time through several stream evaluation metrics.
RESULTS:
This research first evidences the phenomenon of impact fault propagation, making it possible to
detect fault symptoms at a monitored network's peripheral level, which translates into non-invasive
monitoring of the network. Second, the PALADIN model is the major contribution in the fault
detection context because it covers two aspects: an online learning model that continuously processes
the network symptoms and detects internal failures, and the concept-drift detection and
data-stream rebalancing components, which make resilience to dynamic network changes possible.
Third, it is well known that the amount of available real-world datasets for imbalanced stream
classification context is still too small. That number is further reduced for the networking context.
The SOFI dataset obtained with the first modules of the PALADIN model contributes to that number
and encourages works related to unbalanced data streams and those related to network fault
diagnosis.
CONCLUSIONS:
The proposed model contains the necessary elements for the continuous and timely diagnosis of IP-based
network faults; it introduces the idea of periodic monitoring of peripheral network
elements and uses data analytics techniques to process the results. Based on the analysis, processing, and
classification of peripherally collected data, it can be concluded that PALADIN achieves this
objective. The results indicate that peripheral monitoring allows diagnosing faults in the
internal network; moreover, the diagnosis process needs an incremental learning process, concept-drift
detection elements, and a rebalancing strategy.
The results of the experiments showed that PALADIN makes it possible to learn from the network
manifestations and diagnose internal network failures. The latter was verified with 25 different
incremental algorithms, with ADWIN as the concept-drift detector and SMOTE (adapted to the streaming
scenario) as the rebalancing strategy.
This research clearly illustrates that it is unnecessary to monitor all the internal network elements
to detect a network's failures; instead, it is enough to choose the peripheral elements to be
monitored. Furthermore, with proper processing of the collected status and traffic descriptors, it is
possible to learn from the arriving data using incremental learning in cooperation with data
rebalancing and concept drift approaches. This proposal continuously diagnoses the network
symptoms without leaving the system vulnerable to failures while being resilient to the network's
dynamic changes.
Doctoral Program in Computer Science and Technology, Universidad Carlos III de Madrid. President: José Manuel Molina López. Secretary: Juan Carlos Dueñas López. Member: Juan Manuel Corchado Rodríguez.
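The prequential (test-then-train) evaluation the thesis adopts can be sketched in a few lines; the `MajorityClass` learner below is a deliberately trivial stand-in for the 25 incremental algorithms actually evaluated:

```python
def prequential_accuracy(stream, learner):
    """Prequential evaluation: each arriving example is first used to test
    the current model, then to update it, so accuracy reflects performance
    over time on the stream. Sketch only, not the PALADIN implementation."""
    correct = total = 0
    for x, y in stream:
        if learner.predict(x) == y:   # test first...
            correct += 1
        learner.learn(x, y)           # ...then train
        total += 1
    return correct / total if total else 0.0

class MajorityClass:
    """Trivial incremental learner, used only to exercise the loop."""
    def __init__(self):
        self.counts = {}
    def predict(self, x):
        return max(self.counts, key=self.counts.get) if self.counts else None
    def learn(self, x, y):
        self.counts[y] = self.counts.get(y, 0) + 1
```

In a full setup, a drift detector such as ADWIN would reset or adapt the learner when the error rate shifts, and a rebalancing step would oversample the rare fault class before `learn` is called.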
Towards Robust Traffic Engineering in IP Networks
To deliver a reliable communication service, it is essential for
the network operator to manage how traffic flows in the network.
The paths taken by the traffic are controlled by the routing function.
Traditional ways of tuning routing in IP networks are designed
to be simple to manage and are not designed to adapt to the
traffic situation in the network. This can lead to congestion in
parts of the network while other parts are
far from fully utilized. In this thesis we explore issues related
to optimizing the routing function to balance load in the network.
We investigate methods for efficient derivation of the
traffic situation using link count measurements. The advantage
of using link counts is that they are easily obtained and yield
a very limited amount of data. We evaluate this approach and show that estimation
based on link counts gives the operator a fast and accurate description
of the traffic demands. For the evaluation we have access to a unique data
set of complete traffic demands from an operational
IP backbone.
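As a rough illustration of inferring traffic demands from easily obtained per-node totals, the classic gravity model is sketched below (a textbook baseline for traffic matrix estimation, not necessarily the method evaluated in the thesis):

```python
def gravity_estimate(out_counts, in_counts):
    """Gravity-model traffic matrix estimate: the demand from i to j is
    taken proportional to (traffic leaving i) * (traffic entering j),
    normalized by the total volume. Inputs are per-node byte counts,
    which can be derived from link count measurements at network edges."""
    total = sum(out_counts.values())
    return {(i, j): out_counts[i] * in_counts[j] / total
            for i in out_counts for j in in_counts if i != j}
```

More accurate estimators refine such a prior with the routing matrix and interior link counts, e.g. via least squares, which is the kind of link-count-based estimation the thesis evaluates against complete measured demands.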
Furthermore, we evaluate performance of search heuristics to
set weights in link-state routing protocols. For the evaluation
we have access to complete traffic data from a Tier-1 IP network.
Our findings confirm previous studies that used partial or
synthetic traffic data. We find that optimizing with estimated
traffic demands has little impact on the performance of
the load balancing.
Finally, we devise an algorithm that finds a routing setting that is
robust to shifts in traffic patterns due to changes in the
interdomain routing. A set of worst case scenarios caused by the interdomain routing changes
is identified and used to solve a robust routing problem. The evaluation
indicates that performance of the robust routing is close to optimal for
a wide variety of traffic scenarios.
The main contribution of this thesis is that we demonstrate that it is
possible to estimate the traffic matrix with good accuracy and to develop
methods that optimize the routing settings to give strong and robust network
performance. Only minor changes might be necessary in order to implement our
algorithms in existing networks.
Smart transportation systems (STSs) in critical conditions
In the context of smart transportation systems (STSs) in smart cities, the use of applications that can help in critical conditions is a key point. Examples of critical conditions are natural-disaster events such as earthquakes, hurricanes, and floods, and man-made ones such as terrorist attacks and toxic waste spills. Disaster events are often accompanied by the destruction of the local telecommunication infrastructure, if any, and this creates real problems for rescue operations. The quick deployment of a telecommunication infrastructure is essential for emergency and safety operations, as are rapid network reconfigurability, the availability of open source software, efficient interoperability, and the scalability of the technological solutions. The topic is attracting intense interest, and many research groups are focusing on these issues. Consequently, the deployment of a smart network is fundamental. It is needed to support both applications that can tolerate delays and applications requiring dedicated resources for real-time services, such as traffic alert messages and public safety messages. The guarantee of quality of service (QoS) for such applications is a key requirement. In this chapter we analyze the principal networking issues and propose a solution mainly based on software-defined networking (SDN). We evaluate the benefit of this paradigm in the mentioned context, focusing on the incremental deployment of the solution in existing metropolitan networks, and we design a "QoS App" able to manage quality of service on top of the SDN controller.
Monitoring Changes in the Stability of Networks Using Eigenvector Centrality
Monitoring networks for anomalies is a typical duty of network operators. The conventional
monitoring tools available today largely ignore the topological characteristics
of the network as a whole. This thesis takes a different approach from the conventional
monitoring tools, by employing the principle of Eigenvector Centrality. Traditionally,
this principle is used to analyse vulnerability and social aspects of networks.
The proposed model reveals that topological characteristics of a network can be used
to improve the conventional unreliability predictors, and to give a better indicator
of its potential weaknesses. An effective expected adjacency matrix, k, is introduced in
this work to be used in centrality calculations; it reflects the factors which affect
the reliability of a network, e.g., link downtimes, link metrics, packet loss,
etc. Using these calculations, all network backbone routers are assigned values which
correspond to the importance of those routers in comparison to the rest of the network
nodes. Furthermore, to observe how vulnerable each node could be, nodes are
ranked according to the importance values, where the nodes with high ranking values
are more vulnerable. This model is able to analyse temporal stability of the network,
observing and comparing the rate of change in node ranking values and connectivity
caused by the network link failures. The results show that the proposed model is
dynamic, and changes according to the dynamics of the topology of the network, i.e.
upgrading, link failures, etc.
Master's degree in Network and System Administration
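The centrality principle the model builds on can be sketched as power iteration on an adjacency matrix (the construction of the effective expected matrix k is not reproduced here; plain 0/1 adjacency on a toy graph is assumed for illustration):

```python
def eigenvector_centrality(adj, iters=100):
    """Eigenvector centrality via power iteration: repeatedly multiply a
    score vector by the adjacency matrix (given as {node: {neighbour:
    weight}}) and renormalize, converging to the leading eigenvector.
    Nodes with high-scoring neighbours get high scores themselves."""
    nodes = list(adj)
    x = {n: 1.0 for n in nodes}
    for _ in range(iters):
        y = {n: sum(adj[n].get(m, 0.0) * x[m] for m in nodes) for n in nodes}
        norm = max(abs(v) for v in y.values()) or 1.0
        x = {n: v / norm for n, v in y.items()}
    return x
```

Ranking the returned scores reproduces the thesis's vulnerability ordering: a router sitting at the centre of a well-connected region ranks above a pendant node, and recomputing after a link failure shows how the ranking shifts.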
IP and ATM integration: A New paradigm in multi-service internetworking
ATM is a widespread technology adopted by many to support advanced data communication, in particular efficient Internet service provision. The expected challenges of multimedia communication, together with the increasingly massive use of IP-based applications, urgently require a redesign of networking solutions in terms of both new functionalities and enhanced performance. However, the networking context is affected by so many changes, and to some extent chaotic growth, that any approach based on a structured and complex top-down architecture is unlikely to be applicable. Instead, an approach based on finding the best match between realistic service requirements and the pragmatic, intelligent use of technical opportunities made available by the product market seems more appropriate. By following this approach, innovations and improvements can be introduced at different times, not necessarily complying with each other according to a coherent overall design. With the aim of pursuing feasible innovations in the different networking aspects, we look at both IP and ATM internetworking in order to investigate a few of the most crucial topics and issues related to the IP and ATM integration perspective. This research also addresses various means of internetworking the Internet Protocol (IP) and Asynchronous Transfer Mode (ATM), with the objective of identifying the best possible means of delivering Quality of Service (QoS) requirements for multi-service applications, exploiting the meritorious features that IP and ATM have to offer. Although IP and ATM have often been viewed as competitors, their complementary strengths and limitations form a natural alliance that combines the best aspects of both technologies. For instance, one limitation of ATM networks has been the relatively large gap between the speed of the network paths and the control operations needed to configure those data paths to meet changing user needs.
IP's greatest strength, on the other hand, is its inherent flexibility and its capacity to adapt rapidly to changing conditions. These complementary strengths and limitations make it natural to combine IP with ATM to obtain the best that each has to offer. Over time, many models and architectures have evolved for IP/ATM internetworking, and they have shaped the fundamental thinking on internetworking IP and ATM. These technologies, architectures, models, and implementations are reviewed in greater detail in addressing possible issues in integrating these architectures in a multi-service, enterprise network. The objective is to make recommendations as to the best means of interworking the two, exploiting the salient features of each to provide a faster, more reliable, scalable, robust, QoS-aware network in the most economical manner. How IP will be carried over ATM when a commercial worldwide ATM network is deployed is not addressed, as the details of such a network remain in too great a state of flux to specify anything concrete. Our research findings culminated in a strong recommendation that the best model to adopt, in light of the impending integrated service requirements of future multi-service environments, is an ATM core with IP at the edges, realizing the best of both technologies and delivering QoS guarantees in a seamless manner to any node in the enterprise.
Network overload avoidance by traffic engineering and content caching
The Internet traffic volume continues to grow at a great rate, now driven by video and TV distribution. For network operators it is important to avoid congestion in the network, and to meet service level agreements with their customers. This thesis presents work on two methods operators can use to reduce links loads in their networks: traffic engineering and content caching.
This thesis studies access patterns for TV and video and the potential for caching. The investigation is done both using simulation and by analysis of logs from a large TV-on-Demand system over four months.
The results show that there is a small set of programs that account for a large fraction of the requests and that a comparatively small local cache can be used to significantly reduce the peak link loads during prime time. The investigation also demonstrates how the popularity of programs changes over time and shows that the access pattern in a TV-on-Demand system very much depends on the content type.
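The observation that a small cache absorbs a large share of requests can be illustrated with a Zipf popularity model (the exponent and catalogue size below are illustrative assumptions, not values fitted to the thesis's TV-on-Demand logs):

```python
def top_k_hit_fraction(num_items, k, s=1.0):
    """Fraction of requests served from a cache holding the k most popular
    items when request popularity follows a Zipf(s) law: weight of rank r
    is 1/r**s. Shows why a comparatively small local cache can absorb a
    large share of the peak load."""
    weights = [1.0 / (rank ** s) for rank in range(1, num_items + 1)]
    return sum(weights[:k]) / sum(weights)
```

With a 1000-title catalogue and s = 1, caching just the top 10% of titles already serves roughly two-thirds of requests, matching the skewed access pattern described above; in practice the popularity ranking also drifts over time, so the cache contents must be refreshed.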
For traffic engineering the objective is to avoid congestion in the network and to make better use of available resources by adapting the routing to the current traffic situation. The main challenge for traffic engineering in IP networks is to cope with the dynamics of Internet traffic demands.
This thesis proposes L-balanced routings that route the traffic on the shortest paths possible but make sure that no link is utilised to more than a given level L. L-balanced routing gives efficient routing of traffic and controlled spare capacity to handle unpredictable changes in traffic. We present an L-balanced routing algorithm and a heuristic search method for finding L-balanced weight settings for the legacy routing protocols OSPF and IS-IS. We show that the search and the resulting weight settings work well in real network scenarios.
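The L-balance criterion can be sketched as follows: route each demand on its weight-based shortest path and check the maximum link utilization against L (single shortest path per demand for brevity; the OSPF/IS-IS setting studied in the thesis would split traffic over equal-cost paths):

```python
import heapq

def max_utilization(graph, weights, capacity, demands):
    """Route each demand on its (weight-based) shortest path and return the
    highest resulting link utilization. A weight setting is L-balanced when
    this value stays at or below L. Simplified illustration, no ECMP."""
    def shortest_path(src, dst):
        dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == dst:
                break
            if d > dist.get(u, float("inf")):
                continue  # stale queue entry
            for v in graph[u]:
                nd = d + weights[(u, v)]
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
        path, node = [], dst
        while node != src:            # walk predecessors back to the source
            path.append((prev[node], node))
            node = prev[node]
        return path

    load = {e: 0.0 for e in weights}
    for (s, t), volume in demands.items():
        for edge in shortest_path(s, t):
            load[edge] += volume
    return max(load[e] / capacity[e] for e in load)
```

A weight-search heuristic in this spirit would perturb `weights`, re-evaluate `max_utilization`, and keep changes that push the maximum below the target level L while keeping paths short.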
Internet of Things From Hype to Reality
The Internet of Things (IoT) has gained significant mindshare and attention in academia and industry, especially over the past few years. The reasons behind this interest are the potential capabilities that IoT promises to offer. On the personal level, it paints a picture of a future world where all the things in our ambient environment are connected to the Internet and seamlessly communicate with each other to operate intelligently. The ultimate goal is to enable objects around us to efficiently sense our surroundings, inexpensively communicate, and ultimately create a better environment for us: one where everyday objects act based on what we need and like, without explicit instructions.