Hybrid SDN Evolution: A Comprehensive Survey of the State-of-the-Art
Software-Defined Networking (SDN) is an evolutionary networking paradigm that has been adopted by large network and cloud providers, including the major technology companies. However, embracing a new and still-maturing paradigm as an alternative to the well-established legacy networking paradigm requires considerable time, financial resources, and technical expertise. Consequently, many enterprises cannot afford it. A compromise solution is a hybrid networking environment, known as Hybrid SDN (hSDN), in which SDN functionality is leveraged while existing traditional network infrastructure is retained. Recently, hSDN has come to be seen as a viable networking solution for a diverse range of businesses and organizations, and the body of literature on hSDN research has grown remarkably. Accordingly, we present this paper as a comprehensive state-of-the-art survey that examines hSDN from many different perspectives.
Auto-bandwidth control in dynamically reconfigured hybrid-SDN MPLS networks
This work is motivated by the steady evolution of bandwidth-demanding technology, which now, and increasingly in the future, requires operators to use expensive infrastructure capacity smartly to maximise its use in a very competitive environment. In this thesis, a traffic engineering control loop is proposed that dynamically adjusts the bandwidth and routes of Multi-Protocol Label Switching (MPLS) tunnels in response to changes in traffic demand. Available bandwidth is shifted to where the demand is, and where the demand has dropped, unused allocated bandwidth is returned to the network. An MPLS network enhanced with Software-Defined Networking (SDN) features is implemented. The technology, known as hybrid SDN, combines the programmability features of SDN with the robust MPLS label switched path features, along with the traffic engineering enhancements introduced by routing protocols such as Border Gateway Protocol-Traffic Engineering (BGP-TE) and Open Shortest Path First-Traffic Engineering (OSPF-TE). The implemented mixed-integer linear programming formulation, using the minimisation of maximum link utilisation and minimum link cost objective functions, combined with the programmability of the hybrid SDN network, accommodates source-to-destination demand fluctuations. A key driver of this research is the programmability of the MPLS network, enhanced by the contributions of SDN controller technology. The centralised view of the network provides the network state information needed to drive the mathematical modelling of the network. The path computation element further enables control of the label switched paths' bandwidths, which are adjusted based on current demand and the optimisation method used. The hose model is used to specify a range of traffic conditions.
The most important benefit of the hose model is the flexibility it allows in how the traffic matrix can change, provided the aggregate traffic demand does not exceed the hose maximum bandwidth specification. To this end, reserved hose bandwidth can be released to the core network to service demands from other sites.
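The minimisation-of-maximum-link-utilisation objective with a hose-model demand cap, as described above, can be sketched in generic form (the symbols below are illustrative notation, not the thesis's own):

```latex
% Illustrative min-max link utilisation LP with hose constraints
% (x_p: bandwidth on tunnel p, d_{s,t}: demand, c_e: link capacity,
%  B_s: hose aggregate egress bound at site s)
\begin{aligned}
\min_{\,U,\;x} \quad & U \\
\text{s.t.} \quad
& \sum_{p \in P(s,t)} x_p = d_{s,t} && \forall (s,t) && \text{(each demand is fully routed)} \\
& \sum_{p \,\ni\, e} x_p \le U \, c_e && \forall e \in E && \text{(max-utilisation bound)} \\
& \sum_{t} d_{s,t} \le B_s && \forall s && \text{(hose model: aggregate cap per site)} \\
& x_p \ge 0, \quad 0 \le U \le 1
\end{aligned}
```

Minimising U spreads load away from hot links, while the hose constraint lets the individual demands d_{s,t} vary freely under the per-site aggregate bound.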
Data science for health-care: Patient condition recognition
Magister Scientiae - MSc

The emergence of the Internet of Things (IoT) and Artificial Intelligence (AI) has elicited
increased interest in many areas of our daily lives. These include health, agriculture, aviation,
manufacturing, city management and many others. In the health sector, portable vital
sign monitoring devices are being developed using the IoT technology to collect patients’ vital
signs in real-time. The vital sign data acquired by wearable devices is quantitative and machine
learning techniques can be applied to find hidden patterns in the dataset and help the medical
practitioner with decision making. There are about 30000 diseases known to man and no human
being can possibly remember all of them, their relations to other diseases, their symptoms
and whether the symptoms exhibited by the patients are early warnings of a fatal disease. In
light of this, Medical Decision Support Systems (MDSS) can provide assistance in making
these crucial assessments. In most decision support systems, factors affect each other; they can
be contradictory, competitive, and complementary. All these factors contribute to the overall
decision and have different degrees of influence [85]. However, while there is a growing need for automated
processes to improve the health-care sector, most of MDSS and the associated devices
are still under clinical trials. This thesis revisits cyber physical health systems (CPHS) with
the objective of designing and implementing a data analytics platform that provides patient
condition monitoring services in terms of patient prioritisation and disease identification [1].
Different machine learning algorithms are investigated by the platform as potential candidates
for achieving patient prioritisation. These include multiple linear regression, multiple logistic
regression, classification and regression decision trees, single hidden layer neural networks
and deep neural networks. Graph theory concepts are used to design and implement disease
identification. The data analytics platform analyses data from biomedical sensors and other
descriptive data provided by the patients (this can be recent or historical data) stored in a
cloud, which can belong to a private Local Health Information Organisation (LHIO) or to a Regional
Health Information Organisation (RHIO). Users of the data analytics platform, consisting
of medical practitioners and patients, are assumed to interact with the platform through
end-user applications at city pharmacies and rural e-health kiosks.
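The graph-based disease identification described above can be illustrated as a symptom-disease bipartite graph in which candidate diseases are ranked by their overlap with the reported symptoms. This is a minimal sketch with hypothetical data, not the thesis's actual model:

```python
# Minimal sketch: rank candidate diseases by overlap between reported
# symptoms and each disease's known symptom set (hypothetical data).
DISEASE_SYMPTOMS = {
    "influenza": {"fever", "cough", "fatigue", "headache"},
    "malaria": {"fever", "chills", "sweating", "headache"},
    "common_cold": {"cough", "sneezing", "sore_throat"},
}

def rank_diseases(reported, graph=DISEASE_SYMPTOMS):
    """Score each disease by the Jaccard overlap with reported symptoms."""
    scores = {}
    for disease, symptoms in graph.items():
        overlap = len(reported & symptoms)
        union = len(reported | symptoms)
        scores[disease] = overlap / union if union else 0.0
    # Highest-scoring candidates first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranking = rank_diseases({"fever", "headache", "chills"})
```

A real system would weight edges by symptom specificity and prevalence rather than treating all symptom-disease links equally.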
Secure Multi-Path Selection with Optimal Controller Placement Using Hybrid Software-Defined Networks with Optimization Algorithm
The Internet's growth in popularity demands computer networks that are both agile and resilient, and traditional networking systems have recently been unable to satisfy these needs. Software Defined Networking (SDN) represents a paradigm shift in the networking industry, and many organizations have adopted SDN for its transmission efficiency. Striking the right balance between SDN and legacy switching capabilities enables successful network scenarios. This work therefore targets a hybrid network scenario in which the external perimeter transport device in the service provider network is replaced with an SDN device. As networks migrate away from legacy equipment, hybrid SDN includes both legacy and SDN switches. Existing SDN models have limitations such as overfitting, trapping in local optima, and poor path selection efficiency. This paper proposes a Deep Kronecker Neural Network (DKNN), combined with a moderate optimization method, to improve multipath selection efficiency in SDN. Dynamic resource scheduling is used for the reward function, and learning performance is improved by a deep reinforcement learning (DRL) technique. The centralised SDN controller acts as the network's brain in the control plane, and selecting the best SDN controller is among the most important tasks; the controller is vulnerable to intrusions and can become a network bottleneck. This study presents an intrusion detection system (IDS) based on the SDN model that runs as an application module within the controller, performing feature extraction with a contractive auto-encoder and classification with a triple attention-based classifier. Additionally, this study leverages OpenDayLight (ODL), one of the best-performing SDN controllers, on which many other SDN controllers are based; ODL provides an open northbound API and supports multiple southbound protocols.
A further central issue is the multi-controller placement problem (CPP), which must be addressed in the SDN setting when aspects such as interruption, capacity, authenticity, and load distribution are considered. Introducing the scenario concept, the CPP is formulated as a robust optimization problem that accounts for changes in network status due to power outages, controller capacity, load fluctuations, and changes in switch demand. To improve network performance, the number and placement of controllers are optimized using simulated annealing and the modified Dragonfly optimization algorithm (MDOA) over different topologies.
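Controller placement via simulated annealing, as used above, can be sketched as follows. The latency-based cost function and the tiny five-node topology are illustrative assumptions, not the paper's actual model:

```python
import math
import random

def place_controllers(dist, k, steps=2000, seed=0):
    """Pick k controller nodes minimising the total distance from every
    switch to its nearest controller, via simulated annealing."""
    rng = random.Random(seed)
    nodes = list(range(len(dist)))

    def cost(placement):
        return sum(min(dist[s][c] for c in placement) for s in nodes)

    current = rng.sample(nodes, k)
    best, best_cost = list(current), cost(current)
    temp = 1.0
    for _ in range(steps):
        # Neighbour move: swap one chosen controller for an unused node
        cand = list(current)
        cand[rng.randrange(k)] = rng.choice([n for n in nodes if n not in current])
        delta = cost(cand) - cost(current)
        # Accept improvements always, worse moves with Boltzmann probability
        if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-9)):
            current = cand
            if cost(current) < best_cost:
                best, best_cost = list(current), cost(current)
        temp *= 0.995  # geometric cooling
    return sorted(best), best_cost

# Illustrative 5-node topology as a symmetric latency matrix
D = [
    [0, 1, 4, 5, 3],
    [1, 0, 2, 4, 4],
    [4, 2, 0, 1, 5],
    [5, 4, 1, 0, 2],
    [3, 4, 5, 2, 0],
]
placement, total_latency = place_controllers(D, k=2)
```

The robust-optimization variant in the paper would re-evaluate this cost over multiple failure and load scenarios rather than a single static matrix.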
A Decentralized SDN Framework and Its Applications to Heterogeneous Internets
Motivated by the internets of the future, which will likely be considerably larger in size as well as highly heterogeneous and decentralized, we propose Decentralized SDN (D-SDN), a Software-Defined Networking (SDN) framework that enables both physical and logical distribution of the SDN control plane. D-SDN accomplishes network control distribution by defining a hierarchy of controllers that can "match" an internet's organizational and administrative structure. By delegating control between main controllers and secondary controllers, D-SDN is able to accommodate administrative decentralization and autonomy, as well as the disruptions that may be part of the operation of future internets. D-SDN specifies the protocols used for communication between main controllers, between main controllers and secondary controllers, and between secondary controllers. Another distinguishing feature of D-SDN is that it incorporates security as an integral part of the framework and its underlying protocols. This paper describes our D-SDN framework and its protocols. It also presents our prototype implementation and proof-of-concept experimentation on a real testbed, in which we showcase two use cases, namely network capacity sharing and public safety network services.
Overcoming Bandwidth Fluctuations in Hybrid Networks with QoS-Aware Adaptive Routing
With an escalating reliance on sensor-driven scientific endeavors in challenging terrains, the significance of robust hybrid networks, formed by a combination of wireless and wired links, is more noticeable than ever. These networks serve as essential channels for streaming data to centralized data centers, but their efficiency is often degraded by bandwidth fluctuations and network congestion. Especially in bandwidth-sensitive hybrid networks, these issues pose demanding challenges to Quality of Service (QoS). Traditional network management solutions fail to respond adaptively to these dynamic challenges, underscoring the need for innovative solutions. This thesis introduces a novel approach that leverages Software-Defined Networking (SDN) to establish a dynamic, congestion-aware routing mechanism. The proposed mechanism stands out by using bandwidth-based measurements to accurately detect and localize network congestion. Unlike traditional methodologies that rely on rigid route management, our approach adjusts data flow routes dynamically. Experimental data indicate promising outcomes, with clear improvements in network utilization and application performance. Furthermore, the proposed algorithm exhibits remarkable scalability, providing quick route-finding solutions for various data flows without impacting system performance. Thus, this thesis contributes to the ongoing discourse on enhancing hybrid network efficiency in challenging conditions, setting the stage for future explorations in this area.
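One common way to realize such bandwidth-aware route selection is a widest-path variant of Dijkstra's algorithm, which chooses the route whose bottleneck available bandwidth is largest. This is a minimal sketch over an illustrative topology; the thesis's actual algorithm may differ:

```python
import heapq

def widest_path(links, src, dst):
    """Return (bottleneck_bandwidth, path) maximising the minimum
    measured available bandwidth along the route."""
    graph = {}
    for u, v, bw in links:  # undirected links with measured bandwidth
        graph.setdefault(u, []).append((v, bw))
        graph.setdefault(v, []).append((u, bw))
    # Max-heap on bottleneck bandwidth (negated for heapq's min-heap)
    heap = [(-float("inf"), src, [src])]
    best = {}
    while heap:
        neg_bw, node, path = heapq.heappop(heap)
        bw = -neg_bw
        if node == dst:
            return bw, path
        if best.get(node, -1) >= bw:
            continue  # already reached this node via a wider route
        best[node] = bw
        for nxt, link_bw in graph.get(node, []):
            if nxt not in path:
                heapq.heappush(heap, (-min(bw, link_bw), nxt, path + [nxt]))
    return 0, []

# Illustrative hybrid topology: (node, node, available Mbps)
LINKS = [("s", "a", 100), ("a", "d", 20), ("s", "b", 50), ("b", "d", 40)]
bottleneck, route = widest_path(LINKS, "s", "d")
```

Here the route via "b" wins (bottleneck 40 Mbps) even though the first hop via "a" is faster, since the "a"-"d" link would cap the flow at 20 Mbps; re-running this on fresh bandwidth measurements yields the congestion-aware re-routing behaviour described above.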
Design and development of an SDN robotic system with intelligent OpenFlow IoT testbeds for power assessment, prediction and fault management
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London

The current wind turbine and power grid industry has relatively little research and
development on implementing novel communication networks and intelligent systems
to overcome issues that pertain to network failure and lack of monitoring.
Wind turbine location can be a big concern when it comes to identifying an
efficient location for a future wind turbine, and the impact of a site with inefficient
meteorological parameters can result in the relocation of a wind turbine and revenue
loss. Unplanned wind turbine shutdowns are considered one of the major
revenue-loss factors of a modern wind farm business. Typically, an unplanned wind
turbine shutdown results from sensors failing due to harsh environmental conditions
that prevent hardware status from being available on the monitoring system. The
above-mentioned research problems pertain to wind turbine site assessment and
power prediction. In this thesis, novel programmable software-defined robotics and
IoT testbeds are proposed, fusing Artificial Intelligence and optimization
methods to solve specific problems related to wind turbine site assessment and
fault management. The site selection process is implemented using the proposed aerial
and ground robotic systems, which incorporate Software-Defined Networking
and OpenFlow switching capabilities. A second stage of development proposes
a prediction platform that runs on the aerial robot cluster using neural network
optimization and regression techniques. To overcome unplanned wind turbine
network outages, an IoT micro cloud cluster system is proposed that acts as an
immediate failover platform providing continuous health readings of the wind turbine,
ensuring that the turbine in question is not shut down unnecessarily. The proposed
system helps minimize the revenue loss caused by stopping a wind turbine from
operation and helps maintain generated power stability on the grid. Additionally, since
large wind farms require agile and scalable management of selecting the most
efficient wind turbine installation locations, a softwarized cognitive routing protocol
is proposed. The group of quadcopters is a redundant failover Software-Defined
Network/OpenFlow system that can cover every single waypoint of the farmland.
Because power consumption is essential for service continuity, a Software-Defined
charging system testbed is proposed that uses inductive power transfer wit
Proceedings of the 2004 ONR Decision-Support Workshop Series: Interoperability
In August of 1998 the Collaborative Agent Design Research Center (CADRC) of the California Polytechnic State University in San Luis Obispo (Cal Poly), approached Dr. Phillip Abraham of the Office of Naval Research (ONR) with the proposal for an annual workshop focusing on emerging concepts in decision-support systems for military applications. The proposal was considered timely by the ONR Logistics Program Office for at least two reasons. First, rapid advances in information systems technology over the past decade had produced distributed collaborative computer-assistance capabilities with profound potential for providing meaningful support to military decision makers. Indeed, some systems based on these new capabilities such as the Integrated Marine Multi-Agent Command and Control System (IMMACCS) and the Integrated Computerized Deployment System (ICODES) had already reached the field-testing and final product stages, respectively.
Second, over the past two decades the US Navy and Marine Corps had been increasingly challenged by missions demanding the rapid deployment of forces into hostile or devastated territories with minimum or non-existent indigenous support capabilities. Under these conditions Marine Corps forces had to rely mostly, if not entirely, on sea-based support and sustainment operations. Particularly today, operational strategies such as Operational Maneuver From The Sea (OMFTS) and Sea To Objective Maneuver (STOM) are very much in need of intelligent, near real-time and adaptive decision-support tools to assist military commanders and their staff under conditions of rapid change and overwhelming data loads.
In the light of these developments the Logistics Program Office of ONR considered it timely to provide an annual forum for the interchange of ideas, needs and concepts that would address the decision-support requirements and opportunities in combined Navy and Marine Corps sea-based warfare and humanitarian relief operations. The first ONR Workshop was held April 20-22, 1999 at the Embassy Suites Hotel in San Luis Obispo, California. It focused on advances in technology with particular emphasis on an emerging family of powerful computer-based tools, and concluded that the most able members of this family of tools appear to be computer-based agents that are capable of communicating within a virtual environment of the real world. From 2001 onward the venue of the Workshop moved from the West Coast to Washington, and in 2003 the sponsorship was taken over by ONR’s Littoral Combat/Power Projection (FNC) Program Office (Program Manager: Mr. Barry Blumenthal). Themes and keynote speakers of past Workshops have included:
1999: ‘Collaborative Decision Making Tools’ Vadm Jerry Tuttle (USN Ret.); LtGen Paul Van Riper (USMC Ret.); Radm Leland Kollmorgen (USN Ret.); and, Dr. Gary Klein (Klein Associates)
2000: ‘The Human-Computer Partnership in Decision-Support’ Dr. Ronald DeMarco (Associate Technical Director, ONR); Radm Charles Munns; Col Robert Schmidle; and, Col Ray Cole (USMC Ret.)
2001: ‘Continuing the Revolution in Military Affairs’ Mr. Andrew Marshall (Director, Office of Net Assessment, OSD); and, Radm Jay M. Cohen (Chief of Naval Research, ONR)
2002: ‘Transformation ... ’ Vadm Jerry Tuttle (USN Ret.); and, Steve Cooper (CIO, Office of Homeland Security)
2003: ‘Developing the New Infostructure’ Richard P. Lee (Assistant Deputy Under Secretary, OSD); and, Michael O’Neil (Boeing)
2004: ‘Interoperability’ MajGen Bradley M. Lott (USMC), Deputy Commanding General, Marine Corps Combat Development Command; Donald Diggs, Director, C2 Policy, OASD (NII)
GENERIC AND ADAPTIVE METADATA MANAGEMENT FRAMEWORK FOR SCIENTIFIC DATA REPOSITORIES
Rapid technological progress has led to diverse advances in data acquisition and processing across research disciplines. This in turn results in an immense growth of data and metadata generated by scientific experiments. Regardless of the specific research field, scientific practice is increasingly characterized by data and metadata. As a consequence, universities, research communities, and funding agencies are intensifying their efforts to efficiently curate, store, and analyze scientific data. The main goals of scientific data repositories are establishing long-term storage, providing access to data, making data available for reuse and referencing, capturing data provenance for reproducibility, and providing metadata, annotations, or references that convey the domain-specific knowledge required to interpret the data. Scientific data repositories are highly complex systems composed of elements from different research fields, such as algorithms for data compression and long-term data archiving, frameworks for metadata and annotation management, workflow provenance and provenance interoperability between heterogeneous workflow systems, authorization and authentication infrastructures, and visualization tools for data interpretation.
This thesis describes a modular architecture for a scientific data repository that supports research communities in orchestrating their data and metadata throughout their respective life cycles. The architecture consists of components representing four research fields. The first component is a data transfer client, which provides a generic interface for capturing data from, and accessing data in, scientific data acquisition systems.
The second component is the MetaStore framework, an adaptive metadata management framework that handles both static and dynamic metadata models. To support arbitrary metadata schemas, the MetaStore framework is built on the component-based dynamic composition design pattern. MetaStore is also equipped with an annotation framework for handling dynamic metadata.
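The component-based dynamic composition pattern mentioned above can be illustrated with a small registry that selects a metadata handler at runtime based on the schema of the incoming record. The handler names and record layout are hypothetical; this sketches the pattern, not the MetaStore implementation:

```python
# Sketch of component-based dynamic composition: handler components are
# registered per metadata schema and composed at runtime (names illustrative).
class HandlerRegistry:
    def __init__(self):
        self._handlers = {}

    def register(self, schema):
        def decorator(cls):
            self._handlers[schema] = cls
            return cls
        return decorator

    def handler_for(self, record):
        # Pick the component matching the record's declared schema
        cls = self._handlers.get(record.get("schema"))
        if cls is None:
            raise KeyError(f"no handler for schema {record.get('schema')!r}")
        return cls()

registry = HandlerRegistry()

@registry.register("dublin-core")
class DublinCoreHandler:
    def extract_title(self, record):
        return record["metadata"]["dc:title"]

@registry.register("custom-xml")
class CustomXmlHandler:
    def extract_title(self, record):
        return record["metadata"]["title"]

record = {"schema": "dublin-core", "metadata": {"dc:title": "Nanoscopy run 42"}}
title = registry.handler_for(record).extract_title(record)
```

New schemas are supported by registering another handler class, without changing the repository core, which is the point of the composition pattern.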
The third component is an extension of the MetaStore framework for the automated handling of provenance metadata for BPEL-based workflow management systems. The Prov2ONE algorithm we designed and implemented automatically translates the structure and execution traces of BPEL workflow definitions into the ProvONE provenance model. The availability of complete BPEL provenance data in ProvONE not only enables an aggregated analysis of a workflow definition together with its execution trace, but also ensures the compatibility of provenance data from different specification languages.
The fourth component of our scientific data repository is the ProvONE Provenance Interoperability Framework (P-PIF). It ensures the interoperability of provenance data across the heterogeneous provenance models of different workflow management systems. P-PIF consists of two parts: the Prov2ONE algorithm for SCUFL and MoML workflow specifications, and workflow-management-system-specific adapters that extract, translate, and model retrospective provenance data into the ProvONE provenance model. P-PIF can translate both control flow and data flow into ProvONE. The availability of heterogeneous provenance traces in ProvONE makes it possible to compare, analyze, and query provenance data from different workflow systems.
We evaluated the components of the scientific data repository presented in this thesis as follows. For the data transfer client, we examined data transfer performance with the standard protocol for nanoscopy datasets. We evaluated the MetaStore framework with respect to two aspects: first, we tested metadata ingestion and full-text search performance under different database configurations; second, we demonstrate MetaStore's comprehensive functional coverage through a feature-based comparison with existing metadata management systems. For the evaluation of P-PIF, we first proved the correctness and completeness of our Prov2ONE algorithm, and further evaluated the ProvONE prognostic graph patterns generated by the Prov2ONE BPEL algorithm against existing BPEL control-flow patterns. To show that P-PIF is a sustainable, standards-compliant framework, we also compare the features of P-PIF with those of existing provenance interoperability frameworks. These evaluations demonstrate the advantages of the individual components developed in this thesis over existing systems.