100 research outputs found

    Infrastructure-wide and intent-based networking dataset for 5G-and-beyond AI-driven autonomous networks

    In the era of Autonomous Networks (ANs), artificial intelligence (AI) plays a crucial role in their development in cellular networks, especially in 5G-and-beyond networks. The availability of high-quality networking datasets is essential for creating data-driven algorithms for network management and optimisation tasks; these datasets are the foundation that allows AI algorithms to make informed decisions and optimise network resources efficiently. In this work we propose the IW-IB-5GNET networking dataset: an infrastructure-wide and intent-based dataset intended for research and development of network management and optimisation solutions in 5G-and-beyond networks. It is infrastructure-wide because it includes information from all layers of the 5G network, and intent-based because data collection is initiated from predefined user intents. The dataset was generated in an emulated 5G network with a wide deployment of network sensors. IW-IB-5GNET promises to facilitate the development of autonomous and intelligent network management solutions that enhance network performance and optimisation.

    Attack graph approach to dynamic network vulnerability analysis and countermeasures

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy. It is widely accepted that modern computer networks (often presented as a heterogeneous collection of functioning organisations, applications, software, and hardware) contain vulnerabilities. This research proposes a new methodology to compute a dynamic severity cost for each state. Here a state refers to the behaviour of a system during an attack; an example of a state is one in which an attacker could influence information held by an application in order to alter credentials. The cost is computed using a modified variant of the Common Vulnerability Scoring System (CVSS), referred to as the Dynamic Vulnerability Scoring System (DVSS), which calculates scores for intrinsic, time-based, and ecological metrics by combining related sub-scores and modelling the problem's parameters in a mathematical framework to develop a unique severity cost. Because the static nature of CVSS affects the scoring value, the author has adapted a novel model to produce a DVSS metric that is more precise and efficient. In this approach the final scores are determined from a number of parameters, including the network architecture, device settings, and the impact of interactions between vulnerabilities.
    An attack graph (AG) is a security model representing the chains of vulnerability exploits in a network. A number of researchers have acknowledged the visual complexity of attack graphs and the lack of in-depth understanding they afford. Current attack graph tools are constrained to limited attributes or even rely on hand-generated input; the automatic collection of vulnerability information has been troublesome, and vulnerability descriptions are frequently created by hand or based on limited data. The network architectures and configurations, along with the interactions between individual vulnerabilities, are considered when computing the cost using the DVSS and a dynamic cost-centric framework. A new methodology was developed to present an attack graph with a dynamic cost metric based on DVSS, together with a novel methodology to estimate and represent the cost-centric approach for the states of each host. The framework is applied to a test network, using the Nessus scanner to detect known vulnerabilities, and these results are used to build and represent the dynamic cost-centric attack graph using ranking algorithms (in a fashion similar to Mehta et al. 2006 and Kijsanayothin 2010). However, instead of using vulnerabilities for each host, a CostRank Markov model has been developed using a novel cost-centric approach, thereby reducing the complexity of the attack graph and mitigating the visibility problem.
    An analogous parallel algorithm is developed to implement CostRank. The motivation for a parallel CostRank algorithm is to expedite the state-ranking calculations as the number of hosts and/or vulnerabilities grows. In the same way, the author intends to secure large-scale networks, which require fast and reliable computation to rank enormous graphs with thousands of vertices (states) and millions of arcs (each representing an action that moves the system from one state to another). The proposed approach focuses on a parallel CostRank computational architecture to appraise the enhancement in CostRank calculations and the scalability of the algorithm.
In particular, partitioning the input data, graph files, and ranking vectors with a load-balancing technique can enhance the performance and scalability of parallel CostRank computations. A practical model of the analogous parallel CostRank calculation is undertaken, resulting in a substantial decrease in communication between calculations and in iteration time. The results are presented analytically in terms of scalability, efficiency, memory usage, speed-up, and input/output rates. Finally, a countermeasures model is developed to protect against network attacks using a Dynamic Countermeasures Attack Tree (DCAT). The DCAT tree is built as follows: (i) use the scalable parallel CostRank algorithm to determine the critical assets that system administrators need to protect; (ii) use the Nessus scanner to determine the vulnerabilities associated with each asset, using the dynamic cost-centric framework and DVSS; (iii) review all published mitigations for those vulnerabilities; (iv) assess how well each security solution mitigates the risks; (v) assess the DCAT algorithm in terms of effective security cost, probability, and cost/benefit analysis to reduce the total impact of a specific vulnerability.
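
The thesis describes CostRank as a Markov-model ranking over attack-graph states, computed with PageRank-style ranking algorithms (in the fashion of Mehta et al. 2006) and biased by DVSS-derived costs. As a rough illustration only, the sketch below runs a power iteration with a cost-weighted restart vector; the function name, damping factor, and the exact way costs enter the model are assumptions for illustration, not the thesis's formulation.

```python
import numpy as np

def cost_rank(adjacency, costs, damping=0.85, tol=1e-9, max_iter=1000):
    """PageRank-style ranking of attack-graph states, biased by per-state costs."""
    n = adjacency.shape[0]
    out_deg = adjacency.sum(axis=1)
    # Row-stochastic transition matrix; states with no outgoing exploit jump uniformly.
    P = np.where(out_deg[:, None] > 0,
                 adjacency / np.maximum(out_deg, 1)[:, None],
                 1.0 / n)
    v = costs / costs.sum()          # cost-weighted restart (personalization) vector
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = damping * (r @ P) + (1 - damping) * v
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r_new

# Toy 4-state attack graph: 0 -> {1, 2} -> 3, where state 3 is the critical asset.
A = np.array([[0, 1, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
dvss_costs = np.array([2.0, 5.5, 7.8, 9.3])   # hypothetical DVSS severity costs
print(cost_rank(A, dvss_costs))
```

States that are costly and reachable through many exploit chains accumulate rank, which is the intuition behind using the ranking to pick the critical assets in step (i) of the DCAT scheme.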

    eHDDP: Enhanced Hybrid Domain Discovery Protocol for network topologies with both wired/wireless and SDN/non-SDN devices

    Efficiently handling both wired and wireless devices in SDN networks is still an open issue. eHDDP is an enhanced version of the Hybrid Domain Discovery Protocol (HDDP) that allows the SDN control plane to discover and manage hybrid topologies composed of both SDN and non-SDN devices with wired and/or wireless interfaces, thus opening a path for the integration of IoT and SDN networks. The proposal is also able to detect both unidirectional and bidirectional links between wireless devices. eHDDP has been thoroughly evaluated in different scenarios and exhibits good scalability, since the number of required messages is proportional to the number of links in the network topology. Moreover, the measured discovery and processing times, in the range of hundreds of milliseconds, make it possible to support scenarios with low-mobility devices. Funders: Comunidad de Madrid; Junta de Comunidades de Castilla-La Mancha.
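
As a rough illustration of flood-based discovery of the kind described above, the sketch below walks a directed neighbour map from a controller-attached node, counts one discovery message per outgoing link (so the message count grows with the number of links), and labels each discovered link as unidirectional or bidirectional. It is a minimal sketch, not the eHDDP message exchange itself; the node names and the `discover_topology` helper are hypothetical.

```python
from collections import deque

def discover_topology(neighbors, root="controller"):
    """Flood a discovery request from the controller's edge node.

    neighbors : dict mapping node -> list of nodes it can transmit to
                (directed, so a wireless link may be usable in one direction only).
    Returns the reachable nodes, the discovered links labelled 'bidirectional'
    or 'unidirectional', and the number of discovery messages sent.
    """
    links, messages, visited = {}, 0, set()
    queue = deque([root])
    while queue:
        node = queue.popleft()
        if node in visited:
            continue
        visited.add(node)
        for peer in neighbors.get(node, ()):
            messages += 1                       # one discovery message per outgoing link
            kind = "bidirectional" if node in neighbors.get(peer, ()) else "unidirectional"
            links[(node, peer)] = kind          # a bidirectional link shows up once per direction
            if peer not in visited:
                queue.append(peer)
    return visited, links, messages

# Hypothetical hybrid topology: an SDN switch plus non-SDN IoT nodes.
topology = {
    "controller": ["sdn1"],
    "sdn1": ["controller", "iot1", "iot2"],
    "iot1": ["sdn1"],          # bidirectional wireless link with sdn1
    "iot2": ["iot3"],          # iot2 reaches iot3 but not the reverse
    "iot3": [],
}
print(discover_topology(topology))
```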

    Runtime Methods to Improve Energy Efficiency in Supercomputing Applications

    Energy efficiency in supercomputing is critical to limit operating costs and carbon footprints. While the energy efficiency of future supercomputing centers needs to improve at all levels, the energy consumed by the processing units is a large fraction of the total energy consumed by High Performance Computing (HPC) systems. HPC applications use a parallel programming paradigm such as the Message Passing Interface (MPI) to coordinate computation and communication among thousands of processors. With dynamically changing factors in both hardware and software affecting the energy usage of processors, power must be monitored and regulated at runtime to achieve energy savings. This dissertation presents an adaptive runtime framework that enables processors with core-specific power control to reduce power, with little or no performance impact, by dynamically adapting to workload characteristics. Two opportunities to improve the energy efficiency of processors running MPI applications are identified: computational workload imbalance and waiting on memory. Performance monitoring and power regulation are performed by the framework transparently within the MPI runtime system, eliminating the need for code changes to MPI applications. The effect of enforcing power limits (capping) on processors is also investigated. Experiments on 32 nodes (1,024 cores) show that in the presence of workload imbalance, the runtime reduces the Central Processing Unit (CPU) frequency on cores not on the critical path, thereby reducing power, and hence energy usage, without degrading performance. Using this runtime, six MPI mini-applications and one full MPI application show an overall 20% decrease in energy use with less than a 1% increase in execution time. In addition, lowering the frequency on non-critical cores reduces run-to-run performance variation and improves performance. For the full application, an average speedup of 11% is seen, while power is lowered by about 31%, for energy savings of up to 42%. Another experiment on 16 nodes (256 cores) under a power cap also shows performance improvement along with power reduction; thus, an energy optimization can also be a performance optimization. For applications that are limited by memory access times, the memory metrics identified facilitate lowering power by up to 32% without adversely impacting performance. Doctor of Philosophy.
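
The core idea of reducing frequency on cores off the critical path can be illustrated with a simple model: if a core's compute time per iteration is shorter than the slowest core's, it can run proportionally slower and still reach the next synchronization point on time. The sketch below assumes compute time scales inversely with frequency and ignores memory-bound phases; the function name and frequency bounds are hypothetical, and the actual framework makes these decisions inside the MPI runtime from measured hardware data.

```python
def select_core_frequencies(compute_times, f_max=2.4e9, f_min=1.2e9):
    """Pick per-core frequencies so each core finishes near the critical path.

    compute_times : per-core compute time (seconds per iteration) measured at f_max.
    Assumes compute time scales roughly as 1/frequency; a real runtime must also
    account for memory-bound phases, which do not speed up with higher frequency.
    """
    critical = max(compute_times)            # slowest core defines the critical path
    freqs = []
    for t in compute_times:
        # A core with slack can run at roughly f_max * (t / critical) and still
        # reach the next synchronization point on time.
        f = f_max * (t / critical)
        freqs.append(min(f_max, max(f_min, f)))
    return freqs

# Example: four cores, core 0 on the critical path keeps f_max, the rest slow down.
print(select_core_frequencies([1.00, 0.70, 0.55, 0.90]))
```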

    Standard-Compliant Snapshotting for SystemC Virtual Platforms

    The steady increase in complexity of high-end embedded systems goes along with an increasingly complex design process. We are currently still in a transition phase from Hardware Description Language (HDL) based design towards virtual-platform-based design of embedded systems. As design complexity rises faster than developer productivity, a gap forms. Restoring productivity while at the same time managing increased design complexity can also be achieved by focusing on the development of new tools and design methodologies. In most application areas, high-level modelling languages such as SystemC are used in the early design phases. In modern software development, Continuous Integration (CI) is used to automatically test whether a submitted piece of code breaks functionality. Applying the CI concept to embedded system design and testing requires fast build and test execution times from the virtual platform framework, and for this use case the ability to save a specific state of a virtual platform becomes necessary. Saving and restoring specific simulation states requires the ability to serialize all data structures within the simulation models. Improving the frameworks and establishing better methods will only help to narrow the design gap if these changes are introduced with the needs of the engineers and developers in mind; ultimately, it is their productivity that shall be improved. The ability to save the state of a virtual platform enables developers to run longer test campaigns that can even contain randomized test stimuli, and if the saved states are modifiable, developers can inject faulty states into the simulation models. This work contributes a snapshotting extension to the SoCRocket virtual platform framework. The extension can be considered a reference implementation, as its use of the current SystemC/TLM standards makes it compatible with other frameworks. Furthermore, integrating the UVM SystemC library into the framework enables test-driven development and fast validation of SystemC/TLM models using snapshots. These extensions narrow the design gap by supporting designers, testers, and developers in working more efficiently.
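
The thesis itself targets SystemC/TLM models in C++, but the underlying save/restore idea can be sketched in a few lines of Python: serialize every piece of model state into a snapshot object, keep simulating (or inject a fault), and later restore the saved state. The class and method names below are illustrative only and do not reflect the SoCRocket or UVM SystemC APIs.

```python
import copy

class SimModel:
    """A toy simulation model whose state can be saved and restored."""
    def __init__(self):
        self.registers = {"ctrl": 0, "status": 0, "data": 0}
        self.cycle = 0

    def step(self):
        self.cycle += 1
        self.registers["data"] = self.cycle * 3

    def save_snapshot(self):
        # Deep-copy every piece of serializable state. A real framework would walk
        # all models in the platform and use a standardised serialization interface.
        return copy.deepcopy({"registers": self.registers, "cycle": self.cycle})

    def restore_snapshot(self, snapshot):
        state = copy.deepcopy(snapshot)
        self.registers, self.cycle = state["registers"], state["cycle"]

model = SimModel()
for _ in range(5):
    model.step()
snap = model.save_snapshot()      # checkpoint after five simulated cycles
model.step()                      # keep simulating (or inject a faulty state here)
model.restore_snapshot(snap)      # roll back to the checkpoint
assert model.cycle == 5 and model.registers["data"] == 15
```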

    Economic applications of nonparametric methods

    This thesis deals with nonparametric methods, focusing on their application to economic issues. Chapter 2 introduces the basic nonparametric methods underlying the applications in the subsequent chapters. In Chapter 3 we propose some basic standards to improve the use and reporting of nonparametric methods in the statistics and economics literature, for the purposes of accuracy and reproducibility; we make recommendations on four aspects of the application of nonparametric methods: computational practice, published reporting, numerical accuracy, and visualization. In Chapter 4 we investigate the effect of life-cycle factors and other demographic characteristics on income inequality in the UK. Two conditional inequality measures are derived from estimating the cumulative distribution function of household income conditional upon a broad set of explanatory variables, with estimation of the distribution carried out using a semiparametric approach. The proposed inequality estimators are easily interpretable and are shown to be consistent. Our results indicate the importance of interfamily differences in the analysis of income distribution. In addition, our estimation procedure uncovers higher-order properties of the income distribution and non-linearities in its moments that cannot be captured by a "standard" parametric approach; several features of the conditional distribution of income are highlighted. In Chapter 5 we reexamine the relationship between openness to trade and the environment, controlling for economic development, in order to identify the presence of multiple regimes in the cross-country pollution-economy relationship. We first identify the presence of multiple regimes using specification tests that take a single-regime model as the null hypothesis. We then develop an easily interpretable measure, based on an original application of the Blinder-Oaxaca decomposition, of the impact on the environment due to differences in regimes. Finally, we apply a nonparametric recursive partitioning algorithm to endogenously identify the various regimes. Our conclusions are threefold. First, we reject the null hypothesis that all countries obey a common linear model. Second, we find that regime differences can have a quantitatively significant impact. Third, using regression tree analysis we find subsets of countries that appear to possess very different environmental/economic relationships. In Chapter 6 we investigate the existence of the so-called environmental Kuznets curve (EKC), the inverted-U-shaped relationship between income and pollution, using nonparametric regression and threshold regression methods. We find support for threshold models that lead to different reduced-form relationships between environmental quality and economic activity when early stages of economic growth are contrasted with later stages. There is no evidence of a common inverted-U-shaped environment/economy relationship that all countries follow as they grow. We also find that changes that might benefit the environment occur at much higher levels of income than those implied by standard models. Our findings support models in which improvements are a consequence of the deliberate introduction of policies addressing environmental concerns. Moreover, we find evidence that low-income countries have far greater variability in emissions per capita than high-income countries.
This implies that it may be more difficult to predict emission levels for low-income countries approaching the turning point. A summary of the main findings and directions for further research are presented in Chapter 7 and Chapter 8, respectively.
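
Chapter 4's conditional inequality measures are built on an estimate of the conditional distribution of household income given covariates. A standard building block for this kind of estimate is a kernel-weighted (Nadaraya-Watson style) conditional CDF, sketched below with a single covariate and synthetic data; the thesis's semiparametric estimator and bandwidth choices will differ, so treat this purely as an illustration of the idea.

```python
import numpy as np

def conditional_cdf(y_grid, x0, X, Y, bandwidth):
    """Kernel-weighted (Nadaraya-Watson style) estimate of F(y | x = x0).

    X, Y      : 1-D arrays of covariate and outcome observations.
    y_grid    : points at which to evaluate the conditional CDF.
    bandwidth : kernel bandwidth for the covariate.
    """
    u = (X - x0) / bandwidth
    w = np.exp(-0.5 * u ** 2)          # Gaussian kernel weights (unnormalised)
    w = w / w.sum()
    # Weighted fraction of outcomes at or below each evaluation point.
    return np.array([(w * (Y <= y)).sum() for y in y_grid])

# Synthetic example: income rises with age, plus noise.
rng = np.random.default_rng(0)
age = rng.uniform(20, 65, size=2000)
income = 15000 + 600 * age + rng.normal(0, 8000, size=2000)
grid = np.linspace(10000, 70000, num=5)
print(conditional_cdf(grid, x0=40.0, X=age, Y=income, bandwidth=3.0))
```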

    Enabling Hyperscale Web Services

    Modern web services such as social media, online messaging, web search, video streaming, and online banking often support billions of users, requiring data centers that scale to hundreds of thousands of servers, i.e., hyperscale. In fact, the world continues to expect hyperscale computing to drive more futuristic applications such as virtual reality, self-driving cars, conversational AI, and the Internet of Things. This dissertation presents technologies that will enable tomorrow’s web services to meet the world’s expectations. The key challenge in enabling hyperscale web services arises from two important trends. First, over the past few years, there has been a radical shift in hyperscale computing due to an unprecedented growth in data, users, and web service software functionality. Second, modern hardware can no longer support this growth in hyperscale trends due to a decline in hardware performance scaling. To enable this new hyperscale era, hardware architects must become more aware of hyperscale software needs and software researchers can no longer expect unlimited hardware performance scaling. In short, systems researchers can no longer follow the traditional approach of building each layer of the systems stack separately. Instead, they must rethink the synergy between the software and hardware worlds from the ground up. This dissertation establishes such a synergy to enable futuristic hyperscale web services. This dissertation bridges the software and hardware worlds, demonstrating the importance of that bridge in realizing efficient hyperscale web services via solutions that span the systems stack. The specific goal is to design software that is aware of new hardware constraints and architect hardware that efficiently supports new hyperscale software requirements. This dissertation spans two broad thrusts: (1) a software and (2) a hardware thrust to analyze the complex hyperscale design space and use insights from these analyses to design efficient cross-stack solutions for hyperscale computation. In the software thrust, this dissertation contributes uSuite, the first open-source benchmark suite of web services built with a new hyperscale software paradigm, that is used in academia and industry to study hyperscale behaviors. Next, this dissertation uses uSuite to study software threading implications in light of today’s hardware reality, identifying new insights in the age-old research area of software threading. Driven by these insights, this dissertation demonstrates how threading models must be redesigned at hyperscale by presenting an automated approach and tool, uTune, that makes intelligent run-time threading decisions. In the hardware thrust, this dissertation architects both commodity and custom hardware to efficiently support hyperscale software requirements. First, this dissertation characterizes commodity hardware’s shortcomings, revealing insights that influenced commercial CPU designs. Based on these insights, this dissertation presents an approach and tool, SoftSKU, that enables cheap commodity hardware to efficiently support new hyperscale software paradigms, improving the efficiency of real-world web services that serve billions of users, saving millions of dollars, and meaningfully reducing the global carbon footprint. This dissertation also presents a hardware-software co-design, uNotify, that redesigns commodity hardware with minimal modifications by using existing hardware mechanisms more intelligently to overcome new hyperscale overheads. 
Next, this dissertation characterizes how custom hardware must be designed at hyperscale, resulting in industry-academia benchmarking efforts, commercial hardware changes, and improved software development. Based on this characterization's insights, this dissertation presents Accelerometer, an analytical model that estimates gains from hardware customization. Multiple hyperscale enterprises and hardware vendors use Accelerometer to make well-informed hardware decisions. PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/169802/1/akshitha_1.pd
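
The abstract does not give Accelerometer's actual formulation, but the general shape of an analytical model for hardware-customization gains can be illustrated with a simple Amdahl-style offload estimate: overall speedup depends on the fraction of time that can be accelerated, how much faster that fraction runs, and the overhead of dispatching work to the accelerator. The function and parameter names below are assumptions for illustration only.

```python
def offload_speedup(accel_fraction, accel_gain, offload_overhead=0.0):
    """Amdahl-style estimate of overall speedup from offloading part of a service.

    accel_fraction   : fraction of total execution time eligible for acceleration.
    accel_gain       : how much faster the accelerated portion runs (e.g., 10 for 10x).
    offload_overhead : extra time, as a fraction of the original total, spent
                       dispatching work and moving data to the accelerator.
    """
    remaining = (1 - accel_fraction) + accel_fraction / accel_gain + offload_overhead
    return 1.0 / remaining

# E.g., 40% of time in an offloadable kernel, accelerated 10x, 2% dispatch overhead.
print(offload_speedup(0.40, 10, 0.02))   # ~1.5x overall
```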

    Privacy-aware Security Applications in the Era of Internet of Things

    In this dissertation, we introduce several novel privacy-aware security applications, split into three main categories. First, to strengthen current authentication mechanisms, we designed two novel privacy-aware complementary authentication mechanisms: Continuous Authentication (CA) and Multi-Factor Authentication (MFA). Our first system is Wearable-Assisted Continuous Authentication (WACA), which uses sensor data collected from a wrist-worn device to authenticate users continuously. We then improved WACA by integrating a noise-tolerant template matching technique called NTT-Sec to make it privacy-aware, since the collected data can be sensitive. We also designed a novel, lightweight Privacy-Aware Continuous Authentication (PACA) protocol; PACA is easily applicable to other biometric authentication mechanisms whose feature vectors are represented as fixed-length real-valued vectors. In addition to CA, we introduced a privacy-aware multi-factor authentication method called PINTA, which uses fuzzy hashing and homomorphic encryption to protect users' sensitive profiles while providing privacy-preserving authentication. Second, we designed a multi-stage privacy attack against smart home users that uses the wireless network traffic generated during device communication; the attack works even on encrypted traffic, as it relies only on network-traffic metadata. We also designed a novel solution based on generating spoofed traffic. Finally, we introduced two privacy-aware secure data exchange mechanisms that allow data to be shared among multiple parties (e.g., companies, hospitals) while preserving the privacy of the individuals in the dataset. These mechanisms combine Secure Multiparty Computation (SMC) and Differential Privacy (DP) techniques. In addition, we designed a policy language, the Curie Policy Language (CPL), to handle conflicting relationships among parties. The novel methods, attacks, and countermeasures in this dissertation were verified with theoretical analysis and extensive experiments with real devices and users. We believe this research has far-reaching implications for privacy-aware complementary authentication methods, smart home user privacy research, and privacy-aware secure data exchange methods.
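
One standard building block behind privacy-preserving data exchange with differential privacy is the Laplace mechanism: add noise calibrated to a query's sensitivity before releasing the result. The sketch below shows it for a simple counting query; it is a generic illustration, not the dissertation's SMC+DP construction, and the function name, data, and epsilon value are hypothetical.

```python
import numpy as np

def private_count(values, predicate, epsilon, rng=None):
    """Differentially private count via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one individual changes
    the count by at most 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy for this single query release.
    """
    rng = rng or np.random.default_rng()
    true_count = sum(1 for v in values if predicate(v))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: each party releases a noisy count before the counts are aggregated.
ages = [34, 61, 47, 29, 73, 55]
print(private_count(ages, lambda a: a >= 50, epsilon=0.5))
```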
