
    Resource Allocation in Networking and Computing Systems: A Security and Dependability Perspective

    In recent years, there has been a trend to integrate networking and computing systems, whose management is becoming increasingly complex. Resource allocation is one of the crucial aspects of managing such systems and is affected by this increased complexity. Resource allocation strategies aim to effectively maximize performance, system utilization, and profit by considering virtualization technologies, heterogeneous resources, context awareness, and other features. In such a complex scenario, security and dependability are vital concerns that must be considered in future computing and networking systems in order to support advanced services such as mission-critical applications. This paper provides a comprehensive survey of the existing literature that considers security and dependability for resource allocation in computing and networking systems. Current research works are categorized by the type of resources allocated for different technologies, scenarios, issues, attributes, and solutions. The paper presents research works on resource allocation that include security and dependability, both individually and jointly, and discusses future research directions on resource allocation. The paper shows that only a few works consider security and dependability in resource allocation in future computing and networking systems, even individually, and it highlights the importance of jointly considering security and dependability and the need for intelligent, adaptive and robust solutions. This paper aims to help researchers effectively consider security and dependability in future networking and computing systems.

    A process for the application of modular architectural principles to system concept design.

    A system architecture can be configured in ways that simplify both a system design and its development, by using established architectural principles such as independence and modularity. Despite systems design having been recognised as a discipline and a process as early as the mid-1900s, there are currently few methods available that address how these principles can be applied in practice. The literature search for this research established a set of principles that can be used to develop a modular design, but also showed that there are few formal methods available that allow a system designer to apply such principles. This thesis examines the key principles of modular architecture and develops a process that enables the application of these principles to a system concept design. The key principles used are those of simplicity, independence, modularity and similarity. The concept of ‘context types’ is developed to allow the system designer to choose an architectural strategy that suits the system context. Another novel concept, ‘functional interaction types’, helps the system designer identify critical interactions within the architecture that need to be addressed. Finally, the concept of functional interaction types is combined with existing measures of architectural ‘goodness’ to generate a method of evaluating the architecture that focusses on critical aspects. The proposed process is demonstrated on a range of system examples and compared with two of the most well-known methods currently available: Systematic Design and Axiomatic Design.
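
    The thesis presents its own evaluation method in full; as a loose illustration of the kind of measure such an evaluation can build on, the sketch below scores a candidate modular grouping with a design structure matrix (DSM), counting the share of interaction weight that stays inside modules. The matrix, the groupings, and the scoring rule are hypothetical, not the thesis's method.

```python
# Illustrative sketch only (not the thesis's evaluation method): score a
# candidate architecture with a design structure matrix (DSM). A higher
# share of within-module interaction weight suggests better modularity.
import numpy as np

def modularity_score(dsm: np.ndarray, modules: list[set[int]]) -> float:
    """Fraction of total interaction weight that stays inside a module."""
    total = dsm.sum()
    if total == 0:
        return 1.0  # no interactions at all: trivially modular
    internal = sum(dsm[np.ix_(list(m), list(m))].sum() for m in modules)
    return internal / total

# Four components with symmetric interaction strengths (hypothetical data),
# compared under two candidate groupings.
dsm = np.array([[0, 3, 0, 0],
                [3, 0, 1, 0],
                [0, 1, 0, 2],
                [0, 0, 2, 0]])
print(modularity_score(dsm, [{0, 1}, {2, 3}]))  # ~0.83: strong interactions kept internal
print(modularity_score(dsm, [{0, 2}, {1, 3}]))  # 0.0: every interaction crosses modules
```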

    The distance-based critical node detection problem: models and algorithms

    In the wake of terrorism and natural disasters, assessing networked systems for vulnerability to failures that arise from these events is essential to maintaining the operations of the systems. This is crucial given the heavy dependence of daily social and economic activities on networked systems such as transport, telecommunication and energy networks, as well as the interdependence of these networks. In this thesis, we explore methods to assess the vulnerability of networked systems to element failures which employ connectivity as the performance measure for vulnerability. The associated optimisation problem, termed the critical node (edge) detection problem, seeks to identify a subset of nodes (edges) of a network whose deletion (failure) optimises a network connectivity objective. Traditional connectivity measures employed in most studies of the critical node detection problem overlook the internal cohesiveness of networks and the extent of connectivity in the network, which limits the effectiveness of the resulting methods in uncovering vulnerability with regard to network connectivity. Our work therefore focuses on distance-based connectivity, a fairly new class of connectivity introduced for studying the critical node detection problem to overcome the limitations of traditional connectivity measures. In Chapter 1, we provide an introduction outlining the motivations and the methods related to our study. In Chapter 2, we review the literature on the critical node detection problem as well as its application areas and related problems. Following this, we formally introduce the distance-based critical node detection problem in Chapter 3, where we propose new integer programming models for the case of hop-based distances and an efficient algorithm for the separation problems associated with the models; we also propose two families of valid inequalities. In Chapter 4, we consider the distance-based critical node detection problem using a heuristic approach, proposing a centrality-based heuristic that employs a backbone crossover and a centrality-based neighbourhood search. In Chapter 5, we present generalisations of the methods proposed in Chapter 3 to edge-weighted graphs, and we introduce the edge-deletion version of the problem, which we term the distance-based critical edge detection problem. Throughout Chapters 3, 4 and 5, we provide computational experiments. Finally, in Chapter 6 we present conclusions as well as future research directions. Keywords: Network Vulnerability, Critical Node Detection Problem, Distance-based Connectivity, Integer Programming, Lazy Constraints, Branch-and-cut, Heuristics.
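
    As a minimal illustration of the hop-based objective (not the thesis's integer programming or branch-and-cut machinery), the sketch below counts node pairs connected within k hops and greedily deletes the nodes whose removal reduces this count the most. The value of k, the deletion budget, and the greedy strategy are all assumptions made for the example.

```python
# Hedged sketch: a distance-based connectivity measure (node pairs within k
# hops) and a greedy critical-node heuristic built on it.
import networkx as nx
from itertools import combinations

def pairs_within_k(G: nx.Graph, k: int) -> int:
    """Count unordered node pairs whose shortest-path distance is <= k."""
    lengths = dict(nx.all_pairs_shortest_path_length(G, cutoff=k))
    return sum(1 for u, v in combinations(G.nodes, 2)
               if v in lengths.get(u, {}))

def greedy_critical_nodes(G: nx.Graph, k: int, budget: int) -> list:
    """Repeatedly delete the node whose removal leaves the fewest k-hop pairs."""
    removed, H = [], G.copy()
    for _ in range(budget):
        best = min(H.nodes,
                   key=lambda v: pairs_within_k(
                       nx.restricted_view(H, [v], []), k))
        removed.append(best)
        H.remove_node(best)
    return removed

G = nx.karate_club_graph()
print(pairs_within_k(G, 2))             # baseline 2-hop connectivity
print(greedy_critical_nodes(G, 2, 3))   # three heuristically critical nodes
```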

    Reliability Analysis of the Hypercube Architecture.

    This dissertation presents improved techniques for analyzing network-connected (NCF), 2-connected (2CF), task-based (TBF), and subcube (SF) functionality measures in a hypercube multiprocessor with faulty processing elements (PEs) and/or communication elements (CEs). These measures help study system-level fault tolerance issues and relate to various application modes in the hypercube. The solutions discussed fall into probabilistic and deterministic models. The probabilistic measure assumes a stochastic graph of the hypercube in which PEs and/or CEs may fail with certain probabilities, while the deterministic model considers that some system components have already failed and aims to determine the system functionality. For the probabilistic model, MIL-HDBK-217F is used to predict PE and CE failure rates for an Intel iPSC system. First, a technique called CAREL is presented; a proof of its correctness is included in an appendix. Using the shelling ordering concept, CAREL is shown to solve the exact probabilistic NCF measure for a hypercube in time polynomial in the number of spanning trees. However, this number increases exponentially in the hypercube dimension. This dissertation therefore aims to obtain lower and upper bounds on the measures more efficiently. The algorithms presented generate tighter bounds than had been obtained previously and run in time polynomial in the cube dimension. The proposed algorithms for the probabilistic 2CF measure consider PE and/or CE failures. For the deterministic measures, a hybrid method for fault-tolerant broadcasting in the hypercube is proposed, combining the favorable features of redundant and non-redundant techniques. A generalized result on the deterministic TBF measure for the hypercube is then described. Two distributed algorithms are proposed to identify the largest operational subcubes in a hypercube C_n with faulty PEs. The first, called LOS1, requires a list of faulty components and utilizes the CMB operator of CAREL to solve the problem. When the number of unavailable nodes (faulty or busy) increases, an alternative distributed approach, called LOS2, processes m available nodes in O(mn) time. The proposed techniques are simple and efficient.
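
    The dissertation's LOS1 and LOS2 are distributed algorithms; as a much simpler, purely illustrative rendering of the underlying question, the sketch below brute-forces every subcube of a small n-cube (ternary words over {0, 1, *}) and reports the largest fault-free ones. The fault set and cube size are hypothetical, and the enumeration is exponential in n, so this is viable only for small cubes.

```python
# Hedged sketch (not LOS1/LOS2): enumerate all 3**n subcubes of an n-cube
# and keep the highest-dimensional ones containing no faulty node.
from itertools import product

def largest_operational_subcubes(n: int, faulty: set[int]):
    best_dim, best = -1, []
    for pattern in product("01*", repeat=n):      # every subcube pattern
        fixed = [(i, b) for i, b in enumerate(pattern) if b != "*"]
        # A node belongs to the subcube iff it matches every fixed bit
        # (pattern[0] is taken as the most significant bit).
        members = [x for x in range(2 ** n)
                   if all(((x >> (n - 1 - i)) & 1) == int(b)
                          for i, b in fixed)]
        if any(x in faulty for x in members):
            continue                               # subcube hits a fault
        dim = pattern.count("*")                   # subcube dimension
        if dim > best_dim:
            best_dim, best = dim, ["".join(pattern)]
        elif dim == best_dim:
            best.append("".join(pattern))
    return best_dim, best

# 4-cube with two faulty processing elements (hypothetical fault set):
# the best fault-free subcubes turn out to be 2-dimensional.
print(largest_operational_subcubes(4, {0b0000, 0b1111}))
```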

    Reliable load-balancing routing for resource-constrained wireless sensor networks

    Wireless sensor networks (WSNs) are energy and resource constrained. Energy limitations make it advantageous to balance radio transmissions across multiple sensor nodes; load-balanced routing is therefore highly desirable and has motivated a significant volume of research. A multihop sensor network architecture can also provide greater coverage, but requires a highly reliable and adaptive routing scheme to accommodate frequent topology changes. Current reliability-oriented protocols degrade energy efficiency and increase network latency. This thesis develops and evaluates a novel solution, reliable load-balancing routing (RLBR), that provides energy-efficient routing while enhancing packet delivery reliability. RLBR makes four contributions in the areas of reliability, resiliency and load balancing in support of the primary objective of network lifetime maximisation; the results are captured using real-world testbeds as well as simulations. The first contribution uses sensor node emulation, at the instruction cycle level, to characterise the additional processing and computation overhead required by the routing scheme. The second contribution is based on real-world testbeds comprising two different TinyOS-enabled sensor platforms under different scenarios. The third contribution extends and evaluates RLBR using large-scale simulations. It is shown that RLBR consumes less energy while reducing topology repair latency and supports various aggregation weights by redistributing packet relaying loads; it also shows balanced energy usage and a significant lifetime gain. Finally, the fourth contribution is a novel variable transmission power control scheme, created from the experience gained in the prior practical and simulated studies, which operates at the data link layer to dynamically reduce unnecessarily high transmission power while maintaining acceptable link reliability.
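
    The general idea behind reliability-aware load balancing can be sketched as follows: each node picks a next hop by weighing link quality against the candidate's remaining energy, so reliable but drained relays are not exhausted. The metric and weights below are illustrative assumptions, not the RLBR metric defined in the thesis.

```python
# Hedged sketch of reliability/energy-aware next-hop selection; the scoring
# function and weights are hypothetical, not RLBR's actual metric.
from dataclasses import dataclass

@dataclass
class Neighbor:
    node_id: int
    link_quality: float      # e.g. packet reception ratio in [0, 1]
    residual_energy: float   # fraction of battery remaining in [0, 1]

def choose_next_hop(neighbors: list[Neighbor],
                    w_link: float = 0.6, w_energy: float = 0.4) -> Neighbor:
    """Pick the neighbor with the best combined reliability/energy score."""
    return max(neighbors,
               key=lambda nb: w_link * nb.link_quality
                              + w_energy * nb.residual_energy)

candidates = [Neighbor(1, 0.95, 0.20),   # very reliable link, nearly drained
              Neighbor(2, 0.80, 0.90),   # decent link, fresh battery
              Neighbor(3, 0.50, 0.99)]   # weak link, full battery
print(choose_next_hop(candidates).node_id)  # 2: balances both concerns
```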

    Causal failures and cost-effective edge augmentation in networks

    Node failures can severely degrade the connectivity of a network. In traditional models, the failure of a node affects its neighbors and may in turn trigger their failures, and so on. However, node failures can also indirectly cause the failure of nodes that are not adjacent to the failed one. In a power grid, for example, generators share the load: the failure of one generator induces extra load on the other generators, which may trigger further failures. We call such failures causal failures. In this dissertation, we consider the impact of causal failures on multiple aspects of a network, organized as follows.
    • In Chapter 1, we introduce basic concepts of networks and graphs and classical failure models, and formally define causal failures in a given network.
    • Chapter 2 addresses network robustness: we seek the maximum number of causal failures that can be tolerated while maintaining a connected component containing at least an α-fraction of the nodes, with α parametrizing how much of the system must stay connected.
    • Chapter 3 deals with vulnerability: we seek the minimum number of causal failures whose application disconnects the network into at least k connected components.
    • In Chapter 4, we consider causal node failures occurring in a cascading manner. Cascading causal failures affect communication between nodes, which depends on the paths connecting them, so we study their impact on the distance between pairs of nodes (a toy sketch of this cascading mechanism appears after this abstract). More precisely, given a network G, a set of causal failures (possibly cascading), a pair of nodes s and t, and a constant α ≥ 1, we determine the maximum number of causal failures that can be applied (i.e., the nodes they involve are removed) such that, in the resulting network G′, d_G′(s, t) ≤ α · d_G(s, t), where d_G(s, t) and d_G′(s, t) are the distances between s and t in G and G′, respectively.
    • In Chapter 5, we consider causal edge failures in flow networks and investigate their impact on flow transmission. We formulate an optimization problem to find the maximum number of causal edge failures after which the flow network can still deliver d units from source node s to terminal node t.
    • In Chapter 6, we consider edge-weighted network augmentation in the face of causal failures: we seek a minimum-weight set of edges such that the network maintains an α-giant component when each causality is applied individually.
    We show that the optimization problems in these chapters are NP-hard and provide corresponding mixed-integer linear programming models. Moreover, we design polynomial-time heuristic algorithms to solve them approximately. In each chapter, we run experiments on multiple synthetic and real networks to compare the performance of the mixed-integer linear programming models and the heuristic algorithms; the results demonstrate the efficacy and efficiency of the heuristics relative to the mixed-integer linear programming models.
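
    The dissertation's mixed-integer models are not reproduced here; as a toy illustration of the causal-failure notion itself, the sketch below propagates a causal-failure cascade from a seed set (failing a node also fails the non-adjacent nodes that depend on it) and then checks whether an α-fraction giant component survives. The `causes` dependency map and the α value are assumptions.

```python
# Hedged sketch: cascading causal failures plus an alpha-giant-component check.
import networkx as nx

def apply_causal_failures(G: nx.Graph, seeds: set, causes: dict) -> nx.Graph:
    """Remove the seed nodes plus every node they causally bring down."""
    failed, frontier = set(seeds), set(seeds)
    while frontier:                        # follow the cascade to a fixed point
        frontier = {v for u in frontier for v in causes.get(u, ())} - failed
        failed |= frontier
    H = G.copy()
    H.remove_nodes_from(failed)
    return H

def has_alpha_giant_component(G: nx.Graph, H: nx.Graph, alpha: float) -> bool:
    """True if the surviving network keeps a component of >= alpha * |V(G)| nodes."""
    largest = max((len(c) for c in nx.connected_components(H)), default=0)
    return largest >= alpha * G.number_of_nodes()

G = nx.cycle_graph(10)
causes = {0: [5], 5: [7]}     # failing node 0 drags down 5, which drags down 7
H = apply_causal_failures(G, {0}, causes)
print(sorted(H.nodes))                       # nodes 1-4, 6, 8, 9 survive
print(has_alpha_giant_component(G, H, 0.4))  # True: largest component has 4 nodes
```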