61 research outputs found

    Performance Analysis and Design of Mobile Ad-Hoc Networks

    Get PDF
    We focus on the performance analysis and design of a wireless ad-hoc network using a virtual-circuit or reservation-based medium access (MAC) layer. In a reservation-based MAC network, source nodes reserve a session's link capacity end-to-end over the entire path before sending traffic over the established path. An example of a generic reservation-based MAC protocol is the Unifying Slot Assignment Protocol (USAP). Any reservation-based medium access protocol (including USAP) uses a simple set of rules to determine the cells or timeslots available at a node to reserve link capacity along the path to the next node. Given node locations, the traffic pattern between nodes, and link propagation matrices as inputs, we develop models to estimate blocking probability and throughput for reservation-based wireless ad-hoc networks. These models extend reduced load loss network models to a wireless setting. For generic USAP with multiple frequency channels, the key effect of multiuser interference on a link is modeled via reduced available link capacity, where the effects of transmissions and receptions in the link neighborhood are captured using the USAP reservation rules. We compare our results with simulation and obtain good agreement using our extended reduced load loss network models, with the reduced available link capacity distribution obtained by simulation. For generic USAP using a single frequency channel, we develop models for unicast traffic using reduced load loss network models, with the sharing of the wireless medium between a node and its neighbors modeled by considering cliques of neighboring interfering links around a particular link. We compare the results of this model with simulation and show a good match. We also develop models to calculate source-destination throughput for the reservation MAC as used in the Joint Tactical Radio System to support both unicast and multicast traffic.
These models extend reduced load loss network models to wireless multicast traffic, with the sharing of the wireless medium between a node and its (up to 2-hop) neighbors modeled by considering cliques of interfering nodes around a particular node. We compare the results of this model with simulation and show a good match. Once we have models to estimate throughput and blocking probabilities, we use them to optimize total network throughput: we compute throughput sensitivities of the reduced load loss network model using an implied cost formulation and use these sensitivities to choose the routing probabilities among multiple paths so that total network throughput is maximized. In any network scenario, MANETs can become disconnected into clusters. As part of the MANET design problem, we look at establishing network connectivity and satisfying required traffic capacity between disconnected clusters by placing a minimum number of advantaged, high-flying Aerial Platforms (APs) as relay nodes at appropriate places. The problem of providing both connectivity and required capacity between disconnected ground clusters (which contain nodes that can communicate directly with each other) is formulated as a summation-form clustering problem of the ground clusters with the APs, along with inter-AP distance constraints that make the AP network connected and complexity costs that enforce ground-cluster-to-AP capacity constraints. The resulting clustering problem is solved using Deterministic Annealing to find (near) globally optimal solutions for the minimum number and locations of the APs needed to establish connectivity and provide the required traffic capacity between disconnected clusters.
The basic connectivity constraints are then extended to make the resulting network survivable to a single AP failure: we add another summation-form constraint so that the AP network forms a biconnected network, and we ensure that each ground cluster is connected to at least two APs. We establish the validity of our algorithms by comparing them with optimal exhaustive search algorithms and show that they are near-optimal for the problem of establishing connectivity between disconnected clusters.
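The reduced load loss network idea above can be made concrete with a small sketch. The following is a minimal Erlang fixed-point approximation for per-link and per-route blocking on a toy network; the topology, link capacity, and offered loads are invented for illustration, and none of the USAP-specific capacity reductions from the thesis are modeled.

```python
import math

def erlang_b(load, capacity):
    """Erlang B blocking probability, via the numerically stable recurrence."""
    b = 1.0
    for m in range(1, capacity + 1):
        b = (load * b) / (m + load * b)
    return b

def reduced_load_blocking(paths, link_capacity, offered, iters=100):
    """Fixed-point reduced-load approximation.
    paths: dict route -> list of link ids; offered: dict route -> Erlangs.
    Returns per-link blocking probabilities."""
    links = {l for p in paths.values() for l in p}
    B = {l: 0.0 for l in links}
    for _ in range(iters):
        newB = {}
        for l in links:
            # offered load thinned by blocking on the *other* links of each route
            rho = sum(
                offered[r] * math.prod(1.0 - B[k] for k in p if k != l)
                for r, p in paths.items() if l in p
            )
            newB[l] = erlang_b(rho, link_capacity)
        B = newB
    return B

paths = {"a": ["l1", "l2"], "b": ["l2", "l3"]}
B = reduced_load_blocking(paths, link_capacity=5, offered={"a": 3.0, "b": 2.0})
# route blocking under the usual independence assumption
route_block = {r: 1 - math.prod(1 - B[l] for l in p) for r, p in paths.items()}
```

The fixed point couples the links: raising the load on route "b" thins less traffic onto "l2", which feeds back into the blocking seen by route "a".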

    Exact algorithms for network design problems using graph orientations

    Get PDF
    The subject of this thesis is exact solution strategies for topological network design problems. These combinatorial optimization problems arise in various real-world scenarios, e.g., in the telecommunication and energy industries. The prime task is to plan or extend networks that physically connect customers. In general, such problems can be described graph-theoretically as follows: given a set of nodes (customers, street crossings, routers, etc.), a set of edges (potential connections, e.g., cables), and a cost function on the edges and/or nodes, we ask for a subset of nodes and edges such that the sum of the costs of the selected elements is minimized while satisfying side-conditions such as connectivity, reliability, or cardinality. In this thesis we concentrate on two special classes of topological network design problems: the k-cardinality tree problem (KCT) and the {0,1,2}-survivable network design problem ({0,1,2}-SND) with node-connectivity constraints. These problems are NP-hard in general, i.e., according to current knowledge it is very unlikely that optimal solutions can be found efficiently (i.e., in polynomial time) for all possible problem instances. The above problems can be formulated as integer linear programs (ILPs), i.e., as systems of linear inequalities, integral variables, and a linear objective function. Such models can be solved using methods of mathematical programming. Since the corresponding solution methods can be very time-consuming in general, this was often used as an argument for developing (meta-)heuristics that obtain solutions quickly, although at the cost of optimality.
However, in this thesis we show that, by exploiting certain graph-theoretic properties of the feasible solutions, we are able to solve large real-world problem instances to provable optimality efficiently in practice. Based on orientation properties of optimal solutions, we formulate new, provably stronger ILPs and solve them via specially tailored branch-and-cut algorithms. Our extensive polyhedral analyses show that these models give tighter descriptions of the solution spaces than previously known formulations and also offer certain algorithmic advantages in practice. In the context of {0,1,2}-SND we present the first orientation property of 2-node-connected graphs that leads to a provably stronger ILP formulation, thereby answering a long-standing open research question. Until recently, both problem classes allowed optimal solutions only for instances with roughly up to 200 nodes.
Our experimental results show that our new approaches handle instances with thousands of nodes. Especially for the KCT problem, our exact method is often even faster than state-of-the-art metaheuristics, which usually do not find optimal solutions.
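To make the k-cardinality tree problem concrete: a KCT solution is a minimum-cost tree spanning exactly k nodes, i.e., k-1 edges. The sketch below solves a toy instance by exhaustive search; it is a naive baseline for intuition only, not the orientation-based ILP/branch-and-cut approach of the thesis, and the example graph and costs are made up.

```python
from itertools import combinations

def is_tree(edges):
    """Check whether an edge set forms a single tree (connected, acyclic)."""
    if not edges:
        return False
    nodes = {u for e in edges for u in e}
    if len(edges) != len(nodes) - 1:
        return False
    # cycle/connectivity check via union-find
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # adding this edge would close a cycle
        parent[ru] = rv
    return True

def min_kct(edges, cost, k):
    """Minimum-cost tree spanning exactly k nodes, by exhaustive search.
    edges: list of (u, v); cost: dict edge -> weight. Exponential: toy sizes only."""
    best, best_cost = None, float("inf")
    for subset in combinations(edges, k - 1):
        if is_tree(list(subset)):
            c = sum(cost[e] for e in subset)
            if c < best_cost:
                best, best_cost = subset, c
    return best, best_cost

edges = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d"), ("b", "d")]
cost = {("a", "b"): 2, ("b", "c"): 1, ("c", "d"): 4, ("a", "d"): 3, ("b", "d"): 1}
tree, c = min_kct(edges, cost, k=3)
```

The exponential enumeration is exactly what makes tighter ILP formulations valuable: the same optimum must be certified without touching most of the subset space.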

    SIMULATION ANALYSIS OF USMC HIMARS EMPLOYMENT IN THE WESTERN PACIFIC

    Get PDF
    As a result of renewed focus on great power competition, the United States Marine Corps is currently undergoing a comprehensive force redesign. In accordance with the Commandant’s Planning Guidance and Force Design 2030, this redesign adds 14 rocket artillery batteries while divesting 14 cannon artillery batteries. These changes necessitate study of tactics and capabilities for rocket artillery against a peer threat in the Indo-Pacific region. This thesis implements an efficient design of experiments to simulate over 1.6 million Taiwan invasions using a stochastic, agent-based combat model. Varying tactics and capabilities as inputs, the model returns measures of effectiveness that serve as the responses in metamodels, which are then analyzed for critical factors, interactions, and change points. The analysis provides insight into the principal factors affecting lethality and survivability for ground-based rocket fires. The major findings include the need for increasingly distributed artillery formations and highly mobile launchers that can emplace and displace quickly, and the inadequacy of the unitary warheads currently employed by HIMARS units. Solutions robust to adversary actions and simulation variability can inform wargames and future studies as the Marine Corps continues to adapt in preparation for potential peer conflict.
    Captain, United States Marine Corps. Approved for public release. Distribution is unlimited.

    Proactive Approaches for System Design under Uncertainty Applied to Network Synthesis and Capacity Planning

    Get PDF
    The need to design systems under uncertainty arises frequently in applications such as telecommunication network configuration, airline hub-and-spoke/inter-hub network design, power grid design, transportation system design, call center staffing, and distribution center design. Such problems are very challenging because: (1) design problems with sophisticated configuration requirements for medium- to large-scale systems often yield large linear/nonlinear mathematical models with both continuous and discrete decision variables, and (2) in most cases input parameters such as demand arrival rates are subject to uncertainty, whereas engineers have to make a design decision "today," before the outcomes of the uncertain parameters can be observed. The purpose of this study was to develop proactive modeling methodologies and effective solution techniques for such system design problems. Particular emphasis was placed on a network design problem with connectivity and diameter requirements under probabilistic edge failures, and on a service system capacity planning problem under uncertain demand rates.
    Industrial Engineering & Management

    Synthesis, Interdiction, and Protection of Layered Networks

    Get PDF
    This research developed the foundation, theory, and framework for a set of analysis techniques to assist decision makers in analyzing questions regarding the synthesis, interdiction, and protection of infrastructure networks. This includes extension of traditional network interdiction to directly model nodal interdiction; new techniques to identify potential targets in social networks based on extensions of shortest-path network interdiction; extension of traditional network interdiction to layered network formulations; and models/techniques to design robust layered networks while considering trade-offs with cost. These approaches identify the maximum protection/disruption possible across layered networks with limited resources, find the most robust layered network design possible given budget limitations while ensuring that demands are met, include traditional social network analysis, and incorporate new techniques to model the interdiction of nodes and edges throughout the formulations. In addition, the importance and effects of multiple optimal solutions for these (and similar) models are investigated. All the models developed are demonstrated on notional examples and were tested on a range of sample problem sets.
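As a concrete illustration of shortest-path network interdiction, one of the building blocks named above, the sketch below lets an interdictor remove up to a budgeted number of edges to maximize the defender's s-t shortest path. It is a brute-force toy on an invented graph, not the formulations developed in this research.

```python
import heapq
from itertools import combinations

def shortest_path(adj, s, t, removed=frozenset()):
    """Dijkstra over adj = {u: {v: w}}, skipping edges in `removed`."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            if (u, v) in removed or (v, u) in removed:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")  # t unreachable

def interdict(adj, s, t, budget):
    """Max-min interdiction: remove up to `budget` edges so the s-t shortest
    path becomes as long as possible. Exhaustive: toy sizes only."""
    edges = {(u, v) for u in adj for v in adj[u] if u < v}
    best_cut, best_len = frozenset(), shortest_path(adj, s, t)
    for r in range(1, budget + 1):
        for cut in combinations(edges, r):
            d = shortest_path(adj, s, t, frozenset(cut))
            if d > best_len:
                best_cut, best_len = frozenset(cut), d
    return best_cut, best_len

adj = {
    "s": {"a": 1, "b": 4},
    "a": {"s": 1, "t": 1, "b": 1},
    "b": {"s": 4, "a": 1, "t": 1},
    "t": {"a": 1, "b": 1},
}
cut, length = interdict(adj, "s", "t", budget=1)
```

On this graph the undisturbed shortest path is s-a-t of length 2; the best single-edge attack severs the s-a link, forcing the defender onto the length-5 detour.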

    Obtaining Anti-Missile Decoy Launch Solution from a Ship Using Machine Learning Techniques

    Get PDF
    One of the most dangerous situations a warship may face is a missile attack launched from other ships, aircraft, submarines, or land. In addition, given the current scenario, it cannot be ruled out that a terrorist group may acquire missiles and use them against ships operating close to the coast, which increases their vulnerability due to the limited reaction time. One of the means the ship has for its defense is decoys, designed to deceive the enemy missile. However, for their use to be effective it is necessary to obtain a valid launching solution quickly. The purpose of this article is to design a methodology to solve the decoy launching problem and provide the ship immediately with the data necessary to make the firing decision. To solve the problem, machine learning models (neural networks and support vector machines) are trained on a set of data obtained from simulations. The performance measures obtained with the implemented multilayer perceptron models allow the replacement of the current procedures, based on tables and launching rules, with machine learning algorithms that are more flexible and adaptable to a larger number of scenarios.
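The core idea, learning a launch solution from simulated engagements, can be sketched minimally. Instead of the article's multilayer perceptrons and support vector machines, the toy below uses a nearest-neighbour lookup over a handful of hypothetical (threat bearing, threat range) to decoy-bearing training rows; all numbers, feature scalings, and names are invented.

```python
import math

# Hypothetical training set from simulated engagements: each row is
# (threat_bearing_deg, threat_range_m) -> recommended decoy launch bearing (deg)
TRAIN = [
    ((30.0, 8000.0), 120.0),
    ((30.0, 3000.0), 150.0),
    ((90.0, 8000.0), 180.0),
    ((90.0, 3000.0), 210.0),
    ((150.0, 8000.0), 240.0),
]

def predict_launch_bearing(bearing, rng, k=1):
    """k-nearest-neighbour regression over the simulated engagements.
    Features are scaled so bearing (deg) and range (m) weigh comparably."""
    def dist(row):
        (b, r), _ = row
        return math.hypot((bearing - b) / 180.0, (rng - r) / 10000.0)
    nearest = sorted(TRAIN, key=dist)[:k]
    return sum(sol for _, sol in nearest) / k

sol = predict_launch_bearing(35.0, 7500.0)
```

A lookup like this shares the article's motivation, replacing static launch tables with a model that interpolates between simulated scenarios, though a trained network generalises far better than raw nearest-neighbour matching.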

    Mitigating hidden node problem in an IEEE 802.16 failure resilient multi-hop wireless backhaul

    Get PDF
    Backhaul networks are used to interconnect access points and further connect them to gateway nodes located in regional or metropolitan centres. Conventionally, these backhaul networks are established using metallic cables, optical fibres, microwave, or satellite links. With the proliferation of wireless technologies, multi-hop wireless backhaul networks have emerged as a potentially cost-effective and flexible solution to provide extended coverage to areas where the deployment of wired backhaul is difficult or cost-prohibitive, such as difficult-to-access and sparsely populated remote areas with little or no existing wired infrastructure.
Nevertheless, wireless backhaul networks are vulnerable to node or link failures. To ensure undisrupted traffic transmission even in the presence of failures, additional nodes and links are introduced to create alternative paths between each source and destination pair. The deployment of such extra links and nodes requires careful planning to ensure that available network resources are fully utilised, while still achieving the specified failure resilience with minimum infrastructure establishment cost.
The majority of current research efforts focus on improving the failure resilience of wired backhaul networks; little has been carried out on their wireless counterparts. Most existing studies on the failure resilience of wireless backhaul networks concern energy-constrained networks such as wireless sensor and ad hoc networks. Moreover, they tend to focus on maintaining the connectivity of the networks during failure while neglecting network performance.
As such, a better approach is needed to design a wireless backhaul network that can meet the specified failure resilience requirement with minimum network cost while achieving the specified quality of service (QoS).
In this study, a failure-resilient wireless backhaul topology, taking the form of a ladder network, is proposed to connect a remote community to a gateway node located in a regional or metropolitan centre. This topology is designed using a minimum number of nodes and provides at least one backup path between each node pair. With the exception of a few failure scenarios, the proposed ladder network can sustain multiple simultaneous link or node failures. Furthermore, it allows traffic to traverse a minimum number of additional hops to reach the destination under failure conditions.
WiMAX wireless technology, based on the IEEE 802.16 standard, is applied to the proposed ladder network for different hop counts. This wireless technology can operate in either point-to-multipoint single-hop mode or multi-hop mesh mode. For the latter, coordinated distributed scheduling involving a three-way handshake procedure is used for resource allocation. Computer simulations are used to extensively evaluate the performance of the ladder network. It is shown that the three-way handshake suffers from a severe hidden node problem, which restrains nodes from data transmission for long periods of time. As a result, data packets accumulate in the buffer queues of the affected nodes and are dropped when the buffers overflow. This in turn degrades network throughput and increases average transmission delay.
A new scheme called reverse notification (RN) is proposed to overcome the hidden node problem. With this scheme, all nodes are informed of the minislots requested by their neighbours. This prevents the nodes from making the same request and increases the chance that nodes obtain all their requested resources and start transmitting data as soon as the handshake is completed. Computer simulations verify that RN can significantly reduce the hidden node problem and thus increase network throughput as well as reduce transmission delay.
In addition, two further schemes, namely request-resend and dynamic minislot allocation, are proposed to mitigate the hidden node problem as it deteriorates during failure. The request-resend scheme addresses the case where the RN message fails to arrive in time at the destined node to prevent it from sending a conflicting request. The dynamic minislot allocation scheme allocates minislots to a given node according to the amount of traffic it is currently servicing. It is shown that these two schemes greatly enhance network performance under both normal and failure conditions.
The performance of the ladder network can be further improved by equipping each node with two transceivers, allowing concurrent transmission on two different frequency channels. A two-channel two-transceiver channel assignment (TTDCA) algorithm is proposed to allocate minislots to the nodes. Under this algorithm, a node uses only one of its two transceivers to transmit control messages during the control subframe and both transceivers to transmit data packets during the data subframe. The frequency channels of the nodes are pre-assigned to more effectively overcome the hidden node problem. It is shown that the TTDCA algorithm, in conjunction with the request-resend and RN schemes, doubles the maximum achievable throughput of the ladder network compared to the single-channel case. Moreover, the throughput remains constant regardless of the hop count.
The TTDCA algorithm is further modified to use the second transceiver at each node to transmit control messages during the control subframe; this is referred to as the enhanced TTDCA (ETTDCA) algorithm. It reduces the time needed to complete the three-way handshake without sacrificing network throughput. Applying the ETTDCA algorithm to ladder networks of different hop counts greatly reduces the transmission delay, allowing the proposed network to relay not only a large amount of data traffic but also delay-sensitive traffic. This suggests that the proposed ladder network is a cost-effective solution, providing the necessary failure resilience and specified QoS, for delivering broadband multimedia services to remote rural communities.
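A toy model of the hidden node problem and the reverse notification idea: two nodes that cannot hear each other request minislots through a common neighbour. Without RN, each node sees only the reservations of nodes it can hear, so they collide; with an RN-style rebroadcast, every grant becomes visible. This abstracts away the IEEE 802.16 three-way handshake entirely; the slot counts and node names are invented.

```python
def pick_slots(requested, taken, n_slots):
    """Greedily pick `requested` minislots, avoiding the set `taken`."""
    return set([s for s in range(n_slots) if s not in taken][:requested])

def schedule(requests, hidden_pairs, reverse_notification, n_slots=8):
    """Toy minislot allocation. `requests` is a list of (node, count),
    processed in order. Without reverse notification a node only sees grants
    made by nodes it can hear, so hidden nodes may grab the same slots; with
    it, the common neighbour rebroadcasts every grant to everyone."""
    granted = {}
    for node, count in requests:
        if reverse_notification:
            visible = set().union(*granted.values()) if granted else set()
        else:
            visible = set().union(*(
                slots for other, slots in granted.items()
                if (node, other) not in hidden_pairs
            )) if granted else set()
        granted[node] = pick_slots(count, visible, n_slots)
    return granted

hidden = {("A", "C"), ("C", "A")}  # A and C cannot hear each other
without_rn = schedule([("A", 3), ("C", 3)], hidden, reverse_notification=False)
with_rn = schedule([("A", 3), ("C", 3)], hidden, reverse_notification=True)
collisions = without_rn["A"] & without_rn["C"]  # conflicting minislots
```

Without RN both hidden nodes grab the same lowest-numbered minislots and every one of those slots is a collision at the shared neighbour; with RN the second requester sees the earlier grant and shifts to free slots.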

    A Stochastic Benders Decomposition Scheme for Large-Scale Data-Driven Network Design

    Full text link
    Network design problems involve constructing edges in a transportation or supply chain network to minimize construction and daily operational costs. We study a data-driven version of network design where operational costs are uncertain and estimated using historical data. This problem is notoriously computationally challenging, and instances with as few as fifty nodes cannot be solved to optimality by current decomposition techniques. Accordingly, we propose a stochastic variant of Benders decomposition that mitigates the high computational cost of generating each cut by sampling a subset of the data at each iteration, and that nonetheless generates deterministically valid cuts (as opposed to the probabilistically valid cuts frequently proposed in the stochastic optimization literature) via a dual averaging technique. We implement both single-cut and multi-cut variants of this Benders decomposition algorithm, as well as a k-cut variant that uses clustering of the historical scenarios. On instances with 100-200 nodes, our algorithm achieves 4-5% optimality gaps, compared with 13-16% for deterministic Benders schemes, and scales to instances with 700 nodes and 50 commodities within hours. Beyond network design, our strategy could be adapted to generic two-stage stochastic mixed-integer optimization problems where second-stage costs are estimated via a sample average.
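The flavour of Benders-style cutting planes for a two-stage problem can be shown on a one-dimensional toy: choose capacity x at unit cost c, then pay one unit per unit of unmet demand averaged over scenarios. The sketch below is a plain Kelley-style cutting-plane loop on the full scenario set, not the paper's sampled, dual-averaged cuts, and the cost and demand data are invented.

```python
def second_stage(x, demands):
    """Expected recourse cost Q(x) and a subgradient of Q at x."""
    q = sum(max(d - x, 0.0) for d in demands) / len(demands)
    g = -sum(1 for d in demands if d > x) / len(demands)
    return q, g

def benders_1d(c, demands, x_max, iters=30):
    """Kelley-style single-cut loop for min_x c*x + Q(x), x in [0, x_max].
    Each iteration adds a cut theta >= a + g*x tight at the current iterate;
    the 1-D master is solved by scanning a fine grid of candidate x values."""
    cuts = []  # (intercept a, slope g)
    x = 0.0
    for _ in range(iters):
        q, g = second_stage(x, demands)
        cuts.append((q - g * x, g))
        candidates = [i * x_max / 1000 for i in range(1001)]
        x = min(candidates,
                key=lambda z: c * z + max(a + s * z for a, s in cuts))
    q, _ = second_stage(x, demands)
    return x, c * x + q

demands = [2, 4, 6, 8, 10]
x_opt, obj = benders_1d(0.3, demands, x_max=10)
```

With unit capacity cost 0.3, the optimum sits where the fraction of scenarios still exceeding x drops below 0.3, i.e., at x = 8; the cutting-plane model reaches it after a handful of cuts. In the full problem the master is a mixed-integer program and each cut comes from LP duals, which is where sampling the scenario set pays off.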

    Nature-inspired survivability: Prey-inspired survivability countermeasures for cloud computing security challenges

    Get PDF
    As cloud computing environments become more complex, adversaries have become highly sophisticated and unpredictable. Moreover, they can easily increase attack power and persist longer before detection. Uncertain malicious actions, latent risks, and unobserved or unobservable risks (UUURs) characterise this new threat domain. This thesis proposes prey-inspired survivability to address unpredictable security challenges borne out of UUURs. While survivability is a well-studied phenomenon in non-extinct prey animals, applying prey survivability directly to cloud computing is challenging due to contradicting end goals. How to manage evolving survivability goals and requirements under contradicting environmental conditions adds to the challenges. To address these challenges, this thesis proposes a holistic taxonomy which integrates multiple and disparate perspectives on cloud security challenges. In addition, it proposes TRIZ (Teoriya Resheniya Izobretatelskikh Zadach) to derive prey-inspired solutions by resolving contradictions. First, it develops a 3-step process to facilitate the inter-domain transfer of concepts from nature to cloud. Moreover, TRIZ's generic approach suggests specific solutions for cloud computing survivability. The thesis then presents the conceptual prey-inspired cloud computing survivability framework (Pi-CCSF), built upon the TRIZ-derived solutions. The framework run-time is pushed to the user space to support evolving survivability design goals. Furthermore, a target-based decision-making technique (TBDM) is proposed to manage survivability decisions. To evaluate the prey-inspired survivability concept, a Pi-CCSF simulator is developed and implemented. Evaluation results show that escalating survivability actions improves the vitality of vulnerable and compromised virtual machines (VMs) by 5% and dramatically improves their overall survivability.
Hypothesis testing supports the hypothesis that the escalation mechanisms can be applied to enhance the survivability of cloud computing systems. Numeric analysis of the TBDM shows that, by considering survivability preferences and attitudes (which directly impact survivability actions), the TBDM method brings unpredictable survivability information closer to decision processes. This enables efficient execution of variable escalating survivability actions, which enables the Pi-CCSF's decision system (DS) to focus on decisions that achieve survivability outcomes under the unpredictability imposed by UUURs.