
    Game-Theoretic Frameworks and Strategies for Defense Against Network Jamming and Collocation Attacks

    Modern networks are becoming increasingly complex, heterogeneous, and densely connected. While ubiquitous networking and pervasive computing enable ever more diverse services for an ever-growing number of users, several important challenges have emerged. For example, densely connected networks are prone to higher levels of interference, which makes them more vulnerable to jamming attacks. Also, the use of software-based protocols for routing, load balancing, and power management in Software-Defined Networks introduces vulnerabilities that malicious users and adversaries can exploit. Moreover, the growing reliance on cloud computing services, driven by increasing demand for communication and computation resources, poses formidable security challenges because of the shared, virtualized nature of the cloud. In this thesis, we study two types of attacks: jamming attacks on wireless networks and side-channel attacks on cloud computing servers. The former disrupt normal network operation by exploiting the static topology and dynamic channel assignment in wireless networks, while the latter seek to gain access to unauthorized data by co-residing with target virtual machines (VMs) on the same physical node of a cloud server. In both attacks, the adversary faces a static attack surface and achieves her illegitimate goal by exploiting a stationary aspect of the network functionality. Hence, this dissertation proposes and develops countermeasures to both attacks using moving target defense strategies. We study the strategic interactions between the adversary and the network administrator within a game-theoretic framework. First, in the context of jamming attacks, we present and analyze a game-theoretic formulation between the adversary and the network defender. In this problem, the attack surface is the network connectivity (the static topology), as the adversary jams a subset of nodes to increase the level of interference in the network. On the other side, the defender makes judicious adjustments to the transmission footprint of the various nodes, thereby continuously adapting the underlying network topology to reduce the impact of the attack. The defender's strategy is to play Nash equilibrium strategies that secure a worst-case network utility. Moreover, decomposition-based approaches are developed, yielding a scalable defense strategy whose performance closely approaches that of the non-decomposed game for large-scale and dense networks. We study a class of games considering discrete as well as continuous power levels. In the second problem, we consider multi-tenant clouds, where a number of VMs are typically collocated on the same physical machine to optimize performance and power consumption and to maximize profit. This increases the risk of a malicious virtual machine performing side-channel attacks and leaking sensitive information from neighboring VMs. The attack surface, in this case, is the static residency of VMs on a set of physical nodes; hence, we develop a timed migration defense approach. Specifically, we analyze a timing game in which the cloud provider decides when to migrate a VM to a different physical machine to mitigate the risk of being compromised by a collocated malicious VM. The adversary decides the rate at which she launches new VMs to collocate with the victim VMs.
Our formulation captures a data leakage model in which the cost incurred by the cloud provider depends on the duration of collocation with malicious VMs. It also captures the costs incurred by the adversary in launching new VMs and by the defender in migrating VMs. We establish sufficient conditions for the existence of Nash equilibria for general cost functions, as well as for specific instantiations, and characterize the best response for both players. Furthermore, we extend our model to characterize the impact on the attacker's payoff when the cloud utilizes intrusion detection systems that detect side-channel attacks. Our theoretical findings are corroborated with extensive numerical results in various settings as well as a proof-of-concept implementation in a realistic cloud setting.
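
As an illustration of the worst-case defense idea above, the defender's equilibrium strategy in a finite zero-sum formulation can be computed with a linear program. The sketch below assumes a small payoff matrix U whose entry U[i, j] is the defender's network utility when the defender plays topology adaptation i and the jammer plays attack j; the matrix values and function name are hypothetical, and the thesis's actual games (with decomposition and continuous power levels) are richer than this.

    import numpy as np
    from scipy.optimize import linprog

    def defender_equilibrium(U):
        """Mixed Nash strategy of the defender (row player) in a zero-sum game.

        U[i, j] = defender utility when the defender plays topology adaptation i
        and the jammer plays attack j.  Returns (mixed strategy, game value).
        """
        m, n = U.shape
        # Decision variables: x_1..x_m (defender mixture) and v (guaranteed utility).
        c = np.zeros(m + 1)
        c[-1] = -1.0                       # maximize v  <=>  minimize -v
        # For every jammer action j:  v - sum_i x_i * U[i, j] <= 0
        A_ub = np.hstack([-U.T, np.ones((n, 1))])
        b_ub = np.zeros(n)
        A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])   # probabilities sum to 1
        b_eq = np.array([1.0])
        bounds = [(0, None)] * m + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        return res.x[:m], res.x[-1]

    # Toy 3x3 utility matrix (illustrative numbers only).
    U = np.array([[2.0, 0.5, 1.0],
                  [1.0, 1.5, 0.5],
                  [0.5, 1.0, 2.0]])
    strategy, value = defender_equilibrium(U)
    print("defender mixture:", strategy.round(3), "worst-case utility:", round(value, 3))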

    Towards Mobile Edge Computing: Taxonomy, Challenges, Applications and Future Realms

    Cloud computing has revolutionized how resources and applications are accessed and utilized over the Internet. However, deploying cloud computing for delay-critical applications and reducing the delay in accessing resources remain challenging. The Mobile Edge Computing (MEC) paradigm is an effective solution that brings cloud computing services into the proximity of the edge network and leverages the resources available there. This paper presents a survey of the latest, state-of-the-art algorithms, techniques, and concepts of MEC. The work is unique in that it covers recent algorithms not considered by existing surveys. Moreover, the selected literature is classified in terms of performance metrics, describing where performance is promising and where a margin of improvement remains for future investigation. This also eases the choice of a particular algorithm for a particular application. In contrast to existing surveys, a bibliometric overview is provided, which further helps researchers, engineers, and scientists gain thorough insight, select applications, and identify directions for improvement. In addition, applications related to the MEC platform are presented. Open research challenges, future directions, and lessons learned in the area of MEC are provided for further investigation.

    Understanding Security Threats in Cloud

    As cloud computing has become a trend in the computing world, understanding its security concerns becomes essential for improving service quality and expanding business scale. This dissertation studies the security issues in a public cloud from three aspects. First, we investigate a new threat called the power attack in the cloud. Second, we perform a systematic measurement on the public cloud to understand how cloud vendors react to existing security threats. Finally, we propose a novel technique to perform data reduction on audit data to improve system capacity, and hence to help enhance security in the cloud. In the power attack, we exploit various attack vectors in platform as a service (PaaS), infrastructure as a service (IaaS), and software as a service (SaaS) cloud environments. To demonstrate the feasibility of launching a power attack, we conduct a series of testbed-based experiments and data-center-level simulations. Moreover, we give a detailed analysis of how different power management methods could affect a power attack and how to mitigate such an attack. Our experimental results and analysis show that power attacks pose a serious threat to modern data centers and should be taken into account when deploying new high-density servers and power management techniques. In the measurement study, we mainly investigate how cloud vendors have reacted to the co-residence threat inside the cloud, in terms of Virtual Machine (VM) placement, network management, and Virtual Private Cloud (VPC). Specifically, through intensive measurement probing, we first profile the dynamic environment of cloud instances inside the cloud. Then, using real experiments, we quantify the impacts of VM placement and network management on co-residence, respectively. Moreover, we explore VPC, a defensive service of Amazon EC2 for security enhancement, from the routing perspective. Advanced Persistent Threats (APTs) are a serious cyber-threat, and cloud vendors are seeking solutions to "connect the suspicious dots" across multiple activities. This requires ubiquitous system auditing over long periods of time, which in turn produces an overwhelmingly large volume of system audit logs. We propose a new approach that exploits the dependency among system events to reduce the number of log entries while still supporting high-quality forensic analysis. In particular, we first propose an aggregation algorithm that preserves event dependencies during data reduction to ensure high-quality forensic analysis. Then we propose an aggressive reduction algorithm and exploit domain knowledge for further data reduction. We conduct a comprehensive evaluation on real-world auditing systems using more than one month of log traces to validate the efficacy of our approach.
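
To make the dependency-preserving reduction idea concrete, here is a minimal sketch that merges bursts of repeated audit events between the same subject and object into a single entry with a count and a time window. The Event fields and the merging rule are simplified assumptions for illustration; they are not the aggregation or aggressive-reduction algorithms developed in the dissertation.

    from dataclasses import dataclass

    @dataclass
    class Event:
        ts: float      # timestamp
        subject: str   # e.g. a process id
        obj: str       # e.g. a file path or socket
        op: str        # e.g. "read", "write", "connect"

    def aggregate(events):
        """Collapse bursts of repeated events between the same subject/object pair.

        Consecutive events with an identical (subject, obj, op) triple add no new
        dependency edge for forensic analysis, so they are merged into one entry
        annotated with a count and a time window.
        """
        reduced = []
        for e in sorted(events, key=lambda ev: ev.ts):
            key = (e.subject, e.obj, e.op)
            if reduced and reduced[-1]["key"] == key:
                reduced[-1]["count"] += 1
                reduced[-1]["t_end"] = e.ts
            else:
                reduced.append({"key": key, "t_start": e.ts, "t_end": e.ts, "count": 1})
        return reduced

    logs = [Event(1.0, "p42", "/etc/passwd", "read"),
            Event(1.1, "p42", "/etc/passwd", "read"),
            Event(2.0, "p42", "10.0.0.5:443", "connect")]
    print(aggregate(logs))   # two entries instead of three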

    Walking Through Waypoints

    We initiate the study of a fundamental combinatorial problem: Given a capacitated graph $G=(V,E)$, find a shortest walk ("route") from a source $s \in V$ to a destination $t \in V$ that includes all vertices specified by a set $\mathscr{W} \subseteq V$: the \emph{waypoints}. This waypoint routing problem finds immediate applications in the context of modern networked distributed systems. Our main contribution is an exact polynomial-time algorithm for graphs of bounded treewidth. We also show that if the number of waypoints is logarithmically bounded, exact polynomial-time algorithms exist even for general graphs. Our two algorithms provide an almost complete characterization of what can be solved exactly in polynomial time: we show that more general problems (e.g., on grid graphs of maximum degree 3, with slightly more waypoints) are computationally intractable.
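
For readers who want to experiment with the problem statement, the following sketch (using networkx and hypothetical graph data) is a brute-force baseline: it tries every visiting order of the waypoints and stitches together pairwise shortest paths. It ignores the edge-capacity constraint of the full problem and is exponential in the number of waypoints, so it only illustrates the task; it is not the treewidth-based algorithm of the paper.

    from itertools import permutations
    import networkx as nx

    def waypoint_walk_baseline(G, s, t, waypoints):
        """Shortest s-t walk through all waypoints by brute force over visiting orders.

        Ignores edge capacities and needs O(|waypoints|!) shortest-path queries,
        so it is only a baseline for small instances.
        """
        best_len, best_stops = float("inf"), None
        for order in permutations(waypoints):
            stops = [s, *order, t]
            try:
                length = sum(nx.shortest_path_length(G, u, v, weight="weight")
                             for u, v in zip(stops, stops[1:]))
            except nx.NetworkXNoPath:
                continue
            if length < best_len:
                best_len, best_stops = length, stops
        return best_len, best_stops

    # Tiny example graph (illustrative only).
    G = nx.cycle_graph(6)
    nx.set_edge_attributes(G, 1, "weight")
    print(waypoint_walk_baseline(G, 0, 3, {1, 5}))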

    Planning and Management of Cloud Computing Networks

    The evolution of the Internet has a great impact on a large part of the world's population. People use it to communicate, query information, receive news, work, and for entertainment. Its extraordinary usefulness as a communication medium has made the number of applications and technological resources explode. However, that network expansion comes at the cost of significant power consumption. If the power consumption of telecommunication networks and data centers were that of a country, it would rank 5th in the world. Furthermore, the number of servers in the world is expected to grow by a factor of 10 between 2013 and 2020. This context motivates us to study techniques and methods to allocate cloud computing resources in an optimal way with respect to cost, quality of service (QoS), power consumption, and environmental impact. The results we obtained from our test cases show that, besides minimizing capital expenditures (CAPEX) and operational expenditures (OPEX), the response time can be reduced up to 6 times, power consumption by 30%, and CO2 emissions by a factor of 60. Cloud computing provides dynamic access to IT resources as a service. In this paradigm, programs are executed on servers connected to the Internet that users access from their computers and mobile devices. The first advantage of this architecture is reduced application deployment time and improved interoperability, because a new user only needs a web browser and does not need to install software on a local computer with a specific operating system. Second, applications and information are continuously available from anywhere, on any device with Internet access. Moreover, servers and computing resources can be assigned to applications dynamically, according to the number of users and the workload; this is known as application elasticity.

    Edge Assignment and Data Valuation in Federated Learning

    Federated Learning (FL) is a recent Machine Learning method for training on private data stored separately on local machines, without gathering the data into one place for central learning. It was born to address the following challenges when applying Machine Learning in practice: (1) Communication cost: Most real-world data that can be useful for training are locally collected; bringing them all to one place for central learning can be expensive, especially in real-time learning applications where time is of the essence, for example, predicting the next word when texting on a smartphone; and (2) Privacy protection: Many applications must protect data privacy, such as those in the healthcare field; the private data can only be seen by its local owner, and as such the learning may only use a content-hiding representation of this data, which is much less informative. To fulfill FL's promise, this dissertation addresses three important problems regarding the need for good training data, system scalability, and uncertainty robustness: 1. The effectiveness of FL depends critically on the quality of the local training data. We should not only incentivize participants who have good training data but also minimize the effect of bad training data on the overall learning procedure. The first problem of my research is to determine a score to value a participant's contribution. My approach is to compute such a score based on the Shapley Value (SV), a concept from cooperative game theory for profit allocation in a coalition game. In this direction, the main challenge is the exponential time complexity of the SV computation, which is further complicated by the iterative manner of the FL learning algorithm. I propose a fast and effective valuation method that overcomes this challenge. 2. On scalability, FL depends on a central server for repeated aggregation of local training models, which is prone to becoming a performance bottleneck. A reasonable approach is to combine FL with Edge Computing: introduce a layer of edge servers that each serve as a regional aggregator to offload the main server. Scalability is thus improved, however at the cost of learning accuracy. The second problem of my research is to optimize this tradeoff. This dissertation shows that this cost can be alleviated with a proper choice of edge server assignment: which edge servers should aggregate the training models from which local machines. Specifically, I propose an assignment solution that is especially useful in the case of non-IID training data, which is well known to hinder today's FL performance. 3. FL participants may decide on their own what devices they run on, their computing capabilities, and how often they communicate the training model to the aggregation server. The workloads they incur are therefore time-varying and unpredictable. The server capacities are finite and can vary too. The third problem of my research is to compute an edge server assignment that is robust to such dynamics and uncertainties. I propose a stochastic approach to solving this problem.
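
The contribution-scoring idea in the first problem can be illustrated with a generic Monte Carlo Shapley value estimator: sample random participant orderings and average each participant's marginal contribution to a coalition utility (e.g., validation accuracy of a model trained on that coalition). This is a standard estimator sketched under assumed interfaces (the utility callback and parameter names are hypothetical); the fast valuation method proposed in the dissertation is different and tailored to the iterative FL setting.

    import random

    def shapley_values(participants, utility, rounds=200, seed=0):
        """Monte Carlo estimate of each participant's Shapley value.

        utility(coalition) should return the value (e.g. validation accuracy) of a
        model trained on the given frozenset of participants.  Permutation sampling
        avoids the exact 2^n enumeration over all coalitions.
        """
        rng = random.Random(seed)
        phi = {p: 0.0 for p in participants}
        for _ in range(rounds):
            order = list(participants)
            rng.shuffle(order)
            coalition, prev = frozenset(), utility(frozenset())
            for p in order:
                coalition = coalition | {p}
                value = utility(coalition)
                phi[p] += value - prev       # marginal contribution of p in this order
                prev = value
        return {p: total / rounds for p, total in phi.items()}

    # Toy utility: participants "a" and "b" are useful, "c" contributes nothing.
    toy_utility = lambda S: 0.4 * ("a" in S) + 0.4 * ("b" in S)
    print(shapley_values(["a", "b", "c"], toy_utility))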

    Energy and performance-optimized scheduling of tasks in distributed cloud and edge computing systems

    Infrastructure resources in distributed cloud data centers (CDCs) are shared by heterogeneous applications in a high-performance and cost-effective way. Edge computing has emerged as a new paradigm to provide access to computing capacities in end devices. Yet it suffers from such problems as load imbalance, long scheduling times, and the limited power of its edge nodes. Therefore, intelligent task scheduling in CDCs and edge nodes is critically important for constructing energy-efficient cloud and edge computing systems. Because of the aperiodic arrival and heterogeneity of tasks, current approaches cannot effectively minimize the total cost of CDCs, maximize their profit, and improve the quality of service (QoS) of tasks. This dissertation proposes a class of energy- and performance-optimized scheduling algorithms built on top of several intelligent optimization algorithms. The dissertation consists of two parts: background work (Chapters 3–6) and new contributions (Chapters 7–11). 1) Background work. Chapter 3 proposes a spatial task scheduling and resource optimization method to minimize the total cost of CDCs, where bandwidth prices of Internet service providers, power grid prices, and renewable energy all vary with location. Chapter 4 presents a geography-aware task scheduling approach that considers spatial variations in CDCs to maximize the profit of their providers by intelligently scheduling tasks. Chapter 5 presents a spatio-temporal task scheduling algorithm to minimize energy cost by scheduling heterogeneous tasks among CDCs while meeting their delay constraints. Chapter 6 gives a temporal scheduling algorithm considering temporal variations of revenue, electricity prices, green energy, and prices of public clouds. 2) Contributions. Chapter 7 proposes a multi-objective optimization method for CDCs to maximize their profit and minimize the average loss probability of tasks by determining the task allocation among Internet service providers and the task service rate of each CDC. A simulated annealing-based bi-objective differential evolution algorithm is proposed to obtain an approximate Pareto-optimal set, from which a knee solution is selected to schedule tasks in a high-profit and high-QoS way. Chapter 8 formulates a bi-objective constrained optimization problem and designs a novel optimization method for energy cost reduction and QoS improvement. It jointly minimizes the energy cost of CDCs and the average response time of all tasks by intelligently allocating tasks among CDCs and adjusting the task service rate of each CDC. Chapter 9 formulates a constrained bi-objective optimization problem for the joint optimization of revenue and energy cost of CDCs. It is solved with an improved multi-objective evolutionary algorithm based on decomposition, which determines a high-quality trade-off between revenue maximization and energy cost minimization by considering the CDCs' spatial differences in energy cost while meeting tasks' delay constraints. Chapter 10 proposes a simulated annealing-based bees algorithm to find a close-to-optimal solution. A fine-grained spatial task scheduling algorithm is then designed to minimize the energy cost of CDCs by allocating tasks among multiple green clouds and specifying the running speeds of their servers.
Chapter 11 proposes a profit-maximized collaborative computation offloading and resource allocation algorithm to maximize the profit of systems and guarantee that the response time limits of tasks are met in cloud-edge computing systems. The resulting single-objective constrained optimization problem is solved by a proposed simulated annealing-based migrating birds optimization algorithm. The dissertation evaluates these algorithms, models, and software with real-life data and shows that they improve the scheduling precision and cost-effectiveness of distributed cloud and edge computing systems.
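
Several of the chapter algorithms above hybridize simulated annealing with other metaheuristics (differential evolution, the bees algorithm, migrating birds optimization). The sketch below shows only the generic simulated-annealing acceptance loop such hybrids build on, applied to an abstract task-allocation state; the neighbor and cost callbacks, parameter values, and cooling schedule are illustrative assumptions, not the dissertation's tuned algorithms.

    import math
    import random

    def simulated_annealing(init_state, neighbor, cost, t0=1.0, cooling=0.95,
                            iters=2000, seed=0):
        """Generic simulated annealing: accept worse task allocations with a
        Boltzmann probability that shrinks as the temperature cools."""
        rng = random.Random(seed)
        state, best = init_state, init_state
        temp = t0
        for _ in range(iters):
            candidate = neighbor(state, rng)
            delta = cost(candidate) - cost(state)
            if delta <= 0 or rng.random() < math.exp(-delta / max(temp, 1e-12)):
                state = candidate
                if cost(state) < cost(best):
                    best = state
            temp *= cooling
        return best

    # Toy use: allocate 10 tasks to 3 data centers to balance load (a crude proxy
    # for energy cost).
    def neighbor(alloc, rng):
        alloc = list(alloc)
        alloc[rng.randrange(len(alloc))] = rng.randrange(3)   # move one task
        return alloc

    cost = lambda alloc: max(alloc.count(d) for d in range(3))  # makespan-style proxy
    print(simulated_annealing([0] * 10, neighbor, cost))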

    Modeling and Algorithmic Development for Selected Real-World Optimization Problems with Hard-to-Model Features

    Mathematical optimization is a common tool for numerous real-world optimization problems. However, in some application domains there is scope for improvement of currently used optimization techniques. For example, this is typically the case for applications that contain features which are difficult to model, and for applications of an interdisciplinary nature where no strong optimization knowledge is available. The goal of this thesis is to demonstrate how to overcome these challenges by considering five problems from two application domains. The first domain we address is scheduling in Cloud computing systems, in which we investigate three selected problems. First, we study scheduling problems where jobs are required to start immediately when they are submitted to the system. This requirement is ubiquitous in Cloud computing but has not yet been addressed in mathematical scheduling. Our main contributions are (a) providing the formal model, (b) the development of exact and efficient solution algorithms, and (c) proofs of correctness of the algorithms. Second, we investigate the problem of energy-aware scheduling in Cloud data centers. The objective is to assign computing tasks to machines such that the energy required to operate the data center, i.e., the energy required to operate the computing devices plus the energy required to cool them, is minimized. Our main contributions are (a) the mathematical model and (b) the development of efficient heuristics. Third, we address the problem of evaluating scheduling algorithms in a realistic environment. To this end, we develop an approach that supports mathematicians in evaluating scheduling algorithms through simulation with realistic instances. Our main contributions are the development of (a) a formal model and (b) efficient heuristics. The second application domain considered is powerline routing. We are given two points in a geographic area together with the respective terrain characteristics. The objective is to find a "good" route (where goodness depends on the terrain) connecting the two points, along which a powerline should be built. Within this application domain, we study two selected problems. First, we study a geometric shortest path problem, an abstract and simplified version of the powerline routing problem. We introduce the concept of the k-neighborhood and contribute various analytical results. Second, we investigate the actual powerline routing problem. To this end, we develop algorithms that build upon the theoretical insights obtained in the previous study. Our main contributions are (a) the development of exact algorithms and efficient heuristics, and (b) a comprehensive evaluation through two real-world case studies. Parts of the research presented in this thesis have been published in refereed publications [119], [110], [109].
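
As a concrete, simplified picture of grid-based powerline routing, the following sketch runs Dijkstra's algorithm over a terrain-cost grid using the standard 8-neighborhood move set; the k-neighborhood studied in the thesis generalizes this move set, and the cost model here (one cost value per cell, hypothetical numbers) is far simpler than the real terrain-dependent objective.

    import heapq

    def terrain_route_cost(cost, start, goal):
        """Minimum accumulated cell cost from start to goal on a grid (Dijkstra,
        8-neighborhood).  cost[r][c] is the per-cell cost of routing through (r, c)."""
        rows, cols = len(cost), len(cost[0])
        moves = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
        dist = {start: cost[start[0]][start[1]]}
        heap = [(dist[start], start)]
        while heap:
            d, (r, c) = heapq.heappop(heap)
            if (r, c) == goal:
                return d
            if d > dist.get((r, c), float("inf")):
                continue
            for dr, dc in moves:
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + cost[nr][nc]
                    if nd < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)] = nd
                        heapq.heappush(heap, (nd, (nr, nc)))
        return None

    grid = [[1, 1, 5],
            [1, 9, 1],
            [1, 1, 1]]          # illustrative terrain costs
    print(terrain_route_cost(grid, (0, 0), (2, 2)))   # routes around the costly cell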