
    ICMP-based Third-Party Estimation of Cloud Availability

    Cloud availability is an important parameter in a typical Service Level Agreement (SLA). In order to check compliance with SLA commitments, a third-party availability measurement is strongly needed. An availability estimation method is evaluated here, based on the periodic repetition of a sequence of ICMP probing packets. Majority Voting, which declares a cloud to be available only if a majority of probing packets gets an echo from the cloud, appears to provide an accurate estimation even when the packet loss probability is rather high.
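    A minimal sketch of the majority-voting rule described above, assuming a Unix-like system ping utility; the probe count and timeout are illustrative, not the paper's exact parameters.

        import subprocess

        def probe_once(host: str, timeout_s: int = 1) -> bool:
            """Send one ICMP echo request via the system ping utility.

            Returns True if an echo reply was received. Assumes a
            Unix-like ping supporting -c (count) and -W (timeout).
            """
            result = subprocess.run(
                ["ping", "-c", "1", "-W", str(timeout_s), host],
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL,
            )
            return result.returncode == 0

        def majority_voting_available(host: str, n_probes: int = 9) -> bool:
            """Declare the cloud available iff a majority of probes get an echo."""
            echoes = sum(probe_once(host) for _ in range(n_probes))
            return echoes > n_probes // 2

        # Repeating the probe sequence periodically yields an availability
        # estimate as the fraction of rounds declared available.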

    Whose Fault is It? Correctly Attributing Outages in Cloud Services

    Cloud availability is a major performance parameter in cloud Service Level Agreements (SLAs). Its correct evaluation is essential to SLA enforcement and to possible litigation issues. Current methods fail to correctly identify the fault location, since they include the network contribution. We propose a procedure to identify the failures actually due to the cloud itself and thus provide a correct cloud availability measure. The procedure employs freely available tools, namely traceroute and whois, and arrives at the availability measure by first identifying the boundaries of the cloud. We evaluate our procedure by testing it on three major cloud providers: Google Cloud, Amazon AWS, and Rackspace. The results show that the procedure arrives at a correct identification in 95% of cases. The cloud availability obtained in the test after correct identification lies between 3 and 4 nines for the three platforms under test.
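    A rough sketch of the boundary-identification step, under the assumption that a hop belongs to the cloud once its IP is announced by the provider; the use of Team Cymru's whois service and the keyword match are illustrative choices, not necessarily the paper's exact procedure.

        import re
        import subprocess

        def traceroute_ips(host: str) -> list[str]:
            """Run the system traceroute and extract one IP per hop."""
            out = subprocess.run(["traceroute", "-n", host],
                                 capture_output=True, text=True).stdout
            hops = []
            for line in out.splitlines()[1:]:  # skip the header line
                m = re.search(r"\d+\.\d+\.\d+\.\d+", line)
                if m:
                    hops.append(m.group())
            return hops

        def owner_of(ip: str) -> str:
            """Look up the organisation announcing an IP via whois.

            Queries Team Cymru's IP-to-ASN whois service; any whois
            source reporting the origin AS would do.
            """
            out = subprocess.run(["whois", "-h", "whois.cymru.com", ip],
                                 capture_output=True, text=True).stdout
            return out.lower()

        def first_cloud_hop(host: str, provider_keyword: str) -> int | None:
            """Index of the first traceroute hop inside the cloud boundary.

            Hops before this index belong to the network path, so a
            failure at or beyond it is attributed to the cloud itself.
            """
            for i, ip in enumerate(traceroute_ips(host)):
                if provider_keyword in owner_of(ip):
                    return i
            return None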

    Vulnerability Assessment and Privacy-preserving Computations in Smart Grid

    Modern advances in sensor, computing, and communication technologies enable various smart grid applications, but they also expose vulnerabilities that require novel approaches to cybersecurity. While a substantial number of technologies have been adopted to protect against cyber attacks in the smart grid, a comprehensive review of the implementations, impacts, and solutions of cyber attacks specific to the smart grid is lacking. In this dissertation, we evaluate the security requirements for the smart grid, which include three main properties: confidentiality, integrity, and availability. First, we review the cyber-physical security of the synchrophasor network, which highlights all three aspects of security issues. Taking the synchrophasor network as an example, we give an overview of how to attack a smart grid network. We test three types of attacks and show the impact of each: denial-of-service, sniffing, and false data injection. Next, we discuss how to protect against each attack. For protecting availability, we examine possible defense strategies for the associated vulnerabilities. For protecting data integrity, a small-scale prototype of a secure synchrophasor network is presented with different cryptosystems. In addition, a deep-learning-based time-series anomaly detector is proposed to detect injected measurements. Our approach observes both data measurements and network traffic features to jointly learn system states, and can detect attacks when the state vector estimator fails. For protecting data confidentiality, we propose privacy-preserving algorithms for two important smart grid applications: 1) a distributed privacy-preserving quadratic optimization algorithm to solve the Security Constrained Optimal Power Flow (SCOPF) problem, which is decomposed into small subproblems using the Alternating Direction Method of Multipliers (ADMM) and gradient projection algorithms; and 2) a Paillier-cryptosystem-based scheme to secure the computation of power system dynamic simulation. The IEEE 3-Machine 9-Bus System is used to implement and demonstrate the proposed scheme. The security and performance analysis of our implementations demonstrates that our algorithms can prevent chosen-ciphertext attacks at a reasonable cost.
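    For context on the confidentiality part: the Paillier cryptosystem is additively homomorphic, which is what lets sums and scalar multiples be computed directly on encrypted measurements. A minimal sketch using the third-party phe library (python-paillier), an assumption here and not the dissertation's own implementation:

        # pip install phe  (python-paillier, third-party library)
        from phe import paillier

        public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

        # Parties encrypt their local measurements (illustrative values).
        enc_a = public_key.encrypt(12.5)
        enc_b = public_key.encrypt(7.25)

        # Additive homomorphism: sums and scalar multiples are computed
        # on ciphertexts, without revealing the plaintext measurements.
        enc_sum = enc_a + enc_b
        enc_scaled = enc_a * 3

        assert abs(private_key.decrypt(enc_sum) - 19.75) < 1e-9
        assert abs(private_key.decrypt(enc_scaled) - 37.5) < 1e-9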

    Optimizing the delivery of multimedia over mobile networks

    The consumption of multimedia content is moving from the residential environment to mobile phones. Mobile data traffic, driven mostly by video demand, is increasing rapidly, and wireless spectrum is becoming an ever scarcer resource. This makes it highly important to operate mobile networks efficiently. To tackle this, recent developments in anticipatory networking schemes make it possible to predict the future capacity of mobile devices and optimize the allocation of the limited wireless resources. Further, optimizing Quality of Experience (QoE), that is, smooth, quick, and high-quality playback, is more difficult in the mobile setting due to the highly dynamic nature of wireless links. A key requirement for both anticipatory networking schemes and QoE optimization is estimating the available bandwidth of mobile devices, ideally quickly and with low overhead. In summary, we propose a series of improvements to the delivery of multimedia over mobile networks. We do so by identifying inefficiencies in the interconnection of mobile operators with the servers hosting content, proposing an algorithm that opportunistically produces frequent capacity estimations suitable for use in resource optimization solutions, and finally proposing another algorithm able to estimate the bandwidth class of a device from minimal traffic, in order to identify the ideal streaming quality its connection may support before playback commences.
    The main body of this thesis proposes two lightweight algorithms designed to provide bandwidth estimations under the tight constraints of the mobile environment, most notably the usually very limited traffic quota. We begin with a thorough overview of the communication path between a content server and a mobile device. We then analyse how accurate smartphone measurements can be and identify in depth the various artifacts that add noise to on-device measurements. Next, we propose a novel lightweight measurement technique that can serve as a basis for advanced resource optimization algorithms running on mobile phones. Our main idea leverages an original packet-dispersion-based technique to estimate per-user capacity. This allows passive measurements by merely sampling existing mobile traffic. Our technique efficiently filters outliers introduced by mobile network schedulers and phone hardware. To assess and verify our measurement technique, we apply it to a diverse dataset generated both by extensive simulations and by a week-long measurement campaign spanning two cities in two countries, different radio technologies, and all times of the day. The results demonstrate that our technique is effective even when provided with only a small fraction of the packets exchanged in a flow. The only requirement on the input data is that it consist of a few consecutive packets gathered periodically. This makes the measurement algorithm a good candidate for inclusion in OS libraries, enabling advanced resource optimization and application-level traffic scheduling based on current and predicted future user capacity. We proceed with another algorithm that takes advantage of the traffic generated by short-lived TCP connections, which form the majority of mobile connections, to passively estimate the currently available bandwidth class. Our algorithm extracts useful information even if the TCP connection never exits the slow-start phase. To the best of our knowledge, no other solution can operate with such constrained input. Our estimation method achieves good precision despite artifacts introduced by TCP slow-start behavior, the mobile scheduler, and phone hardware. We evaluate our solution against traces collected in four European countries. Furthermore, the small footprint of our algorithm allows its deployment on resource-limited devices.
    Finally, in an attempt to face the rapid traffic increase, mobile application developers outsource their cloud infrastructure deployment and content delivery to cloud computing services and content delivery networks. Studying how these services, which we collectively denote Cloud Service Providers (CSPs), perform over Mobile Network Operators (MNOs) is crucial to understanding some of the performance limitations of today's mobile apps. To that end, we perform the first empirical study of the complex dynamics between applications, MNOs, and CSPs. First, we use real mobile app traffic traces, gathered through a global crowdsourcing campaign, to identify the most prevalent CSPs supporting today's mobile Internet. Then, we investigate how well these services interconnect with major European MNOs at a topological level, and measure their performance over European MNO networks through a month-long measurement campaign on the MONROE mobile broadband testbed. We discover that the top 6 most prevalent CSPs are used by 85% of apps, and observe significant differences in their performance across different MNOs due to the nature of their services, their peering relationships with MNOs, and their deployment strategies. We also find that CSP performance in MNOs is affected by inflated path length, roaming, and the presence of middleboxes, but not by the choice of DNS resolver. Finally, we observe that the choice of an operator's Point of Presence (PoP) may inflate the delay towards popular websites by at least 20%. This work has been supported by IMDEA Networks Institute.
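    A toy sketch of the packet-dispersion idea behind the first algorithm: each inter-arrival gap between consecutive packets yields one capacity sample, and a median stands in for the thesis's outlier filtering. The function name, sample values, and filtering rule are illustrative assumptions, not the thesis's exact design.

        import statistics

        def dispersion_capacity_bps(arrival_times_s, packet_size_bytes):
            """Estimate per-user capacity from packet dispersion.

            arrival_times_s: timestamps of a few consecutive packets of
            an existing flow (passively sampled). Each gap yields one
            capacity sample, 8 * size / gap; the median filters samples
            distorted by scheduler bursts or timestamping noise.
            """
            gaps = [t2 - t1
                    for t1, t2 in zip(arrival_times_s, arrival_times_s[1:])
                    if t2 > t1]
            if not gaps:
                raise ValueError("need at least two distinct timestamps")
            return statistics.median(8 * packet_size_bytes / g for g in gaps)

        # Five 1500-byte packets arriving ~1.2 ms apart suggest ~10 Mbit/s.
        times = [0.0000, 0.0012, 0.0024, 0.0037, 0.0048]
        print(f"{dispersion_capacity_bps(times, 1500) / 1e6:.1f} Mbit/s")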

    Cloud and mobile infrastructure monitoring for latency and bandwidth sensitive applications

    This PhD thesis studies cloud computing infrastructures from the networking perspective, to assess the feasibility of applications that have gained increasing popularity over recent years, including multimedia and telemedicine applications demanding low, bounded latency and sufficient bandwidth. I focus in particular on telemedicine, where remote imaging applications (for example, telepathology or telesurgery) need low and stable latency for the remote transmission of images and for the remote control of the equipment. Another important telemedicine use case is remote computation, which involves offloading image processing to help diagnosis; in this case too, bandwidth and latency requirements should be enforced to ensure timely results, although they are less strict than in the previous scenario. Nowadays, the capability of gaining access to IT resources in a rapid and on-demand fashion, according to a pay-as-you-go model, has made cloud computing a key enabler for innovative multimedia and telemedicine services. However, the partial obscurity of cloud performance, together with security concerns, is still hindering the adoption of cloud infrastructure. To ensure that the requirements of applications running on the cloud are satisfied, proper methodologies need to be designed and evaluated, according to the metric of interest. Moreover, some kinds of applications have specific requirements that cannot be satisfied by the current cloud infrastructure. In particular, since cloud computing involves communication with remote servers, two problems arise: first, the core network infrastructure can be overloaded by the massive amount of data that has to flow through it for clients to reach the datacenters; second, the latency of this remote interaction between clients and servers is increased. For these and many other use cases, also beyond the field of telemedicine, the Edge and Fog computing paradigms were introduced. In these new paradigms, IT resources are deployed not only in the core cloud datacenters but also at the edge of the network, either in the telecom operator's access network or even on other users' devices. The proximity of resources to end users alleviates the burden on the core network and at the same time reduces latency towards users. Indeed, the latency from users to remote cloud datacenters encompasses delays from the access and core networks, as well as the intra-datacenter delay; it is therefore expected to be higher than the latency to edge servers, which in the envisioned paradigm are deployed in the access network, near the end users. The edge latency is thus expected to be only a portion of the overall cloud delay. Moreover, edge and central resources can be used in conjunction, so attention to core cloud monitoring will remain of paramount importance even once edge architectures reach widespread adoption, which is not yet the case. While much research has been presented on monitoring several network-related metrics, such as bandwidth, latency, jitter, and packet loss, less attention has been given to monitoring latency in cloud and edge cloud infrastructures. In particular, while some works target cloud latency monitoring, their evaluation lacks a fine-grained analysis of latency in terms of spatial and temporal trends.
    Furthermore, the widespread adoption of mobile devices and the Internet of Things paradigm further accelerates the shift towards the cloud, for the additional benefits it provides in this context: saving energy and augmenting the computational capabilities of these devices, in a scenario denoted as mobile cloud. This scenario poses additional challenges due to its bandwidth constraints, accentuating the need for tailored methodologies that can verify whether the crucial requirements of the aforementioned applications can be met by the current infrastructure. In this sense, there is still a shortage of works monitoring bandwidth-related metrics in mobile networks, especially in-the-wild assessments targeting actual mobile networks and operators. Moreover, even the few works testing real scenarios typically consider only one provider in one country for a limited period of time, lacking an in-depth assessment of bandwidth variability over space and time. In this thesis, I therefore consider monitoring methodologies for challenging scenarios, focusing on the latency perceived by customers of public cloud providers and on bandwidth in mobile broadband networks. Indeed, as described, achieving low latency is a critical requirement for core cloud infrastructures, while providing enough bandwidth is still challenging in mobile networks compared to wired settings, even with the adoption of 4G mobile broadband networks; this issue is expected to be overcome only with the widespread availability of 5G connections (half of total traffic is expected to come from 5G networks by 2026). Therefore, in the research activities carried out during my PhD, I focused on monitoring latency and bandwidth in cloud and mobile infrastructures, assessing to what extent the current public cloud infrastructure and mobile networks make multimedia and telemedicine applications (as well as others with similar requirements) feasible.
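    A bare-bones sketch of the kind of fine-grained latency monitoring discussed above: periodic RTT probes towards a cloud endpoint, bucketed by hour of day so that temporal trends become visible. The endpoint, probing interval, and aggregation are illustrative assumptions, not the thesis's methodology.

        import socket
        import statistics
        import time
        from collections import defaultdict

        def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
            """One RTT sample, measured as TCP connect time to the endpoint."""
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=timeout):
                pass
            return (time.perf_counter() - start) * 1000

        def monitor(host: str, rounds: int, interval_s: float = 60.0) -> dict:
            """Collect RTT samples and report the median per hour of day."""
            by_hour = defaultdict(list)
            for _ in range(rounds):
                try:
                    by_hour[time.localtime().tm_hour].append(tcp_rtt_ms(host))
                except OSError:
                    pass  # lost probe: no RTT sample for this round
                time.sleep(interval_s)
            return {hour: statistics.median(samples)
                    for hour, samples in by_hour.items()}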

    Intrusion Prediction System for Cloud Computing and Network Based Systems

    Cloud computing offers cost-effective computational and storage services with on-demand scalable capacities according to customers' needs. These properties encourage organisations and individuals from different disciplines to migrate from classical computing to cloud computing. Although cloud computing is a trendy technology that opens the horizons for many businesses, it is a paradigm that exploits already existing computing technologies in a new framework rather than being a novel technology. This means that cloud computing has inherited classical computing problems that are still challenging. Cloud computing security is considered one of the major problems, requiring strong security systems to protect the system and the valuable data stored and processed in it. Intrusion detection systems are an important security component and defence layer that detect cyber-attacks and malicious activities in cloud and non-cloud environments. However, they have limitations, such as attacks being detected only once the damage has already been done. In recent years, cyber-attacks have increased rapidly in volume and diversity. In 2013, for example, over 552 million customers' identities and crucial information were revealed through data breaches worldwide [3]. These growing threats are further demonstrated in the 50,000 daily attacks on the London Stock Exchange [4]. It has been predicted that cyber-attacks will cost the global economy $3 trillion on aggregate by 2020 [5]. This thesis focuses on proposing an Intrusion Prediction System capable of sensing an attack before it happens, in cloud or non-cloud environments. The proposed solution is based on assessing the host system's vulnerabilities and monitoring the network traffic for attack preparations. It has three main modules. The monitoring module observes the network for any intrusion preparations; for it, this thesis proposes a new dynamic-selective statistical algorithm for detecting scan activities, a part of reconnaissance that represents an essential step in network attack preparation. The proposed method performs a selective statistical analysis of network traffic, searching for attack or intrusion indications, by exploring and applying different statistical and probabilistic scan-detection methods. The second module of the prediction system is vulnerability assessment, which evaluates the weaknesses and faults of the system and measures the probability of the system falling victim to a cyber-attack. Finally, the third module is the prediction module, which combines the output of the other two modules and performs a risk assessment of the system's security against predicted intrusions. The results of the conducted experiments showed that the suggested system outperforms analogous methods with regard to network scan detection performance, which translates into a significant improvement to the security of the protected system. The scanning detection algorithm achieved high detection accuracy with 0% false negatives and 50% false positives. In terms of performance, the detection algorithm consumed only 23% of the data needed for analysis compared to the best-performing rival detection method.
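    A simplified sketch in the spirit of the monitoring module: flag a source whose fan-out (distinct host/port targets in a time window) is a statistical outlier among its peers. The window, the z-score rule, and the input format are illustrative assumptions, not the thesis's dynamic-selective algorithm.

        import statistics
        from collections import defaultdict

        def detect_scanners(events, window_s=60.0, z_threshold=3.0):
            """Flag likely scanning sources in a list of flow records.

            events: list of (timestamp_s, src_ip, dst_ip, dst_port)
            tuples, e.g. parsed from a flow log.
            """
            latest = max(t for t, *_ in events)
            fanout = defaultdict(set)
            for t, src, dst, port in events:
                if t >= latest - window_s:
                    fanout[src].add((dst, port))
            counts = [len(targets) for targets in fanout.values()]
            if len(counts) < 2:
                return []
            mu = statistics.mean(counts)
            sigma = statistics.pstdev(counts) or 1.0
            return [src for src, targets in fanout.items()
                    if (len(targets) - mu) / sigma > z_threshold]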

    A Survey on Wireless Security: Technical Challenges, Recent Advances and Future Trends

    This paper examines the security vulnerabilities and threats imposed by the inherently open nature of wireless communications, and devises efficient defense mechanisms for improving wireless network security. We first summarize the security requirements of wireless networks, including their authenticity, confidentiality, integrity, and availability issues. Next, a comprehensive overview of security attacks encountered in wireless networks is presented in view of the network protocol architecture, with the potential security threats discussed at each protocol layer. We also provide a survey of the security protocols and algorithms adopted in existing wireless network standards, such as Bluetooth, Wi-Fi, WiMAX, and Long-Term Evolution (LTE) systems. Then, we discuss the state of the art in physical-layer security, an emerging technique for securing the open communications environment against eavesdropping attacks at the physical layer. We also introduce the family of jamming attacks and their countermeasures, including the constant, intermittent, reactive, adaptive, and intelligent jammers. Additionally, we discuss the integration of physical-layer security into existing authentication and cryptography mechanisms for further securing wireless networks. Finally, some technical challenges that remain unresolved at the time of writing are summarized, and future trends in wireless security are discussed. Comment: 36 pages. Accepted to appear in Proceedings of the IEEE, 201
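    As a concrete anchor for the physical-layer security discussion: for the Gaussian wiretap channel, the secrecy capacity is the gap between the legitimate channel's and the eavesdropper's capacities, C_s = max(0, log2(1 + SNR_main) - log2(1 + SNR_eve)). A tiny sketch with illustrative SNR values:

        import math

        def secrecy_capacity(snr_main: float, snr_eve: float) -> float:
            """Secrecy capacity of the Gaussian wiretap channel, bit/s/Hz.

            Positive only when the legitimate receiver has a better
            channel than the eavesdropper; otherwise no rate can be
            kept secret by physical-layer means alone.
            """
            return max(0.0, math.log2(1 + snr_main) - math.log2(1 + snr_eve))

        # Legitimate link at 20 dB SNR, eavesdropper at 5 dB SNR.
        print(secrecy_capacity(10 ** 2.0, 10 ** 0.5))  # about 4.6 bit/s/Hz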