    Comparative Analysis Based on Survey of DDOS Attacks’ Detection Techniques at Transport, Network, and Application Layers

    Distributed Denial of Service (DDoS) is one of the most prevalent attacks and can be executed in diverse ways using a variety of tools and code. This makes it very difficult for security researchers and engineers to devise a rigorous and efficient security methodology. Even with thorough research, analysis, real-time implementation, and application of the best mechanisms in test environments, there remain many ways to exploit the smallest vulnerability that was overlooked while the defence mechanism was being designed. This paper presents a comprehensive comparative survey of the methodologies implemented by researchers and engineers to detect DDoS attacks at the network, transport, and application layers. DDoS attacks are most prevalent at these three layers of the OSI model, which justifies the focus on them.
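
    To make layer-specific detection concrete, below is a minimal, purely illustrative sketch of one classic transport-layer heuristic covered by surveys of this kind: counting half-open TCP connections per source to flag a possible SYN flood. The threshold, packet representation, and function name are invented for the example and are not taken from the paper.

```python
# Illustrative sketch: threshold-based SYN-flood detection. The threshold
# and packet representation are hypothetical, chosen only for the example.
from collections import Counter

SYN_THRESHOLD = 100  # assumed: max half-open connections per source per window

def detect_syn_flood(packets):
    """packets: iterable of (src_ip, tcp_flags) pairs from one time window."""
    half_open = Counter()
    for src_ip, flags in packets:
        # A SYN without an ACK marks a new half-open connection attempt.
        if "SYN" in flags and "ACK" not in flags:
            half_open[src_ip] += 1
    # Flag sources whose half-open count exceeds the threshold.
    return {ip for ip, n in half_open.items() if n > SYN_THRESHOLD}

# Example window: one noisy source among normal handshake traffic.
window = [("10.0.0.5", {"SYN"})] * 150 + [("10.0.0.7", {"SYN", "ACK"})] * 20
print(detect_syn_flood(window))  # {'10.0.0.5'}
```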

    Assessment of socio-techno-economic factors affecting the market adoption and evolution of 5G networks: Evidence from the 5G-PPP CHARISMA project

    5G networks are rapidly becoming the means to accommodate the complex demands of vertical sectors. The European project CHARISMA aims to develop a hierarchical, distributed-intelligence 5G architecture offering low latency, security, and open access as features intrinsic to its design. Finding its place in such a complex landscape of heterogeneous technologies and devices requires the designers of CHARISMA and similar 5G architectures, as well as other related market actors, to take into account the multiple technical, economic, and social aspects that will affect the deployment and rate of adoption of 5G networks by the general public. In this paper, a roadmapping activity identifying the key technological and socio-economic issues is performed, so as to help ensure a smooth transition from legacy to future 5G networks. Based on the fuzzy Analytic Hierarchy Process (AHP) method, a survey of pairwise comparisons was conducted within the CHARISMA project by 5G technology and deployment experts, and several critical aspects were identified and prioritized. The conclusions drawn are expected to be a valuable tool for decision and policy makers as well as for stakeholders.
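
    For readers unfamiliar with AHP, the sketch below shows how priority weights are derived from a pairwise comparison matrix in the classic (crisp) formulation; the CHARISMA study used a fuzzy variant, and the criteria and judgement values here are invented purely for illustration.

```python
# Crisp AHP weight extraction: priorities are the normalised principal
# eigenvector of the pairwise comparison matrix. All judgements below are
# invented; the CHARISMA survey used a fuzzy AHP variant.
import numpy as np

# Pairwise comparisons of three hypothetical criteria
# (technical, economic, social) on Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()

# Consistency ratio: judgements should not be too contradictory (CR < 0.1).
n = A.shape[0]
ci = (eigvals.real.max() - n) / (n - 1)
cr = ci / 0.58  # 0.58 is Saaty's random index for n = 3
print(weights.round(3), round(cr, 3))  # ~[0.648 0.23 0.122], CR well under 0.1
```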

    Video-on-demand optimization using an interior-point algorithm

    A Content Delivery Network (CDN) aims to provide efficient movement of massive digital content (multimedia and files) across the Internet. This is achieved by placing the content on servers closer to the customer. Video-on-Demand is an application of CDNs in which videos have to be located strategically to avoid network congestion and server saturation; hence the problem of optimal placement of videos arises. This problem has a block-diagonal structure with linking constraints on link and server capacities. In this project, we solve huge instances of a video placement problem over three real network topologies with a specialized interior-point solver named BlockIP. The evaluated instances range from 7 to 300 million variables, and the difficulty of an instance depends on server sizes, link bandwidth, and network topology. Our results 1) verified characteristics of BlockIP, such as regularization and the intensive computation in its last iterations, and 2) showed that BlockIP found an optimal solution in all the evaluated instances with a good optimality gap. By contrast, the state-of-the-art solver CPLEX could not reach an optimal, feasible solution in some difficult instances and needed almost twice the memory of BlockIP. However, CPLEX solved most feasible instances at least twice as fast as BlockIP.
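
    The block-diagonal structure described above (one block of demand constraints per video, linked by shared server capacities) can be seen in a toy linear program. The sketch below uses SciPy's generic LP solver on an invented three-video, two-server instance; the thesis instances are vastly larger and are solved with the specialised BlockIP interior-point code.

```python
# Toy video placement LP (all sizes and costs invented for illustration).
import numpy as np
from scipy.optimize import linprog

n_videos, n_servers = 3, 2
demand = np.array([40.0, 25.0, 10.0])   # requests per video
capacity = np.array([50.0, 40.0])       # per-server service capacity
cost = np.array([[1.0, 3.0],            # cost of serving video v from
                 [2.0, 1.0],            # server s (e.g. network hops)
                 [4.0, 1.0]])

# Variables x[v, s] >= 0 (row-major flattened): demand of video v served
# by server s.
c = cost.flatten()

# Block constraints: each video's demand is fully served.
A_eq = np.zeros((n_videos, n_videos * n_servers))
for v in range(n_videos):
    A_eq[v, v * n_servers:(v + 1) * n_servers] = 1.0

# Linking constraints: server capacities couple the per-video blocks.
A_ub = np.zeros((n_servers, n_videos * n_servers))
for s in range(n_servers):
    A_ub[s, s::n_servers] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=demand,
              method="highs")
print(res.x.reshape(n_videos, n_servers), res.fun)
```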

    Innovative techniques for deployment of microservices in cloud-edge environment

    PhD Thesis. The evolution of microservice architecture allows complex applications to be structured into independent modular components (microservices), making them easier to develop and manage. Complemented with containers, microservices can be deployed across any cloud and edge environment. Although containerized microservices are gaining popularity in industry, comparatively little research is available, especially in the areas of performance characterization and optimized deployment of microservices. Depending on the application type (e.g. web, streaming) and the provided functionalities (e.g. filtering, encryption/decryption, storage), microservices are heterogeneous, with specific functional and Quality of Service (QoS) requirements. Further, cloud and edge environments are themselves complex, with a huge number of cloud providers and edge devices along with their host configurations. Due to these complexities, finding a suitable deployment solution for microservices becomes challenging. To handle the deployment of microservices in cloud and edge environments, this thesis presents multilateral research towards microservice performance characterization, run-time evaluation and system orchestration. Considering a variety of applications, numerous algorithms and policies have been proposed, implemented and prototyped. The main contributions of this thesis are:
    - Characterizes the performance of containerized microservices considering various types of interference in the cloud environment.
    - Proposes and models an orchestrator, SDBO, for benchmarking simple web-application microservices in a multi-cloud environment. SDBO is validated using an e-commerce test web-application.
    - Proposes and models an advanced orchestrator, GeoBench, for the deployment of complex web-application microservices in a multi-cloud environment. GeoBench is validated using a geo-distributed test web-application.
    - Proposes and models a run-time deployment framework for distributed streaming-application microservices in a hybrid cloud-edge environment. The model is validated using a real-world healthcare analytics use case for human activity recognition.
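
    As a hint of what such orchestration involves, here is a deliberately simplified greedy placement of microservices onto cloud and edge nodes by CPU capacity and latency. It is not the SDBO or GeoBench algorithm from the thesis; all node and service data is invented.

```python
# Simplified greedy QoS-aware placement sketch (illustrative only; not the
# thesis's SDBO/GeoBench logic, and all data below is invented).
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_cpu: float        # cores available
    latency_ms: float      # latency from this node to the service's users

@dataclass
class Microservice:
    name: str
    cpu: float             # cores required
    max_latency_ms: float  # QoS requirement

def place(services, nodes):
    plan = {}
    # Tightest latency requirements are placed first.
    for svc in sorted(services, key=lambda s: s.max_latency_ms):
        feasible = [n for n in nodes
                    if n.free_cpu >= svc.cpu and n.latency_ms <= svc.max_latency_ms]
        if not feasible:
            raise RuntimeError(f"no feasible node for {svc.name}")
        best = min(feasible, key=lambda n: n.latency_ms)
        best.free_cpu -= svc.cpu
        plan[svc.name] = best.name
    return plan

nodes = [Node("edge-1", 2.0, 5.0), Node("cloud-1", 16.0, 40.0)]
services = [Microservice("activity-recogniser", 1.5, 10.0),
            Microservice("storage-api", 4.0, 100.0)]
print(place(services, nodes))
# {'activity-recogniser': 'edge-1', 'storage-api': 'cloud-1'}
```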

    On the design of efficient caching systems

    Content distribution is currently the prevalent Internet use case, accounting for the majority of global Internet traffic and growing exponentially. There is general consensus that the most effective way to deal with the large content demand is to deploy massively distributed caching infrastructures that localise content delivery traffic. Solutions based on caching have already been widely deployed through Content Delivery Networks. Ubiquitous caching is also a fundamental aspect of the emerging Information-Centric Networking paradigm, which aims to rethink the current Internet architecture for long-term evolution. Distributed content caching systems are expected to grow substantially in the future, in terms of both footprint and traffic carried, and, as such, will become substantially more complex and costly. This thesis addresses the problem of designing scalable and cost-effective distributed caching systems able to efficiently support the expected massive growth of content traffic, and makes three distinct contributions. First, it produces an extensive theoretical characterisation of sharding, a widely used technique for allocating data items to the resources of a distributed system according to a hash function. Based on the findings of this analysis, two systems are designed. The first is a framework and related algorithms for efficient load-balanced content caching. This solution provides qualitative advantages over previously proposed solutions, such as ease of modelling and the availability of knobs to fine-tune performance, as well as quantitative advantages, such as a 2x increase in cache hit ratio and a 19-33% reduction in load imbalance while maintaining latency comparable to other approaches. The second is the design and implementation of a caching node achieving 20 Gbps speeds on inexpensive commodity hardware. We believe these contributions significantly advance the state of the art in distributed caching systems.
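
    The sharding technique characterised in the first contribution can be illustrated in a few lines: content items are mapped to cache nodes by hashing their identifiers, and load imbalance can be measured against an ideal even split. This is a generic sketch of hash-based sharding, not the thesis's analytical model.

```python
# Generic hash-based sharding sketch with a toy load-imbalance measurement.
import hashlib

def shard_for(key: str, n_shards: int) -> int:
    """Map a content identifier to one of n_shards cache nodes."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % n_shards

n = 4
loads = [0] * n
for i in range(100_000):
    loads[shard_for(f"video-{i}", n)] += 1

# Imbalance: heaviest shard relative to the ideal even split (1.0 = perfect).
ideal = sum(loads) / n
print(loads, round(max(loads) / ideal, 3))
```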

    Profiling and Identification of Web Applications in Computer Network

    Characterising network traffic is a critical step for detecting network intrusion or misuse. The traditional way to identify the application associated with a set of traffic flows relies on IANA-assigned port numbers and Deep Packet Inspection (DPI), but an increasing number of Internet applications now use dynamic port assignments and encrypted traffic, rendering these techniques ineffective for real-time traffic identification. The research community has proposed models for traffic classification and determined the most important requirements and recommendations for a successful approach; the suggested alternatives can be categorised into four techniques: port-based, packet-payload based, host-behavioural, and statistical-based. In recent years, the latter two techniques have been introduced to avoid the limitations above. The host-behaviour technique is based on the idea that hosts generate different communication patterns at the transport layer; by extracting these behavioural patterns, activities and applications can be classified. However, it cannot identify application names precisely, classifying both Yahoo and Gmail simply as email. Studies have therefore focused on statistical-feature approaches that identify traffic associated with applications using machine-learning algorithms. This method relies on characteristics of IP flows, minimising the overhead limitations associated with other schemes. The classification accuracy of statistical flow-based approaches, however, depends on the discriminative power of the traffic features used. NetFlow represents the de facto standard for monitoring and analysing network traffic, but the information it provides is not enough to describe application behaviour. The primary challenge is to describe the activity within and among network flows to understand application usage and user behaviour. This thesis proposes novel features that precisely describe web-application behaviour in order to segregate various user activities. Extracting the most discriminative features characterising web applications is key to gaining higher accuracy without being biased by either users or network circumstances. This work investigates novel features that characterise application behaviour based on the timing of arriving packets and flows. As part of describing application behaviour, the research considered on/off data transfer periods, which are defining characteristics of many typical applications, and the amount of data transferred or exchanged. Furthermore, the research considered the timing and patterns of user events within a network application session. Using an extended set of traffic features output from traffic captures, a supervised machine-learning classifier was developed. To this effect, the present work customised the popular tcptrace utility to generate classification features based on traffic burstiness and periods of inactivity for everyday Internet usage. A C5.0 decision tree classifier was applied using the proposed features to eleven different Internet applications generated by ten users. Overall, the newly proposed features achieved a significant level of accuracy (~98%) in classifying the respective applications. Afterwards, uncontrolled data collected from a real environment for a group of 20 users accessing different applications was used to evaluate the proposed features. The evaluation tests indicated that the method has an accuracy of 87% in identifying the correct network application.
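
    A minimal sketch of the flow-feature classification step is given below. C5.0 is not available in scikit-learn, so an ordinary CART decision tree stands in for it, and the burstiness and idle-time feature values are synthetic placeholders rather than real tcptrace output.

```python
# Stand-in for the thesis's C5.0 classifier: a CART decision tree over
# synthetic flow features (all values below are invented placeholders).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical per-flow features: [mean burst size (KB), mean idle gap (s),
# upload/download byte ratio] for two synthetic "applications".
streaming = np.column_stack([rng.normal(300, 50, 200),
                             rng.normal(0.1, 0.05, 200),
                             rng.normal(0.02, 0.01, 200)])
email = np.column_stack([rng.normal(20, 5, 200),
                         rng.normal(5.0, 1.0, 200),
                         rng.normal(1.0, 0.3, 200)])
X = np.vstack([streaming, email])
y = np.array(["streaming"] * 200 + ["email"] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=4).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```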

    Acta Cybernetica: Volume 25, Number 2.


    14th SC@RUG 2017 proceedings 2016-2017
