665 research outputs found

    Analyzing the costs/tradeoffs involved between layer 2, layer 3, layer 4 and layer 5 switching

    Get PDF
    The switching function was originally entrusted to Layer 2 of the OSI model, i.e. the Data Link Layer. A Layer 2 switch makes forwarding decisions by examining the destination MAC (Media Access Control) address in the frame. If the address is present in its table of known destinations, the switch forwards the frame to the appropriate segment. If no entry for that address exists, the switch forwards the frame to all segments except the one on which it arrived; this is known as flooding. When it receives a reply from the destination segment, it learns the location of the new address and adds it to its table of known destinations. As the number of users on the network grows, the speed and bandwidth of the network are being stretched to their limits. Switching was once confined to Layer 2 (Data Link Layer) of the OSI model, but there are now switches that operate at Layer 3 (Network Layer), Layer 4 (Transport Layer) and Layer 5 (Session Layer) as well. Moving from one layer to another involves certain costs and tradeoffs. My thesis explores the costs and tradeoffs involved in switching based on layers 2, 3, 4 and 5 of the OSI reference model.
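
    A minimal sketch of the learn/flood logic described above, assuming a simplified frame representation and hypothetical port numbering (none of this is taken from the thesis):

        # Illustrative Layer 2 learning switch; frame fields and port numbers are
        # simplified stand-ins for real Ethernet frames and interfaces.
        class L2Switch:
            def __init__(self, num_ports):
                self.ports = range(num_ports)
                self.mac_table = {}  # MAC address -> port (table of known destinations)

            def receive(self, frame, in_port):
                # Learn: the source address is reachable via the ingress port.
                self.mac_table[frame["src"]] = in_port
                out_port = self.mac_table.get(frame["dst"])
                if out_port is not None:
                    return [out_port]  # known destination: forward to its segment
                # Unknown destination: flood to all ports except the ingress one.
                return [p for p in self.ports if p != in_port]

        switch = L2Switch(num_ports=4)
        print(switch.receive({"src": "aa:aa", "dst": "bb:bb"}, in_port=0))  # floods: [1, 2, 3]
        print(switch.receive({"src": "bb:bb", "dst": "aa:aa"}, in_port=2))  # learned: [0]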

    Towards the definition of a quality model for mail servers

    Get PDF
    The paper presents an approach for building a Mail Server Quality Model, based on the ISO/IEC software quality standard. We start by defining the mail system domain to be used as a general framework and the relevant technologies involved. Then a general overview of the ISO/IEC standard is given. The basic steps, the relevant considerations and the criteria used to select the appropriate subcharacteristics and quality attributes are also presented. The selected attributes are categorized under the six ISO/IEC quality characteristics that make up the model. Finally, some case study requirements and two commercial mail server tools are used to evaluate the model.
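
    As a hedged illustration of that categorization, the sketch below arranges invented mail-server attributes under the six ISO/IEC 9126 quality characteristics; the attribute names are hypothetical, not the ones selected in the paper:

        # Hypothetical mapping of mail-server quality attributes onto the six
        # ISO/IEC 9126 characteristics; attribute names are illustrative only.
        quality_model = {
            "functionality":   ["protocol support (SMTP/POP3/IMAP)", "anti-spam filtering"],
            "reliability":     ["message delivery guarantees", "fault recovery"],
            "usability":       ["administration interface", "documentation quality"],
            "efficiency":      ["messages processed per second", "memory footprint"],
            "maintainability": ["configuration modularity", "logging support"],
            "portability":     ["supported operating systems", "migration tooling"],
        }

        for characteristic, attributes in quality_model.items():
            print(f"{characteristic}: {', '.join(attributes)}")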

    Revenue maximization problems in commercial data centers

    Get PDF
    As IT systems become more important every day, one of the main concerns is that users may face major problems, and eventually incur major costs, if computing systems do not meet the expected performance requirements: customers expect reliability and performance guarantees, while underperforming systems lose revenue. Even with the adoption of data centers as the hub of IT organizations and providers of business efficiencies, the problems are not over, because it is extremely difficult for service providers to meet the promised performance guarantees in the face of unpredictable demand. One possible approach is the adoption of Service Level Agreements (SLAs), contracts that specify a level of performance that must be met and compensations in case of failure. In this thesis I address some of the performance problems that arise when IT companies sell the service of running 'jobs' subject to Quality of Service (QoS) constraints. In particular, the aim is to improve the efficiency of service provisioning systems by allowing them to adapt to changing demand conditions. First, I define the problem in terms of a utility function to maximize. Two different models are analyzed, one for single jobs and the other suited to session-based traffic. Then, I introduce an autonomic model for service provision. The architecture consists of a set of hosted applications that share a certain number of servers. The system collects demand and performance statistics and estimates traffic parameters. These estimates are used by management policies that implement dynamic resource allocation and admission algorithms. Results from a number of experiments show that the performance of these heuristics is close to optimal.
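
    A toy sketch of the kind of utility-driven allocation the thesis describes, assuming an invented per-application revenue/penalty model (the actual utility functions and policies in the thesis differ):

        # Illustrative greedy allocation of servers among hosted applications to
        # maximize an assumed revenue utility; the model parameters are invented.

        def revenue(app, servers):
            # Toy utility: fee earned per served job, capped by demand, minus an
            # SLA penalty proportional to the unmet demand.
            served = min(app["demand"], servers * app["rate"])
            penalty = app["penalty"] * max(app["demand"] - served, 0)
            return app["fee"] * served - penalty

        def allocate(apps, total_servers):
            alloc = {app["name"]: 0 for app in apps}
            for _ in range(total_servers):
                # Give the next server to the application with the best marginal gain.
                best = max(apps, key=lambda a: revenue(a, alloc[a["name"]] + 1)
                                               - revenue(a, alloc[a["name"]]))
                alloc[best["name"]] += 1
            return alloc

        apps = [
            {"name": "web",   "demand": 120, "rate": 40, "fee": 1.0, "penalty": 0.5},
            {"name": "batch", "demand": 60,  "rate": 30, "fee": 0.8, "penalty": 0.2},
        ]
        print(allocate(apps, total_servers=5))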

    Separation of SSL protocol phases across process boundaries

    Get PDF
    Secure Sockets Layer (SSL) is the de facto industry standard for secure communication with web sites. An SSL connection is established by performing a Handshake, which is followed by the Record phase. While the SSL Handshake is computationally intensive and can cause bottlenecks on an application server, the Record phase can cause similar bottlenecks when encrypting large volumes of data. SSL accelerators have been used to improve the performance of SSL-based application servers, but these devices are expensive, complex to configure and inflexible to customization. By separating the SSL Handshake and Record phases into separate software processes, high availability and throughput can be achieved using open-source software and platforms. The delegation of the SSL Record phase to a separate process, by transfer of the necessary cryptographic information, was achieved. Load tests showed gains from the separation of the Handshake and Record phases at nominal data sizes, and the approach provides flexibility for enhancements to be carried out for performance improvements at larger data sizes.
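
    A conceptual sketch of the separation, assuming the handshake's cryptographic output can be reduced to a symmetric key handed over a pipe; AESGCM from the cryptography package stands in for the real SSL record cipher and framing:

        # Conceptual sketch only: a "handshake" process hands the negotiated key
        # to a separate "record" process, which performs the bulk encryption.
        import os
        from multiprocessing import Pipe, Process
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

        def record_process(conn):
            key = conn.recv()            # cryptographic state from the handshake side
            aead = AESGCM(key)
            plaintext = conn.recv()
            nonce = os.urandom(12)
            conn.send(nonce + aead.encrypt(nonce, plaintext, None))

        if __name__ == "__main__":
            parent, child = Pipe()
            Process(target=record_process, args=(child,), daemon=True).start()
            session_key = AESGCM.generate_key(bit_length=128)  # stands in for handshake output
            parent.send(session_key)
            parent.send(b"large volume of application data")
            print(len(parent.recv()), "bytes of encrypted record")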

    Analysis of a Load Balancing Implementation Using the Source Hash Scheduling Method over the SSL Protocol

    Get PDF
    The course programming period generates the highest load on the academic server (SIAM) of UB: more than 65,000 students access the system concurrently. Therefore, a load balancing mechanism is required to improve system capacity and prevent access failures. SIAM uses SSL to protect academic data transactions. SSL performs a handshaking process to maintain connectivity with the web browser. In addition, the client and the server establish a session recording mechanism that keeps the identity of a connection and prevents repeated logins. This study implements a source hash scheduling mechanism in the load balancing system, intended to prevent the termination of a session connection that has already been established. The results show that source hash scheduling increased the capacity of the system to handle 9,025,940 requests from 65,087 different IP addresses within one day, with a total data throughput of 169,537,010,395 bytes (169 GB) in a single day. Index Terms: Load Balancing, Source Hash Scheduling
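
    A minimal sketch of source hash scheduling, with hypothetical backend addresses: hashing the client IP pins every request from that IP to one backend, so an established SSL session is not broken:

        # Source-hash scheduling: each client IP is hashed to a fixed backend, so
        # packets of an established SSL session keep reaching the same server.
        import hashlib

        BACKENDS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical SIAM servers

        def pick_backend(client_ip: str) -> str:
            digest = hashlib.sha256(client_ip.encode()).digest()
            return BACKENDS[int.from_bytes(digest[:4], "big") % len(BACKENDS)]

        for ip in ["175.45.1.10", "175.45.1.11", "175.45.1.10"]:
            print(ip, "->", pick_backend(ip))  # repeated IPs map to the same backend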

    Improving Network Performance, Security and Robustness in Hybrid Wireless Networks Using a Satellite Overlay

    Get PDF
    In this thesis we propose that the addition of a satellite overlay to large or dense wireless networks will result in improved application performance and network reliability, and will also enable efficient security solutions that are well suited for wireless nodes with limited resources. We term the combined network a hybrid wireless network. Through analysis, network modeling and simulation, we quantify the improvement in end-to-end performance in such networks compared to flat wireless networks. We also propose a new analytical method for modeling and estimating the performance of hybrid wireless networks. We create a loss network model for hybrid networks using the hierarchical reduced loss network model, adapted for packet-switched networks. Applying a fixed point approximation method to the set of relations modeling the hierarchical loss network, we derive a solution that converges to a fixed point for the parameter set. We analyze the sensitivity of the performance metric to variations in the network parameters by applying Automatic Differentiation to the performance model. We thus develop a method for parameter optimization and sensitivity analysis of protocols for designing hybrid networks. We investigate how the satellite overlay can help to implement better solutions for secure group communications in hybrid wireless networks. We propose a source authentication protocol for multicast communications that makes intelligent use of the satellite overlay, by modifying and extending TESLA certificates. We also propose a probabilistic non-repudiation technique that uses the satellite as a proxy node. We describe how the authentication protocol can be integrated with a topology-aware hierarchical multicast routing protocol to design a secure multicast routing protocol that is robust to active attacks. Lastly, we examine how end-to-end delay is adversely affected when the IP Security protocol (IPSEC) and the Secure Socket Layer protocol (SSL) are applied to unicast communications in hybrid networks. For network-layer security with low delay, we propose the use of the Layered IPSEC protocol, with a modified Internet Key Exchange protocol. For secure web browsing with low delay, we propose the Dual-mode SSL protocol. We present simulation results to quantify the performance improvement with our proposed protocols, compared to the traditional solutions.
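
    As a hedged illustration of the fixed point approximation mentioned above, the sketch below iterates the classical Erlang fixed point for a toy two-link loss network; the hierarchical model and parameters in the thesis are different:

        # Toy Erlang fixed-point iteration for a two-link loss network.
        def erlang_b(load, capacity):
            # Erlang B blocking probability via the standard stable recursion.
            b = 1.0
            for n in range(1, capacity + 1):
                b = load * b / (n + load * b)
            return b

        def fixed_point(offered, capacities, iterations=100):
            blocking = [0.0] * len(capacities)
            for _ in range(iterations):
                # The load a route offers to one link is thinned by the
                # blocking probability of the other link on the route.
                blocking = [
                    erlang_b(offered * (1 - blocking[1 - i]), capacities[i])
                    for i in range(2)
                ]
            return blocking

        print(fixed_point(offered=8.0, capacities=[10, 12]))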

    The Dark Side(-Channel) of Mobile Devices: A Survey on Network Traffic Analysis

    Full text link
    In recent years, mobile devices (e.g., smartphones and tablets) have met with increasing commercial success and have become a fundamental element of everyday life for billions of people all around the world. Mobile devices are used not only for traditional communication activities (e.g., voice calls and messages) but also for more advanced tasks made possible by an enormous number of multi-purpose applications (e.g., finance, gaming, and shopping). As a result, these devices generate significant network traffic (a considerable part of the overall Internet traffic). For this reason, the research community has been investigating security and privacy issues related to the network traffic generated by mobile devices, which can be analyzed to obtain information useful for a variety of goals (ranging from device security and network optimization to fine-grained user profiling). In this paper, we review the works that have contributed to the state of the art of network traffic analysis targeting mobile devices. In particular, we present a systematic classification of the works in the literature according to three criteria: (i) the goal of the analysis; (ii) the point where the network traffic is captured; and (iii) the targeted mobile platforms. The capture points we consider include Wi-Fi access points, software simulation, and real mobile devices or emulators. For the surveyed works, we review and compare analysis techniques, validation methods, and achieved results. We also discuss possible countermeasures, challenges, and possible directions for future research on mobile traffic analysis and other emerging domains (e.g., the Internet of Things). We believe our survey will be a reference work for researchers and practitioners in this field.

    Managing Access Control in Virtual Private Networks

    Get PDF
    Virtual Private Network technology allows remote network users to benefit from resources on a private network as if their host machines actually resided on the network. However, each resource on a network may also have its own access control policies, which may be completely unrelated to network access. Thus users' access to a network (even by VPN technology) does not guarantee their access to the sought resources. With the introduction of more complicated access privileges, such as delegated access, it is conceivable for a scenario to arise where a user can access a network remotely (because of direct permissions from the network administrator or by delegated permission) but cannot access any resources on the network. There is, therefore, a need for a network access control mechanism that understands the privileges of each remote network user on one hand, and the access control policies of various network resources on the other hand, and so can aid a remote user in accessing these resources based on the user's privileges. This research presents a software solution in the form of a centralized access control framework called an Access Control Service (ACS) that can grant remote users network presence and simultaneously aid them in accessing various network resources with varying access control policies. At the same time, the ACS provides a centralized framework for administrators to manage access to their resources. The ACS achieves these objectives using VPN technology, network address translation and by proxying various authentication protocols on behalf of remote users.
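
    A hedged sketch of the central decision the ACS makes: network presence (direct or delegated) and the resource's own policy must both allow the user. All names and tables here are illustrative:

        # Illustrative ACS decision: network presence alone is not enough; the
        # resource's own access policy must also admit the user.
        NETWORK_USERS = {"alice": "direct", "bob": "delegated"}       # VPN-level access
        RESOURCE_ACLS = {"payroll-db": {"alice"}, "wiki": {"alice", "bob"}}

        def acs_allows(user: str, resource: str) -> bool:
            has_network_presence = user in NETWORK_USERS
            has_resource_access = user in RESOURCE_ACLS.get(resource, set())
            return has_network_presence and has_resource_access

        print(acs_allows("bob", "payroll-db"))  # False: on the network, but no resource rights
        print(acs_allows("bob", "wiki"))        # True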

    Development of a secure monitoring framework for optical disaggregated data centres

    Get PDF
    Data center (DC) infrastructures are a key piece of today's telecom and cloud service delivery, enabling the access and storage of enormous quantities of information as well as the execution of complex applications and services. This aspect is accentuated by the advent of 5G and beyond architectures, since a significant portion of network and service functions are being deployed as specialized virtual elements inside dedicated DC infrastructures. As such, the development of new architectures to better exploit DC resources becomes of paramount importance. The mismatch between the variability of resources required by running applications and the fixed amount of resources in server units severely limits resource utilization in today's DCs. The Disaggregated DC (DDC) paradigm was recently introduced to address these limitations. The main idea behind DDCs is to divide the various computational resources into independent hardware modules/blades, which are mounted in racks, bringing greater modularity and allowing operators to optimize their deployments for improved efficiency and performance, thus offering high resource allocation flexibility. Moreover, to efficiently exploit the hardware blades and establish the connections across them according to upper-layer requirements, a flexible control and management framework is required. In this regard, following current industrial trends, the Software Defined Networking (SDN) paradigm is one of the leading technologies for the control of DC infrastructures, allowing the establishment of high-speed, low-latency optical connections between hardware components in DDCs in response to the demands of higher-level services and applications. With these concepts in mind, the primary objective of this thesis is to design and implement the control of a DDC infrastructure layer that is founded on SDN principles and makes use of optical technologies for the intra-DC network fabric, highlighting the importance of quality control and monitoring. Thanks to several SDN agents, it becomes possible to gather statistics and metrics from the multiple infrastructure elements (computational blades and network equipment), allowing DC operators to monitor the infrastructure and make informed decisions on how to utilize its resources to the greatest extent feasible. Indeed, quality assurance operations are of capital importance in modern DC infrastructures; it is therefore essential to guarantee a secure communication channel for gathering infrastructure metrics/statistics and enforcing (re-)configurations, closing the full control loop. This is addressed by a security layer that encrypts the communication channel and provides authentication for both the server and the client.
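
    A sketch of what the secured monitoring channel could look like on the collector side, with placeholder host names, port and certificate paths: a mutually authenticated TLS connection to an SDN agent, over which metrics are requested:

        # Collector-side sketch: mutually authenticated TLS to an SDN agent.
        # Host name, port and certificate paths are placeholders.
        import json
        import socket
        import ssl

        context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        context.load_verify_locations("ca.pem")                    # trust anchor for agents
        context.load_cert_chain("collector.pem", "collector.key")  # client authentication

        with socket.create_connection(("agent.dc.local", 8443)) as raw:
            with context.wrap_socket(raw, server_hostname="agent.dc.local") as tls:
                tls.sendall(b"GET /metrics\n")
                metrics = json.loads(tls.recv(65536))   # e.g. blade CPU, port counters
                print(metrics)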