6,971 research outputs found

    Why (and How) Networks Should Run Themselves

    Full text link
    The proliferation of networked devices, systems, and applications that we depend on every day makes managing networks more important than ever. The increasing security, availability, and performance demands of these applications suggest that these increasingly difficult network management problems be solved in real time, across a complex web of interacting protocols and systems. Alas, just as the importance of network management has increased, the network has grown so complex that it is seemingly unmanageable. In this new era, network management requires a fundamentally new approach. Instead of optimizations based on closed-form analysis of individual protocols, network operators need data-driven, machine-learning-based models of end-to-end and application performance based on high-level policy goals and a holistic view of the underlying components. Instead of anomaly detection algorithms that operate on offline analysis of network traces, operators need classification and detection algorithms that can make real-time, closed-loop decisions. Networks should learn to drive themselves. This paper explores this concept, discussing how we might attain this ambitious goal by more closely coupling measurement with real-time control and by relying on learning for inference and prediction about a networked application or system, as opposed to closed-form analysis of individual protocols.
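
    As a concrete illustration of the measure-infer-act loop the paper advocates, here is a minimal Python sketch. The telemetry source and the control action are hypothetical stand-ins, and the classifier is an ordinary scikit-learn model, not the paper's method.

```python
# A minimal sketch of a closed measurement/inference/control loop.
# collect_flow_stats() and apply_rate_limit() are hypothetical stand-ins
# for a real telemetry pipeline and a programmable data plane.
import random
from sklearn.ensemble import RandomForestClassifier

def collect_flow_stats():
    """Stand-in for live telemetry: [mean RTT (ms), loss rate, throughput (Mbit/s)]."""
    return [random.uniform(10, 200), random.uniform(0.0, 0.1), random.uniform(1, 100)]

def apply_rate_limit(stats):
    """Stand-in for a closed-loop control action pushed to the data plane."""
    print("rate-limiting flow with stats", stats)

# Bootstrap the model on labelled historical measurements (synthetic here;
# label 1 = anomalous, 0 = normal, keyed on loss rate for illustration).
history = [collect_flow_stats() for _ in range(200)]
labels = [1 if s[1] > 0.05 else 0 for s in history]
model = RandomForestClassifier(n_estimators=20).fit(history, labels)

# The closed loop: measure, infer, act in (near) real time.
for _ in range(5):
    stats = collect_flow_stats()
    if model.predict([stats])[0] == 1:
        apply_rate_limit(stats)
```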

    TCP throughput guarantee in the DiffServ Assured Forwarding service: what about the results?

    Get PDF
    Since the IETF proposed its Quality of Service architectures, the interaction between TCP and QoS services has been studied intensively. This paper reviews the results obtained in terms of TCP throughput guarantees in the DiffServ Assured Forwarding (DiffServ/AF) service and presents an overview of the different proposals to solve the problem. It has been demonstrated that the standardized IETF DiffServ conditioners, such as the token bucket color marker and the time sliding window color marker, are not good TCP traffic descriptors. Starting from this point, several propositions have been made, most of which present new marking schemes intended to replace or improve the traditional token bucket color marker. The main problem is that TCP congestion control is not designed to work with the AF service; indeed, the two mechanisms are antagonistic. TCP shares the bottleneck bandwidth fairly between flows, while a DiffServ network provides a controllable and predictable level of service. In this paper, we build a classification of the propositions made in recent years and compare them. We show that these conditioning schemes can be separated into three levels of action, and that conditioning at the network edge is the most widely accepted approach. We conclude that the problem remains unsolved and that TCP, conditioned or not, remains ill-suited to the DiffServ/AF service.
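
    For reference, the token bucket color marker criticized above is typically the single-rate three-color marker of RFC 2697. The following is a minimal color-blind sketch in Python; the parameter values are illustrative, and a production marker would of course live in the data plane.

```python
import time

class SrTCM:
    """Color-blind single-rate three-color marker in the spirit of RFC 2697."""

    def __init__(self, cir, cbs, ebs):
        self.cir = cir            # committed information rate, bytes/s
        self.cbs = cbs            # committed burst size, bytes
        self.ebs = ebs            # excess burst size, bytes
        self.tc, self.te = cbs, ebs
        self.last = time.monotonic()

    def mark(self, size):
        now = time.monotonic()
        fresh = (now - self.last) * self.cir       # tokens earned since last packet
        self.last = now
        filled = self.tc + fresh
        self.te = min(self.ebs, self.te + max(0.0, filled - self.cbs))  # overflow spills into Te
        self.tc = min(self.cbs, filled)
        if self.tc >= size:
            self.tc -= size
            return "green"        # in-profile: low drop precedence in AF
        if self.te >= size:
            self.te -= size
            return "yellow"       # excess burst: higher drop precedence
        return "red"              # out of profile

marker = SrTCM(cir=125_000, cbs=10_000, ebs=20_000)   # 1 Mbit/s committed rate
print(marker.mark(1500))                              # "green" for the first MTU-sized packet
```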

    A look at cloud architecture interoperability through standards

    Get PDF
    Enabling cloud infrastructures to evolve into a transparent platform while preserving integrity raises interoperability issues: how components are connected must be addressed. Interoperability requires standard data models and communication encoding technologies compatible with the existing Internet infrastructure. To reduce vendor lock-in, cloud computing must adopt universal strategies for standards, interoperability and portability. Open standards are of critical importance and need to be embedded into interoperability solutions. Interoperability is determined at the data level as well as the service level; the corresponding modelling standards and integration solutions are analysed.
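
    A small sketch of what data-level interoperability looks like in practice: a vendor-neutral resource description serialized with a standard encoding (JSON). The field names below are illustrative, loosely in the spirit of OCCI-style compute attributes rather than any particular standard's schema.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ComputeResource:
    """A provider-neutral description of a compute instance (illustrative)."""
    identifier: str
    cores: int
    memory_gib: float
    image: str

vm = ComputeResource("urn:example:vm/42", cores=2, memory_gib=4.0,
                     image="ubuntu-22.04")
wire = json.dumps(asdict(vm))                  # standard encoding on the wire
restored = ComputeResource(**json.loads(wire)) # any conforming provider can parse it
assert restored == vm                          # lossless across provider boundaries
```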

    SGXIO: Generic Trusted I/O Path for Intel SGX

    Full text link
    Application security traditionally relies heavily on the security of the underlying operating system. However, operating systems often fall victim to software attacks, compromising the security of applications as well. To overcome this dependency, Intel introduced SGX, which allows application code to be protected against a subverted or malicious OS by running it in a hardware-protected enclave. However, SGX lacks support for generic trusted I/O paths to protect user input and output between enclaves and I/O devices. This work presents SGXIO, a generic trusted path architecture for SGX that allows user applications to run securely on top of an untrusted OS while supporting trusted paths to generic I/O devices. To achieve this, SGXIO combines the benefits of SGX's easy programming model with traditional hypervisor-based trusted path architectures. Moreover, SGXIO can tweak insecure debug enclaves to behave like secure production enclaves. SGXIO goes beyond traditional use cases in cloud computing and makes SGX technology usable for protecting user-centric, local applications against kernel-level keyloggers and similar threats. It is compatible with unmodified operating systems and works on a modern commodity notebook out of the box. Hence, SGXIO is particularly promising for the broad x86 community, to which SGX is readily available.
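
    The essence of a trusted path can be illustrated without SGX-specific code: user input is encrypted and authenticated by a trusted driver with a key shared only with the enclave, so the untrusted OS in between relays nothing but opaque bytes. The plain-Python sketch below is conceptual; the HMAC-based keystream cipher is a toy for illustration, not SGXIO's actual construction.

```python
# Conceptual trusted-path sketch: trusted driver -> untrusted OS -> enclave.
# The keystream cipher here is a toy, NOT a secure construction.
import hashlib, hmac, secrets

shared_key = secrets.token_bytes(32)   # established at trusted-path setup

def keystream(key, nonce, length):
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def trusted_driver_encrypt(keystroke: bytes):
    nonce = secrets.token_bytes(12)
    ct = bytes(a ^ b for a, b in zip(keystroke, keystream(shared_key, nonce, len(keystroke))))
    tag = hmac.new(shared_key, nonce + ct, hashlib.sha256).digest()
    return nonce, ct, tag

def untrusted_os_relay(msg):
    return msg                          # sees only ciphertext; cannot read or forge

def enclave_decrypt(nonce, ct, tag):
    expected = hmac.new(shared_key, nonce + ct, hashlib.sha256).digest()
    assert hmac.compare_digest(tag, expected)   # reject tampering by the OS
    return bytes(a ^ b for a, b in zip(ct, keystream(shared_key, nonce, len(ct))))

msg = trusted_driver_encrypt(b"p@ssw0rd")
print(enclave_decrypt(*untrusted_os_relay(msg)))  # b'p@ssw0rd'
```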

    Description and Experience of the Clinical Testbeds

    Get PDF
    This deliverable describes the current technical environment at the three clinical testbed demonstrator sites of the 6WINIT Project, including the adapted clinical applications, project components and network transition technologies in use at these sites after 18 months of the Project. It also provides an interim description of early experiences with the deployment and usage of these applications, components and technologies, and their clinical service impact.

    The Internet of Everything

    Get PDF
    In the era before IoT, the World Wide Web, the Internet, Web 2.0 and social media made people’s lives more comfortable by providing web services and enabling access to personal data irrespective of location. Further, to save time and improve efficiency, there is a need for machine-to-machine communication, automation, smart computing and ubiquitous access to personal devices. This need gave birth to the phenomenon of the Internet of Things (IoT) and, beyond it, to the concept of the Internet of Everything (IoE).
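
    Machine-to-machine communication of the kind IoE depends on is often realized with a publish/subscribe protocol such as MQTT. A minimal sketch using the paho-mqtt client (1.x-style API; the broker address is a placeholder):

```python
# Two machines exchanging data over MQTT with no human in the loop.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # e.g. "sensors/kitchen/temp: 21.5"
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.org", 1883)       # placeholder broker address
client.subscribe("sensors/#")                    # receive from any device
client.publish("sensors/kitchen/temp", "21.5")   # another machine publishes
client.loop_forever()                            # dispatch messages as they arrive
```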

    IPv6 Network Mobility

    Get PDF
    Network Authentication, Authorization, and Accounting has been used since before the days of the Internet as we know it today. Authentication asks the question, “Who or what are you?” Authorization asks, “What are you allowed to do?” And finally, accounting wants to know, “What did you do?” These fundamental security building blocks are being used in expanded ways today. The first part of this two-part series focused on the overall concepts of AAA, the elements involved in AAA communications, and high-level approaches to achieving specific AAA goals. It was published in IPJ Volume 10, No. 1[0]. This second part of the series discusses the protocols involved, specific applications of AAA, and considerations for the future of AAA.
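
    The three AAA questions map naturally onto a minimal interface. The sketch below is conceptual plain Python; the user store, policy table and log are illustrative stand-ins for what a real RADIUS or Diameter deployment would provide.

```python
import time

USERS = {"alice": "s3cret"}            # authentication: who or what are you?
POLICY = {"alice": {"vpn", "wifi"}}    # authorization: what are you allowed to do?
LOG = []                               # accounting: what did you do?

def authenticate(user, password):
    return USERS.get(user) == password

def authorize(user, service):
    return service in POLICY.get(user, set())

def account(user, service, event):
    LOG.append((time.time(), user, service, event))

# A session is admitted only after all three questions are answered.
if authenticate("alice", "s3cret") and authorize("alice", "vpn"):
    account("alice", "vpn", "session-start")
```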