
    Reflections on security options for the real-time transport protocol framework

    The Real-time Transport Protocol (RTP) supports a range of video conferencing, telephony, and streaming video applications, but offers few native security features. We discuss the problem of securing RTP, considering the range of applications it supports. We outline why this diversity makes RTP a difficult protocol to secure, and describe the approach we have recently proposed in the IETF to provide security for RTP applications. This approach treats RTP as a framework with a set of extensible security building blocks, and prescribes mandatory-to-implement security at the level of different application classes, rather than at the level of the media transport protocol.
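    A minimal sketch of the idea described above: mandatory-to-implement security is attached to application classes rather than to RTP itself, so an implementation looks up its class to find the keying and media-protection building blocks it must support. The class names and profile entries below are illustrative assumptions, not the proposal's normative list.

```python
# Illustrative sketch: mandatory-to-implement security keyed by application
# class rather than by the RTP protocol itself. The classes and building
# blocks listed here are assumptions for illustration only.

from dataclasses import dataclass

@dataclass(frozen=True)
class SecurityProfile:
    keying: str            # how session keys are established
    media_protection: str  # how RTP media packets are protected

# Hypothetical class-to-profile table; not the proposal's normative list.
MANDATORY_TO_IMPLEMENT = {
    "webrtc": SecurityProfile(keying="DTLS-SRTP", media_protection="SRTP"),
    "sip-telephony": SecurityProfile(keying="SDES or DTLS-SRTP", media_protection="SRTP"),
    "broadcast-streaming": SecurityProfile(keying="out-of-band key management",
                                           media_protection="payload encryption"),
}

def required_security(application_class: str) -> SecurityProfile:
    """Look up the security building blocks an application class must implement."""
    try:
        return MANDATORY_TO_IMPLEMENT[application_class]
    except KeyError:
        raise ValueError(f"no security profile defined for class {application_class!r}")

if __name__ == "__main__":
    print(required_security("webrtc"))
```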

    CamFlow: Managed Data-sharing for Cloud Services

    A model of cloud services is emerging whereby a few trusted providers manage the underlying hardware and communications, whereas many companies build on this infrastructure to offer higher-level, cloud-hosted PaaS services and/or SaaS applications. From the start, strong isolation between cloud tenants was seen to be of paramount importance, provided first by virtual machines (VMs) and later by containers, which share the operating system (OS) kernel. Increasingly it is the case that applications also require facilities to effect isolation and protection of data managed by those applications. They also require flexible data sharing with other applications, often across the traditional cloud-isolation boundaries; for example, when government provides many related services for its citizens on a common platform. Similar considerations apply to the end-users of applications. But in particular, the incorporation of cloud services within `Internet of Things' architectures is driving the requirements for both protection and cross-application data sharing. These concerns relate to the management of data. Traditional access control is application and principal/role specific, applied at policy enforcement points, after which there is no subsequent control over where data flows; a crucial issue once data has left its owner's control, whether held by cloud-hosted applications or within cloud services. Information Flow Control (IFC), in addition, offers system-wide, end-to-end flow control based on the properties of the data. We discuss the potential of cloud-deployed IFC for enforcing owners' dataflow policy with regard to protection and sharing, as well as safeguarding against malicious or buggy software. In addition, the audit log associated with IFC provides transparency, giving configurable system-wide visibility over data flows. [...]
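    A minimal sketch of how a tag-based IFC check of the kind discussed above can work, paired with an audit record for each decision. The secrecy/integrity label model and the flow rule follow the standard IFC formulation and are assumptions here, not CamFlow's actual kernel implementation.

```python
# Illustrative sketch, assuming a standard tag-based IFC model (secrecy and
# integrity label sets); this is not CamFlow's implementation.

from dataclasses import dataclass

@dataclass(frozen=True)
class Label:
    secrecy: frozenset = frozenset()    # e.g. {"medical", "alice"}
    integrity: frozenset = frozenset()  # e.g. {"validated-input"}

def can_flow(src: Label, dst: Label) -> bool:
    """Allow a flow only if the destination is at least as secret as the source
    and no more trusted: secrecy(src) <= secrecy(dst) and integrity(dst) <= integrity(src)."""
    return src.secrecy <= dst.secrecy and dst.integrity <= src.integrity

def audited_send(src_name, src, dst_name, dst, log):
    """Enforce the flow rule and record the decision, mirroring the
    transparency-through-audit point made in the abstract."""
    allowed = can_flow(src, dst)
    log.append((src_name, dst_name, "allowed" if allowed else "denied"))
    return allowed

if __name__ == "__main__":
    log = []
    patient_record = Label(secrecy=frozenset({"medical"}))
    analytics_app = Label(secrecy=frozenset({"medical", "research"}))
    public_feed = Label()
    audited_send("patient_record", patient_record, "analytics_app", analytics_app, log)  # allowed
    audited_send("patient_record", patient_record, "public_feed", public_feed, log)      # denied
    print(log)
```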

    The QoSxLabel: a quality of service cross layer label

    A quality of service cross layer label

    Closing the loop of SIEM analysis to Secure Critical Infrastructures

    Critical Infrastructure Protection is one of the main challenges of recent years. Security Information and Event Management (SIEM) systems are widely used to cope with this challenge. However, they currently present several limitations that have to be overcome. In this paper we propose an enhanced SIEM system in which we have introduced novel components to i) enable multiple-layer data analysis and ii) resolve conflicts among security policies and discover unauthorized data paths, so that network devices can be reconfigured accordingly. Furthermore, the system is enriched with a Resilient Event Storage that ensures the integrity and unforgeability of stored events.
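    A minimal sketch of one common way an event store can provide the integrity and unforgeability mentioned above: chaining appended events with HMACs, so that altering or dropping a stored event is detected on verification. The construction and the event fields are assumptions for illustration, not the paper's Resilient Event Storage.

```python
# Illustrative sketch: an append-only event log whose entries are chained with
# HMACs; tampering with or removing a stored event breaks verification.
# This is an assumed construction, not the paper's Resilient Event Storage.

import hashlib
import hmac
import json

class ChainedEventLog:
    def __init__(self, key: bytes):
        self._key = key
        self._entries = []              # list of (event_payload, tag) pairs
        self._last_tag = b"\x00" * 32   # chain starts from a fixed value

    def append(self, event: dict) -> None:
        payload = json.dumps(event, sort_keys=True).encode()
        tag = hmac.new(self._key, self._last_tag + payload, hashlib.sha256).digest()
        self._entries.append((payload, tag))
        self._last_tag = tag

    def verify(self) -> bool:
        """Recompute the chain; returns False if any event was altered or dropped."""
        prev = b"\x00" * 32
        for payload, tag in self._entries:
            expected = hmac.new(self._key, prev + payload, hashlib.sha256).digest()
            if not hmac.compare_digest(expected, tag):
                return False
            prev = tag
        return True

if __name__ == "__main__":
    log = ChainedEventLog(key=b"shared-secret-for-illustration")
    log.append({"source": "plc-gateway", "alert": "policy-conflict"})
    log.append({"source": "firewall", "alert": "unauthorized-path"})
    print(log.verify())   # True
    log._entries[0] = (b'{"forged": true}', log._entries[0][1])
    print(log.verify())   # False: tampering detected
```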

    Transparency about net neutrality: A translation of the new European rules into a multi-stakeholder model

    The new European framework directive contains a number of policy objectives in the area of net neutrality. In support of these objectives, the universal service directive includes a transparency obligation for ISPs. This paper proposes a multi-stakeholder model for the implementation of this transparency obligation. The model is a multi-stakeholder model in the sense that it treats the content and form of the transparency information in close connection with the parties involved in providing that information and the processes in which they take part. Another crucial property of the model is that it distinguishes between technical and user-friendly information. This distinction makes it possible to limit the obligation on ISPs to the information that they are in the best position to provide: the technical information on the traffic management measures they apply, e.g., which traffic streams are subject to special treatment? Which measures are applied, and when? The public availability of this technical information creates the opportunity for the other parties in the model to step in and contribute to the formulation of the user-friendly information for end users: which applications and services receive special treatment? When is their effect noticeable? It is expected that the involvement of other parties will lead to multiple, complementary routes for the formulation of the user-friendly information. Thus, the user-friendly information emerges in ways driven by market players and stakeholders that would be difficult to design and lay down in advance in the transparency obligation.
    Keywords: net neutrality, transparency, traffic management
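    A minimal sketch of the split the model draws between ISP-provided technical information and user-friendly information derived from it: a machine-readable disclosure record answering the questions above (which streams, which measures, when), from which other parties could generate end-user wording. The record fields and example values are assumptions, not prescribed by the paper or by the European rules.

```python
# Illustrative sketch (an assumption, not part of the paper or the EU rules):
# a machine-readable record of an ISP's technical traffic-management
# disclosure, from which other parties derive user-friendly summaries.

from dataclasses import dataclass
from typing import List

@dataclass
class TrafficManagementMeasure:
    traffic_stream: str   # which traffic is subject to special treatment
    measure: str          # what is done to it
    when_applied: str     # under which conditions or at which times

@dataclass
class TechnicalDisclosure:
    isp: str
    measures: List[TrafficManagementMeasure]

def user_friendly_summary(disclosure: TechnicalDisclosure) -> List[str]:
    """Translate the ISP-published technical facts into end-user wording,
    the step the multi-stakeholder model leaves to other parties."""
    return [
        f"{disclosure.isp}: {m.traffic_stream} is {m.measure} {m.when_applied}."
        for m in disclosure.measures
    ]

if __name__ == "__main__":
    d = TechnicalDisclosure(
        isp="ExampleNet",
        measures=[
            TrafficManagementMeasure(
                traffic_stream="peer-to-peer file sharing",
                measure="rate-limited to 1 Mbit/s",
                when_applied="on weekday evenings between 18:00 and 23:00",
            )
        ],
    )
    for line in user_friendly_summary(d):
        print(line)
```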