
    CamFlow: Managed Data-sharing for Cloud Services

    A model of cloud services is emerging whereby a few trusted providers manage the underlying hardware and communications, whereas many companies build on this infrastructure to offer higher-level, cloud-hosted PaaS services and/or SaaS applications. From the start, strong isolation between cloud tenants was seen as being of paramount importance, provided first by virtual machines (VMs) and later by containers, which share the operating system (OS) kernel. Increasingly, applications also require facilities to effect isolation and protection of the data they manage. They also require flexible data sharing with other applications, often across the traditional cloud-isolation boundaries; for example, when government provides many related services for its citizens on a common platform. Similar considerations apply to the end-users of applications. But in particular, the incorporation of cloud services within 'Internet of Things' architectures is driving the requirements for both protection and cross-application data sharing. These concerns relate to the management of data. Traditional access control is application- and principal/role-specific, applied at policy enforcement points, after which there is no subsequent control over where data flows; a crucial issue once data has left its owner's control, whether passed between cloud-hosted applications or within cloud services. Information Flow Control (IFC), in addition, offers system-wide, end-to-end flow control based on the properties of the data. We discuss the potential of cloud-deployed IFC for enforcing owners' dataflow policy with regard to protection and sharing, as well as safeguarding against malicious or buggy software. In addition, the audit log associated with IFC provides transparency, giving configurable system-wide visibility over data flows. [...] Comment: 14 pages, 8 figures
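
    The labels-and-flow-rule idea behind IFC can be made concrete with a minimal sketch (hypothetical types and tag names, not CamFlow's actual kernel-level implementation): every entity carries secrecy and integrity labels, and a flow from A to B is allowed only if secrecy never shrinks and integrity never grows along the way. Each decision is also appended to an audit log, mirroring the transparency property described above.

        # Minimal IFC sketch (illustrative only; not CamFlow's API).
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Label:
            secrecy: frozenset = frozenset()    # tags constraining where data may flow
            integrity: frozenset = frozenset()  # tags vouching for data quality/origin

        def can_flow(src: Label, dst: Label) -> bool:
            # Classic IFC rule: S(src) must be a subset of S(dst),
            # and I(dst) must be a subset of I(src).
            return src.secrecy <= dst.secrecy and dst.integrity <= src.integrity

        audit_log = []  # every flow decision is recorded, allowed or not

        def try_flow(name_src, src, name_dst, dst):
            ok = can_flow(src, dst)
            audit_log.append((name_src, name_dst, "allowed" if ok else "denied"))
            return ok

        app = Label(secrecy=frozenset({"alice-medical"}), integrity=frozenset({"vetted"}))
        analytics = Label(secrecy=frozenset({"alice-medical", "aggregate"}))

        assert try_flow("app", app, "analytics", analytics)      # secrecy preserved
        assert not try_flow("analytics", analytics, "app", app)  # secrecy would shrink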

    On the Feasibility of Fine-Grained TLS Security Configurations in Web Browsers Based on the Requested Domain Name

    Most modern web browsers today sacrifice optimal TLS security for backward compatibility. They apply coarse-grained TLS configurations that by default support legacy versions of the protocol with known design weaknesses, as well as weak ciphersuites that provide fewer security guarantees (e.g. no forward secrecy), and silently fall back to them if the server so chooses. This introduces various risks, including downgrade attacks such as the POODLE attack [15], which exploits the browser's silent fallback mechanism to downgrade the protocol version in order to exploit the legacy version's flaws. To achieve a better balance between security and backward compatibility, we propose a mechanism for fine-grained TLS configurations in web browsers based on the sensitivity of the domain name in the HTTPS request, using a whitelisting technique. That is, the browser enforces optimal TLS configurations for connections going to sensitive domains while enforcing default configurations for the rest of the connections. We demonstrate the feasibility of our proposal by implementing a proof-of-concept as a Firefox browser extension. We envision this mechanism as a built-in security feature in web browsers, e.g. a button similar to the "Bookmark" button in Firefox, and as a standardised HTTP header, to augment browsers' security.
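
    A minimal sketch of the per-domain policy idea, using Python's standard ssl module (the whitelist contents and helper name are invented for illustration; the paper's actual proof-of-concept is a Firefox extension):

        # Sketch of fine-grained, per-domain TLS configuration (illustrative only).
        import ssl

        SENSITIVE_DOMAINS = {"bank.example", "mail.example"}  # hypothetical whitelist

        def context_for(hostname: str) -> ssl.SSLContext:
            ctx = ssl.create_default_context()
            if hostname in SENSITIVE_DOMAINS:
                # Optimal configuration: refuse legacy protocol versions outright,
                # so no silent fallback/downgrade is possible for these domains.
                ctx.minimum_version = ssl.TLSVersion.TLSv1_3
            else:
                # Default configuration: retain backward compatibility elsewhere.
                ctx.minimum_version = ssl.TLSVersion.TLSv1_2
            return ctx

        # Usage: pick the context based on the requested domain, then connect, e.g.
        #   ctx = context_for("bank.example")
        #   tls = ctx.wrap_socket(sock, server_hostname="bank.example")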

    ANCHOR: logically-centralized security for Software-Defined Networks

    While the centralization of SDN brought advantages such as a faster pace of innovation, it also disrupted some of the natural defenses of traditional architectures against different threats. The literature on SDN has mostly been concerned with the functional side, despite some specific works concerning non-functional properties like 'security' or 'dependability'. Though addressing the latter in an ad hoc, piecemeal way may work, it will most likely lead to efficiency and effectiveness problems. We claim that the enforcement of non-functional properties as a pillar of SDN robustness calls for a systemic approach. As a general concept, we propose ANCHOR, a subsystem architecture that promotes the logical centralization of non-functional properties. To show the effectiveness of the concept, we focus on 'security' in this paper: we identify the current security gaps in SDNs and we populate the architecture middleware with the appropriate security mechanisms, in a global and consistent manner. Essential security mechanisms provided by ANCHOR include reliable entropy and resilient pseudo-random generators, and protocols for secure registration and association of SDN devices. We claim and justify in the paper that centralizing such mechanisms is key to their effectiveness, by allowing us to: define and enforce global policies for those properties; reduce the complexity of controllers and forwarding devices; ensure higher levels of robustness for critical services; foster interoperability of the non-functional property enforcement mechanisms; and promote the security and resilience of the architecture itself. We discuss design and implementation aspects, and we prove and evaluate our algorithms and mechanisms, including the formalisation of the main protocols and the verification of their core security properties using the Tamarin prover. Comment: 42 pages, 4 figures, 3 tables, 5 algorithms, 139 references
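
    The flavour of those mechanisms can be sketched as follows (names and message flow are invented for exposition; ANCHOR's actual protocols are formalised in the paper and verified with Tamarin): a logically centralized anchor registers devices with keys minted from a reliable entropy source and later hands out authenticated pairwise association keys.

        # Illustrative sketch of centralized registration/association (not ANCHOR's
        # verified protocol; a real design would also encrypt keys in transit).
        import hmac, hashlib, secrets

        class Anchor:
            """Logically centralized root of trust for an SDN deployment."""
            def __init__(self):
                self._device_keys = {}  # device_id -> long-term registration key

            def register(self, device_id: str) -> bytes:
                key = secrets.token_bytes(32)  # fresh key from reliable entropy
                self._device_keys[device_id] = key
                return key

            def associate(self, dev_a: str, dev_b: str):
                # Mint a fresh pairwise key; authenticate it to each device
                # under that device's long-term key.
                k_ab = secrets.token_bytes(32)
                tag_a = hmac.new(self._device_keys[dev_a], k_ab, hashlib.sha256).digest()
                tag_b = hmac.new(self._device_keys[dev_b], k_ab, hashlib.sha256).digest()
                return k_ab, tag_a, tag_b

        anchor = Anchor()
        k_ctrl = anchor.register("controller-1")
        anchor.register("switch-7")
        k_ab, tag_ctrl, _ = anchor.associate("controller-1", "switch-7")
        # A device verifies its tag before trusting the association key:
        assert hmac.compare_digest(tag_ctrl, hmac.new(k_ctrl, k_ab, hashlib.sha256).digest())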

    Social Research and Privileged Data


    Open secrets

    The law of trade secrets is often conceptualized in bilateral terms, as creating and enforcing rights between trade secret owners, on the one hand, and misappropriators on the other hand. This paper, a chapter in a forthcoming collection on the law of trade secrets, argues that trade secrets and the law that guards them can serve structural and institutional roles as well. Somewhat surprisingly, given the law’s focus on secrecy, among the institutional products of trade secrets law are commons, or managed openness: environments designed to facilitate the structured sharing of information. The paper illustrates with examples drawn from existing literature on cuisine, magic, and Internet search.

    Peering into the Courtroom: An Analysis of the Public Interest


    Are Evidence-Related Ethics Provisions Law?


    CONFLLVM: A Compiler for Enforcing Data Confidentiality in Low-Level Code

    We present an instrumenting compiler for enforcing data confidentiality in low-level applications (e.g. those written in C) in the presence of an active adversary. In our approach, the programmer marks secret data by writing lightweight annotations on top-level definitions in the source code. The compiler then uses a static flow analysis coupled with efficient runtime instrumentation, a custom memory layout, and custom control-flow integrity checks to prevent data leaks even in the presence of low-level attacks. We have implemented our scheme as part of the LLVM compiler. We evaluate it on the SPEC micro-benchmarks for performance, and on larger, real-world applications (including OpenLDAP, which is around 300 KLoC) for the programmer overhead required to restructure the application when protecting sensitive data such as passwords. We find that the performance overheads introduced by our instrumentation are moderate (12% on average on SPEC), and the programmer effort to port OpenLDAP is only about 160 LoC. Comment: Technical report for CONFLLVM: A Compiler for Enforcing Data Confidentiality in Low-Level Code, appearing at EuroSys 2019
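
    The annotate-then-check idea can be illustrated with a toy flow analysis (purely conceptual; the real system performs its static analysis and instrumentation inside LLVM, on C programs):

        # Toy taint check in the spirit of CONFLLVM's flow analysis (conceptual only).
        SECRET, PUBLIC = "secret", "public"

        def check_flows(assignments, labels):
            """Reject any assignment that mixes secret data into a variable
            the programmer has annotated as public."""
            for lhs, rhs_vars in assignments:
                rhs = SECRET if any(labels.get(v) == SECRET for v in rhs_vars) else PUBLIC
                if labels.get(lhs) == PUBLIC and rhs == SECRET:
                    raise ValueError(f"illegal flow: secret data reaches public '{lhs}'")
                labels.setdefault(lhs, rhs)  # infer a label for new variables

        labels = {"password": SECRET, "log_line": PUBLIC}   # programmer annotations
        check_flows([("digest", ["password"])], labels)     # ok: digest inferred secret
        try:
            check_flows([("log_line", ["digest"])], labels) # secret -> public sink
        except ValueError as err:
            print(err)  # illegal flow: secret data reaches public 'log_line'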