    Behavior Compliance Control for More Trustworthy Computation Outsourcing

    Computation outsourcing has become a hot topic in both academic research and industry because of the benefits that accompany it, such as cost reduction, the ability to focus on core businesses, and the possibility of benefiting from modern payment models like pay-per-use. Unfortunately, outsourcing to potentially untrusted third-party hosting platforms requires a great deal of trust. Clients need assurance that the intended code was loaded and executed, and that the application behaves correctly and trustworthily at runtime. That is, techniques from Trusted Computing that issue evidence about the execution of binaries and report it to a challenger are not sufficient: challengers are more interested in evidence that allows misbehavior to be detected while the outsourced computation is running on the hosting platform. Another challenging issue is providing secure storage for the collected evidence. Such secure storage is provided by the Trusted Platform Module (TPM). In outsourcing scenarios where virtualization technologies are applied, the use of virtual TPMs (vTPMs) comes into consideration. However, researchers have identified drawbacks and limitations of TPMs, including privacy and maintainability issues, problems with the sealing functionality, and high communication and management overhead. On the other hand, virtualizing TPMs, and especially virtualizing the Platform Configuration Registers (PCRs), conflicts with one of the core principles of Trusted Computing, namely the need for hardware-based secure storage. In this thesis, we propose different approaches and architectures to mitigate these problems. In the first part of the thesis, we propose an approach called Behavior Compliance Control (BCC), which defines architectures describing how the behavior of such outsourced computations is captured and controlled, and how its compliance with a trusted behavior model is judged. We present approaches for two abstraction levels: one at the level of program code and the other at the level of abstract executable business processes. In the second part of the thesis, we propose approaches to solve the aforementioned problems related to TPMs and vTPMs, which are used as storage for the evidence data collected as assurance of behavior compliance. In particular, we recognized that the use of SHA-1 hashes to measure system components requires maintaining a large set of hashes of presumably trustworthy software; furthermore, during attestation, the full configuration of the platform is revealed. Our approach shows how the use of chameleon hashes mitigates the impact of these two problems. To increase the security of vTPMs, we show in another approach how the strength of hardware-based security can be gained for virtual PCRs by binding them to their corresponding hardware PCRs. We propose two variants of such a binding: the first uses binary hash trees, whereas the second uses incremental hashing. We further provide implementations of the proposed approaches and evaluate their impact in practice. Furthermore, we empirically evaluate the relative efficacy of the different behavioral abstractions of BCC on different real-world applications, examining the feasibility, effectiveness, scalability, and efficiency of the approach.
To this end, we chose two kinds of applications, a web-based and a desktop application, and performed different attacks on them, such as a malicious-input attack and a SQL injection attack. The results show that such attacks can be detected, so that applying our approach increases protection against them.
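    To make the vPCR-to-hardware-PCR binding mentioned above concrete, the sketch below shows the standard TPM 1.2 PCR-extend operation (SHA-1 over the old register value concatenated with a measurement) together with one possible binary-hash-tree binding in which a Merkle root over all virtual PCR values is extended into a single hardware PCR. It is an illustrative sketch only, not the thesis's implementation; the register size, the tree construction, and the example values are assumptions.

        import hashlib

        def pcr_extend(pcr_value: bytes, measurement: bytes) -> bytes:
            # TPM 1.2-style extend: new_PCR = SHA-1(old_PCR || measurement).
            return hashlib.sha1(pcr_value + measurement).digest()

        def merkle_root(leaves):
            # Binary hash tree over the virtual PCR values; changing any single
            # vPCR value changes the root, so the root summarizes all of them.
            level = [hashlib.sha1(leaf).digest() for leaf in leaves]
            while len(level) > 1:
                if len(level) % 2:                       # duplicate last node on odd levels
                    level.append(level[-1])
                level = [hashlib.sha1(level[i] + level[i + 1]).digest()
                         for i in range(0, len(level), 2)]
            return level[0]

        # Toy example: three virtual machines, each contributing one vPCR value.
        vpcrs = [b"vm-a measurement log", b"vm-b measurement log", b"vm-c measurement log"]

        hw_pcr = b"\x00" * 20                            # hardware PCR starts zeroed
        hw_pcr = pcr_extend(hw_pcr, merkle_root(vpcrs))  # bind all vPCRs to hardware
        print("bound hardware PCR:", hw_pcr.hex())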

    Proceedings of the 2nd International Workshop on Security in Mobile Multiagent Systems

    This report contains the Proceedings of the Second International Workshop on Security of Mobile Multiagent Systems (SEMAS 2002). The workshop was held in Montreal, Canada as a satellite event to the 5th International Conference on Autonomous Agents in 2001. The far-reaching influence of the Internet has resulted in an increased interest in agent technologies, which are poised to play a key role in the implementation of successful Internet and WWW-based applications in the future. While there is still considerable hype concerning agent technologies, there is also an increasing awareness of the problems involved, in particular that these applications will not be successful unless security issues can be adequately handled. Although there is a large body of work on cryptographic techniques that provide basic building blocks for solving specific security problems, relatively little work has been done on investigating security in the multiagent system context. Related problems are secure communication between agents, the implementation of trust models and authentication procedures, or even agents' reflections on security mechanisms. The introduction of mobile software agents significantly increases the risks involved in Internet and WWW-based applications. For example, if we allow agents to enter our hosts or private networks, we must offer the agents a platform on which they can execute correctly, but at the same time ensure that they will not have deleterious effects on our hosts or on any other agents or processes in our network. If we send out mobile agents, we should also be able to provide guarantees about specific aspects of their behaviour; i.e., we are not only interested in whether the agents carry out their intended task correctly: they must also defend themselves against attacks initiated by other agents and survive in potentially malicious environments. Agent technologies can also be used to support network security. For example, in the context of intrusion detection, intelligent guardian agents may be used to analyse the behaviour of agents on a firewall, or intelligent monitoring agents can be used to analyse the behaviour of agents migrating through a network. Part of the inspiration for such multi-agent systems comes from primitive animal behaviour, such as that of guardian ants protecting their hill, or from biological immune systems.

    Stronger secrecy for network-facing applications through privilege reduction

    Despite significant effort in improving software quality, vulnerabilities and bugs persist in applications. Attackers remotely exploit vulnerabilities in network-facing applications and then disclose and corrupt users' sensitive information that these applications process. Reducing the privilege of application components helps to limit the harm that an attacker may cause if she exploits an application. Privilege reduction, i.e., the Principle of Least Privilege, is a fundamental technique that allows one to contain possible exploits of error-prone software components: it entails granting a software component the minimal privilege that it needs to operate. Applying this principle ensures that sensitive data is given only to those software components that indeed require processing such data. This thesis explores how to reduce the privilege of network-facing applications to provide stronger confidentiality and integrity guarantees for sensitive data. First, we look into applying privilege reduction to cryptographic protocol implementations. We address the vital and largely unexamined problem of how to structure implementations of cryptographic protocols to protect sensitive data even in the case when an attacker compromises untrusted components of a protocol implementation. As evidence that the problem is poorly understood, we identified two attacks which succeed in disclosing sensitive data in two state-of-the-art, exploit-resistant cryptographic protocol implementations: the privilege-separated OpenSSH server and the HiStar/DStar DIFC-based SSL web server. We propose practical, general, system-independent principles for structuring protocol implementations to defend against these two attacks. We apply our principles to protect sensitive data from disclosure in the implementations of both the server and client sides of OpenSSH and of the OpenSSL library. Next, we explore how to reduce the privilege of language runtimes, e.g., the JavaScript language runtime, so as to minimize the risk of their compromise, and thus of the disclosure and corruption of sensitive information. Modern language runtimes are complex software involving such advanced techniques as just-in-time compilation, native-code support routines, garbage collection, and dynamic runtime optimizations. This complexity makes it hard to guarantee the safety of language runtimes, as evidenced by how frequently vulnerabilities are discovered in them. We provide new mechanisms that allow sandboxing language runtimes using Software-based Fault Isolation (SFI). In particular, we enable sandboxing of runtime code modification, which modern language runtimes depend on heavily for achieving high performance. We have applied our sandboxing techniques to the V8 JavaScript engine on both the x86-32 and x86-64 architectures, and found that the techniques incur only moderate performance overhead. Finally, we apply privilege reduction within the web browser to secure sensitive data within web applications. Web browsers have become an attractive target for attackers because of their widespread use. There are two principal threats to a user's sensitive data in the browser environment: untrusted third-party extensions and untrusted web pages. Extensions execute with elevated privilege, which allows them to read content within all web applications. Thus, a malicious extension author may write extension code that reads sensitive page content and sends it to a remote server he controls.
Alternatively, a malicious page author may exploit an honest but buggy extension, thus leveraging its elevated privilege to disclose sensitive information from other origins. We propose enforcing privilege reduction policies on extension JavaScript code to protect web applications' sensitive data from malicious extensions and malicious pages. We designed ScriptPolice, a policy system for the Chrome browser's V8 JavaScript language runtime, to enforce flexible security policies on JavaScript execution. We restrict the privileges of a variety of extensions and contain any malicious activity, whether introduced by design or injected by a malicious page. The overhead ScriptPolice incurs on extension execution is acceptable: the added page load latency is so short as to be virtually imperceptible to users.
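    The structuring principle that runs through this thesis, namely giving a compromised network-facing component no direct access to secrets, can be pictured with the small sketch below. It is a generic illustration of privilege separation, not the thesis's OpenSSH/OpenSSL restructuring or the ScriptPolice system: a privileged monitor process holds a key and answers only one narrowly defined kind of request from an unprivileged worker over a pipe, so exploiting the worker yields MACs at most, never the key itself. All names and messages are assumptions made for the example.

        import hashlib
        import hmac
        from multiprocessing import Pipe, Process

        SECRET_KEY = b"long-term key held by the privileged monitor only"

        def unprivileged_worker(conn):
            # Network-facing side: parses untrusted input but never reads SECRET_KEY.
            # (A real design would also exec this process fresh, sandbox it, and drop
            # its privileges; only the narrow message-passing interface is shown here.)
            untrusted_input = b"client hello taken from the network"
            conn.send(("mac_request", untrusted_input))       # one narrowly typed request
            tag = conn.recv()
            print("worker received MAC tag:", tag.hex())
            conn.close()

        def privileged_monitor(conn):
            # Privileged side: answers exactly one kind of request and returns only a
            # MAC, so a compromised worker can obtain MACs but never the key itself.
            op, payload = conn.recv()
            if op == "mac_request":
                conn.send(hmac.new(SECRET_KEY, payload, hashlib.sha256).digest())
            conn.close()

        if __name__ == "__main__":
            monitor_end, worker_end = Pipe()
            worker = Process(target=unprivileged_worker, args=(worker_end,))
            worker.start()
            privileged_monitor(monitor_end)
            worker.join()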

    Cognitive Machine Individualism in a Symbiotic Cybersecurity Policy Framework for the Preservation of Internet of Things Integrity: A Quantitative Study

    This quantitative study examined the complex nature of modern cyber threats to propose the establishment of cyber as an interdisciplinary field of public policy, initiated through the creation of a symbiotic cybersecurity policy framework. For the public good (and to maintain ideological balance), there must be recognition that public policies are at a transition point where the digital public square is a tangible reality that is more than a collection of technological widgets. The academic contribution of this research project is the fusion of humanistic principles with Internet of Things (IoT) technologies, which alters our perception of the machine from an instrument of human engineering into a thinking peer and elevates cyber from technical esoterism into an interdisciplinary field of public policy. The contribution to the US national cybersecurity policy body of knowledge is a unified policy framework (manifested in the symbiotic cybersecurity policy triad) that could transform cybersecurity policies from network-based to entity-based. A correlational archival data design was used, with the frequency of malicious software attacks as the dependent variable and the diversity of intrusion techniques as the independent variable for RQ1. For RQ2, the frequency of detection events was the dependent variable and the diversity of intrusion techniques was the independent variable. Self-Determination Theory serves as the theoretical framework, since the cognitive machine can recognize, self-endorse, and maintain its own identity based on a sense of self-motivation that is progressively shaped by the machine's ability to learn. The transformation of cyber policies from technical esoterism into an interdisciplinary field of public policy starts with the recognition that the cognitive machine is an independent consumer of, advisor into, and influenced by public policy theories, philosophical constructs, and societal initiatives.

    Using Context to Improve Network-based Exploit Kit Detection

    Today, our computers are routinely compromised while performing seemingly innocuous activities like reading articles on trusted websites (e.g., the NY Times). These compromises are perpetrated via complex interactions involving the advertising networks that monetize these sites. Web-based compromises such as exploit kits are similar to any other scam -- the attacker wants to lure an unsuspecting client into a trap to steal private information or resources -- generating tens of millions of dollars annually. Exploit kits are web-based services specifically designed to capitalize on vulnerabilities in unsuspecting client computers in order to install malware without a user's knowledge. Sadly, it only takes a single successful infection to ruin a user's financial life, or lead to corporate breaches that result in millions of dollars of expense and loss of customer trust. Exploit kits use a myriad of techniques to obfuscate each attack instance, making current network-based defenses such as signature-based network intrusion detection systems far less effective than in years past. Dynamic analysis or honeyclient analysis of these exploits plays a key role in identifying new attacks for signature generation, but provides no means of inspecting end-user traffic on the network to identify attacks in real time. As a result, defenses designed to stop such malfeasance often arrive too late or not at all, resulting in high false positive and false negative (error) rates. In order to deal with these drawbacks, three new detection approaches are presented. To deal with the issue of a high number of errors, a new technique for detecting exploit kit interactions on a network is proposed. The technique capitalizes on the fact that an exploit kit leads its potential victim through a process of exploitation by forcing the browser to download multiple web resources from malicious servers. This process has an inherent structure that can be captured in HTTP traffic and used to significantly reduce error rates. The approach organizes HTTP traffic into tree-like data structures and, using a scalable index of exploit kit traces as samples, models the detection process as a subtree similarity search problem. The technique is evaluated on 3,800 hours of web traffic on a large enterprise network, and results show that it reduces false positive rates by four orders of magnitude over current state-of-the-art approaches. While utilizing structure can vastly improve detection rates over current approaches, it does not go far enough in helping defenders detect new, previously unseen attacks. As a result, a new framework that applies dynamic honeyclient analysis directly to network traffic at scale is proposed. The framework captures and stores a configurable window of reassembled HTTP objects network-wide, uses lightweight content rendering to establish the chain of requests leading up to a suspicious event, then serves the initial response content back to the honeyclient in an isolated network. The framework is evaluated on a diverse collection of exploit kits as they evolve over a one-year period. The empirical evaluation suggests that the approach offers significant operational value, and that a single honeyclient can support a campus deployment of thousands of users. While the above approaches attempt to detect exploit kits before they have a chance to infect the client, they cannot protect a client that has already been infected.
The final technique detects signs of post-infection behavior by intrusions that abuse the domain name system (DNS) to make contact with an attacker. Contemporary detection approaches utilize the structure of a domain name and require hundreds of DNS messages to detect such malware. As a result, these detection mechanisms cannot detect malware in a timely manner and are susceptible to high error rates. The final technique, based on sequential hypothesis testing, uses the DNS message patterns of a subset of DNS traffic to detect malware in as few as four DNS messages, and with orders-of-magnitude reductions in error rates. The results of this work can make a significant operational impact on network security analysis, and open several exciting future directions for network security research.
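    The sequential hypothesis testing behind the final technique can be sketched with Wald's sequential probability ratio test, shown below. The per-message probabilities, error targets, and the notion of a "suspicious" DNS message are placeholder assumptions for illustration, not values or features from the dissertation; the point is only that a verdict can be reached after a handful of messages once the accumulated evidence crosses a threshold.

        import math

        # Hypothetical probability that a single DNS message looks "suspicious"
        # (e.g., an NXDOMAIN reply for an algorithmically generated name) under
        # each hypothesis. Placeholder values, not figures from the dissertation.
        P_SUSPICIOUS_GIVEN_MALWARE = 0.9
        P_SUSPICIOUS_GIVEN_BENIGN = 0.1

        # Error targets translate into Wald's decision thresholds.
        ALPHA = BETA = 0.001                         # false positive / false negative targets
        UPPER = math.log((1 - BETA) / ALPHA)         # declare "malware" above this
        LOWER = math.log(BETA / (1 - ALPHA))         # declare "benign" below this

        def sprt(dns_messages):
            """Return a verdict after as few messages as the evidence allows."""
            llr = 0.0                                # cumulative log-likelihood ratio
            for i, suspicious in enumerate(dns_messages, start=1):
                if suspicious:
                    llr += math.log(P_SUSPICIOUS_GIVEN_MALWARE / P_SUSPICIOUS_GIVEN_BENIGN)
                else:
                    llr += math.log((1 - P_SUSPICIOUS_GIVEN_MALWARE) /
                                    (1 - P_SUSPICIOUS_GIVEN_BENIGN))
                if llr >= UPPER:
                    return "malware", i
                if llr <= LOWER:
                    return "benign", i
            return "undecided", len(dns_messages)

        # With these placeholder numbers, four consistent messages cross a threshold.
        print(sprt([True, True, True, True]))        # -> ('malware', 4)
        print(sprt([False, False, False, False]))    # -> ('benign', 4)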

    Deployable filtering architectures against large denial-of-service attacks

    Denial-of-Service attacks continue to grow in size and frequency despite serious underreporting. While several research solutions have been proposed over the years, they have had important deployment hurdles that have prevented them from seeing any significant level of deployment on the Internet. Commercial solutions exist, but they are costly and generally not meant to scale to Internet-wide levels. In this thesis we present three filtering architectures against large Denial-of-Service attacks. Their emphasis is on providing an effective solution against such attacks while using simple mechanisms, in order to overcome the deployment hurdles faced by other solutions. While these architectures are well suited to implementation in fast routing hardware, in the early stages of deployment this is unlikely to be the case. Because of this, we implemented them on low-cost off-the-shelf hardware and evaluated their performance on a network testbed. The results are very encouraging: this setup allows us to forward traffic on a single PC at rates of millions of packets per second even for minimum-sized packets, while at the same time processing as many as one million filters; this gives us confidence that the architecture as a whole could combat even the large botnets currently being reported. Better yet, we show that this single-PC performance scales well with the number of CPU cores and network interfaces, which is promising for our solutions given the current trend in processor design. In addition to using simple mechanisms, we discuss how the architectures provide clear incentives for ISPs that adopt them early, both at the destination and at the sources of attacks. The hope is that these incentives will be sufficient to achieve some level of initial deployment. The larger goal is to have an architectural solution against large DoS attacks in place before even more harmful attacks take place; this thesis is hopefully a step in that direction.
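    As a rough illustration of the per-packet operation such a filtering architecture must perform at line rate (a toy sketch only; the data structure, addresses, and scale here are assumptions, not the thesis's implementation), a filter table can be reduced to a constant-time membership test on the packet's source address, so the per-packet cost stays flat even with on the order of a million installed filters.

        import ipaddress

        class FilterTable:
            """Toy drop-list keyed by source IPv4 address, standing in for the
            per-packet filter tables the architectures would install near the victim."""

            def __init__(self):
                self._blocked = set()                # integer-encoded source addresses

            def add_filter(self, src: str):
                self._blocked.add(int(ipaddress.IPv4Address(src)))

            def allow(self, src: str) -> bool:
                # O(1) membership test per packet, independent of the filter count.
                return int(ipaddress.IPv4Address(src)) not in self._blocked

        table = FilterTable()
        for host in ipaddress.ip_network("198.51.100.0/24").hosts():
            table.add_filter(str(host))              # demo: block one /24 of attack sources
        # The same structure holds a million entries in tens of megabytes of memory.

        print(table.allow("198.51.100.7"))           # False: source is filtered, drop
        print(table.allow("192.0.2.7"))              # True: not filtered, forward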

    Accountable infrastructure and its impact on internet security and privacy

    The Internet infrastructure relies on the correct functioning of the basic underlying protocols, which were designed for functionality. Security and privacy have been added post hoc, mostly by applying cryptographic means to different layers of communication. Lacking accountability as a fundamental property, the Internet infrastructure has no built-in ability to associate an action with the responsible entity, nor to detect or prevent misbehavior. In this thesis, we study accountability from several perspectives. First, we study the need for accountability in anonymous communication networks as a mechanism that provides repudiation for the proxy nodes by tracing back selected outbound traffic in a provable manner. Second, we design a framework that provides a foundation for supporting the enforcement of the right-to-be-forgotten law in a scalable and automated manner; the framework provides a technical means for users to prove their eligibility for content removal from search results. Third, we analyze the Internet infrastructure to determine potential security risks and threats imposed by dependencies among the entities on the Internet. Finally, we evaluate the feasibility of using hop count filtering as a mechanism for mitigating Distributed Reflective Denial-of-Service (DRDoS) attacks, and show conceptually that it cannot prevent these attacks.
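    The hop count filtering mechanism evaluated in the last part can be sketched as follows. This is a simplified illustration of the generic idea rather than the thesis's evaluation code, and the initial-TTL set, the addresses, and the per-source hop-count table are assumptions: the receiver infers how many hops a packet travelled from its remaining TTL and flags packets whose inferred hop count disagrees with the hop count previously learned for the claimed source.

        # Common initial TTL values used by major operating systems; the sender's
        # initial TTL is guessed as the smallest of these not below the observed TTL.
        INITIAL_TTLS = (32, 64, 128, 255)

        def hop_count(observed_ttl: int) -> int:
            initial = next(t for t in INITIAL_TTLS if t >= observed_ttl)
            return initial - observed_ttl

        # Hypothetical hop counts previously learned for known-legitimate sources.
        expected_hops = {"203.0.113.5": 14, "198.51.100.9": 7}

        def looks_spoofed(src: str, observed_ttl: int, slack: int = 2) -> bool:
            # Flag a packet whose inferred hop count deviates too far from the
            # hop count previously observed for the claimed source address.
            if src not in expected_hops:
                return False                         # no baseline, cannot judge
            return abs(hop_count(observed_ttl) - expected_hops[src]) > slack

        print(looks_spoofed("203.0.113.5", observed_ttl=50))    # 64-50 = 14 hops, consistent -> False
        print(looks_spoofed("203.0.113.5", observed_ttl=120))   # 128-120 = 8 hops, inconsistent -> True

    Such a check only catches packets whose source address is spoofed; consistent with the conclusion above, it does not help against reflective attacks, because the reflected packets are emitted by the reflectors under their own addresses and therefore arrive with hop counts that match the learned baseline.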

    Trusted data path protecting shared data in virtualized distributed systems

    When sharing data across multiple sites, service applications should not be trusted automatically. Services that are suspected of faulty, erroneous, or malicious behavior, or that run on systems that may be compromised, should not be able to gain access to protected data or be entrusted with the same data access rights as others. This thesis proposes a context flow model that controls the information flow in a distributed system. Each service application, along with its surrounding context in a distributed system, is treated as a controllable principal. The thesis defines a trust-based access control model that controls the information exchange between these principals. An online monitoring framework is used to evaluate the trustworthiness of the service applications and the underlying systems, and an external communication interception runtime framework enforces trust-based access control transparently for the entire system.
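    A minimal sketch of the kind of access decision such a trust-based model makes is given below. The principal representation, trust scores, sensitivity labels, and thresholds are all illustrative assumptions, not the thesis's actual model: each principal carries a trust level kept up to date by the online monitoring component, and the interception layer releases data to a principal only if that trust level meets the sensitivity of the data.

        from dataclasses import dataclass

        @dataclass
        class Principal:
            name: str
            trust: float        # updated continuously by the online monitoring framework

        # Hypothetical sensitivity labels mapped to the minimum trust they require.
        SENSITIVITY_THRESHOLD = {"public": 0.0, "internal": 0.5, "confidential": 0.8}

        def may_receive(principal: Principal, label: str) -> bool:
            # The interception layer consults this check before forwarding shared data.
            return principal.trust >= SENSITIVITY_THRESHOLD[label]

        storage_svc = Principal("storage-service", trust=0.9)
        analytics_svc = Principal("analytics-service", trust=0.4)   # degraded by the monitor

        print(may_receive(storage_svc, "confidential"))   # True
        print(may_receive(analytics_svc, "internal"))     # False: trust fell below 0.5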

    Security in Distributed, Grid, Mobile, and Pervasive Computing

    This book addresses the increasing demand to guarantee privacy, integrity, and availability of resources in networks and distributed systems. It first reviews security issues and challenges in content distribution networks, describes key agreement protocols based on the Diffie-Hellman key exchange and key management protocols for complex distributed systems like the Internet, and discusses securing design patterns for distributed systems. The next section focuses on security in mobile computing and wireless networks. After a section on grid computing security, the book presents an overview of security solutions for pervasive healthcare systems and surveys wireless sensor network security.
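    The Diffie-Hellman key exchange that several of the surveyed key agreement protocols build on can be shown in a few lines. The sketch below is a textbook toy with deliberately tiny parameters so the arithmetic stays visible; real protocols use standardized groups of 2048 bits or more and add authentication on top.

        import secrets

        # Classic textbook toy parameters (p = 23, g = 5). Far too small to be secure;
        # real deployments use standardized large prime-order groups.
        p, g = 23, 5

        a = secrets.randbelow(p - 2) + 1      # Alice's private exponent
        b = secrets.randbelow(p - 2) + 1      # Bob's private exponent

        A = pow(g, a, p)                      # Alice -> Bob: public value
        B = pow(g, b, p)                      # Bob -> Alice: public value

        # Each side combines its own secret with the other's public value; both
        # arrive at the same shared secret without ever transmitting it.
        alice_key = pow(B, a, p)
        bob_key = pow(A, b, p)
        assert alice_key == bob_key
        print("shared secret:", alice_key)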