
    Discovering New Vulnerabilities in Computer Systems

    Vulnerability research plays a key role in preventing and defending against malicious computer system exploitations. Driven by a multi-billion-dollar underground economy, cyber criminals today tirelessly launch malicious exploitations, threatening every aspect of daily computing. To effectively protect computer systems from devastation, it is imperative to discover and mitigate vulnerabilities before they fall into the offensive parties' hands. This dissertation is dedicated to the research and discovery of new design and deployment vulnerabilities in three very different types of computer systems.

    The first vulnerability is found in automatic malicious binary (malware) detection systems. Binary analysis, a central piece of technology for malware detection, is divided into two classes: static analysis and dynamic analysis. State-of-the-art detection systems employ both classes of analysis to complement each other's strengths and weaknesses for improved detection results. However, we found that commonly seen design patterns may suffer from evasion attacks. We demonstrate attacks on these vulnerabilities by designing and implementing a novel binary obfuscation technique.

    The second vulnerability lies in the design of server system power management. Technological advancements have improved server system power efficiency and facilitated energy-proportional computing. However, the changed power profile makes power consumption subject to the unaudited influence of remote parties, leaving server systems vulnerable to energy-targeted malicious exploits. We demonstrate an energy-abusing attack on a standalone open Web server, measure the extent of the damage, and present a preliminary defense strategy.

    The third vulnerability is discovered in the application of server virtualization technologies. Server virtualization greatly benefits today's data centers and brings pervasive cloud computing a step closer to the general public. However, the practice of physically co-hosting virtual machines with different security privileges risks introducing covert channels that seriously threaten information security in the cloud. We study the construction of high-bandwidth covert channels via the memory sub-system, and show a practical exploit of cross-virtual-machine covert channels on virtualized x86 platforms.
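The cross-VM channel described above works by modulating contention on a shared memory resource and timing probes to recover bits. As a hedged illustration only (the dissertation's actual exploit times real cache accesses on x86 hardware; the latency values, framing, and function names below are invented for this sketch), the encode/decode logic can be simulated as follows:

```python
# Simplified simulation of a timing-based covert channel through a shared
# memory resource: the sender modulates contention (bit 1 = thrash the shared
# cache set, bit 0 = stay idle), and the receiver decodes each bit by timing
# its own probe accesses. Latencies here are modeled, not measured.

SLOW, FAST = 120, 40  # modeled probe latencies in cycles (assumed values)

def sender_latencies(bits, samples_per_bit=8):
    """Latency trace the receiver would observe for each transmitted bit."""
    trace = []
    for b in bits:
        # Sender contention evicts the receiver's cache lines -> slow probes.
        trace.extend([SLOW if b else FAST] * samples_per_bit)
    return trace

def receiver_decode(trace, samples_per_bit=8, threshold=80):
    """Recover bits by averaging probe latency over each bit window."""
    bits = []
    for i in range(0, len(trace), samples_per_bit):
        window = trace[i:i + samples_per_bit]
        bits.append(1 if sum(window) / len(window) > threshold else 0)
    return bits

message = [1, 0, 1, 1, 0, 0, 1, 0]
assert receiver_decode(sender_latencies(message)) == message
```

In a real cross-VM setting the receiver's windows drift against the sender's, so practical channels also need synchronization and error correction, which this sketch omits.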

    Behavioral Mimicry Covert Communication

    Covert communication refers to the process of communicating data through a channel that is neither designed nor intended to transfer information. Traditionally, covert channels have been considered security threats in computer systems, and a great deal of attention has been given to countermeasures against covert communication schemes. The evolution of computer networks has led the communication community to revisit covert communication not only as a security threat but also as an alternative way of providing security and privacy in communication networks. In fact, the heterogeneous structure of computer networks and the diversity of communication protocols provide an appealing setting for covert channels. This dissertation explores a novel design methodology for undetectable and robust covert channels in communication networks. Our design methodology is based on the concept of behavioral mimicry in computer systems. The objective is to design a covert transmitter that has enough degrees of freedom to behave like an ordinary transmitter and react normally to unpredictable network events, yet has the ability to modulate a covert message over its behavioral fingerprints in the network. To this end, we argue that the inherent randomness in communication protocols and network environments is the key to finding the proper medium for network covert channels. We present several examples of how random behaviors in communication protocols lead to the discovery of suitable shared resources for covert channels. The proposed design methodology is tested on two new covert communication schemes: one designed for wireless networks and the other optimized for public communication networks (e.g., the Internet). Each design is accompanied by a comprehensive analysis from the perspectives of undetectability, achievable covert rate, and reliability.
    In particular, we introduce turbo covert channels, a family of extremely robust, model-based timing covert channels that achieve provable polynomial undetectability in public communication networks; that is, the covert channel is undetectable by any polynomial-time statistical test that analyzes samples of the covert traffic and the legitimate traffic of the network. Target applications for the proposed covert communication schemes are discussed, including detailed practical scenarios in which the proposed channels can be implemented.
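As a rough, non-authoritative sketch of the modulation idea only (not the turbo covert channel construction itself, whose whole point is statistically matching the legitimate traffic model; the delay parameters and function names here are invented), a timing channel encodes bits in inter-packet delays:

```python
import random

# Toy timing channel: the covert sender chooses each inter-packet delay from
# one of two ranges depending on the bit, with jitter standing in for network
# randomness; the receiver decodes by thresholding the observed delay.

def encode(bits, rng):
    """Map each bit to an inter-packet delay in seconds."""
    delays = []
    for b in bits:
        base = 0.030 if b else 0.010             # bit-dependent mean delay
        delays.append(base + rng.uniform(0, 0.008))  # jitter mimics the network
    return delays

def decode(delays, boundary=0.020):
    """Recover bits by comparing each delay to the decision boundary."""
    return [1 if d > boundary else 0 for d in delays]

rng = random.Random(7)
bits = [0, 1, 1, 0, 1]
assert decode(encode(bits, rng)) == bits
```

This toy pattern is visibly bimodal and therefore trivially detectable; an undetectable design in the sense described above would instead key the covert information to the internal randomness of the legitimate traffic model, so the transmitted delays remain statistically indistinguishable from ordinary traffic.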

    Developing reliable anomaly detection system for critical hosts: a proactive defense paradigm

    Current host-based anomaly detection systems have limited accuracy and incur high processing costs. This is due to the need to process the massive audit data of the critical host(s) while detecting complex zero-day attacks that can leave only minor, stealthy, and dispersed artefacts. In this research study, this observation is validated using existing datasets and state-of-the-art algorithms for constructing features from a host's audit data, such as the popular semantic-based extraction, and decision engines including Support Vector Machines, Extreme Learning Machines, and Hidden Markov Models. There is a challenging trade-off between achieving accuracy at minimum processing cost and processing massive amounts of audit data that can include complex attacks. There is also a lack of a realistic experimental dataset that reflects the normal and abnormal activities of current real-world computers. This thesis investigates new methodologies for host-based anomaly detection systems, with the specific aims of improving accuracy at minimum processing cost while addressing challenges such as: complex attacks which, in some cases, are visible only through a quantified computing resource (for example, the execution times of programs); the processing of massive amounts of audit data; the unavailability of a realistic experimental dataset; and the automatic minimization of the false positive rate in the face of the dynamics of normal activities. This study provides three original and significant contributions to this field of research, which represent a marked advance in its body of knowledge. The first major contribution is the generation and release of a realistic intrusion detection dataset, together with a metric based on fuzzy qualitative modeling for embedding realism in a dataset's design process and for assessing that quality in existing or future datasets.
    The second key contribution is the construction and evaluation of hidden host features that identify the subtle differences between the normal and abnormal artefacts of hosts' activities at minimum processing cost. The Linux-centric features include the frequencies and ranges, frequency-domain representations, and Gaussian interpretations of system-call identifiers together with execution times; for Windows, a count of the distinct core Dynamic Link Library calls is identified as a hidden host feature. The final key contribution is the development of two new anomaly-based statistical decision engines that capitalize on the potential of some of the suggested hidden features and reliably detect anomalies. The first engine, which includes a forensic module, is based on stochastic theories including hierarchical hidden Markov models; the second is modeled using Gaussian Mixture Modeling and Correntropy. The results demonstrate that the proposed host features and engines meet the identified challenges.
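To make the flavor of such frequency-based host features concrete, here is a minimal sketch, assuming invented system-call traces and a simple z-score decision rule (the thesis's actual engines use hierarchical hidden Markov models and Gaussian Mixture Modeling with Correntropy, which are substantially more sophisticated):

```python
from collections import Counter
from statistics import mean, pstdev

# Illustrative sketch: build a cheap frequency feature vector from a trace of
# system-call identifiers, then score a new trace by its per-feature distance
# from a profile built on normal traces.

def freq_features(trace, n_ids=10):
    """Relative frequency of each system-call identifier in the trace."""
    counts = Counter(trace)
    total = len(trace)
    return [counts.get(i, 0) / total for i in range(n_ids)]

def anomaly_score(normal_vectors, candidate):
    """Sum of z-scores of the candidate against per-feature normal statistics."""
    score = 0.0
    for i, value in enumerate(candidate):
        column = [v[i] for v in normal_vectors]
        mu, sigma = mean(column), pstdev(column)
        score += abs(value - mu) / (sigma + 1e-9)  # epsilon guards sigma == 0
    return score

normal = [freq_features([0, 1, 1, 2, 3] * 20), freq_features([0, 1, 2, 2, 3] * 20)]
benign = freq_features([0, 1, 1, 2, 3] * 20)
attack = freq_features([7, 8, 9, 9, 9] * 20)   # unusual identifiers
assert anomaly_score(normal, attack) > anomaly_score(normal, benign)
```

The appeal of features like these is cost: a single pass over the audit trace produces a small fixed-size vector, keeping the decision engine's processing cost low.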

    The Fault Is Not in Our Stars: Avoiding an Arms Race in Outer Space

    The world is on the precipice of a new arms race in outer space, as China, Russia, the United States, and others undertake dramatic new initiatives in anti-satellite weaponry. These accelerated competitive efforts at space control are highly destabilizing because developed societies have come to depend so heavily upon satellite services to support the entire civilian economy and the modern military apparatus; any significant threat to or disruption in the availability of space assets would be massively, and possibly permanently, disruptive. International law regarding outer space developed with remarkable rapidity in the early years of the Space Age, but the process of formulating additional treaties and norms for space has broken down over the past several decades; no additional legal instruments have emerged that could cope with today’s rising threats. This Article therefore proposes three initiatives. Although none of them can suffice to solve the emerging problems, they could, perhaps, open the way for additional diplomacy, reinvigorating the prospects for rapprochement in space. Importantly, each of these three ideas has deep roots in other sectors of arms control, where such measures have served both to restore a measure of stability and to catalyze even more ambitious agreements in the longer term. The first proposal is for a declaratory regime of “no first use” of specified space weapons; this would do little to directly alter states’ capabilities for space warfare, but could serve as a “confidence-building measure” to temper their most provocative rhetoric and practices. The second concept is a “limited test ban,” to interdict the most dangerous debris-creating developmental tests of new space weapons. Third is a suggestion for shared “space situational awareness,” which would create an international apparatus enabling all participants to enjoy the benefits of greater transparency, reducing the possibilities for secret malign or negligent behavior.
    In each instance, the Article describes the proposal and its variations, assesses its possible contributions to space security, and displays the key precedents from other arms-control successes. The Article concludes by calling for additional, further-reaching space diplomacy, in the hope that these relatively modest initial measures could provoke more robust subsequent negotiations.

    Symbolic Verification of Remote Client Behavior in Distributed Systems

    A malicious client in a distributed system can undermine the integrity of the larger distributed application in a number of different ways. For example, a server with a vulnerability may be compromised directly by a modified client. If a client is authoritative for state in the larger distributed application, a malicious client may transmit an altered version of this state throughout the distributed application. A player in a networked game might cheat by modifying the client executable, or the user of a network service might craft a sequence of messages that exploits a vulnerability in a server application. We present symbolic client verification, a technique for detecting whether network traffic from a remote client could have been generated by sanctioned software. Our method is based on constraint solving and symbolic execution, and uses the client source code as a model for expected behavior. By identifying possible execution paths a remote client may have followed to generate a particular sequence of network traffic, we enable a precise verification technique that requires little to no modification of the client application and is server-agnostic; the only inputs to the algorithm are the observed network traffic and the client source code. We demonstrate a parallel symbolic client verification algorithm that vastly reduces verification costs for our case-study applications, XPilot and Tetrinet.
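The decision the verifier makes can be illustrated with a deliberately tiny toy (the actual technique symbolically executes the real client source and uses a constraint solver over path conditions; the toy "client," its messages, and the function names below are invented for illustration):

```python
# Hedged sketch of the core idea: treat the client as a model whose execution
# paths each produce a message sequence, and accept observed traffic only if
# some sanctioned path explains it. Here the "client" is a hand-written toy
# that sends a handshake, a fixed number of moves, and a farewell.

def client_paths(max_moves=2):
    """Enumerate the message sequences every sanctioned execution could emit."""
    paths = [["HELLO"]]                      # every run starts with a handshake
    for _ in range(max_moves):
        paths = [p + [msg] for p in paths for msg in ("MOVE", "FIRE")]
    return [p + ["BYE"] for p in paths]

def verify(observed, max_moves=2):
    """True iff some sanctioned execution path generates the observed traffic."""
    return observed in client_paths(max_moves)

assert verify(["HELLO", "MOVE", "FIRE", "BYE"])
assert not verify(["HELLO", "CHEAT", "FIRE", "BYE"])  # no path emits CHEAT
```

Explicit path enumeration like this explodes combinatorially, which is precisely why the real technique relies on symbolic execution with constraint solving, and why parallelizing the search pays off.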

    To Affinity and Beyond: Interactive Digital Humans as a Human Computer Interface

    The field of human-computer interaction is increasingly exploring the use of more natural, human-like user interfaces to build intelligent agents that aid in everyday life. This is coupled with a move toward people using ever more realistic avatars to represent themselves in their digital lives. As the ability to produce emotionally engaging digital human representations is only now becoming technically possible, there is little research into how to approach such tasks, owing to both technical complexity and operational implementation cost. This is now changing: we are at a nexus point, with new approaches, faster graphics processing, and enabling technologies in machine learning and computer vision becoming available. I articulate what is required for such digital humans to be judged as sitting successfully on the far side of the phenomenon known as the Uncanny Valley. My results show that a complex mix of perceived and contextual aspects affects how people make sense of digital humans, and they highlight previously undocumented effects of interactivity on affinity. Users are willing to accept digital humans as a new form of user interface, and they react to them emotionally in previously unanticipated ways. My research shows that it is possible to build an effective interactive digital human that crosses the Uncanny Valley. I directly explore what is required to build a visually realistic digital human as a primary research question, and I examine whether such a realistic face provides sufficient benefit to justify the challenges involved in building it. I conducted a Delphi study to inform the research approaches and then produced a complex digital human character based on these insights. This interactive and realistic digital human avatar represents a major technical undertaking involving multiple teams around the world. Finally, I explored a framework for examining the ethical implications and signpost future research areas.

    Analysis and detection of security vulnerabilities in contemporary software

    Contemporary application systems are implemented using an assortment of high-level programming languages, software frameworks, and third-party components. While this may help to lower development time and cost, the result is a complex system of interoperating parts whose behavior is difficult to fully and properly comprehend. This difficulty of comprehension often manifests itself in the form of program coding errors that are not directly related to security requirements but can have an impact on the security of the system. The thesis of this dissertation is that many security vulnerabilities in contemporary software may be attributed to unintended behavior due to unexpected execution paths resulting from the accidental misuse of software components. Unlike many typical programmer errors, such as missed boundary checks or missing user input validation, these software bugs are not easy to detect and avoid. While typical secure-coding best practices, such as code reviews and dynamic and static analysis, offer little protection against such vulnerabilities, we argue that runtime verification of software execution against a specified expected behavior can help to identify unexpected behavior in the software. The dissertation explores how building software systems from components may lead to the emergence of unexpected software behavior that results in security vulnerabilities. The thesis is supported by a study of the evolution of a popular software product over a period of twelve years. While anomaly detection techniques could be applied to verify software execution at runtime, there are several practical challenges to using them in large-scale contemporary software. A model of expected application execution paths, and a methodology for building it during the software development cycle, is proposed. The dissertation explores its effectiveness in detecting exploits of vulnerabilities enabled by software errors in a popular enterprise software product.
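A minimal sketch of checking execution at runtime against a model of expected paths, assuming an invented set of states and transitions (the dissertation builds its model during the software development cycle rather than by hand, and its model is far richer than a transition set):

```python
# Allowed transitions between program "states" (e.g. component entry points)
# form a simple automaton; any observed transition outside the model flags
# unexpected behavior. States and transitions here are invented.

ALLOWED = {
    ("start", "parse_request"),
    ("parse_request", "authenticate"),
    ("authenticate", "handle_request"),
    ("handle_request", "render_response"),
}

def trace_is_expected(trace):
    """Check every consecutive pair of observed events against the path model."""
    return all(pair in ALLOWED for pair in zip(trace, trace[1:]))

normal = ["start", "parse_request", "authenticate", "handle_request", "render_response"]
exploit = ["start", "parse_request", "handle_request", "render_response"]  # skips auth
assert trace_is_expected(normal)
assert not trace_is_expected(exploit)
```

The exploit trace above illustrates the dissertation's central claim: the individual components all behave correctly, yet the path through them, skipping a step the developers assumed was unavoidable, is what creates the vulnerability.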

    Mandan Amerindian culture: A study of values transmission


    The Valiant Welshman, the Scottish James, and the Formation of Great Britain

    When James VI of Scotland and I of England proclaimed himself King of Great Britain, he proposed a merger of the English and Scottish parliaments, and he looked to Henry VIII’s Acts of Union of England and Wales (1536/43) as an example for English-Scottish union under one king. On the London stage after 1603, many plays paid tribute to the new king and provided a predominantly English audience a means of accepting the not-so-palatable ideas of Scottish power, assimilation, and unity. The Valiant Welshman is distinctive among these works, as no other extant early modern English drama features a Welsh leading character. The challenges of reconciling distinct national identity with larger political unity are timeless issues with a strong resonance today. This book considers national, regional, and linguistic identity and explores how R.A.'s play promotes Wales, serves King James, and reveals what it means to be Welsh and Scots in a newly forming Great Britain.