
    Architecture for analyzing Potentially Unwanted Applications

    The spread of potentially unwanted programs (PUPs) and the pay-per-install (PPI) business model that supports them have become relevant issues in IT security. While PUPs may not be explicitly malicious, they still represent a security hazard. Boosted by PPI companies, PUP software evolves rapidly. Although manual analysis remains the best approach for distinguishing cleanware from PUPs, it cannot scale to the large number of PUP installers appearing each day. To counter this fast-evolving phenomenon, automatic analysis tools are required. However, current automated malware analysis techniques suffer from a number of limitations, such as the inability to click through PUP installation processes. Moreover, many malware analysis automated sandboxes (MSASs) can be detected by exploiting artifacts of their virtualization engine. To overcome those limitations, we present an architectural design for implementing an MSAS mainly targeting PUP analysis. We also provide a cross-platform implementation of the MSAS, capable of running PUP analysis in both virtual and bare-metal environments. The developed prototype proved to work and automatically analyzed more than 480 freeware installers collected from the three top-ranked freeware websites: cnet.com, filehippo.com and softonic.com. Finally, we briefly analyze the collected data and propose a first strategy for detecting PUPs by inspecting intercepted HTTP traffic.
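    As a rough illustration of the kind of HTTP-traffic heuristic the abstract alludes to, the sketch below flags installers that fetch additional executables from third-party hosts. The request format, threshold and field names are assumptions for illustration, not the authors' actual detection rules.

        # Hypothetical PUP heuristic over intercepted HTTP requests.
        # Each request is assumed to be a dict like {"url": ..., "content_type": ...};
        # the threshold and field names are illustrative, not the paper's rules.
        from urllib.parse import urlparse

        EXE_TYPES = {"application/x-msdownload", "application/x-msdos-program",
                     "application/octet-stream"}

        def looks_like_pup(installer_host, requests, max_third_party_exes=2):
            """Flag installers that pull several executables from third-party hosts."""
            third_party_exes = 0
            for req in requests:
                host = urlparse(req["url"]).hostname or ""
                is_exe = (req.get("content_type") in EXE_TYPES
                          or req["url"].lower().endswith((".exe", ".msi")))
                if is_exe and not host.endswith(installer_host):
                    third_party_exes += 1
            return third_party_exes > max_third_party_exes

    For example, an installer downloaded from one freeware portal that silently fetches three further .exe payloads from unrelated hosts would be flagged under this toy rule.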

    Management of high availability services using virtualization

    This thesis examines the use of virtualization in the management of high-availability services using open source tools. The services are hosted in virtual machines, which can be seamlessly migrated between the physical nodes in the cluster automatically by high-availability software. Currently there are no complete open source solutions that provide migration of virtual machines as a method of repair. The work is based on the high-availability software Heartbeat. In this work, an add-on to Heartbeat is developed, allowing Heartbeat to seamlessly migrate the virtual machines between the physical nodes when a node is shut down gracefully. This add-on is tested in a proof-of-concept cluster, where Heartbeat runs Xen virtual machines with high availability. The impact of migration has been measured for both TCP and UDP services, both numerically and heuristically. The outages caused by graceful failures (e.g. rebooting) are measured to be around a quarter of a second. Practical tests are also performed; the impression is that the outages are not noticed by users of latency-critical services such as game servers or streaming audio servers.
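    A minimal sketch of the evacuation step such an add-on performs is shown below, assuming the classic Xen xm toolstack. The script, domain discovery and hard-coded target node are illustrative simplifications, not the thesis's actual Heartbeat resource agent.

        # Illustrative only: live-migrate all local Xen guests to a peer node before
        # a graceful shutdown, roughly what a Heartbeat add-on would trigger.
        # Uses the classic 'xm' toolstack; target selection is simplified.
        import subprocess

        def list_running_domains():
            out = subprocess.check_output(["xm", "list"], text=True).splitlines()
            # Skip the header line and Domain-0, keep the first column (domain name).
            return [line.split()[0] for line in out[1:] if not line.startswith("Domain-0")]

        def evacuate(target_host):
            for dom in list_running_domains():
                # --live keeps the guest running during the memory copy, so clients
                # only observe a sub-second pause rather than a full outage.
                subprocess.check_call(["xm", "migrate", "--live", dom, target_host])

        if __name__ == "__main__":
            evacuate("node2.example.org")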

    Trusted execution: applications and verification

    Useful security properties arise from sealing data to specific units of code. Modern processors featuring Intel’s TXT and AMD’s SVM achieve this by a process of measured and trusted execution. Only code which has the correct measurement can access the data, and this code runs in an environment protected from observation and interference. We discuss the history of attempts to provide security for hardware platforms and review the literature in the field. We propose some applications which would benefit from the use of trusted execution, and discuss the functionality it enables. We present in more detail a novel variation on Diffie-Hellman key exchange which removes some reliance on random number generation. We present a modelling language with primitives for trusted execution, along with its semantics. We characterise an attacker who has access to all the capabilities of the hardware. In order to analyse systems using trusted execution automatically, without attempting to search a potentially infinite state space, we define transformations that reduce the number of times the attacker needs to use trusted execution to a pre-determined bound. Under reasonable assumptions we prove the soundness of the transformation: no secrecy attacks are lost by applying it. We then describe using the StatVerif extensions to ProVerif to model the bounded invocations of trusted execution. We show the analysis of realistic systems, for which we provide case studies.
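    For orientation, a textbook finite-field Diffie-Hellman exchange is sketched below. This is the standard protocol the thesis builds on, not its novel variation, and the toy parameters are far too small for real use.

        # Textbook Diffie-Hellman; NOT the thesis's variant, and the toy parameters
        # (p = 23, g = 5) are for illustration only.
        import secrets

        p, g = 23, 5

        def keypair():
            priv = secrets.randbelow(p - 2) + 1   # relies on a good RNG -- the reliance
            return priv, pow(g, priv, p)          # the thesis's variation aims to reduce

        a_priv, a_pub = keypair()
        b_priv, b_pub = keypair()

        shared_a = pow(b_pub, a_priv, p)
        shared_b = pow(a_pub, b_priv, p)
        assert shared_a == shared_b               # both parties derive the same secret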

    Demystifying Internet of Things Security

    Break down the misconceptions of the Internet of Things by examining the different security building blocks available in Intel Architecture (IA) based IoT platforms. This open access book reviews the threat pyramid, secure boot, chain of trust, and the software stack leading up to defense-in-depth. The IoT presents unique challenges in implementing security, and Intel has both CPU and Isolated Security Engine capabilities to simplify it. This book explores the challenges of securing these devices to make them immune to different threats originating from within and outside the network. The requirements and robustness rules to protect the assets vary greatly, and there is no single blanket solution for implementing security. Demystifying Internet of Things Security provides clarity to industry professionals and gives an overview of different security solutions. What You'll Learn: secure devices, immunizing them against different threats originating from inside and outside the network; gain an overview of the different security building blocks available in Intel Architecture (IA) based IoT platforms; understand the threat pyramid, secure boot, chain of trust, and the software stack leading up to defense-in-depth. Who This Book Is For: strategists, developers, architects, and managers in the embedded and Internet of Things (IoT) space trying to understand and implement security in IoT devices and platforms.
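    As a conceptual illustration of the chain-of-trust idea the book covers, the sketch below shows a simplified hash chain in which each boot stage is measured before control is handed over. It is purely illustrative and does not reflect Intel's actual boot ROM, ACM, or signature-verification logic.

        # Simplified chain-of-trust sketch: each boot stage is hashed ("measured")
        # and folded into a running measurement before it runs. Real secure boot
        # also verifies signatures in hardware; this only shows the measurement chain.
        import hashlib

        def extend(measurement, stage_image):
            """PCR-style extend: new = H(old || H(stage))."""
            stage_digest = hashlib.sha256(stage_image).digest()
            return hashlib.sha256(measurement + stage_digest).digest()

        boot_stages = [b"first-stage bootloader", b"os loader", b"kernel"]
        measurement = b"\x00" * 32                  # root of trust starts at zeros
        for image in boot_stages:
            measurement = extend(measurement, image)

        print(measurement.hex())  # compared against an expected "golden" value to attest

    Because every stage feeds into the running digest, tampering with any single stage changes the final measurement and breaks attestation.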

    ECONOMICALLY PROTECTING COMPLEX, LEGACY OPERATING SYSTEMS USING SECURE DESIGN PRINCIPLES

    In modern computer systems, complex legacy operating systems, such as Linux, are deployed ubiquitously. Many design choices in these legacy operating systems predate a modern understanding of security risks. As a result, new attack opportunities are routinely discovered to subvert such systems, which reveal design flaws that spur new research about secure design principles and other security mechanisms to thwart these attacks. Most research falls into two categories: encapsulating the threat and redesigning the system from scratch. Each approach has its challenges. Encapsulation can only limit exposure to the risk, not eliminate it. Rewriting the huge codebase of these operating systems is impractical in terms of developer effort, but appealing inasmuch as it can comprehensively eliminate security risks. This thesis pursues a third, understudied option: retrofitting security design principles into the existing kernel design. Conventional wisdom discourages retrofitting security because retrofitting is a hard problem, may require the use of new abstractions or break backward compatibility, may have unforeseen consequences, and may be equivalent to redesigning the system from scratch in terms of effort. This thesis offers new evidence to challenge this conventional wisdom, indicating that one can economically retrofit a comprehensive security policy onto complex, legacy systems. To demonstrate this assertion, this thesis first surveys the alternative of encapsulating the threat to the complex, legacy system by adding a monitoring layer using a technique called Virtual Machine Introspection, and discusses the shortcomings of this technique. Second, this thesis shows how to enforce the principle of least privilege by removing the need to run setuid-to-root binaries with administrator privilege. Finally, this thesis takes the first steps to show how to economically retrofit secure design principles to the OS virtualization feature of the Linux kernel called containers without rewriting the whole system. This approach can be applied more generally to other legacy systems.
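    The sketch below illustrates the least-privilege pattern in general terms: a process performs its one privileged step and then permanently drops to an unprivileged user. It is a generic illustration of the principle, not the thesis's mechanism for eliminating setuid-to-root binaries, and the "nobody" account is an assumed target user.

        # Generic least-privilege sketch (not the thesis's mechanism). Must be started
        # with elevated privilege; it does its privileged work, then irreversibly
        # drops to an unprivileged user before handling any untrusted input.
        import os, pwd

        def drop_to(username="nobody"):
            pw = pwd.getpwnam(username)
            os.setgid(pw.pw_gid)      # drop the group first, while still permitted
            os.setuid(pw.pw_uid)      # then drop the user; this cannot be undone
            assert os.getuid() == pw.pw_uid

        if __name__ == "__main__":
            # e.g. bind a privileged port or open a protected file here, while still root
            drop_to("nobody")
            # ... then run the bulk of the program without administrator privilege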

    Detecting worm mutations using machine learning

    Worms are malicious programs that spread over the Internet without human intervention. Since worms generally spread faster than humans can respond, the only viable defence is to automate their detection. Network intrusion detection systems typically detect worms by examining packet or flow logs for known signatures. Not only does this approach mean that new worms cannot be detected until the corresponding signatures are created, but mutations of known worms will remain undetected because each mutation will usually have a different signature. The intuitive and seemingly most effective solution is to write more generic signatures, but this has been found to increase false alarm rates and is thus impractical. This dissertation investigates the feasibility of using machine learning to automatically detect mutations of known worms. First, it investigates whether Support Vector Machines can detect mutations of known worms. Support Vector Machines have been shown to be well suited to pattern recognition tasks such as text categorisation and hand-written digit recognition. Since detecting worms is effectively a pattern recognition problem, this work investigates how well Support Vector Machines perform at this task. The second part of this dissertation compares Support Vector Machines to other machine learning techniques in detecting worm mutations. Gaussian Processes, unlike Support Vector Machines, automatically return confidence values as part of their result. Since confidence values can be used to reduce false alarm rates, this dissertation determines how Gaussian Processes compare to Support Vector Machines in terms of detection accuracy. For further comparison, this work also compares Support Vector Machines to K-nearest neighbours, a technique known for its simplicity and solid results in other domains. The third part of this dissertation investigates the automatic generation of training data. Classifier accuracy depends on good quality training data -- the wider the training data spectrum, the higher the classifier's accuracy. This dissertation describes the design and implementation of a worm mutation generator whose output is fed to the machine learning techniques as training data. This dissertation then evaluates whether the training data can be used to train classifiers of sufficiently high quality to detect worm mutations. The findings of this work demonstrate that Support Vector Machines can be used to detect worm mutations, and that the optimal configuration for detection of worm mutations is a linear kernel with unnormalised bi-gram frequency counts. Moreover, the results show that Gaussian Processes and Support Vector Machines exhibit similar accuracy on average in detecting worm mutations, while K-nearest neighbours consistently produces lower quality predictions. The generated worm mutations are shown to be of sufficiently high quality to serve as training data. Combined, the results demonstrate that machine learning is capable of accurately detecting mutations of known worms.
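    A minimal sketch of the configuration the abstract reports as optimal (a linear-kernel SVM over unnormalised bi-gram counts) is shown below using scikit-learn. The toy payload strings and labels are placeholders; the dissertation's actual features are extracted from network traffic.

        # Sketch of the reported best configuration: linear-kernel SVM over raw
        # (unnormalised) character bi-gram counts. Toy placeholder data only.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        payloads = ["GET /index.html HTTP/1.0", "\x90\x90\x90\x90/bin/sh"]
        labels   = [0, 1]   # 0 = benign, 1 = worm (illustrative labels)

        model = make_pipeline(
            CountVectorizer(analyzer="char", ngram_range=(2, 2)),  # raw bi-gram counts
            LinearSVC(),                                           # linear kernel
        )
        model.fit(payloads, labels)
        print(model.predict(["\x90\x90\x90\x90/bin/sh -c"]))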

    A grid and cloud-based framework for high throughput bioinformatics

    Recent advances in genome sequencing technologies have unleashed a flood of new data. As a result, the computational analysis of bioinformatics data sets has been rapidly moving from a lab-based desktop computer environment to exhaustive analyses performed by large dedicated computing resources. Traditionally, large computational problems have been performed on dedicated clusters of high performance machines that are typically local to, and owned by, a particular institution. The current trend in Grid computing has seen institutions pooling their computational resources in order to offload excess computational work to remote locations during busy periods. In the last year or so, commercial Cloud computing initiatives have matured enough to offer a viable remote source of reliable computational power. Collections of idle desktop computers have also been used as a source of computational power in the form of ‘volunteer Grids’. The field of bioinformatics is highly dynamic, with new or updated versions of software tools and databases continually being developed. Several different tools and datasets must often be combined into a coherent, automated workflow or pipeline. While existing solutions are available for constructing workflows, there is a clear need for long-lived analyses consisting of many interconnected steps to be able to migrate among Grid and cloud computational resources dynamically. This project involved research into the principles underlying the design and architecture of flexible, high-throughput bioinformatics processes. Following extensive requirements gathering, a novel Grid-based platform, Microbase, has been implemented that is based on service-oriented architectures and peer-to-peer data transfer technology. This platform has been shown to be amenable to utilising a wide range of hardware, from commodity desktop computers to high-performance cloud infrastructure. The system has been shown to drastically reduce the bandwidth requirements of bioinformatics data distribution, and therefore reduces both the financial and computational costs associated with cloud computing. The system is inherently modular in nature, comprising a service-based notification system, a data storage system, a scheduler and a job manager. In keeping with e-Science principles, each module can operate in physical isolation from the others, distributed within an intranet or across the Internet. Moreover, since each module is loosely coupled via Web services, modules have the potential to be used in combination with external service-oriented components or in isolation as part of another system. In order to demonstrate the utility of such an open source system to the bioinformatics community, a pipeline of interconnected bioinformatics applications was developed using the Microbase system to form a high-throughput application for the comparative and visual analysis of microbial genomes. This application, Automated Genome Analyser (AGA), has been developed to operate without user interaction. AGA exposes its results via Web services, which can be used by further analytical stages within Microbase, by external computational resources via a Web service interface, or queried by users via an interactive genome browser. In addition to providing the necessary infrastructure for scalable Grid applications, a modular development framework has been provided, which simplifies the process of writing Grid applications. Microbase has been adopted by a number of projects ranging from comparative genomics to synthetic biology simulations.
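    As a conceptual sketch of the loosely coupled, notification-driven module style the abstract describes, the example below shows a worker that reacts to "new genome" events and publishes its own completion event. The class, topic and field names are hypothetical and stand in for Microbase's actual web-service interfaces.

        # Hypothetical notification-driven pipeline step; not the real Microbase API.
        # Stages subscribe to events and publish new ones, so they stay loosely
        # coupled and can run on whichever node the scheduler chooses.
        from collections import defaultdict
        from dataclasses import dataclass

        @dataclass
        class Message:
            topic: str
            payload: dict

        class NotificationBus:
            """Stand-in for a shared, web-service based notification module."""
            def __init__(self):
                self._subscribers = defaultdict(list)

            def subscribe(self, topic, handler):
                self._subscribers[topic].append(handler)

            def publish(self, msg):
                for handler in self._subscribers[msg.topic]:
                    handler(msg)

        bus = NotificationBus()

        def annotate_genome(msg):
            # ... run an analysis tool on msg.payload["sequence_url"] here ...
            bus.publish(Message("genome.annotated", {"genome": msg.payload["genome"]}))

        bus.subscribe("genome.new", annotate_genome)
        bus.publish(Message("genome.new", {"genome": "E. coli K-12", "sequence_url": "..."}))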