    Escrow: A large-scale web vulnerability assessment tool

    The reliance on Web applications has increased rapidly over the years. At the same time, the quantity and impact of application security vulnerabilities have grown as well. Amongst these vulnerabilities, SQL Injection has been classified as the most common, dangerous and prevalent web application flaw. In this paper, we propose Escrow, a large-scale SQL Injection detection tool with an exploitation module that is lightweight, fast and platform-independent. Escrow uses a custom search implementation together with a static code analysis module to find potential target web applications. Additionally, it provides a simple-to-use graphical user interface (GUI) to navigate through a vulnerable remote database. Escrow is implementation-agnostic, i.e. it can analyse any web application regardless of the server-side implementation (PHP, ASP, etc.). Using our tool, we discovered that it is indeed possible to identify and exploit at least 100 databases per 100 minutes, without prior knowledge of their underlying implementation. We observed that for each query sent, we can scan and detect dozens of vulnerable web applications in a short space of time, while providing a means for exploitation. Finally, we provide recommendations for developers to defend against SQL injection and emphasise the need for proactive assessment and defensive coding practices.
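
    To make the detection approach concrete, below is a minimal sketch of error-based SQL injection probing; the payload, error signatures and endpoint are illustrative assumptions, not Escrow's actual implementation.

```python
# Minimal sketch of error-based SQL injection probing, in the spirit of
# tools like Escrow. Payloads, signatures and the URL are illustrative
# assumptions, not the tool's actual code.
import requests

# Error strings that commonly leak from different database engines;
# matching on all of them is what makes detection implementation-agnostic.
ERROR_SIGNATURES = [
    "you have an error in your sql syntax",   # MySQL
    "unclosed quotation mark",                # MSSQL
    "pg_query():",                            # PostgreSQL
    "ora-01756",                              # Oracle
]

def looks_injectable(url: str, param: str) -> bool:
    """Append a single quote to one parameter and look for DB error leakage."""
    probe = requests.get(url, params={param: "1'"}, timeout=10)
    body = probe.text.lower()
    return any(sig in body for sig in ERROR_SIGNATURES)

if __name__ == "__main__":
    # Hypothetical vulnerable endpoint, used only for illustration.
    print(looks_injectable("http://example.com/item.php", "id"))
```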

    Progger: an efficient, tamper-evident kernel-space logger for cloud data provenance tracking

    Cloud data provenance, or "what has happened to my data in the cloud", is a critical data security component which addresses pressing data accountability and data governance issues in cloud computing systems. In this paper, we present Progger (Provenance Logger), a kernel-space logger which potentially empowers all cloud stakeholders to trace their data. Logging from the kernel space empowers security analysts to collect provenance from the lowest possible atomic data actions, and enables several higher-level tools to be built for effective end-to-end tracking of data provenance. Within the last few years, an increasing number of kernel-space provenance tools have been proposed, but they face several critical data security and integrity problems. Limitations of these prior tools include (1) the inability to provide log tamper-evidence and to prevent fake/manual entries, (2) the lack of accurate and granular timestamp synchronisation across several machines, (3) log space requirements and growth, and (4) inefficient logging of root usage of the system. Progger resolves all these critical issues and, as such, provides high assurance of data security and data activity audit. With this in mind, the paper discusses these elements of high-assurance cloud data provenance, describes the design of Progger and its efficiency, and presents compelling results which pave the way for Progger to become a foundation tool for data activity tracking across all cloud systems.
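
    As an illustration of the tamper-evidence property described above, the following sketch hash-chains log entries in user space; Progger itself logs from the kernel, so this is an assumption about the general technique rather than its actual code.

```python
# Minimal sketch of log tamper-evidence via hash chaining: each entry's
# digest covers the previous entry's digest, so any later modification
# breaks every subsequent link. Illustrative only, not Progger's code.
import hashlib
import json
import time

def append_entry(chain: list, event: dict) -> None:
    """Append an event whose digest is chained to the previous entry."""
    prev_digest = chain[-1]["digest"] if chain else "0" * 64
    record = {"ts": time.time(), "event": event, "prev": prev_digest}
    payload = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)

def verify(chain: list) -> bool:
    """Recompute every digest; a single altered entry invalidates the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {k: rec[k] for k in ("ts", "event", "prev")}
        payload = json.dumps(body, sort_keys=True).encode()
        if rec["prev"] != prev or rec["digest"] != hashlib.sha256(payload).hexdigest():
            return False
        prev = rec["digest"]
    return True

log: list = []
append_entry(log, {"syscall": "open", "path": "/data/file"})
append_entry(log, {"syscall": "write", "path": "/data/file"})
assert verify(log)
```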

    Virtual numbers for virtual machines?

    Knowing the number of virtual machines (VMs) that a cloud's physical hardware can (further) support is critical, as it has implications for provisioning and hardware procurement. However, current methods for estimating the maximum number of VMs possible on a given piece of hardware usually take the ratio of a VM's specifications to the underlying hardware's specifications. Such naive, linear estimation methods mostly yield impractical limits as to how many VMs the hardware can actually support: we found that, at the limits given by this naive division method, user experience on the VMs would be severely degraded. In this paper, we demonstrate through experimental results the significant gap between the limits derived using the estimation method mentioned above and the actual situation. We argue for a more practicable estimation of the limits of the underlying infrastructure.
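
    The naive estimate the abstract refers to can be made concrete in a few lines; the host and VM figures below are illustrative assumptions, not measurements from the paper.

```python
# The naive, linear estimate the paper argues against: divide host
# capacity by per-VM specification. All figures are illustrative.
host = {"vcpus": 64, "ram_gb": 256, "disk_gb": 4000}
vm_flavor = {"vcpus": 2, "ram_gb": 4, "disk_gb": 40}

# The estimate is bounded by whichever resource runs out first.
naive_limit = min(host[r] // vm_flavor[r] for r in host)
print(naive_limit)  # 32 here; the paper shows usability degrades well below such limits
```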

    Time for Cloud? Design and implementation of a time-based cloud resource management system

    The current pay-per-use model adopted by public cloud service providers has shaped perceptions of how a cloud should provide its resources to end-users, i.e. on-demand and with access to an unlimited amount of resources. However, not all clouds are equal. While such provisioning models work for well-endowed public clouds, they may not always work well in private clouds with limited budgets and resources, such as research and education clouds. Private clouds also stand to be impacted greatly by issues such as resource hogging by users and the misuse of resources for nefarious activities. These problems usually stem from (1) limited physical servers/budget, (2) a growing number of users and (3) the inability to gracefully and automatically relinquish resources from inactive users. Currently, cloud resource management frameworks used for private cloud setups, such as OpenStack and CloudStack, only use the pay-per-use model as the basis for provisioning resources to users. In this paper, we propose OpenStack Café, a novel methodology adopting the concepts of 'time' and 'booking systems' to manage the resources of private clouds. By allowing users to book resources over specific time-slots, our proposed solution can efficiently and automatically help administrators manage users' access to resources, addressing the issue of resource hogging and gracefully relinquishing resources back to the pool in resource-constrained private cloud setups. Work is currently in progress to adopt Café into OpenStack as a feature, and results from our prototype show promise. We also present insights and lessons learnt during the design and implementation of our proposed methodology.
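
    A minimal sketch of the booking idea follows, assuming a hypothetical interface: slots are granted only when they do not collide with existing bookings, and expired bookings are reclaimed automatically.

```python
# Sketch of time-slot booking for a resource pool, in the spirit of
# OpenStack Café. Names and structures are illustrative assumptions,
# not Café's actual interface.
from datetime import datetime, timedelta

bookings = []  # (user, start, end) tuples for one resource pool

def overlaps(start: datetime, end: datetime) -> bool:
    """True if the requested slot collides with any existing booking."""
    return any(s < end and start < e for _, s, e in bookings)

def book(user: str, start: datetime, end: datetime) -> bool:
    """Grant the slot only if it is free."""
    if overlaps(start, end):
        return False
    bookings.append((user, start, end))
    return True

def reclaim(now: datetime) -> None:
    """Relinquish resources whose bookings have ended, countering hogging."""
    bookings[:] = [(u, s, e) for u, s, e in bookings if e > now]

t0 = datetime(2017, 1, 1, 9, 0)
assert book("alice", t0, t0 + timedelta(hours=2))
assert not book("bob", t0 + timedelta(hours=1), t0 + timedelta(hours=3))
```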

    The data privacy matrix project: towards a global alignment of data privacy laws

    Data privacy is an expected right of most citizens around the world, but there are many legislative challenges within a boundary-less cloud computing and World Wide Web environment. Despite its importance, there is limited research on gaps and alignment in data privacy law, and the legal side of the security ecosystem seems to be in a constant effort to catch up. Recent history already shows a lack of alignment causing a great deal of confusion; an example is the 'right to be forgotten' case which came up in 2014. This case involved a Spanish man against Google Spain. He requested the removal of a link to an article about an auction for his foreclosed home, for a debt that he had subsequently paid. However, misalignment of data privacy laws caused further complications to the case. This paper introduces the Waikato Data Privacy Matrix, our global project for the alignment of data privacy laws, focusing on Asia Pacific data privacy laws and their relationships with those of the European Union and the USA. We also suggest potential solutions to address some of the issues that may arise when a breach of data privacy occurs, in order to ensure that individuals have their data privacy protected across boundaries in the Web. With the increase in data processing and storage across different jurisdictions and regions (e.g. public cloud computing), the Waikato Data Privacy Matrix empowers businesses using or providing cloud services to understand the different data privacy requirements across the globe, paving the way for increased cloud adoption and usage.

    Secure FPGA as a Service - Towards Secure Data Processing by Physicalizing the Cloud

    Securely processing data in the cloud is still a difficult problem, even with homomorphic encryption and other privacy-preserving schemes. Hardware solutions provide additional layers of security and greater performance over their software alternatives. However, by definition the cloud should be flexible and adaptive, often viewed as abstracting services from products. By creating services reliant on custom hardware, the core essence of the cloud is lost. FPGAs bridge this gap between software and hardware with programmable logic, allowing the cloud to remain abstract. FPGA as a Service (FaaS) has been proposed for a greener cloud, but not for secure data processing. This paper explores the possibility of Secure FaaS in the cloud for privacy-preserving data processing, describes the technologies required, identifies use cases, and highlights potential challenges.

    UVisP: User-centric visualization of data provenance with gestalt principles

    The need to understand and track files (and, inherently, data) in cloud computing systems is in high demand. Over the past years, the use of logs and the representation of data using graphs have become the main methods for tracking information and relating it to cloud users. Tracking such information with 'data provenance' (i.e. the series of chronicles and the derivation history of data on metadata) is the new trend for cloud users. However, there is still much room for improving the representation of data activity in cloud systems for end-users. We propose User-centric Visualization of Data Provenance with Gestalt (UVisP), a novel user-centric visualization technique for data provenance. This technique aims to provide the missing link between data movements in cloud computing environments and end-users' uncertain queries about the security and life cycle of their files within cloud systems. The proof of concept for the UVisP technique integrates an open-source visualization API with Gestalt's theory of perception to provide a range of user-centric provenance visualizations. UVisP allows users to transform and visualize provenance (logs) with implicit prior knowledge of Gestalt's theory of perception. We present the initial development of the UVisP technique, and our results show that the integration of Gestalt and 'perceptual keys' in provenance visualization allows end-users to enhance their visualizing capabilities, extract useful knowledge and understand the visualizations better.

    Taxonomy of man-in-the-middle attacks on HTTPS

    With the increase in Man-in-the-Middle (MITM) attacks capable of breaking Hypertext Transfer Protocol Secure (HTTPS) over the past five years, researchers tasked with the improvement of HTTPS must understand each attack's characteristics. However, given the large number of attacks, it is difficult to discern their differences, and no existing classification system is capable of classifying them. In this paper we provide a framework for classifying and mitigating MITM attacks on HTTPS communications. The identification and classification of these attacks can provide useful insight into what can be done to improve the security of HTTPS communications. The classification framework was used to create a taxonomy of MITM attacks providing a visual representation of attack relationships, and was designed to flexibly allow other areas of attack analysis to be added. The framework was tested against a testbed of MITM attacks, then further validated and evaluated at the INTERPOL Global Complex for Innovation (IGCI) with a forensic taxonomy extension and a forensic analysis tool.
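
    One way such a classification framework could be encoded is sketched below; the dimensions and sample entries are illustrative assumptions, not the paper's actual taxonomy.

```python
# Illustrative encoding of a MITM-on-HTTPS classification: each attack is
# described along a few dimensions, and grouping by a dimension yields the
# clusters a taxonomy visualises. Dimensions and entries are assumptions.
from dataclasses import dataclass

@dataclass
class MITMAttack:
    name: str
    target: str        # e.g. "protocol downgrade", "certificate validation"
    vector: str        # how the attacker inserts themselves
    mitigations: tuple

ATTACKS = [
    MITMAttack("SSL stripping", "protocol downgrade",
               "rewrite HTTPS links to HTTP", ("HSTS", "preload lists")),
    MITMAttack("Rogue certificate", "certificate validation",
               "mis-issued or forged certificate",
               ("certificate pinning", "Certificate Transparency")),
]

# Grouping by the 'target' dimension gives one level of the taxonomy.
by_target: dict = {}
for a in ATTACKS:
    by_target.setdefault(a.target, []).append(a.name)
print(by_target)
```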

    Malware Propagation and Prevention Model for Time-Varying Community Networks within Software Defined Networks

    As the adoption of Software Defined Networks (SDNs) grows, SDN security still has several unaddressed limitations. A key network security research area is the study of malware propagation across SDN-enabled networks. To analyse the spreading processes of network malware (e.g., viruses) in SDN, we propose a dynamic model with a time-varying community network, inspired by models of epidemic spreading across communities in complex networks. We treat subnets of the network as communities, with links that are dense within subnets but sparse between them. Using numerical simulation and theoretical analysis, we find that the efficiency of network malware propagation in this model depends on the mobility rate q of the nodes between subnets. We also find that there exists a mobility rate threshold q_c: the network malware will spread through the SDN and survive when q > q_c, and perish when q < q_c. The results show that our model is effective, and they may help in deciding SDN control strategies to defend against network malware and provide a theoretical basis for reducing and preventing network security incidents.
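
    A minimal numerical sketch of this kind of community-based spreading model is shown below; all parameters are illustrative assumptions, and the paper's analysis, not this sketch, establishes the actual threshold q_c.

```python
# Sketch of an SIS-style spreading process over a time-varying community
# network: nodes live in subnets, hop between them at mobility rate Q, and
# infect peers within their own subnet. Parameters are illustrative only.
import random

N_SUBNETS, NODES_PER_SUBNET = 10, 50
N = N_SUBNETS * NODES_PER_SUBNET
BETA, GAMMA, Q = 0.05, 0.02, 0.01   # infection, recovery and mobility rates
STEPS = 200

subnet = [i // NODES_PER_SUBNET for i in range(N)]  # dense inside, sparse between
infected = {0}                                      # patient zero

for _ in range(STEPS):
    # Mobility: each node hops to a random subnet with probability Q.
    for n in range(N):
        if random.random() < Q:
            subnet[n] = random.randrange(N_SUBNETS)
    # Group nodes by subnet so intra-subnet contacts are dense.
    members: dict = {}
    for n in range(N):
        members.setdefault(subnet[n], []).append(n)
    # Infection: each infected node probes peers in its own subnet.
    new_infected = set(infected)
    for n in infected:
        for m in members[subnet[n]]:
            if m not in infected and random.random() < BETA:
                new_infected.add(m)
    # Recovery: infected nodes recover with probability GAMMA (SIS-style).
    infected = {n for n in new_infected if random.random() > GAMMA}

print(f"infected fraction after {STEPS} steps: {len(infected) / N:.2f}")
```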

    Privacy preserving computation by fragmenting individual bits and distributing gates

    Solutions that allow the computation of arbitrary operations over data securely in the cloud are currently impractical. The holy grail of cryptography, fully homomorphic encryption, still requires minutes to compute a single operation. In order to provide a practical solution, this paper takes a different approach to the problem of securely processing data. We present FRagmenting Individual Bits (FRIBs), a scheme which preserves user privacy by distributing bit fragments across many locations. Privacy is maintained because each server receives only a small portion of the actual data, and solving for the rest yields a vast number of possibilities. Functions are defined with NAND logic gates and are computed quickly, as the performance overhead is shifted from computation to network latency. This paper details our proof-of-concept addition algorithm, which took 346 ms to add two 32-bit values, paving the way towards further improvements to bring computations under 100 ms.
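
    The fragmentation step can be illustrated with XOR secret-sharing, sketched below under the assumption that FRIBs uses XOR-style shares; how the scheme evaluates NAND gates over distributed fragments is the paper's contribution and is not reproduced here.

```python
# Sketch of bit fragmentation: each bit is split into XOR shares, so any
# single server's fragment is statistically independent of the real bit.
# Illustrative assumption about the sharing style, not FRIBs' actual code.
import secrets

def fragment(bit: int, n_servers: int) -> list:
    """Split one bit into n XOR shares; any n-1 of them look uniformly random."""
    shares = [secrets.randbelow(2) for _ in range(n_servers - 1)]
    last = bit
    for s in shares:
        last ^= s   # the final share makes the XOR of all shares equal the bit
    return shares + [last]

def reconstruct(shares: list) -> int:
    """XOR all fragments back together to recover the original bit."""
    out = 0
    for s in shares:
        out ^= s
    return out

frags = fragment(1, n_servers=4)   # distribute one fragment per server
assert reconstruct(frags) == 1
```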