
    Escrow: A large-scale web vulnerability assessment tool

    The reliance on Web applications has increased rapidly over the years. At the same time, the quantity and impact of application security vulnerabilities have grown as well. Amongst these vulnerabilities, SQL Injection has been classified as the most common, dangerous and prevalent web application flaw. In this paper, we propose Escrow, a large-scale SQL Injection detection tool with an exploitation module that is lightweight, fast and platform-independent. Escrow uses a custom search implementation together with a static code analysis module to find potential target web applications. Additionally, it provides a simple-to-use graphical user interface (GUI) to navigate through a vulnerable remote database. Escrow is implementation-agnostic, i.e. it can perform analysis on any web application regardless of the server-side implementation (PHP, ASP, etc.). Using our tool, we discovered that it is indeed possible to identify and exploit at least 100 databases per 100 minutes, without prior knowledge of their underlying implementation. We observed that for each query sent, we can scan and detect dozens of vulnerable web applications in a short space of time, while providing a means for exploitation. Finally, we provide recommendations for developers to defend against SQL injection and emphasise the need for proactive assessment and defensive coding practices.
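    The detection step described above can be illustrated with a minimal error-based probe: send a request with a single quote appended to a parameter and check whether a database error leaks into the response. This is a hedged sketch of the general technique only, not Escrow's actual code; the function name and error signatures are assumptions.

    ```python
    # Minimal sketch of error-based SQL injection detection, in the spirit of
    # large-scale scanners such as Escrow. Signatures are illustrative.

    # Database error fragments that commonly leak into HTML when a stray
    # quote breaks a query; matching them is server-side-implementation-agnostic.
    SQL_ERROR_SIGNATURES = [
        "you have an error in your sql syntax",   # MySQL
        "unclosed quotation mark",                # MSSQL
        "pg::syntaxerror",                        # PostgreSQL
        "sqlite3.operationalerror",               # SQLite
        "ora-01756",                              # Oracle
    ]

    def looks_injectable(baseline_body: str, probe_body: str) -> bool:
        """Return True if the single-quote probe surfaced a database error
        that the baseline (unmodified) response did not already contain."""
        base = baseline_body.lower()
        probed = probe_body.lower()
        return any(sig in probed and sig not in base
                   for sig in SQL_ERROR_SIGNATURES)
    ```

    Comparing against the baseline response avoids flagging pages that always display error-like text, which keeps the check usable at scale.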

    Progger: an efficient, tamper-evident kernel-space logger for cloud data provenance tracking

    Cloud data provenance, or "what has happened to my data in the cloud", is a critical data security component which addresses pressing data accountability and data governance issues in cloud computing systems. In this paper, we present Progger (Provenance Logger), a kernel-space logger which potentially empowers all cloud stakeholders to trace their data. Logging from the kernel space empowers security analysts to collect provenance from the lowest possible atomic data actions, and enables several higher-level tools to be built for effective end-to-end tracking of data provenance. Within the last few years, an increasing number of kernel-space provenance tools have been proposed, but they face several critical data security and integrity problems. Limitations of these prior tools include (1) the inability to provide log tamper-evidence and to prevent fake/manual entries, (2) the lack of accurate and granular timestamp synchronisation across several machines, (3) log space requirements and growth, and (4) inefficient logging of root usage of the system. Progger resolves all these critical issues and, as such, provides high assurance of data security and data activity audit. With this in mind, the paper discusses these elements of high-assurance cloud data provenance, describes the design of Progger and its efficiency, and presents compelling results which pave the way for Progger to become a foundation tool for data activity tracking across all cloud systems.
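    Log tamper-evidence, the first limitation listed above, is commonly achieved by hash-chaining entries so that altering or deleting any record invalidates every subsequent digest. The following is a minimal sketch of that general technique in Python; it illustrates the property, not Progger's actual kernel-space mechanism, and all names are assumptions.

    ```python
    import hashlib
    import json

    def append_entry(log, event):
        """Append an event, chaining it to the previous entry's digest so
        any later modification or deletion breaks the chain."""
        prev = log[-1]["digest"] if log else "0" * 64
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        log.append({"event": event, "prev": prev, "digest": digest})
        return log

    def verify(log):
        """Re-walk the chain from the genesis value; any tampered entry
        makes a stored digest disagree with the recomputed one."""
        prev = "0" * 64
        for entry in log:
            payload = json.dumps(entry["event"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if entry["prev"] != prev or entry["digest"] != expected:
                return False
            prev = entry["digest"]
        return True
    ```

    A verifier holding only the final digest can detect whether any earlier entry was forged, edited, or removed.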

    Virtual numbers for virtual machines?

    Knowing how many virtual machines (VMs) a cloud's physical hardware can (further) support is critical, as it has implications for provisioning and hardware procurement. However, the maximum number of VMs possible on a given piece of hardware is usually estimated as the ratio of a VM's specifications to the underlying cloud hardware's specifications. Such naive, linear estimation methods mostly yield impractical limits on how many VMs the hardware can actually support: at the limits derived from this naive division, user experience on the VMs would be severely degraded. In this paper, we demonstrate through experimental results the significant gap between the limits derived using the estimation method mentioned above and the actual situation. We argue for a more practicable estimation of the limits of the underlying infrastructure.
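    The naive division estimate the abstract critiques can be made concrete with a small example. The host and VM specifications below are hypothetical, chosen only to show the arithmetic:

    ```python
    def naive_vm_limit(host: dict, vm: dict) -> int:
        """Naive per-resource division: the limit is the tightest ratio of
        host capacity to VM demand. The paper argues real usable limits sit
        well below this figure."""
        return min(host[resource] // vm[resource] for resource in vm)

    host = {"cpus": 32, "ram_gb": 256, "disk_gb": 4000}  # hypothetical server
    vm = {"cpus": 2, "ram_gb": 8, "disk_gb": 40}         # hypothetical flavour

    naive_vm_limit(host, vm)  # 16, bounded by the CPU ratio (32 // 2)
    ```

    The estimate ignores contention effects such as CPU scheduling overhead and I/O interference, which is why user experience degrades long before this ceiling is reached.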

    Time for Cloud? Design and implementation of a time-based cloud resource management system

    The current pay-per-use model adopted by public cloud service providers has influenced the perception of how a cloud should provide its resources to end-users, i.e. on-demand and with access to an unlimited amount of resources. However, not all clouds are equal. While such provisioning models work for well-endowed public clouds, they may not always work well in private clouds with limited budget and resources, such as research and education clouds. Private clouds also stand to be impacted greatly by issues such as user resource hogging and the misuse of resources for nefarious activities. These problems are usually caused by challenges such as (1) limited physical servers/budget, (2) a growing number of users and (3) the inability to gracefully and automatically relinquish resources from inactive users. Currently, cloud resource management frameworks used for private cloud setups, such as OpenStack and CloudStack, only use the pay-per-use model as the basis for provisioning resources to users. In this paper, we propose OpenStack Café, a novel methodology adopting the concepts of 'time' and 'booking systems' to manage resources of private clouds. By allowing users to book resources over specific time-slots, our proposed solution can efficiently and automatically help administrators manage users' access to resources, addressing the issue of resource hogging and gracefully relinquishing resources back to the pool in resource-constrained private cloud setups. Work is currently in progress to adopt Café into OpenStack as a feature, and results from our prototype show promise. We also present insights into lessons learnt during the design and implementation of our proposed methodology.
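    The time-slot booking idea can be sketched as a simple overlap check: a request is granted only while concurrent bookings stay within capacity, and once a slot expires its resources return to the pool automatically. This is an illustrative sketch of the concept, not Café's OpenStack implementation; all names and the capacity model are assumptions.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Booking:
        user: str
        start: int   # slot index, e.g. hours since some epoch
        end: int     # exclusive end slot

    def overlaps(a: Booking, b: Booking) -> bool:
        """Two half-open intervals [start, end) overlap iff each starts
        before the other ends."""
        return a.start < b.end and b.start < a.end

    def try_book(bookings: list, request: Booking, capacity: int) -> bool:
        """Grant the request only if, across its whole slot, concurrent
        bookings stay under capacity. Expiry of the slot is the cue for
        the system to reclaim resources without administrator action."""
        concurrent = sum(1 for b in bookings if overlaps(b, request))
        if concurrent < capacity:
            bookings.append(request)
            return True
        return False
    ```

    Because every grant carries an end time, "relinquishing resources from inactive users" becomes a scheduled event rather than a manual clean-up task.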

    The data privacy matrix project: towards a global alignment of data privacy laws

    Data privacy is an expected right of most citizens around the world, but there are many legislative challenges within a boundary-less cloud computing and World Wide Web environment. Despite its importance, there is limited research on data privacy law gaps and alignment, and the legal side of the security ecosystem seems to be in a constant effort to catch up. Recent history already shows that a lack of alignment causes a great deal of confusion; an example is the 'right to be forgotten' case which came up in 2014. This case involved a Spanish man against Google Spain. He requested the removal of a link to an article about an auction for his foreclosed home, for a debt that he had subsequently paid. However, misalignment of data privacy laws caused further complications to the case. This paper introduces the Waikato Data Privacy Matrix, our global project for the alignment of data privacy laws, focusing on Asia Pacific data privacy laws and their relationships with those of the European Union and the USA. It also suggests potential solutions to some of the issues which may arise when a breach of data privacy occurs, in order to ensure that individuals have their data privacy protected across boundaries on the Web. With the increase in data processing and storage across different jurisdictions and regions (e.g. public cloud computing), the Waikato Data Privacy Matrix empowers businesses using or providing cloud services to understand the different data privacy requirements across the globe, paving the way for increased cloud adoption and usage.

    State-wide survey of boat-based recreational fishing in Western Australia 2013/14

    Based on the outcomes of an international workshop on recreational fishing survey methods in 2010, the Department of Fisheries developed an integrated survey involving several methods to provide a robust and cost-effective approach for obtaining annual estimates of recreational catch by boat-based fishers at both state-wide and bioregional levels.

    Secure FPGA as a Service - Towards Secure Data Processing by Physicalizing the Cloud

    Securely processing data in the cloud is still a difficult problem, even with homomorphic encryption and other privacy-preserving schemes. Hardware solutions provide additional layers of security and greater performance than their software alternatives. However, by definition the cloud should be flexible and adaptive, often viewed as abstracting services from products. By creating services reliant on custom hardware, the core essence of the cloud is lost. FPGAs bridge this gap between software and hardware with programmable logic, allowing the cloud to remain abstract. FPGA as a Service (FaaS) has been proposed for a greener cloud, but not for secure data processing. This paper explores the possibility of Secure FaaS in the cloud for privacy-preserving data processing, describes the technologies required, identifies use cases, and highlights potential challenges.

    Evaluation of 16S next-generation sequencing of hypervariable region 4 in wastewater samples: An unsuitable approach for bacterial enteric pathogen identification

    Recycled wastewater can carry human-infectious microbial pathogens, and therefore wastewater treatment strategies must effectively eliminate pathogens before recycled wastewater is used to supplement drinking and agricultural water supplies. This study characterised the bacterial composition of four wastewater treatment plants (WWTPs) (three waste stabilisation ponds and one oxidation ditch WWTP using activated sludge treatment) in Western Australia. The hypervariable region 4 (V4) of the bacterial 16S rRNA (16S) gene was sequenced using next-generation sequencing (NGS) on the Illumina MiSeq platform. Sequences were pre-processed in USEARCH v10.0 and denoised into zero-radius taxonomic units (ZOTUs) with UNOISE3. Taxonomy was assigned to the ZOTUs using QIIME 2 and the Greengenes database, and cross-checked with the NCBI nr/nt database. The bacterial composition of all WWTPs and treatment stages (influent, intermediate and effluent) was dominated by Proteobacteria (29.0-87.4%), particularly Betaproteobacteria (9.0-53.5%) and Gammaproteobacteria (8.6-34.6%). Nitrifying bacteria (Nitrospira spp.) were found only in the intermediate and effluent of the oxidation ditch WWTP, and denitrifying and floc-forming bacteria were detected in all WWTPs, particularly from the families Comamonadaceae and Rhodocyclales. Twelve pathogens were assigned taxonomy by the Greengenes database, but comparison of sequences from genera and families known to contain pathogens with the NCBI nr/nt database showed that only three pathogens (Arcobacter venerupis, Laribacter hongkongensis and Neisseria canis) could be identified in the dataset at the V4 region. Importantly, Enterobacteriaceae genera could not be differentiated. Family-level taxa assigned by the Greengenes database agreed with NCBI nr/nt in most cases; however, BLAST analyses revealed erroneous taxa in the Greengenes database. This study highlights the importance of validating the taxonomy of NGS sequences against databases such as NCBI nr/nt, and recommends including the V3 region of 16S in future short-amplicon NGS studies that aim to identify bacterial enteric pathogens, as this will improve taxonomic resolution of most, but not all, Enterobacteriaceae species.
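    The cross-database validation step described above amounts to comparing each ZOTU's assignment between the two references and flagging disagreements. The sketch below illustrates that comparison only; it is not the study's actual scripts, and the ZOTU labels and assignments in the usage are invented examples.

    ```python
    def taxonomy_disagreements(greengenes: dict, ncbi: dict) -> dict:
        """Flag ZOTUs whose taxonomic assignment differs between the two
        databases, or that only one database could place. Returns a map of
        ZOTU id -> (Greengenes assignment, NCBI assignment)."""
        issues = {}
        for zotu in set(greengenes) | set(ncbi):
            g, n = greengenes.get(zotu), ncbi.get(zotu)
            if g != n:
                issues[zotu] = (g, n)
        return issues
    ```

    Running this kind of check per region (V4 alone vs. V3-V4) makes the loss of resolution within Enterobacteriaceae visible as a cluster of unresolved or conflicting assignments.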

    Illuminating the bacterial microbiome of Australian ticks with 16S and Rickettsia-specific next-generation sequencing

    Next-generation sequencing (NGS) studies show that mosquito and tick microbiomes influence the transmission of pathogens, opening new avenues for vector-borne pathogen control. Recent microbiological studies of Australian ticks highlight fundamental knowledge gaps of tick-borne agents. This investigation explored the composition, diversity and prevalence of bacteria in Australian ticks (n = 655) from companion animals (dogs, cats and horses). Bacterial 16S NGS was used to identify most bacterial taxa and a Rickettsia-specific NGS assay was developed to identify Rickettsia species that were indistinguishable at the V1-2 regions of 16S. Sanger sequencing of near full-length 16S was used to confirm whether species detected by 16S NGS were novel. The haemotropic bacterial pathogens Anaplasma platys, Bartonella clarridgeiae, “Candidatus Mycoplasma haematoparvum” and Coxiella burnetii were identified in Rhipicephalus sanguineus (s.l.) from Queensland (QLD), Western Australia, the Northern Territory (NT), and South Australia, Ixodes holocyclus from QLD, Rh. sanguineus (s.l.) from the NT, and I. holocyclus from QLD, respectively. Analysis of the control data showed that cross-talk compromises the detection of rare species as filtering thresholds for less abundant sequences had to be applied to mitigate false positives. A comparison of the taxonomic assignments made with 16S sequence databases revealed inconsistencies. The Rickettsia-specific citrate synthase gene NGS assay enabled the identification of Rickettsia co-infections with potentially novel species and genotypes most similar (97.9–99.1%) to Rickettsia raoultii and Rickettsia gravesii. “Candidatus Rickettsia jingxinensis” was identified for the first time in Australia. Phylogenetic analysis of near full-length 16S sequences confirmed a novel Coxiellaceae genus and species, two novel Francisella species, and two novel Francisella genotypes. 
Cross-talk raises concerns for the MiSeq platform as a diagnostic tool for clinical samples. This study provides recommendations for adjustments to Illumina's 16S metagenomic sequencing protocol that help track and reduce cross-talk from cross-contamination during library preparation. The inconsistencies in taxonomic assignment emphasise the need for curated and quality-checked sequence databases.
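    The mitigation described above, filtering out low-abundance sequences to suppress cross-talk false positives, can be sketched as a relative-abundance threshold. The 0.1% default below is an assumed value for illustration, not the threshold used in the study:

    ```python
    def filter_cross_talk(counts: dict, total_reads: int,
                          threshold: float = 0.001) -> dict:
        """Drop taxa whose relative abundance falls below the threshold.
        This suppresses index cross-talk false positives, at the stated
        cost of also discarding genuinely rare species."""
        return {taxon: n for taxon, n in counts.items()
                if n / total_reads >= threshold}
    ```

    The trade-off the abstract notes is visible here: any real species present below the threshold is filtered out along with the cross-talk.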

    UVisP: User-centric visualization of data provenance with gestalt principles

    The need to understand and track files (and, inherently, data) in cloud computing systems is in high demand. Over the past years, logs and graph-based data representations have become the main methods for tracking information and relating it to cloud users. Tracking such information with 'data provenance' (i.e. the series of chronicles and the derivation history of data recorded in metadata) is the emerging trend for cloud users. However, there is still much room for improving the representation of data activity in cloud systems for end-users. We propose 'User-centric Visualization of data provenance with Gestalt (UVisP)', a novel user-centric visualization technique for data provenance. This technique aims to provide the missing link between data movements in cloud computing environments and end-users' uncertain queries over their files' security and life cycle within cloud systems. The proof of concept for the UVisP technique integrates an open-source visualization API with Gestalt's theory of perception to provide a range of user-centric provenance visualizations. UVisP allows users to transform and visualize provenance (logs) with implicit prior knowledge of Gestalt's theory of perception. We present the initial development of the UVisP technique, and our results show that the integration of Gestalt and 'perceptual key(s)' in provenance visualization allows end-users to enhance their visualizing capabilities, extract useful knowledge and understand the visualizations better.