
    Escrow: A large-scale web vulnerability assessment tool

    The reliance on Web applications has increased rapidly over the years. At the same time, the quantity and impact of application security vulnerabilities have grown as well. Amongst these vulnerabilities, SQL Injection has been classified as the most common, dangerous and prevalent web application flaw. In this paper, we propose Escrow, a large-scale SQL Injection detection tool with an exploitation module that is lightweight, fast and platform-independent. Escrow uses a custom search implementation together with a static code analysis module to find potential target web applications. Additionally, it provides a simple-to-use graphical user interface (GUI) to navigate through a vulnerable remote database. Escrow is implementation-agnostic, i.e. it can analyse any web application regardless of the server-side implementation (PHP, ASP, etc.). Using our tool, we discovered that it is indeed possible to identify and exploit at least 100 databases per 100 minutes, without prior knowledge of their underlying implementation. We observed that for each query sent, we can scan and detect dozens of vulnerable web applications in a short space of time, while providing a means for exploitation. Finally, we provide recommendations for developers to defend against SQL injection and emphasise the need for proactive assessment and defensive coding practices.
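
    The abstract does not detail Escrow's detection logic, but one implementation-agnostic way such a scanner could flag candidate targets is error-based probing: append a single quote to a query parameter and look for database error signatures in the response, which works regardless of the server-side language. A minimal Python sketch under that assumption (the URL, payload and signature list are illustrative, not Escrow's actual method):

    # Hypothetical error-based SQL injection probe -- illustrative only,
    # not Escrow's actual detection module.
    import urllib.parse
    import urllib.request

    # Error fragments that commonly surface when a stray quote breaks a query.
    ERROR_SIGNATURES = [
        "you have an error in your sql syntax",   # MySQL
        "unclosed quotation mark",                # SQL Server
        "pg::syntaxerror",                        # PostgreSQL
        "ora-01756",                              # Oracle
    ]

    def looks_injectable(url: str, param: str) -> bool:
        """Append a quote to one query parameter and scan the response body
        for database error signatures."""
        parts = urllib.parse.urlsplit(url)
        query = dict(urllib.parse.parse_qsl(parts.query))
        query[param] = query.get(param, "") + "'"   # classic probe payload
        probed = parts._replace(query=urllib.parse.urlencode(query)).geturl()
        with urllib.request.urlopen(probed, timeout=10) as resp:
            body = resp.read().decode(errors="replace").lower()
        return any(sig in body for sig in ERROR_SIGNATURES)

    # e.g. looks_injectable("http://example.com/item.php?id=3", "id")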

    Progger: an efficient, tamper-evident kernel-space logger for cloud data provenance tracking

    Cloud data provenance, or "what has happened to my data in the cloud", is a critical data security component which addresses pressing data accountability and data governance issues in cloud computing systems. In this paper, we present Progger (Provenance Logger), a kernel-space logger which potentially empowers all cloud stakeholders to trace their data. Logging from the kernel space empowers security analysts to collect provenance from the lowest possible atomic data actions, and enables several higher-level tools to be built for effective end-to-end tracking of data provenance. Within the last few years, an increasing number of kernel-space provenance tools have been proposed, but they face several critical data security and integrity problems. These prior tools fail to provide (1) log tamper-evidence and prevention of fake/manual entries, (2) accurate and granular timestamp synchronisation across several machines, (3) manageable log space requirements and growth, and (4) efficient logging of root usage of the system. Progger resolves all these critical issues and, as such, provides high assurance of data security and data activity audit. With this in mind, the paper discusses these elements of high-assurance cloud data provenance, describes the design of Progger and its efficiency, and presents compelling results which pave the way for Progger to become a foundational tool for data activity tracking across all cloud systems.
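
    The tamper-evidence property named in point (1) is commonly achieved by chaining log entries, so that editing, dropping or inserting an entry invalidates every later digest. The sketch below shows that generic idea in Python; it is an assumption for illustration, not Progger's actual kernel-space mechanism.

    # Generic hash-chained log: each entry commits to the previous digest.
    import hashlib
    import json

    def append_entry(log: list, record: dict) -> None:
        prev = log[-1]["digest"] if log else "0" * 64
        payload = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        log.append({"record": record, "digest": digest})

    def verify(log: list) -> bool:
        prev = "0" * 64
        for entry in log:
            payload = json.dumps(entry["record"], sort_keys=True)
            if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["digest"]:
                return False  # an edited, dropped or forged entry breaks the chain here
            prev = entry["digest"]
        return True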

    Virtual numbers for virtual machines?

    Knowing the number of virtual machines (VMs) that a cloud's physical hardware can (further) support is critical, as it has implications on provisioning and hardware procurement. However, the maximum number of VMs possible on a given piece of hardware is usually estimated as the ratio of a VM's specifications to the underlying cloud hardware's specifications. Such naive, linear estimation methods mostly yield impractical limits as to how many VMs the hardware can actually support: we found that user experience on VMs at the limits given by this naive division method would be severely degraded. In this paper, we demonstrate through experimental results the significant gap between the limits derived using the estimation method mentioned above and the actual situation. We believe this calls for a more practicable estimation of the limits of the underlying infrastructure.
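
    To make the critique concrete, the naive estimate divides each host specification by the per-VM specification and takes the tightest constraint. The numbers below are illustrative assumptions, not figures from the paper:

    # Naive division estimate the paper argues against (illustrative numbers).
    host = {"vcpus": 32, "ram_gb": 256, "disk_gb": 4000}
    vm   = {"vcpus": 2,  "ram_gb": 4,   "disk_gb": 40}

    naive_limit = min(host[k] // vm[k] for k in vm)   # min(16, 64, 100) = 16
    print(naive_limit)  # 16 VMs -- yet user experience degrades well before this limit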

    From Linked Data to Relevant Data -- Time is the Essence

    The Semantic Web initiative puts emphasis not primarily on putting data on the Web, but rather on creating links in a way that both humans and machines can explore the Web of data. When such users access the Web, they leave a trail, as Web servers maintain a history of requests. Web usage mining approaches have been studied since the beginning of the Web, given the logs' huge potential for purposes such as resource annotation, personalization, forecasting, etc. However, the impact of any such efforts has not really gone beyond generating statistics detailing who, when, and how Web pages maintained by a Web server were visited.
    Comment: 1st International Workshop on Usage Analysis and the Web of Data (USEWOD2011) at the 20th International World Wide Web Conference (WWW2011), Hyderabad, India, March 28th, 2011
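
    As a concrete picture of the kind of statistic the paper says usage mining has been limited to, the sketch below tallies page visits from a Common Log Format access log. The file name and log format are assumptions for illustration, not the paper's code.

    # Count page visits ("who, when, and how pages were visited") from a
    # Common Log Format access log -- illustrative only.
    import re
    from collections import Counter

    CLF = re.compile(r'(?P<host>\S+) \S+ \S+ \[(?P<when>[^\]]+)\] "(?:GET|POST) (?P<path>\S+)')

    hits = Counter()
    with open("access.log") as f:
        for line in f:
            m = CLF.match(line)
            if m:
                hits[m["path"]] += 1

    print(hits.most_common(10))   # most-visited pages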

    Time for Cloud? Design and implementation of a time-based cloud resource management system

    The current pay-per-use model adopted by public cloud service providers has influenced the perception of how a cloud should provide its resources to end-users, i.e. on-demand and with access to an unlimited amount of resources. However, not all clouds are equal. While such provisioning models work for well-endowed public clouds, they may not always work well in private clouds with limited budget and resources, such as research and education clouds. Private clouds also stand to be impacted greatly by issues such as user resource hogging and the misuse of resources for nefarious activities. These problems are usually caused by challenges such as (1) limited physical servers/budget, (2) a growing number of users and (3) the inability to gracefully and automatically relinquish resources from inactive users. Currently, cloud resource management frameworks used for private cloud setups, such as OpenStack and CloudStack, only use the pay-per-use model as the basis when provisioning resources to users. In this paper, we propose OpenStack Café, a novel methodology adopting the concepts of 'time' and 'booking systems' to manage resources of private clouds. By allowing users to book resources over specific time-slots, our proposed solution can efficiently and automatically help administrators manage users' access to resources, addressing the issue of resource hogging and gracefully relinquishing resources back to the pool in resource-constrained private cloud setups. Work is currently in progress to adopt Café into OpenStack as a feature, and results from our prototype show promise. We also present insights into lessons learnt during the design and implementation of our proposed methodology.
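
    The booking idea reduces to interval bookkeeping: grant a resource only for a time-slot that does not overlap an existing booking, and reclaim slots whose end time has passed. A minimal Python sketch of that idea follows; the class and method names are assumptions, not Café's actual interface.

    # Time-slot booking with overlap rejection and automatic reclamation.
    from datetime import datetime

    class SlotCalendar:
        def __init__(self):
            self.bookings = []   # list of (start, end, user) tuples

        def book(self, user: str, start: datetime, end: datetime) -> bool:
            # Two slots overlap iff each starts before the other ends.
            if any(start < e and s < end for s, e, _ in self.bookings):
                return False     # reject instead of over-committing the resource
            self.bookings.append((start, end, user))
            return True

        def reclaim(self, now: datetime) -> list:
            expired = [b for b in self.bookings if b[1] <= now]
            self.bookings = [b for b in self.bookings if b[1] > now]
            return expired       # resources gracefully returned to the pool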

    Streptococcus Pneumoniae Intracranial Abscess and Post-Infectious Vasculitis

    Intracranial abscesses are rare complications of Streptococcus pneumoniae infections, and to our knowledge, there have been no case reports of post-infectious vasculitis developing in such patients. Here we describe the case of a 48-year-old post-splenectomy male who developed post-infectious vasculitis following S. pneumoniae otitis media complicated by mastoiditis, osteomyelitis, meningitis, and intracranial abscess. Clinicians ought to be aware of the possible adverse outcomes of invasive S. pneumoniae infection and the limitations of current treatment options.

    Digital Trust - Trusted Computing and Beyond: A Position Paper

    Along with the invention of computers and interconnected networks, physical societal notions like security, trust, and privacy entered the digital environment. The concept of digital environments begins with trust (established in the real world) in the organisation or individual that manages the digital resources. This concept evolved to deal with the rapid growth of the Internet, where it became impractical for entities to have prior offline (real-world) trust. The evolution of digital trust took diverse approaches, and trust is now defined and understood differently across heterogeneous domains. This paper looks at digital trust from the point of view of security and examines how valid trust approaches from other domains are now making their way into secure computing. The paper also revisits and analyses the Trusted Platform Module (TPM), along with associated technologies, and their relevance in the changing landscape. We especially focus on the domains of cloud computing, mobile computing and cyber-physical systems. In addition, the paper explores our proposals that compete with, and extend, the traditional functionality of the TPM specifications.

    Unified Model for Data Security -- A Position Paper

    One of the most crucial components of modern Information Technology (IT) systems is data. It can be argued that the majority of IT systems are built to collect, store, modify, communicate and use data, enabling different data stakeholders to access and use it to achieve different business objectives. The confidentiality, integrity, availability, auditability, privacy, and quality of the data are of paramount concern for end-users ranging from ordinary consumers to multi-national companies. Over the course of time, different frameworks have been proposed and deployed to provide data security. Many of these previous paradigms were specific to particular domains, such as the military or media content providers, while in other cases they were generic to different verticals within an industry. There is a much-needed push for a holistic approach to data security instead of the current bespoke approaches. The age of the Internet has witnessed an increased ease of sharing data with or without authorisation. These scenarios have created new challenges for traditional data security. In this paper, we study the evolution of data security from the perspective of past proposed frameworks, and present a novel Unified Model for Data Security (UMDS). The discussed UMDS reduces the friction from several cross-domain challenges, and has the functionality to potentially provide comprehensive data security to data owners and privileged users.

    Prospective marketing meta-analysis and a novel web-based media-mix modeling experiment

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 89-90).
    Prospective meta-analysis, pioneered in the biomedical field, is the meta-analysis of multiple studies conducted using similar protocols and under similar conditions. To eliminate bias, the inclusion of individual studies in the meta-analysis is agnostic of the findings of the individual experiment. In this thesis, I adapt prospective meta-analysis for use in the field of marketing science. Specifically, I design and create a database for prospective marketing meta-analysis that encourages and facilitates international collaboration and scale-up of marketing science studies, and use this platform as the basis for a novel web-based media-mix modeling experiment that aims to model the relative effects of a variety of media. I detail the design and implementation of this web-based media-mix modeling experiment, which introduces the use of a browser extension to modify the media experience for test subjects based on their responses to pre-survey questions. I present preliminary results from a 50-user trial run of the system and analyze improvements and next steps, both for the current experiment and scale-up for future studies to include in the meta-analysis.
    by Ryan Ko. M.Eng.
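
    The core of a media-mix model is a regression of an outcome on per-channel exposure, which yields each medium's relative effect. The sketch below shows that generic idea with ordinary least squares; the channel names and data are fabricated placeholders, not the thesis's design or results.

    # Generic media-mix regression -- illustrative placeholders only.
    import numpy as np

    # Rows: subjects; columns: exposure counts for [banner, video, social].
    X = np.array([[3, 1, 0], [0, 2, 4], [5, 0, 1], [2, 3, 2], [1, 1, 1]], float)
    y = np.array([2.0, 3.5, 1.8, 3.9, 1.9])          # e.g. purchase-intent scores

    X1 = np.hstack([np.ones((len(X), 1)), X])        # prepend an intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)    # ordinary least squares fit
    print(dict(zip(["intercept", "banner", "video", "social"], coef.round(3))))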