1,770 research outputs found

    QoS-aware Storage Virtualization: A Framework for Multi-tier Infrastructures in Cloud Storage Systems

    The emergence of the relatively modern phenomenon of cloud computing has introduced a different approach to the availability and storage of software and data on a remote online server ‘in the cloud’, which pre-determined users can access through the Internet, and which in certain scenarios even allows data sharing. Data availability, reliability, and access performance are three important factors that cloud providers need to take into consideration when designing a high-performance storage system for any organization. Due to the high costs of maintaining and managing multiple local storage systems, it is now considered more practical to design a virtualized multi-tier storage infrastructure; however, the expected Quality of Service (QoS) must be guaranteed at the application level within the cloud without ongoing human intervention. Such automated management seems necessary since the delivered QoS can vary widely both across and within storage tiers, depending on the access profile of the data. This survey paper presents a general framework for the optimal design of a distributed system that attains efficient data availability and reliability. To this end, numerous state-of-the-art technologies and methods are reviewed, especially for multi-tiered distributed cloud systems. Moreover, several critical aspects that must be taken into consideration to obtain optimal performance from QoS-aware cloud systems are discussed, highlighting solutions for handling failure situations and the possible advantages and benefits of QoS. Finally, this paper discusses the improvements that have been made to QoS-aware cloud systems such as Q-cloud since 2010, including any further efforts to make Q-cloud more adaptable and secure.
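    The tiering idea above can be made concrete with a small sketch: a placement policy that maps a data item's access profile to the cheapest storage tier still meeting its latency requirement. This is a minimal illustration, not the paper's framework; the tier names, latencies, and the AccessProfile fields are assumptions.

```python
from dataclasses import dataclass

@dataclass
class AccessProfile:
    reads_per_hour: float   # hypothetical access statistics
    writes_per_hour: float
    latency_slo_ms: float   # latency the application's QoS policy demands

# Illustrative tiers, fastest first: (name, typical access latency in ms)
TIERS = [("nvme", 0.2), ("ssd", 1.0), ("hdd", 8.0), ("archive", 100.0)]

def place(profile: AccessProfile) -> str:
    """Pick the cheapest (slowest) tier that still meets the latency SLO."""
    for name, latency in reversed(TIERS):
        if latency <= profile.latency_slo_ms:
            return name
    return TIERS[0][0]  # nothing is fast enough: fall back to the fastest tier

print(place(AccessProfile(reads_per_hour=500, writes_per_hour=20,
                          latency_slo_ms=2.0)))  # -> "ssd"
```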

    Securing future decentralised industrial IoT infrastructures: challenges and free open source solutions

    The next industrial revolution is said to be paved by the use of novel Internet of Things (IoT) technology. One important aspect of modern IoT infrastructures is decentralised communication, often called Peer-to-Peer (P2P). In the context of industrial communication, P2P contributes to resilience and improved stability of industrial components. Current industrial facilities, however, still rely on centralised networking schemes, which are considered mandatory to comply with security standards. To succeed, newly introduced industrial P2P technology must maintain the current level of protection and also consider possible new threats. The presented work starts with a short analysis of well-established industrial communication infrastructures and how these could benefit from decentralised structures. Subsequently, previously undefined Information Technology (IT) security requirements are derived from the new cloud-based decentralised industrial automation model architecture presented in this paper. To meet those requirements, state-of-the-art communication schemes and their open source implementations are presented and assessed for their usability in the context of industrial IoT. Finally, building blocks for industrial IoT P2P security are derived that comply with the stated industrial IoT security requirements.
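    As one example of the kind of open source security building block such an assessment covers, the sketch below sets up a mutually authenticated TLS channel between two peers using Python's standard ssl module. This is an illustrative assumption, not the paper's selected scheme; the certificate file names and peer address are hypothetical.

```python
import socket
import ssl

def connect_peer(host: str, port: int) -> ssl.SSLSocket:
    """Open a mutually authenticated, encrypted link to an industrial peer."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations("ca.pem")          # trust anchor shared by the plant (assumed file)
    ctx.load_cert_chain("node.pem", "node.key")  # this node's own identity for client auth
    raw = socket.create_connection((host, port))
    # wrap_socket verifies the peer's certificate and hostname by default
    return ctx.wrap_socket(raw, server_hostname=host)

# Hypothetical usage:
#   tls = connect_peer("peer.plant.local", 8883)
#   tls.sendall(b"telemetry frame")
```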

    A policy compliance detection architecture for data exchange infrastructures

    Data sharing and federation can significantly increase efficiency and lower the cost of digital collaborations. It is important to convince data owners that their outsourced data will be used in a secure and controlled manner. To achieve this goal, constructing a policy that governs concrete data-usage rules among all parties is essential. More importantly, we need to establish digital infrastructures that can enforce the policy. In this thesis, we investigate how to select optimal application-tailored infrastructures and enhance their policy compliance capabilities. First, we introduce a component linking the policy to infrastructure patterns. The mechanism selects the digital infrastructure patterns that satisfy the collaboration request to the maximal degree through modelling and closeness identification. Second, we present a threat-analysis-driven risk assessment framework. The framework quantitatively assesses the risk that remains when an application is delegated to a digital infrastructure. The optimal digital infrastructure for a specific data federation application is the one that supports the requested collaboration model and provides the best security guarantee. Finally, we present a distributed architecture that detects policy compliance while an algorithm executes on the data. A profile and an IDS model are built for each containerized algorithm and are distributed to endpoint execution platforms via a secure channel. Syscall traces are monitored and analysed on the endpoint execution platforms. The machine-learning-based IDS is retrained periodically to increase generalization. A sanitization algorithm filters out malicious samples to further defend the architecture against adversarial machine learning attacks.
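    The syscall-based detection step can be illustrated with a minimal sketch: build a profile of syscall n-grams from benign runs of a containerized algorithm, then score new traces by the fraction of previously unseen n-grams. The n-gram size and scoring rule are assumptions for illustration, not the thesis's actual IDS model.

```python
from collections import Counter

def ngrams(trace, n=3):
    """Slide a window of n consecutive syscalls over a trace."""
    return [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]

def build_profile(benign_traces, n=3):
    """Profile an algorithm's normal behaviour as n-gram counts."""
    profile = Counter()
    for trace in benign_traces:
        profile.update(ngrams(trace, n))
    return profile

def anomaly_score(trace, profile, n=3):
    """Fraction of n-grams never observed during benign runs."""
    grams = ngrams(trace, n)
    unseen = sum(1 for g in grams if g not in profile)
    return unseen / max(len(grams), 1)

benign = [["open", "read", "write", "close"],
          ["open", "read", "read", "close"]]
profile = build_profile(benign)
print(anomaly_score(["open", "read", "write", "close"], profile))    # 0.0 -> normal
print(anomaly_score(["socket", "connect", "execve", "fork"], profile))  # 1.0 -> suspicious
```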

    GWpilot: Enabling multi-level scheduling in distributed infrastructures with GridWay and pilot jobs

    Current systems based on pilot jobs do not exploit all the scheduling advantages that the technique offers, or they lack compatibility or adaptability. To overcome the limitations and drawbacks of existing approaches, this study presents a different general-purpose pilot system, GWpilot. This system provides individual users or institutions with an easier-to-use, easy-to-install, scalable, extendable, flexible, and adjustable framework to efficiently run legacy applications. The framework is based on the GridWay meta-scheduler and incorporates its powerful features, such as standard interfaces, fair-share policies, ranking, migration, accounting, and compatibility with diverse infrastructures. GWpilot goes beyond establishing simple network overlays to overcome waiting times in remote queues or to improve reliability in task production. It properly tackles the characterisation problem in current infrastructures, allowing users to arbitrarily incorporate customised monitoring of resources and their running applications into the system. This functionality allows the new framework to implement innovative scheduling algorithms that meet the computational needs of a wide range of calculations faster and more efficiently. The system can also easily be stacked under other software layers, such as self-schedulers. The advanced techniques included by default in the framework result in significant performance improvements, even when very short tasks are scheduled.
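    The pilot-job pattern that GWpilot generalises can be sketched briefly: a pilot lands on a remote resource, registers its locally measured characteristics (addressing the characterisation problem mentioned above), then pulls and executes user tasks until none remain. The HTTP endpoints and task format below are hypothetical stand-ins, not GWpilot's real interfaces.

```python
import subprocess
import requests  # assumed HTTP task server; the real protocol differs

SCHEDULER = "https://scheduler.example.org"  # hypothetical endpoint

def probe_resource():
    """Report locally measured characteristics so the scheduler can rank this pilot."""
    return {"cores": 4, "free_mem_mb": 8192}  # stand-in for real probes

def run_pilot():
    requests.post(f"{SCHEDULER}/register", json=probe_resource())
    while True:
        task = requests.get(f"{SCHEDULER}/next-task").json()
        if not task:  # empty response: no work left, pilot exits
            break
        result = subprocess.run(task["cmd"], shell=True,
                                capture_output=True, text=True)
        requests.post(f"{SCHEDULER}/result",
                      json={"id": task["id"], "rc": result.returncode,
                            "out": result.stdout})

if __name__ == "__main__":
    run_pilot()
```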

    From Artefacts to Infrastructures

    In their initial articulation of the direction of the CSCW field, scholars advanced an open-ended agenda. This continuing commitment to openness to different contexts and approaches is not, however, reflected in the contents of the major CSCW outlets. The field appears to privilege particular forms of cooperative work. We find many examples of what could be described as ‘localist studies’, restricted to particular settings and timeframes. This focus on the ‘here and now’ is particularly problematic when one considers the kinds of large-scale, integrated and interconnected workplace information technologies—or what we are calling Information Infrastructures—increasingly found within and across organisations today. CSCW appears unable (or unwilling) to grapple with these technologies, which were at the outset envisaged as falling within the scope of the field. Our paper hopes to facilitate greater CSCW attention to Information Infrastructures by offering a re-conceptualisation of the role and nature of ‘design’. Design within an Information Infrastructures perspective needs to accommodate non-local constraints. We discuss two such forms of constraint: standardisation (how local fitting entails unfitting at other sites) and embeddedness (the entanglement of one technology with other apparently unrelated ones). We illustrate these themes by introducing case material drawn from a number of previous studies.

    An anti-malware product test orchestration solution for multiple pluggable environments

    The term automation gets thrown around a lot these days in the software industry. However, the recent change in test automation in the software engineering process is driven by multiple factors, both environmental (external and internal) and industry-driven. Put simply, automation is the use of technology to perform a task. The choice of the right tools, be they in-house or third-party software, can increase the effectiveness, efficiency, and coverage of security product testing. Often, test environments are maintained at various stages of the testing process: developer test, dedicated test, integration test, and pre-production or business readiness test are common phrases in software testing. At the same time, abstraction is often introduced between architectural layers, and the virtualization platforms used as test execution environments, such as VMWare, OpenStack, and AWS, change constantly and differ in their state of maintainability. As there is an obvious mismatch in configuration between development, testing, and production environments, the software testing process is often slow and tedious for many organizations due to the lack of collaboration between IT Operations and Software Development teams. Because of this, identifying and addressing test environment-related compatibility issues becomes a major concern for QA teams. In this context, this thesis presents a DevOps approach and an implementation of an automated test execution solution named OneTA that can interact with multiple test environments, including isolated malware test environments. The study identifies a common way of preparing test environments on in-house and publicly available virtualization platforms where distributed tests can run on a regular basis. The current solution allows security product testing in multiple pluggable environments in a single setup, utilizing modern DevOps practice to minimize effort. The thesis project was carried out in collaboration with F-Secure, a leading cyber security company in Finland. It deals with the company's internal environments for test execution and explores the available infrastructures so that the software development team can use the solution as a test execution tool.
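    The pluggable-environment idea can be sketched as a single orchestration interface with interchangeable backends for different virtualization platforms. The class and method names below are illustrative assumptions; the abstract does not describe OneTA's actual internal design.

```python
from abc import ABC, abstractmethod

class TestEnvironment(ABC):
    """One interface, many pluggable backends (VMWare, OpenStack, AWS, ...)."""
    @abstractmethod
    def provision(self) -> str: ...
    @abstractmethod
    def run_tests(self, suite: str) -> bool: ...
    @abstractmethod
    def teardown(self) -> None: ...

class OpenStackEnvironment(TestEnvironment):
    def provision(self) -> str:
        # A real backend would call the OpenStack API; stubbed for the sketch.
        return "vm-openstack-001"
    def run_tests(self, suite: str) -> bool:
        print(f"running {suite} on OpenStack instance")
        return True
    def teardown(self) -> None:
        print("deleting OpenStack instance")

def orchestrate(env: TestEnvironment, suite: str) -> bool:
    """Provision, test, and always reclaim the environment afterwards."""
    env.provision()
    try:
        return env.run_tests(suite)
    finally:
        env.teardown()

orchestrate(OpenStackEnvironment(), "anti-malware-smoke")
```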