
    Data Replication Strategies in Cloud Computing

    Data replication is a widely used technique in a variety of systems. For example, it can be employed in large-scale distributed file systems to increase data availability and system reliability, or in network models such as data grids and Amazon CloudFront to reduce access latency and network bandwidth consumption. I study a series of problems related to the data replication method in the Hadoop Distributed File System (HDFS) and in the Amazon CloudFront service. Data failure, caused by hardware failure or malfunction, software error, or human error, is the greatest threat to a file storage system. I present a set of schemes that enhance the efficiency of the current data replication strategy in HDFS, thereby improving system reliability and performance. I also study the application replication placement problem based on an Original-Front server model, and I propose a novel strategy that aims to maximize the profit of the application providers.
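    The schemes themselves are not described in the abstract, but the baseline they build on, per-file replication factors in HDFS, can be illustrated with the standard Hadoop FileSystem client API. This is a minimal sketch of the stock mechanism, not the proposed strategy; the path and replication values are hypothetical.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Default replication factor applied to files created by this client.
        conf.setInt("dfs.replication", 3);
        FileSystem fs = FileSystem.get(conf);

        // Ask the NameNode to raise the replication factor of an existing file
        // (hypothetical path); extra replicas are created asynchronously.
        Path file = new Path("/data/example.txt");
        boolean accepted = fs.setReplication(file, (short) 5);
        System.out.println("Replication change accepted: " + accepted);
        fs.close();
    }
}
```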

    Distributed Detection of DDoS Attacks During the Intermediate Phase Through Mobile Agents

    A Distributed Denial of Service (DDoS) attack is a large-scale, coordinated attack on the availability of services of a victim system, launched indirectly through many compromised computers on the Internet. Intrusion detection systems are network security tools that process local audit data or monitor network traffic to search for specific patterns or for deviations from expected behavior that indicate malicious activity against the protected network. In this study, we propose distributed intrusion detection methods to detect DDoS attacks in a specific dataset and test these methods in a simulated real-time environment, in which the mobile agents are synchronized with the timestamps in the dataset. All of our methods use the alarms generated by SNORT, a signature-based network intrusion detection system. Our methods run mobile agents on the Jade platform to reduce network bandwidth usage and to decrease dependency on a central unit, which improves reliability. The methods are compared in terms of reliability, network load and mean detection time.
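    The abstract does not spell out the detection logic; purely as an illustration of one ingredient it mentions, counting SNORT alarms per source inside a time window driven by the dataset timestamps, a hypothetical aggregator might look like this. The class name, window length and threshold are assumptions, not values from the study.

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative alarm aggregator: flags a source once its alarm count in the
 *  current time window reaches a threshold. Not the method from the study. */
public class AlarmAggregator {
    private final long windowMillis;
    private final int threshold;
    private final Map<String, Integer> counts = new HashMap<>();
    private long windowStart;

    public AlarmAggregator(long windowMillis, int threshold) {
        this.windowMillis = windowMillis;
        this.threshold = threshold;
    }

    /** Feed one SNORT alarm, timestamped as in the dataset; returns true if the
     *  source now looks suspicious within the current window. */
    public boolean onAlarm(String sourceIp, long timestampMillis) {
        if (timestampMillis - windowStart > windowMillis) {
            counts.clear();               // start a fresh window
            windowStart = timestampMillis;
        }
        return counts.merge(sourceIp, 1, Integer::sum) >= threshold;
    }
}
```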

    Systems and certification issues for civil transport aircraft flow control systems

    This article is placed here with permission from the Royal Aeronautical Society. Copyright © 2009 Royal Aeronautical Society. The use of flow control (FC) technology on civil transport aircraft is seen as a potential means of providing a step change in aerodynamic performance in the 2020 time frame. There has been extensive research into the flow physics associated with FC. This paper focuses on developing an understanding of the costs and design drivers associated with the systems needed and with certification. The research method adopted is based on three strands: 1. study of the historical development of other disruptive technologies for civil transport aircraft; 2. analysis of the impact of legal and commercial requirements; and 3. technological foresight based on technology trends for aircraft currently under development. Fly-by-wire and composite materials are identified as two historical examples of successful implementation of disruptive new technology. Both took decades to develop and were initially developed for military markets. The most widely studied technology similar to FC is laminar flow control; despite more than six decades of research, and arguably successful operational demonstration in the 1990s, it has not been successfully transitioned to commercial products. Significant future challenges are identified in the cost-effective provision of the additional systems required for environmental protection and in-service monitoring of FC systems, particularly where multiple distributed actuators are envisaged. FC-generated noise is also seen as a significant challenge. The additional complexity introduced by FC systems must also be balanced against the commercial imperative of dispatch reliability, which may impose more stringent constraints than legal (certification) requirements. It is proposed that a key driver for the future successful application of FC is the likely availability of significant electrical power generation on aircraft from the 787 onwards, which increases the competitiveness of electrically driven FC systems compared with those using engine bleed air. At the current rate of progress it is unlikely that FC will contribute to the next generation of single-aisle aircraft due to enter service in 2015. In the longer term, significant movement is needed across a broad range of systems technologies before the aerodynamic benefits of FC can be exploited. This work is supported by the EU FP6 AVERT (Aerodynamic Validation of Emissions Reducing Technologies) project.

    The Performability Manager

    The authors describe the performability manager, a distributed-system component that contributes to more effective and efficient use of system components and prevents quality of service (QoS) degradation. The performability manager dynamically reconfigures distributed systems whenever needed, to recover from failures and to allow the system to evolve over time and include new functionality. Large systems require dynamic reconfiguration to support change without shutting down the complete system. A distributed system monitor is needed to verify QoS. Monitoring a distributed system is difficult because of synchronization problems and minor differences in clock speeds. The authors describe the functionality and operation of the performability manager, both informally and formally. Throughout the paper they illustrate the approach with an example distributed application: an ANSAware-based number translation service (NTS) from the intelligent networks (IN) area.
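    As a rough sketch of the monitor-then-reconfigure loop described above (not the actual performability manager), a component could periodically compare a measured QoS value against the agreed level and trigger a reconfiguration action when it falls short. All names and values below are hypothetical.

```java
import java.util.function.DoubleSupplier;

/** Hypothetical QoS watchdog: checks one metric against an agreed level and
 *  invokes a reconfiguration action when the level is not met. */
public class QosMonitor {
    private final DoubleSupplier measuredQos;   // e.g. fraction of successful NTS calls
    private final double agreedLevel;           // e.g. 0.999
    private final Runnable reconfigure;         // e.g. restart or relocate a component

    public QosMonitor(DoubleSupplier measuredQos, double agreedLevel, Runnable reconfigure) {
        this.measuredQos = measuredQos;
        this.agreedLevel = agreedLevel;
        this.reconfigure = reconfigure;
    }

    /** Called periodically by a scheduler. */
    public void checkOnce() {
        if (measuredQos.getAsDouble() < agreedLevel) {
            reconfigure.run();  // recover without shutting down the complete system
        }
    }
}
```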

    Alpha Entanglement Codes: Practical Erasure Codes to Archive Data in Unreliable Environments

    Data centres that use consumer-grade disk drives and distributed peer-to-peer systems are unreliable environments in which to archive data without enough redundancy. Most redundancy schemes are not completely effective at providing high availability, durability and integrity in the long term. We propose alpha entanglement codes, a mechanism that creates a virtual layer of highly interconnected storage devices to propagate redundant information across a large-scale storage system. Our motivation is to design flexible and practical erasure codes with high fault tolerance to improve data durability and availability even in catastrophic scenarios. By flexible and practical, we mean code settings that can be adapted to future requirements, and practical implementations with reasonable trade-offs between security, resource usage and performance. The codes have three parameters. Alpha increases storage overhead linearly but increases the possible paths to recover data exponentially. The two other parameters increase fault tolerance even further without the need for additional storage. As a result, an entangled storage system can provide high availability and durability and offers additional integrity: it is more difficult to modify data undetectably. We evaluate how several redundancy schemes perform in unreliable environments and show that alpha entanglement codes are flexible and practical. Remarkably, they excel at code locality; hence they reduce repair costs and become less dependent on storage locations with poor availability. Our solution outperforms Reed-Solomon codes in many disaster recovery scenarios. Comment: The publication has 12 pages and 13 figures. This work was partially supported by Swiss National Science Foundation SNSF Doc.Mobility 162014. 2018 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN).
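    For context on the baselines the paper evaluates against (this is not the entanglement code itself), the storage overhead and number of tolerated failures of two conventional redundancy schemes, n-way replication and Reed-Solomon coding, can be computed as follows; the parameter values are hypothetical examples.

```java
/** Background only: overhead and fault tolerance of two baseline redundancy schemes. */
public class RedundancyBaselines {
    public static void main(String[] args) {
        // n-way replication: n full copies, survives the loss of n - 1 of them.
        int copies = 3;
        System.out.printf("Replication x%d: %.0f%% overhead, tolerates %d failures%n",
                copies, (copies - 1) * 100.0, copies - 1);

        // Reed-Solomon RS(k, m): k data blocks + m parity blocks,
        // survives the loss of any m blocks.
        int k = 10, m = 4;
        System.out.printf("RS(%d,%d): %.0f%% overhead, tolerates %d failures%n",
                k, m, 100.0 * m / k, m);
    }
}
```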

    A2THOS: Availability Analysis and Optimisation in SLAs

    IT service availability is at the core of customer satisfaction and business success for today’s organisations. Many medium-to-large organisations outsource part of their IT services to external providers, with Service Level Agreements (SLAs) describing the agreed availability of outsourced service components. Availability management of partially outsourced IT services is a non-trivial task, since classic approaches for calculating availability are not applicable and IT managers can only rely on their expertise, which often leads to the adoption of non-optimal solutions. In this paper we present A2THOS, a framework to calculate the availability of partially outsourced IT services in the presence of SLAs and to achieve a cost-optimal choice of availability levels for outsourced IT components while guaranteeing a target availability level for the service.
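    As background for the "classic approaches for calculating availability" that the abstract says no longer apply once components are outsourced under SLAs, the textbook series/parallel composition rules are sketched below with hypothetical component values.

```java
/** Textbook availability arithmetic for component compositions (background only). */
public class Availability {
    // Components in series: the service is up only if every component is up.
    static double series(double... a) {
        double up = 1.0;
        for (double x : a) up *= x;
        return up;
    }

    // Redundant (parallel) components: the service is down only if all are down.
    static double parallel(double... a) {
        double down = 1.0;
        for (double x : a) down *= (1.0 - x);
        return 1.0 - down;
    }

    public static void main(String[] args) {
        // Hypothetical: an in-house front end (99.9%) in series with an outsourced
        // back end made of two redundant instances at 99.5% each.
        double backEnd = parallel(0.995, 0.995);
        System.out.printf("End-to-end availability: %.5f%n", series(0.999, backEnd));
    }
}
```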