    Experimental analysis of computer system dependability

    This paper reviews an area that has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for the three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: the design phase, the prototype phase, and the operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by a discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools, including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
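    Importance sampling, mentioned above as a way to accelerate Monte Carlo simulation, can be illustrated with a minimal sketch that estimates a rare early-failure probability under an exponential failure model, drawing samples from a biased distribution and re-weighting them by the likelihood ratio. The function name, the exponential model, and the parameter values are illustrative assumptions, not details taken from the survey.

        import math
        import random

        def failure_probability(threshold, rate, n_samples, bias_rate=None, seed=0):
            """Estimate P(time to failure < threshold) for an exponential failure model.

            When bias_rate is set, samples come from a 'faster' exponential
            distribution and are re-weighted by the likelihood ratio, so rare
            early failures are observed far more often (importance sampling).
            """
            rng = random.Random(seed)
            sampling_rate = bias_rate if bias_rate is not None else rate
            total = 0.0
            for _ in range(n_samples):
                t = rng.expovariate(sampling_rate)   # draw from the (possibly biased) density
                if t < threshold:                    # the rare event of interest
                    # likelihood ratio: true density / sampling density at t
                    total += (rate * math.exp(-rate * t)) / (sampling_rate * math.exp(-sampling_rate * t))
            return total / n_samples

        # Plain Monte Carlo vs. importance sampling for a rare early failure
        print(failure_probability(threshold=0.001, rate=1.0, n_samples=10_000))
        print(failure_probability(threshold=0.001, rate=1.0, n_samples=10_000, bias_rate=1000.0))

    With the biased rate set well above the true rate, almost every sample hits the rare event, so far fewer samples are needed for a stable estimate of the same probability.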

    Data partitioning and load balancing in parallel disk systems

    Parallel disk systems provide opportunities for exploiting I/O parallelism in two possible ways, namely via inter-request and intra-request parallelism. In this paper we discuss the main issues in performance tuning of such systems, namely striping and load balancing, and show their relationship to response time and throughput. We outline the main components of an intelligent file system that optimizes striping by taking into account the requirements of the applications, and performs load balancing by judicious file allocation and dynamic redistribution of the data when access patterns change. Our system uses simple but effective heuristics that incur only a small overhead. We present performance experiments based on synthetic workloads and real-life traces.
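    The striping and allocation heuristics are only summarized above; a minimal sketch of round-robin striping over the currently least-loaded disks conveys the idea. The class and method names are illustrative assumptions, not the algorithms of the intelligent file system described in the paper.

        class ParallelDiskSystem:
            """Toy model of striping and load-balanced file allocation."""

            def __init__(self, n_disks):
                self.n_disks = n_disks
                self.load = [0] * n_disks          # blocks currently placed on each disk

            def stripe_file(self, n_blocks, stripe_width):
                """Spread a file's blocks round-robin over stripe_width disks,
                starting with the currently least-loaded disks."""
                disks = sorted(range(self.n_disks), key=lambda d: self.load[d])[:stripe_width]
                placement = []
                for block in range(n_blocks):
                    disk = disks[block % stripe_width]   # round-robin within the stripe
                    self.load[disk] += 1
                    placement.append((block, disk))
                return placement

        system = ParallelDiskSystem(n_disks=8)
        print(system.stripe_file(n_blocks=12, stripe_width=4))   # intra-request parallelism
        print(system.load)                                       # per-disk load after allocation

    Choosing the stripe width per file trades intra-request parallelism against the number of disks each request occupies, which is the relationship to response time and throughput that the paper examines.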

    Principle and method of deception systems synthesizing for malware and computer attacks detection

    The number of different types of malware and computer attacks, and their overall volume, is constantly increasing, so detecting and counteracting them remains a pressing issue. Users of corporate networks suffer the greatest damage. Many effective tools of various kinds have been developed to detect and counteract these threats. However, the pace at which new malware appears and the diversity of computer attacks force developers of detection and countermeasure tools to constantly improve them and create new ones. The object of research in this paper is deception systems, and the task of this study is to develop elements of the theory and practice of creating such systems. Deception systems occupy a special place among the means of detecting and counteracting malware and computer attacks: they confuse attackers, but they also require constant changes and updates, because the peculiarities of their functioning become known over time. The problem of creating deception systems whose functioning remains incomprehensible to attackers is therefore relevant. To solve this problem, we propose a new principle for the synthesis of such systems. Because such systems are built from the computer stations of a corporate network, the system is positioned as a multi-computer system. It uses combined decoys and traps to create false attack targets, and all of its components form a shadow computer network. This study develops a principle for synthesizing multi-computer systems with combined decoys and traps and a decision-making controller for detecting and countering malware and computer attacks. The principle is based on the presence of a controller for the decisions made in the system and the use of specialized functionality for detection and counteraction. According to this principle, the paper identifies a subset of systems with deception technologies that must have a controller and specialized functionality. The decision-making controller is separate from the decision-making center; its task is to choose among the options for the system's next steps, which are formed in the center, depending on the recurrence of events. Prolonged recurrence of external events requires the system center to form a sequence of next steps, and if those steps repeat, the attacker has an opportunity to study how the system functions. The controller therefore chooses different responses from the set of possible responses for the same repeated suspicious events, so an attacker investigating the corporate network receives different answers to the same queries. Specialized functionality, in accordance with the synthesis principle, is implemented in the system architecture and changes that architecture during operation as a result of internal and external influences. This paper also considers a possible variant of the architecture of such deception systems, in particular an architecture with partial centralization. To synthesize such systems, a new method for synthesizing partially centralized systems for detecting malware in computer environments has been developed, based on analytical expressions that determine the security state of such systems and their components. In addition, the experiments showed that the loss of 10-20% of the components does not prevent the system from performing its task.
The experimental results were processed using ROC analysis and an algorithm for constructing the ROC curve, which made it possible to determine the degree of degradation of systems built in this manner. In conclusion, this paper presents a new principle for the synthesis of multi-computer systems with combined decoys and traps and a decision-making controller for detecting and counteracting malware and computer attacks, as well as a method for synthesizing partially centralized systems for detecting malware in computer networks.
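The role of the decision-making controller, returning a different response each time the same suspicious event recurs so that repeated probes do not expose a fixed pattern, can be sketched as follows. The class name, event keys, and candidate responses are illustrative assumptions, not the paper's implementation.

    import random
    from collections import defaultdict

    class DeceptionController:
        """Chooses a different candidate response each time the same suspicious
        event recurs, so repeated probes do not reveal a fixed behaviour."""

        def __init__(self, seed=0):
            self.rng = random.Random(seed)
            self.history = defaultdict(list)    # event key -> responses already used

        def respond(self, event_key, candidate_responses):
            used = self.history[event_key]
            unused = [r for r in candidate_responses if r not in used]
            # prefer a response not yet shown for this event; recycle once all are used
            choice = self.rng.choice(unused if unused else candidate_responses)
            used.append(choice)
            return choice

    controller = DeceptionController()
    candidates = ["fake SSH banner", "decoy file share", "trap credential", "delayed reply"]
    for _ in range(5):
        print(controller.respond("port scan from 10.0.0.5", candidates))

An attacker repeating the same probe against the shadow network therefore sees varying behaviour, which is the property the controller is introduced to preserve.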

    Design and performance analysis of a relational replicated database system

    The hardware organization and software structure of a new database system are presented. This system, the relational replicated database system (RRDS), is based on a set of replicated processors operating on a partitioned database. Performance improvements and capacity growth can be obtained by adding more processors to the configuration. Based on the design goals, a set of hardware and software design questions was developed. The system then evolved through a five-phase process, based on simulation and analysis, which addressed and resolved those design questions. Strategies and algorithms were developed for data access, data placement, and directory management for the hardware organization. A predictive performance analysis was conducted to determine the extent to which the original design goals were satisfied. The predictive performance results, along with an analytical comparison with three other relational multi-backend systems, provided information about the strengths and weaknesses of our design as well as a basis for future research.
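    The data placement strategy is only summarized here; a minimal hash-partitioning sketch shows how tuples of a relation could be spread over replicated back-end processors. The partitioning key, function name, and the use of hashing are illustrative assumptions rather than the RRDS data placement and directory management algorithms themselves.

        import hashlib

        def home_processor(key, n_processors):
            """Map a tuple's partitioning key to a back-end processor with a stable hash,
            so tuples spread evenly and any node can compute a tuple's location."""
            digest = hashlib.sha256(str(key).encode()).hexdigest()
            return int(digest, 16) % n_processors

        # Distribute tuples of a relation over four back-end processors
        rows = [("emp_%d" % i, i * 100) for i in range(10)]
        partitions = {}
        for row in rows:
            p = home_processor(row[0], n_processors=4)
            partitions.setdefault(p, []).append(row)
        print(partitions)

    The placement rule interacts with directory management and with capacity growth by adding processors, both of which the abstract lists among the design issues the five-phase process addressed.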