Experimental analysis of computer system dependability
This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: the design phase, the prototype phase, and the operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by a discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools, including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
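The importance-sampling idea mentioned in the abstract can be illustrated with a minimal sketch. This is not a reconstruction of any tool from the survey (FOCUS, DEPEND, etc.); it only shows the statistical technique itself, estimating a rare-event tail probability that naive Monte Carlo would almost never sample. The function name, the Exp(1) model, and the choice of proposal rate are all illustrative assumptions.

```python
import math
import random

def importance_sampling_tail(threshold, n_samples=100_000, seed=42):
    """Estimate the rare-event probability P(X > threshold) for X ~ Exp(1).

    Naive Monte Carlo almost never hits a deep tail, so we draw from a
    heavier-tailed proposal Exp(lam) with lam = 1/threshold and reweight
    each sample by the likelihood ratio f(x)/q(x)."""
    rng = random.Random(seed)
    lam = 1.0 / threshold            # proposal rate: mean of Exp(lam) sits at the threshold
    total = 0.0
    for _ in range(n_samples):
        x = rng.expovariate(lam)     # sample from the proposal q
        if x > threshold:
            # weight = f(x) / q(x) = exp(-x) / (lam * exp(-lam * x))
            total += math.exp(-x) / (lam * math.exp(-lam * x))
    return total / n_samples

estimate = importance_sampling_tail(15.0)
exact = math.exp(-15.0)              # true tail probability, about 3.1e-7
```

With 100,000 naive Exp(1) samples the event X > 15 is essentially never observed, so the naive estimate is 0; the reweighted estimator recovers the probability to within a few percent.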
Data partitioning and load balancing in parallel disk systems
Parallel disk systems provide opportunities for exploiting I/O parallelism in two possible ways, namely via inter-request and intra-request parallelism. In this paper we discuss the main issues in performance tuning of such systems, namely striping and load balancing, and show their relationship to response time and throughput. We outline the main components of an intelligent file system that optimizes striping by taking into account the requirements of the applications, and performs load balancing by judicious file allocation and dynamic redistribution of the data when access patterns change. Our system uses simple but effective heuristics that incur little overhead. We present performance experiments based on synthetic workloads and real-life traces.
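The abstract does not spell out the heuristics, but the two tuning knobs it names — striping and allocation-based load balancing — can be sketched in a few lines. The round-robin stripe mapping and the greedy "hottest file to coolest disk" rule below are generic textbook heuristics offered as an illustration, not the paper's actual algorithms; all names and the heat model are assumptions.

```python
def stripe_map(n_units, n_disks, start_disk=0):
    """Round-robin striping: the disk index for each stripe unit of a file."""
    return [(start_disk + i) % n_disks for i in range(n_units)]

def allocate_files(file_heats, n_disks):
    """Greedy load balancing: place the hottest file on the coolest disk.

    `file_heats` maps file name -> expected access rate ("heat")."""
    loads = [0.0] * n_disks
    placement = {}
    for name, heat in sorted(file_heats.items(), key=lambda kv: -kv[1]):
        disk = loads.index(min(loads))   # coolest disk so far
        placement[name] = disk
        loads[disk] += heat
    return placement, loads

placement, loads = allocate_files({"a": 10, "b": 8, "c": 6, "d": 4, "e": 2}, 2)
```

Striping spreads one large request over several disks (intra-request parallelism), while heat-balanced allocation lets independent requests proceed on different disks (inter-request parallelism).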
Cherub: A hardware distributed single shared address space memory architecture
Increased computer throughput can be achieved through the use of parallel processing. The granularity of a parallel program is the average number of instructions performed by the tasks constituting it. Coarse-grained programs typically execute huge numbers of instructions per task (≈ 10^5). The tasks in fine-grained programs are typically short (≈ 10^3). In general, the finer the program grain, the greater the potential for exploiting parallelism. Amdahl’s Law shows that in the absence of overheads, the more potential parallelism that is realised in an algorithm, the faster it will be. The economical granularity of tasks is determined by the intertask communications overhead. Break-even occurs when processing is approximately equally divided between useful work and overhead.
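Amdahl's Law, as invoked above, can be stated as a one-line formula: with serial fraction s and p processors, speedup = 1 / (s + (1 − s)/p). A minimal sketch:

```python
def amdahl_speedup(serial_fraction, n_processors):
    """Amdahl's Law: speedup of a program whose serial fraction is
    `serial_fraction` when the parallel part runs on `n_processors`."""
    s = serial_fraction
    return 1.0 / (s + (1.0 - s) / n_processors)

# With no serial part (and no overheads), speedup scales linearly:
ideal = amdahl_speedup(0.0, 200)      # 200x on 200 processors
# A 5% serial fraction caps the benefit sharply (about 18x on 200 processors):
capped = amdahl_speedup(0.05, 200)
```

The law gives the *potential* speedup in the absence of overheads; the break-even argument in the paragraph above is about how communications overhead erodes that potential.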
The two common parallel programming paradigms are shared variable and message passing. Shared variable is, in general, the more natural of the two as it allows implicit communication between tasks. This encourages the programmer to make use of fine-grained tasks. The message passing paradigm requires explicit communication between tasks. This encourages the programmer to use coarser-grained tasks.
Two kinds of parallel architecture have become established. The first is the multiprocessor, which is built around a shared bus giving broadcast communications and a shared memory. This is characterised by low communications overhead, but limited scalability. The second is the multicomputer, which is based on point-to-point communications with larger communications overhead, but good scalability. Quantitatively, the low overhead of the multiprocessor is well matched to fine-grain tasks and, hence, to supporting the shared variable paradigm, while the high overhead of the multicomputer matches it to coarse-grain parallelism and, hence, to the message passing paradigm.
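The break-even condition above (useful work approximately equal to overhead) can be made concrete with a simple efficiency model. The per-task overhead figures in the comments are illustrative assumptions chosen to match the qualitative claim, not measurements from the dissertation.

```python
def efficiency(grain, overhead):
    """Fraction of time spent on useful work for tasks of `grain`
    instructions with `overhead` instructions of communication per task."""
    return grain / (grain + overhead)

# Break-even: useful work equals overhead, so efficiency is exactly 0.5.
# Assumed, illustrative overheads: roughly 10**2 instructions per task on
# a shared-bus multiprocessor versus roughly 10**4 on a multicomputer.
fine_on_multiprocessor = efficiency(10**3, 10**2)    # fine grain is viable
fine_on_multicomputer = efficiency(10**3, 10**4)     # fine grain is swamped
```

Under these assumed overheads, a 10^3-instruction task is about 91% efficient on the multiprocessor but under 10% efficient on the multicomputer, which is the quantitative matching the paragraph describes.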
Currently, there appears to be no middle ground in parallel computing; an architecture which can support both several hundred medium-grained (≈ 10^4 instructions) parallel tasks and the shared variable programming paradigm would be advantageous in many applications.
This thesis asserts that it is possible to implement a new computer architecture, Cherub, which has at least 200 processors and is able to support shared variable programming with an optimal task granularity of around 10^4 instructions. This can be achieved through the combination of a hardware-based distributed shared single address space and a wafer-scale communications network.
To support the thesis, the dissertation first specifies a programmer’s interface to Cherub which is simple enough to implement in hardware. It then designs algorithms which provide this interface, allowing the requirements of the underlying network to be estimated. Finally, a wafer-scale communications network is outlined, and simulations are used to demonstrate that it can provide the performance required to successfully implement Cherub.
Principle and method of synthesizing deception systems for malware and computer attack detection
Both the variety and the sheer number of malware and computer attacks are constantly increasing, so detecting and counteracting them remains a pressing issue. Users of corporate networks suffer the greatest damage. Many effective tools of various kinds have been developed to detect and counteract these threats; however, the dynamism in the development of new malware and the diversity of computer attacks force the developers of detection and countermeasure tools to constantly improve them and to create new ones. The object of research in this paper is deception systems, and the task of this study is to develop elements of the theory and practice of creating such systems. Deception systems occupy a special place among the means of detecting and counteracting malware and computer attacks: they confuse attackers, but they also require constant changes and updates, as the peculiarities of their functioning become known over time. Therefore, the problem of creating deception systems whose functioning remains incomprehensible to attackers is relevant. To solve this problem, we propose a new principle for the synthesis of such systems. Because such systems are built from the computer stations of a corporate network, the system is positioned as a multi-computer system. The system uses combined decoys and traps to create false attack targets, and all components of such a system form a shadow computer network. This study develops a principle for synthesizing multi-computer systems with combined decoys and traps and a decision-making controller for detecting and counteracting malware and computer attacks. The principle is based on the presence of a controller for the decisions made in the system and the use of specialized functionality for detection and counteraction.
According to the developed principle for synthesizing such systems, this paper identifies a subset of systems with deception technologies that must have a controller and specialized functionality. The decision-making controller in the system is separate from the decision-making center; its task is to choose among the options for the system's next steps, which are formed in the center of the system, depending on the recurrence of events. Prolonged recurrence of external events requires the system center to form a sequence of next steps, and if these steps are repeated, an attacker has the opportunity to study the functioning of the system. The controller therefore chooses different answers from the set of possible answers for the same repeated suspicious events, so an attacker investigating the corporate network receives different answers to the same queries. Specialized functionality, in accordance with the principle of synthesis of such systems, is implemented in the system architecture: it changes the system architecture during operation in response to internal and external influences. This paper also considers a possible variant of the architecture of such deception systems, in particular an architecture with partial centralization. To synthesize such systems, a new method for synthesizing partially centralized systems for detecting malware in computer environments has been developed, based on analytical expressions that determine the security state of such systems and their components. In addition, the experiments showed that the loss of 10-20% of the components does not affect the performance of the task. The results of the experiments were processed using ROC analysis and an algorithm for constructing the ROC curve, which made it possible to determine the degree of degradation of systems constructed in this manner.
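The controller behaviour described above — returning different answers to the same repeated suspicious probe — can be sketched as follows. The class, the event signatures, and the decoy responses are all hypothetical illustrations, not artifacts from the paper; the sketch only shows the idea of varying responses so an attacker cannot build a stable picture of the network.

```python
import itertools
import random

class DecisionController:
    """Varies the system's response to repeated identical suspicious events.

    `response_options` maps an event signature to the set of plausible
    decoy responses prepared by the decision-making center (hypothetical)."""

    def __init__(self, response_options, seed=None):
        rng = random.Random(seed)
        self._cycles = {}
        for event, options in response_options.items():
            options = list(options)
            rng.shuffle(options)              # unpredictable order per deployment
            self._cycles[event] = itertools.cycle(options)

    def respond(self, event):
        # Each repetition of the same event gets the next prepared answer.
        return next(self._cycles[event])

controller = DecisionController({
    "scan:tcp/22": ["banner:OpenSSH_7.4", "timeout", "banner:Dropbear_2020.81"],
})
first = controller.respond("scan:tcp/22")
second = controller.respond("scan:tcp/22")    # differs from `first`
```

Because consecutive responses to the same probe differ, repeated reconnaissance yields inconsistent information, which is the confusion effect the abstract attributes to the controller.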
This paper presents a new principle for the synthesis of multi-computer systems with combined decoys and traps and a decision-making controller for detecting and counteracting malware and computer attacks, as well as a method for synthesizing partially centralized systems for detecting malware in computer networks.
Design and performance analysis of a relational replicated database system
The hardware organization and software structure of a new database system are presented. This system, the relational replicated database system (RRDS), is based on a set of replicated processors operating on a partitioned database. Performance improvements and capacity growth can be obtained by adding more processors to the configuration. Based on the design goals, a set of hardware and software design questions was developed. The system then evolved according to a five-phase process, based on simulation and analysis, which addressed and resolved the design questions. Strategies and algorithms were developed for data access, data placement, and directory management for the hardware organization. A predictive performance analysis was conducted to determine the extent to which the original design goals were satisfied. The predictive performance results, along with an analytical comparison with three other relational multi-backend systems, provide information about the strengths and weaknesses of our design as well as a basis for future research.
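The abstract names data placement and directory management as design concerns without describing the chosen strategies. A generic hash-partitioning directory, sketched below under stated assumptions, illustrates what such a component does: every site resolves a tuple key to the same home processor, and adding processors changes the mapping. The class and method names are hypothetical, not the RRDS design.

```python
import zlib

class Directory:
    """Toy directory for a partitioned relational store: maps each
    relation's tuples to one of `n_nodes` replicated processors by
    hashing the partitioning key (names are hypothetical)."""

    def __init__(self, n_nodes):
        self.n_nodes = n_nodes

    def node_for(self, relation, key):
        # Stable hash so every site resolves a key to the same node.
        digest = zlib.crc32(f"{relation}:{key}".encode())
        return digest % self.n_nodes

    def grow(self, extra_nodes):
        """Capacity growth by adding processors; existing keys may then
        map to new homes, so data must be redistributed."""
        self.n_nodes += extra_nodes

d = Directory(n_nodes=4)
home = d.node_for("employees", 1042)    # always the same node for this key
```

The `grow` method mirrors the abstract's point that capacity grows by adding processors; a real design would also address how to migrate the affected partitions.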