33 research outputs found

    A Model-Based Approach To Requirements Analysis

    A major task in designing embedded systems is the systematic elaboration of functional system requirements and their integration into the environment of the complete technical system. The main challenge is to coordinate the many tasks involved in defining behavior that is appropriate to the problem: the problem and design specifications of the customer-related product definition have to be reconciled with and integrated into the manifold requirements of the technical system design. Accordingly, the model-based requirements analysis and system definition presented here defines a well-structured modeling approach that systematically supports the goal-oriented formulation and reconciliation of the different stakeholder requirements by means of views onto the system and descriptive specification techniques. This allows a clear specification of a consistent and complete system design. The central steps of this approach are implemented in a requirements management (RM) tool prototype called AutoRAI.
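
    As a simple illustration of organizing stakeholder requirements by views and tracking which requirements still need to be reconciled, here is a minimal sketch in Python; the class names, view names, and conflict bookkeeping are hypothetical and not taken from AutoRAI.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Requirement:
        rid: str
        text: str
        stakeholder: str
        view: str                                   # e.g. "functional", "environment", "technical design"
        conflicts: set = field(default_factory=set) # ids of requirements not yet reconciled with this one

    class RequirementsModel:
        def __init__(self):
            self.requirements = {}

        def add(self, req: Requirement):
            self.requirements[req.rid] = req

        def view(self, name):
            """All requirements belonging to one system view."""
            return [r for r in self.requirements.values() if r.view == name]

        def unresolved_conflicts(self):
            """Requirement pairs still marked as conflicting, i.e. not yet adjusted."""
            return [(r.rid, c) for r in self.requirements.values() for c in r.conflicts]
    ```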

    Data allocation in disk arrays with multiple raid levels

    There has been an explosion in the amount of generated data, which has to be stored reliably because it is not easily reproducible. Some datasets require frequent read and write access, as in online transaction processing applications. Others just need to be stored safely and read once in a while, as in data mining. These different access requirements can be met using the RAID (redundant array of inexpensive disks) paradigm, i.e., RAID1 for the first situation and RAID5 for the second. Furthermore, rather than providing two disk arrays with RAID1 and RAID5 capabilities, a controller can be postulated to emulate both; this is referred to as a heterogeneous disk array (HDA). Dedicating a subset of disks to RAID1 results in poor disk utilization, since RAID1 versus RAID5 capacity and bandwidth requirements are not known a priori. Balancing disk loads when disk space is shared among allocation requests, referred to as virtual arrays (VAs), poses a difficult problem. RAID1 disk arrays have a higher access rate per gigabyte than RAID5 disk arrays. Allocating more VAs while keeping disk utilizations balanced and within acceptable bounds is the goal of this study. Given its size and access rate, a VA's width, i.e., the number of its virtual disks (VDs), is determined. VD allocations on physical disks using vector-packing heuristics, with disk capacity and bandwidth as the two dimensions, are shown to be the best. An allocation is acceptable if it does not exceed the disk capacity and does not overload disks even in the presence of disk failures. When disk bandwidth rather than capacity is the bottleneck, the clustered RAID paradigm is applied, which offers a tradeoff between disk space and bandwidth. Another scenario is also considered where the RAID level is determined by a classification algorithm utilizing the access characteristics of the VA, i.e., the fraction of small versus large accesses and the fraction of write versus read accesses. The effect of RAID1 organization on its reliability and performance is studied as well. The effect of disk failures on the X-code two-disk-failure-tolerant array is analyzed, and it is shown that the load across disks is highly unbalanced unless, in an N×N array, groups of N stripes are randomly rotated.
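
    A minimal sketch of the two-dimensional vector-packing idea described above, assuming per-VD capacity and bandwidth demands are given as fractions of one physical disk; the function and field names are illustrative and not taken from the study.

    ```python
    def allocate_vds(vds, disks, cap_limit=1.0, bw_limit=1.0):
        """Greedy two-dimensional vector-packing sketch. vds is a list of dicts with
        'cap' and 'bw' demands as fractions of one physical disk; disks is the number
        of physical disks. Returns {vd index: disk index} or None if a VD does not fit."""
        usage = [{"cap": 0.0, "bw": 0.0} for _ in range(disks)]
        placement = {}
        # place the "largest" virtual disks first (first-fit-decreasing flavour)
        order = sorted(range(len(vds)), key=lambda i: vds[i]["cap"] + vds[i]["bw"], reverse=True)
        for i in order:
            vd = vds[i]
            best, best_score = None, None
            for d, u in enumerate(usage):
                new_cap, new_bw = u["cap"] + vd["cap"], u["bw"] + vd["bw"]
                if new_cap > cap_limit or new_bw > bw_limit:
                    continue                      # would overload this disk
                score = max(new_cap, new_bw)      # keep the larger of the two utilizations small
                if best_score is None or score < best_score:
                    best, best_score = d, score
            if best is None:
                return None                       # allocation request rejected
            usage[best]["cap"] += vd["cap"]
            usage[best]["bw"] += vd["bw"]
            placement[i] = best
        return placement

    # Example: six VDs packed onto four disks.
    vds = [{"cap": 0.3, "bw": 0.2}, {"cap": 0.2, "bw": 0.4}, {"cap": 0.1, "bw": 0.1},
           {"cap": 0.4, "bw": 0.2}, {"cap": 0.2, "bw": 0.3}, {"cap": 0.1, "bw": 0.2}]
    print(allocate_vds(vds, disks=4))
    ```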

    Studies of disk arrays tolerating two disk failures and a proposal for a heterogeneous disk array

    There has been an explosion in the amount of generated data in the past decade. Online access to these data is made possible by large disk arrays, especially in the RAID (Redundant Array of Independent Disks) paradigm. Depending on the RAID level, a disk array can tolerate one or more disk failures, so that the storage subsystem can continue operating with disk failure(s). RAID5 is a single disk failure tolerant array which dedicates the capacity of one disk to parity information. The content of the failed disk can be reconstructed on demand and written onto a spare disk. However, RAID5 does not provide enough protection for data, since data loss may occur when there is a media failure (unreadable sectors) or a second disk failure during the rebuild process. Due to the high cost of downtime in many applications, two disk failure tolerant arrays, such as RAID6 and EVENODD, have become popular. These schemes use 2/N of the capacity of the array for redundant information in order to tolerate two disk failures. RM2 is another scheme that can tolerate two disk failures, with a slightly higher redundancy ratio. However, the performance of these two disk failure tolerant RAID schemes is impaired, since there are two check disks to be updated for each write request. Therefore, their performance, especially when there are disk failure(s), is of interest. In the first part of the dissertation, the operations for the RAID5, RAID6, EVENODD and RM2 schemes are described. A cost model is developed for these RAID schemes by analyzing the operations in various operating modes. This cost model offers a measure of the volume of data being transmitted and provides a device-independent comparison of the efficiency of these RAID schemes. Based on this cost model, the maximum throughput of a RAID scheme can be obtained given detailed disk characteristics and the RAID configuration. Utilizing an M/G/1 queuing model and other favorable modeling assumptions, a queuing analysis to obtain the mean read response time is described. Simulation is used to validate the analytic results, as well as to evaluate the RAID systems in analytically intractable cases. The second part of this dissertation describes a new disk array architecture, namely the Heterogeneous Disk Array (HDA). The HDA is motivated by a few observations of the trends in storage technology. The HDA architecture allows a disk array to have two forms of heterogeneity: (1) device heterogeneity, i.e., disks of different types can be incorporated in a single HDA; and (2) RAID level heterogeneity, i.e., various RAID schemes can coexist in the same array. The goals of this architecture are (1) utilizing the extra resources (i.e., bandwidth and capacity) introduced by new disk drives in an automated and efficient way, and (2) using appropriate RAID levels to meet the varying availability requirements of different applications. In an HDA, each new object is associated with an appropriate RAID level and the allocation is carried out so as to keep disk bandwidth and capacity utilizations balanced. Design considerations for the data structures of HDA metadata are described, followed by the actual design of the data structures and flowcharts for the most frequent operations. Then a data allocation algorithm is described in detail. Finally, the HDA architecture is prototyped based on the DASim simulation toolkit developed at NJIT, and simulation results of an HDA with two RAID levels (RAID1 and RAID5) are presented.
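
    The queuing step can be illustrated with the standard Pollaczek-Khinchine result for an M/G/1 queue. This is a generic sketch rather than the dissertation's exact analysis; the arrival rate and the service-time moments are assumed inputs derived from the disk characteristics and the cost model.

    ```python
    def mg1_mean_response_time(arrival_rate, mean_service, second_moment):
        """Mean response time of an M/G/1 queue (Pollaczek-Khinchine):
        R = E[S] + lambda * E[S^2] / (2 * (1 - rho)), with rho = lambda * E[S]."""
        rho = arrival_rate * mean_service
        if rho >= 1.0:
            raise ValueError("queue is unstable (utilization >= 1)")
        waiting = arrival_rate * second_moment / (2.0 * (1.0 - rho))
        return mean_service + waiting

    # Example: reads arriving at 50/s at a disk with E[S] = 10 ms and E[S^2] = 150e-6 s^2
    # gives rho = 0.5 and a mean read response time of 17.5 ms.
    print(mg1_mean_response_time(50.0, 0.010, 150e-6))
    ```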

    Introducing Model-Based Techniques into the Development of Applications for Embedded Systems with Timing Constraints

    This paper investigates the feasibility of integrating legacy software processes and tools into the paradigm of model-based development of industrial real-time embedded systems. Research has been conducted on the example of using legacy assembly code in an automatic code generation scheme inside the MATLAB/Simulink environment. A sample Simulink model has been presented, code has been generated from it, and its correctness has been validated by back-to-back comparison with the simulation results.
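
    A minimal sketch of the back-to-back validation idea, under the assumption that both the simulation and the generated code produce numeric output traces for the same stimulus; the function name and tolerance are hypothetical, not part of the paper's toolchain.

    ```python
    def back_to_back(simulated, generated, tol=1e-6):
        """Compare two equal-length output traces sample by sample and
        return the indices where they diverge by more than tol."""
        assert len(simulated) == len(generated), "traces must cover the same stimulus"
        return [i for i, (s, g) in enumerate(zip(simulated, generated)) if abs(s - g) > tol]

    # An empty list means the generated code matches the simulation for this stimulus.
    mismatches = back_to_back([0.0, 0.5, 1.0], [0.0, 0.5, 1.0])
    print("validation passed" if not mismatches else f"mismatches at {mismatches}")
    ```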

    REMM-Studio: an Integrated Model-Driven Environment for Requirements Specification, Validation and Formatting

    In order to integrate requirements into the current Model-Driven Engineering (MDE) approach, the traditional document-based requirements specification process should be changed into a requirements modelling process. To achieve this we propose a requirements metamodel called REMM (Requirements Engineering MetaModel), which includes the elements that should appear in a requirements model (requirements, stakeholders, test cases, etc.) together with the relationships that may appear between them. This metamodel is the basis of the REMM-Studio environment, which makes it possible (1) to build graphical requirements models, (2) to validate them against the metamodel and against a set of additional OCL constraints, and (3) to automatically generate a navigable Software Requirements Specification (SRS) document as the main deliverable of the Requirements Engineering process. REMM-Studio is expected to ease the integration of requirements with other development models (e.g. component models) and to facilitate the validation of the SRS thanks to its navigability.
    MEDWSA (TIN2006-15175-C05-02), DEDALO (TIN2006-15175-C05-03), DESERT (PBC-05-012-3). Escuela Técnica Superior de Ingeniería Agronómic
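
    A small sketch of the kind of structural check such an environment can run before generating the SRS; the classes and rules below are hypothetical stand-ins, not REMM's actual metamodel or its OCL constraints.

    ```python
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TestCase:
        tid: str
        description: str

    @dataclass
    class ModelRequirement:
        rid: str
        text: str
        stakeholder: str = ""
        test_cases: List[TestCase] = field(default_factory=list)

    def validate(model: List[ModelRequirement]) -> List[str]:
        """Constraint-style checks, analogous to OCL rules over the metamodel:
        every requirement needs an owner and at least one covering test case."""
        errors = []
        for r in model:
            if not r.stakeholder:
                errors.append(f"{r.rid}: no stakeholder assigned")
            if not r.test_cases:
                errors.append(f"{r.rid}: no test case covers this requirement")
        return errors

    # Example: one well-formed and one incomplete requirement.
    print(validate([ModelRequirement("R1", "The system shall log in users.", "customer",
                                     [TestCase("T1", "valid login")]),
                    ModelRequirement("R2", "The system shall export reports.")]))
    ```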

    Cut-and-paste file-systems: integrating simulators and file-systems

    We have implemented an integrated and configurable file system called PFS and a trace-driven file-system simulator called Patsy. Patsy is used for off-line analysis of file-system algorithms; PFS is used for on-line file-system data storage. Algorithms are first analyzed in Patsy and, once we are satisfied with the performance results, migrated into PFS for on-line use. Since Patsy and PFS are derived from a common cut-and-paste file-system framework, this migration proceeds smoothly. We have found this integration quite useful: algorithm bottlenecks have been found through Patsy that could have led to performance degradation in PFS. Off-line simulators are simpler to analyze than on-line file systems because a workload can repeatedly be replayed on the same off-line simulator. This is almost impossible in on-line file systems, since it is hard to provide identical conditions for each experiment run. Since the simulator and the file system are integrated (and hence use the same code), experiment results from the simulator are relevant to the real system. This paper describes the cut-and-paste framework, the instantiation of the framework into PFS and Patsy, and finally some of the experiments we conducted in Patsy.
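
    A minimal sketch of why trace-driven, off-line simulation is easy to analyze: the same recorded workload can be replayed under identical conditions against different algorithm variants. The names and the toy cost model below are illustrative only, not the PFS/Patsy code.

    ```python
    class SimpleFS:
        """Toy file-system model: reads cost 1 unit, writes cost 2 (illustrative only)."""
        def read(self, path, size):
            return 1
        def write(self, path, size):
            return 2

    def replay(trace, filesystem):
        """Feed a recorded trace of (op, path, size) events to a file-system model
        and return the total cost of the workload."""
        total = 0
        for op, path, size in trace:
            if op == "read":
                total += filesystem.read(path, size)
            elif op == "write":
                total += filesystem.write(path, size)
        return total

    # The same trace can be replayed against two candidate algorithms and the totals
    # compared directly, which is hard to reproduce on a live file system.
    trace = [("write", "/a", 4096), ("read", "/a", 4096), ("read", "/b", 8192)]
    print(replay(trace, SimpleFS()))
    ```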

    Data partitioning and load balancing in parallel disk systems

    Parallel disk systems provide opportunities for exploiting I/O parallelism in two possible ways, namely via inter-request and intra-request parallelism. In this paper we discuss the main issues in performance tuning of such systems, namely striping and load balancing, and show their relationship to response time and throughput. We outline the main components of an intelligent file system that optimizes striping by taking into account the requirements of the applications, and performs load balancing by judicious file allocation and dynamic redistribution of the data when access patterns change. Our system uses simple but effective heuristics that incur little overhead. We present performance experiments based on synthetic workloads and real-life traces.
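
    A minimal sketch of a simple load-balancing allocation heuristic in the spirit of the paper; the "heat" metric and the greedy placement below are assumptions for illustration, not the paper's actual algorithm.

    ```python
    def allocate_file(disk_heat, file_heat):
        """Greedy allocation: place the new file on the disk with the smallest
        accumulated heat (expected access rate) and charge the file's heat to it."""
        target = min(range(len(disk_heat)), key=lambda d: disk_heat[d])
        disk_heat[target] += file_heat
        return target

    heat = [0.0, 0.0, 0.0, 0.0]            # four disks, initially idle
    for f in [5.0, 3.0, 8.0, 2.0, 4.0]:    # per-file expected access rates
        print("file placed on disk", allocate_file(heat, f))
    print("per-disk load:", heat)
    ```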

    Automated Test Case Generation from Domain-Specific High-Level Requirement Models

    One of the most researched aspects of the software engineering process is the verification and validation of software systems using various techniques. The need to ensure that the developed software system addresses its intended specifications has led to several approaches that link the requirements gathering and software testing phases of development. This thesis presents a framework that bridges the gap between requirement specification and testing of software using domain-specific modelling concepts. The proposed modelling notation, the High-Level Requirement Modelling Language (HRML), addresses the drawbacks of Natural Language (NL) for high-level requirement specifications, including ambiguity and incompleteness. Real-time checks are implemented to ensure that valid HRML specification models are utilised for automated test case generation. The type of HRML requirement specified in the model determines the approach employed to generate the corresponding test cases. Boundary Value Analysis and Equivalence Partitioning are applied to specifications with predefined range values to generate valid and invalid inputs for robustness test cases. Structural coverage test cases are also generated to satisfy the Modified Condition/Decision Coverage (MC/DC) criteria for HRML specifications with logic expressions. In scenarios where conditional statements are combined with logic expressions, the MC/DC approach is extended to generate the corresponding test cases. An evaluation of the proposed framework by industry experts in a case study, its scalability, a comparative study, and an assessment of its learnability by non-experts are reported. The results indicate a reduction in test case generation effort in the case study; non-experts spent more time modelling the requirements in HRML, but the time taken for test case generation was likewise reduced.
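
    A minimal sketch of Boundary Value Analysis combined with Equivalence Partitioning for a requirement with a predefined range, as described above; this is not the thesis's HRML tooling, and the chosen representative values are illustrative assumptions.

    ```python
    def range_test_cases(lo, hi):
        """Boundary value analysis plus equivalence partitioning for an integer
        range requirement [lo, hi]: boundary values and a mid-range representative
        as valid inputs, and just-outside-the-range values as robustness inputs."""
        valid = [lo, lo + 1, (lo + hi) // 2, hi - 1, hi]
        invalid = [lo - 1, hi + 1]
        return {"valid": valid, "invalid": invalid}

    # Example: a requirement constraining a speed signal to the range 0..120.
    print(range_test_cases(0, 120))
    ```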