
    Analysis of assessment options of software dependability

    In this bachelor's thesis I introduce the basic concepts of dependability, with particular attention to software reliability, and trace how the definition of reliability itself has evolved. I then survey the principal methods of reliability analysis, among them Failure Mode and Effects Analysis (FMEA), Failure Mode, Effects and Criticality Analysis (FMECA), Fault Tree Analysis (FTA), Reliability Block Diagram (RBD) analysis, and Markov Analysis (MA), and give practical examples of how several of these methods are applied. The thesis then turns to the software life cycle: I describe its phases, the basic requirements placed on software, and the principal life-cycle models, of which I consider the waterfall life cycle, the V life cycle, rapid application development, and the object-oriented life cycle the most important. In the final part I describe two types of software reliability models, static and dynamic.
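
    The following is a minimal sketch of the arithmetic behind one of the methods surveyed, the Reliability Block Diagram: blocks in series all have to work, while parallel (redundant) blocks fail only if all of them fail. The component reliabilities are illustrative values, not figures from the thesis.

        # Reliability Block Diagram (RBD) arithmetic sketch; component
        # reliabilities below are invented for illustration.

        def series_reliability(rs):
            """A series system works only if every block works: R = prod(R_i)."""
            r = 1.0
            for ri in rs:
                r *= ri
            return r

        def parallel_reliability(rs):
            """A parallel system fails only if every block fails: R = 1 - prod(1 - R_i)."""
            q = 1.0
            for ri in rs:
                q *= (1.0 - ri)
            return 1.0 - q

        # Two redundant 0.9 modules in series with a 0.95 module:
        print(series_reliability([parallel_reliability([0.9, 0.9]), 0.95]))  # ~0.9405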

    Towards making functional size measurement easily usable in practice

    Functional Size Measurement methods, like the IFPUG Function Point Analysis and COSMIC methods, are widely used to quantify the size of applications. However, the measurement process is often too long or too expensive, or it requires more knowledge than is available when development effort estimates are due. To overcome these problems, simplified measurement methods have been proposed. This research explores easily usable functional size measurement methods, aiming to improve efficiency, reduce difficulty and cost, and make functional size measurement widely adopted in practice. The first stage of the research involved the study of functional size measurement methods (in particular Function Point Analysis and COSMIC), simplified methods, and measurement based on measurement-oriented models. Then, we modeled a set of applications in a measurement-oriented way and obtained UML models suitable for functional size measurement. From these UML models we derived both functional size measures and object-oriented measures. Using these measures it was possible to: 1) evaluate existing simplified functional size measurement methods and derive our own simplified model; 2) explore whether simplified methods can be used in various stages of modeling and evaluate their accuracy; 3) analyze the relationship between functional size measures and object-oriented measures. In addition, the conversion between FPA and COSMIC was studied as an alternative simplified functional size measurement process.

    Our research revealed that: 1) In general it is possible to size software via simplified measurement processes with acceptable accuracy. In particular, the simplification of the measurement process allows the measurer to skip the function weighting phases, which are usually expensive, since they require a thorough analysis of the details of both data and operations. The models obtained from our dataset yielded results similar to those reported in the literature. All simplified measurement methods that use predefined weights for all the transaction and data types identified in Function Point Analysis provided similar results, characterized by acceptable accuracy; on the contrary, methods that rely on just one of the elements that contribute to functional size tend to be quite inaccurate. In general, different methods showed different accuracy for Real-Time and non-Real-Time applications. 2) It is possible to write progressively more detailed and complete UML models of user requirements that provide the data required by the simplified COSMIC methods. These models yield progressively more accurate measures of the modeled software. Initial measures are based on simple models and are obtained quickly and with little effort; as the models grow in completeness and detail, the measures increase in accuracy. Developers who use UML for requirements modeling can obtain early estimates of an application's size at the beginning of the development process, when only very simple UML models have been built, and can obtain increasingly accurate size estimates as knowledge of the product increases and the UML models are refined accordingly. 3) Both Function Point Analysis and COSMIC functional size measures appear correlated with object-oriented measures. In particular, associations with basic object-oriented measures were found: Function Points appear associated with the number of classes, the number of attributes, and the number of methods; CFP appear associated with the number of attributes. This result suggests that even a very basic UML model, like a class diagram, can support size measures that appear equivalent to functional size measures (which are much harder to obtain). Indeed, object-oriented measures can be obtained automatically from models, dramatically decreasing the measurement effort in comparison with functional size measurement. In addition, we proposed a conversion method between Function Points and COSMIC based on analytical criteria.

    Our research has expanded the knowledge of how to simplify the methods for measuring the functional size of software, i.e., the measure of functional user requirements. Besides providing information immediately usable by developers, the research also presents examples of analyses that can be replicated by other researchers, to increase the reliability and generality of the results.
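
    As an illustration of the simplification described in finding 1), the sketch below computes an unadjusted Function Point count using a single predefined weight per component type instead of the full complexity-weighting phase. The weights are the standard IFPUG average-complexity values, and the counted application is hypothetical; this mirrors the spirit of simplified methods such as NESMA Estimated FP rather than any specific method from this research.

        # Simplified Function Point sizing: predefined "average" weights
        # replace the expensive per-component weighting phase.

        AVERAGE_WEIGHTS = {
            "EI": 4,    # External Inputs
            "EO": 5,    # External Outputs
            "EQ": 4,    # External Inquiries
            "ILF": 10,  # Internal Logical Files
            "EIF": 7,   # External Interface Files
        }

        def simplified_ufp(counts):
            """Unadjusted FP from raw counts of each component type."""
            return sum(AVERAGE_WEIGHTS[t] * n for t, n in counts.items())

        # Hypothetical application: 12 EI, 8 EO, 5 EQ, 6 ILF, 2 EIF
        print(simplified_ufp({"EI": 12, "EO": 8, "EQ": 5, "ILF": 6, "EIF": 2}))  # 182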

    A study on test cases generation for object-oriented programs based on UML state diagram.

    Software testing expenses are estimated to be between 20% and 50% of total development costs, so software testers need methodologies and tools that facilitate the testing portion of the development cycle. State-based testing is one of the most recommended techniques for testing object-oriented programs. Data flow testing is a code-based technique that uses data flow analysis of a program to guide the selection of test cases. Both techniques have disadvantages: state-based testing does not analyze the program code, and thus can miss defects in data members that do not define the states of the object class, while selecting data flow test cases from data members for testing classes is difficult and expensive. To overcome these weaknesses, a hybrid class test model is proposed that combines information from the specification about the state changes of object instances of the Class Under Test (CUT) with information from the source code about the definition and use of the data members in the CUT. With such a unified architecture, automated tools can generate test cases for state-based testing and perform data flow testing at the same time. The combination of the two techniques is essential to improving the testing environment, and thus contributes to enhancing the reliability of software products. The proposed hybrid testing strategy can be used in both the software design stage and the software implementation stage. XMI, a standards-based UML information exchange format, is used to describe the UML design specification in the hybrid testing strategy and to bridge the software designer and the software tester: no matter which CASE tool designers use, as long as the design is saved in XMI format, the testing tool can readily interpret design specifications from different design tools. Thesis (M.Sc.), University of Windsor (Canada), 2001. Adviser: Xiaojun Chen.
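
    To make the state-based half of the hybrid strategy concrete, here is a small sketch, under the assumption that the UML state diagram (e.g., parsed from its XMI form) has already been reduced to a transition table. The Stack class and its events are hypothetical, and the coverage criterion shown is simple all-transitions coverage; the thesis's actual generation algorithm may differ.

        # All-transitions test generation from a state machine, assumed to be
        # extracted from a UML state diagram. States and events are hypothetical.
        from collections import deque

        # (source_state, event) -> target_state, for an illustrative Stack class
        TRANSITIONS = {
            ("Empty", "push"): "Loaded",
            ("Loaded", "push"): "Loaded",
            ("Loaded", "pop"): "Empty",
        }

        def all_transition_tests(transitions, initial):
            """One event sequence per transition, reached from `initial`."""
            paths = {initial: []}          # shortest event path to each state
            queue = deque([initial])
            while queue:
                state = queue.popleft()
                for (src, event), dst in transitions.items():
                    if src == state and dst not in paths:
                        paths[dst] = paths[state] + [event]
                        queue.append(dst)
            # A test = the path to a transition's source state, then its event.
            return [paths[src] + [event] for (src, event) in transitions if src in paths]

        for test in all_transition_tests(TRANSITIONS, "Empty"):
            print(test)  # ['push'], ['push', 'push'], ['push', 'pop']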

    Reverse Software Engineering

    The goal of Reverse Software Engineering is the reuse of old, outdated programs in developing new systems that have enhanced functionality and employ modern programming languages and new computer architectures. Mere transliteration of programs from the source language to the object language does not support enhancing the functionality or the use of newer computer architectures. The main concept in this report is to generate a specification of the source programs in an intermediate nonprocedural, mathematically oriented language. This specification is purely descriptive and independent of the notion of the computer. It may serve as the medium for manually improving reliability and expanding functionality. The modified specification can be translated automatically into optimized object programs in the desired new language and for the new platforms. This report juxtaposes and correlates two classes of computer programming languages: procedural vs. nonprocedural. Nonprocedural languages are also called rule-based, equational, functional, or assertive. They are noted for the absence of side effects and for freeing the user from having to think like a computer when composing or studying a program; they are therefore advantageous for software development and maintenance. Because nonprocedural languages use mathematical semantics, they are also more suitable for analyzing correctness and improving the reliability of software. The difference in semantics between the two classes of languages centers on the meaning of variables: in a procedural language a variable may be assigned multiple values, while in a nonprocedural language a variable may assume one and only one value, the same convention as used in mathematics. The translation algorithm presented in this report consists of renaming variables and expanding the logic and control in the procedural program until each variable is assigned one and only one value; the translation into equations can then be performed directly. The source program and object specification are equivalent in that there is a one-to-one equality of the values of respective variables. The specification that results from these transformations is then further simplified to make it easy to learn and understand when performing maintenance. The presentation of the translation algorithms in this report uses FORTRAN as the source language and MODEL as the object language. MODEL is an equational language, in which rules are expressed as algebraic equations; MODEL has an effective translation into the object procedural languages PL/1, C, and Ada.
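
    The renaming step at the heart of the translation can be illustrated with a deliberately naive sketch (a textual rename over a toy three-statement program, not the report's actual FORTRAN-to-MODEL translator): once every variable is assigned exactly once, the assignments read directly as equations.

        # Rename multiply-assigned variables (x, x, ... -> x1, x2, ...) so that
        # each variable has one value and assignments become equations.
        # Naive string substitution, for illustration only.

        def to_single_assignment(stmts):
            version = {}        # latest version number of each variable
            equations = []
            for target, expr in stmts:
                for var, n in version.items():
                    expr = expr.replace(var, f"{var}{n}")   # rewrite uses
                version[target] = version.get(target, 0) + 1
                equations.append(f"{target}{version[target]} = {expr}")
            return equations

        # Procedural program: x = a + b; x = x * 2; y = x + 1
        print(to_single_assignment([("x", "a + b"), ("x", "x * 2"), ("y", "x + 1")]))
        # ['x1 = a + b', 'x2 = x1 * 2', 'y1 = x2 + 1']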

    Model Based Development of Quality-Aware Software Services

    Modelling languages and development frameworks support the functional and structural description of software architectures. Quality-aware applications, however, require languages that allow expressing QoS as a first-class concept during architecture design and service composition, and require extending existing tools and infrastructures with support for modelling, evaluating, managing, and monitoring QoS aspects. In addition to the functional behaviour and internal structure of each service, its developer must consider the fulfilment of its quality requirements. If the service is flexible, the output quality depends both on the input quality and on the available resources (e.g., the amounts of CPU execution time and memory). From the software engineering point of view, modelling quality-aware requirements and architectures requires modelling support for describing quality concepts, support for analyzing quality properties (e.g., model checking, consistency of quality constraints, and assembly of quality), and tool support for the transition from quality requirements to quality-aware architectures and from quality-aware architectures to service run-time infrastructures. Quality management in run-time service infrastructures must support handling quality concepts dynamically. QoS-aware modelling frameworks and QoS-aware runtime management infrastructures must evolve together to achieve their integration.
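
    As a purely illustrative sketch of QoS as a first-class concept (the names and formulas below are invented, not taken from the paper), a flexible service can be modelled as a function that maps input quality and granted resources to offered output quality:

        # Invented example: output QoS as a function of input QoS and resources.
        from dataclasses import dataclass

        @dataclass
        class QoS:
            max_latency_ms: float
            min_frame_rate: float

        @dataclass
        class Resources:
            cpu_share: float   # fraction of one CPU, in (0, 1]
            memory_mb: int

        def offered_quality(input_q: QoS, res: Resources) -> QoS:
            """Offered QoS degrades as the service is granted fewer resources."""
            slowdown = 1.0 / max(res.cpu_share, 0.1)
            return QoS(
                max_latency_ms=input_q.max_latency_ms * slowdown,
                min_frame_rate=min(input_q.min_frame_rate, res.memory_mb / 10),
            )

        print(offered_quality(QoS(50.0, 25.0), Resources(cpu_share=0.5, memory_mb=128)))
        # QoS(max_latency_ms=100.0, min_frame_rate=12.8)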

    Design of a shared whiteboard component for multimedia conferencing

    This paper reports on the development of a framework for multimedia applications in the domain of tele-education. The paper focuses on the protocol design of a specific component of the framework, namely a shared whiteboard application, and discusses the relationship of this component with the other components of the framework. A salient feature of the framework is its use of an advanced ATM-based network service. The design of the shared whiteboard component is considered representative of the design as a whole, and is used to illustrate how a flexible protocol architecture that utilizes innovative network functions and satisfies demanding user requirements can be developed.