
    A Validation of Object-Oriented Design Metrics as Quality Indicators

    This paper presents the results of a study conducted at the University of Maryland in which we experimentally investigated the suite of Object-Oriented (OO) design metrics introduced by [Chidamber & Kemerer, 1994]. In order to do this, we assessed these metrics as predictors of fault-prone classes. This study is complementary to [Li & Henry, 1993], where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on experimental results, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber & Kemerer's OO metrics appear to be useful for predicting class fault-proneness during the early phases of the life cycle. We also showed that, on our data set, they are better predictors than "traditional" code metrics, which can only be collected at a later phase of the software development process. (Also cross-referenced as UMIACS-TR-95-40.)
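    To illustrate the kind of metric the CK suite defines, here is a minimal sketch in Python (not the study's C++ tooling) of WMC, Weighted Methods per Class, with each method given unit complexity; the `Stack` class is an invented example:

    ```python
    import ast

    # Invented example class, parsed from source text for illustration.
    SOURCE = """
    class Stack:
        def __init__(self): self.items = []
        def push(self, x): self.items.append(x)
        def pop(self): return self.items.pop()
    """

    def wmc(class_src: str) -> dict:
        """WMC per class, with every method weighted 1 (unit complexity)."""
        tree = ast.parse(class_src)
        return {
            node.name: sum(isinstance(n, ast.FunctionDef) for n in node.body)
            for node in ast.walk(tree)
            if isinstance(node, ast.ClassDef)
        }

    print(wmc(SOURCE))
    ```

    With richer per-method weights (e.g. cyclomatic complexity) the same traversal yields the full metric; the point is that it is computable from design-level artifacts, before testing begins.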

    Factors That Influence Application Migration To Cloud Computing In Government Organizations: A Conjoint Approach

    Cloud computing is becoming a viable option for Chief Information Officers (CIOs) and business stakeholders to consider in today's information technology (IT) environment, characterized by shrinking budgets and dynamic changes in the technology landscape. The objective of this study is to help Federal Government decision makers appropriately decide on the suitability of applications for migration to cloud computing. I draw on four theoretical perspectives: transaction cost theory, resource-based theory, agency theory and dynamic capabilities theory, and use a conjoint analysis approach to understand stakeholder attitudes, opinions and behaviors in their decision to migrate applications to cloud computing. Based on a survey of 81 government cloud computing stakeholders, this research examined the relative importance of thirteen factors that organizations consider when migrating applications to cloud computing. Our results suggest that trust in the cloud computing vendor is the most significant factor, followed by relative cost advantage, sensing capabilities and application complexity. A total of twelve follow-up interviews were conducted to help explain our results. The contributions of the dissertation are twofold: 1) it provides novel insights into the relative importance of factors that influence government organizations' decision to migrate applications to cloud computing, and 2) it assists senior government decision makers in appropriately weighing and prioritizing the factors that are critical in application migration to cloud computing.
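    The core of ratings-based conjoint analysis can be sketched in a few lines: estimate part-worth utilities from profile ratings by least squares, then derive each attribute's relative importance from its utility range. The attributes, profiles and ratings below are invented placeholders, not the study's instrument or data:

    ```python
    import numpy as np

    # Hypothetical full-factorial design: each row is an application profile,
    # each column an attribute coded 1 = high level.
    # Columns: trust in vendor, cost advantage, application complexity.
    X = np.array([
        [1, 1, 1], [1, 1, 0], [1, 0, 1], [1, 0, 0],
        [0, 1, 1], [0, 1, 0], [0, 0, 1], [0, 0, 0],
    ], dtype=float)
    # Illustrative stakeholder preference ratings for the eight profiles.
    y = np.array([9.0, 8.0, 7.0, 6.5, 5.0, 4.5, 3.0, 2.5])

    # Part-worth utilities via least squares (intercept column prepended).
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    part_worths = coef[1:]

    # Relative importance: each attribute's utility range over the total.
    ranges = np.abs(part_worths)
    importance = ranges / ranges.sum()
    print(dict(zip(["trust", "cost", "complexity"], importance.round(3))))
    ```

    In this toy data, trust dominates the decision, mirroring the ordering the survey found; a real conjoint study would use multi-level attributes and per-respondent models.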

    Exploring interrelationship between three performance indicators with PMI’s Nine Knowledge Areas for successful Project Management

    The performance output of software project management is an essential area of study, as reflected in the earlier literature on management and organizational behaviour. Building on the earlier work of Donna G. Thomas (2009), the present study moves from merely identifying the relationship between performance indicators and the project knowledge areas of the PMBOKⓇ to exploring the strength of the relationships among the PI-KA pairs, the input artifacts and the performance output deliverables. A conceptual model, the Artifact (input)-Process-Knowledge area-Performance indicator-Performance deliverable (output) model (Krishnaswamy N. & Selvarasu A., 2014), has been proposed for further exploration in the present study. The study was designed with triangulation of researcher-respondent interactions among FSEs, Senior Project Managers (SPMs) and Project Managers (PMs), using focused discussion, experience survey and personal/online survey, respectively. PLS-Regression and PLS-SEM data modelling tools were employed to find the total effect of the hypothesized paths from Artifact to PKA to PI to Performance, with and without moderators. The study focuses on identifying the top three performance indicators and their interrelationship with PMI's nine knowledge areas. (Peer-reviewed.)
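    The "total effect along a path" idea behind the PLS model can be sketched with standardized regression slopes: the indirect effect of an antecedent on the outcome is the product of the path coefficients along the chain. Construct names and numbers below are placeholders generated synthetically, not the study's data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200

    # Illustrative three-step path: artifact quality -> knowledge-area
    # maturity -> performance indicator (synthetic standardized data).
    artifact = rng.normal(size=n)
    knowledge_area = 0.7 * artifact + rng.normal(scale=0.5, size=n)
    performance = 0.6 * knowledge_area + rng.normal(scale=0.5, size=n)

    def path_coef(x, y):
        """Standardized simple-regression slope (correlation of z-scores)."""
        zx = (x - x.mean()) / x.std()
        zy = (y - y.mean()) / y.std()
        return float(np.mean(zx * zy))

    a = path_coef(artifact, knowledge_area)     # Artifact -> KA
    b = path_coef(knowledge_area, performance)  # KA -> PI
    total_effect = a * b                        # indirect effect along the path
    print(round(a, 3), round(b, 3), round(total_effect, 3))
    ```

    PLS-SEM estimates these paths simultaneously over latent constructs with multiple indicators; the two-slope product here is only the single-indicator special case.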

    Software architectural risk assessment

    Risk assessment is an essential part of the software development life cycle. Performing risk analysis early in the life cycle enhances resource allocation decisions, enables us to compare alternative software architectural designs and helps in identifying high-risk components in the system. As a result, remedial actions can be taken to control and optimize the process and improve the quality of the software product. In this thesis we investigate two types of risk: reliability-based and performance-based. The reliability-based risk assessment takes into account both the probability and the severity of failures. For the reliability-based risk analysis we use UML models of the software system, which are available early in the life cycle, to derive the risk factors of the scenarios and use cases. For each scenario we construct a Markov model to assess the scenario's risk factor and its risk distribution among the various severity classes. We then investigate both independent use cases and use cases with relationships while obtaining the system-level risk factors. (Abstract shortened by UMI.)
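    The scenario-level computation can be sketched as an absorbing Markov chain over the components a scenario visits: expected visit counts come from the fundamental matrix, and each component contributes a heuristic risk (here complexity times severity weight). All numbers below are illustrative, not taken from the thesis:

    ```python
    import numpy as np

    # Q: transition probabilities among the transient states (components A, B,
    # C) of one scenario; the implicit absorbing state is scenario completion.
    Q = np.array([
        [0.0, 0.7, 0.2],   # A -> B with 0.7, A -> C with 0.2
        [0.0, 0.0, 0.6],   # B -> C with 0.6
        [0.0, 0.0, 0.0],   # C -> completion
    ])
    # Heuristic per-component risk factor: dynamic complexity * severity
    # weight (e.g. marginal, catastrophic, critical severity classes).
    complexity = np.array([0.30, 0.50, 0.20])
    severity = np.array([0.25, 0.95, 0.50])
    component_risk = complexity * severity

    # Fundamental matrix N = (I - Q)^-1 gives expected visits per component;
    # take the row for the scenario's start component A.
    N = np.linalg.inv(np.eye(3) - Q)
    visits = N[0]

    scenario_risk = float(visits @ component_risk)
    print(round(scenario_risk, 4))
    ```

    Because everything is derived from UML sequence/state information plus severity classes, this kind of estimate is available before any code exists, which is the point of architectural-level risk assessment.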

    A software testing estimation and process control model

    The control of the testing process and the estimation of the resources required to perform testing are key to delivering a software product of target quality on budget. This thesis explores the use of testing to remove errors, the part that metrics and models play in this process, and an original method for improving the quality of a software product. It investigates the possibility of using software metrics to estimate the testing resource required to deliver a product of target quality into deployment, and to determine, during the testing phases, the correct point in time to proceed to the next testing phase in the life cycle. Along with the metrics Clear ratio, Churn, Error rate halving, Severity shift, and Faults per week, a new metric, 'Earliest Visibility' (EV), is defined and used to control the testing process. EV is built on the link between the point at which an error is made during development and the point at which it is subsequently found during testing. To increase the effectiveness of testing and reduce costs while maintaining quality, the model operates by targeting each test phase at the errors linked to that phase and by enabling each test phase to build upon the previous one. EV also provides a measure of testing effectiveness and of the fault introduction rate by development phase. The resource estimation model is based on the gradual refinement of an estimate, which is updated after each development phase as more reliable data become available. Used in conjunction with the process control model, which ensures the correct testing phase is in operation, the estimation model has accurate data for each testing phase as input. The proposed model and metrics have been developed and tested on a large-scale (4 million LOC) industrial telecommunications product written in C and C++ and running in a Unix environment. It should be possible to extend this work to other environments and other development life cycles.
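    The "error rate halving" metric lends itself to a simple planning calculation: if the weekly fault-discovery rate decays geometrically with a measured halving period, the remaining test time to a target release rate follows directly. This is a generic sketch of that idea, not the thesis's actual model or its parameters:

    ```python
    import math

    def weeks_to_target(current_rate, target_rate, halving_weeks):
        """Weeks until the fault-discovery rate drops to the target,
        assuming rate(t) = current_rate * 0.5 ** (t / halving_weeks)."""
        return halving_weeks * math.log2(current_rate / target_rate)

    # 40 -> 5 faults/week is three halvings; at 2 weeks per halving
    # that is 6 more weeks of testing.
    print(weeks_to_target(40.0, 5.0, 2.0))
    ```

    Re-fitting the halving period after each test phase is one way to realise the gradual estimate refinement the abstract describes.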

    Métricas OO aplicadas a código objeto Java (OO Metrics Applied to Java Object Code)

    Advisor: Márcio Eduardo Delamaro. Dissertation (Master's), Universidade Federal do Paraná. Abstract: Searching for improvements in the software development process and aiming at a high-quality product, several metrics have been proposed that help to manage the software process and to detect design flaws. Software metrics support the collection of information, providing qualitative and quantitative data about the software process and the software product. They identify where resources should be allocated, and are thus an important source of information for decision making. Metrics can be applied at various phases of development and to different intermediate products, such as requirement specifications, design or source code. This work shows the feasibility of collecting some software metrics directly from Java object code (bytecode). Such an approach can be useful in activities such as testing of programs that use third-party components, reengineering, and other activities where the source code is not available. The metrics collected from Java bytecode were applied in a case study on two systems, in which we sought to relate the metrics to fault-proneness.
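    One way to get a coupling-style metric without source code is to count the distinct external classes referenced by the bytecode. The sketch below parses a `javap -c`-style disassembly excerpt for class references; the excerpt is an invented sample, not output from the dissertation's case-study systems:

    ```python
    import re

    # Invented excerpt in the style of `javap -c` output: each constant-pool
    # comment names the class a Method/Field reference is coupled to.
    DISASSEMBLY = """
      invokevirtual #12  // Method java/lang/StringBuilder.append
      invokespecial #3   // Method java/util/ArrayList."<init>"
      getstatic     #5   // Field java/lang/System.out
      invokevirtual #12  // Method java/lang/StringBuilder.append
    """

    # Pull the class part (before the final '.') of each Method/Field comment.
    refs = re.findall(r"// (?:Method|Field) ([\w/$]+)\.", DISASSEMBLY)
    cbo_like = len(set(refs))  # distinct classes this bytecode is coupled to
    print(sorted(set(refs)), cbo_like)
    ```

    A production tool would read the class-file constant pool directly rather than scrape disassembly text, but the information extracted is the same, which is why source access is not required.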

    Software Quality Assessment using Ensemble Models

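    This last record has no abstract in the listing; as a generic illustration of the ensemble idea its title names, the sketch below combines three base fault-proneness predictors by majority vote (all predictions are invented):

    ```python
    import numpy as np

    # Three base predictors label four modules fault-prone (1) or not (0);
    # the ensemble prediction is the per-module majority vote.
    base_predictions = np.array([
        [1, 0, 1, 1],   # predictor 1
        [1, 1, 0, 1],   # predictor 2
        [0, 0, 1, 1],   # predictor 3
    ])
    ensemble = (base_predictions.sum(axis=0) >= 2).astype(int)
    print(ensemble.tolist())
    ```

    Voting is only the simplest combiner; stacking or weighted averaging are common alternatives when the base models' accuracies differ.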