
    Reliability and validity in comparative studies of software prediction models

    Empirical studies on software prediction models do not converge with respect to the question "which prediction model is best?" The reason for this lack of convergence is poorly understood. In this simulation study, we have examined a frequently used research procedure comprising three main ingredients: a single data sample, an accuracy indicator, and cross validation. Typically, these empirical studies compare a machine learning model with a regression model. In our study, we use simulation to compare a machine learning model with a regression model. The results suggest that it is the research procedure itself that is unreliable, and this lack of reliability may strongly contribute to the lack of convergence. Our findings thus cast some doubt on the conclusions of any study of competing software prediction models that used this research procedure as the basis for model comparison. We therefore need to develop more reliable research procedures before we can have confidence in the conclusions of comparative studies of software prediction models.
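    The procedure examined above can be sketched in a few lines. The snippet below is a minimal, hedged illustration rather than the authors' simulation: the synthetic data set, the choice of mean absolute error as the accuracy indicator, and the scikit-learn models are assumptions made for the example. It simply shows how repeating the same single-sample cross-validation comparison with different random splits can produce different rankings.

```python
# Minimal sketch of the research procedure under study (assumed details, not
# the authors' actual simulation): a single data sample, one accuracy
# indicator (mean absolute error), and k-fold cross validation used to rank a
# regression model against a machine learning model.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = make_regression(n_samples=100, n_features=5, noise=10.0, random_state=0)

models = {
    "regression": LinearRegression(),
    "machine learning": RandomForestRegressor(n_estimators=100, random_state=0),
}

# Repeat the whole procedure with different fold assignments; if the ranking
# flips between repetitions, the procedure is an unreliable basis for
# declaring one model "best".
for seed in range(3):
    cv = KFold(n_splits=10, shuffle=True, random_state=seed)
    for name, model in models.items():
        mae = -cross_val_score(
            model, X, y, cv=cv, scoring="neg_mean_absolute_error"
        ).mean()
        print(f"split seed {seed}: {name:16s} MAE = {mae:6.1f}")
```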

    The determination of measures of software reliability

    Measurement of software reliability was carried out during the development of database software for a multi-sensor tracking system. The failure ratio and failure rate were found to be consistent measures. Trend lines could be established from these measurements that provide good visualization of progress on the job as a whole as well as on individual modules. Over one-half of the observed failures were due to factors associated with the individual run submission rather than with the code proper. Possible applications of these findings for line management, project management, functional management, and regulatory agencies are discussed. Steps for simplifying the measurement process and for using these data to predict operational software reliability are outlined.
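    As a rough illustration of the two measures (the run records, field names, and per-week grouping below are assumptions for the sketch, not the measurement scheme used in the study), the failure ratio can be computed as failures per submitted run, and a simple failure trend can be tracked per module and per week:

```python
# Hypothetical run records: (module, week, failed). The data and definitions
# are illustrative assumptions, not taken from the study.
from collections import defaultdict

runs = [
    ("tracker_db", 1, True), ("tracker_db", 1, False), ("tracker_db", 2, False),
    ("sensor_io", 1, True), ("sensor_io", 2, True), ("sensor_io", 2, False),
]

per_module = defaultdict(lambda: {"runs": 0, "failures": 0})
per_week = defaultdict(lambda: {"runs": 0, "failures": 0})

for module, week, failed in runs:
    for bucket in (per_module[module], per_week[week]):
        bucket["runs"] += 1
        bucket["failures"] += int(failed)

# Failure ratio: observed failures per submitted run, per module.
for module, stats in sorted(per_module.items()):
    print(f"{module}: failure ratio = {stats['failures'] / stats['runs']:.2f}")

# Failure count over time: failures per week, giving a simple trend line.
for week, stats in sorted(per_week.items()):
    print(f"week {week}: {stats['failures']} failures in {stats['runs']} runs")
```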

    Research and Applications of the Processes of Performance Appraisal: A Bibliography of Recent Literature, 1981-1989

    [Excerpt] There have been several recent reviews of different subtopics within the general performance appraisal literature. The reader of these reviews will find, however, that the accompanying citations may be of limited utility for one or more reasons. For example, the reference sections of these reviews are usually composed of citations which support a specific theory or practical approach to the evaluation of human performance. Consequently, the citation lists for these reviews are, as they must be, highly selective and do not include works that may have only a peripheral relationship to a given reviewer's target concerns. Another problem is that the citations are out of date; that is, review articles frequently contain many citations that are fifteen or more years old. The generation of new studies and knowledge in this field occurs very rapidly. This creates a need for additional reference information solely devoted to identifying the wealth of new research, ideas, and writing that is changing the field.

    Effects of Selection Systems on Job Search Decisions

    On the basis of Gilliland's (1993) model of selection system fairness, the present study investigated the relationships between selection procedures, perceived selection system fairness, and job search decisions in both hypothetical and actual organizations. We conducted two studies to test the model. In Study 1, we used an experimental method to examine job seekers' perceptions of, and reactions to, five widely used selection procedures. Results suggested that applicants viewed employment interviews and cognitive ability tests as more job related than biographical inventories (biodata), personality tests, and drug tests, and that job relatedness significantly affected fairness perceptions, which in turn affected job search decisions. Study 2 examined the hypothesized relationships between the selection systems and job seekers' pursuit of actual, relevant organizations. Results from both studies offer support for the hypothesized model, suggesting that selection tests have differential effects on perceived selection system validity and fairness, which affect subsequent job search decisions.

    Measuring Software Process: A Systematic Mapping Study

    Context: Measurement is essential to reach predictable performance and high-capability processes. It provides support for better understanding, evaluation, management, and control of the development process and project, as well as the resulting product. It also enables organizations to improve and predict their processes' performance, which places organizations in a better position to make appropriate decisions. Objective: This study aims to understand the measurement of the software development process: to identify studies, create a classification scheme based on the identified studies, and then map those studies into the scheme to answer the research questions. Method: Systematic mapping is the selected research methodology for this study. Results: A total of 462 studies are included and classified into four topics with respect to their focus and into three groups based on publishing date. Five abstractions and 64 attributes were identified, and 25 methods/models and 17 contexts were distinguished. Conclusion: Capability and performance were the most measured process attributes, while effort and performance were the most measured project attributes. Goal Question Metric and Capability Maturity Model Integration were the main methods and models used in the studies, whereas agile/lean development and small/medium-size enterprise were the most frequently identified research contexts. Funding: Ministerio de Economía y Competitividad TIN2013-46928-C3-3-R, TIN2016-76956-C3-2-R, TIN2015-71938-RED.

    Task analysis for error identification: Theory, method and validation

    This paper presents the underlying theory of Task Analysis for Error Identification. The aim is to illustrate the development of a method that has been proposed for the evaluation of prototypical designs from the perspective of predicting human error. The paper presents the method applied to representative examples. The methodology is considered in terms of the various validation studies that have been conducted and is discussed in the light of a specific case study.

    Psychometrics in Practice at RCEC

    A broad range of topics is dealt with in this volume: from combining the psychometric generalizability and item response theories to ideas for an integrated formative use of data-driven decision making, assessment for learning, and diagnostic testing. A number of chapters pay attention to computerized (adaptive) and classification testing. Other chapters treat the quality of testing in a general sense, but for topics like maintaining standards or the testing of writing ability, the quality of testing is dealt with more specifically. All authors are connected to RCEC as researchers. They present one of their current research topics and provide some insight into the focus of RCEC. The topics were selected and edited with the intention that the book be of special interest to educational researchers, psychometricians, and practitioners in educational assessment.