4,708,370 research outputs found

    Information systems evaluation methodologies

    Due to the prevalent use of Information Systems (IS) in modern organisations, evaluation research in this field is becoming increasingly important. In light of this, a set of rigorous methodologies has been developed and used by IS researchers and practitioners to evaluate increasingly complex IS implementations. Moreover, different types of IS and different evaluation perspectives require the selection and use of different evaluation approaches and methodologies. This paper aims to identify, explore, investigate and discuss the key methodologies that can be used in IS evaluation from different perspectives, namely by nature (e.g. summative vs. formative evaluation) and by strategy (e.g. goal-based, goal-free and criteria-based evaluation). The paper concludes that evaluation methodologies should be selected depending on the nature of the IS and the specific goals and objectives of the evaluation. Nonetheless, it is also proposed that formative criteria-based evaluation and summative criteria-based evaluation are currently among the most widely used in IS research. The authors suggest that the combined use of one or more of these approaches can be applied at different stages of the IS life cycle in order to generate more rigorous and reliable evaluation outcomes.
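
    To make the criteria-based strategy concrete, the following is a minimal, hypothetical Python sketch; the criteria, weights and 1-5 scoring scale are illustrative assumptions, not taken from the paper, and the same scoring routine could be run formatively (during development) or summatively (after deployment).

        # Illustrative sketch of criteria-based IS evaluation (not from the paper).
        # Each criterion gets a weight and a score on an assumed 1-5 scale; the
        # weighted average summarises the evaluation at a given life-cycle stage.

        CRITERIA_WEIGHTS = {          # hypothetical criteria and weights
            "usability": 0.3,
            "data_quality": 0.25,
            "system_performance": 0.25,
            "user_satisfaction": 0.2,
        }

        def criteria_based_score(scores: dict[str, float]) -> float:
            """Weighted average of per-criterion scores (assumed 1-5 scale)."""
            total_weight = sum(CRITERIA_WEIGHTS.values())
            weighted = sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)
            return weighted / total_weight

        # Formative use: score a prototype during development.
        print(criteria_based_score({"usability": 3, "data_quality": 4,
                                    "system_performance": 3, "user_satisfaction": 2}))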

    Systems effectiveness evaluation program

    Eight integrated computer programs provide the capability needed to reduce the man-hours required for routine monitoring and assessment of the effectiveness, reliability, and maintainability of large electronic equipment systems.
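
    As an illustration only (the eight programs themselves are not described in this abstract), the kind of routine reliability and maintainability assessment mentioned above typically reduces to standard metrics such as MTBF, MTTR and inherent availability, sketched here with invented inputs:

        # Standard reliability/maintainability metrics; the failure and repair
        # figures below are hypothetical, not data from the described programs.

        def mtbf(operating_hours: float, failures: int) -> float:
            """Mean time between failures."""
            return operating_hours / failures

        def mttr(total_repair_hours: float, repairs: int) -> float:
            """Mean time to repair."""
            return total_repair_hours / repairs

        def inherent_availability(mtbf_h: float, mttr_h: float) -> float:
            """Fraction of time the system is expected to be operational."""
            return mtbf_h / (mtbf_h + mttr_h)

        m_between = mtbf(8760.0, 12)   # one year of operation, 12 failures
        m_repair = mttr(36.0, 12)      # 36 total repair hours over 12 repairs
        print(inherent_availability(m_between, m_repair))   # ~0.996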

    An Evaluation of Journaling File Systems

    Many statisticians would agree that, had it not been for systems, the synthesis of virtual machines might never have occurred. In fact, few systems engineers would disagree with the improvement of the location-identity split. We motivate an algorithm for the synthesis of compilers, which we call Nap.

    Evaluation methodologies in Automatic Question Generation 2013-2018

    In the last few years, Automatic Question Generation (AQG) has attracted increasing interest. In this paper we survey the evaluation methodologies used in AQG. Based on a sample of 37 papers, our research shows that the development of these systems has not been accompanied by comparable developments in the methodologies used to evaluate them. Indeed, in the papers we examine here, we find a wide variety of both intrinsic and extrinsic evaluation methodologies. Such diverse evaluation practices make it difficult to reliably compare the quality of different generation systems. Our study suggests that, given the rapidly increasing level of research in the area, a common framework is urgently needed to compare the performance of AQG systems and NLG systems more generally.
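
    To illustrate what one of the surveyed intrinsic methodologies might look like in practice, here is a generic n-gram overlap sketch under assumed inputs; it is not a metric prescribed by the survey, merely the style of automatic comparison that intrinsic evaluation often relies on.

        # Minimal intrinsic-evaluation sketch: n-gram precision of a generated
        # question against a reference question. The example strings are invented.

        def ngrams(tokens: list[str], n: int) -> list[tuple[str, ...]]:
            return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

        def ngram_precision(generated: str, reference: str, n: int = 2) -> float:
            gen = ngrams(generated.lower().split(), n)
            ref = set(ngrams(reference.lower().split(), n))
            if not gen:
                return 0.0
            return sum(g in ref for g in gen) / len(gen)

        print(ngram_precision("what causes acid rain ?",
                              "what are the causes of acid rain ?"))   # 0.5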

    An hierarchical approach to performance evaluation of expert systems

    The number and size of expert systems are growing rapidly. Formal evaluation of these systems, which is not performed for many of them, increases their acceptability to the user community and hence their success. Hierarchical evaluation, previously conducted for computer systems, is applied here to expert system performance evaluation. Expert systems are also evaluated by treating them as software systems (or programs). This paper reports many of the basic concepts and ideas in the Performance Evaluation of Expert Systems Study being conducted at the University of Southwestern Louisiana.
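
    As a rough, assumed sketch of what a hierarchical evaluation might compute (the actual levels and weights used in the study are not given in this abstract), component scores can be rolled up level by level into a single system-level figure:

        # Hypothetical hierarchical roll-up: leaf nodes carry measured scores,
        # inner nodes aggregate their children by weighted average.

        def rollup(node: dict) -> float:
            """Return the node's score, aggregating children if present."""
            if "score" in node:                      # leaf: measured directly
                return node["score"]
            total_w = sum(w for w, _ in node["children"])
            return sum(w * rollup(child) for w, child in node["children"]) / total_w

        expert_system = {
            "children": [
                (0.5, {"children": [(0.6, {"score": 0.8}),     # knowledge base
                                    (0.4, {"score": 0.7})]}),  # inference engine
                (0.5, {"score": 0.9}),                          # user interface
            ]
        }
        print(rollup(expert_system))   # ~0.83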

    A proposal for the evaluation of adaptive information retrieval systems using simulated interaction

    The Centre for Next Generation Localisation (CNGL) is involved in building interactive adaptive systems which combine Information Retrieval (IR), Adaptive Hypermedia (AH) and adaptive web techniques and technologies. The complex functionality of these systems, coupled with the variety of potential users, means that the experiments necessary to evaluate such systems are difficult to plan, implement and execute. This evaluation requires both component-level scientific evaluation and user-based evaluation. Automated replication of experiments and simulation of user interaction would be hugely beneficial in the evaluation of adaptive information retrieval systems (AIRS). This paper proposes a methodology for the evaluation of AIRS which leverages simulated interaction. The hybrid approach detailed combines: (i) user-centred methods for simulating interaction and personalisation; (ii) evaluation metrics that combine Human Computer Interaction (HCI), AH and IR techniques; and (iii) the use of qualitative and quantitative evaluations. The benefits and limitations of evaluations based on user simulations are also discussed.
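
    A minimal sketch of the simulated-interaction idea, assuming a toy relevance-driven click model and a simple precision-style measure (the CNGL methodology itself is not reproduced here; the stopping probability and documents are invented):

        # Toy user simulation over a ranked result list: the simulated user scans
        # top-to-bottom, clicks relevant items, and stops with some probability.

        import random

        def simulate_session(ranking: list[str], relevant: set[str],
                             stop_prob: float = 0.3, seed: int = 0) -> float:
            """Return the fraction of examined documents that were clicked."""
            rng = random.Random(seed)
            clicks = examined = 0
            for doc in ranking:
                examined += 1
                if doc in relevant:
                    clicks += 1
                if rng.random() < stop_prob:     # simulated user abandons the list
                    break
            return clicks / examined

        ranking = ["d3", "d1", "d7", "d2", "d9"]
        print(simulate_session(ranking, relevant={"d1", "d2"}))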

    User-Centered Evaluation of Adaptive and Adaptable Systems

    Adaptive and adaptable systems provide tailored output to various users in various contexts. While adaptive systems base their output on implicit inferences, adaptable systems use explicitly provided information. Since the presentation or output of these systems is adapted, standard user-centered evaluation methods do not produce results that can be easily generalized. This calls for a reflection on the appropriateness of standard evaluation methods for user-centered evaluations of these systems. We have conducted a literature review to create an overview of the methods that have been used. When reviewing the empirical evaluation studies, we have focused, among other things, on the variables measured and on the implementation of results in the (re)design process. The goal of our review has been to compose a framework for user-centered evaluation. In the next phase of the project, we intend to test some of the most valid and feasible methods with an adaptive or adaptable system.
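
    To make the adaptive/adaptable distinction concrete, a hedged toy example (the reviewed systems are of course far richer, and all names and rules below are invented): an adaptable system reads an explicitly stated preference, while an adaptive one infers it from behaviour.

        # Toy contrast: adaptable = explicit user setting, adaptive = implicit
        # inference from interaction history.

        from collections import Counter

        def adaptable_layout(user_settings: dict) -> str:
            """Adaptable: honour the preference the user explicitly provided."""
            return user_settings.get("layout", "default")

        def adaptive_layout(click_history: list[str]) -> str:
            """Adaptive: infer the preferred layout from observed behaviour."""
            if not click_history:
                return "default"
            return Counter(click_history).most_common(1)[0][0]

        print(adaptable_layout({"layout": "compact"}))            # explicit
        print(adaptive_layout(["list", "grid", "grid", "grid"]))  # inferred: grid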

    Model-driven performance evaluation for service engineering

    Service engineering and service-oriented architecture, as integration and platform technologies, represent a recent approach to software systems integration. Software quality aspects such as performance are of central importance for the integration of heterogeneous, distributed service-based systems. Empirical performance evaluation is the process of measuring and calculating performance metrics of the implemented software. We present an approach for the empirical, model-based performance evaluation of services and service compositions in the context of model-driven service engineering. Temporal database theory is utilised for the empirical performance evaluation of model-driven developed service systems.
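
    As a hedged sketch of the empirical side described above (timing a service call and keeping the measurement as a timestamped record, loosely in the spirit of temporal databases; the paper's modelling notation and tooling are not shown, and the service under test is a stand-in):

        # Illustrative only: time a service invocation and append the measurement
        # as a timestamped record in an in-memory stand-in for a temporal table.

        import time
        from datetime import datetime, timezone

        measurements: list[dict] = []

        def evaluate_call(service, *args) -> float:
            """Invoke a service and record its response time with a timestamp."""
            start = time.perf_counter()
            service(*args)
            elapsed_ms = (time.perf_counter() - start) * 1000.0
            measurements.append({
                "service": service.__name__,
                "response_ms": elapsed_ms,
                "observed_at": datetime.now(timezone.utc),   # valid-time marker
            })
            return elapsed_ms

        def sample_service(n: int) -> int:        # hypothetical service under test
            return sum(i * i for i in range(n))

        evaluate_call(sample_service, 100_000)
        print(measurements[-1]["response_ms"])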