
    A proposal for the evaluation of adaptive information retrieval systems using simulated interaction

    The Centre for Next Generation Localisation (CNGL) is involved in building interactive adaptive systems which combine Information Retrieval (IR), Adaptive Hypermedia (AH) and adaptive web techniques and technologies. The complex functionality of these systems, coupled with the variety of potential users, means that the experiments necessary to evaluate such systems are difficult to plan, implement and execute. This evaluation requires both component-level scientific evaluation and user-based evaluation. Automated replication of experiments and simulation of user interaction would be hugely beneficial in the evaluation of adaptive information retrieval systems (AIRS). This paper proposes a methodology for the evaluation of AIRS which leverages simulated interaction. The hybrid approach detailed combines: (i) user-centred methods for simulating interaction and personalisation; (ii) evaluation metrics that combine Human Computer Interaction (HCI), AH and IR techniques; and (iii) the use of qualitative and quantitative evaluations. The benefits and limitations of evaluations based on user simulations are also discussed.
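As a rough illustration of the kind of simulated interaction the abstract describes (not the paper's actual method), the sketch below simulates a user scanning a ranked result list top-down and clicking relevant documents probabilistically, then aggregates sessions into mean reciprocal rank, a standard IR metric. All function names, parameters and document identifiers here are invented for the example.

```python
import random

def simulate_session(ranking, relevant, click_prob=0.8, patience=5, seed=0):
    """Simulate a user scanning a ranked list top-down.

    The user examines up to `patience` results and clicks a relevant
    one with probability `click_prob`. Returns the 1-based rank of the
    first click, or None if the user gives up.
    """
    rng = random.Random(seed)
    for rank, doc in enumerate(ranking[:patience], start=1):
        if doc in relevant and rng.random() < click_prob:
            return rank
    return None

def mean_reciprocal_rank(first_click_ranks):
    """Aggregate simulated sessions into MRR (no click scores 0)."""
    scores = [1.0 / r if r else 0.0 for r in first_click_ranks]
    return sum(scores) / len(scores)

# Two hypothetical rankings for the same information need: an adaptive
# system that promotes the relevant document, and a static baseline.
relevant = {"d2", "d5"}
adaptive = simulate_session(["d2", "d1", "d3"], relevant, seed=1)
baseline = simulate_session(["d1", "d3", "d2"], relevant, seed=1)
print(mean_reciprocal_rank([adaptive]), mean_reciprocal_rank([baseline]))
```

The same simulated users can be replayed against both systems, which is the replicability advantage the abstract argues for.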

    Challenges in Quantitative Abstractions for Collective Adaptive Systems

    As with most large-scale systems, the evaluation of quantitative properties of collective adaptive systems is an important issue that cuts across all development stages, from design (in the case of engineered systems) to runtime monitoring and control. Unfortunately, it is in general a difficult problem to tackle, owing to the typically high computational cost of the analysis. This calls for the development of appropriate quantitative abstraction techniques that preserve most of the system's dynamical behaviour in a more compact representation. This paper focuses on models based on ordinary differential equations and reviews recent results in which abstraction is achieved by aggregation of variables, reflecting on the shortcomings of the state of the art and setting out challenges for future research.
    Comment: In Proceedings FORECAST 2016, arXiv:1607.0200
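A minimal sketch of abstraction by aggregation of variables, assuming the simplest lumpable case (not taken from the paper): three ODE variables with identical linear dynamics can be replaced by their sum, which obeys a single ODE with the same rate. The integrator and the model below are illustrative.

```python
def euler(f, x0, dt, steps):
    """Fixed-step Euler integrator (sufficient for a sketch)."""
    x = list(x0)
    for _ in range(steps):
        dx = f(x)
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
    return x

# Full model: three species, each degrading at the same rate k.
k = 0.5
full = lambda x: [-k * xi for xi in x]

# Aggregated model: y = x1 + x2 + x3 satisfies dy/dt = -k*y exactly,
# because the variables have identical (lumpable) dynamics.
reduced = lambda y: [-k * y[0]]

xf = euler(full, [1.0, 2.0, 3.0], dt=0.01, steps=1000)
yr = euler(reduced, [6.0], dt=0.01, steps=1000)
print(sum(xf), yr[0])  # the two trajectories of the aggregate agree
```

In this lumpable case the reduction is exact; the challenges the paper surveys concern approximate aggregation, where the variables' dynamics are only nearly interchangeable and the abstraction introduces a quantifiable error.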

    "Touch me": workshop on tactile user experience evaluation methods

    In this workshop we plan to explore the possibilities and challenges of using physical objects and materials to evaluate the User Experience (UX) of interactive systems. These objects should address the shortfalls of current UX evaluation methods and allow for a qualitative (or even quantitative), playful and holistic evaluation of UX, without interfering with the users' personal experiences during interaction. This provides a tactile enhancement to the solely visual stimulation used in classical evaluation methods. The workshop serves as a basis for networking and community building among interested HCI researchers, designers and practitioners, and should encourage further development of the field of tactile UX evaluation.

    MultiVeStA: Statistical Model Checking for Discrete Event Simulators

    The modeling, analysis and performance evaluation of large-scale systems are difficult tasks. Due to the size and complexity of the systems considered, engineers typically run simulations of system models to obtain statistical estimates of quantitative properties. Similarly, computer scientists working on quantitative analysis use Statistical Model Checking (SMC), in which rigorous mathematical languages (typically logics) express system properties of interest. Such properties can then be estimated automatically by tools that perform simulations of the model at hand. These property specification languages, though often unpopular among engineers, provide a formal, compact and elegant way to express system properties without hard-coding them into the model definition. This paper presents MultiVeStA, a statistical analysis tool that can be easily integrated with existing discrete event simulators, enriching them with efficient distributed statistical analysis and SMC capabilities.
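MultiVeStA itself integrates with external simulators; the Python sketch below only illustrates the core SMC loop that such tools automate: repeatedly sample a simulation, and stop when a CLT-based confidence interval on the estimated property is narrow enough. The stopping rule is a simplification, and the model and all names are invented for the example.

```python
import math
import random

def smc_estimate(run_sim, delta=0.05, batch=100, max_runs=100_000):
    """Estimate E[X] of a simulation's numeric output, sampling until
    the 95% confidence-interval half-width drops below `delta`."""
    z = 1.96  # normal quantile for a 95% interval
    samples = []
    while len(samples) < max_runs:
        samples.extend(run_sim() for _ in range(batch))
        n = len(samples)
        mean = sum(samples) / n
        var = sum((s - mean) ** 2 for s in samples) / max(n - 1, 1)
        if z * math.sqrt(var / n) < delta:
            break
    return mean, n

# Toy discrete-event model: a biased random walk. The "property" is the
# probability of reaching state 3 within 10 steps.
rng = random.Random(42)
def reaches_three():
    state = 0
    for _ in range(10):
        state += 1 if rng.random() < 0.6 else -1
        if state == 3:
            return 1.0
    return 0.0

prob, runs = smc_estimate(reaches_three)
print(f"P(reach 3 within 10 steps) ~ {prob:.2f} after {runs} runs")
```

The appeal the abstract highlights is that the property (here hard-coded in `reaches_three`) can instead be written in a logic and evaluated against any simulator exposing a step interface, without touching the model code.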

    Evaluating land administration systems: a comparative method with an application to Peru and Honduras

    This article develops a methodology for the evaluation of land administration systems. We propose a set of quantitative and qualitative indicators, each with benchmarks, that signal possible avenues for improving the administration's structure and budgetary/management arrangements, in order to bring about the following goals: (1) to contribute to public sector financing through taxes; (2) to encourage the productive and sustainable use of land; and (3) to facilitate access to land for low-income citizens. This methodology was applied to the cases of Honduras and Peru in order to refine our draft evaluation indicators while evaluating the systems of both countries. Here we present the final refined indicators and benchmarks, and the conclusions from both case studies.
    Keywords: land administration systems; cadastre; evaluation; performance indicators
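The indicator-with-benchmark scheme can be sketched mechanically: compare each observed value against its benchmark, noting direction (some indicators are better when lower). The indicator names and values below are entirely hypothetical, not the article's actual indicator set.

```python
# Hypothetical (observed, benchmark) pairs; not from the article.
indicators = {
    "property_tax_collection_rate": (0.62, 0.80),
    "parcels_formally_registered":  (0.55, 0.90),
    "days_to_register_transfer":    (33, 45),  # lower is better
}
lower_is_better = {"days_to_register_transfer"}

def meets_benchmark(name, observed, benchmark):
    """Pass/fail: does the indicator reach its benchmark?"""
    if name in lower_is_better:
        return observed <= benchmark
    return observed >= benchmark

report = {name: meets_benchmark(name, *pair) for name, pair in indicators.items()}
print(report)
```

Failed indicators point to the "avenues for improvement" the abstract mentions; the article's qualitative indicators would need a rubric rather than a numeric comparison.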