140,051 research outputs found

    Survey on Evaluation Methods for Dialogue Systems

    In this paper, we survey the methods and concepts developed for the evaluation of dialogue systems. Evaluation is a crucial part of the development process. Often, dialogue systems are evaluated by means of human evaluations and questionnaires; however, this tends to be very costly and time-intensive. Thus, much work has been put into finding methods that reduce the involvement of human labour. In this survey, we present the main concepts and methods. For this, we differentiate between the various classes of dialogue systems (task-oriented dialogue systems, conversational dialogue systems, and question-answering dialogue systems). We cover each class by introducing the main technologies developed for the dialogue systems and then by presenting the evaluation methods regarding that class.
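The survey itself is not reproduced in this listing. As one concrete illustration of the automatic evaluation methods such surveys discuss, a minimal sketch of a word-overlap F1 score between a system response and a reference reply is shown below. This is a generic metric, not a method proposed by the paper; the function name and example strings are invented for illustration.

```python
# Minimal sketch of a word-overlap F1 metric, a simple automatic way to
# score a dialogue response against a human reference (illustrative only).
from collections import Counter

def overlap_f1(response: str, reference: str) -> float:
    """F1 over token multisets of a system response and a reference reply."""
    resp = Counter(response.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((resp & ref).values())  # shared tokens, counted with multiplicity
    if overlap == 0:
        return 0.0
    precision = overlap / sum(resp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Example: 4 of 5 response tokens match the 4-token reference
print(round(overlap_f1("the weather is sunny today", "the weather is sunny"), 4))
```

Metrics of this kind reduce reliance on human judges, which is precisely the motivation the abstract describes, though they correlate only loosely with human quality ratings for open-ended dialogue.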

    Evaluation of methods to accurately characterize the thermal conductivity of micro-and nanocellular polymers based on poly(methyl-methacrylate) (PMMA) produced at lab-scale

    The characterization of the thermal conductivity of new and enhanced thermal insulators developed at lab-scale is a challenge. The small dimensions of the prototypes make it impossible to use conventional techniques, because steady-state methods require large samples. Furthermore, the accuracy of transient methods for measuring the thermal conductivity is not clear. In this work, we compare four different approaches to measure the thermal conductivity of small prototypes of nanocellular poly(methyl-methacrylate) (PMMA). Both steady-state and transient techniques are used. Results show that the transient plane source method is not suitable for the characterization of these materials (the deviation from the steady-state methods is on average higher than 15%). In addition, two different approaches for measuring the thermal conductivity of small samples via a steady-state technique are proposed and validated. Funding: Junta de Castilla y León (grant VA202P20); Ministerio de Ciencia, Innovación y Universidades (projects RTI2018-098749-B-I00 and PTQ2019-010560); Instituto para la Competitividad Empresarial de Castilla y León - Fondo Europeo de Desarrollo Regional (projects PAVIPEX 04/18/VA/008 and FICACEL 11/20/VA/0001).
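The abstract does not detail the measurement protocols, but steady-state methods generally reduce to Fourier's law of one-dimensional conduction, k = Q·d / (A·ΔT). The sketch below applies that relation with made-up sample dimensions; the function and all numbers are illustrative assumptions, not values from the paper.

```python
# Illustrative steady-state thermal conductivity calculation via Fourier's
# law for 1-D conduction: k = Q * d / (A * dT). Values are invented.

def thermal_conductivity(q_watts, thickness_m, area_m2, delta_t_kelvin):
    """Return conductivity k in W/(m*K) from heat flow Q, sample thickness d,
    cross-sectional area A, and temperature drop dT across the sample."""
    return q_watts * thickness_m / (area_m2 * delta_t_kelvin)

# Example: 0.5 W through a 5 mm thick, 25 cm^2 sample with a 10 K drop
k = thermal_conductivity(0.5, 0.005, 0.0025, 10.0)
print(f"k = {k:.4f} W/(m*K)")  # k = 0.1000 W/(m*K)
```

The small-sample difficulty the abstract highlights is visible here: shrinking the area A while keeping the heat flow measurable makes ΔT and edge losses harder to control, which is why the authors needed adapted steady-state approaches.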

    Scoping analytical usability evaluation methods: A case study

    Analytical usability evaluation methods (UEMs) can complement empirical evaluation of systems: for example, they can often be used earlier in design and can provide accounts of why users might experience difficulties, as well as what those difficulties are. However, their properties and value are only partially understood. One way to improve our understanding is through detailed comparisons using a single interface or system as a target for evaluation, but we need to look deeper than simple problem counts: we need to consider what kinds of accounts each UEM offers, and why. Here, we report on a detailed comparison of eight analytical UEMs. These eight methods were applied to a robotic arm interface, and the findings were systematically compared against video data of the arm in use. The usability issues that were identified could be grouped into five categories: system design, user misconceptions, conceptual fit between user and system, physical issues, and contextual ones. Other possible categories, such as user experience, did not emerge in this particular study. With the exception of Heuristic Evaluation, which supported a range of insights, each analytical method was found to focus attention on just one or two categories of issues. Two of the three "home-grown" methods (Evaluating Multimodal Usability and Concept-based Analysis of Surface and Structural Misfits) were found to occupy particular niches in the space, whereas the third (Programmable User Modeling) did not. This approach has identified commonalities and contrasts between methods and provided accounts of why a particular method yielded the insights it did. Rather than considering measures such as problem count or thoroughness, this approach has yielded insights into the scope of each method.

    "Touch me": workshop on tactile user experience evaluation methods

    In this workshop we plan to explore the possibilities and challenges of physical objects and materials for evaluating the User Experience (UX) of interactive systems. These objects should address shortcomings of current UX evaluation methods and allow for a qualitative (or even quantitative), playful and holistic evaluation of UX -- without interfering with the users' personal experiences during interaction. This provides a tactile enhancement to the solely visual stimulation used in classical evaluation methods. The workshop serves as a basis for networking and community building among interested HCI researchers, designers and practitioners, and should encourage further development of the field of tactile UX evaluation.

    Evaluation of Trace Alignment Quality and its Application in Medical Process Mining

    Full text link
    Trace alignment algorithms have been used in process mining for discovering consensus treatment procedures and process deviations. Different alignment algorithms, however, may produce very different results, and no widely adopted method exists for evaluating the results of trace alignment. Existing reference-free evaluation methods cannot adequately and comprehensively assess alignment quality. We analyzed and compared the existing evaluation methods, identified their limitations, and introduced improvements to two reference-free evaluation methods. Our approach assesses the alignment result globally instead of locally, and therefore helps the algorithm optimize overall alignment quality. We also introduced a novel metric to measure alignment complexity, which can be used as a constraint on alignment algorithm optimization. We tested our evaluation methods on a trauma resuscitation dataset and provided a medical explanation of the activities and patterns identified as deviations using the proposed evaluation methods. Comment: 10 pages, 6 figures and 5 tables.
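The paper's improved metrics are not given in the abstract. For readers unfamiliar with reference-free alignment evaluation, the sketch below shows one generic consensus-style score: the average per-column agreement of an alignment, i.e. the fraction of non-gap entries in each column that match that column's most common activity. The function, gap symbol, and example traces are all illustrative assumptions, not the authors' metrics.

```python
# Illustrative reference-free alignment score: average per-column agreement
# (fraction of non-gap entries matching the column's majority activity).
# A generic consensus measure, not the metrics proposed in the paper.
from collections import Counter

GAP = "-"

def column_agreement(aligned_traces):
    """aligned_traces: list of equal-length lists of activity labels or GAP.
    Returns the mean, over non-empty columns, of majority-label frequency."""
    n_cols = len(aligned_traces[0])
    scores = []
    for col in range(n_cols):
        symbols = [trace[col] for trace in aligned_traces if trace[col] != GAP]
        if not symbols:
            continue  # column is all gaps; skip it
        majority_count = Counter(symbols).most_common(1)[0][1]
        scores.append(majority_count / len(symbols))
    return sum(scores) / len(scores)

# Three aligned traces of four activities each; one disagreement in column 2
alignment = [
    ["A", "B", "C", "-"],
    ["A", "B", "X", "D"],
    ["A", "B", "C", "D"],
]
print(column_agreement(alignment))  # (1 + 1 + 2/3 + 1) / 4
```

A score like this is "local" in the sense the abstract criticizes: it looks at each column independently, which is exactly the limitation the authors' global assessment is meant to overcome.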

    Characteristics and Evaluation Methods of the CASE Tools

    The acronym CASE – Computer Assisted Software Engineering – is the term used to indicate a collection of methods, tools and processes used in the development of software products with the assistance of the computer.