
    Effects of variability in models: a family of experiments

    The ever-growing need for customization creates a need to maintain software systems in many different variants. To avoid maintaining separate copies of the same model, developers of modeling languages and tools have recently started to provide implementation techniques for such variant-rich systems, notably variability mechanisms, which support implementing the differences between model variants. Available mechanisms follow either the annotative or the compositional paradigm, each of which has distinct benefits and drawbacks. Currently, language and tool designers often select a variability mechanism based solely on intuition. A better empirical understanding of how variability mechanisms are comprehended would help them improve support for effective modeling. In this article, we present an empirical assessment of annotative and compositional variability mechanisms for three popular types of models. We report and discuss findings from a family of three experiments with 164 participants in total, in which we studied the impact of different variability mechanisms during model comprehension tasks. We experimented with three model types commonly found in modeling languages: class diagrams, state machine diagrams, and activity diagrams. We find that, in two out of three experiments, the annotative mechanism led to better developer performance, while use of the compositional mechanism correlated with impaired performance. Across all experiments and all three considered tasks, participants preferred the annotative mechanism over the compositional one. We present actionable recommendations concerning support for flexible, task-specific solutions and the transfer of established best practices from the code domain to models.
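
    To make the two paradigms concrete, the following Python sketch contrasts them on a toy state machine model. This is a hedged illustration, not the study's materials: the type and function names (Transition, derive_annotative, derive_compositional) are invented, and real variability mechanisms operate on full modeling languages rather than lists of transitions.

```python
# Minimal sketch of the two variability paradigms on a toy state machine.
# All names here are hypothetical illustrations, not from the study.
from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    source: str
    target: str
    # Annotative paradigm: each element carries a presence condition naming
    # the features that must be selected for the element to appear.
    presence: frozenset = frozenset()

def derive_annotative(model, features):
    """Annotative: one superimposed, annotated model, filtered per configuration."""
    return [t for t in model if t.presence <= features]

def derive_compositional(base, fragments, features):
    """Compositional: a base model plus feature fragments composed on demand."""
    variant = list(base)
    for feature, fragment in fragments.items():
        if feature in features:
            variant.extend(fragment)
    return variant

# One annotated model (annotative style)...
annotated = [
    Transition("Idle", "Running"),
    Transition("Running", "Paused", presence=frozenset({"pause"})),
]
# ...versus a base model plus per-feature fragments (compositional style).
base = [Transition("Idle", "Running")]
fragments = {"pause": [Transition("Running", "Paused")]}

# Both styles derive the same variant for the configuration {"pause"}.
variant_a = {(t.source, t.target) for t in derive_annotative(annotated, {"pause"})}
variant_c = {(t.source, t.target) for t in derive_compositional(base, fragments, {"pause"})}
assert variant_a == variant_c
```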

    CoMoVA - A comprehension measurement framework for visualization systems

    Despite the burgeoning interest in visualizations across many disciplines, the question of comprehension remains unresolved: is the concept being communicated through the visual easily grasped and clearly interpreted? Visual comprehension is the characteristic of a visualization system that captures how efficiently and effectively users can grasp the underlying concepts through the interactions provided for exploring the visually represented information. Comprehension is considered a complex subject, intangible and subjective in nature, and assessing it can help determine the true usefulness of visualization systems to their intended users. A principal contribution of this research is the formulation of an empirical evaluation framework for systematically assessing the comprehension support a visualization system provides to its intended users. To assess comprehension, i.e., to measure this seemingly immeasurable factor of visualization systems, we propose a set of criteria based on a detailed analysis of the information flow from raw data to the cognition of information in the human mind. Our comprehension criteria are adapted from the pioneering work of two eminent researchers, Donald A. Norman and Aaron Marcus, who investigated human perception and cognition, and visual effectiveness, respectively. The proposed criteria were refined with the help of expert opinions. To gauge and verify their efficacy in practice, they were applied to a bioinformatics visualization study tool and an immersive art visualization environment. Given the vast variety of users and visualization goals, it is difficult to judge the effectiveness of different visualization tools and techniques in a context-independent fashion. We therefore propose an innovative way of evaluating a visualization technique by encapsulating it in a visualization pattern, where it is seen as a solution to a visualization problem in a specific context. These visualization patterns guide tool users and evaluators in comparing, understanding, and selecting appropriate visualization tools and techniques. Lastly, we propose a novel framework named CoMoVA (Comprehension Model for Visualization Assessment) that incorporates context of use, visualization patterns, visual design principles, and important cognitive principles into a coherent whole that can quantify the benefits of the visual representations and interactions a system provides to its intended audience. Our approach to evaluating visualization systems is similar to other questionnaire-based approaches such as SUMI (Software Usability Measurement Inventory), where all questions deal with the measurement of a common trait. We apply this framework to two static software visualization tools to demonstrate its practical benefits.
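
    Since the framework is questionnaire-based in the manner of SUMI, its scoring step can be pictured as a weighted aggregation of per-criterion ratings. The Python sketch below is purely illustrative: the criterion names, weights, and 1-5 rating scale are invented for the example, not taken from the actual CoMoVA instrument.

```python
# Illustrative sketch of questionnaire-based comprehension scoring in the
# spirit of CoMoVA/SUMI. Criteria, weights, and the rating scale are
# invented assumptions; the real instrument differs.
from statistics import mean

# Hypothetical comprehension criteria with relative weights summing to 1.
CRITERIA_WEIGHTS = {
    "perceptual_clarity": 0.3,   # how easily the visual encodings are perceived
    "cognitive_mapping": 0.4,    # how directly visuals map to underlying concepts
    "interaction_support": 0.3,  # how well interactions aid exploration
}

def comprehension_score(responses):
    """Aggregate per-criterion ratings (1-5) from all participants into a
    single weighted comprehension score for a visualization tool."""
    per_criterion = {c: mean(r[c] for r in responses) for c in CRITERIA_WEIGHTS}
    return sum(CRITERIA_WEIGHTS[c] * per_criterion[c] for c in CRITERIA_WEIGHTS)

# Two participants rate one tool on each criterion.
responses = [
    {"perceptual_clarity": 4, "cognitive_mapping": 3, "interaction_support": 5},
    {"perceptual_clarity": 5, "cognitive_mapping": 4, "interaction_support": 4},
]
print(f"weighted comprehension score: {comprehension_score(responses):.2f}")
```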

    Design and Evaluation of User-Centered Explanations for Machine Learning Model Predictions in Healthcare

    The difficulty of interpreting some high-performing models complicates the application of machine learning (ML) techniques to healthcare problems. Recently, there has been rapid growth in research on model interpretability; however, approaches to explaining complex ML models are rarely informed by end-user needs, and user evaluations of model interpretability are lacking, especially in healthcare. This makes it challenging to determine which explanation approaches might enable providers to understand model predictions in a comprehensible and useful way. Therefore, I aimed to utilize clinician perspectives to inform the design of explanations for ML-based prediction tools and improve the adoption of these systems in practice. In this dissertation, I proposed a new theoretical framework for designing user-centered explanations for ML-based systems. I then utilized the framework to propose explanation designs for predictions from a pediatric in-hospital mortality risk model. I conducted focus groups with healthcare providers to obtain feedback on the proposed designs, which informed the design of a user-centered explanation. The user-centered explanation was evaluated in a laboratory study to assess its effect on healthcare providers' perceptions of the model and their decision-making processes. The results demonstrated that the user-centered explanation design improved provider perceptions of utilizing the predictive model in practice, but exhibited no significant effect on provider accuracy, confidence, or efficiency in making decisions. Limitations of the evaluation study design, including a small sample size, may have affected the ability to detect an impact on decision-making. Nonetheless, the predictive model with the user-centered explanation was positively received by healthcare providers and demonstrates a viable approach to explaining ML model predictions in healthcare. Future work is required to address the limitations of this study and further explore the potential benefits of user-centered explanation designs for predictive models in healthcare. This work contributes a new theoretical framework for user-centered explanation design for ML-based systems that generalizes beyond the healthcare domain. Moreover, it provides meaningful insights into the role of model interpretability and explanation in healthcare while advancing the discussion on how to effectively communicate ML model information to healthcare providers.
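
    As a rough illustration of pairing a prediction with an explanation, the Python sketch below computes a toy mortality risk and an additive per-feature breakdown of the kind a clinician-facing display might show. Every element is an assumption: the model, feature names, and weights are entirely hypothetical and do not reproduce the dissertation's actual risk model or its user-centered explanation design.

```python
# Hypothetical sketch: a toy logistic risk model whose prediction is shown
# together with each feature's contribution to the logit, a simple additive
# explanation style. Features and weights are invented for illustration.
import math

WEIGHTS = {"heart_rate_z": 0.8, "lactate": 1.2, "gcs_deficit": 0.9}
BIAS = -3.0

def predict_with_explanation(patient):
    """Return the risk estimate plus each feature's logit contribution,
    which a clinician-facing UI could render alongside the prediction."""
    contributions = {f: WEIGHTS[f] * patient[f] for f in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    risk = 1 / (1 + math.exp(-logit))
    return risk, contributions

risk, why = predict_with_explanation(
    {"heart_rate_z": 1.5, "lactate": 2.0, "gcs_deficit": 1.0}
)
print(f"predicted risk: {risk:.1%}")
for feature, contribution in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: {contribution:+.2f}")  # largest drivers first
```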

    Comprehension of Procedural Visual Business Process Models - A Literature Review

    Visual process models are meant to facilitate comprehension of business processes. In practice, however, process models can be difficult to understand. The main goal of this article is to clarify the sources of cognitive effort in comprehending process models. The article undertakes a comprehensive descriptive review of empirical and theoretical work in order to systematically categorize and summarize existing findings on the factors that influence comprehension of visual process models. Methodologically, the article builds on a review of forty empirical studies that measure objective comprehension of process models, seven studies that measure subjective comprehension and user preferences, and thirty-two articles that discuss the factors that influence the comprehension of process models. The article provides information systems researchers with an overview of the empirical state of the art of process model comprehension, along with recommendations for new research questions to address and methods to use in future experiments.

    Influence factors for local comprehensibility of process models

    The main aim of this study is to investigate how humans understand process models and to develop an improved understanding of the relevant influence factors. Drawing on assumptions from cognitive psychology, this article addresses specific difficulties in deductive reasoning over process models. The authors developed a research model to capture the influence of two effects on the cognitive difficulty of reasoning tasks: (i) the presence of different control-flow patterns (such as conditional or parallel execution) in a process model and (ii) the interactivity of model elements. Based on solutions to 61 different reasoning tasks by 155 modelers, the results indicate that the presence of certain control-flow patterns influences the cognitive difficulty of reasoning tasks. In particular, sequence proved relatively easy, while loops proved difficult. Modelers with higher process modeling knowledge performed better and rated the subjective difficulty of loops lower than modelers with less process modeling knowledge. The findings additionally support the prediction that interactivity between model elements is positively related to the cognitive difficulty of reasoning. Our research contributes to both the academic literature on the comprehension of process models and the practitioner literature focusing on cognitive difficulties when using process models.
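
    To make such reasoning tasks concrete, the Python sketch below encodes a toy process model containing an exclusive choice and a loop, and answers an ordering question via reachability. The graph encoding and activity names are hypothetical and far simpler than the models used in the study.

```python
# Toy illustration of a deductive reasoning task over a process model:
# "can activity Y be executed after activity X?" The encoding and activity
# names are invented, not taken from the study's materials.
from collections import deque

# Process model as a successor relation; for this simple reachability
# question, XOR splits and the loop edge all reduce to plain edges.
PROCESS = {
    "start": ["A"],
    "A":     ["xor"],
    "xor":   ["B", "C"],      # exclusive choice (XOR split)
    "B":     ["join"],
    "C":     ["join", "A"],   # the edge back to A forms a loop
    "join":  ["end"],
    "end":   [],
}

def can_follow(model, earlier, later):
    """Breadth-first search: is 'later' reachable from 'earlier'?"""
    seen, queue = {earlier}, deque([earlier])
    while queue:
        node = queue.popleft()
        if node == later:
            return True
        for nxt in model[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(can_follow(PROCESS, "A", "B"))  # True: B is reachable via the XOR branch
print(can_follow(PROCESS, "B", "C"))  # False: after B, only join and end follow
```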