4 research outputs found

    Can we learn from wrong simulation models? A preliminary experimental study on user learning

    A number of authors believe that wrong models can be useful, providing learning opportunities for their users. This paper details an experiment on model complexity, investigating differences in learning after using a simplified versus an adequate version of the same model. Undergraduate students were asked to solve a resource utilization task for an ambulance service. The treatment variable was the type of model used (complex, simple, or no model). Two questionnaires (administered before and after the process) and a presentation captured participants' attitudes towards the solution. Results suggest that differences in learning were not significant, while users of the simple model demonstrated a better understanding of the problem. This paper presents a preliminary behavioural operational research study that contributes towards identifying the value of wrong simulation models from the perspective of model users.

    Can we learn from "wrong" models? A study of the characteristics and use of "wrong" simulation models

    This thesis addresses the idea that even "wrong" models in the field of Operational Research (OR) and Simulation, and more specifically in Discrete Event Simulation (DES), may still be useful and even offer learning for their users. The inspiration for the topic draws on literature suggesting that even if a model is viewed as "wrong", it may still offer certain benefits, especially if we consider that different people may perceive and evaluate models differently. Yet, there is a dearth of evidence as to what constitutes a "wrong" model and what its possible usefulness might be. This project aims to address these gaps using an empirical approach. To achieve this aim, three objectives are addressed. The first objective identifies factors of wrongness in practice, categorised under model characteristics. The second objective explores the extent to which "wrong" models are used in practice. The last objective investigates whether learning can occur from "wrong" models in practice. The implementation is based on previous works that have considered learning within OR and Simulation. Additionally, a specific learning framework from the field of psychology is utilised. To accomplish the above objectives, two different studies are carried out, an exploratory study and an experimental study, addressing the topic of wrongness qualitatively and quantitatively. The exploratory study conducts semi-structured interviews with DES modellers reporting their experience of models considered "wrong". A focused analysis also examines extreme cases of simplification, where simple and complex models considered "wrong" are assessed with respect to their usefulness and learning. The experimental study consists of a laboratory-based experiment with students to test attitude changes towards a problem when using an "adequate" and an "oversimplified" model. The aim is to measure differences in learning, confidence, model perception and model usefulness between the two versions. The outcomes of the interviews suggest that "wrong" models are encountered in practice and can indeed be useful, offering learning for their users as well as for the modellers involved. Specific factors of wrongness and uses of such models are identified and commented upon, while the exploration of learning leads to suggestions on how to deal with "wrong" models in OR and Simulation. Additionally, the findings from the experimental study support the indication that simple models can be "wrong" but still useful. They may also change users' beliefs, as users of the oversimplified model had similar learning outcomes to users of the adequate model. The thesis offers a detailed investigation of "wrong" models. The contributions are: identification of what is considered a "wrong" model in practice, possible uses to consider when "wrong" models are encountered, and evidence that learning can be acquired even from "wrong" models. These contributions expand the current literature and lead to a better understanding of model wrongness in simulation, including practical suggestions for the use of "wrong" models.

    Are “wrong” models useful? A qualitative study of discrete event simulation modeller stories

    Little is known about models deemed "wrong" by their modellers or clients in Operational Research (OR). This paper aims to improve our understanding of "wrong" Discrete Event Simulation (DES) models based on empirical evidence. We interview 22 modellers who describe projects where modelling did not go as expected and explain how they dealt with those situations. This resulted in 54 stories reporting that a model was identified as "wrong" by the modeller, the client or both. We perform a qualitative text analysis of the stories to identify the factors that define a "wrong" model as well as potential uses of "wrong" models. The results show that some models, even though considered "wrong", may still be useful in practice and provide valuable insights to users and modellers. This study offers practical suggestions for users and modellers to consider when dealing with a model that is considered "wrong".

    Can we learn from simplified simulation models? An experimental study on user learning

    Simple models are considered useful for decision making, especially when decisions are made by a group of stakeholders. This paper describes an experimental study that investigates whether the level of model detail affects users’ learning. Our subjects, undergraduate students, were asked to solve a resource utilisation task for an ambulance service problem. They worked in groups under three different conditions, based on the type of simulation model used (a simple model, an adequate model, or no model at all), to analyse the problem and reach conclusions. Before-and-after questionnaires and a group presentation capture the participants’ individual and group attitudes towards the solution. Our results suggest that differences in learning from using the two different models were not significant, while users of the simple model demonstrated a better understanding of the problem. The outcomes and implications of our findings are discussed, alongside the limitations and future work.