20 research outputs found

    Tools and methods in participatory modeling: Selecting the right tool for the job

    © 2018 Elsevier Ltd. Various tools and methods are used in participatory modeling, at different stages of the process and for different purposes. The diversity of tools and methods can create challenges for stakeholders and modelers when selecting the ones most appropriate for their projects. We offer a systematic overview, assessment, and categorization of methods to assist modelers and stakeholders with their choices and decisions. Most available literature provides little justification or information on the reasons for the use of particular methods or tools in a given study. In most cases, the prior experience and skills of the modelers appear to have had a dominant effect on the selection of the methods used. While we have found no real evidence that this approach is wrong, we think that putting more thought into the method selection process and choosing the most appropriate method for the project can produce better results. Based on expert opinion and a survey of modelers engaged in participatory processes, we offer practical guidelines to improve decisions about method selection at different stages of the participatory modeling process.

    Formalizing Arguments From Cause-Effect Rules


    Ideal, best, and emerging practices in creating artificial societies

    © 2019 Society for Modeling & Simulation International (SCS). Artificial societies used to guide and evaluate policies should be built by following “best practices”. However, this goal may be challenged by the complexity of artificial societies and the interdependence of their sub-systems (e.g., built environment, social norms). We created a list of seven practices based on simulation methods, specific aspects of quantitative individual models, and data-driven modeling. By evaluating published models for public health with respect to these ideal practices, we noted significant gaps between current and ideal practices on key items such as replicability and uncertainty. We outlined opportunities to address such gaps, such as integrative models and advances in the computational machinery used to build simulations.

    An Online Environment to Compare Students’ and Expert Solutions to Ill-Structured Problems

    Practitioners often face ill-structured problems. However, it is difficult for instructors to assess their students’ work on such problems, as a broad set of solutions exists and may depend on the context. One way to assess student learning is through the evaluation of their mental models, which can be presented in the form of a causal network or ‘map’. While comparing a student’s map to an expert’s map can assist with the evaluation, this is a challenging process, in part due to variations in language, resulting in the use of different terms for the same construct. The first step of the comparison is to address these variations by aligning as many of the students’ terms as possible with their equivalents in the expert’s map. We present the design and implementation of a software tool to assist with the alignment task. The software improves on previous work by optimizing usability (e.g., minimizing the number of clicks to create an alignment) and by leveraging previous alignments to recommend new ones. In addition, alignments can be done collaboratively, as our system is available online: one instructor can invite others to edit or see the alignments. Further improvements to this system may be achieved using content-based recommender systems or natural language processing.
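    The abstract above does not describe how alignment recommendations are computed, but the general technique — suggesting matches between a student's terms and an expert's terms based on surface similarity — can be sketched as follows. This is an illustrative example only, not the authors' implementation: the function name, threshold, and sample terms are assumptions, and it uses simple string similarity where the described system may use richer signals such as previous alignments.

    ```python
    from difflib import SequenceMatcher

    def suggest_alignments(student_terms, expert_terms, threshold=0.6):
        """Suggest (student_term, expert_term, score) triples.

        For each student term, find the expert term with the highest
        surface similarity; keep the pair only if it clears the threshold.
        """
        suggestions = []
        for s in student_terms:
            best_term, best_score = None, 0.0
            for e in expert_terms:
                score = SequenceMatcher(None, s.lower(), e.lower()).ratio()
                if score > best_score:
                    best_term, best_score = e, score
            if best_term is not None and best_score >= threshold:
                suggestions.append((s, best_term, round(best_score, 2)))
        return suggestions

    # Hypothetical terms from a student's and an expert's causal map.
    student = ["obesity rate", "physical activities", "junk food ads"]
    expert = ["obesity", "physical activity", "advertising of unhealthy food"]
    print(suggest_alignments(student, expert))
    ```

    In this sketch, "obesity rate" and "physical activities" would be matched to their expert counterparts, while "junk food ads" falls below the threshold and is left for the instructor to align manually — mirroring the semi-automatic workflow the abstract describes.
    
    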