
    Who is behind the Model? Classifying Modelers based on Pragmatic Model Features

    Process modeling tools typically aid end users in generic, non-personalized ways. However, it is conceivable that different types of end users may profit from different types of modeling support. In this paper, we propose an approach based on machine learning that is able to classify modelers with regard to their expertise while they are creating a process model. To do so, it takes into account pragmatic features of the model under development. The proposed approach is fully automatic, unobtrusive, tool independent, and based on objective measures. An evaluation based on two data sets resulted in a prediction performance of around 90%. Our results further show that all features can be computed efficiently, which makes the approach applicable to online settings such as adaptive modeling environments. In this way, this work contributes to improving the performance of process modelers.
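    The core idea of classifying a modeler from pragmatic features of the model under development can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the paper's method: the feature names, the centroid values, and the nearest-centroid rule are all hypothetical stand-ins for the paper's learned classifier.

```python
# Hypothetical pragmatic features of a model under development:
# (element_count, avg_edge_length_px, layout_alignment_score).
# Centroid values are invented for illustration only.
NOVICE_CENTROID = (12.0, 180.0, 0.40)
EXPERT_CENTROID = (25.0, 120.0, 0.85)

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(features):
    """Assign the label of the nearer centroid."""
    return ("expert"
            if dist2(features, EXPERT_CENTROID) < dist2(features, NOVICE_CENTROID)
            else "novice")

print(classify((24.0, 125.0, 0.80)))  # close to the expert centroid
print(classify((12.0, 180.0, 0.40)))  # exactly the novice centroid
```

    Because such features are cheap to compute, a rule like this could in principle be re-evaluated after every edit, which is what makes online, adaptive modeling support plausible.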

    A Structured Approach for Designing Collaboration Experiences for Virtual Worlds

    While 3D virtual worlds are increasingly being used as interactive environments for collaboration, there is still no structured approach developed specifically for the combined design of 3D virtual environments and the collaborative activities in them. We argue that formalizing both the structural elements of virtual worlds and aspects of collaborative work or collaborative learning helps to develop fruitful collaborative work and learning experiences. As such, we present the avatar-based collaboration framework (ABC framework). Based on semiotics theory, the framework puts the collaborating groups at the center of the design and emphasizes the use of distinct features of 3D virtual worlds in collaborative learning environments and activities. In developing the framework, we have drawn from best practices in instructional design and game design, research in HCI, and findings and observations from our own empirical research that investigates collaboration patterns in virtual worlds. Along with the framework, we present a case study of its first application for a global collaborative learning project. This paper particularly addresses virtual world designers, educators, meeting facilitators, and other practitioners by thoroughly describing the process of creating rich collaboration and collaborative learning experiences for virtual worlds with the ABC framework.

    A Method for Avoiding Bias from Feature Selection with Application to Naive Bayes Classification Models

    For many classification and regression problems, a large number of features are available for possible use - this is typical of DNA microarray data on gene expression, for example. Often, for computational or other reasons, only a small subset of these features are selected for use in a model, based on some simple measure such as correlation with the response variable. This procedure may introduce an optimistic bias, however, in which the response variable appears to be more predictable than it actually is, because the high correlation of the selected features with the response may be partly or wholly due to chance. We show how this bias can be avoided when using a Bayesian model for the joint distribution of features and response. The crucial insight is that even if we forget the exact values of the unselected features, we should retain, and condition on, the knowledge that their correlation with the response was too small for them to be selected. In this paper we describe how this idea can be implemented for "naive Bayes" models of binary data. Experiments with simulated data confirm that this method avoids bias due to feature selection. We also apply the naive Bayes model to subsets of data relating gene expression to colon cancer, and find that correcting for bias from feature selection does improve predictive performance.
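    The optimistic bias described above is easy to reproduce in a small simulation. The sketch below is illustrative only (the sample sizes and the crude association measure are assumptions, not the paper's setup): features and response are pure noise, yet the features that survive selection look far more associated with the response than a typical feature, purely by chance.

```python
import random

random.seed(0)
n, p, k = 50, 200, 5  # 50 cases, 200 candidate features, keep the top 5

# Pure-noise binary data: no feature carries any real signal about y.
y = [random.randint(0, 1) for _ in range(n)]
X = [[random.randint(0, 1) for _ in range(p)] for _ in range(n)]

def assoc(j):
    """|P(x_j=1 | y=1) - P(x_j=1 | y=0)|: a crude association score."""
    ones = [X[i][j] for i in range(n) if y[i] == 1]
    zeros = [X[i][j] for i in range(n) if y[i] == 0]
    return abs(sum(ones) / len(ones) - sum(zeros) / len(zeros))

scores = sorted(assoc(j) for j in range(p))
overall_mean = sum(scores) / p       # typical chance-level association
top_k_mean = sum(scores[-k:]) / k    # association of the selected features

# The selected features look predictive even though the data is noise --
# this is the bias that conditioning on "not selected" corrects for.
print(f"mean association, all {p} features: {overall_mean:.3f}")
print(f"mean association, top {k} selected: {top_k_mean:.3f}")
```

    The paper's remedy, in these terms, is to keep the information that the discarded features scored below the selection threshold and condition the Bayesian model on that fact, rather than pretending the selected features were chosen in advance.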

    Experimental effects and causal representations

    In experimental settings, scientists often “make” new things, in which case the aim is to intervene in order to produce experimental objects and processes, characterized as ‘effects’. In this discussion, I illuminate an important performative function in measurement and experimentation in general: intervention-based experimental production (IEP). I argue that even though the goal of IEP is the production of new effects, it can be informative for causal details in scientific representations. Specifically, IEP can be informative about causal relations in: regularities under study; ‘intervention systems’, which are measurement/experimental systems; and new technological systems.

    Law and the Art of Modeling: Are Models Facts?

    In 2013, the Supreme Court made the offhand comment that empirical models and their estimations or predictions are not 'findings of fact' deserving of deference on appeal. The four Justices writing in dissent disagreed, insisting that an assessment of how a model works and its ability to measure what it claims to measure are precisely the kinds of factual findings that the Court, absent clear error, cannot disturb. Neither side elaborated on the controversy or defended its position doctrinally or normatively. That the highest Court could split 5-4 on such a crucial issue without even mentioning the stakes or the terms of the debate suggests that something is amiss in the legal understanding of models and modeling. This Article does what that case failed to do: it tackles the issue head-on, defining the legal status of a scientific model's results and of the assumptions and choices that go into its construction. I argue that, as a normative matter, models and their conclusions should not be treated like facts. Models are better evaluated by a judge, they do not merit total deference on appeal, and modeling choices are at least somewhat susceptible to analogical reasoning between cases. But I show that, as a descriptive matter, courts often treat models and their outcomes like issues of fact, despite doctrines like Daubert that encourage serious judicial engagement with modeling. I suggest that a perceived mismatch between ability and task leads judges to take the easier route of treating modeling issues as facts, and I caution that when judges avoid hard questions about modeling, they jeopardize their own power and influence.

    Models Needed to Assist in the Development of a National Fiber Supply Strategy for the 21st Century: Report of a Workshop

    This discussion paper reports on a Workshop on Wood Fiber Supply Modeling held October 3-4, 1996 in Washington, DC. Its purpose is to provide an overview of some of the modeling work being done on timber supply and of issues related to the more useful application of wood fiber supply and projection models. The paper includes brief presentations of three commonly used long-term timber projection and forecasting models: the Timber Assessment Market Model (TAMM) of the Forest Service; the Cintrafor Global Trade Model (CGTM) of the University of Washington; and the Timber Supply Model (TSM) of Resources for the Future. Issues related to the usefulness of the models are also addressed, along with a discussion of some applications of other timber or fiber projection models. The usefulness of the models is considered both from a technical perspective and from the perspective of various model users.
