16,974 research outputs found

    Four facets of a process modeling facilitator

    Business process modeling as a practice and research field has received great attention in recent years. However, while related artifacts such as models, tools, and grammars have substantially matured, comparatively little is known about the activities conducted during the actual act of process modeling. In particular, the key role of the modeling facilitator has not been researched to date. In this paper, we propose a new theory-grounded conceptual framework describing four facets (the driving engineer, the driving artist, the catalyzing engineer, and the catalyzing artist) that can be used by a facilitator. These facets and their associated behavioral styles have been empirically explored via in-depth interviews and additional questionnaires with experienced process analysts. We develop a proposal for an emerging theory for describing, investigating, and explaining the different behaviors associated with Business Process Modeling Facilitation. This theory is an important sensitizing vehicle for examining processes and outcomes of process modeling endeavors.

    Evaluating a Model of Team Collaboration via Analysis of Team Communications

    Human Factors and Ergonomics Society 51st Annual Meeting, 2007. The article of record may be found at https://doi.org/10.1177/154193120705100456. A model of team collaboration was developed that emphasizes the macro-cognitive processes entailed in collaboration and includes the major processes that underlie this type of communication: (1) individual knowledge building, (2) developing knowledge inter-operability, (3) team shared understanding, and (4) developing team consensus. This paper describes research conducted to empirically validate this model. Team communications that transpired during two complex problem-solving situations were coded using the cognitive process definitions included in the model. Data were analyzed for three teams that conducted a Maritime Interdiction Operation (MIO) and four teams that engaged in air-warfare scenarios. MIO scenarios involve a boarding team that boards a suspect ship to search for contraband cargo (e.g., explosives, machinery) and possible terrorist suspects. Air-warfare scenarios involve identifying air contacts in the combat information center of an Aegis ship. The way the teams' behavior in the two scenarios maps to the model of team collaboration is discussed. Approved for public release; distribution is unlimited.

    Every team deserves a second chance:an extended study on predicting team performance

    Voting among different agents is a powerful tool in problem solving, and it has been widely applied to improve the performance in finding the correct answer to complex problems. We present a novel benefit of voting that has not been observed before: voting patterns can be used to assess the performance of a team and predict its final outcome. This prediction can be executed at any moment during problem solving and is completely domain independent. Hence, it can be used to identify when a team is failing, allowing an operator to take remedial measures (such as changing team members or the voting rule, or increasing the allocation of resources). We present three main theoretical results: (1) we give a theoretical explanation of why our prediction method works; (2) contrary to what would be expected from a simpler explanation based on classical voting models, we show that we can make accurate predictions irrespective of the strength (i.e., performance) of the teams, and that, in fact, the prediction can work better for diverse teams composed of different agents than for uniform teams made of copies of the best agent; (3) we show that the quality of our prediction increases with the size of the action space. We perform extensive experiments in two different domains: Computer Go and Ensemble Learning. In Computer Go, we obtain high-quality predictions about the final outcome of games. We analyze the prediction accuracy for three teams with different levels of diversity and strength, and show that the prediction works significantly better for the diverse team. Additionally, we show that our method still works well when trained with games against one adversary but tested with games against another, showing the generality of the learned functions. Moreover, we evaluate four different board sizes and experimentally confirm better predictions on larger boards. We analyze in detail the learned prediction functions and how they change with each team and action space size. To show that our method is domain independent, we also present results in Ensemble Learning, where we make online predictions about the performance of a team of classifiers while they vote to classify sets of items. We study a set of classical classification algorithms from machine learning on a dataset of hand-written digits, and we are able to make high-quality predictions about the final performance of two different teams. Since our approach is domain independent, it can easily be applied to a variety of other domains.
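
    To make the core idea concrete, here is a minimal sketch of predicting a team's outcome from its voting patterns. This is not the authors' implementation: the agreement features, the logistic-regression predictor, and all data below are illustrative assumptions.

```python
# Sketch: predict a team's final outcome from how its members vote.
# Hypothetical feature set and data; the paper does not prescribe this API.
from collections import Counter
from typing import List

from sklearn.linear_model import LogisticRegression


def vote_features(rounds: List[List[int]], n_agents: int) -> List[float]:
    """Summarize agreement across voting rounds.

    `rounds` holds, for each decision point, the action voted for by each agent.
    """
    frac_unanimous = 0.0
    mean_plurality = 0.0
    for votes in rounds:
        top = Counter(votes).most_common(1)[0][1]  # size of largest voting bloc
        frac_unanimous += float(top == n_agents)
        mean_plurality += top / n_agents
    n = max(len(rounds), 1)
    return [frac_unanimous / n, mean_plurality / n]


# Invented training data: voting histories of finished games plus outcomes.
histories = [
    [[0, 0, 0], [1, 1, 0]],  # a mostly agreeing team
    [[0, 1, 2], [2, 0, 1]],  # scattered votes
]
outcomes = [1, 0]  # 1 = team won, 0 = team lost

X = [vote_features(h, n_agents=3) for h in histories]
clf = LogisticRegression().fit(X, outcomes)

# Mid-game, the same features give an online win-probability estimate.
print(clf.predict_proba([vote_features([[0, 0, 1]], n_agents=3)])[0, 1])
```

    Because the features describe only agreement among voters, never the domain itself, the same predictor applies unchanged to Go-playing agents or to an ensemble of classifiers, which is the sense in which the method is domain independent.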

    Every team deserves a second chance:Identifying when things go wrong

    Voting among different agents is a powerful tool in problem solving, and it has been widely applied to improve the performance in finding the correct answer to complex problems. We present a novel benefit of voting that has not been observed before: voting patterns can be used to assess the performance of a team and predict its final outcome. This prediction can be executed at any moment during problem solving and is completely domain independent. We present a theoretical explanation of why our prediction method works. Further, contrary to what would be expected from a simpler explanation based on classical voting models, we argue that we can make accurate predictions irrespective of the strength (i.e., performance) of the teams, and that, in fact, the prediction can work better for diverse teams composed of different agents than for uniform teams made of copies of the best agent. We perform experiments in the Computer Go domain, where we obtain high accuracy in predicting the final outcome of games. We analyze the prediction accuracy for three teams with different levels of diversity and strength, and we show that the prediction works significantly better for the diverse team. Since our approach is domain independent, it can easily be applied to a variety of domains.

    Agents for educational games and simulations

    This book consists mainly of revised papers that were presented at the Agents for Educational Games and Simulations (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and MultiAgent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers presented were carefully reviewed and selected from various submissions. The papers are organized in topical sections on middleware applications, dialogues and learning, adaptation and convergence, and agent applications.

    Predicting ConceptNet Path Quality Using Crowdsourced Assessments of Naturalness

    In many applications, it is important to characterize the way in which two concepts are semantically related. Knowledge graphs such as ConceptNet provide a rich source of information for such characterizations by encoding relations between concepts as edges in a graph. When two concepts are not directly connected by an edge, their relationship can still be described in terms of the paths that connect them. Unfortunately, many of these paths are uninformative and noisy, which means that the success of applications that use such path features crucially relies on their ability to select high-quality paths. In existing applications, this path selection process is based on relatively simple heuristics. In this paper we instead propose to learn to predict path quality from crowdsourced human assessments. Since we are interested in a generic, task-independent notion of quality, we simply ask human participants to rank paths according to their subjective assessment of the paths' naturalness, without attempting to define naturalness or steering the participants towards particular indicators of quality. We show that a neural network model trained on these assessments is able to predict human judgments on unseen paths with near-optimal performance. Most notably, we find that the resulting path selection method is substantially better than the current heuristic approaches at identifying meaningful paths.
    Comment: In Proceedings of the Web Conference (WWW) 201
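
    As a rough illustration of the learning setup, the sketch below trains a pointwise scorer on binary naturalness labels. The paper itself uses a neural network over richer path representations and ranked (not binary) judgments, so the encoding, model, and examples here are all assumptions.

```python
# Sketch: score ConceptNet-style paths by learned "naturalness".
# Toy bag-of-relations encoding and invented training examples.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression


def encode(path):
    """A path is a sequence of (concept, relation, concept) edges;
    this toy encoding keeps relation counts plus the path length."""
    feats = {"length": len(path)}
    for _, rel, _ in path:
        feats[rel] = feats.get(rel, 0) + 1
    return feats


# Invented judgments: paths people found natural vs. not.
natural = [
    [("dog", "IsA", "animal")],
    [("car", "UsedFor", "driving")],
]
unnatural = [
    [("dog", "RelatedTo", "bone"), ("bone", "RelatedTo", "china")],
    [("car", "RelatedTo", "star"), ("star", "RelatedTo", "sky")],
]

vec = DictVectorizer()
X = vec.fit_transform([encode(p) for p in natural + unnatural])
y = [1] * len(natural) + [0] * len(unnatural)  # 1 = judged natural

model = LogisticRegression().fit(X, y)

# Higher probability means the model considers the unseen path more natural.
new_path = [("tea", "AtLocation", "cup")]
print(model.predict_proba(vec.transform([encode(new_path)]))[0, 1])
```

    A learned scorer of this general shape can then replace the hand-written heuristics that existing applications use to filter paths.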

    Improving Hybrid Brainstorming Outcomes with Scripting and Group Awareness Support

    Previous research has shown that hybrid brainstorming, which combines individual and group methods, generates more ideas than either approach alone. However, the quality of these ideas remains similar across different methods. This study, guided by the dual-pathway to creativity model, tested two computer-supported scaffolds – scripting and group awareness support – for enhancing idea quality in hybrid brainstorming. Ninety-four higher education students, grouped into triads, were tasked with generating ideas under three conditions. The Control condition used standard hybrid brainstorming without extra support. In the Experimental 1 condition, students received scripting support during individual brainstorming, and students in the Experimental 2 condition additionally received group awareness support during the group phase. While the quantity of ideas was similar across all conditions, the Experimental 2 condition produced ideas of higher quality, and the Experimental 1 condition also showed improved idea quality in the individual phase compared to the Control condition.

    Argumentation Mining in User-Generated Web Discourse

    The goal of argumentation mining, an evolving research field in computational linguistics, is to design methods capable of analyzing people's argumentation. In this article, we go beyond the state of the art in several ways. (i) We deal with actual Web data and take up the challenges posed by the variety of registers, multiple domains, and unrestricted noisy user-generated Web discourse. (ii) We bridge the gap between normative argumentation theories and the argumentation phenomena encountered in actual data by adapting an argumentation model tested in an extensive annotation study. (iii) We create a new gold-standard corpus (90k tokens in 340 documents) and experiment with several machine learning methods to identify argument components. We offer the data, source code, and annotation guidelines to the community under free licenses. Our findings show that argumentation mining in user-generated Web discourse is a feasible but challenging task.
    Comment: Cite as: Habernal, I., & Gurevych, I. (2017). Argumentation Mining in User-Generated Web Discourse. Computational Linguistics, 43(1), pp. 125-17
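
    To give a flavor of the argument component identification step, here is a toy sentence-level classifier. The labels, example sentences, and model choice are illustrative assumptions; the paper's actual systems use far richer features than this bag-of-ngrams baseline.

```python
# Sketch: classify sentences as argument components (claim/premise/none).
# Invented examples; not the paper's feature set or models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

sentences = [
    "Schools should ban homework.",                # a claim
    "Studies link excessive homework to stress.",  # a premise
    "I posted this on the forum yesterday.",       # not argumentative
]
labels = ["claim", "premise", "none"]

# TF-IDF over word unigrams and bigrams feeding a linear SVM.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(sentences, labels)

print(clf.predict(["Too much homework harms sleep."]))
```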