    Investigating the dimensions of modeling competence among preservice science teachers: Meta-modeling knowledge, modeling practice, and modeling product

    Worldwide, teachers are expected to engage their students in authentic practices, like scientific modeling. Research suggests that teachers experience challenges when integrating modeling into their classroom instruction, one explanation being that teachers themselves lack the necessary modeling competence. Current theoretical conceptualizations structure modeling competence into three dimensions: meta-modeling knowledge, modeling practice, and modeling products. While each of these dimensions is well researched on its own and the three are commonly expected to be highly positively related, studies investigating their specific relationships are largely lacking. Aiming to fill this gap, the present study investigated the meta-modeling knowledge, modeling practice, and modeling products of 35 secondary preservice biology teachers engaging in a black box modeling task. Data were collected with an established pen-and-paper questionnaire consisting of five constructed-response items assessing meta-modeling knowledge, and by videotaping the participants engaging in the black box modeling task. The three dimensions of modeling competence were operationalized as five variables: decontextualized and contextualized meta-modeling knowledge, complexity and homogeneity of the modeling processes, and a modeling product score. In contrast to our expectations and common assumptions in the literature, significant relationships between the five variables were largely absent. Only the complexity of the modeling processes correlated significantly with the quality of the modeling products. To investigate this relationship further, a qualitative in-depth analysis of two cases is presented. Implications for biology teacher education are discussed.
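
    A rough, purely illustrative sketch of the kind of analysis described above: pairwise Pearson correlations between the five operationalized variables. The variable names and the synthetic data below are placeholders, not the study's instrument or results.

```python
# Minimal sketch of pairwise Pearson correlations between five competence
# variables; data and names are synthetic placeholders, not the study's data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 35  # number of preservice teachers in the study
scores = pd.DataFrame({
    "mmk_decontextualized": rng.normal(size=n),  # meta-modeling knowledge (decontextualized)
    "mmk_contextualized":   rng.normal(size=n),  # meta-modeling knowledge (contextualized)
    "process_complexity":   rng.normal(size=n),  # complexity of the modeling process
    "process_homogeneity":  rng.normal(size=n),  # homogeneity of the modeling process
    "product_score":        rng.normal(size=n),  # quality of the modeling product
})

# Pairwise Pearson correlation matrix across the five variables.
print(scores.corr(method="pearson").round(2))
```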

    DDSP-Piano: A Neural Sound Synthesizer Informed by Instrument Knowledge

    Instrument sound synthesis using deep neural networks has seen numerous improvements over the last couple of years. Among them, the Differentiable Digital Signal Processing (DDSP) framework has modernized the spectral-modeling paradigm by including signal-based synthesizers and effects in fully differentiable architectures. The present work extends the applications of DDSP to polyphonic sound synthesis, with the proposal of a differentiable piano synthesizer conditioned on MIDI inputs. The model architecture is motivated by high-level acoustic modeling knowledge of the instrument, which, along with the sound-structure priors inherent to the DDSP components, makes for a lightweight, interpretable, and realistic-sounding piano model. A subjective listening test revealed that the proposed approach achieves better sound quality than a state-of-the-art neural piano synthesizer, although physical-modeling-based models still hold the best quality. Leveraging its interpretability and modularity, a qualitative analysis of the model behavior was also conducted: it highlights where additional modeling knowledge and optimization procedures could be inserted to improve the synthesis quality and the manipulation of sound properties. Finally, the proposed differentiable synthesizer can be used with other deep learning models for alternative musical tasks handling polyphonic audio and symbolic data.
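
    As a loose illustration of the spectral-modeling idea behind DDSP, the sketch below implements differentiable additive (harmonic) synthesis in PyTorch. The function name, control rate, and interpolation scheme are assumptions for illustration; the actual DDSP-Piano model adds, among other things, noise and reverb modules and MIDI conditioning.

```python
# Minimal sketch of DDSP-style additive (harmonic) synthesis; illustrative only,
# not the DDSP-Piano architecture (no noise, reverb, or MIDI conditioning).
import math
import torch

def harmonic_synth(f0, amplitudes, sample_rate=16000, hop=64):
    """Render audio from frame-wise f0 (Hz) and per-harmonic linear amplitudes.

    f0:          (frames,) fundamental frequency per control frame
    amplitudes:  (frames, n_harmonics) amplitude of each harmonic per frame
    """
    frames, n_harmonics = amplitudes.shape
    n_samples = frames * hop

    # Upsample the control signals to audio rate by linear interpolation.
    f0_audio = torch.nn.functional.interpolate(
        f0[None, None, :], size=n_samples, mode="linear", align_corners=True)[0, 0]
    amp_audio = torch.nn.functional.interpolate(
        amplitudes.T[None], size=n_samples, mode="linear", align_corners=True)[0].T

    # Instantaneous phase of each harmonic: cumulative sum of angular frequency.
    harmonics = torch.arange(1, n_harmonics + 1)
    phase = torch.cumsum(2 * math.pi * f0_audio[:, None] * harmonics / sample_rate, dim=0)

    # Differentiable sum of sinusoids weighted by the amplitude envelopes.
    return (amp_audio * torch.sin(phase)).sum(dim=-1)

# Example: a 440 Hz tone with 8 harmonics of decreasing amplitude.
frames, n_harmonics = 100, 8
f0 = torch.full((frames,), 440.0)
amps = 0.05 * torch.linspace(1.0, 0.1, n_harmonics).repeat(frames, 1)
audio = harmonic_synth(f0, amps)  # shape: (frames * hop,)
```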

    Techniques for organizational memory information systems

    The KnowMore project aims at providing active support to humans working on knowledge-intensive tasks. To this end, the knowledge available in the modeled business processes, or in their incarnations in specific workflows, shall be used to improve information handling. We present a representation formalism for knowledge-intensive tasks and the specification of its object-oriented realization. An operational semantics is sketched by specifying the basic functionality of the Knowledge Agent, which works on the knowledge-intensive task representation. The Knowledge Agent uses a meta-level description of all information sources available in the Organizational Memory. We discuss the main dimensions along which such a description scheme must be designed, namely information content, structure, and context. On top of relational database management systems, we realize deductive object-oriented modeling with a comfortable annotation facility. The concrete knowledge descriptions are obtained by configuring the generic formalism with ontologies that describe the required modeling dimensions. To support access to documents, data, and formal knowledge in an Organizational Memory, an integrated domain ontology and thesaurus is proposed, which can be constructed semi-automatically by combining document-analysis and knowledge-engineering methods. Thereby the costs of up-front knowledge engineering and the need to consult domain experts can be considerably reduced. We present an automatic thesaurus generation tool and show how it can be applied to build and enhance an integrated ontology/thesaurus. A first evaluation shows that the proposed method does indeed facilitate knowledge acquisition and maintenance of an organizational memory.
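
    As a loose illustration of a meta-level description of information sources organized along the three dimensions named above (content, structure, context), here is a minimal Python sketch; the class and field names are hypothetical and do not reproduce the KnowMore formalism.

```python
# Hypothetical sketch of a meta-level source description along the dimensions
# content / structure / context, and a simple knowledge-agent style lookup.
from dataclasses import dataclass, field

@dataclass
class InformationSourceDescription:
    name: str
    content: list[str]            # domain-ontology concepts the source covers
    structure: str                # e.g. "relational table", "text document"
    context: dict[str, str] = field(default_factory=dict)  # e.g. owning process

@dataclass
class KnowledgeIntensiveTask:
    goal: str
    required_concepts: list[str]

def relevant_sources(task, sources):
    """Return sources whose content overlaps the concepts the task requires."""
    return [s for s in sources if set(task.required_concepts) & set(s.content)]

# Example: retrieving sources for a customer-complaint handling task.
sources = [
    InformationSourceDescription("CRM", ["customer", "contract"],
                                 "relational table", {"process": "sales"}),
    InformationSourceDescription("FAQ", ["product", "defect"],
                                 "text document", {"process": "support"}),
]
task = KnowledgeIntensiveTask("handle complaint", ["customer", "defect"])
print([s.name for s in relevant_sources(task, sources)])  # -> ['CRM', 'FAQ']
```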

    GTA: Groupware task analysis - Modeling complexity

    The task analysis methods discussed in this presentation stem from Human-Computer Interaction (HCI) and Ethnography (as applied to the design of Computer Supported Cooperative Work, CSCW), different disciplines that are often considered conflicting approaches when applied to the same design problems. Both approaches have their strengths and weaknesses, and integrating them adds value to the early stages of designing cooperation technology. In order to develop an integrated method for groupware task analysis (GTA), a conceptual framework is presented that allows a systematic perspective on complex work phenomena. The framework features a triple focus, considering (a) people, (b) work, and (c) the situation. Integrating various task-modeling approaches requires vehicles for making design information explicit, for which an object-oriented formalism is suggested. GTA consists of a method and framework that have been developed during practical design exercises; examples from some of these cases illustrate our approach.
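
    To hint at what an object-oriented task formalism with this triple focus might look like, here is a minimal, hypothetical sketch; the class names, attributes, and complexity measure are illustrative assumptions, not the actual GTA formalism.

```python
# Hypothetical sketch of an object-oriented task model with a triple focus on
# people, work, and situation; names are illustrative, not the GTA formalism.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    performed_by: list[str] = field(default_factory=list)  # people: roles or agents
    objects: list[str] = field(default_factory=list)       # situation: relevant objects
    subtasks: list["Task"] = field(default_factory=list)   # work: task decomposition

def decomposition_depth(task: Task) -> int:
    """One rough indicator of task complexity: depth of the decomposition tree."""
    if not task.subtasks:
        return 1
    return 1 + max(decomposition_depth(t) for t in task.subtasks)

# Example: a small cooperative work scenario.
review = Task("review document",
              performed_by=["author", "reviewer"],
              objects=["draft", "comment list"],
              subtasks=[Task("annotate draft", ["reviewer"], ["draft"]),
                        Task("merge comments", ["author"], ["comment list"])])
print(decomposition_depth(review))  # -> 2
```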

    Learning by Doing: An Online Causal Reinforcement Learning Framework with Causal-Aware Policy

    As a key component of intuitive cognition and reasoning in human intelligence, causal knowledge offers great potential for improving the interpretability of reinforcement learning (RL) agents' decision-making by helping to reduce the search space. However, there is still a considerable gap in discovering and incorporating causality into RL, which hinders the rapid development of causal RL. In this paper, we explicitly model the generation process of states with a causal graphical model, based on which we augment the policy. We formulate causal structure updating as part of the RL interaction process, with active intervention learning of the environment. To optimize the derived objective, we propose a framework with theoretical performance guarantees that alternates between two steps: using interventions for causal structure learning during exploration, and using the learned causal structure for policy guidance during exploitation. Due to the lack of public benchmarks that allow direct intervention in the state space, we design a root cause localization task in our simulated fault alarm environment and empirically show the effectiveness and robustness of the proposed method against state-of-the-art baselines. Theoretical analysis shows that our performance improvement is attributable to the virtuous cycle of causal-guided policy learning and causal structure learning, which aligns with our experimental results.
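
    The alternating scheme described above can be sketched roughly as follows. The environment interface (observe, intervene, step, and so on), the edge-scoring rule, and the value update are placeholders invented for illustration; they are not the paper's algorithm or the basis of its guarantees.

```python
# Rough sketch of alternating causal-structure learning (via interventions)
# and causal-guided policy learning; the `env` interface and update rules are
# hypothetical placeholders, not the method proposed in the paper.
import numpy as np

def causal_rl_loop(env, n_rounds=10, explore_steps=50, exploit_steps=200, lr=0.1):
    graph = np.zeros((env.n_state_vars, env.n_state_vars))  # estimated causal adjacency
    q_values = {}                                            # (state, action) -> value

    for _ in range(n_rounds):
        # Exploration: intervene on individual state variables and record which
        # other variables respond, as crude evidence for causal edges.
        for _ in range(explore_steps):
            target = np.random.randint(env.n_state_vars)
            before = env.observe()         # hypothetical: current state vector
            after = env.intervene(target)  # hypothetical: state after do(target)
            graph[target] += (np.abs(after - before) > 1e-6)

        # Exploitation: prefer actions whose affected variable has outgoing edges
        # in the learned graph, i.e. can plausibly influence other variables.
        state = env.reset()
        for _ in range(exploit_steps):
            candidates = [a for a in env.actions(state)
                          if graph[env.affected_var(a)].sum() > 0] or env.actions(state)
            action = max(candidates, key=lambda a: q_values.get((state, a), 0.0))
            next_state, reward, done = env.step(action)
            old = q_values.get((state, action), 0.0)
            q_values[(state, action)] = old + lr * (reward - old)
            state = env.reset() if done else next_state

    return graph, q_values
```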