
    Building dynamic capabilities through operations strategy: an empirical example

    This paper suggests that implementing an effective operations strategy process is a necessary antecedent to the development of dynamic capabilities within an organisation, and that, once established, dynamic capabilities and the operations strategy process settle into a symbiotic relationship. Key terms and a model of the operations strategy process are drawn from the literature as a framework for analysing data from a longitudinal case study of a UK-based manufacturer of construction materials.

    Feature Dynamic Bayesian Networks

    Feature Markov Decision Processes (ΦMDPs) are well-suited for learning agents in general environments. Nevertheless, unstructured ΦMDPs are limited to relatively simple environments. Structured MDPs like Dynamic Bayesian Networks (DBNs) are used for large-scale real-world problems. In this article I extend ΦMDP to ΦDBN. The primary contribution is to derive a cost criterion that allows the most relevant features to be extracted automatically from the environment, leading to the "best" DBN representation. I discuss all building blocks required for a complete general learning algorithm.
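    The scalability argument for DBNs over flat MDPs can be made concrete with a small sketch. The toy model below is my own illustrative construction, not the paper's: the transition distribution over 2**3 joint states is stored as three small per-feature conditional tables rather than one 8-by-8 matrix.

```python
import numpy as np

# Toy 3-feature DBN transition model (illustrative, not taken from the paper):
# the joint transition over 2**3 states factorises into one small conditional
# table per feature, each depending only on a few parent features.
rng = np.random.default_rng(1)

parents = {0: [0], 1: [0, 1], 2: [2]}  # parent features of each next-step feature

# cpt[i][parent_values] = P(feature i equals 1 at the next step)
cpt = {
    0: {(0,): 0.1, (1,): 0.9},
    1: {(0, 0): 0.2, (0, 1): 0.7, (1, 0): 0.4, (1, 1): 0.95},
    2: {(0,): 0.5, (1,): 0.5},
}

def step(state):
    """Sample the next state feature-by-feature from the factored model."""
    return tuple(int(rng.random() < cpt[i][tuple(state[j] for j in parents[i])])
                 for i in range(3))

def joint_prob(state, nxt):
    """Exact P(nxt | state) as a product of the per-feature factors."""
    prob = 1.0
    for i in range(3):
        p1 = cpt[i][tuple(state[j] for j in parents[i])]
        prob *= p1 if nxt[i] == 1 else 1.0 - p1
    return prob
```

    With n binary features and bounded parent sets, the tables grow linearly in n while the flat transition matrix grows as 4**n, which is exactly why structured MDPs reach problems unstructured ones cannot.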

    Feature Reinforcement Learning: Part I: Unstructured MDPs

    General-purpose, intelligent, learning agents cycle through sequences of observations, actions, and rewards that are complex, uncertain, unknown, and non-Markovian. On the other hand, reinforcement learning is well-developed for small finite state Markov decision processes (MDPs). Up to now, extracting the right state representations out of bare observations, that is, reducing the general agent setup to the MDP framework, is an art that involves significant effort by designers. The primary goal of this work is to automate the reduction process and thereby significantly expand the scope of many existing reinforcement learning algorithms and the agents that employ them. Before we can think of mechanizing this search for suitable MDPs, we need a formal objective criterion. The main contribution of this article is to develop such a criterion. I also integrate the various parts into one learning algorithm. Extensions to more realistic dynamic Bayesian networks are developed in Part II. The role of POMDPs is also considered there.
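    One concrete way such a criterion can be instantiated, sketched purely as an illustration (this is my own simplified construction, not the article's exact definition): score a candidate feature map by the code length of the state and reward sequences it induces, so that the map under which the history is most compressible wins.

```python
import math
from collections import defaultdict

# Illustrative code-length criterion for comparing candidate state
# abstractions (my own simplified construction, not the article's formula).
# A feature map phi that turns the history into a compact, predictable MDP
# yields a short code; a poor map yields a long one.

def code_length(pairs, alphabet):
    """Sum of -log2 KT (add-1/2) probabilities of each symbol in its context."""
    counts = defaultdict(lambda: defaultdict(int))
    bits = 0.0
    for ctx, sym in pairs:
        c = counts[ctx]
        total = sum(c.values())
        bits -= math.log2((c[sym] + 0.5) / (total + 0.5 * alphabet))
        c[sym] += 1
    return bits

def cost(phi, history):
    """history: list of (observation, action, reward) triples.
    cost(phi) = CL(states | prev state, action) + CL(rewards | state, action)."""
    n = len(history)
    states = [phi(o) for (o, a, r) in history]
    n_s = max(2, len(set(states)))
    n_r = max(2, len(set(r for (_, _, r) in history)))
    trans = [((states[t - 1], history[t - 1][1]), states[t]) for t in range(1, n)]
    rews = [((states[t], history[t][1]), history[t][2]) for t in range(n)]
    return code_length(trans, n_s) + code_length(rews, n_r)

# A cyclic toy environment whose reward is the parity of the observation:
history = [(t % 4, 0, (t % 4) % 2) for t in range(200)]
phi_parity = lambda o: o % 2    # keeps exactly the reward-relevant information
phi_blind = lambda o: 0         # throws everything away
```

    On this history, `cost(phi_parity, history)` is far smaller than `cost(phi_blind, history)`: the parity map makes both transitions and rewards nearly deterministic, while the blind map leaves the rewards looking random.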

    Collaboration scripts - a conceptual analysis

    This article presents a conceptual analysis of collaboration scripts used in face-to-face and computer-mediated collaborative learning. Collaboration scripts are scaffolds that aim to improve collaboration by structuring the interactive processes between two or more learning partners. Collaboration scripts consist of at least five components: (a) learning objectives, (b) type of activities, (c) sequencing, (d) role distribution, and (e) type of representation. These components serve as a basis for comparing prototypical collaboration script approaches for face-to-face vs. computer-mediated learning. As our analysis reveals, collaboration scripts for face-to-face learning often focus on supporting collaborators in engaging in activities that are specifically related to individual knowledge acquisition. Scripts for computer-mediated collaboration are typically concerned with facilitating the communicative-coordinative processes that occur among group members. The two lines of research can be consolidated to facilitate the design of collaboration scripts that support participation and coordination as well as induce learning activities closely related to individual knowledge acquisition and metacognition. In addition, research on collaboration scripts needs to consider the learners’ internal collaboration scripts as a further determinant of collaboration behavior. The article closes with the presentation of a conceptual framework incorporating both external and internal collaboration scripts.

    The organisation of sociality: a manifesto for a new science of multi-agent systems

    In this paper, we pose and motivate a challenge, namely the need for a new science of multi-agent systems. We propose that this new science should be grounded theoretically on a richer conception of sociality, and methodologically on the extensive use of computational modelling for real-world applications and social simulations. Here, the steps we set forth towards meeting that challenge are mainly theoretical. In this respect, we provide a new model of multi-agent systems that reflects a fully explicated conception of cognition, at both the individual and the collective level. Finally, the mechanisms and principles underpinning the model are examined, with particular emphasis on the contributions provided by contemporary organisation theory.

    An account of cognitive flexibility and inflexibility for a complex dynamic task

    Problem solving involves adapting known problem solving methods and strategies to the task at hand (Schunn & Reder, 2001), and cognitive flexibility is considered to be “the human ability to adapt the cognitive processing strategies to face new and unexpected conditions of the environment” (Cañas et al., 2005, p. 95). This work presents an ACT-R 6.0 model of complex problem solving behavior for the dynamic microworld game FireChief (Omodei & Wearing, 1995) that models the performance of participants predisposed to behave either more or less flexibly based on the nature of previous training on the task (Cañas et al., 2005). The model exhibits a greater or lesser degree of cognitive inflexibility in problem solving strategy choice, reflecting variations in task training. The model provides an explanation of dynamic task performance compatible with the Competing Strategies paradigm (Taatgen et al., 2006) by creating a second layer of strategy competition that renders it more flexible with respect to strategy learning, and provides an explanation of cognitive inflexibility based on reward mechanisms.

    Perseus: Randomized Point-based Value Iteration for POMDPs

    Partially observable Markov decision processes (POMDPs) form an attractive and principled framework for agent planning under uncertainty. Point-based approximate techniques for POMDPs compute a policy based on a finite set of points collected in advance from the agent's belief space. We present a randomized point-based value iteration algorithm called Perseus. The algorithm performs approximate value backup stages, ensuring that in each backup stage the value of each point in the belief set is improved; the key observation is that a single backup may improve the value of many belief points. Unlike other point-based methods, Perseus backs up only a (randomly selected) subset of points in the belief set, sufficient for improving the value of each belief point in the set. We show how the same idea can be extended to dealing with continuous action spaces. Experimental results show the potential of Perseus in large-scale POMDP problems.
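    The randomized backup stage described above can be sketched in a few lines. The toy model below is the classic tiger POMDP with constants of my own choosing, and all names are illustrative rather than taken from the paper; only the stage structure follows the abstract: repeatedly back up a randomly chosen belief, and stop once every belief in the set has an improved value.

```python
import numpy as np

# A minimal Perseus-style sketch on the tiger POMDP (illustrative constants).
rng = np.random.default_rng(0)
gamma = 0.95
S, A, O = 2, 3, 2  # states; actions (0=listen, 1=open-left, 2=open-right); observations

T = np.zeros((A, S, S))  # T[a, s, s'] transition probabilities
Z = np.zeros((A, S, O))  # Z[a, s', o] observation probabilities
R = np.zeros((A, S))     # R[a, s] immediate rewards
T[0] = np.eye(S)         # listening leaves the tiger in place
T[1:] = 0.5              # opening a door resets the problem
Z[0] = [[0.85, 0.15], [0.15, 0.85]]  # listening is 85% accurate
Z[1:] = 0.5              # opening yields no information
R[0] = -1.0              # listening costs 1
R[1] = [-100.0, 10.0]    # open-left is bad if the tiger is behind it (state 0)
R[2] = [10.0, -100.0]

def backup(b, V):
    """Point-based Bellman backup at belief b against alpha-vector set V."""
    best_val, best = -np.inf, None
    for a in range(A):
        g = R[a].copy()
        for o in range(O):
            proj = np.array([T[a] @ (Z[a][:, o] * alpha) for alpha in V])
            g = g + gamma * proj[np.argmax(proj @ b)]
        if g @ b > best_val:
            best_val, best = g @ b, g
    return best

def perseus_stage(B, V):
    """Improve the value of every belief in B while backing up only a
    randomly selected subset of them."""
    old = np.array([max(alpha @ b for alpha in V) for b in B])
    Vnew, todo = [], list(range(len(B)))
    while todo:
        i = todo[rng.integers(len(todo))]
        alpha = backup(B[i], V)
        if alpha @ B[i] < old[i]:            # keep the best old vector instead
            alpha = max(V, key=lambda v: v @ B[i])
        Vnew.append(alpha)
        # drop every belief whose value has already been improved
        todo = [j for j in todo if max(v @ B[j] for v in Vnew) < old[j]]
    return Vnew

# Belief set collected in advance (here a simple grid), then backup stages.
B = [np.array([p, 1.0 - p]) for p in np.linspace(0.0, 1.0, 11)]
V = [np.full(S, R.min() / (1.0 - gamma))]  # pessimistic initial value function
for _ in range(30):
    V = perseus_stage(B, V)
```

    Each stage is guaranteed not to decrease the value of any belief in B, yet typically adds far fewer alpha vectors than there are beliefs, which is the source of Perseus's speed.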