
    Learning from forgetting: an experiential study of two European car manufacturers

    Decision-making power can be decentralized to foster organizational learning at the lower levels in the chain of command. However, the ability to capitalize on organizational learning may be impeded by a concomitant process of organizational forgetting. This paper provides empirical evidence on this process gathered at the Spanish and Swedish subsidiaries of two large automobile manufacturing corporations. The evidence shows how organizational forgetting occurs after a long period of learning and success, and identifies its antecedents. It is argued that organizational structure and national culture play a significant role in the relative success or failure of innovative projects aimed at implementing organizational learning at the operational level.

    Stochastic learning in co-ordination games: a simulation approach

    In the presence of externalities, consumption behaviour depends on the solution of a co-ordination problem. In our paper we suggest a learning approach to the study of co-ordination in consumption contexts, where agents adjust their choices on the basis of the reinforcement (payoff) they receive during the game. The results of our simulations allow us to distinguish the roles of different aspects of learning in enabling co-ordination within a population of agents. Our main results highlight: 1. the role played by the speed of learning in determining failures of the co-ordination process; 2. the effect of forgetting past experiences on the speed of the co-ordination process; 3. the role of experimentation in bringing the co-ordination process to an efficient equilibrium.
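
    A minimal sketch of the kind of stochastic (reinforcement) learning described above, assuming a Roth-Erev style update with a forgetting rate and an experimentation rate in a two-action coordination game; the payoff rule, parameter values and variable names are illustrative assumptions, not the paper's specification.

    import random

    N_AGENTS, N_ACTIONS, ROUNDS = 50, 2, 2000
    PHI, EPS = 0.05, 0.02  # forgetting (recency) rate and experimentation rate

    # propensities[i][a]: accumulated reinforcement of agent i for action a
    propensities = [[1.0] * N_ACTIONS for _ in range(N_AGENTS)]

    def choose(props):
        # experimentation: occasionally pick an action uniformly at random
        if random.random() < EPS:
            return random.randrange(N_ACTIONS)
        # otherwise choose proportionally to accumulated propensities
        r = random.uniform(0, sum(props))
        cum = 0.0
        for action, p in enumerate(props):
            cum += p
            if r <= cum:
                return action
        return N_ACTIONS - 1

    for _ in range(ROUNDS):
        choices = [choose(p) for p in propensities]
        popular = max(range(N_ACTIONS), key=choices.count)
        for i, a in enumerate(choices):
            payoff = 1.0 if a == popular else 0.0  # coordination externality: reward for matching the majority
            for b in range(N_ACTIONS):
                # forgetting decays all past reinforcement; the payoff reinforces the chosen action
                propensities[i][b] = (1 - PHI) * propensities[i][b] + (payoff if b == a else 0.0)

    print("share choosing action 0:", sum(c == 0 for c in choices) / N_AGENTS)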

    Music History - Laugh and Learn

    The project I have chosen aligns with my curriculum project and research. Data will be gathered on the effects of laughter in the classroom. This research will show that humor can motivate students as well as aid memory. Overall, the project should conclude that laughter aids in the learning process. This project has great importance in the field of education, especially music education. Students have come to memorize for tests, soon forgetting what they have learned. Adding a fun twist to a class that will aid students in their first year of college may increase enrollment. It may also help teachers discover that, within reason, laughter plays an important role in education.

    Unsupervised Continual Learning From Synthetic Data Generated with Agent-Based Modeling and Simulation: A preliminary experimentation

    Continual learning makes it possible to learn a variable number of tasks sequentially without forgetting the knowledge obtained in the past. Catastrophic forgetting usually occurs in neural networks because of their inability to learn different tasks in sequence: performance on previous tasks drops significantly. One way to mitigate this problem is to provide the model with a subset of previous examples while it learns a new task. In this paper we evaluate the continual learning performance of an unsupervised model for anomaly detection, using synthetic data generated with an agent-based modeling and simulation (ABMS) technique. We simulated the movement of different types of individuals in a building and evaluated their trajectories depending on their roles. We collected training and test sets based on these trajectories, and included in the test set negative examples containing wrong trajectories. We applied replay-based continual learning to teach the model to distinguish anomalous trajectories depending on the users' roles. The results show that, with ABMS synthetic data, replaying a small percentage of synthetic data is enough to mitigate catastrophic forgetting and to achieve satisfactory accuracy on the final binary classification (anomalous / non-anomalous).
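
    A minimal sketch of the replay-based continual learning scheme described above, assuming a generic model with a user-supplied train_step function; the buffer size, replay fraction and function names are illustrative placeholders, not the paper's implementation.

    import random

    def mix_with_replay(new_task_data, replay_buffer, replay_fraction=0.1):
        # add a small percentage of stored past examples to the new task's data
        n_replay = int(replay_fraction * len(new_task_data))
        replayed = random.sample(replay_buffer, min(n_replay, len(replay_buffer)))
        mixed = list(new_task_data) + replayed
        random.shuffle(mixed)
        return mixed

    def continual_train(model, tasks, train_step, buffer_size=500, replay_fraction=0.1):
        # train on tasks sequentially, replaying a subset of earlier examples
        replay_buffer = []
        for task_data in tasks:  # tasks arrive one after another
            for example in mix_with_replay(task_data, replay_buffer, replay_fraction):
                train_step(model, example)  # ordinary update on the mixed stream
            # keep a random subset of this task's examples for future replay
            replay_buffer.extend(random.sample(task_data, min(buffer_size, len(task_data))))
            if len(replay_buffer) > buffer_size:
                replay_buffer = random.sample(replay_buffer, buffer_size)
        return model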

    Autonomous Deep Learning: Continual Learning Approach for Dynamic Environments

    The feasibility of deep neural networks (DNNs) for data stream problems still requires intensive study because of the static and offline nature of conventional deep learning approaches. A deep continual learning algorithm, namely autonomous deep learning (ADL), is proposed in this paper. Unlike traditional deep learning methods, ADL features a flexible structure: its network can be constructed from scratch, without an initial network structure, through a self-constructing mechanism. ADL specifically addresses catastrophic forgetting with a different-depth structure capable of achieving a trade-off between plasticity and stability. A network significance (NS) formula is proposed to drive the hidden-node growing and pruning mechanism. A drift detection scenario (DDS) is put forward to signal distributional changes in data streams, which induce the creation of a new hidden layer. The maximum information compression index (MICI) method plays an important role as a complexity-reduction module that eliminates redundant layers. The efficacy of ADL is numerically validated under the prequential test-then-train procedure in lifelong environments using nine popular data stream problems. The numerical results demonstrate that ADL consistently outperforms recent continual learning methods while automatically constructing its network structure.
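
    A minimal sketch of the prequential test-then-train protocol mentioned above, with a drift-triggered layer-growth call standing in for ADL's DDS/NS/MICI machinery; the model, drift detector and method names are assumed placeholders, not the paper's algorithm.

    def prequential_run(model, stream, drift_detector):
        # prequential evaluation: test on each sample first, then train on it
        correct = seen = 0
        for x, y in stream:  # samples arrive one at a time
            y_hat = model.predict(x)  # 1) test before training
            correct += int(y_hat == y)
            seen += 1
            model.update(x, y)  # 2) then train on the same sample
            if drift_detector.detected(x, y):  # distributional change signalled
                model.grow_layer()  # e.g. create a new hidden layer on drift
        return correct / max(seen, 1)  # prequential accuracy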