59 research outputs found

    Steady states of lattice population models with immigration

    In a lattice population model where individuals evolve as subcritical branching random walks subject to external immigration, the cumulants are estimated and the existence of a steady state is proved. The resulting dynamics are Lyapunov stable, in the sense that their qualitative behavior does not change under suitable perturbations of the main parameters of the model. An explicit formula for the limit distribution is derived in the solvable case of no birth, and Monte Carlo simulation illustrates the limit distribution in this case.
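    As a rough illustration of the kind of Monte Carlo check mentioned above, the sketch below simulates a toy version of the solvable no-birth case: particles immigrate onto the sites of a one-dimensional torus at a constant rate, perform random-walk jumps, and die at a constant rate, and the long-run occupancy of a single site is compared with the Poisson distribution this toy dynamics converges to. The lattice size, rates, and time discretization are illustrative assumptions, not values from the paper.

```python
# Toy Monte Carlo sketch (not from the paper): immigration + death + random
# walk on a 1-D torus, with no birth. In this setting the stationary occupancy
# of each site is Poisson with mean lam/mu (immigration rate per site / death rate).
import numpy as np

rng = np.random.default_rng(0)

L = 50            # number of lattice sites (illustrative)
lam = 2.0         # immigration rate per site
mu = 1.0          # death rate per particle
jump = 1.0        # random-walk jump rate per particle
dt = 0.01         # Euler time step for the continuous-time dynamics
T, burn_in = 500.0, 100.0

positions = np.empty(0, dtype=int)   # current particle positions on the torus
samples = []

t = 0.0
while t < T:
    if positions.size:
        # each particle independently dies; survivors may jump to a neighbor
        positions = positions[rng.random(positions.size) > mu * dt]
        jumps = rng.random(positions.size) < jump * dt
        steps = rng.choice([-1, 1], size=positions.size)
        positions = np.where(jumps, (positions + steps) % L, positions)
    # Poisson number of immigrants, placed uniformly on the torus
    k = rng.poisson(lam * L * dt)
    if k:
        positions = np.concatenate([positions, rng.integers(0, L, size=k)])
    if t > burn_in:
        samples.append(np.count_nonzero(positions == 0))
    t += dt

samples = np.asarray(samples)
print("empirical mean occupancy of site 0:", samples.mean())
print("Poisson prediction lam/mu        :", lam / mu)
```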

    The Paradox of Power in CSR: A Case Study on Implementation

    Purpose: Although current literature assumes positive outcomes for stakeholders resulting from an increase in power associated with CSR, this research suggests that this increase can lead to conflict within organizations, resulting in almost complete inactivity on CSR.
    Methods: A single in-depth case study, focusing on power as an embedded concept.
    Results: Empirical evidence is used to demonstrate how some actors use CSR to improve their own positions within an organization. Resource dependence theory is used to highlight why this may be a more significant concern for CSR.
    Conclusions: Increasing power for CSR has the potential to offer actors associated with it increased personal power, and thus can attract opportunistic actors with little interest in realizing the benefits of CSR for the company and its stakeholders. Power can therefore be an impediment to furthering CSR strategy and activities at the individual and organizational levels.

    A multi-institutional study of inquiry-based lab activities using the Augmented Reality Sandbox: impacts on undergraduate student learning

    We developed and tested different pedagogical treatments using an Augmented Reality (AR) Sandbox to teach introductory geoscience students about reading topographic maps at five institutions, in both pilot and full implementation studies. The AR Sandbox treatments were characterized as 1) unstructured play, 2) a semi-structured lesson, and 3) a structured lesson. The success of each was contrasted with a control condition: a traditional topographic map lab without the AR Sandbox. Students completed a subset of questions from the Topographic Maps Assessment (TMA) and a series of mental rotation questions post-implementation. No significant differences were found on TMA post-test scores between the group that used the unstructured Sandbox play treatment and the control condition, and the semi-structured and structured lesson formats likewise failed to produce a statistically significant difference on the TMA post-test, indicating that no single treatment worked universally better than another. However, regression analysis showed that two factors significantly predicted performance on the TMA: spatial performance and self-assessed knowledge (or confidence) of topographic maps. Of the groups that used the Sandbox, students with both low and high scores on the mental rotation test performed best on the TMA following the structured treatment.
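    A regression of the kind described above can be sketched as an ordinary least squares model in which the TMA post-test score is predicted by mental-rotation (spatial) score and self-assessed knowledge of topographic maps, with treatment group as a covariate. The data file and column names below are hypothetical placeholders, not the study's actual variables or analysis script.

```python
# Illustrative OLS sketch only; the column names and data file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per student with columns
#   tma_post                - TMA post-test score
#   mental_rotation         - spatial (mental rotation) score
#   self_assessed_knowledge - self-rated confidence with topographic maps
#   treatment               - 'control', 'play', 'semi', or 'structured'
df = pd.read_csv("student_scores.csv")  # hypothetical data file

model = smf.ols(
    "tma_post ~ mental_rotation + self_assessed_knowledge + C(treatment)",
    data=df,
).fit()
print(model.summary())  # inspect coefficients and p-values of the predictors
```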

    Reinforcement learning-based fast charging control strategy for Li-ion batteries

    One of the most crucial challenges faced by the Li-ion battery community concerns the search for the minimum charging time that does not damage the cells. This can be cast as a large-scale nonlinear optimal control problem based on a battery model. Within this context, several model-based techniques have been proposed in the literature; however, their effectiveness is significantly limited by model complexity and uncertainty. Additionally, it is difficult to track parameters related to aging and to re-tune the model-based control policy. With the aim of overcoming these limitations, in this paper we propose a fast-charging strategy subject to safety constraints that relies on a model-free reinforcement learning framework. In particular, we focus on the policy-gradient-based actor-critic algorithm deep deterministic policy gradient (DDPG), in order to handle continuous sets of states and actions. The validity of the proposal is assessed in simulation with a reduced electrochemical model acting as the real plant. Finally, the online adaptability of the proposed strategy in response to variations of the environment parameters is highlighted, with consideration of state reduction.
    Park, S.; Pozzi, A.; Whitmeyer, M.; Perez, H.; Joe, W. T.; Raimondo, D. M.; Moura, S.
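    For readers unfamiliar with DDPG, the sketch below illustrates the actor-critic structure the abstract refers to: a deterministic actor maps the battery state to a bounded charging current, a critic estimates the action value, and both are trained from a replay buffer with slowly updated target networks. The state layout, current limit, network sizes, and hyperparameters are assumptions for illustration; this is not the authors' implementation, and no battery environment is included.

```python
# Minimal DDPG sketch (PyTorch) for a fast-charging task. The state layout,
# current limit, and hyperparameters below are placeholders, not the paper's.
import copy
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 3, 1     # e.g. (SoC, voltage, temperature) -> current
I_MAX = 5.0                      # hypothetical maximum charging current [A]
GAMMA, TAU, LR = 0.99, 0.005, 1e-3

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Tanh(),
        )
    def forward(self, s):
        # deterministic policy: scale the tanh output into [0, I_MAX]
        return (self.net(s) + 1.0) * 0.5 * I_MAX

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
actor_t, critic_t = copy.deepcopy(actor), copy.deepcopy(critic)
opt_a = torch.optim.Adam(actor.parameters(), lr=LR)
opt_c = torch.optim.Adam(critic.parameters(), lr=LR)
buffer = deque(maxlen=100_000)   # transitions (s, a, r, s_next, done)

def update(batch_size=64):
    """One DDPG gradient step from a random minibatch of stored transitions."""
    if len(buffer) < batch_size:
        return
    s, a, r, s2, d = map(
        lambda x: torch.tensor(x, dtype=torch.float32),
        zip(*random.sample(list(buffer), batch_size)),
    )
    r, d = r.unsqueeze(1), d.unsqueeze(1)
    with torch.no_grad():                      # bootstrapped TD target
        target_q = r + GAMMA * (1 - d) * critic_t(s2, actor_t(s2))
    critic_loss = nn.functional.mse_loss(critic(s, a), target_q)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    actor_loss = -critic(s, actor(s)).mean()   # ascend Q(s, pi(s))
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

    # Polyak averaging of the target networks
    for p, pt in zip(actor.parameters(), actor_t.parameters()):
        pt.data.mul_(1 - TAU).add_(TAU * p.data)
    for p, pt in zip(critic.parameters(), critic_t.parameters()):
        pt.data.mul_(1 - TAU).add_(TAU * p.data)
```

    In a full training loop, transitions gathered by stepping a battery simulator would be appended to buffer and update() called after each step, with a reward that penalizes charging time and any violation of voltage or temperature limits.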