6,235 research outputs found

    Shock finding on a moving-mesh: I. Shock statistics in non-radiative cosmological simulations

    Full text link
    Cosmological shock waves play an important role in hierarchical structure formation by dissipating and thermalizing kinetic energy of gas flows, thereby heating the universe. Furthermore, identifying shocks in hydrodynamical simulations and measuring their Mach number accurately is critical for calculating the production of non-thermal particle components through diffusive shock acceleration. However, shocks are often significantly broadened in numerical simulations, making it challenging to implement an accurate shock finder. We here introduce a refined methodology for detecting shocks in the moving-mesh code AREPO, and show that results for shock statistics can be sensitive to implementation details. We put special emphasis on filtering against spurious shock detections due to tangential discontinuities and contacts. Both of them are omnipresent in cosmological simulations, for example in the form of shear-induced Kelvin-Helmholtz instabilities and cold fronts. As an initial application of our new implementation, we analyse shock statistics in non-radiative cosmological simulations of dark matter and baryons. We find that the bulk of energy dissipation at redshift zero occurs in shocks with Mach numbers around M ≈ 2.7. Furthermore, almost 40% of the thermalization is contributed by shocks in the warm-hot intergalactic medium (WHIM), whereas ≈60% occurs in clusters, groups and smaller halos. Compared to previous studies, these findings revise the characterization of the most important shocks towards higher Mach numbers and lower density structures. Our results also suggest that regions with densities above and below δ_b = 100 should be roughly equally important for the energetics of cosmic ray acceleration through large-scale structure shocks. Comment: 16 pages, 13 figures, published in MNRAS, January 201
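
    The Mach numbers quoted above are typically inferred from Rankine-Hugoniot jump conditions across the shock. As a minimal illustration (not the AREPO shock finder itself), the sketch below inverts the temperature jump T2/T1 for the Mach number M, assuming an ideal monatomic gas with gamma = 5/3; the function names are placeholders.

        # Minimal sketch: invert the Rankine-Hugoniot temperature jump for the
        # shock Mach number M (illustrative only, not the AREPO implementation).
        from scipy.optimize import brentq

        GAMMA = 5.0 / 3.0  # adiabatic index of a monatomic ideal gas

        def temperature_jump(mach, gamma=GAMMA):
            """T2/T1 across a shock of Mach number `mach` (Rankine-Hugoniot)."""
            m2 = mach ** 2
            return ((2.0 * gamma * m2 - (gamma - 1.0)) *
                    ((gamma - 1.0) * m2 + 2.0)) / ((gamma + 1.0) ** 2 * m2)

        def mach_from_temperatures(t_pre, t_post, m_max=100.0):
            """Solve T2/T1 = temperature_jump(M) for M in (1, m_max)."""
            ratio = t_post / t_pre
            if ratio <= 1.0:
                return 1.0  # no temperature jump: not treated as a shock
            return brentq(lambda m: temperature_jump(m) - ratio, 1.0 + 1e-6, m_max)

        print(mach_from_temperatures(1.0, 3.0))  # a factor-of-3 jump gives M of roughly 2.6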

    Unsupervised Contact Learning for Humanoid Estimation and Control

    Full text link
    This work presents a method for contact state estimation using fuzzy clustering to learn contact probability for full, six-dimensional humanoid contacts. The data required for training is solely from proprioceptive sensors - end-effector contact wrench sensors and inertial measurement units (IMUs) - and the method is completely unsupervised. The resulting cluster means are used to efficiently compute the probability of contact in each of the six end-effector degrees of freedom (DoFs) independently. This clustering-based contact probability estimator is validated in a kinematics-based base state estimator in a simulation environment with realistic added sensor noise for locomotion over rough, low-friction terrain on which the robot is subject to foot slip and rotation. The proposed base state estimator, which utilizes these six DoF contact probability estimates, is shown to perform considerably better than one that determines kinematic contact constraints purely based on measured normal force. Comment: Submitted to the IEEE International Conference on Robotics and Automation (ICRA) 201
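
    The core idea, unsupervised fuzzy clustering of proprioceptive measurements with the fuzzy membership to the "in contact" cluster serving as a contact probability, can be sketched as follows. The one-dimensional normal-force setup and all names here are illustrative assumptions; the paper itself clusters six-dimensional wrench and IMU data per end-effector DoF.

        # Minimal sketch: fuzzy c-means on a scalar contact-force channel, then use
        # membership to the high-force cluster as a contact probability.
        import numpy as np

        def fuzzy_cmeans_1d(x, n_clusters=2, m=2.0, n_iter=100, tol=1e-6, seed=0):
            """Return cluster centers and membership matrix for 1-D data `x`."""
            rng = np.random.default_rng(seed)
            u = rng.random((len(x), n_clusters))
            u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1
            centers = np.zeros(n_clusters)
            for _ in range(n_iter):
                w = u ** m
                centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
                d = np.abs(x[:, None] - centers[None, :]) + 1e-12
                u_new = 1.0 / (d ** (2.0 / (m - 1.0)))
                u_new /= u_new.sum(axis=1, keepdims=True)
                if np.abs(u_new - u).max() < tol:
                    u = u_new
                    break
                u = u_new
            return centers, u

        def contact_probability(force, centers, m=2.0):
            """Membership of a new normal-force sample in the higher-force cluster."""
            d = np.abs(force - centers) + 1e-12
            u = 1.0 / (d ** (2.0 / (m - 1.0)))
            u /= u.sum()
            return u[np.argmax(centers)]

        # Toy data: mixture of "no contact" (~0 N) and "contact" (~200 N) samples.
        forces = np.concatenate([np.abs(np.random.normal(0, 5, 500)),
                                 np.random.normal(200, 20, 500)])
        centers, _ = fuzzy_cmeans_1d(forces)
        print(contact_probability(150.0, centers))       # close to 1: likely in contact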

    A Palimpsestuous Novel: Claire Legendre's La Méthode Stanislavski

    Get PDF
    Claire Legendre emerged on the French literary scene in 1997 with her novel Making-of. A prolific writer, she went on to publish an additional five novels,1 an anthology of short stories (Le Crépuscule de Barbe-Bleue, 2001), two co-authored books with Jérôme Bonnetto,2 four plays,3 one book-length essay (Le Nénuphar et l’araignée, 2015), as well as numerous smaller essays and short fictions. Despite this prolific output, Legendre’s publications have, thus far, garnered little academic attention.4 Two reasons may explain her current marginality within the field of French Studies. Her second novel, Viande (1999), relegated her to the late-1990s trend of scandalous and sexually graphic publications by women writers (Authier 13-31; Bessard-Banquy 25, 95; Schaal TVFL 154-56, 223-24). Her work was, thus, promptly dismissed as antiliterary and a mere fad (Schaal “Portrait...” 26; Schaal TVFL 155-56). Then, although published by Grasset, Legendre has never actively participated in the French or Parisian literary world. She was born in Nice and remained there during the early stages of her career; she subsequently moved to Prague (2008-2011) and now resides in Québec, where she teaches Creative Writing at the Université de Montréal. This geographical distance has prevented her publications from garnering significant media and cultural exposure in France or elsewhere (Legendre “Personal Correspondance...”).

    Scaling Reinforcement Learning Paradigms for Motor Control

    Get PDF
    Reinforcement learning offers a general framework to explain reward-related learning in artificial and biological motor control. However, current reinforcement learning methods rarely scale to high-dimensional movement systems and mainly operate in discrete, low-dimensional domains like game-playing, artificial toy problems, etc. This drawback makes them unsuitable for application to human or bio-mimetic motor control. In this poster, we look at promising approaches that can potentially scale and suggest a novel formulation of the actor-critic algorithm which takes steps towards alleviating the current shortcomings. We argue that methods based on greedy policies are not likely to scale into high-dimensional domains as they are problematic when used with function approximation – a must when dealing with continuous domains. We adopt the path of direct policy-gradient-based policy improvements since they avoid the problems of destabilizing dynamics encountered in traditional value-iteration-based updates. While regular policy gradient methods have demonstrated promising results in the domain of humanoid motor control, we demonstrate that these methods can be significantly improved using the natural policy gradient instead of the regular policy gradient. Based on this, it is proved that Kakade’s ‘average natural policy gradient’ is indeed the true natural gradient. A general algorithm for estimating the natural gradient, the Natural Actor-Critic algorithm, is introduced. This algorithm converges with probability one to the nearest local minimum in Riemannian space of the cost function. The algorithm outperforms non-natural policy gradients by far in a cart-pole balancing evaluation, and offers a promising route for the development of reinforcement learning for truly high-dimensional continuous state-action systems. Keywords: Reinforcement learning, neurodynamic programming, actor-critic methods, policy gradient methods, natural policy gradient
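
    For context, the natural policy gradient preconditions the vanilla gradient with the inverse Fisher information matrix, i.e. it follows F^{-1} g instead of g. The sketch below shows that step for a toy linear-Gaussian policy; it illustrates the idea under those assumptions and is not the paper's Natural Actor-Critic algorithm, and all names are hypothetical.

        # Minimal sketch of one natural policy-gradient step: precondition the
        # vanilla (REINFORCE-style) gradient with the inverse Fisher matrix, F^{-1} g.
        import numpy as np

        def score_gaussian(theta, sigma, state, action):
            """d/dtheta of log pi(a|s) for a linear-Gaussian policy a ~ N(theta . s, sigma^2)."""
            return (action - theta @ state) * state / sigma ** 2

        def natural_gradient_step(theta, sigma, states, actions, returns,
                                  lr=0.05, damping=1e-3):
            """One update; `states` is (N, d), `actions` and `returns` are length-N arrays."""
            scores = np.array([score_gaussian(theta, sigma, s, a)
                               for s, a in zip(states, actions)])
            vanilla_grad = (scores * returns[:, None]).mean(axis=0)  # policy gradient estimate
            fisher = scores.T @ scores / len(scores)                 # empirical Fisher information
            fisher += damping * np.eye(len(theta))                   # regularize before inversion
            natural_grad = np.linalg.solve(fisher, vanilla_grad)     # F^{-1} g
            return theta + lr * natural_grad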

    Developing Leadership in a National Cohort of Secondary Biology Teachers: Uses of an On-Line Course Structure to Develop Geographically Distant Professional Learning Community

    Get PDF
    This report is a descriptive study of the role that on-line courses might have in the development of Professional Learning Communities (PLCs) that support the national leadership initiatives of participating high school biology teachers. The one hundred teachers involved in the Life Sciences for a Global Community (LSGC) Institute are expected not only to deepen their content knowledge, but also to impact their district and state biology curricula. Additionally, the dispersion of Institute participants across the country presents a unique opportunity to develop, communicate, and implement a coherent national reform agenda. However, the geographic distance presents a barrier to collaborative design of leadership projects. Therefore, the LSGC Institute designed web-based, distance learning courses as a means for both instruction and the development of distant professional relationships.

    Online Learning of a Memory for Learning Rates

    Full text link
    The promise of learning to learn for robotics rests on the hope that by extracting some information about the learning process itself we can speed up subsequent similar learning tasks. Here, we introduce a computationally efficient online meta-learning algorithm that builds and optimizes a memory model of the optimal learning rate landscape from previously observed gradient behaviors. While performing task-specific optimization, this memory of learning rates predicts how to scale currently observed gradients. After applying the gradient scaling, our meta-learner updates its internal memory based on the observed effect its prediction had. Our meta-learner can be combined with any gradient-based optimizer, learns on the fly and can be transferred to new optimization tasks. In our evaluations we show that our meta-learning algorithm speeds up learning of MNIST classification and a variety of learning control tasks, either in batch or online learning settings. Comment: accepted to ICRA 2018, code available: https://github.com/fmeier/online-meta-learning ; video pitch available: https://youtu.be/9PzQ25FPPO
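
    The loop described above (scale each observed gradient by a remembered per-parameter rate, then update that memory from the observed effect of the step) can be illustrated with the toy sketch below. It substitutes a simple sign-agreement heuristic for the paper's learned memory model of the learning-rate landscape, so it is a conceptual illustration only; the linked repository contains the actual implementation.

        # Conceptual sketch: a per-parameter learning-rate "memory" that scales
        # incoming gradients and is adjusted online from the observed effect of
        # the previous step (here: sign agreement between successive gradients).
        import numpy as np

        class LearningRateMemory:
            def __init__(self, n_params, lr_init=1e-2, up=1.1, down=0.7,
                         lr_min=1e-6, lr_max=1.0):
                self.rates = np.full(n_params, lr_init)   # remembered per-parameter rates
                self.prev_grad = np.zeros(n_params)
                self.up, self.down = up, down
                self.lr_min, self.lr_max = lr_min, lr_max

            def step(self, params, grad):
                """Scale the observed gradient by the remembered rates, then update the memory."""
                update = -self.rates * grad
                # If the new gradient points the same way as the last one, the step was
                # conservative, so grow the rate; if the sign flipped, shrink it.
                agree = np.sign(grad) == np.sign(self.prev_grad)
                self.rates = np.clip(np.where(agree, self.rates * self.up,
                                              self.rates * self.down),
                                     self.lr_min, self.lr_max)
                self.prev_grad = grad
                return params + update

        # Toy usage on f(x) = 0.5 * ||x||^2, whose gradient is x.
        x = np.ones(3)
        memory = LearningRateMemory(n_params=3)
        for _ in range(50):
            x = memory.step(x, grad=x)
        print(x)   # close to zero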