23 research outputs found

    From Large-scale Molecular Clouds to Filaments and Cores : Unveiling the Role of the Magnetic Fields in Star Formation

    I present a comprehensive study of the role of strong magnetic fields in shaping the structure of molecular clouds. We run three-dimensional turbulent non-ideal magnetohydrodynamic simulations (with ambipolar diffusion) to study the effect of magnetic fields on the evolution of the column density probability distribution function (PDF). Our results indicate a systematic dependence of the column density PDF of molecular clouds on magnetic field strength and turbulence, with observationally distinguishable outcomes for supercritical (gravity-dominated) and subcritical (magnetic-field-dominated) initial conditions. We find that most cases develop a direct power-law PDF; only the subcritical clouds with turbulence maintain a lognormal body of the PDF, albeit with a power-law tail at high values. I also present a scenario for the formation of oscillatory quasi-equilibrium magnetic ribbons in turbulent subcritical molecular clouds. The synthetic observed relation between apparent width in projection and observed column density is relatively flat, similar to observations of molecular cloud filaments and unlike the simple expectation based on a Jeans length argument. Additionally, I develop a "core field structure" (CFS) method, which combines spatially resolved observations of the nonthermal velocity dispersion from the Green Bank Ammonia Survey (GAS) of the L1688 region of the Ophiuchus molecular cloud with the column density map to determine the magnetic field strength profile across dense cores. Applying the CFS method, we find that for most cores in Ophiuchus the mass-to-flux ratio decreases radially outward.
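
The supercritical/subcritical distinction can be made concrete with the standard normalized mass-to-flux ratio, μ ≈ 7.6×10⁻²¹ N(H₂)[cm⁻²] / B[μG], where μ > 1 means gravity dominates and μ < 1 means the field dominates. The sketch below evaluates that textbook criterion; it is an illustration only, not the CFS method itself, and the numerical inputs are hypothetical.

```python
def mass_to_flux(N_H2_cm2, B_uG):
    """Normalized mass-to-flux ratio mu = (M/Phi) / (M/Phi)_crit.

    Uses the standard conversion mu ~= 7.6e-21 * N(H2)[cm^-2] / B[uG],
    where the critical value is (M/Phi)_crit = 1 / (2*pi*sqrt(G)).
    """
    return 7.6e-21 * N_H2_cm2 / B_uG

# Hypothetical core: inner region with column density 1e22 cm^-2 and an
# outer envelope at 1e21 cm^-2, both threaded by a 30 uG field.
mu_inner = mass_to_flux(1e22, 30.0)  # > 1 -> supercritical (gravity wins)
mu_outer = mass_to_flux(1e21, 30.0)  # < 1 -> subcritical (field wins)
```

A radial profile of μ that decreases outward, as found for most Ophiuchus cores, follows from evaluating this ratio ring by ring with the inferred B(r) profile.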

    Hierarchical Control for Bipedal Locomotion using Central Pattern Generators and Neural Networks

    The complexity of bipedal locomotion may be attributed to the difficulty of synchronizing joint movements while simultaneously achieving high-level objectives such as walking in a particular direction. Artificial central pattern generators (CPGs) can produce synchronized joint movements and have been used in the past for bipedal locomotion. However, most existing CPG-based approaches do not address the problem of high-level control explicitly. We propose a novel hierarchical control mechanism for bipedal locomotion in which an optimized CPG network is used for joint control and a neural network acts as a high-level controller that modulates the CPG network. By separating motion generation from motion modulation, the high-level controller does not need to control individual joints directly but can instead learn to achieve a higher goal using a low-dimensional control signal. The feasibility of the hierarchical controller is demonstrated through simulation experiments using the Neuro-Inspired Companion (NICO) robot. Experimental results demonstrate the controller's ability to function even without the availability of an exact robot model. Comment: In: Proceedings of the Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics (ICDL-EpiRob), Oslo, Norway, Aug. 19-22, 201
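
The core idea, synchronized oscillators generating joint trajectories while a high-level controller steers them through a low-dimensional signal, can be sketched with two phase-coupled oscillators driven toward an antiphase (walking-like) gait. The oscillator form, coupling gain, and the scalar frequency command are illustrative assumptions, not the paper's optimized CPG network.

```python
import numpy as np

def cpg_step(theta, nu, k=5.0, lag=np.pi, dt=0.005):
    """One Euler step of two phase-coupled oscillators.

    The coupling term pulls the phase difference toward `lag` (here pi,
    i.e. antiphase, as in alternating legs); `nu` is the step frequency.
    """
    t1, t2 = theta
    d1 = 2 * np.pi * nu + k * np.sin(t2 - t1 - lag)
    d2 = 2 * np.pi * nu + k * np.sin(t1 - t2 - lag)
    return np.array([t1 + d1 * dt, t2 + d2 * dt])

theta = np.array([0.0, 0.3])  # start nearly in phase
nu = 1.0  # low-dimensional command a high-level controller would modulate
for _ in range(4000):
    theta = cpg_step(theta, nu)
outputs = np.sin(theta)  # oscillator outputs mapped to joint commands
```

Because the high-level controller only sets scalars like `nu` (or the phase lag), it never touches individual joints, which is the separation of motion generation from motion modulation described above.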

    Effect of Optimizer, Initializer, and Architecture of Hypernetworks on Continual Learning from Demonstration

    In continual learning from demonstration (CLfD), a robot learns a sequence of real-world motion skills continually from human demonstrations. Recently, hypernetworks have been successful in solving this problem. In this paper, we perform an exploratory study of the effects of different optimizers, initializers, and network architectures on the continual learning performance of hypernetworks for CLfD. Our results show that adaptive learning rate optimizers work well, but initializers specially designed for hypernetworks offer no advantages for CLfD. We also show that hypernetworks that are capable of stable trajectory predictions are robust to different network architectures. Our open-source code is available at https://github.com/sebastianbergner/ExploringCLFD.
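
A hypernetwork generates the weights of a target network from a per-task embedding, so a new skill adds a new embedding rather than overwriting target weights. The sketch below, a linear hypernetwork producing a toy MLP that maps trajectory points to outputs, is an assumed minimal form for illustration; it is not the architecture or training setup studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
emb_dim, hid, in_dim, out_dim = 8, 16, 2, 2
n_params = in_dim * hid + hid + hid * out_dim + out_dim  # target-net size

# Hypernetwork: here just a linear map from task embedding to the full
# flattened weight vector of the target network.
H = rng.normal(0.0, 0.1, (n_params, emb_dim))
task_emb = rng.normal(0.0, 1.0, emb_dim)  # one embedding per learned skill

def target_forward(x, theta):
    """Target MLP whose weights `theta` were generated by the hypernetwork."""
    i = 0
    W1 = theta[i:i + in_dim * hid].reshape(in_dim, hid); i += in_dim * hid
    b1 = theta[i:i + hid]; i += hid
    W2 = theta[i:i + hid * out_dim].reshape(hid, out_dim); i += hid * out_dim
    b2 = theta[i:]
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

theta = H @ task_emb                # generate weights for this skill
x = rng.normal(size=(5, in_dim))    # e.g. points along a demonstration
y = target_forward(x, theta)
```

In continual learning, only `H` (regularized to keep old skills' generated weights stable) and the new `task_emb` are trained per skill, which is what makes the choice of optimizer and initializer for `H` the interesting variable.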

    Analytic Models of Brown Dwarfs and the Substellar Mass Limit

    We present the analytic theory of brown dwarf evolution and the lower mass limit of the hydrogen-burning main-sequence stars, and introduce some modifications to the existing models. We give an exact expression for the pressure of an ideal nonrelativistic Fermi gas at a finite temperature, thereby allowing for nonzero values of the degeneracy parameter. We review the derivation of surface luminosity using an entropy matching condition and the first-order phase transition between the molecular hydrogen in the outer envelope and the partially ionized hydrogen in the inner region. We also discuss the results of modern simulations of the plasma phase transition, which illustrate the uncertainties in determining its critical temperature. Based on the existing models and with some simple modifications, we find the maximum mass for a brown dwarf to be in the range 0.064 M⊙–0.087 M⊙. An analytic formula for the luminosity evolution allows us to estimate the time period of the nonsteady-state (i.e., non-main-sequence) nuclear burning for substellar objects. We also calculate the evolution of very low mass stars. We estimate that ≃11% of stars take longer than 10^7 yr to reach the main sequence, and ≃5% of stars take longer than 10^8 yr.
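
The finite-temperature pressure of an ideal nonrelativistic Fermi gas involves the Fermi-Dirac integral F_{3/2}(η) = ∫₀^∞ x^{3/2} dx / (e^{x−η} + 1), with η the degeneracy parameter. As a numeric sanity check (a sketch of the ingredient, not the paper's exact pressure expression), simple trapezoidal quadrature recovers both the nondegenerate limit F_{3/2} → Γ(5/2) e^η and the leading degenerate limit F_{3/2} → (2/5) η^{5/2}:

```python
import math
import numpy as np

def F32(eta, xmax=150.0, n=300001):
    """Fermi-Dirac integral F_{3/2}(eta) by trapezoidal quadrature."""
    x = np.linspace(0.0, xmax, n)
    y = x**1.5 / (np.exp(x - eta) + 1.0)
    dx = x[1] - x[0]
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))

# Nondegenerate limit (eta << 0): classical ideal gas, F -> Gamma(5/2) e^eta
nd, nd_ref = F32(-10.0), math.gamma(2.5) * math.exp(-10.0)

# Degenerate limit (eta >> 1): F -> (2/5) eta^(5/2) at leading order
dg, dg_ref = F32(50.0), 0.4 * 50.0**2.5
```

The pressure itself is proportional to (kT)^{5/2} F_{3/2}(η), so interpolating smoothly between these limits is exactly what an exact finite-temperature expression buys over the zero-temperature degenerate formula.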

    GRINN: A Physics-Informed Neural Network for solving hydrodynamic systems in the presence of self-gravity

    Modeling self-gravitating gas flows is essential to answering many fundamental questions in astrophysics. This spans many topics, including planet-forming disks, star-forming clouds, galaxy formation, and the development of large-scale structure in the Universe. However, the nonlinear interaction between gravity and fluid dynamics offers a formidable challenge to solving the resulting time-dependent partial differential equations (PDEs) in three dimensions (3D). By leveraging the universal approximation capabilities of a neural network within a mesh-free framework, physics-informed neural networks (PINNs) offer a new way of addressing this challenge. We introduce the gravity-informed neural network (GRINN), a PINN-based code, to simulate 3D self-gravitating hydrodynamic systems. Here, we specifically study gravitational instability and wave propagation in an isothermal gas. Our results match a linear analytic solution to within 1% in the linear regime and a conventional grid-code solution to within 5% as the disturbance grows into the nonlinear regime. We find that the computation time of the GRINN does not scale with the number of dimensions, in contrast to the scaling of the grid-based code for the hydrodynamic and self-gravity calculations as the number of dimensions is increased. The GRINN computation time is longer than that of the grid code in one- and two-dimensional calculations but is an order of magnitude shorter in 3D with similar accuracy. Physics-informed neural networks like GRINN thus show promise for advancing our ability to model 3D astrophysical flows.
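
What a PINN minimizes is the mean-squared residual of the governing PDEs at collocation points. The sketch below evaluates such residuals for the 1D isothermal Euler equations on a small-amplitude sound wave, where they vanish to O(ε²); self-gravity (the Poisson residual that GRINN adds) is omitted for brevity, and everything here is an illustrative stand-in rather than GRINN's implementation, which uses network outputs and automatic differentiation in place of analytic fields and finite differences.

```python
import numpy as np

cs, rho0, eps, k = 1.0, 1.0, 1e-3, 2 * np.pi
om = cs * k  # sound-wave dispersion relation (no gravity in this sketch)

# Exact linear sound wave: the "trial solution" standing in for a network.
rho = lambda x, t: rho0 * (1.0 + eps * np.cos(k * x - om * t))
v = lambda x, t: eps * cs * np.cos(k * x - om * t)

def pde_residuals(x, t, h=1e-4):
    """Central-difference residuals of the 1D isothermal Euler equations,
    i.e. the quantities a PINN drives to zero at collocation points."""
    d_t = lambda f: (f(x, t + h) - f(x, t - h)) / (2 * h)
    d_x = lambda f: (f(x + h, t) - f(x - h, t)) / (2 * h)
    cont = d_t(rho) + d_x(lambda x_, t_: rho(x_, t_) * v(x_, t_))
    mom = d_t(v) + v(x, t) * d_x(v) + cs**2 / rho(x, t) * d_x(rho)
    return cont, mom

x = np.linspace(0.0, 1.0, 101)  # collocation points
cont, mom = pde_residuals(x, 0.3)
loss = np.mean(cont**2) + np.mean(mom**2)  # the PINN training loss
```

Because the loss is evaluated pointwise rather than on a grid, adding a dimension only adds network inputs, which is the origin of the dimension-independent computation time reported above.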

    Action Noise in Off-Policy Deep Reinforcement Learning: Impact on Exploration and Performance

    Many Deep Reinforcement Learning (D-RL) algorithms rely on simple forms of exploration such as the additive action noise often used in continuous control domains. Typically, the scaling factor of this action noise is chosen as a hyperparameter and kept constant during training. In this paper, we focus on action noise in off-policy deep reinforcement learning for continuous control. We analyze how the learned policy is impacted by the noise type, the noise scale, and the scaling-factor reduction schedule. We consider the two most prominent types of action noise, Gaussian and Ornstein-Uhlenbeck noise, and perform a vast experimental campaign by systematically varying the noise type and scale parameter, and by measuring variables of interest such as the expected return of the policy and the state-space coverage during exploration. For the latter, we propose a novel state-space coverage measure, X_{U,rel}, that is more robust to estimation artifacts caused by points close to the state-space boundary than previously proposed measures. Larger noise scales generally increase state-space coverage. However, we found that increasing the space coverage using a larger noise scale is often not beneficial. On the contrary, reducing the noise scale over the training process reduces the variance and generally improves the learning performance. We conclude that the best noise type and scale are environment dependent, and based on our observations we derive heuristic rules for guiding the choice of the action noise as a starting point for further optimization. Comment: Published in Transactions on Machine Learning Research (11/2022) https://openreview.net/forum?id=NljBlZ6hm
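
The two noise types and a scale-reduction schedule can be sketched as follows; the parameter values (θ = 0.15, σ = 0.2) are common defaults for continuous-control benchmarks, not necessarily those used in the study.

```python
import numpy as np

class OUNoise:
    """Ornstein-Uhlenbeck action noise: temporally correlated samples."""
    def __init__(self, dim, theta=0.15, sigma=0.2, dt=1.0, seed=0):
        self.theta, self.sigma, self.dt = theta, sigma, dt
        self.x = np.zeros(dim)
        self.rng = np.random.default_rng(seed)
    def sample(self, scale=1.0):
        # Mean-reverting step toward 0 plus a Gaussian kick.
        self.x += (-self.theta * self.x * self.dt
                   + self.sigma * np.sqrt(self.dt)
                   * self.rng.standard_normal(self.x.shape))
        return scale * self.x

def gaussian_noise(rng, dim, scale=1.0, sigma=0.2):
    """Uncorrelated Gaussian action noise: fresh i.i.d. draw each step."""
    return scale * sigma * rng.standard_normal(dim)

def linear_scale(step, total_steps, start=1.0, end=0.0):
    """Scale-reduction schedule: anneal the noise scale over training."""
    frac = min(step / total_steps, 1.0)
    return start + frac * (end - start)
```

Consecutive OU samples drift smoothly (useful for momentum-driven environments), whereas Gaussian noise is independent each step; the schedule shrinks either one toward pure exploitation late in training.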