242 research outputs found

    Statistical and dynamical decoupling of the IGM from Dark Matter

    The mean mass density of cosmic dark matter is larger than that of baryonic matter by a factor of about 5 in the ΛCDM universe. Therefore, gravity on large scales should be dominated by the distribution of dark matter. However, a series of observations incontrovertibly shows that the velocity and density fields of baryonic matter are decoupled from the underlying dark matter field. This paper presents our attempts to unveil the physics behind this puzzle. In the linear approximation, the dynamics of the baryon fluid is completely governed by the gravity of the dark matter. Consequently, the mass density field of baryonic matter ρ_b(r,t) will be proportional to that of dark matter ρ_dm(r,t), even if the two differ initially. In the weakly and moderately nonlinear regime, the dynamics of the baryon fluid can be sketched by the Burgers equation. A basic feature of Burgers dynamics is the formation of shocks. When the Reynolds number is large, the Burgers fluid enters the state of Burgers turbulence, which consists of shocks and complex structures. The collisionless dark matter, on the other hand, does not form such shocks but instead develops a multivalued velocity field. Therefore, the weakly and moderately nonlinear evolution leads to the IGM-dark matter deviation. Yet the velocity field of the Burgers fluid is still irrotational, as gravity is curl-free. In the fully nonlinear regime, vorticity develops in the velocity field and the cosmic baryonic fluid is no longer potential flow, as the dynamics of vorticity is independent of gravity and can be self-maintained by the nonlinearity of hydrodynamics. In this case, the cosmic baryon fluid is in a state of fully developed turbulence, which is statistically and dynamically decoupled from the dark matter. This scenario provides a mechanism for a coherent explanation of the observations. Comment: 21 pages
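    The shock formation that the abstract attributes to Burgers dynamics can be seen in a minimal numerical sketch (this toy 1D setup, grid sizes, and viscosity value are illustrative assumptions, not the paper's actual cosmological simulation): a smooth initial velocity profile evolved under the viscous Burgers equation u_t + u u_x = ν u_xx steepens into a shock-like front.

    ```python
    import numpy as np

    def burgers_steepening(nx=200, nt=300, nu=0.005, dt=0.002):
        """Evolve the 1D viscous Burgers equation u_t + u*u_x = nu*u_xx
        from a smooth sine wave on a periodic domain. The nonlinear
        advection term steepens the profile toward a shock; we track
        the maximum velocity gradient as a proxy for shock formation."""
        L = 2 * np.pi
        dx = L / nx
        x = np.linspace(0, L, nx, endpoint=False)
        u = np.sin(x) + 1.5  # smooth initial velocity field (positive mean flow)
        max_grad = [np.max(np.abs(np.gradient(u, dx)))]
        for _ in range(nt):
            # central differences on a periodic grid
            ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
            uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
            u = u + dt * (-u * ux + nu * uxx)
            max_grad.append(np.max(np.abs(np.gradient(u, dx))))
        return max_grad

    grads = burgers_steepening()
    # the steepest gradient grows as the wave steepens toward a shock
    print(f"initial max |du/dx| = {grads[0]:.2f}, final = {grads[-1]:.2f}")
    ```

    The growth of the maximum gradient illustrates the tendency of the baryon fluid, as modeled by Burgers dynamics, to develop shocks, whereas collisionless dark matter instead develops a multivalued velocity field.
    
    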

    Self-Paced Multi-Task Learning

    In this paper, we propose a novel multi-task learning (MTL) framework, called Self-Paced Multi-Task Learning (SPMTL). Different from previous works that treat all tasks and instances equally during training, SPMTL attempts to jointly learn the tasks by taking into consideration the complexities of both tasks and instances. This is inspired by the cognitive process of the human brain, which often learns from the easy to the hard. We construct a compact SPMTL formulation by proposing a new task-oriented regularizer that can jointly prioritize the tasks and the instances. Thus it can be interpreted as a self-paced learner for MTL. A simple yet effective algorithm is designed for optimizing the proposed objective function. An error bound for a simplified formulation is also analyzed theoretically. Experimental results on toy and real-world datasets demonstrate the effectiveness of the proposed approach compared to state-of-the-art methods.
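    The easy-to-hard selection at the core of self-paced learning can be sketched in a few lines. This is a minimal single-task toy (the `lam` threshold, growth factor, and least-squares learner are illustrative assumptions); SPMTL's actual task-oriented regularizer, which jointly prioritizes tasks and instances, is not reproduced here.

    ```python
    import numpy as np

    def self_paced_weights(losses, lam):
        """Hard self-paced weighting: include an instance only if its
        current loss is below the age parameter lam (easy-to-hard)."""
        return (losses < lam).astype(float)

    def self_paced_regression(X, y, lam=1.0, growth=1.3, iters=10):
        """Toy self-paced learner: alternate between (1) weighted least
        squares on the currently selected 'easy' instances and
        (2) growing lam so harder instances are gradually admitted."""
        w = np.zeros(X.shape[1])
        for _ in range(iters):
            losses = (X @ w - y) ** 2
            v = self_paced_weights(losses, lam)
            if v.sum() == 0:  # cold start: admit the single easiest instance
                v[np.argmin(losses)] = 1.0
            # weighted normal equations with a small ridge for stability
            A = X.T @ (v[:, None] * X) + 1e-6 * np.eye(X.shape[1])
            b = X.T @ (v * y)
            w = np.linalg.solve(A, b)
            lam *= growth  # anneal the pace: move from easy to hard
        return w

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    w_true = np.array([2.0, -1.0, 0.5])
    y = X @ w_true + 0.01 * rng.normal(size=100)
    w_hat = self_paced_regression(X, y)
    ```

    In the full SPMTL formulation, the binary instance weights above are replaced by a joint task- and instance-level prioritization learned through the proposed regularizer.
    
    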