
    Angular momentum transport and element mixing in the stellar interior I. Application to the rotating Sun

    The purpose of this work was to obtain a diffusion coefficient for magnetic angular momentum transport and material transport in a rotating solar model. We assumed that the transport of both angular momentum and chemical elements caused by magnetic fields could be treated as a diffusion process. The diffusion coefficient depends on the stellar radius, the angular velocity, and the configuration of the magnetic fields. Using this coefficient, we find that our model is more consistent with the helioseismic results for the total angular momentum, the angular momentum density, and the rotation rate in the radiative region than the one without magnetic fields. Not only can the magnetic fields redistribute angular momentum efficiently, but they can also strengthen the coupling between the radiative and convective zones. As a result, the sharp gradient of the rotation rate at the bottom of the convective zone is reduced. The thickness of the layer of sharp radial change in the rotation rate is about 0.036 R_{\odot} in our model. Furthermore, the difference in the squared sound speed between the seismic Sun and the model is improved by the material mixing associated with angular momentum transport.
    Comment: 8 pages, 2 figures
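
    The diffusion treatment described above can be illustrated with a minimal sketch: a 1-D explicit finite-difference step for dj/dt = d/dr (D dj/dr) on a radial grid with zero-flux boundaries, so total angular momentum is conserved. The grid, the constant coefficient, and the units here are hypothetical stand-ins, not the paper's solar model (whose D varies with radius, angular velocity, and field configuration).

    ```python
    import numpy as np

    def diffuse_angular_momentum(j, D, dr, dt, steps):
        """Explicit finite-difference sketch of dj/dt = d/dr (D dj/dr).

        j : specific angular momentum on a uniform radial grid (hypothetical units)
        D : constant diffusion coefficient (an assumption for this sketch)
        Zero-flux boundaries conserve the total angular momentum exactly.
        Stable for D * dt / dr**2 <= 0.5.
        """
        j = np.asarray(j, dtype=float).copy()
        for _ in range(steps):
            flow = D * np.diff(j) / dr   # diffusive flow across each cell interface
            j[:-1] += flow * dt / dr     # each cell gains what flows in from the right...
            j[1:] -= flow * dt / dr      # ...and its neighbour loses the same amount
        return j

    # A sharp initial peak is smoothed while the total is conserved,
    # mimicking how the magnetic diffusion reduces the rotation-rate gradient.
    j0 = np.array([0.0, 0.0, 4.0, 0.0, 0.0])
    j1 = diffuse_angular_momentum(j0, D=0.1, dr=1.0, dt=1.0, steps=200)
    ```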

    Solar Models with Revised Abundances and Opacities

    Using reconstructed opacities, we construct solar models with low heavy-element abundance. Rotational mixing and enhanced diffusion of helium and heavy elements are used to reconcile the recently observed abundances with helioseismology. The sound speed and density of models in which the relative and absolute diffusion coefficients for helium and heavy elements have been increased agree with the seismically inferred values to better than the 0.005 and 0.02 fractional levels, respectively. However, the surface helium abundance of the enhanced-diffusion model is too low. This low-helium problem can be solved to a great extent by rotational mixing. The surface helium abundance and the convection-zone depth of the rotating model M04R3, which has a surface Z of 0.0154, agree with the seismic results at the 1σ and 3σ levels, respectively. M04R3 is almost as good as the standard model M98. Some discrepancies between the models constructed in accord with the new element abundances and the seismic constraints can be resolved individually, but it seems difficult to resolve them all in a single scenario.
    Comment: 10 pages, 1 figure

    Learning a Deep Listwise Context Model for Ranking Refinement

    Learning to rank has been intensively studied and widely applied in information retrieval. Typically, a global ranking function is learned from a set of labeled data, which can achieve good performance on average but may be suboptimal for individual queries because it ignores the fact that relevant documents for different queries may have different distributions in the feature space. Inspired by the idea of pseudo relevance feedback, in which the top-ranked documents, which we refer to as the \textit{local ranking context}, can provide important information about the query's characteristics, we propose to use the inherent feature distributions of the top results to learn a Deep Listwise Context Model that helps us fine-tune the initial ranked list. Specifically, we employ a recurrent neural network to sequentially encode the top results using their feature vectors, learn a local context model, and use it to re-rank the top results. Our model has three merits: (1) it can capture the local ranking context based on the complex interactions between top results using a deep neural network; (2) it can be built on top of existing learning-to-rank methods by directly using their extracted feature vectors; (3) it is trained with an attention-based loss function, which is more effective and efficient than many existing listwise methods. Experimental results show that the proposed model can significantly improve state-of-the-art learning-to-rank methods on benchmark retrieval corpora.
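
    As a rough illustration of the local-ranking-context idea (not the paper's architecture; the recurrent cell and all parameters below are hypothetical stand-ins for learned weights), one can sequentially encode the top-k feature vectors and then re-score each document against the resulting context state:

    ```python
    import numpy as np

    def rerank_with_local_context(X, Wh, Wx):
        """Hypothetical sketch of local-context re-ranking.

        X  : (k, d) feature vectors of the initial top-k results, in rank order
        Wh : (h, h) recurrent weights; Wx : (h, d) input weights -- stand-ins
             for learned parameters (real training uses an attention-based loss).
        Returns the re-ranked indices and the new scores.
        """
        h = np.zeros(Wh.shape[0])
        for x in X:                        # encode the ranked list in order
            h = np.tanh(Wh @ h + Wx @ x)   # recurrent local-context update
        scores = X @ Wx.T @ h              # score each doc against the context
        order = np.argsort(-scores)        # best score first
        return order, scores

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 8))            # 5 top documents, 8 features each
    Wh = rng.normal(scale=0.1, size=(4, 4))
    Wx = rng.normal(scale=0.1, size=(4, 8))
    order, scores = rerank_with_local_context(X, Wh, Wx)
    ```

    The key point of the sketch is that a document's new score depends on all the other top results through the shared context state, unlike a global ranking function that scores each document in isolation.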

    Difference of optical conductivity between one- and two-dimensional doped nickelates

    We study the optical conductivity in doped nickelates and find a dramatic difference in the in-gap spectrum (\omega \lesssim 4 eV) between one-dimensional (1D) and two-dimensional (2D) nickelates. The difference is shown to be caused by the dependence of the hopping integral on dimensionality. The theoretical results consistently explain the experimental data for the 1D and 2D nickelates, Y_{2-x}Ca_xBaNiO_5 and La_{2-x}Sr_xNiO_4, respectively. The relation between the spectra in X-ray absorption experiments and the optical conductivity in La_{2-x}Sr_xNiO_4 is discussed.
    Comment: RevTeX, 4 pages, 4 figures

    Unbiased Learning to Rank with Unbiased Propensity Estimation

    Learning to rank with biased click data is a well-known challenge. A variety of methods have been explored to debias click data for learning to rank, such as click models, result interleaving, and, more recently, the unbiased learning-to-rank framework based on inverse propensity weighting. Despite their differences, most existing studies separate the estimation of click bias (namely the \textit{propensity model}) from the learning of ranking algorithms. To estimate click propensities, they either conduct online result randomization, which can negatively affect the user experience, or offline parameter estimation, which has special requirements for click data and is optimized for objectives (e.g. click likelihood) that are not directly related to the ranking performance of the system. In this work, we address those problems by unifying the learning of propensity models and ranking models. We find that the problem of estimating a propensity model from click data is a dual problem of unbiased learning to rank. Based on this observation, we propose a Dual Learning Algorithm (DLA) that jointly learns an unbiased ranker and an \textit{unbiased propensity model}. DLA is an automatic unbiased learning-to-rank framework in that it directly learns unbiased ranking models from biased click data without any preprocessing. It can adapt to changes in bias distributions and is applicable to online learning. Our empirical experiments with synthetic and real-world data show that models trained with DLA significantly outperformed unbiased learning-to-rank algorithms based on result randomization and models trained with relevance signals extracted by click models.
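
    The inverse-propensity-weighting principle underlying this framework can be sketched as follows (an assumed minimal form, not DLA's exact objective): each clicked document contributes a softmax cross-entropy term weighted by the inverse of its examination propensity, which corrects for position bias in expectation. The duality observed in the paper means the propensity-model loss takes the same shape with the roles of ranking scores and propensities swapped.

    ```python
    import numpy as np

    def ipw_listwise_loss(scores, clicks, propensities):
        """Inverse-propensity-weighted softmax loss (assumed minimal form).

        scores       : (n,) ranking scores for the documents of one query
        clicks       : (n,) 0/1 observed clicks
        propensities : (n,) estimated probability that each position was examined
        Weighting each clicked document by 1/propensity makes the expected loss
        equal, in expectation over examination, to the loss computed from
        unbiased relevance labels.
        """
        log_softmax = scores - np.log(np.sum(np.exp(scores)))
        clicked = np.asarray(clicks, dtype=bool)
        return -np.sum(log_softmax[clicked] / np.asarray(propensities)[clicked])

    scores = np.array([2.0, 1.0, 0.0])
    clicks = np.array([1, 0, 0])
    # Halving the clicked document's propensity doubles its loss contribution.
    full = ipw_listwise_loss(scores, clicks, np.array([1.0, 0.5, 0.25]))
    half = ipw_listwise_loss(scores, clicks, np.array([0.5, 0.5, 0.25]))
    ```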