    The Librating Companions in HD 37124, HD 12661, HD 82943, 47 Uma and GJ 876: Alignment or Antialignment?

    We investigated the apsidal motion of multi-planet systems. In the simulations, we found that the two planets of HD 37124, HD 12661, 47 Uma, and HD 82943 each undergo apsidal alignment or antialignment, whereas the companions of GJ 876 and υ And are locked only in apsidal alignment, with the relative apsidal longitude librating about 0°. Moreover, using Laplace-Lagrange secular theory, we obtained criteria for discerning whether a pair of planets in a given system is in libration or circulation.
    Comment: 13 pages, 3 figures, 2 tables. Published in ApJ Letters, 591, July 1, 2003 (figures now included to match the publication).
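
    The libration-versus-circulation test here comes down to whether the relative apsidal longitude Δϖ = ϖ1 − ϖ2 stays bounded near 0° (alignment) or 180° (antialignment), or instead sweeps through all angles. Below is a minimal sketch of that diagnostic, assuming ϖ time series in degrees from an orbit integration; the amplitude threshold amp_max is illustrative, not a value from the paper.

        import numpy as np

        def classify_apsidal_state(varpi1_deg, varpi2_deg, amp_max=90.0):
            """Classify the relative apsidal motion of a planet pair from
            time series of the two longitudes of periastron (degrees)."""
            dv = np.asarray(varpi1_deg, float) - np.asarray(varpi2_deg, float)
            # Wrap Delta-varpi into (-180, 180]: libration about 0 deg
            # (alignment) keeps this angle bounded away from +/-180.
            about0 = (dv + 180.0) % 360.0 - 180.0
            # Wrap into [0, 360): libration about 180 deg (antialignment)
            # keeps this angle bounded away from 0 and 360.
            about180 = dv % 360.0
            if np.max(np.abs(about0)) < amp_max:
                return "aligned: libration about 0 deg"
            if np.max(np.abs(about180 - 180.0)) < amp_max:
                return "antialigned: libration about 180 deg"
            return "circulation"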

    The Dynamical Simulations of the Planets Orbiting GJ 876

    We have performed simulations to investigate the dynamics of the planets orbiting the M dwarf star GJ 876, in an attempt to reveal any stabilizing mechanism for sustaining the system. We simulated different coplanar and noncoplanar configurations of the two-planet system, as well as other cases. From the simulations, we found that the 2:1 mean-motion resonance between the two planets can act as an effective mechanism for maintaining the stability of the system. This result is explained by a proposed analytical model. Using this model, we studied the region of motion of the inner planet by varying the parameters of the system, and we found that the analytical results agree well with the numerical simulations.
    Comment: 17 pages, 8 figures available through authors. Published in ApJ, 572, June 20, 2002.
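
    A standard diagnostic for the 2:1 mean-motion lock described above is the lowest-order resonant argument built from the planets' mean longitudes λ and the inner planet's longitude of periastron ϖ. The sketch below uses one common sign convention (conventions differ between papers) and assumes the angle series, in degrees, come from an N-body integration; the libration-amplitude cutoff is illustrative.

        import numpy as np

        def resonant_angle_21(lambda_in_deg, lambda_out_deg, pomega_in_deg):
            """2:1 resonant argument phi = lambda_in - 2*lambda_out + pomega_in,
            wrapped to (-180, 180]. Near the 2:1 commensurability the
            combination lambda_in - 2*lambda_out drifts slowly, so phi
            librates about a fixed value when the pair is resonantly locked."""
            phi = (np.asarray(lambda_in_deg, float)
                   - 2.0 * np.asarray(lambda_out_deg, float)
                   + np.asarray(pomega_in_deg, float))
            return (phi + 180.0) % 360.0 - 180.0

        def librates(phi_deg, amp_max=90.0):
            """Crude libration test: phi stays within amp_max degrees of its
            mean. Assumes the libration center sits away from the +/-180 wrap."""
            phi = np.asarray(phi_deg, float)
            return float(np.max(np.abs(phi - phi.mean()))) < amp_max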

    Direct Preference Optimization for Neural Machine Translation with Minimum Bayes Risk Decoding

    Minimum Bayes Risk (MBR) decoding can significantly improve the translation performance of Multilingual Large Language Models (MLLMs). However, MBR decoding is computationally expensive. In this paper, we show how a recently developed Reinforcement Learning (RL) technique, Direct Preference Optimization (DPO), can be used to fine-tune MLLMs so that we obtain the gains of MBR without the additional computation at inference time. Our fine-tuned models significantly outperform base MLLMs without preference optimization on multiple NMT test sets. Our method boosts the translation performance of MLLMs using relatively small monolingual fine-tuning sets.
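
    To make the pipeline concrete, the sketch below shows sampling-based MBR decoding and how its rankings can be turned into preference pairs for DPO. Here sample_fn and utility are hypothetical placeholders for the model's sampler and a sentence-level utility metric; the paper's actual preference-pair construction may differ.

        def mbr_select(candidates, utility):
            """Score each candidate translation by its average utility against
            all other samples (which act as pseudo-references) and return the
            indices of the best and worst candidates."""
            scores = []
            for i, hyp in enumerate(candidates):
                refs = [c for j, c in enumerate(candidates) if j != i]
                scores.append(sum(utility(hyp, r) for r in refs) / len(refs))
            order = sorted(range(len(candidates)), key=scores.__getitem__)
            return order[-1], order[0]  # (best, worst)

        def make_dpo_pairs(sources, sample_fn, utility, n_samples=16):
            """Build DPO preference triples: the MBR winner is 'chosen' and the
            MBR loser is 'rejected', so fine-tuning pushes the model toward
            MBR-quality outputs from a single greedy pass at inference."""
            pairs = []
            for src in sources:
                cands = sample_fn(src, n_samples)  # e.g. temperature sampling
                best, worst = mbr_select(cands, utility)
                pairs.append({"prompt": src,
                              "chosen": cands[best],
                              "rejected": cands[worst]})
            return pairs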