
    Oscillations in the G-type Giants

    We carried out precise radial-velocity measurements of four G-type giants: 11 Com, ζ Hya, ε Tau, and η Her. Short-term variations with amplitudes of 1-7 m/s and periods of 3-10 hours were detected. A period analysis shows that each star's power distribution has a Gaussian shape and that the peak frequencies (ν_max) are in good agreement with the prediction of the scaling law. Using a pre-whitening procedure, significant frequency peaks above 3σ were extracted for these giants. From these peaks, we determined the large frequency separation by constructing the highest-peak distribution of the collapsed power spectrum; the result also agrees well with what the scaling law for the large separation predicts. Echelle diagrams of the oscillation frequencies were created from the extracted large separations, which is very useful for clarifying the properties of the oscillation modes. In these echelle diagrams, odd-even mode sequences are clearly seen, so it is certain that in these G-type giants non-radial modes are detected in addition to radial modes. These properties of the oscillation modes are consistent with the theoretical predictions of Dziembowski et al. (2001) and Dupret et al. (2009). Damping times for these giants were estimated with the method developed by Stello et al. (2004), and the relation of the Q value (the ratio of damping time to period) to the period was discussed by adding data for other stars ranging from dwarfs to giants.
    Comment: 28 pages, 16 figures, accepted for publication in PASJ 62, No. 4, 2010
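    As a rough illustration of the scaling laws and the echelle construction mentioned in this abstract, the following is a minimal Python sketch (not code from the paper): it evaluates the standard asteroseismic scaling relations for ν_max and the large separation Δν, and folds a list of frequencies modulo Δν to build an echelle diagram. The solar reference values are the commonly used ones, and the stellar parameters in the example are hypothetical.

    import numpy as np

    NU_MAX_SUN = 3050.0  # muHz, solar frequency of maximum power (assumed reference)
    DNU_SUN = 134.9      # muHz, solar large frequency separation (assumed reference)

    def nu_max_scaling(mass, radius, teff):
        # nu_max scales as M / (R^2 * sqrt(Teff / Teff_sun)); M, R in solar units.
        return NU_MAX_SUN * mass / (radius**2 * np.sqrt(teff / 5777.0))

    def delta_nu_scaling(mass, radius):
        # Delta_nu scales as the square root of the mean density, sqrt(M / R^3).
        return DNU_SUN * np.sqrt(mass / radius**3)

    def echelle(frequencies, delta_nu):
        # Pair each frequency with its value modulo Delta_nu; plotting these
        # pairs stacks modes of equal degree into near-vertical ridges, which
        # is how the odd-even mode sequences become visible.
        freqs = np.asarray(frequencies, dtype=float)
        return np.column_stack([np.mod(freqs, delta_nu), freqs])

    # Hypothetical G-type giant (mass, radius in solar units; Teff in K):
    print(nu_max_scaling(2.7, 10.2, 4980.0))  # predicted nu_max in muHz
    print(delta_nu_scaling(2.7, 10.2))        # predicted Delta_nu in muHz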

    The correlation between modals in the quotative clause and the predicate or modified noun in the main clause in Japanese


    Image preference estimation with a data-driven approach: A comparative study between gaze and image features

    Understanding how humans subjectively look at and evaluate images is an important task for various applications in the field of multimedia interaction. While it has been repeatedly pointed out that eye movements can be used to infer the internal states of humans, few successes have been reported concerning image understanding. In this paper, we investigate the possibility of estimating image preference from a person's eye movements in a supervised manner. A dataset of eye movements was collected while participants viewed pairs of natural images, and it was used to train image preference label classifiers. The input feature is defined as a combination of various fixation and saccade event statistics, and the use of the random forest algorithm allows us to quantitatively assess how much each statistic contributes to the classification task. We show that the gaze-based classifier achieved higher accuracy than metadata-based baseline methods and a simple rule-based classifier throughout the experiments. We also present a quantitative comparison with image-based preference classifiers and discuss the potential and limitations of the gaze-based preference estimator.
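    To make the classification setup concrete, here is a minimal Python sketch under assumed details (the feature names and the data are placeholders, not the authors' dataset or code): a random forest is trained on per-trial fixation/saccade statistics to predict a binary preference label, and its feature importances give the per-statistic contributions the abstract refers to.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Hypothetical per-trial gaze statistics for each viewed image pair.
    feature_names = [
        "fixation_count", "mean_fixation_duration",
        "saccade_count", "mean_saccade_amplitude", "mean_saccade_velocity",
    ]
    X = rng.normal(size=(200, len(feature_names)))  # placeholder feature matrix
    y = rng.integers(0, 2, size=200)                # which image was preferred (0/1)

    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

    # Feature importances quantify how much each statistic contributes.
    clf.fit(X, y)
    for name, importance in sorted(zip(feature_names, clf.feature_importances_),
                                   key=lambda pair: -pair[1]):
        print(f"{name}: {importance:.3f}")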