
    Precise Radial Velocities of Polaris: Detection of Amplitude Growth

    We present the first results from a long-term radial velocity study of the Cepheid Polaris (F7 Ib), aimed at determining the amplitude and period of its pulsations and the nature of its secondary periodicities. 264 new precise radial velocity measurements were obtained during 2004-2007 with the fiber-fed Bohyunsan Observatory Echelle Spectrograph (BOES) on the 1.8 m telescope at the Bohyunsan Optical Astronomy Observatory (BOAO) in Korea. We find pulsational radial velocity amplitudes of Polaris for the three seasons 2005.183, 2006.360, and 2007.349 of 2K = 2.210 +/- 0.048 km/s, 2K = 2.080 +/- 0.042 km/s, and 2K = 2.406 +/- 0.018 km/s, respectively, indicating that the pulsational amplitude of Polaris, which had decayed during the last century, is now increasing rapidly. The pulsational period was found to be increasing as well. This is the first detection of a historical turnaround in the pulsational amplitude change of a Cepheid. We also clearly find additional radial velocity variations on a time scale of about 119 days with an amplitude of about +/- 138 m/s; these are quasi-periodic rather than strictly periodic. We do not confirm in our data the variations on time scales of 34-45 days found in earlier radial velocity data obtained in the 1980s and 1990s. We suggest that both the 119-day quasi-periodic, noncoherent variations found in our data and the 34-45-day variations found before can be caused by the 119-day rotation period of Polaris together with surface inhomogeneities such as single or multiple spot configurations varying with time.
    Comment: 15 pages, 7 figures, accepted for publication in The Astronomical Journal
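    The quoted 2K is the peak-to-peak radial velocity variation, which can be recovered by a linear least-squares sinusoid fit at the pulsation period. A minimal sketch on simulated data, not the paper's actual measurements: the ~3.97-day period of Polaris is assumed, and the epochs, noise level, and injected amplitude are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated observations (illustrative values, not the paper's data):
period = 3.97                  # days, assumed pulsation period of Polaris
K = 1.2                        # injected semi-amplitude in km/s, so 2K = 2.4
t = rng.uniform(0, 300, 264)   # 264 epochs, as in the observed data set
rv = K * np.sin(2 * np.pi * t / period + 0.7) + 0.02 * rng.normal(size=264)

# Fit rv = a*sin(wt) + b*cos(wt) + c by linear least squares; the
# peak-to-peak amplitude is then 2K = 2*sqrt(a^2 + b^2).
w = 2 * np.pi / period
M = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
a, b, c = np.linalg.lstsq(M, rv, rcond=None)[0]
two_K = 2 * np.hypot(a, b)
```

    With the period held fixed, the model is linear in (a, b, c), so no nonlinear optimization is needed; the phase falls out of the sin/cos split.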

    Multimodal One-Shot Learning of Speech and Images

    Imagine a robot is shown new concepts visually together with spoken tags, e.g. "milk", "eggs", "butter". After seeing one paired audio-visual example per class, it is shown a new set of unseen instances of these objects, and asked to pick the "milk". Without receiving any hard labels, could it learn to match the new continuous speech input to the correct visual instance? Although unimodal one-shot learning has been studied, where one labelled example in a single modality is given per class, this example motivates multimodal one-shot learning. Our main contribution is to formally define this task, and to propose several baseline and advanced models. We use a dataset of paired spoken and visual digits to specifically investigate recent advances in Siamese convolutional neural networks. Our best Siamese model achieves twice the accuracy of a nearest neighbour model using pixel-distance over images and dynamic time warping over speech in 11-way cross-modal matching.
    Comment: 5 pages, 1 figure, 3 tables; accepted to ICASSP 201
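    The nearest-neighbour baseline described above can be sketched as: match the spoken query to its closest support-set utterance by dynamic time warping (DTW), then pick the test image closest in pixel distance to that utterance's paired support image. A minimal sketch; the feature shapes and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences
    of shape (frames, dims), using Euclidean frame distances."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]

def one_shot_match(query_speech, support_speech, support_images, test_images):
    """Cross-modal matching baseline: nearest support utterance by DTW,
    then nearest test image (pixel distance) to its paired support image.
    Returns the index of the chosen test image."""
    k = min(range(len(support_speech)),
            key=lambda i: dtw_distance(query_speech, support_speech[i]))
    ref = support_images[k]
    return min(range(len(test_images)),
               key=lambda i: np.linalg.norm(test_images[i] - ref))
```

    The paired support example acts as the bridge between modalities: speech is never compared to images directly, only through its one-shot partner.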

    Visually grounded learning of keyword prediction from untranscribed speech

    During language acquisition, infants have the benefit of visual cues to ground spoken language. Robots similarly have access to audio and visual sensors. Recent work has shown that images and spoken captions can be mapped into a meaningful common space, allowing images to be retrieved using speech and vice versa. In this setting of images paired with untranscribed spoken captions, we consider whether computer vision systems can be used to obtain textual labels for the speech. Concretely, we use an image-to-words multi-label visual classifier to tag images with soft textual labels, and then train a neural network to map from the speech to these soft targets. We show that the resulting speech system is able to predict which words occur in an utterance---acting as a spoken bag-of-words classifier---without seeing any parallel speech and text. We find that the model often confuses semantically related words, e.g. "man" and "person", making it even more effective as a semantic keyword spotter.
    Comment: 5 pages, 3 figures, 5 tables; small updates, added link to code; accepted to Interspeech 201
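    Training a speech network against the visual tagger's soft outputs amounts to minimizing a per-word sigmoid cross-entropy against soft targets in [0, 1]. A minimal numpy sketch under stated assumptions: synthetic vectors stand in for real speech features and visual tagger scores, and a single linear layer stands in for the actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 100 utterances as fixed-length speech feature
# vectors, and soft word probabilities over a 10-word vocabulary that a
# visual tagger would have assigned to each utterance's paired image.
X = rng.normal(size=(100, 20))            # speech features
W_true = rng.normal(size=(20, 10))
Y = 1.0 / (1.0 + np.exp(-X @ W_true))     # soft visual targets in [0, 1]

def loss_and_grad(W):
    """Sigmoid cross-entropy against soft targets, and its gradient."""
    P = 1.0 / (1.0 + np.exp(-X @ W))
    eps = 1e-9
    loss = -np.mean(Y * np.log(P + eps) + (1 - Y) * np.log(1 - P + eps))
    grad = X.T @ (P - Y) / len(X)
    return loss, grad

# Gradient descent; no transcriptions are used anywhere, only the
# visual tagger's soft scores.
W = np.zeros((20, 10))
losses = []
for _ in range(200):
    loss, grad = loss_and_grad(W)
    losses.append(loss)
    W -= 0.5 * grad
```

    At test time, thresholding or ranking the per-word sigmoid outputs gives the bag-of-words prediction for an utterance.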