
    A Monitoring Campaign for Luhman 16AB. I. Detection of Resolved Near-Infrared Spectroscopic Variability

    [abbreviated] We report resolved near-infrared spectroscopic monitoring of the nearby L dwarf/T dwarf binary WISE J104915.57-531906.1AB (Luhman 16AB), as part of a broader campaign to characterize the spectral energy distribution and temporal variability of this system. A continuous 45-minute sequence of low-resolution IRTF/SpeX data spanning 0.8-2.4 micron was obtained, concurrent with combined-light optical photometry with ESO/TRAPPIST. Our spectral observations confirm the flux reversal of this binary, and we detect a wavelength-dependent decline in the relative spectral fluxes of the two components coincident with a decline in the combined-light optical brightness of the system over the course of the observation. These data are successfully modeled as a combination of brightness and color variability in the T0.5 Luhman 16B, consistent with cloud variations, and no significant variability in L7.5 Luhman 16A. We estimate a peak-to-peak amplitude of 13.5% at 1.25 micron over the full lightcurve. Using a two-spot brightness temperature model, we infer an average cloud covering fraction of ~30-55% for Luhman 16B, varying by 15-30% over a rotation period. A Rhines scale interpretation for the size of the variable features explains an apparent correlation between period and amplitude for three highly variable T dwarfs, and predicts relatively fast winds (1-3 km/s) for Luhman 16B, consistent with lightcurve evolution on an advective time scale (1-3 rotation periods). Our observations support the model of a patchy disruption of the mineral cloud layer as a universal feature of the L dwarf/T dwarf transition. Comment: 11 pages, 7 figures; accepted for publication in the Astrophysical Journal
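The two-spot brightness temperature model described above can be sketched as a linear mix of two Planck functions, one for the cool cloudy fraction and one for the warmer clear fraction of the photosphere. The temperatures (1000 K / 1300 K) below are illustrative assumptions, not the paper's fitted values; only the ~30-55% covering fractions and the 1.25 micron wavelength come from the abstract.

```python
import numpy as np

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck(lam, temp):
    """Blackbody spectral radiance at wavelength lam (m), temperature temp (K)."""
    return (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * temp))

def two_spot_flux(lam, cloud_frac, t_cloud, t_clear):
    """Disk-integrated flux of a photosphere with a cool cloudy fraction
    and a warmer clear fraction (linear two-spot mixing)."""
    return cloud_frac * planck(lam, t_cloud) + (1.0 - cloud_frac) * planck(lam, t_clear)

# Illustrative temperatures only: vary cloud cover from ~30% to ~55% at 1.25 micron
lam_j = 1.25e-6
f_min_clouds = two_spot_flux(lam_j, 0.30, 1000.0, 1300.0)
f_max_clouds = two_spot_flux(lam_j, 0.55, 1000.0, 1300.0)
amplitude = (f_min_clouds - f_max_clouds) / f_min_clouds  # fractional dimming
```

In this toy version, increasing the cool-cloud covering fraction dims the disk-integrated flux, which is the sense of the variability inferred for Luhman 16B.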

    Multiple-F0 estimation of piano sounds exploiting spectral structure and temporal evolution

    This paper proposes a system for multiple fundamental frequency estimation of piano sounds using pitch candidate selection rules which employ spectral structure and temporal evolution. As a time-frequency representation, the Resonator Time-Frequency Image of the input signal is employed, a noise suppression model is used, and a spectral whitening procedure is performed. In addition, a spectral flux-based onset detector is employed in order to select the steady-state region of the produced sound. In the multiple-F0 estimation stage, tuning and inharmonicity parameters are extracted and a pitch salience function is proposed. Pitch presence tests are performed utilizing information from the spectral structure of pitch candidates, aiming to suppress errors occurring at multiples and sub-multiples of the true pitches. A novel feature for the estimation of harmonically related pitches is proposed, based on the common amplitude modulation assumption. Experiments are performed on the MAPS database using 8784 piano samples of classical, jazz, and random chords with polyphony levels between 1 and 6. The proposed system is computationally inexpensive, being able to perform multiple-F0 estimation experiments in real time. Experimental results indicate that the proposed system outperforms state-of-the-art approaches for the aforementioned task in a statistically significant manner. Index Terms: multiple-F0 estimation, resonator time-frequency image, common amplitude modulation
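A pitch salience function of the harmonic-summation kind mentioned above can be sketched as follows. The geometric harmonic weighting, harmonic count, and synthetic spectrum are illustrative assumptions, not the paper's exact formulation (which also folds in tuning and inharmonicity parameters).

```python
import numpy as np

def salience(spectrum, freqs, f0, n_harm=6, alpha=0.8):
    """Simple harmonic-summation salience: sum geometrically weighted
    spectral amplitudes at integer multiples of the candidate f0."""
    total = 0.0
    for h in range(1, n_harm + 1):
        idx = int(np.argmin(np.abs(freqs - h * f0)))  # nearest bin to h*f0
        total += (alpha ** (h - 1)) * spectrum[idx]
    return total

# Synthetic whitened spectrum: a 220 Hz tone with six decaying partials
freqs = np.arange(0, 4000.0)      # 1 Hz bins
spectrum = np.zeros_like(freqs)
for h in range(1, 7):
    spectrum[220 * h] = 1.0 / h

s_true = salience(spectrum, freqs, 220.0)  # true pitch
s_sub = salience(spectrum, freqs, 110.0)   # sub-multiple (octave-below) candidate
s_off = salience(spectrum, freqs, 300.0)   # unrelated candidate
```

The sub-multiple candidate scores lower because its odd-numbered harmonics (110, 330, 550 Hz) fall on empty bins; pitch presence tests of the kind described in the abstract exploit exactly this kind of spectral-structure evidence to suppress multiple/sub-multiple errors.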

    Multi-Resolution Fully Convolutional Neural Networks for Monaural Audio Source Separation

    In deep neural networks with convolutional layers, each layer typically has a fixed-size/single-resolution receptive field (RF). Convolutional layers with a large RF capture global information from the input features, while layers with a small RF capture high-resolution local details from the input features. In this work, we introduce novel deep multi-resolution fully convolutional neural networks (MR-FCNN), where each layer has different RF sizes to extract multi-resolution features that capture both global and local detail from its input features. The proposed MR-FCNN is applied to separate a target audio source from a mixture of many audio sources. Experimental results show that using MR-FCNN improves the performance compared to feedforward deep neural networks (DNNs) and single-resolution deep fully convolutional neural networks (FCNNs) on the audio source separation problem. Comment: arXiv admin note: text overlap with arXiv:1703.0801
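The idea of a layer with multiple receptive-field sizes can be sketched with parallel 1-D convolutions of different kernel widths whose outputs are stacked as feature channels. The kernel sizes and fixed averaging weights below are placeholder assumptions; a real MR-FCNN layer uses learned filters on 2-D time-frequency input.

```python
import numpy as np

def multi_res_layer(x, kernel_sizes=(3, 9, 27)):
    """Apply parallel 1-D convolutions with different receptive-field sizes
    and stack the outputs as feature channels (multi-resolution sketch)."""
    outs = []
    for k in kernel_sizes:
        w = np.ones(k) / k  # placeholder smoothing kernel; learned in practice
        outs.append(np.convolve(x, w, mode="same"))
    return np.stack(outs)

rng = np.random.default_rng(0)
x = rng.standard_normal(256)     # stand-in for one row of a mixture spectrogram
features = multi_res_layer(x)    # shape (3, 256): local through global views
```

The small-kernel channel tracks local detail almost unchanged, while the wide-kernel channel smooths toward a global trend; stacking both gives the next layer access to every resolution at once.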