    Adversarial Network Bottleneck Features for Noise Robust Speaker Verification

    In this paper, we propose a noise-robust bottleneck feature representation generated by an adversarial network (AN). The AN consists of two cascade-connected networks, an encoding network (EN) and a discriminative network (DN). Mel-frequency cepstral coefficients (MFCCs) of clean and noisy speech are fed to the EN, and the output of the EN is used as the noise-robust feature. The EN and DN are trained alternately: when training the DN, the noise types serve as the training labels; when training the EN, all labels are set to the same clean-speech label, which drives the AN features to become invariant to noise and thus achieve noise robustness. We evaluate the proposed feature on a Gaussian Mixture Model-Universal Background Model (GMM-UBM) based speaker verification system and compare it to MFCC features of speech enhanced by short-time spectral amplitude minimum mean square error (STSA-MMSE) and deep neural network-based speech enhancement (DNN-SE) methods. Experimental results on the RSR2015 database show that the proposed AN bottleneck feature (AN-BN) dramatically outperforms the STSA-MMSE and DNN-SE based MFCCs across noise types and signal-to-noise ratios. Furthermore, the AN-BN feature also improves speaker verification performance under the clean condition.
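
    A minimal PyTorch sketch of the alternating EN/DN training scheme described above. The layer sizes, optimizer settings, number of noise classes, and the clean-label index are illustrative assumptions, not values taken from the paper:

    ```python
    import torch
    import torch.nn as nn

    # Hypothetical dimensions: 39-dim MFCC input, 64-dim bottleneck, and
    # 7 classes (clean at index 0 plus 6 noise types); the abstract does
    # not specify these sizes.
    MFCC_DIM, BN_DIM, N_CLASSES = 39, 64, 7
    CLEAN_LABEL = 0

    encoder = nn.Sequential(nn.Linear(MFCC_DIM, 256), nn.ReLU(),
                            nn.Linear(256, BN_DIM))           # EN: MFCC -> bottleneck
    discriminator = nn.Sequential(nn.Linear(BN_DIM, 256), nn.ReLU(),
                                  nn.Linear(256, N_CLASSES))  # DN: bottleneck -> noise type

    opt_en = torch.optim.Adam(encoder.parameters(), lr=1e-4)
    opt_dn = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
    ce = nn.CrossEntropyLoss()

    def train_step(mfcc, noise_labels):
        # Step 1: train the DN with the true noise-type labels; the
        # encoder output is detached so only the DN is updated.
        opt_dn.zero_grad()
        loss_dn = ce(discriminator(encoder(mfcc).detach()), noise_labels)
        loss_dn.backward()
        opt_dn.step()

        # Step 2: train the EN against all-clean targets, pushing the
        # bottleneck features to stop carrying noise-type information.
        opt_en.zero_grad()
        clean_targets = torch.full_like(noise_labels, CLEAN_LABEL)
        loss_en = ce(discriminator(encoder(mfcc)), clean_targets)
        loss_en.backward()
        opt_en.step()
        return loss_dn.item(), loss_en.item()
    ```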

    Revisiting the TeV flare of PKS 2155-304 in 2006

    Blazars, a subclass of active galactic nuclei (AGN), are known to be bright γ-ray sources, frequently exhibiting active (flaring) periods. The blazar PKS 2155-304 is a high synchrotron-peaked BL Lac object located at redshift z = 0.116. On 2006 July 28, an extremely remarkable outburst of VHE γ-ray emission from this blazar was reported by the H.E.S.S. experiment, with an average flux more than 10 times the low-state level. The variability timescale of this extraordinary flare was as short as approximately 200 s. In order to guarantee the transparency of the emission region for TeV photons, the fast variability demands an extremely high Doppler factor δ_D > 50 of the jet within the classical one-zone model, leading to the so-called "Doppler factor crisis". Here we demonstrate that the stochastic dissipation model, which is a multi-blob scenario for blazars, can self-consistently explain the giant TeV flares of PKS 2155-304 and the low-state emission before and after the flares, in terms of both multi-wavelength spectral and variability characteristics. The required Doppler factor in this model can be as low as 20, a reasonable and typical value for blazar jets. The obtained model parameters may shed some light on the physical properties of the relativistic jet. (Comment: 14 pages, 12 figures, 4 tables)
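
    For readers unfamiliar with the argument, the size constraint behind the quoted Doppler limit is the standard causality bound: the comoving size R' of the emitting region cannot exceed the light-crossing distance implied by the observed variability timescale. A sketch in LaTeX, plugging in the numbers quoted above; the γγ-opacity step that turns this bound into δ_D > 50 is the one-zone argument referenced in the abstract, not reproduced here:

    ```latex
    % Causality bound on the comoving size R' of the emitting blob:
    R' \lesssim \frac{c \, t_{\rm var} \, \delta_{\rm D}}{1+z},
    \qquad t_{\rm var} \approx 200\,\mathrm{s}, \quad z = 0.116 .
    % Requiring the blob to be transparent to TeV photons
    % (\tau_{\gamma\gamma} < 1) then forces \delta_{\rm D} \gtrsim 50
    % in the classical one-zone model.
    ```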

    Highly Efficient Midinfrared On-Chip Electrical Generation of Graphene Plasmons by Inelastic Electron Tunneling Excitation

    Inelastic electron tunneling provides a low-energy pathway for the excitation of surface plasmons and light emission. We theoretically investigate tunnel junctions based on metals and graphene. We show that graphene is potentially a highly efficient material for tunneling excitation of plasmons because of its narrow plasmon linewidths, strong emission, and large tunability in the midinfrared wavelength regime. Compared to gold and silver, the enhancement can be up to 10 times at similar wavelengths and up to 5 orders of magnitude at their respective plasmon operating wavelengths. Tunneling excitation of graphene plasmons promises an efficient technology for on-chip electrical generation and manipulation of plasmons for graphene-based optoelectronics and nanophotonic integrated circuits. (Comment: 12 pages, 7 figures)

    Text-Independent Speaker Identification Using the Histogram Transform Model

    PETA: Evaluating the Impact of Protein Transfer Learning with Sub-word Tokenization on Downstream Applications

    Large protein language models are adept at capturing the underlying evolutionary information in primary structures, offering significant practical value for protein engineering. Compared to natural language corpora, protein amino-acid sequences have a smaller data volume and a more limited combinatorial space, so choosing an appropriate vocabulary size to optimize the pre-trained model is a pivotal issue. Moreover, despite the wealth of benchmarks and studies in the natural language community, a comprehensive benchmark for systematically evaluating protein language model quality is still lacking. Given these challenges, PETA trains language models with 14 different vocabulary sizes under three tokenization methods and conducts thousands of tests on 33 diverse downstream datasets to assess the models' transfer learning capabilities, incorporating two classification heads and three random seeds to mitigate potential biases. Extensive experiments indicate that vocabulary sizes between 50 and 200 optimize the model, whereas sizes exceeding 800 detrimentally affect its representational performance. Our code, model weights and datasets are available at https://github.com/ginnm/ProteinPretraining. (Comment: 46 pages, 4 figures, 9 tables)
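
    As a concrete illustration of the vocabulary-size question, here is a minimal sketch of sweeping sub-word vocabularies over protein sequences with sentencepiece BPE. BPE is one common sub-word scheme; the abstract does not name PETA's three tokenization methods, and the file names and sweep values below are illustrative (the 50-800 range mirrors the reported findings):

    ```python
    import sentencepiece as spm

    # proteins.txt: one amino-acid sequence per line, e.g. "MKTAYIAKQR..."
    for vocab_size in (50, 100, 200, 800):
        spm.SentencePieceTrainer.train(
            input="proteins.txt",
            model_prefix=f"peta_bpe_{vocab_size}",
            vocab_size=vocab_size,
            model_type="bpe",
            character_coverage=1.0,   # keep all 20 amino-acid symbols
        )

    # Tokenize a sequence with one of the trained vocabularies.
    sp = spm.SentencePieceProcessor(model_file="peta_bpe_200.model")
    print(sp.encode("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", out_type=str))
    ```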