97 research outputs found

    Study on Influence of Residual Magnetite in Panzhihua Ilmenite Flotation

    The main utilization mode of titano-magnetite was first to separate the titano-magnetite by low-intensity magnetic separation, then to concentrate ilmenite from the magnetic separation tailings. The tailings mainly contained ilmenite, but a small quantity of titano-magnetite remained. Magnetic agglomeration of this titano-magnetite occurred because of remanence and pre-flotation grinding. Titano-magnetite was found to have better floatability than ilmenite, so some gangue particles wrapped by titano-magnetite entered the flotation concentrate. In short, residual titano-magnetite degraded ilmenite flotation by decreasing the grade and recovery of the concentrate and increasing reagent consumption. Pre-removal of residual titano-magnetite before cleaning ilmenite from the magnetic separation tailings by flotation was therefore essential.

    Contrastive Speech Mixup for Low-resource Keyword Spotting

    Most of the existing neural-based models for keyword spotting (KWS) in smart devices require thousands of training samples to learn a decent audio representation. However, with the rising demand for smart devices to become more personalized, KWS models need to adapt quickly to smaller user samples. To tackle this challenge, we propose a contrastive speech mixup (CosMix) learning algorithm for low-resource KWS. CosMix introduces an auxiliary contrastive loss to the existing mixup augmentation technique to maximize the relative similarity between the original pre-mixed samples and the augmented samples. The goal is to inject enhancing constraints that guide the model towards simpler but richer content-based speech representations from two augmented views (i.e. noisy mixed and clean pre-mixed utterances). We conduct our experiments on the Google Speech Command dataset, where we trim the size of the training set to as little as 2.5 minutes per keyword to simulate a low-resource condition. Our experimental results show a consistent improvement in the performance of multiple models, demonstrating the effectiveness of our method.
    Comment: Accepted by ICASSP 202
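    The two ingredients the abstract names, mixup augmentation and an auxiliary contrastive similarity term, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the encoder, temperature scaling, and batching are omitted, and `cosmix_aux_loss` is a hypothetical simplification of the contrastive term.

```python
import math
import random

def mixup(x1, x2, alpha=0.5):
    """Mixup augmentation: blend two waveforms with a coefficient
    drawn from a Beta(alpha, alpha) distribution."""
    lam = random.betavariate(alpha, alpha)
    return [lam * a + (1 - lam) * b for a, b in zip(x1, x2)], lam

def cosine_sim(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def cosmix_aux_loss(z_mixed, z1, z2, lam):
    """Hypothetical simplified contrastive term: pull the embedding of
    the mixed utterance toward the embeddings of both clean pre-mixed
    utterances, weighted by the mixing coefficient."""
    return -(lam * cosine_sim(z_mixed, z1)
             + (1 - lam) * cosine_sim(z_mixed, z2))
```

    In training, this auxiliary term would be added to the usual classification loss on the mixed sample, so the model is rewarded for keeping the mixed representation close to both clean views.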

    Targeting 4-1BB and PD-L1 induces potent and durable antitumor immunity in B-cell lymphoma

    Introduction: Although PD-1/L1 mAbs have demonstrated clinical benefits in certain cancer types, low response rates and resistance remain the main challenges for the application of these immune checkpoint inhibitors (ICIs). 4-1BB is a co-stimulatory molecule expressed on T cells that can enhance T cell proliferation and activation. Herein, the synergistic antitumor effect and underlying mechanism of a 4-1BB agonist combined with PD-1/PD-L1 blockade were determined in B-cell lymphoma (BCL).
    Methods: Subcutaneous transplantation BCL tumor models and metastasis models were established to evaluate the therapeutic effect of PD-L1 antibody and/or 4-1BB agonist in vivo. For the mechanistic study, RNA-seq was applied to analyze the tumor microenvironment and immune-related signaling pathways after combination treatment. The levels of IFN-γ, perforin, and granzyme B were determined by ELISA and real-time PCR assays, while tumor-infiltrating T cells were measured by flow cytometry and immunohistochemical analysis. CD4/CD8-specific antibodies were employed to deplete the related T cells to investigate the roles of CD4+ and CD8+ T cells in the combination treatment.
    Results: Combining an anti-PD-L1 ICI and a 4-1BB agonist elicited regression of BCL and significantly extended the survival of mice compared to either monotherapy. Co-targeting PD-L1 and 4-1BB preferentially promoted intratumoral cytotoxic lymphocyte infiltration and remodeled their function. RNA-seq analysis uncovered a series of up-regulated genes related to the activation and proliferation of cytotoxic T lymphocytes, further characterized by increased cytokines including IFN-γ, granzyme B, and perforin. Furthermore, depleting CD8+ T cells, but not CD4+ T cells, totally abrogated the antitumor efficacy, indicating the crucial function of the CD8+ T cell subset in the combination therapy.
    Discussion: In summary, our findings demonstrated that a 4-1BB agonistic antibody intensified the antitumor immunity of anti-PD-1/PD-L1 ICIs by promoting CD8+ T cell infiltration and activation, providing a novel therapeutic strategy for BCL.

    Are Soft Prompts Good Zero-shot Learners for Speech Recognition?

    Large self-supervised pre-trained speech models require computationally expensive fine-tuning for downstream tasks. Soft prompt tuning offers a simple parameter-efficient alternative by utilizing minimal soft prompt guidance, enhancing portability while also maintaining competitive performance. However, how and why soft prompts work in this setting is not well understood. In this study, we aim to deepen our understanding of this emerging method by investigating the role of soft prompts in automatic speech recognition (ASR). Our findings highlight their role as zero-shot learners in improving ASR performance, but also show that they are vulnerable to malicious modifications. Soft prompts aid generalization but are not obligatory for inference. We also identify two primary roles of soft prompts: content refinement and noise information enhancement, the latter of which improves robustness against background noise. Additionally, we propose an effective modification of noise prompts to show that they are capable of zero-shot adaptation to out-of-distribution noise environments.
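    The mechanism the abstract studies can be sketched minimally: in standard soft prompt tuning, trainable vectors are simply prepended to the input sequence of a frozen model. Here `encode` is a hypothetical stand-in for a frozen pre-trained encoder (mean pooling, not a real ASR model); the sketch also shows the abstract's point that prompts are optional at inference.

```python
def encode(frames):
    """Stand-in for a frozen pre-trained encoder: mean-pools each
    feature dimension over the sequence (a toy simplification)."""
    dim, n = len(frames[0]), len(frames)
    return [sum(f[d] for f in frames) / n for d in range(dim)]

def asr_forward(frames, soft_prompts=None):
    """Soft prompt tuning sketch: trainable prompt vectors are prepended
    to the input frames; only the prompts would receive gradients, while
    the encoder stays frozen. soft_prompts=None runs promptless inference."""
    seq = (soft_prompts or []) + frames
    return encode(seq)
```

    Because the prompts enter only as extra input positions, removing them leaves a valid (if less accurate) forward pass, which is consistent with the claim that soft prompts are not obligatory for inference.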

    MossFormer2: Combining Transformer and RNN-Free Recurrent Network for Enhanced Time-Domain Monaural Speech Separation

    Our previously proposed MossFormer has achieved promising performance in monaural speech separation. However, it predominantly adopts a self-attention-based MossFormer module, which tends to emphasize longer-range, coarser-scale dependencies, with a deficiency in effectively modelling finer-scale recurrent patterns. In this paper, we introduce a novel hybrid model that can model both long-range, coarse-scale dependencies and fine-scale recurrent patterns by integrating a recurrent module into the MossFormer framework. Instead of applying recurrent neural networks (RNNs) with traditional recurrent connections, we present a recurrent module based on a feedforward sequential memory network (FSMN), which is considered an "RNN-free" recurrent network because it captures recurrent patterns without using recurrent connections. Our recurrent module mainly comprises a dilated FSMN block enhanced with gated convolutional units (GCUs) and dense connections. In addition, a bottleneck layer and an output layer are added to control information flow. The recurrent module relies on linear projections and convolutions for seamless, parallel processing of the entire sequence. The integrated MossFormer2 hybrid model demonstrates remarkable enhancements over MossFormer and surpasses other state-of-the-art methods on the WSJ0-2/3mix, Libri2Mix, and WHAM!/WHAMR! benchmarks.
    Comment: 5 pages, 3 figures, accepted by ICASSP 202
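    The core FSMN idea, capturing recurrent patterns without recurrent connections, can be sketched as a learned weighted sum over past frames. This is a scalar-sequence toy assuming look-back-only memory taps; the paper's dilated, gated, densely connected version is far richer, but the key property survives: no state is carried between steps, so every frame can be computed in parallel.

```python
def fsmn_memory(h, taps):
    """One FSMN-style memory operation on a scalar sequence h: each
    output frame is the input frame plus a weighted sum of past frames,
    where taps[k] weights the frame k+1 steps in the past. Purely
    feedforward: each output depends only on fixed input positions."""
    out = []
    for t in range(len(h)):
        m = h[t]
        for k, w in enumerate(taps):
            if t - (k + 1) >= 0:
                m = m + w * h[t - (k + 1)]
        out.append(m)
    return out
```

    In a real FSMN the taps are per-channel learned parameters and the sum runs over vector-valued hidden states, which makes the operation equivalent to a causal 1-D convolution, hence the abstract's note that the module relies on convolutions for parallel processing.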

    SPGM: Prioritizing Local Features for enhanced speech separation performance

    Dual-path is a popular architecture for speech separation models (e.g. Sepformer): it splits long sequences into overlapping chunks, with intra- and inter-blocks that separately model intra-chunk local features and inter-chunk global relationships. However, it has been found that inter-blocks, which comprise half of a dual-path model's parameters, contribute minimally to performance. Thus, we propose the Single-Path Global Modulation (SPGM) block to replace the inter-blocks. SPGM is named after its structure, consisting of a parameter-free global pooling module followed by a modulation module comprising only 2% of the model's total parameters. The SPGM block allows all transformer layers in the model to be dedicated to local feature modelling, making the overall model single-path. SPGM achieves 22.1 dB SI-SDRi on WSJ0-2Mix and 20.4 dB SI-SDRi on Libri2Mix, exceeding the performance of Sepformer by 0.5 dB and 0.3 dB respectively, and matches the performance of recent SOTA models with up to 8 times fewer parameters.
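    The pooling-then-modulation structure described above can be sketched as follows. The shapes and the sigmoid gate parameters `gate_w`/`gate_b` are hypothetical, invented for illustration; the abstract does not specify the actual modulation module, only that pooling is parameter-free and the modulation is very small.

```python
import math

def spgm_block(x, gate_w, gate_b):
    """Sketch of a Single-Path Global Modulation step: mean-pool the
    sequence x (T frames of D features) over time with no parameters,
    pass the pooled vector through a tiny per-dimension sigmoid gate
    (the only parameters), and scale every frame by the gate."""
    T, D = len(x), len(x[0])
    pooled = [sum(x[t][d] for t in range(T)) / T for d in range(D)]
    gate = [1.0 / (1.0 + math.exp(-(gate_w[d] * pooled[d] + gate_b[d])))
            for d in range(D)]
    return [[x[t][d] * gate[d] for d in range(D)] for t in range(T)]
```

    The design point is that global information enters only through the pooled summary, so no per-pair inter-chunk attention (and almost no extra parameters) is needed.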

    An Integrative Framework for Bayesian Variable Selection with Informative Priors for Identifying Genes and Pathways

    The discovery of genetic or genomic markers plays a central role in the development of personalized medicine. A notable challenge is the high dimensionality of the data sets, as thousands of genes or millions of genetic variants are collected on a relatively small number of subjects. Traditional gene-wise selection methods using univariate analyses have difficulty incorporating correlational, structural, or functional relationships among the molecular measures. For microarray gene expression data, we first summarize solutions to 'large p, small n' problems, and then propose an integrative Bayesian variable selection (iBVS) framework for simultaneously identifying causal or marker genes and regulatory pathways. A novel partial least squares (PLS) g-prior for iBVS is developed to allow the incorporation of prior knowledge on gene-gene interactions or functional relationships. From the point of view of systems biology, iBVS enables users to directly target the joint effects of multiple genes and pathways in a hierarchical modeling framework to predict disease status or phenotype. The estimated posterior selection probabilities offer probabilistic and biological interpretations. Both simulated data and a set of microarray data for predicting stroke status are used to validate the performance of iBVS in a probit model with binary outcomes. iBVS offers a general framework for effective discovery of various molecular biomarkers by combining data-based statistics and knowledge-based priors. Guidelines on making posterior inferences, determining Bayesian significance levels, and improving computational efficiency are also discussed.
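    The posterior selection probabilities mentioned above can be illustrated at toy scale by exhaustive model enumeration. This is a sketch only: BIC is used here as a crude stand-in for the marginal likelihood (not the paper's PLS g-prior), and real 'large p' problems require stochastic search or MCMC rather than enumerating all 2^p subsets.

```python
import math
from itertools import combinations

def solve(A, b):
    """Gaussian elimination with partial pivoting for small linear systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rss(y, cols):
    """Residual sum of squares of OLS on the given predictor columns
    plus an intercept, via the normal equations."""
    n = len(y)
    X = [[1.0] + [c[i] for c in cols] for i in range(n)]
    p = len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)] for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    beta = solve(XtX, Xty)
    return sum((y[i] - sum(X[i][a] * beta[a] for a in range(p))) ** 2 for i in range(n))

def inclusion_probs(y, genes):
    """Score every gene subset with BIC (a stand-in for the marginal
    likelihood), convert scores to posterior model probabilities, and
    sum them into per-gene posterior inclusion probabilities."""
    n, p = len(y), len(genes)
    bic = {}
    for size in range(p + 1):
        for subset in combinations(range(p), size):
            r = rss(y, [genes[j] for j in subset])
            bic[subset] = n * math.log(r / n) + (size + 1) * math.log(n)
    m = min(bic.values())
    w = {s: math.exp(-(b - m) / 2.0) for s, b in bic.items()}
    z = sum(w.values())
    return [sum(wt for s, wt in w.items() if j in s) / z for j in range(p)]
```

    On data where the outcome tracks one gene closely, the subsets containing that gene dominate the posterior, so its inclusion probability approaches 1, which is the kind of probabilistic interpretation the abstract describes.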