
    Tuning the Morphology and Surface Property of Mineral Particles by Grinding Media

    Grinding of minerals for particle size reduction and liberation is a prerequisite for successful mineral flotation separation and powder modification. Different grinding media produce mineral particles with different physical morphology and surface chemistry. Particles from different mills expose different proportions of cleavage surfaces, which leads to different shape indexes and different surface reactivities toward organics such as collectors. The rod mill produces scheelite particles with a higher exposure of the more reactive {101} surface, which favors a stronger interaction with the collector. Greater exposure of the {101} surface also gives the rod mill particles larger elongation and flatness values, which are essential for particle attachment to air bubbles because they shorten the induction time. The rod mill particles have a lower critical surface tension, greater hydrophobicity, and better flotation recovery when treated with the collector. In addition, the rod mill particles, with their narrow particle size distribution, have a smaller specific surface area, so full monolayer adsorption of the collector on their surfaces can be achieved at a relatively low concentration. These findings help establish the relation between particle surface physicochemistry and wettability, providing valuable guidance for the optimization of flotation separation and powder modification technology.

    Transcriptome Comparison between Fetal and Adult Mouse Livers: Implications for Circadian Clock Mechanisms

    Microarray transcriptome analyses of fetal mouse liver did not detect circadian expression rhythms of clock genes or clock-controlled genes, although some rhythmic transcripts, likely not driven by endogenous cellular clocks, were identified. This finding reveals a key distinction between the circadian oscillators in fetal and adult mouse livers. In this study, therefore, the transcriptomes of fetal and adult livers were systematically compared to identify differences in the gene expression profiles between these two developmental stages. Approximately 1000 transcripts were differentially enriched between the fetal and adult livers. These transcripts represent genes with cellular functions characteristic of the distinct developmental stages. Clock genes were also differentially expressed between the fetal and adult livers. Developmental differences in liver gene expression might have contributed to the differences in oscillation status and functional states of the cellular circadian clock between fetal and adult livers.

    Epigenetic Control of Circadian Clock Operation during Development

    The molecular players of circadian clock oscillation have been identified and extensively characterized. The epigenetic mechanisms behind circadian gene expression control have also been studied recently, although details remain to be elucidated. In this review, we briefly summarize the current understanding of the mammalian clock. We also provide evidence for the lack of circadian oscillation in particular cell types. As the circadian clock interacts intimately with various cellular functions in different types of cells, it must have plasticity and specificity in its operation within different epigenetic environments. The lack of circadian oscillation in certain cells provides a unique opportunity to study the epigenetic environment required to permit circadian oscillation and to identify key factors influencing proper clock function. How epigenetic mechanisms, including DNA methylation and chromatin modifications, participate in the control of clock oscillation still awaits future studies at the genomic scale.

    Fast Gradient Method for Low-Rank Matrix Estimation

    Projected gradient descent and its Riemannian variant belong to a typical class of methods for low-rank matrix estimation. This paper proposes a new Nesterov's Accelerated Riemannian Gradient algorithm using efficient orthographic retraction and tangent space projection. The subspace relationship between the iterative and extrapolated sequences on the low-rank matrix manifold provides a computational convenience. With perturbation analysis of the truncated singular value decomposition and two retractions, we systematically analyze the local convergence of gradient algorithms and Nesterov's variants in the Euclidean and Riemannian settings. Theoretically, we estimate the exact rate of local linear convergence under different parameters using the spectral radius in closed form, and give the optimal convergence rate and the corresponding momentum parameter. When the parameter is unknown, an adaptive restart scheme can avoid the oscillation problem caused by high momentum, thus approaching the optimal convergence rate. Extensive numerical experiments confirm the estimates of the convergence rate and demonstrate that the proposed algorithm is competitive with first-order methods for matrix completion and matrix sensing. Comment: Accepted for publication in Journal of Scientific Computing.
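    The following sketch is not from the paper; it illustrates the generic technique the abstract builds on: Nesterov-style extrapolation combined with projection onto the rank-r manifold via truncated SVD, applied to matrix completion. The momentum value, step count, and problem setup are illustrative assumptions.

```python
import numpy as np

def svd_truncate(M, r):
    # Project onto the rank-r matrix manifold via truncated SVD.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :r] * s[:r] @ Vt[:r]

def nag_matrix_completion(Y, mask, r, beta=0.5, steps=500):
    """Nesterov-accelerated projected gradient for matrix completion.

    Y    : observed matrix (zeros at unobserved entries)
    mask : 0/1 matrix marking observed entries
    """
    X = np.zeros_like(Y)
    X_prev = X.copy()
    for _ in range(steps):
        Z = X + beta * (X - X_prev)            # extrapolation (momentum) step
        G = mask * (Z - Y)                     # gradient of 0.5*||mask*(Z-Y)||_F^2
        X_prev, X = X, svd_truncate(Z - G, r)  # gradient step + rank projection
    return X
```

    On a well-conditioned synthetic rank-2 problem with most entries observed, this recovers the full matrix to small relative error; the paper's contribution lies in the Riemannian retraction, the exact local rate analysis, and the restart scheme, none of which are reproduced here.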

    Locally Discriminant Diffusion Projection and Its Application to Speech Emotion Recognition

    The existing Diffusion Maps method brings diffusion to data samples by a Markov random walk. In this paper, to provide a general solution form of Diffusion Maps, we first propose a generalized single-graph-diffusion embedding framework on the basis of the graph embedding framework. Second, by designing the embedding graph of the framework, we propose an algorithm, Locally Discriminant Diffusion Projection (LDDP), for speech emotion recognition. This algorithm is the projection form of the improved Diffusion Maps, which includes both discriminant information and local information. The linear or kernelized form of LDDP (i.e., LLDDP or KLDDP) is used to reduce the dimensionality of the original speech emotion features. We validate the proposed algorithm on two widely used speech emotion databases, EMO-DB and eNTERFACE'05. The experimental results show that the proposed LDDP methods, including LLDDP and KLDDP, outperform other state-of-the-art dimensionality reduction methods based on graph embedding or discriminant analysis.
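    LDDP itself is not reproduced here, but the baseline it improves on is standard: Diffusion Maps embeds samples using the eigenvectors of a Markov transition matrix built from a Gaussian affinity graph. A minimal sketch, with an illustrative kernel width and no discriminant or locality terms:

```python
import numpy as np

def diffusion_map(X, n_components=2, eps=1.0, t=1):
    """Basic Diffusion Maps: spectral embedding of a Markov random walk."""
    # Pairwise squared distances and Gaussian kernel affinities.
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-D2 / eps)
    P = W / W.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Drop the trivial constant eigenvector; scale coordinates by eigenvalue^t.
    return vecs[:, 1:n_components + 1] * (vals[1:n_components + 1] ** t)
```

    On two well-separated point clouds, the leading nontrivial coordinate separates the groups; LDDP's contribution is to turn this embedding into a projection that also encodes class-discriminant and local-neighborhood information.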

    A Focused Study on Sequence Length for Dialogue Summarization

    Output length is critical to dialogue summarization systems. The dialogue summary length is determined by multiple factors, including dialogue complexity, summary objective, and personal preferences. In this work, we approach dialogue summary length from three perspectives. First, we analyze the length differences between existing models' outputs and the corresponding human references and find that summarization models tend to produce more verbose summaries due to their pretraining objectives. Second, we identify salient features for summary length prediction by comparing different model settings. Third, we experiment with a length-aware summarizer and show notable improvement over existing models when summary length is well incorporated. Analysis and experiments are conducted on the popular DialogSum and SAMSum datasets to validate our findings. Comment: Preprint version - ICASSP submission.
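    The abstract does not specify how length is incorporated; one common scheme (an assumption here, not the paper's method) is to quantize the target length into coarse buckets and prepend a control token so the summarizer can condition on it. The bucket edges and token format below are hypothetical:

```python
def length_bucket(n_tokens, edges=(15, 30, 45)):
    """Map a target summary length (in tokens) to a coarse bucket id."""
    return sum(n_tokens > e for e in edges)

def add_length_control(dialogue, target_len):
    # Hypothetical control-token scheme: prepend a length-bucket token so a
    # conditional summarizer can be trained to match the requested length.
    return f"<len_{length_bucket(target_len)}> {dialogue}"
```

    At training time the bucket comes from the reference summary's length; at inference the user picks a bucket to trade brevity against coverage.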

    MeaeQ: Mount Model Extraction Attacks with Efficient Queries

    We study model extraction attacks in natural language processing (NLP), where attackers aim to steal victim models by repeatedly querying open Application Programming Interfaces (APIs). Recent works focus on limited-query-budget settings and adopt random sampling or active-learning-based sampling strategies on publicly available, unannotated data sources. However, these methods often select queries that lack task relevance and data diversity, leading to limited success in achieving satisfactory results at low query cost. In this paper, we propose MeaeQ (Model extraction attack with efficient Queries), a straightforward yet effective method to address these issues. Specifically, we first use a zero-shot sequence inference classifier, combined with API service information, to filter task-relevant data from a public text corpus instead of a problem-domain-specific dataset. We then employ a clustering-based data reduction technique to obtain representative data as queries for the attack. Extensive experiments conducted on four benchmark datasets demonstrate that MeaeQ achieves higher functional similarity to the victim model than baselines while requiring fewer queries. Our code is available at https://github.com/C-W-D/MeaeQ. Comment: Accepted by EMNLP 2023 main conference.
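    The clustering-based data reduction step can be sketched generically: cluster the candidate queries' feature vectors and keep only the sample nearest each centroid as a representative query. This is not MeaeQ's exact procedure; the farthest-point initialization and plain k-means below are illustrative choices, and well-separated clusters are assumed (an empty cluster would break the mean update).

```python
import numpy as np

def select_representative_queries(feats, k, iters=20):
    """Pick k representative queries: farthest-point init, k-means refinement,
    then return the index of the sample nearest each centroid."""
    # Farthest-point initialization spreads the initial centroids out.
    centers = feats[[0]]
    for _ in range(k - 1):
        d = ((feats[:, None] - centers[None]) ** 2).sum(-1).min(1)
        centers = np.vstack([centers, feats[d.argmax()]])
    # Standard k-means refinement (assumes no cluster goes empty).
    for _ in range(iters):
        assign = ((feats[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.vstack([feats[assign == j].mean(0) for j in range(k)])
    d = ((feats[:, None] - centers[None]) ** 2).sum(-1)
    return np.unique(d.argmin(0))  # indices of samples nearest each centroid
```

    The returned indices point at real corpus samples (not synthetic centroids), so they can be sent directly as API queries under a fixed budget k.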

    Learning to Initialize: Can Meta Learning Improve Cross-task Generalization in Prompt Tuning?

    Prompt tuning (PT), which tunes only the embeddings of an additional sequence of tokens per task while keeping the pre-trained language model (PLM) frozen, has shown remarkable performance in few-shot learning. Despite this, PT has been shown to rely heavily on good initialization of the prompt embeddings. In this work, we study meta prompt tuning (MPT) to systematically explore whether, and how, meta-learning can improve cross-task generalization in PT by learning to initialize the prompt embeddings from other relevant tasks. We empirically analyze a representative set of meta-learning algorithms in a wide range of adaptation settings with different source/target task configurations on a large set of few-shot tasks. With extensive experiments and analysis, we demonstrate the effectiveness of MPT. We find the improvement to be particularly significant on classification tasks. For other kinds of tasks, such as question answering, we observe that while MPT can outperform PT in most cases, it does not always outperform multi-task learning. We further provide an in-depth analysis from the perspective of task similarity.
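    The core idea of learning an initialization from related tasks can be sketched with a Reptile-style meta-update (one of the simpler meta-learning algorithms; whether the paper evaluates it is not stated in the abstract). Here a plain vector stands in for the prompt embedding, and each "task" is a toy quadratic loss; all hyperparameters are illustrative.

```python
import numpy as np

def inner_adapt(w, target, lr=0.1, steps=5):
    # A few gradient steps on the task loss 0.5*||w - target||^2,
    # standing in for tuning the prompt embedding on one task.
    for _ in range(steps):
        w = w - lr * (w - target)
    return w

def reptile_meta_init(task_targets, meta_lr=0.5, epochs=50):
    """Reptile-style meta-learning of an initialization: after adapting to
    each task, nudge the shared init toward the adapted parameters."""
    init = np.zeros_like(task_targets[0])
    for _ in range(epochs):
        for t in task_targets:
            adapted = inner_adapt(init, t)
            init = init + meta_lr * (adapted - init)
    return init
```

    When the task optima cluster, the meta-learned init lands near their mean, so a few adaptation steps on a new related task start much closer to its optimum than a zero (random) initialization, which is precisely the benefit MPT seeks for prompt embeddings.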