10,966 research outputs found

    Research on the explosion suppression effect of aluminum alloy explosion-proof materials cleaned by ultrasound

    Get PDF
    A premixed-gas explosion pipe system was established to study how the explosion pressure and rate of pressure rise of a 10% methane/air premixed gas change under four conditions: no material filled in the explosive pipe, used material filled, new material filled, and cleaned material filled. The results show that, comparing the cleaned material with the used material, the average maximum explosion pressure was reduced by 21.62% and the average rate of pressure rise decreased by 84.80%. These results indicate that the suppression performance of used aluminum alloy explosion-proof materials improves greatly after the used materials are cleaned.

    Series expansions in cross-ambiguity functions

    Get PDF
    Master's thesis (Master of Science)

    Self-Organization Towards 1/f Noise in Deep Neural Networks

    Full text link
    Despite 1/f noise being ubiquitous in both natural and artificial systems, no general explanation for the phenomenon has received widespread acceptance. One well-known system in which 1/f noise has been observed is the human brain, and some have proposed that this 'noise' is important to the healthy function of the brain. As deep neural networks (DNNs) are loosely modelled after the human brain, and as they begin to achieve human-level performance on specific tasks, it is worth investigating whether the same 1/f noise is present in these artificial networks as well. Indeed, we find 1/f noise in DNNs - specifically in Long Short-Term Memory (LSTM) networks trained on a real-world dataset - by measuring the Power Spectral Density (PSD) of different activations within the network in response to a sequential input of natural language. This is analogous to the measurement of 1/f noise in human brains with techniques such as electroencephalography (EEG) and functional Magnetic Resonance Imaging (fMRI). We further examine the exponent values of the 1/f noise in the "inner" and "outer" activations of the LSTM cell, finding some resemblance to the variations of the exponents in fMRI signals. In addition, comparing the exponent values of the LSTM network at "rest" with those while performing "tasks", we find a trend similar to that of the human brain, where the exponent is less negative while performing tasks.
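The central measurement here - estimating a 1/f exponent from a signal's power spectral density - can be sketched as follows. This is an illustrative stand-in for the paper's pipeline, not its exact method: the synthetic "pink" signal, the plain FFT periodogram, and the log-log least-squares fit are all assumptions.

```python
import numpy as np

def fit_spectral_exponent(x, fs=1.0):
    """Estimate the exponent a in PSD(f) ~ f**a by a log-log linear fit."""
    x = np.asarray(x, dtype=float)
    spectrum = np.fft.rfft(x - x.mean())        # remove DC before the FFT
    psd = np.abs(spectrum) ** 2                 # periodogram estimate of the PSD
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    mask = freqs > 0                            # drop the zero-frequency bin
    slope, _ = np.polyfit(np.log(freqs[mask]), np.log(psd[mask]), 1)
    return slope

# Synthesize a signal with a known 1/f spectrum by shaping white noise.
rng = np.random.default_rng(0)
white = rng.standard_normal(2 ** 14)
shaped = np.fft.rfft(white)
f = np.fft.rfftfreq(white.size)
shaped[1:] /= np.sqrt(f[1:])        # amplitude ~ f**-0.5, so power ~ 1/f
pink = np.fft.irfft(shaped, n=white.size)

print(fit_spectral_exponent(pink))  # slope close to -1
```

In practice the same fit would be applied to recorded LSTM activation traces rather than synthetic noise, and a variance-reduced PSD estimator (e.g. Welch averaging) would typically replace the raw periodogram.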

    Suppression of breast cancer cell growth by Na(+)/H(+) exchanger regulatory factor 1 (NHERF1)

    Get PDF
    INTRODUCTION: Na(+)/H(+) exchanger regulatory factor 1 (NHERF1, also known as EBP50 or NHERF) is a putative tumour suppressor gene in human breast cancer. Located at 17q25.1, NHERF1 is frequently targeted during breast tumourigenesis. Loss of heterozygosity (LOH) at the NHERF1 locus is found in more than 50% of breast tumours. In addition, NHERF1 is mutated in a subset of primary breast tumours and breast cancer cell lines. LOH at the NHERF1 locus is strongly associated with aggressive features of breast tumours, implicating NHERF1 as a haploinsufficiency tumour suppressor gene. However, the putative NHERF1 tumour suppressor activity has not been functionally verified. METHODS: To confirm the NHERF1 tumour suppressor activity suggested by our genetic analyses, we used retrovirus-transduced short hairpin RNA (shRNA) to knock down NHERF1 expression in the breast cancer cell lines MCF7 and T47D. These cells were then assessed for cell growth in vitro and in vivo. The control and NHERF1 knockdown cells were also serum-starved and re-fed to compare their cell cycle progression as measured by fluorescence-activated cell sorting analyses. RESULTS: We found that downregulation of endogenous NHERF1 in T47D or MCF7 cells resulted in enhanced cell proliferation under both anchorage-dependent and -independent conditions compared with the vector control cells. NHERF1 knockdown T47D cells implanted at the mammary fat pads of athymic mice formed larger tumours than did control cells. We found that serum-starved NHERF1 knockdown cells had a faster G(1)-to-S transition after serum re-stimulation than the control cells. Immunoblotting showed that the accelerated cell cycle progression in NHERF1 knockdown cells was accompanied by increased expression of cyclin E and an elevated Rb phosphorylation level. CONCLUSION: Our findings suggest that the normal function of NHERF1 in mammary epithelial cells involves blockage of cell cycle progression. Our study affirmed the tumour suppressor activity of NHERF1 in the breast, which may be related to its regulatory effect on the cell cycle. It warrants future investigation of this novel tumour suppressor pathway in human breast cancer, which may open up therapeutic opportunities.

    Improved Noisy Student Training for Automatic Speech Recognition

    Full text link
    Recently, a semi-supervised learning method known as "noisy student training" has been shown to significantly improve the image classification performance of deep networks. Noisy student training is an iterative self-training method that leverages augmentation to improve network performance. In this work, we adapt and improve noisy student training for automatic speech recognition, employing (adaptive) SpecAugment as the augmentation method. We find effective methods to filter, balance and augment the data generated between self-training iterations. By doing so, we are able to obtain word error rates (WERs) of 4.2%/8.6% on the clean/noisy LibriSpeech test sets using only the clean 100h subset of LibriSpeech as the supervised set and the rest (860h) as the unlabeled set. Furthermore, we achieve WERs of 1.7%/3.4% on the clean/noisy LibriSpeech test sets by using the unlab-60k subset of LibriLight as the unlabeled set for LibriSpeech 960h. We thus improve upon the previous state-of-the-art clean/noisy test WERs achieved on LibriSpeech 100h (4.74%/12.20%) and LibriSpeech (1.9%/4.1%). Comment: 5 pages, 5 figures, 4 tables; v2: minor revisions, reference added
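The iterative self-training loop described above - teacher labels the unlabeled pool, confident pseudo-labels are filtered in, a student is retrained and becomes the next teacher - can be sketched on a toy problem. Everything here is an assumption for illustration: a nearest-centroid classifier stands in for the speech model, a distance margin stands in for confidence filtering, and the augmentation step (SpecAugment in the paper) is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n_per_class):
    """Two well-separated 2-D Gaussian classes (synthetic stand-in data)."""
    X = np.vstack([rng.normal(-2.0, 1.0, (n_per_class, 2)),
                   rng.normal(+2.0, 1.0, (n_per_class, 2))])
    y = np.repeat([0, 1], n_per_class)
    return X, y

def fit(X, y):
    """'Model' = nearest-centroid classifier: one centroid per class."""
    return np.stack([X[y == 0].mean(0), X[y == 1].mean(0)])

def predict(centroids, X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(1), np.abs(d[:, 0] - d[:, 1])  # label, confidence margin

X_lab, y_lab = make_data(5)        # tiny supervised set
X_unlab, _ = make_data(250)        # unlabeled pool (labels held out)
X_test, y_test = make_data(200)

model = fit(X_lab, y_lab)          # iteration 0: teacher on labeled data only
for _ in range(3):                 # self-training iterations
    pseudo, conf = predict(model, X_unlab)
    keep = conf > np.median(conf)  # filter: keep only confident pseudo-labels
    X_aug = np.vstack([X_lab, X_unlab[keep]])
    y_aug = np.concatenate([y_lab, pseudo[keep]])
    model = fit(X_aug, y_aug)      # student becomes the next teacher

pred, _ = predict(model, X_test)
acc = (pred == y_test).mean()
```

The filtering threshold (median margin here) plays the role of the paper's filtering/balancing of generated data between iterations; in the real system the model, confidence score, and augmentation are of course far richer.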

    Tubulin cofactors and Arl2 are cage-like chaperones that regulate the soluble αβ-tubulin pool for microtubule dynamics.

    Get PDF
    Microtubule dynamics and polarity stem from the polymerization of αβ-tubulin heterodimers. Five conserved tubulin cofactors/chaperones and the Arl2 GTPase regulate α- and β-tubulin assembly into heterodimers and maintain the soluble tubulin pool in the cytoplasm, but their physical mechanisms are unknown. Here, we reconstitute a core tubulin chaperone consisting of tubulin cofactors TBCD, TBCE, and Arl2, and reveal a cage-like structure for regulating αβ-tubulin. Biochemical assays and electron microscopy structures of multiple intermediates show the sequential binding of the αβ-tubulin dimer followed by tubulin cofactor TBCC onto this chaperone, forming a ternary complex in which Arl2 GTP hydrolysis is activated to alter αβ-tubulin conformation. A GTP-state-locked Arl2 mutant inhibits ternary complex dissociation in vitro and causes severe defects in microtubule dynamics in vivo. Our studies suggest a revised paradigm in which tubulin cofactors and Arl2 function as a catalytic chaperone that regulates soluble αβ-tubulin assembly and maintenance to support microtubule dynamics.

    Unlocking the Transferability of Tokens in Deep Models for Tabular Data

    Full text link
    Fine-tuning a pre-trained deep neural network has become a successful paradigm in various machine learning tasks. However, such a paradigm becomes particularly challenging with tabular data when there are discrepancies between the feature sets of pre-trained models and the target tasks. In this paper, we propose TabToken, a method that aims to enhance the quality of feature tokens (i.e., embeddings of tabular features). TabToken allows for the utilization of pre-trained models when the upstream and downstream tasks share overlapping features, facilitating model fine-tuning even with limited training examples. Specifically, we introduce a contrastive objective that regularizes the tokens, capturing the semantics within and across features. During the pre-training stage, the tokens are learned jointly with top-layer deep models such as transformers. In the downstream task, tokens of the shared features are kept fixed while TabToken efficiently fine-tunes the remaining parts of the model. TabToken not only enables knowledge transfer from a pre-trained model to tasks with heterogeneous features, but also enhances the discriminative ability of deep tabular models in standard classification and regression tasks.
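A contrastive objective that pulls tokens of semantically related samples together and pushes others apart can be sketched as below. This is a generic InfoNCE-style loss used as a stand-in, not TabToken's exact objective; the embedding dimensions, temperature, and synthetic clusters are all assumptions.

```python
import numpy as np

def contrastive_token_loss(tokens, labels, temperature=0.5):
    """InfoNCE-style loss over per-sample token embeddings.

    tokens: (n, d) pooled feature-token embeddings; labels: (n,) class ids.
    Same-class pairs act as positives, all other samples as negatives.
    """
    z = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)  # unit-norm
    sim = z @ z.T / temperature                 # cosine similarities
    np.fill_diagonal(sim, -np.inf)              # exclude self-similarity
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = labels[:, None] == labels[None, :]    # positive-pair mask
    np.fill_diagonal(pos, False)
    return -np.mean(logp[pos])                  # average over positive pairs

# Sanity check: tokens clustered by class should score a lower loss
# than the same tokens paired with shuffled (uninformative) labels.
rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 16)
centers = np.array([[1.0] + [0.0] * 7, [-1.0] + [0.0] * 7])
tokens = centers[labels] + 0.1 * rng.standard_normal((32, 8))
loss_clustered = contrastive_token_loss(tokens, labels)
loss_shuffled = contrastive_token_loss(tokens, rng.permutation(labels))
```

In a TabToken-like setup this regularizer would be minimized jointly with the top-layer model during pre-training, after which the shared-feature tokens are frozen for downstream fine-tuning.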

    TarTar: A Timed Automata Repair Tool

    Full text link
    We present TarTar, an automatic repair analysis tool that, given a timed diagnostic trace (TDT) obtained during the model checking of a timed automaton model, suggests possible syntactic repairs of the analyzed model. The suggested repairs include modified values for clock bounds in location invariants and transition guards, adding or removing clock resets, etc. The proposed repairs are guaranteed to eliminate executability of the given TDT, while preserving the overall functional behavior of the system. We give insights into the design and architecture of TarTar, and show that it can successfully repair 69% of the seeded errors in system models taken from a diverse suite of case studies. Comment: 15 pages, 7 figures
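The core idea - a small change to a clock bound can make the diagnostic trace infeasible - can be illustrated on a toy model. This is not TarTar's analysis (which works on full timed automata and also checks that system behavior is preserved); the one-clock trace encoding and the example TDT below are hypothetical.

```python
# Toy model: a trace is a list of (delay, guard_bound) steps over one clock x.
# Each step waits `delay` time units, then takes a transition guarded by
# x <= guard_bound, which resets x. The trace is executable iff every
# guard is satisfied by the accumulated delay.

def executable(trace):
    return all(delay <= bound for delay, bound in trace)

def repair_bounds(trace):
    """Suggest per-step tightened bounds that each eliminate the trace.

    For every step whose guard admits the observed delay, lowering the
    bound just below that delay blocks this execution. A real repair
    (as in TarTar) must additionally verify that the change preserves
    the rest of the system's functional behavior; this sketch does not.
    """
    repairs = []
    for i, (delay, bound) in enumerate(trace):
        if delay <= bound:
            repairs.append((i, delay - 1))  # integer-bound repair candidate
    return repairs

tdt = [(3, 5), (2, 2), (4, 10)]  # hypothetical timed diagnostic trace
assert executable(tdt)
for i, new_bound in repair_bounds(tdt):
    patched = list(tdt)
    patched[i] = (tdt[i][0], new_bound)
    assert not executable(patched)  # each candidate eliminates the TDT
```

TarTar additionally handles invariants, clock resets, and admissibility checks via model checking, none of which this sketch attempts.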