432 research outputs found

    Drying Shrinkage of Hardened Cement Paste and Its Relationship to the Microstructure

    The aim of the present study is to relate the microstructure of hardened cement paste to its drying shrinkage. Three microstructural features were studied: calcium silicate hydrate (C-S-H), calcium hydroxide (CH), and the pore structure. A new method for determining the C-S-H content of hardened cement paste is presented. Drying shrinkage behavior was investigated by drying specimens in successive steps from 100% RH to 7% RH and then re-saturating them. Both the total shrinkage after drying to 7% RH and the irreversible shrinkage decreased with increasing amounts of C-S-H and CH. Prolonged curing produced a paste with a finer pore structure and greater weight loss when dried at lower humidity. For a given paste, the same amount of weight loss induced less linear shrinkage in the 54-23% RH range than in the 100-54% RH range. The formation of C-S-H increases the resistance of cement paste to shrinkage rather than enhancing drying shrinkage, since it provides more gel pores whose emptying would impose large stresses on the solid skeleton.
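
    As a hedged aside (the definitions below are standard conventions, not equations taken from the paper), the comparison across RH ranges can be read in terms of linear shrinkage strain per unit moisture loss:

        % Linear drying shrinkage strain for a specimen of initial length L_0
        % that equilibrates to length L at a given RH step:
        \varepsilon_{sh} = \frac{L_0 - L}{L_0}
        % Shrinkage sensitivity over an RH step, i.e. the shrinkage increment per
        % unit relative weight loss (\Delta w / w_0 = moisture lost / initial mass):
        S = \frac{\Delta\varepsilon_{sh}}{\Delta w / w_0}
        % The reported observation is that, for a given paste, S is smaller over
        % the 54-23% RH step than over the 100-54% RH step.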

    Face-based age estimation using improved Swin Transformer with attention-based convolution

    Recently, Transformer models, which are based on the multi-head self-attention mechanism, have become a new direction in the computer vision field. Compared with convolutional neural networks, the Transformer uses self-attention to capture global contextual information and extract stronger features by learning the associations between different features, and it has achieved good results in many vision tasks. In face-based age estimation, certain facial patches that contain rich age-specific information are critical to the task. The present study proposed an attention-based convolution (ABC) age estimation framework, called improved Swin Transformer with ABC, in which two separate modules were implemented, namely ABC and the Swin Transformer. ABC extracted facial patches containing rich age-specific information using a shallow convolutional network and a multi-head attention mechanism. Subsequently, the features obtained by ABC were spliced with the flattened image and input to the Swin Transformer to predict the age of the image. The ABC framework spliced the important regions that contained rich age-specific information onto the original image, which could fully exploit the long-range dependency modeling of the Swin Transformer, that is, extracting stronger features by learning the dependency relationships between different features. ABC also introduced a diversity loss to guide the training of the self-attention mechanism, reducing overlap between patches so that diverse and important patches were discovered. Through extensive experiments, this study showed that the proposed framework outperformed several state-of-the-art methods on age estimation benchmark datasets.
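
    A minimal sketch of the splicing idea (assuming PyTorch; the module names, layer sizes, and the plain Transformer encoder standing in for the Swin backbone are illustrative assumptions, not the paper's exact design): a shallow convolution produces patch tokens, multi-head self-attention emphasizes age-informative ones, and the attended tokens are concatenated with the image tokens before the backbone.

        import torch
        import torch.nn as nn

        class ABC(nn.Module):
            """Attention-based convolution block (illustrative sketch).

            A shallow conv extracts patch features; multi-head self-attention
            weighs the patches so age-informative ones dominate the output tokens.
            """
            def __init__(self, in_ch=3, dim=96, num_heads=4, patch=8):
                super().__init__()
                self.conv = nn.Sequential(                      # shallow conv stem
                    nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch),
                    nn.GELU(),
                )
                self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

            def forward(self, x):                               # x: (B, 3, H, W)
                f = self.conv(x)                                # (B, dim, H/p, W/p)
                tokens = f.flatten(2).transpose(1, 2)           # (B, N, dim)
                attended, weights = self.attn(tokens, tokens, tokens)
                return attended, weights                        # age-salient patch tokens

        class ABCSwinAgeEstimator(nn.Module):
            """Splices ABC tokens with image tokens, then regresses age.

            The real model feeds the spliced sequence to a Swin Transformer;
            a plain Transformer encoder stands in here for brevity.
            """
            def __init__(self, dim=96, num_heads=4, patch=8):
                super().__init__()
                self.abc = ABC(dim=dim, num_heads=num_heads, patch=patch)
                self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
                layer = nn.TransformerEncoderLayer(dim, num_heads, batch_first=True)
                self.backbone = nn.TransformerEncoder(layer, num_layers=2)
                self.head = nn.Linear(dim, 1)                   # scalar age prediction

            def forward(self, x):
                img_tokens = self.embed(x).flatten(2).transpose(1, 2)
                abc_tokens, _ = self.abc(x)
                spliced = torch.cat([img_tokens, abc_tokens], dim=1)
                feats = self.backbone(spliced)
                return self.head(feats.mean(dim=1)).squeeze(-1)

        if __name__ == "__main__":
            model = ABCSwinAgeEstimator()
            ages = model(torch.randn(2, 3, 224, 224))           # toy forward pass
            print(ages.shape)                                   # torch.Size([2])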

    JEC-QA: A Legal-Domain Question Answering Dataset

    We present JEC-QA, the largest question answering dataset in the legal domain, collected from the National Judicial Examination of China. The examination is a comprehensive evaluation of professional skills for legal practitioners; college students are required to pass it to be certified as a lawyer or a judge. The dataset is challenging for existing question answering methods because both retrieving relevant materials and answering the questions require logical reasoning. Due to the multiple reasoning abilities needed to answer legal questions, state-of-the-art models achieve only about 28% accuracy on JEC-QA, while skilled and unskilled humans reach 81% and 64% accuracy respectively, indicating a huge gap between humans and machines on this task. We will release JEC-QA and our baselines to help improve the reasoning ability of machine comprehension models. The dataset is available at http://jecqa.thunlp.org/. Comment: 9 pages, 2 figures, 10 tables; accepted by AAAI 2020.
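
    As a hedged illustration only (the file names, field names, and JSON layout below are assumptions, not the dataset's documented format), accuracy on a multiple-choice legal QA benchmark of this kind could be scored as follows.

        import json

        def score(pred_path, gold_path):
            """Exact-match accuracy for multiple-choice QA (illustrative sketch).

            Assumes one JSON object per line with fields "id" and "answer",
            where "answer" is a list of option letters, e.g. ["A", "C"].
            """
            def load(path):
                with open(path, encoding="utf-8") as f:
                    return {ex["id"]: sorted(ex["answer"]) for ex in map(json.loads, f)}

            gold = load(gold_path)
            pred = load(pred_path)
            correct = sum(pred.get(qid) == ans for qid, ans in gold.items())
            return correct / len(gold)

        # Example: print(score("predictions.jsonl", "gold.jsonl"))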

    The role of prostate-specific antigen in the osteoblastic bone metastasis of prostate cancer: a literature review

    Prostate cancer is the only human malignancy that generates predominantly osteoblastic bone metastases, and osteoblastic lesions account for more than 90% of the osseous metastases of prostate cancer. Prostate-specific antigen (PSA) plays an important role in the osteoblastic bone metastasis of prostate cancer: it can promote osteomimicry of prostate cancer cells, suppress osteoclast differentiation, and facilitate osteoblast proliferation and activation at metastatic sites. It can also activate osteogenic factors, including insulin-like growth factor, transforming growth factor β2, and urokinase-type plasminogen activator, while suppressing osteolytic factors such as parathyroid hormone-related protein. In summary, PSA contributes significantly to the osteoblastic predominance of prostate cancer bone metastasis and to bone remodeling by regulating multiple cells and factors involved in osseous metastasis.

    Piperidinium bis(2-oxidobenzoato-κ²O¹,O²)borate

    The asymmetric unit of the title compound, C₅H₁₂N⁺·C₁₄H₈BO₆⁻ or [C₅H₁₂N][BO₄(C₇H₄O)₂], contains two piperidinium cations and two bis(salicylato)borate anions. The coordination geometry around each B atom is distorted tetrahedral. In the two molecules, the aromatic rings are oriented at dihedral angles of 76.27 (3) and 83.86 (3)°. The rings containing the B atoms adopt twist-boat conformations, while the two cations adopt chair conformations. In the crystal, intra- and intermolecular N—H⋯O hydrogen bonds link the component species.
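
    As an illustrative aside (not part of the original report, and with made-up coordinates rather than the deposited crystal data), a dihedral angle between two aromatic ring planes is typically obtained by fitting a least-squares plane to each ring and taking the angle between the plane normals.

        import numpy as np

        def plane_normal(coords):
            """Unit normal of the least-squares plane through a set of 3-D points."""
            centered = coords - coords.mean(axis=0)
            # The singular vector with the smallest singular value is the normal.
            _, _, vt = np.linalg.svd(centered)
            return vt[-1]

        def dihedral_between_rings(ring_a, ring_b):
            """Angle (degrees, 0-90) between the mean planes of two rings."""
            n1, n2 = plane_normal(np.asarray(ring_a)), plane_normal(np.asarray(ring_b))
            cosang = abs(np.dot(n1, n2))            # abs() folds the angle into 0-90°
            return np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))

        # Toy example: two hexagonal rings about 80 degrees apart (not real data).
        ring1 = [(np.cos(t), np.sin(t), 0.0) for t in np.linspace(0, 2 * np.pi, 6, endpoint=False)]
        rot = np.deg2rad(80.0)
        R = np.array([[1, 0, 0], [0, np.cos(rot), -np.sin(rot)], [0, np.sin(rot), np.cos(rot)]])
        ring2 = [R @ np.array(p) for p in ring1]
        print(round(dihedral_between_rings(ring1, ring2), 2))   # ~80.0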

    Increased levels of soluble CD226 in sera accompanied by decreased membrane CD226 expression on peripheral blood mononuclear cells from cancer patients

    Background: As a cellular membrane triggering receptor, CD226 is involved in NK cell- or CTL-mediated lysis of tumor cells of different origins, including freshly isolated tumor cells and tumor cell lines. Here, we evaluated soluble CD226 (sCD226) levels in sera and membrane CD226 (mCD226) expression on peripheral blood mononuclear cells (PBMC) from cancer patients as well as normal subjects, and examined the possible function and origin of the altered sCD226, which may provide useful information for understanding the mechanisms of tumor escape and for immunodiagnosis and immunotherapy.
    Results: Soluble CD226 levels in serum samples from cancer patients were significantly higher than those in healthy individuals (P < 0.001), while cancer patients exhibited lower PBMC mCD226 expression than healthy individuals (P < 0.001). CD226-Fc fusion protein could significantly inhibit the cytotoxicity of NK cells against K562 cells in a dose-dependent manner. Furthermore, three kinds of protease inhibitors could notably increase mCD226 expression on PMA-stimulated PBMCs and Jurkat cells, with a corresponding decrease in the sCD226 level in the cell culture supernatant.
    Conclusion: These findings suggest that sCD226 might be shed from cell membranes by certain proteases; furthermore, sCD226 may serve as a predictor for monitoring cancer and, more importantly, as a possible immunotherapy target useful in clinical application.

    Learning to Fuse Multiple Brain Functional Networks for Automated Autism Identification

    Functional connectivity networks (FCNs) have become a popular tool for identifying potential biomarkers of brain dysfunction, such as autism spectrum disorder (ASD). Because of their importance, researchers have proposed many methods to estimate FCNs from resting-state functional MRI (rs-fMRI) data. However, existing FCN estimation methods usually capture only a single type of relationship between brain regions of interest (ROIs), e.g., linear correlation, nonlinear correlation, or higher-order correlation, and thus fail to model the complex interactions among ROIs in the brain. Additionally, such traditional methods estimate FCNs in an unsupervised way, with an estimation process independent of the downstream task, which makes it difficult to guarantee optimal performance for ASD identification. To address these issues, in this paper we propose a multi-FCN fusion framework for rs-fMRI-based ASD classification. Specifically, for each subject, we first estimate multiple FCNs using different methods to encode rich interactions among ROIs from different perspectives. Then, we use the label information (ASD vs. healthy control (HC)) to learn a set of fusion weights that measure the importance/discriminative power of the estimated FCNs. Finally, we apply the adaptively weighted fused FCN to the ABIDE dataset to identify subjects with ASD from HCs. The proposed FCN fusion framework is straightforward to implement and can significantly improve diagnostic accuracy compared with traditional and state-of-the-art methods.
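
    A minimal sketch of the fusion idea (assuming numpy and scikit-learn; the three estimators, the grid-searched weights, and the logistic-regression classifier are illustrative stand-ins, not the paper's exact formulation): estimate several FCNs per subject, use the labels to pick nonnegative fusion weights, and classify the fused connectivity vectors.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        def estimate_fcns(ts):
            """Three stand-in FCN estimators for one subject's ROI time series (T x R):
            Pearson correlation, Spearman-like rank correlation, and a higher-order
            correlation (correlation of the Pearson connectivity profiles)."""
            pearson = np.corrcoef(ts.T)
            ranks = np.argsort(np.argsort(ts, axis=0), axis=0).astype(float)
            spearman = np.corrcoef(ranks.T)
            higher = np.corrcoef(pearson)
            return [pearson, spearman, higher]

        def upper_tri(m):
            """Vectorize the upper triangle of a symmetric connectivity matrix."""
            return m[np.triu_indices(m.shape[0], k=1)]

        def learn_fusion_weights(all_ts, labels, step=0.25):
            """Label-guided grid search for nonnegative fusion weights summing to one."""
            # Pre-compute the vectorized FCNs once per subject: shape (S, 3, E).
            vecs = np.stack([[upper_tri(f) for f in estimate_fcns(ts)] for ts in all_ts])
            grid = np.arange(0.0, 1.0 + 1e-9, step)
            best_w, best_acc = None, -1.0
            for w1 in grid:
                for w2 in grid:
                    if w1 + w2 > 1.0 + 1e-9:
                        continue
                    w = np.array([w1, w2, 1.0 - w1 - w2])
                    X = np.tensordot(vecs, w, axes=([1], [0]))   # fused features (S, E)
                    acc = cross_val_score(LogisticRegression(max_iter=1000),
                                          X, labels, cv=3).mean()
                    if acc > best_acc:
                        best_w, best_acc = w, acc
            return best_w, best_acc

        # Toy usage with synthetic data (20 subjects, 100 time points, 10 ROIs):
        rng = np.random.default_rng(0)
        all_ts = [rng.standard_normal((100, 10)) for _ in range(20)]
        labels = np.array([0] * 10 + [1] * 10)                   # HC = 0, ASD = 1 (toy)
        weights, acc = learn_fusion_weights(all_ts, labels)
        print(weights, round(acc, 3))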

    Variator: Accelerating Pre-trained Models with Plug-and-Play Compression Modules

    Pre-trained language models (PLMs) have achieved remarkable results on NLP tasks, but at the expense of huge parameter sizes and the consequent computational costs. In this paper, we propose Variator, a parameter-efficient acceleration method that enhances computational efficiency through plug-and-play compression plugins. Compression plugins are designed to reduce the sequence length by compressing multiple hidden vectors into one, and are trained with the original PLM frozen. Unlike traditional model acceleration methods, which compress PLMs to smaller sizes, Variator offers two distinct advantages: (1) In real-world applications, the plug-and-play nature of our compression plugins enables dynamic selection of plugins with different acceleration ratios based on the current workload. (2) Each compression plugin comprises a few compact neural network layers with minimal parameters, significantly saving storage and memory overhead, particularly in scenarios with a growing number of tasks. We validate the effectiveness of Variator on seven datasets. Experimental results show that Variator can save 53% of computational costs using only 0.9% additional parameters, with a performance drop of less than 2%. Moreover, when the model scales to billions of parameters, Variator matches the strong performance of uncompressed PLMs. Comment: Accepted by Findings of EMNLP.
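
    A minimal sketch (assuming PyTorch; the module, its layer sizes, and the placeholder PLM block are illustrative assumptions, not the paper's exact plugin architecture) of a plug-and-play compression plugin that merges groups of adjacent hidden vectors into one, so the frozen PLM layers that follow see a shorter sequence.

        import torch
        import torch.nn as nn

        class CompressionPlugin(nn.Module):
            """Merges every `ratio` adjacent hidden vectors into one (illustrative).

            The plugin is small (a couple of linear layers) and is the only part
            that gets trained; the surrounding PLM stays frozen.
            """
            def __init__(self, hidden=768, ratio=4):
                super().__init__()
                self.ratio = ratio
                self.compress = nn.Sequential(
                    nn.Linear(hidden * ratio, hidden),
                    nn.GELU(),
                    nn.Linear(hidden, hidden),
                )

            def forward(self, h):                        # h: (B, L, hidden)
                b, l, d = h.shape
                pad = (-l) % self.ratio                  # pad so L is divisible by ratio
                if pad:
                    h = torch.cat([h, h.new_zeros(b, pad, d)], dim=1)
                groups = h.reshape(b, -1, self.ratio * d)  # (B, L/ratio, ratio*hidden)
                return self.compress(groups)             # shorter sequence of hidden states

        # Toy usage: freeze a placeholder PLM block, train only the plugin.
        plm_block = nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True)
        for p in plm_block.parameters():
            p.requires_grad_(False)                      # PLM weights stay frozen

        plugin = CompressionPlugin(hidden=768, ratio=4)
        hidden_states = torch.randn(2, 128, 768)
        compressed = plugin(hidden_states)               # (2, 32, 768): 4x shorter
        out = plm_block(compressed)                      # downstream layers see fewer tokens
        print(compressed.shape, out.shape)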