
    Bisphosphonate inhibits the expression of cyclin A2 at the transcriptional level in normal human oral keratinocytes.

    Nitrogen-containing bisphosphonates (N-BPs) are the most widely used anti-resorptive agents in the treatment of bone-related diseases. N-BPs inhibit bone resorption by specifically targeting osteoclasts, the bone-resorbing cells. However, soft tissue toxicity, such as oral or gastrointestinal (GI) ulcerations, has frequently been reported in N-BP users, suggesting that N-BPs may also directly target cells other than osteoclasts. Previously, we reported that BPs inhibit the proliferation of normal human oral keratinocytes (NHOKs) without inducing apoptosis. However, the molecular mechanisms through which N-BPs inhibit the proliferation of NHOKs are not yet fully understood. In this study, we performed gene expression profiling in N-BP-treated NHOKs and identified cyclin A2 as one of the most commonly downregulated genes. When NHOKs were treated with N-BPs, the level of cyclin A2 was suppressed in a dose- and time-dependent manner. In addition, the cyclin A2 protein level was also significantly lower in oral epithelial cells of N-BP-treated oral mucosal tissue constructs. A cyclin A2 promoter reporter assay revealed that N-BPs inhibited the luciferase activity, indicating that the inhibition of cyclin A2 expression occurs at the transcriptional level. Furthermore, N-BPs did not alter the expression of cyclin A2 in normal human oral fibroblasts (NHOFs), suggesting that the effect of N-BPs on cyclin A2 expression may be cell-type specific. Thus, the findings of our study demonstrate that the inhibition of NHOK proliferation by N-BPs is mediated, at least in part, by the suppression of cyclin A2 expression at the transcriptional level, which may explain the underlying mechanism of the soft tissue toxicity caused by N-BPs.

    Breaking the Spurious Causality of Conditional Generation via Fairness Intervention with Corrective Sampling

    To capture the relationship between samples and labels, conditional generative models often inherit spurious correlations from the training dataset. This can result in label-conditional distributions that are imbalanced with respect to another latent attribute. To mitigate this issue, which we call the spurious causality of conditional generation, we propose a general two-step strategy: (a) Fairness Intervention (FI): emphasize the minority samples that are hard to generate due to the spurious correlation in the training dataset; (b) Corrective Sampling (CS): explicitly filter the generated samples and ensure that they follow the desired latent attribute distribution. We design the fairness intervention to work under various degrees of supervision on the spurious attribute, including unsupervised, weakly supervised, and semi-supervised scenarios. Our experimental results demonstrate that FICS can effectively resolve the spurious causality of conditional generation across various datasets. Comment: TMLR 202
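    For intuition only, here is a minimal Python sketch of the two-step idea described in the abstract: (a) upweight minority (label, attribute) groups during training, and (b) reject generated samples until the latent attribute follows a target distribution. The function names, the inverse-frequency weighting, and the rejection rule are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def fairness_intervention_weights(latent_attr, labels):
            """(a) Fairness Intervention: upweight minority (label, attribute) groups
            so that samples which are hard to generate get emphasized in training."""
            weights = np.ones(len(labels), dtype=float)
            for y in np.unique(labels):
                mask = labels == y
                attrs, counts = np.unique(latent_attr[mask], return_counts=True)
                freq = dict(zip(attrs, counts / counts.sum()))
                # inverse-frequency weighting within each label-conditional group
                weights[mask] = np.array([1.0 / freq[a] for a in latent_attr[mask]])
            return weights / weights.mean()

        def corrective_sampling(samples, attr_predictor, target_dist, rng=None):
            """(b) Corrective Sampling: keep a generated sample with probability
            proportional to target / empirical frequency of its predicted attribute."""
            rng = rng or np.random.default_rng(0)
            attrs = np.array([attr_predictor(x) for x in samples])
            vals, counts = np.unique(attrs, return_counts=True)
            empirical = dict(zip(vals, counts / counts.sum()))
            keep_prob = np.array([min(1.0, target_dist[a] / empirical[a]) for a in attrs])
            keep = rng.random(len(samples)) < keep_prob
            return [x for x, k in zip(samples, keep) if k]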

    S-CLIP: Semi-supervised Vision-Language Learning using Few Specialist Captions

    Vision-language models, such as contrastive language-image pre-training (CLIP), have demonstrated impressive results in natural image domains. However, these models often struggle when applied to specialized domains like remote sensing, and adapting to such domains is challenging due to the limited number of image-text pairs available for training. To address this, we propose S-CLIP, a semi-supervised learning method for training CLIP that utilizes additional unpaired images. S-CLIP employs two pseudo-labeling strategies specifically designed for contrastive learning and the language modality. The caption-level pseudo-label is given by a combination of captions of paired images, obtained by solving an optimal transport problem between unpaired and paired images. The keyword-level pseudo-label is given by a keyword in the caption of the nearest paired image, trained through partial label learning that assumes a candidate set of labels for supervision instead of the exact one. By combining these objectives, S-CLIP significantly enhances the training of CLIP using only a few image-text pairs, as demonstrated in various specialist domains, including remote sensing, fashion, scientific figures, and comics. For instance, S-CLIP improves CLIP by 10% for zero-shot classification and 4% for image-text retrieval on the remote sensing benchmark, matching the performance of supervised CLIP while using three times fewer image-text pairs. Comment: NeurIPS 202
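    As a rough illustration of the caption-level pseudo-labeling described above, the sketch below soft-assigns each unpaired image to the captions of paired images via entropic optimal transport and forms a pseudo-caption embedding as the resulting weighted combination. The Sinkhorn routine, uniform marginals, and hyperparameters are assumptions for illustration, not the authors' code.

        import numpy as np

        def sinkhorn(cost, eps=0.05, n_iters=200):
            """Entropic optimal transport between uniform marginals; returns the plan."""
            m, n = cost.shape
            K = np.exp(-cost / eps)
            a, b = np.ones(m) / m, np.ones(n) / n
            v = np.ones(n) / n
            for _ in range(n_iters):
                u = a / (K @ v)
                v = b / (K.T @ u)
            return u[:, None] * K * v[None, :]

        def caption_pseudo_labels(unpaired_img_emb, paired_img_emb, paired_txt_emb):
            """Pseudo-caption embedding for each unpaired image: an OT-weighted
            combination of the caption embeddings of the paired images."""
            ui = unpaired_img_emb / np.linalg.norm(unpaired_img_emb, axis=1, keepdims=True)
            pi = paired_img_emb / np.linalg.norm(paired_img_emb, axis=1, keepdims=True)
            cost = 1.0 - ui @ pi.T                            # cosine distance
            plan = sinkhorn(cost)
            weights = plan / plan.sum(axis=1, keepdims=True)  # rows: soft assignments
            return weights @ paired_txt_emb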

    MASKER: Masked Keyword Regularization for Reliable Text Classification

    Pre-trained language models have achieved state-of-the-art accuracies on various text classification tasks, e.g., sentiment analysis, natural language inference, and semantic textual similarity. However, the reliability of the fine-tuned text classifiers is an often overlooked performance criterion. For instance, one may desire a model that can detect out-of-distribution (OOD) samples (drawn far from the training distribution) or be robust against domain shifts. We claim that one central obstacle to reliability is the model's over-reliance on a limited number of keywords, instead of looking at the whole context. In particular, we find that (a) OOD samples often contain in-distribution keywords, while (b) cross-domain samples may not always contain keywords; over-relying on keywords can be problematic in both cases. In light of this observation, we propose a simple yet effective fine-tuning method, coined masked keyword regularization (MASKER), that facilitates context-based prediction. MASKER regularizes the model to reconstruct the keywords from the rest of the words and to make low-confidence predictions without enough context. When applied to various pre-trained language models (e.g., BERT, RoBERTa, and ALBERT), we demonstrate that MASKER improves OOD detection and cross-domain generalization without degrading classification accuracy. Code is available at https://github.com/alinlab/MASKER. Comment: AAAI 2021. First two authors contributed equally.
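    A hedged PyTorch sketch of the two regularizers described above: (i) reconstruct masked keyword tokens from the surrounding context, and (ii) push the classifier toward low-confidence (near-uniform) predictions when only keywords are visible. The model interface (encode, token_head, cls_head) and the loss weights are illustrative assumptions, not the released code.

        import torch
        import torch.nn.functional as F

        def masker_losses(model, input_ids, keyword_mask, labels, mask_token_id, num_classes):
            # (i) keyword reconstruction: mask the keywords, predict them back from context
            masked = input_ids.masked_fill(keyword_mask, mask_token_id)
            token_logits = model.token_head(model.encode(masked))       # assumed interface
            recon_targets = input_ids.masked_fill(~keyword_mask, -100)  # ignore non-keywords
            loss_recon = F.cross_entropy(token_logits.view(-1, token_logits.size(-1)),
                                         recon_targets.view(-1), ignore_index=-100)

            # (ii) keywords alone should not yield a confident prediction
            keywords_only = input_ids.masked_fill(~keyword_mask, mask_token_id)
            cls_logits = model.cls_head(model.encode(keywords_only))    # assumed interface
            uniform = torch.full_like(cls_logits, 1.0 / num_classes)
            loss_ent = F.kl_div(F.log_softmax(cls_logits, dim=-1), uniform,
                                reduction="batchmean")

            # standard classification loss on the unmasked input
            loss_cls = F.cross_entropy(model.cls_head(model.encode(input_ids)), labels)
            return loss_cls + 0.1 * loss_recon + 0.1 * loss_ent         # weights are assumptions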

    Measuring surface dislocation nucleation in defect-scarce nanostructures

    Linear defects in crystalline materials, known as dislocations, are central to the understanding of plastic deformation and mechanical strength, as well as to the control of performance in a variety of electronic and photonic materials. Despite a thorough understanding of dislocation structures and their interactions, measurements of the energetics and kinetics of dislocation nucleation have not been possible, as synthesizing and testing pristine, defect-free crystals has been prohibitively challenging. In this talk, experiments that directly measure the surface dislocation nucleation strengths in high-quality Pd nanowhiskers subjected to uniaxial tension will be presented. We find that, whereas nucleation strengths are weakly size- and strain-rate-dependent, a strong temperature dependence is uncovered, corroborating predictions that nucleation is assisted by thermal fluctuations. We measure atomic-scale activation volumes, which explain both the ultrahigh athermal strength and the temperature-dependent scatter evident in our experiments, well captured by a thermal activation model. Modeling of the probabilistic nature of surface dislocation nucleation suggests activation energies consistent with surface self-diffusion as the rate-limiting step needed to promote displacive activity. In this context, approaches allowing for modification of the surface chemistry and structure of metallic nanostructures to either inhibit or enhance surface diffusion will be discussed.
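    For reference, the temperature and strain-rate dependence discussed above is commonly described with a transition-state (thermal activation) expression of the following standard form; the symbols are generic and the equations are not quoted from the talk:

        \nu = N \nu_0 \exp\!\left( -\frac{\Delta G(\sigma, T)}{k_B T} \right),
        \qquad
        \Omega(\sigma, T) = -\left( \frac{\partial \Delta G}{\partial \sigma} \right)_T ,

    and, for a constant applied strain rate \dot{\varepsilon}, the most probable nucleation strength takes the approximate form

        \sigma_c \approx \sigma_{\mathrm{ath}} - \frac{k_B T}{\Omega}
        \ln\!\frac{k_B T \, N \nu_0}{E \, \dot{\varepsilon} \, \Omega},

    where N is the number of equivalent surface nucleation sites, \nu_0 an attempt frequency, \Omega the activation volume, E the Young's modulus, and \sigma_{\mathrm{ath}} the athermal strength. The prefactor k_B T / \Omega of the logarithm sets how strongly the measured strength varies with temperature and strain rate, which is why atomic-scale activation volumes matter for interpreting the data.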

    The Prevalence of Hepatitis C Virus Infection in Korea: Pooled Analysis

    This study evaluated the prevalence of hepatitis C virus (HCV) infection in Korea. Pooled estimates of anti-HCV positivity were calculated using data published in 15 reports on the general population and health check-up examinees. The overall pooled estimate of the prevalence of HCV among middle-aged adults (40 years and older) in the general population was 1.68% (95% confidence interval: 1.51-1.86%) during 1990-2000. Most of the published data indicated that the prevalence of anti-HCV increased with age. Anti-HCV positivity was significantly higher in females than in males. Because the risk of HCV exposure in blood recipients has decreased remarkably, the spread of HCV through means other than transfusion must be prevented.
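    To illustrate the kind of pooling referred to above, here is a minimal fixed-effect (inverse-variance) pooled prevalence calculation with a normal-approximation 95% confidence interval. The study counts in the example are placeholders, and the paper may have used a different weighting scheme.

        import math

        def pooled_prevalence(studies):
            """studies: list of (cases, sample_size) tuples.
            Inverse-variance (fixed-effect) pooling of proportions with a 95% CI."""
            weights, weighted_p = [], []
            for cases, n in studies:
                p = cases / n
                var = p * (1 - p) / n          # binomial variance of the proportion
                w = 1.0 / var
                weights.append(w)
                weighted_p.append(w * p)
            pooled = sum(weighted_p) / sum(weights)
            se = math.sqrt(1.0 / sum(weights))
            return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

        # Placeholder example, not the paper's data:
        print(pooled_prevalence([(168, 10000), (85, 5000), (40, 2500)]))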

    Discovering and Mitigating Visual Biases through Keyword Explanation

    Addressing biases in computer vision models is crucial for real-world AI deployments. However, mitigating visual biases is challenging due to their unexplainable nature, as they are often identified only indirectly through visualization or sample statistics, which necessitates additional human supervision for interpretation. To tackle this issue, we propose the Bias-to-Text (B2T) framework, which interprets visual biases as keywords. Specifically, we extract common keywords from the captions of mispredicted images to identify potential biases in the model. We then validate these keywords by measuring their similarity to the mispredicted images using a vision-language scoring model. The keyword-explanation form of visual biases offers several advantages, such as clear group naming for bias discovery and a natural extension to debiasing using these group names. Our experiments demonstrate that B2T can identify known biases, such as gender bias in CelebA, background bias in Waterbirds, and distribution shifts in ImageNet-R/C. Additionally, B2T uncovers novel biases in larger datasets, such as Dollar Street and ImageNet. For example, we discovered a contextual bias between "bee" and "flower" in ImageNet. We also highlight various applications of B2T keywords, including debiased training, CLIP prompting, and model comparison. Comment: CVPR 2024. First two authors contributed equally.
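    A rough sketch of the keyword-mining step described above: collect frequent words from the captions of mispredicted images and rank them by how much more a vision-language scorer associates them with mispredicted than with correctly classified images. The captioner is assumed to be external, and clip_similarity is a hypothetical scoring callable, not the authors' API.

        import re
        from collections import Counter

        STOPWORDS = {"a", "an", "the", "of", "in", "on", "and", "with", "is", "are"}

        def mine_bias_keywords(mispredicted_captions, top_k=20):
            """Count candidate keywords across captions of mispredicted images."""
            counts = Counter()
            for cap in mispredicted_captions:
                tokens = re.findall(r"[a-z]+", cap.lower())
                counts.update(t for t in set(tokens) if t not in STOPWORDS)
            return [word for word, _ in counts.most_common(top_k)]

        def rank_keywords(keywords, mispredicted_images, correct_images, clip_similarity):
            """Keep keywords scoring higher on mispredicted than on correct images."""
            scored = [(kw, clip_similarity(kw, mispredicted_images)
                           - clip_similarity(kw, correct_images)) for kw in keywords]
            return sorted(scored, key=lambda t: t[1], reverse=True)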