
    KMT2A promotes melanoma cell growth by targeting hTERT signaling pathway.

    Melanoma is an aggressive cutaneous malignancy; elucidating its exact mechanisms and identifying novel therapeutic targets are urgently needed. In this study, we identified KMT2A as a potential target that promotes the growth of human melanoma cells. KMT2A knockdown significantly inhibited cell viability and migration and induced apoptosis, whereas KMT2A overexpression effectively promoted proliferation in various melanoma cell lines. Further study showed that KMT2A regulates melanoma cell growth through an hTERT-dependent signaling pathway. Knockdown of KMT2A markedly inhibited the promoter activity and expression of hTERT, and hTERT overexpression rescued the viability inhibition caused by KMT2A knockdown. Moreover, KMT2A knockdown suppressed tumorsphere formation and the expression of cancer stem cell markers, effects that were also reversed by hTERT overexpression. In addition, results from a xenograft mouse model confirmed that KMT2A promotes melanoma growth via hTERT signaling. Finally, analyses of clinical samples demonstrated that KMT2A and hTERT expression were positively correlated in melanoma tumor tissues and that high KMT2A expression predicted poor prognosis in melanoma patients. Collectively, our results indicate that KMT2A promotes melanoma growth by activating hTERT signaling, suggesting that the KMT2A/hTERT signaling pathway may be a potential therapeutic target for melanoma.

    Orthogonal Subspace Learning for Language Model Continual Learning

    Benefiting from massive corpora and advanced hardware, large language models (LLMs) exhibit remarkable capabilities in language understanding and generation. However, their performance degrades in scenarios where multiple tasks are encountered sequentially, a phenomenon known as catastrophic forgetting. In this paper, we propose orthogonal low-rank adaptation (O-LoRA), a simple and efficient approach for continual learning in language models that effectively mitigates catastrophic forgetting while learning new tasks. Specifically, O-LoRA learns tasks in different (low-rank) vector subspaces that are kept orthogonal to each other in order to minimize interference. Our method induces only marginal additional parameter costs and requires no user data storage for replay. Experimental results on continual learning benchmarks show that our method outperforms state-of-the-art methods. Furthermore, compared to previous approaches, our method excels in preserving the generalization ability of LLMs on unseen tasks. (EMNLP 2023 Findings)
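The core idea of the abstract — keeping each task's low-rank subspace orthogonal to those of earlier tasks — can be sketched with a toy penalty term. This is a minimal illustration with made-up shapes and plain gradient descent, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4  # hidden dimension and LoRA rank (illustrative sizes)

# Frozen low-rank LoRA "A" matrices from two previously learned tasks.
prev_As = [rng.standard_normal((r, d)) for _ in range(2)]
# Trainable LoRA "A" matrix for the current task.
cur_A = rng.standard_normal((r, d))

def orthogonality_penalty(A, prev_As):
    """Sum of squared inner products between the current task's subspace
    and each previous task's subspace; zero means fully orthogonal."""
    return sum(float(np.sum((P @ A.T) ** 2)) for P in prev_As)

def penalty_grad(A, prev_As):
    # Gradient of tr(A P^T P A^T) w.r.t. A is 2 A P^T P, summed over tasks.
    g = np.zeros_like(A)
    for P in prev_As:
        g += 2.0 * A @ P.T @ P
    return g

before = orthogonality_penalty(cur_A, prev_As)
for _ in range(200):  # descend on the penalty alone, for illustration
    cur_A -= 1e-3 * penalty_grad(cur_A, prev_As)
after = orthogonality_penalty(cur_A, prev_As)
```

In practice this penalty would be one term added to the task loss, so the current adapter fits its task while staying (approximately) orthogonal to the frozen adapters of earlier tasks.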

    Transformation vs Tradition: Artificial General Intelligence (AGI) for Arts and Humanities

    Recent advances in artificial general intelligence (AGI), particularly large language models and creative image generation systems, have demonstrated impressive capabilities on diverse tasks spanning the arts and humanities. However, the swift evolution of AGI has also raised critical questions about its responsible deployment in these culturally significant domains, traditionally seen as profoundly human. This paper provides a comprehensive analysis of the applications and implications of AGI for text, graphics, audio, and video pertaining to the arts and humanities. We survey cutting-edge systems and their usage in areas ranging from poetry to history, marketing to film, and communication to classical art. We outline substantial concerns pertaining to factuality, toxicity, bias, and public safety in AGI systems, and propose mitigation strategies. The paper argues for multi-stakeholder collaboration to ensure AGI promotes creativity, knowledge, and cultural values without undermining truth or human dignity. Our timely contribution summarizes a rapidly developing field, highlighting promising directions while advocating for responsible progress centered on human flourishing. The analysis lays the groundwork for further research on aligning AGI's technological capacities with enduring social goods.

    TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models

    Aligned large language models (LLMs) demonstrate exceptional capabilities in task-solving, following instructions, and ensuring safety. However, the continual learning aspect of these aligned LLMs has been largely overlooked. Existing continual learning benchmarks lack sufficient challenge for leading aligned LLMs, owing to both their simplicity and the models' potential exposure during instruction tuning. In this paper, we introduce TRACE, a novel benchmark designed to evaluate continual learning in LLMs. TRACE consists of 8 distinct datasets spanning challenging tasks including domain-specific tasks, multilingual capabilities, code generation, and mathematical reasoning. All datasets are standardized into a unified format, allowing for effortless automatic evaluation of LLMs. Our experiments show that after training on TRACE, aligned LLMs exhibit significant declines in both general ability and instruction-following capabilities. For example, the accuracy of llama2-chat 13B on the gsm8k dataset declined precipitously from 28.8% to 2% after training on our datasets. This highlights the challenge of finding a suitable tradeoff between achieving performance on specific tasks and preserving the original prowess of LLMs. Empirical findings suggest that tasks inherently equipped with reasoning paths contribute significantly to preserving certain capabilities of LLMs against potential declines. Motivated by this, we introduce the Reasoning-augmented Continual Learning (RCL) approach. RCL integrates task-specific cues with meta-rationales, effectively reducing catastrophic forgetting in LLMs while expediting convergence on novel tasks.
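The decline TRACE measures (e.g. the gsm8k drop from 28.8% to 2%) is an instance of forgetting. A minimal sketch of the standard continual-learning summary metrics, using a hypothetical accuracy matrix rather than real benchmark numbers:

```python
import numpy as np

# Hypothetical acc[i, j]: accuracy on task j after sequentially training
# through task i (3 tasks; entries below the diagonal show forgetting).
acc = np.array([
    [0.70, 0.10, 0.05],
    [0.55, 0.65, 0.08],
    [0.40, 0.50, 0.60],
])
T = acc.shape[0]

# Final average accuracy across all tasks after the last training stage.
avg_acc = acc[-1].mean()

# Forgetting: best accuracy ever reached on each earlier task minus its
# final accuracy, averaged over all tasks seen before the last one.
forgetting = np.mean([acc[:, j].max() - acc[-1, j] for j in range(T - 1)])
```

A benchmark run like TRACE would fill such a matrix per model; lower forgetting at comparable final accuracy indicates a better continual-learning method.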

    Climate-Driven Changes in High-Intensity Wildfire on Orbital Timescales in Eurasia since 320 ka

    Wildfire is an integral part of the Earth's climate system and plays an important role in shaping terrestrial ecosystems and biodiversity, atmospheric chemistry, regional climate, and the carbon cycle throughout the Earth's history. However, the lack of long, high-resolution wildfire records limits our understanding of the natural variability and long-term trends of wildfire activity, and of the reasons behind changes in wildfire on orbital timescales. Here, a 320 ka long, high-resolution wildfire record from the subarctic North Pacific is reconstructed using black carbon (BC), including its two subtypes, char and soot. A 7-day back-trajectory simulation analysis reveals that the highest frequency of trajectories comes from Siberia. Our data show that the continental-scale incidence of wildfire over the last 320 ka was higher during glacial periods than during interglacial periods. The increase in wildfire frequency during glacial periods is ascribed to lower precipitation. Contrasting patterns of wildfire incidence between marine isotope stages 2 and 6 may be ascribed to different fuel availability, which is related to the contrasting configurations of the Northern Hemisphere ice sheet between the two glacial periods. A significant 23 ka periodicity in our wildfire record suggests that the precession of the Earth's orbit paces wildfire development. The tight coupling of intensified wildfire and enhanced nutrient utilization efficiency suggests a nontrivial role of fire in the climate system.