
    Ferroptosis in Leukemia: Lessons and Challenges

    Ferroptosis is a newly defined programmed cell death (PCD) process, hallmarked by the accumulation of iron-dependent lipid peroxidation, and is more immunogenic than apoptosis. Ferroptosis shows great potential as a therapeutic target against acute kidney injury (AKI), cancers, cardiovascular diseases, neurodegenerative diseases, and hepatic diseases. Accumulating evidence highlights that ferroptosis plays a non-negligible role in regulating the development and progression of multiple pathologies of leukemia, including acute myeloid leukemia (AML), chronic myeloid leukemia (CML), acute lymphoblastic leukemia (ALL), and chronic lymphocytic leukemia (CLL). Herein, we focus on the latest advances in the relationship between ferroptosis and leukemia. This chapter further highlights iron, lipid, and amino acid metabolism, as well as ferroptosis-based molecular mechanisms. Collectively, we summarize the contribution of ferroptosis to the pathogenesis of leukemia and discuss ferroptosis as a novel therapeutic target for different types of leukemia.

    PCR: Proxy-based Contrastive Replay for Online Class-Incremental Continual Learning

    Online class-incremental continual learning is a specific task of continual learning: it aims to continuously learn new classes from a data stream whose samples are seen only once, and it suffers from catastrophic forgetting, i.e., forgetting historical knowledge of old classes. Existing replay-based methods effectively alleviate this issue by saving part of the old data and replaying it in a proxy-based or contrastive-based manner. Although both replay manners are effective, the former tends to be biased toward new classes due to class imbalance, while the latter is unstable and hard to converge because of the limited number of samples. In this paper, we conduct a comprehensive analysis of these two replay manners and find that they are complementary. Inspired by this finding, we propose a novel replay-based method called proxy-based contrastive replay (PCR). The key operation is to replace the contrastive samples of anchors with the corresponding proxies in the contrastive-based manner. PCR alleviates catastrophic forgetting by effectively addressing the imbalance issue while maintaining faster convergence of the model. We conduct extensive experiments on three real-world benchmark datasets, and empirical results consistently demonstrate the superiority of PCR over various state-of-the-art methods. Comment: To appear in CVPR 2023. 10 pages, 8 figures and 3 tables.
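    The key operation described in the abstract, contrasting each anchor against class proxies instead of other batch samples, can be sketched as below; the exact loss shape, the temperature value, and the NumPy implementation are illustrative assumptions, not the paper's released code:

```python
import numpy as np

def pcr_loss(features, labels, proxies, temperature=0.09):
    """Proxy-based contrastive loss sketch: each anchor is contrasted
    against learnable class proxies instead of other batch samples.

    features: (N, D) L2-normalized embeddings of new + replayed samples
    labels:   (N,)   integer class ids
    proxies:  (C, D) L2-normalized class proxies (one per seen class)
    """
    logits = features @ proxies.T / temperature      # (N, C) cosine logits
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    # cross-entropy with the anchor's own class proxy as the positive
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))
```

    Because every class contributes exactly one proxy, old classes stay represented even when the replay buffer holds only a few of their samples, which is how the proxy side counters the class-imbalance issue.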

    MetaNODE: Prototype Optimization as a Neural ODE for Few-Shot Learning

    Few-Shot Learning (FSL) is a challenging task: how can novel classes be recognized from only a few examples? Pre-training based methods effectively tackle the problem by pre-training a feature extractor and then predicting novel classes via a cosine nearest-neighbor classifier with mean-based prototypes. Nevertheless, due to data scarcity, the mean-based prototypes are usually biased. In this paper, we attempt to diminish the prototype bias by regarding it as a prototype optimization problem. To this end, we propose a novel meta-learning based prototype optimization framework that rectifies prototypes, i.e., introduces a meta-optimizer to optimize them. Although existing meta-optimizers can also be adapted to our framework, they all overlook a crucial gradient bias issue, i.e., the mean-based gradient estimation is also biased on sparse data. To address this issue, we regard the gradient and its flow as meta-knowledge and propose a novel Neural Ordinary Differential Equation (ODE)-based meta-optimizer, called MetaNODE, to polish prototypes. In this meta-optimizer, we first view the mean-based prototypes as initial prototypes and then model the process of prototype optimization as continuous-time dynamics specified by a Neural ODE. A gradient flow inference network is carefully designed to learn to estimate the continuous gradient flow for the prototype dynamics. Finally, the optimal prototypes are obtained by solving the Neural ODE. Extensive experiments on miniImagenet, tieredImagenet, and CUB-200-2011 show the effectiveness of our method. Comment: Accepted by AAAI 202
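    The continuous-time view above can be illustrated with a fixed-step Euler solver over the prototype dynamics; the hand-crafted `grad_flow` callable below is a hypothetical stand-in for the paper's learned gradient flow inference network:

```python
import numpy as np

def refine_prototypes(protos, grad_flow, t1=1.0, steps=10):
    """Integrate the prototype dynamics dp/dt = g(p) from t=0 to t=t1
    with Euler's method, starting from the mean-based prototypes.

    protos:    (C, D) initial mean-based class prototypes
    grad_flow: callable (C, D) -> (C, D); in MetaNODE this is a learned
               gradient flow inference network, here any stand-in field
    """
    dt = t1 / steps
    p = protos.copy()
    for _ in range(steps):
        p = p + dt * grad_flow(p)    # one Euler step of the Neural ODE
    return p
```

    For instance, a flow of the form `lambda p: target - p` contracts the distance to `target` by a factor of (1 - dt) per step, so the refined prototypes end up strictly closer to the target than the initial means.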

    MetaDiff: Meta-Learning with Conditional Diffusion for Few-Shot Learning

    Equipping a deep model with the ability of few-shot learning, i.e., learning quickly from only a few examples, is a core challenge for artificial intelligence. Gradient-based meta-learning approaches effectively address this challenge by learning how to learn novel tasks. Their key idea is to learn a deep model in a bi-level optimization manner, where the outer-loop process learns a shared gradient descent algorithm (i.e., its hyperparameters), while the inner-loop process leverages it to optimize a task-specific model using only a few labeled examples. Although these existing methods have shown superior performance, the outer-loop process requires calculating second-order derivatives along the inner optimization path, which imposes a considerable memory burden and a risk of vanishing gradients. Drawing inspiration from recent progress on diffusion models, we find that the inner-loop gradient descent process can actually be viewed as a reverse (i.e., denoising) process of diffusion, where the target of denoising is the model weights rather than the original data. Based on this insight, we propose to model the gradient descent optimizer as a diffusion model and present a novel task-conditional diffusion-based meta-learning method, called MetaDiff, that effectively models the optimization process of model weights from Gaussian noise to target weights in a denoising manner. Thanks to the training efficiency of diffusion models, MetaDiff does not need to differentiate through the inner-loop path, so the memory burden and the risk of vanishing gradients are effectively alleviated. Experimental results show that MetaDiff outperforms the state-of-the-art gradient-based meta-learning family on few-shot learning tasks. Comment: Accepted by AAAI 202
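    The weights-as-denoising-target view can be sketched as a reverse process that starts from Gaussian noise; the `step_fn` callable below is a hypothetical stand-in for the learned task-conditional diffusion model, not the paper's architecture:

```python
import numpy as np

def metadiff_adapt(step_fn, dim, T=50, seed=0):
    """Sketch of MetaDiff's view of inner-loop adaptation: initialize the
    task weights as Gaussian noise w_T and apply T reverse (denoising)
    steps to reach the adapted weights w_0. No gradients flow through
    this loop, which is what removes the second-order derivative cost
    of differentiating along the inner optimization path.
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(dim)          # w_T ~ N(0, I)
    for t in range(T, 0, -1):
        w = step_fn(w, t)                 # one conditional denoising step
    return w
```

    With a toy denoiser such as `lambda w, t: w + 0.1 * (target - w)`, the loop contracts the initial noise toward the target weights, mirroring how gradient descent contracts randomly initialized weights toward a task optimum.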

    Electrothermal combined optimization on notch in air-cooled high-speed permanent-magnet generator

    A 30 kVA, 96,000 rpm, air-cooled high-speed permanent-magnet generator (HSPMG) is investigated in this paper. Considering the effects on both the magnetic circuit and the heat transfer paths, the stator slot notch of this HSPMG is optimized. First, the transient electromagnetic fields of the HSPMG are numerically calculated using the time-stepping finite element method, and the electromagnetic losses in its components are obtained. Then, after determining the other mechanical losses in the machine, a three-dimensional fluid-thermal coupling calculation model is established, and the working temperature distribution in the HSPMG is studied. On this basis, an electromagnetic-fluid-thermal coupling analysis method for the HSPMG is proposed and used to investigate the influence of the notch height on the machine's magnetic circuit and cooling air flow path. Both the electromagnetic performance and the temperature distribution of the HSPMG with different stator notch heights are studied, and a series of analytical equations is deduced to describe how machine performance varies with the stator notch. Using the proposed unbalance relative weighting method, the notch height is optimized to enhance the performance of the HSPMG. The conclusions provide a reference for HSPMG electromagnetic calculation, cooling system design, and design optimization.
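    The trade-off behind the notch-height optimization can be sketched as a normalized weighted score over candidate heights; the 0.4/0.6 weights and the sample loss and temperature figures below are illustrative assumptions, not values or the exact weighting formula from the paper:

```python
import numpy as np

def weighted_score(em_loss, temp_rise, w_em=0.4, w_th=0.6):
    """Combine electromagnetic loss and temperature rise into one score.
    Each objective is min-max normalized so the weights compare like
    quantities; the 0.4/0.6 split is illustrative only."""
    em = (em_loss - em_loss.min()) / (em_loss.max() - em_loss.min() + 1e-12)
    th = (temp_rise - temp_rise.min()) / (temp_rise.max() - temp_rise.min() + 1e-12)
    return w_em * em + w_th * th

# hypothetical sweep over four candidate notch heights (mm)
heights = np.array([2.0, 3.0, 4.0, 5.0])
em_loss = np.array([100.0, 90.0, 85.0, 95.0])    # W, illustrative
temp_rise = np.array([70.0, 72.0, 78.0, 88.0])   # K, illustrative
best = heights[np.argmin(weighted_score(em_loss, temp_rise))]
```

    The sweep captures the physical tension in the abstract: a taller notch may help one of the magnetic circuit or the cooling air path while hurting the other, so the chosen height minimizes the combined score rather than either objective alone.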