
    Spinal CX3CL1/CX3CR1 may not directly participate in the development of morphine tolerance in rats

    CX3CL1 (fractalkine), the sole member of the CX3C chemokine family, is implicated in inflammatory and neuropathic pain by activating its receptor CX3CR1 on neural cells in the spinal cord. However, whether CX3CL1 or CX3CR1 contributes to the development of morphine tolerance has not been fully elucidated. In this study, we found that chronic morphine exposure did not alter the expression of CX3CL1 or CX3CR1 in the spinal cord, and neither exogenous CX3CL1 nor a CX3CR1 inhibitor affected the development of morphine tolerance. The cellular localization of spinal CX3CL1 and CX3CR1 changed from neurons and microglia, respectively, to all neural cell types during the development of morphine tolerance. Microarray profiling revealed that 15 members of the chemokine family, excluding CX3CL1 and CX3CR1, were up-regulated in morphine-treated rats. Our study provides evidence that spinal CX3CL1 and CX3CR1 may not be directly involved in the development of morphine tolerance.

    Uncertainty-guided semi-supervised few-shot class-incremental learning with knowledge distillation

    Abstract Class-Incremental Learning (CIL) aims to incrementally learn novel classes without forgetting old ones. This capability becomes more challenging when novel tasks contain only one or a few labeled training samples, leading to a more practical learning scenario, i.e., Few-Shot Class-Incremental Learning (FSCIL). The dilemma in FSCIL lies in severe overfitting and exacerbated catastrophic forgetting caused by the limited training data from novel classes. In this paper, motivated by the easy accessibility of unlabeled data, we conduct a pioneering study of the Semi-Supervised Few-Shot Class-Incremental Learning (Semi-FSCIL) problem, which requires the model to incrementally learn new classes from extremely limited labeled samples and a large number of unlabeled samples. To address this problem, a simple but effective framework is first constructed based on the knowledge distillation technique to alleviate catastrophic forgetting. To mitigate overfitting on novel categories with the help of unlabeled data, uncertainty-guided semi-supervised learning is incorporated into this framework to select unlabeled samples for incremental learning sessions according to the model's uncertainty. This process provides extra reliable supervision for the distillation process and contributes to better estimating the class means. Our extensive experiments on the CIFAR100, miniImageNet, and CUB200 datasets demonstrate the promising performance of the proposed method and establish baselines in this new research direction.
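    The abstract does not spell out the uncertainty criterion or the exact loss, so the following is only a minimal sketch of the general recipe it describes, assuming predictive entropy as the uncertainty measure and a standard temperature-scaled knowledge-distillation loss; the function names, `threshold`, and `lam` are illustrative placeholders, not details from the paper.

```python
import torch
import torch.nn.functional as F

def predictive_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Entropy of the softmax distribution; lower means more confident."""
    probs = F.softmax(logits, dim=1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)

def select_confident_unlabeled(model, unlabeled_x, threshold=0.5):
    """Keep unlabeled samples whose predictive entropy is below
    `threshold`, together with their pseudo-labels (assumed criterion)."""
    model.eval()
    with torch.no_grad():
        logits = model(unlabeled_x)
        keep = predictive_entropy(logits) < threshold
    return unlabeled_x[keep], logits[keep].argmax(dim=1)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions (standard knowledge distillation)."""
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

def semi_fscil_step(student, teacher, opt, labeled_x, labeled_y,
                    unlabeled_x, lam=1.0, threshold=0.5):
    """One hypothetical training step of an incremental session:
    cross-entropy on the few labeled novel samples plus confident
    pseudo-labeled unlabeled samples, regularized by distillation
    from the frozen previous-session model (`teacher`)."""
    sel_x, pseudo_y = select_confident_unlabeled(student, unlabeled_x, threshold)
    x = torch.cat([labeled_x, sel_x])
    y = torch.cat([labeled_y, pseudo_y])

    student.train()
    logits = student(x)
    ce = F.cross_entropy(logits, y)
    with torch.no_grad():
        old_logits = teacher(x)
    # Distill only over the classes the old model knows.
    kd = distillation_loss(logits[:, :old_logits.size(1)], old_logits)

    loss = ce + lam * kd
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

    In this reading, the selected unlabeled samples serve double duty: their pseudo-labels supplement the scarce novel-class supervision, and the same inputs give the distillation term more data over which to match the old model's outputs.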