
    Constructing Sample-to-Class Graph for Few-Shot Class-Incremental Learning

    Few-shot class-incremental learning (FSCIL) aims to build a machine learning model that can continually learn new concepts from a few data samples without forgetting knowledge of old classes. The challenge of FSCIL lies in the limited data of new classes, which not only leads to significant overfitting but also exacerbates the notorious catastrophic forgetting problem. As shown in earlier studies, building sample relationships is beneficial for learning from few-shot samples. In this paper, we extend the idea to the incremental scenario and propose a Sample-to-Class (S2C) graph learning method for FSCIL. Specifically, we propose a Sample-level Graph Network (SGN) that focuses on analyzing sample relationships within a single session. This network helps aggregate similar samples, ultimately leading to the extraction of more refined class-level features. Then, we present a Class-level Graph Network (CGN) that establishes connections across class-level features of both new and old classes. This network plays a crucial role in linking the knowledge between different sessions and helps improve overall learning in the FSCIL scenario. Moreover, we design a multi-stage strategy for training the S2C model, which mitigates the training challenges posed by limited data in the incremental process. The strategy builds the S2C graph from the base to the few-shot stages and improves its capacity via an extra pseudo-incremental stage. Experiments on three popular benchmark datasets show that our method clearly outperforms the baselines and sets new state-of-the-art results in FSCIL.
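
    The abstract gives no implementation details; the following is a minimal PyTorch sketch of the two-level idea only, where a sample-level graph pools few-shot samples into a class prototype and a class-level graph propagates information between old and new prototypes. The module names, dimensions, and the simple similarity-based adjacency are all assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SampleGraph(nn.Module):
    """Sample-level graph: relate the few-shot samples of one session
    and pool them into a refined class-level feature (illustrative)."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                      # x: (n_samples, dim)
        sim = F.softmax(x @ x.t() / x.size(1) ** 0.5, dim=-1)  # sample affinity
        x = F.relu(self.proj(sim @ x))         # aggregate similar samples
        return x.mean(dim=0)                   # one class-level feature

class ClassGraph(nn.Module):
    """Class-level graph: connect new class features with old ones."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, c):                      # c: (n_classes, dim)
        adj = F.softmax(c @ c.t() / c.size(1) ** 0.5, dim=-1)
        return c + F.relu(self.proj(adj @ c))  # residual keeps old knowledge

# toy usage: a 5-way 5-shot session appended to 10 old class prototypes
dim, old = 64, torch.randn(10, 64)
sgn, cgn = SampleGraph(dim), ClassGraph(dim)
new = torch.stack([sgn(torch.randn(5, dim)) for _ in range(5)])
prototypes = cgn(torch.cat([old, new]))        # (15, 64)
```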

    <Fifth Lecture> Trends in the Development of China's Capital Market


    Late onset post-LASIK keratectasia with reversal and stabilization after use of latanoprost and corneal collagen cross-linking

    We report a case of late-onset keratectasia after laser in situ keratomileusis (LASIK) and its quick reversal and stabilization after use of latanoprost and riboflavin/ultraviolet-A corneal collagen cross-linking (CXL). A 39-year-old man with normal intraocular pressure developed rapid deterioration of vision in his left eye 6 years after LASIK retreatment for high myopic astigmatism. Keratectasia was diagnosed by corneal topography and ultrasound pachymetry. After two months of treatment with latanoprost and a minor intraocular pressure reduction, uncorrected distance visual acuity improved from 20/100 to 20/20 and corneal topography showed reversal of the keratectasia. CXL was performed after the reversal to achieve long-term stabilization. At follow-up exams 1, 3, 6, 13, and 39 months after the CXL, stable vision, refraction, and topography were recorded. This case shows that keratectasia may occur rapidly several years after LASIK and that quick reversal and stabilization may be achieved by use of latanoprost followed by CXL.

    Dynamic V2X Autonomous Perception from Road-to-Vehicle Vision

    Vehicle-to-everything (V2X) perception is an innovative technology that enhances vehicle perception accuracy, thereby elevating the security and reliability of autonomous systems. However, existing V2X perception methods focus on static scenes from mainly vehicle-based vision, which is constrained by sensor capabilities and communication loads. To adapt V2X perception models to dynamic scenes, we propose to build V2X perception from road-to-vehicle vision and present the Adaptive Road-to-Vehicle Perception (AR2VP) method. In AR2VP, we leverage roadside units to offer stable, wide-range sensing capabilities and to serve as communication hubs. AR2VP is devised to tackle both intra-scene and inter-scene changes. For the former, we construct a dynamic perception representing module, which efficiently integrates vehicle perceptions, enabling vehicles to capture a more comprehensive range of dynamic factors within the scene. Moreover, we introduce a road-to-vehicle perception compensating module, aimed at preserving the maximum roadside unit perception information in the presence of intra-scene changes. For inter-scene changes, we implement an experience replay mechanism leveraging the roadside unit's storage capacity to retain a subset of historical scene data, maintaining model robustness in response to inter-scene shifts. We conduct perception experiments on 3D object detection and segmentation, and the results show that AR2VP excels in both performance-bandwidth trade-offs and adaptability within dynamic environments.
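
    The abstract does not describe how the roadside unit selects which historical scenes to retain; a common choice for a bounded buffer is reservoir sampling, sketched below. The class name and scene representation are assumptions for illustration, not AR2VP's actual storage policy.

```python
import random

class RoadsideReplayBuffer:
    """Illustrative reservoir-sampling buffer: a roadside unit keeps a
    bounded, uniformly random subset of all scenes seen so far."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0

    def add(self, scene):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(scene)
        else:
            j = random.randrange(self.seen)    # keep with prob. capacity/seen
            if j < self.capacity:
                self.buffer[j] = scene

    def sample(self, k):
        return random.sample(self.buffer, min(k, len(self.buffer)))

# usage: mix a few stored old scenes into each new-scene training batch
buf = RoadsideReplayBuffer(capacity=100)
for t in range(1000):
    buf.add({"scene_id": t})
replay_batch = buf.sample(8)
```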

    Multi-Label Continual Learning using Augmented Graph Convolutional Network

    Multi-Label Continual Learning (MLCL) builds a class-incremental framework on a sequential multi-label image recognition data stream. The critical challenges of MLCL are constructing label relationships from the past-missing and future-missing partial labels of the training data and the catastrophic forgetting of old classes, both of which result in poor generalization. To solve these problems, this study proposes an Augmented Graph Convolutional Network (AGCN++) that can construct cross-task label relationships in MLCL and mitigate catastrophic forgetting. First, we build an Augmented Correlation Matrix (ACM) across all seen classes, where the intra-task relationships derive from hard label statistics, while the inter-task relationships leverage hard and soft labels from the data and a constructed expert network. Then, we propose a novel partial label encoder (PLE) for MLCL, which extracts a dynamic class representation for each partially labeled image as graph nodes and helps generate soft labels to create a more convincing ACM and suppress forgetting. Last, to suppress the forgetting of label dependencies across old tasks, we propose a relationship-preserving constraint on the constructed label relationships. The inter-class topology can be augmented automatically, which also yields effective class representations. The proposed method is evaluated on two multi-label image benchmarks. The experimental results show that the proposed method is effective for MLCL image recognition and can build convincing correlations across tasks even when the labels of previous tasks are missing.
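
    To make the ACM idea concrete, here is a minimal sketch of building a conditional co-occurrence matrix from hard multi-hot labels, with pseudo-labels thresholded from soft expert outputs standing in for the missing old-task labels. The function name, the thresholding step, and the placeholder expert outputs are assumptions; the paper's construction also involves the PLE and expert network described above.

```python
import numpy as np

def correlation_matrix(labels, eps=1e-8):
    """Hard-label statistics: row i holds P(class j | class i),
    estimated from multi-hot labels of shape (n_images, n_classes)."""
    co = labels.T @ labels                     # pairwise co-occurrence counts
    counts = np.diag(co).astype(float)         # per-class occurrence counts
    return co / (counts[:, None] + eps)

# intra-task block from hard labels of the current task (5 new classes)
task_labels = np.random.randint(0, 2, size=(200, 5))
intra = correlation_matrix(task_labels)

# inter-task block: old-task hard labels are missing, so pseudo-labels
# thresholded from soft expert-network outputs stand in for them
soft_old = np.random.rand(200, 3)              # placeholder expert outputs
augmented = correlation_matrix(np.hstack([soft_old > 0.5, task_labels]))
print(augmented.shape)                         # (8, 8) across all seen classes
```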

    EVNet: An Explainable Deep Network for Dimension Reduction

    Dimension reduction (DR) is commonly utilized to capture the intrinsic structure of high-dimensional data and transform it into a low-dimensional space while retaining meaningful properties of the original data. It is used in various applications, such as image recognition, single-cell sequencing analysis, and biomarker discovery. However, contemporary non-parametric and parametric DR techniques suffer from several significant shortcomings, such as the inability to preserve global and local features and poor generalization performance. Regarding explainability, it is crucial to comprehend the embedding process, especially the contribution of each part of the data to the embedding, and to understand how each feature affects the embedding results, so as to identify critical components and help diagnose the embedding process. To address these problems, we have developed a deep neural network method called EVNet, which provides not only excellent performance in structure preservation but also explainability of the DR therein. EVNet starts with data augmentation and a manifold-based loss function to improve embedding performance. The explanation is based on saliency maps and aims to examine the trained EVNet parameters and the contributions of components during the embedding process. The proposed techniques are integrated with a visual interface that helps the user adjust EVNet to achieve better DR performance and explainability. The interactive visual interface makes it easier to illustrate the data features, compare different DR techniques, and investigate DR. An in-depth experimental comparison shows that EVNet consistently outperforms state-of-the-art methods in both performance measures and explainability.

    Comment: 18 pages, 15 figures, accepted by TVC
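
    The two ingredients named above, a parametric embedding network and a gradient-based saliency explanation, can be sketched in a few lines of PyTorch. The toy attract/repel loss below is a generic neighbor-embedding stand-in, not EVNet's manifold-based loss, and the data, architecture, and margin are assumptions for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(256, 50)                        # toy high-dimensional data
encoder = nn.Sequential(nn.Linear(50, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

d = torch.cdist(x, x).fill_diagonal_(float("inf"))
nbr = d.argmin(dim=1)                           # input-space nearest neighbor

for step in range(200):
    z = encoder(x)
    attract = ((z - z[nbr]) ** 2).sum(dim=1).mean()   # neighbors stay close
    far = z[torch.randperm(len(z))]                   # random negative pairs
    repel = torch.relu(4.0 - ((z - far) ** 2).sum(dim=1)).mean()  # push apart
    loss = attract + repel
    opt.zero_grad(); loss.backward(); opt.step()

# saliency-style attribution: gradient of one embedding coordinate w.r.t. inputs
x.requires_grad_(True)
encoder(x)[:, 0].sum().backward()
feature_importance = x.grad.abs().mean(dim=0)   # per-feature contribution
```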

    Muon Flux Measurement at China Jinping Underground Laboratory

    China Jinping Underground Laboratory (CJPL) is ideal for studying solar, geo-, and supernova neutrinos. A precise measurement of the cosmic-ray background plays an essential role in the R&D for these MeV-scale neutrino experiments. Using a 1-ton prototype detector of the Jinping Neutrino Experiment (JNE), we detected 264 high-energy muon events in a 645.2-day dataset from the first phase of CJPL (CJPL-I), reconstructed their directions, and measured the cosmic-ray muon flux to be $(3.53\pm0.22_{\text{stat.}}\pm0.07_{\text{sys.}})\times10^{-10}\ \text{cm}^{-2}\,\text{s}^{-1}$. The observed angular distributions indicate the leakage of cosmic-ray muon background and agree with a simulation accounting for Jinping mountain's terrain. A survey of muon fluxes at laboratories situated under mountains and below mine shafts indicates that the former are generally a factor of $(4\pm2)$ larger than the latter at the same vertical overburden. This study provides a convenient back-of-the-envelope estimate of the muon flux for an underground experiment.
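
    The quoted numbers permit a quick consistency check: 264 events over 645.2 days at the reported flux imply an effective detector area of roughly 1.3 m², a plausible scale for a 1-ton detector. The sketch below ignores detection efficiency and angular acceptance, which the full analysis accounts for.

```python
# Back-of-the-envelope check of the quoted CJPL-I muon measurement,
# ignoring detection efficiency and angular acceptance.
n_events = 264                       # detected high-energy muons
livetime_s = 645.2 * 86400           # 645.2 days in seconds
flux = 3.53e-10                      # reported flux, cm^-2 s^-1

implied_area_cm2 = n_events / (flux * livetime_s)
print(f"implied effective area ~ {implied_area_cm2:.0f} cm^2 "
      f"(~{implied_area_cm2 / 1e4:.1f} m^2)")   # ~13000 cm^2, i.e. ~1.3 m^2
```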

    Performance of the 1-ton Prototype Neutrino Detector at CJPL-I

    China Jinping Underground Laboratory (CJPL) provides an ideal site for solar, geo-, and supernova neutrino studies. With a prototype neutrino detector containing 1 ton of liquid scintillator (LS) running since 2017, we tested the experimental hardware, performed physics calibrations, and measured its radioactive backgrounds as an early stage of the Jinping Neutrino Experiment (JNE). We investigated the radon background and implemented nitrogen sealing to control it. This paper presents the details of these studies and will serve as a key reference for the construction and optimization of the future large detector at JNE.