
    Kinetic Ballooning Mode Under Steep Gradient: High Order Eigenstates and Mode Structure Parity Transition

    The existence of kinetic ballooning mode (KBM) high-order (non-ground) eigenstates for tokamak plasmas with steep gradients is demonstrated via gyrokinetic electromagnetic eigenvalue solutions, revealing that eigenmode parity transition is an intrinsic property of electromagnetic plasmas. Eigenstates with quantum number l = 0 (the ground state) and l = 1, 2, 3, ... (non-ground states) are found to coexist, and the most unstable one can be a high-order state (l ≠ 0). The conventional KBM is the l = 0 state. It is shown that the l = 1 KBM has the same mode-structure parity as the micro-tearing mode (MTM). In contrast to the MTM, the l = 1 KBM can be driven by the pressure gradient even without collisions and electron temperature gradient. The relevance of the various KBM eigenstates under steep gradients to edge plasma physics is discussed.

    Comment: 6 pages, 6 figures
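    The parity alternation behind the l = 1 claim can be summarized with a standard oscillator-like picture of the eigenmode ladder along the ballooning angle; this is an illustrative sketch of the generic parity rule, not an equation taken from the paper:

    ```latex
    % Eigenfunctions \phi_l(\theta) along the ballooning angle \theta:
    % the l-th eigenstate carries parity (-1)^l, so even-l states are
    % even-parity while odd-l states (e.g. the l = 1 KBM) have the
    % opposite parity, matching the MTM-like mode structure.
    \phi_l(-\theta) = (-1)^l \, \phi_l(\theta), \qquad l = 0, 1, 2, \ldots
    ```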

    Efficient Methods for Non-stationary Online Learning

    Non-stationary online learning has drawn much attention in recent years. In particular, dynamic regret and adaptive regret have been proposed as two principled performance measures for online convex optimization in non-stationary environments. To optimize them, a two-layer online ensemble is usually deployed due to the inherent uncertainty of the non-stationarity, in which a group of base-learners is maintained and a meta-algorithm is employed to track the best one on the fly. However, the two-layer structure raises concerns about computational complexity: such methods typically maintain O(log T) base-learners simultaneously for a T-round online game and thus perform multiple projections onto the feasible domain per round, which becomes the computational bottleneck when the domain is complicated. In this paper, we present efficient methods for optimizing dynamic regret and adaptive regret, which reduce the number of projections per round from O(log T) to 1. Moreover, the obtained algorithms require only one gradient query and one function evaluation per round. Our technique hinges on the reduction mechanism developed in parameter-free online learning and requires non-trivial twists on non-stationary online methods. Empirical studies verify our theoretical findings.

    Comment: preliminary conference version appeared at NeurIPS 2022; this extended version improves the paper presentation, further investigates the interval dynamic regret, and adds two applications (online non-stochastic control and online PCA)
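    The baseline two-layer structure the abstract describes can be sketched as below. This is a generic illustration only: a Hedge-style meta-algorithm over projected-OGD base-learners with a grid of step sizes, exhibiting the O(log T)-projections-per-round bottleneck the paper removes. It is not the paper's reduced-projection method, and the names (`two_layer_ensemble`, `grads_fn`) and loss setup are invented for the example.

    ```python
    import numpy as np

    def project_ball(x, radius=1.0):
        """Euclidean projection onto the L2 ball of the given radius."""
        norm = np.linalg.norm(x)
        return x if norm <= radius else x * (radius / norm)

    def two_layer_ensemble(grads_fn, T, dim, etas, lr_meta=1.0):
        """Hedge meta-algorithm over projected-OGD base-learners (sketch).

        grads_fn(t, x) returns the (sub)gradient of the round-t loss at x.
        Each base-learner performs one projection per round, so the whole
        ensemble does len(etas) projections per round -- the bottleneck
        that the paper's reduction brings down to a single projection.
        """
        n = len(etas)
        xs = [np.zeros(dim) for _ in range(n)]   # base-learner iterates
        weights = np.ones(n) / n                 # meta-algorithm weights
        plays = []
        for t in range(T):
            # combined decision: weighted average of base-learner iterates
            x_play = sum(w * x for w, x in zip(weights, xs))
            plays.append(x_play)
            g = grads_fn(t, x_play)              # one gradient query
            # meta update: exponential weights on linearized base losses
            losses = np.array([g @ x for x in xs])
            weights = weights * np.exp(-lr_meta * (losses - losses.min()))
            weights /= weights.sum()
            # base updates: one projection per base-learner per round
            xs = [project_ball(x - eta * g) for x, eta in zip(xs, etas)]
        return plays
    ```

    With quadratic losses whose minimizer sits inside the unit ball, the combined decision tracks the minimizer while the meta-weights concentrate on the best step size.
    
    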

    Deep Descriptor Transforming for Image Co-Localization

    Reusable model design becomes desirable with the rapid expansion of machine learning applications. In this paper, we focus on the reusability of pre-trained deep convolutional models. Specifically, rather than treating pre-trained models as feature extractors, we reveal more treasures beneath the convolutional layers: the convolutional activations can act as a detector for the common object in the image co-localization problem. We propose a simple yet effective method, named Deep Descriptor Transforming (DDT), which evaluates the correlations of descriptors and then obtains the category-consistent regions, accurately locating the common object in a set of images. Empirical studies validate the effectiveness of the proposed DDT method. On benchmark image co-localization datasets, DDT consistently outperforms existing state-of-the-art methods by a large margin. Moreover, DDT also demonstrates good generalization ability for unseen categories and robustness in dealing with noisy data.

    Comment: Accepted by IJCAI 201
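    The descriptor-correlation idea can be sketched on pre-extracted convolutional activations. This is a minimal illustration under stated assumptions: the CNN forward pass is omitted (any pre-trained layer's activations would do), and the name `ddt_indicator_maps` is invented for the example. Only the core step is shown: project every spatial descriptor onto the first principal component of all descriptors in the image set.

    ```python
    import numpy as np

    def ddt_indicator_maps(feature_maps):
        """Sketch of descriptor transforming on pre-extracted activations.

        feature_maps: list of arrays of shape (H, W, C), one per image,
        holding convolutional activations. Returns one (H, W) indicator
        map per image; the region of large projection magnitude marks
        descriptors aligned with the direction shared across the set,
        i.e. the common object.
        """
        # stack the C-dimensional descriptor at every spatial position
        descriptors = np.concatenate(
            [f.reshape(-1, f.shape[-1]) for f in feature_maps])
        mean = descriptors.mean(axis=0)
        centered = descriptors - mean
        # first principal component = leading eigenvector of the covariance
        cov = centered.T @ centered / len(centered)
        _, eigvecs = np.linalg.eigh(cov)
        p1 = eigvecs[:, -1]                 # eigh sorts eigenvalues ascending
        # project each image's centered descriptors onto p1
        return [(f - mean) @ p1 for f in feature_maps]
    ```

    Note that the sign of an eigenvector is ambiguous, so in a full pipeline the polarity of the indicator maps may need to be fixed before thresholding for positive regions.
    
    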