33 research outputs found

    Unifying Token and Span Level Supervisions for Few-Shot Sequence Labeling

    Full text link
    Few-shot sequence labeling aims to identify novel classes based on only a few labeled samples. Existing methods address the data scarcity problem mainly by designing token-level or span-level labeling models based on metric learning. However, these methods are trained at only a single granularity (i.e., either the token level or the span level) and inherit the weaknesses of that granularity. In this paper, we first unify token- and span-level supervision and propose a Consistent Dual Adaptive Prototypical (CDAP) network for few-shot sequence labeling. CDAP contains a token-level and a span-level network, jointly trained at different granularities. To align the outputs of the two networks, we further propose a consistent loss that enables them to learn from each other. During the inference phase, we propose a consistent greedy inference algorithm that first adjusts the predicted probabilities and then greedily selects non-overlapping spans with maximum probability. Extensive experiments show that our model achieves new state-of-the-art results on three benchmark datasets. (Comment: Accepted by ACM Transactions on Information Systems.)
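    A minimal sketch of the kind of greedy non-overlapping span selection the inference step describes; the span-candidate format, threshold, and example scores are illustrative assumptions, and the CDAP probability-adjustment step is omitted:

    ```python
    from typing import List, Tuple

    # Each candidate span: (start, end, label, probability); values below are made up.
    Span = Tuple[int, int, str, float]

    def greedy_select(candidates: List[Span], threshold: float = 0.5) -> List[Span]:
        """Greedily pick the highest-probability spans that do not overlap."""
        selected: List[Span] = []
        # Visit candidates from highest to lowest probability.
        for start, end, label, prob in sorted(candidates, key=lambda s: s[3], reverse=True):
            if prob < threshold:
                break
            # Keep a span only if it does not overlap any already selected span.
            if all(end < s or start > e for s, e, _, _ in selected):
                selected.append((start, end, label, prob))
        return sorted(selected)  # return spans in sentence order

    if __name__ == "__main__":
        spans = [(0, 1, "PER", 0.92), (1, 3, "ORG", 0.75), (4, 5, "LOC", 0.60)]
        print(greedy_select(spans))  # -> [(0, 1, 'PER', 0.92), (4, 5, 'LOC', 0.6)]
    ```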

    B\"{a}cklund transformations for high-order constrained flows of the AKNS hierarchy: canonicity and spectrality property

    Full text link
    An infinite number of new one- and two-point Bäcklund transformations (BTs) with explicit expressions are constructed for the high-order constrained flows of the AKNS hierarchy. It is shown that these BTs are canonical transformations involving a Bäcklund parameter η, and that a spectrality property holds with respect to η and the 'conjugated' variable μ, for which the point (η, μ) belongs to the spectral curve. The formulas for m-times repeated Darboux transformations for the high-order constrained flows of the AKNS hierarchy are also presented. (Comment: 21 pages, LaTeX, to be published in J. Phys.)
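    For reference, the two properties named in the title can be stated schematically as below; the notation and sign conventions follow the standard generating-function definition of canonicity and the usual (Kuznetsov–Sklyanin) notion of spectrality, and are assumptions of this sketch rather than formulas quoted from the paper:

    ```latex
    % Canonicity: the BT (q, p) -> (\tilde q, \tilde p) admits a generating function F(q, \tilde q; \eta),
    % so the transformation preserves the canonical Poisson brackets:
    \[
      p = \frac{\partial F}{\partial q}, \qquad
      \tilde p = -\frac{\partial F}{\partial \tilde q}.
    \]
    % Spectrality: the variable conjugate to the Backlund parameter \eta,
    % \mu = -\partial F / \partial \eta, together with \eta satisfies the
    % spectral-curve equation of the Lax matrix L:
    \[
      \mu = -\frac{\partial F}{\partial \eta}, \qquad
      \det\bigl(L(\eta) - \mu I\bigr) = 0.
    \]
    ```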

    Graphene-Based Nanocomposites for Energy Storage

    Get PDF
    Since the first report of producing graphene sheets by micromechanical cleavage in 2004, graphene and graphene-based nanocomposites have attracted wide attention, both for fundamental aspects and for applications in advanced energy storage and conversion systems. In comparison to other materials, graphene-based nanostructured materials have a unique 2D structure, high electronic mobility, exceptional electronic and thermal conductivities, excellent optical transmittance, good mechanical strength, and ultrahigh surface area. They are therefore considered attractive materials for hydrogen (H2) storage and for high-performance electrochemical energy storage devices such as supercapacitors, rechargeable lithium (Li)-ion batteries, Li–sulfur batteries, Li–air batteries, sodium (Na)-ion batteries, Na–air batteries, zinc (Zn)–air batteries, and vanadium redox flow batteries (VRFBs), as they can improve the efficiency, capacity, gravimetric energy/power densities, and cycle life of these devices. In this article, recent progress on the synthesis and fabrication of graphene nanocomposite materials for the aforementioned energy storage systems is reviewed. Importantly, the prospects and remaining challenges in scalable manufacturing and in further energy storage applications are also discussed.

    Compressed sensing MR image reconstruction via a deep frequency-division network

    No full text
    Compressed sensing MRI (CS-MRI) is considered a powerful technique for decreasing MRI scan time while preserving image quality. However, state-of-the-art reconstruction algorithms still face two challenges: tedious parameter tuning and loss of image detail caused by over-smoothing. In this paper, we propose a deep frequency-division network (DFDN) to address these two reconstruction issues. The proposed DFDN approach applies a deep iterative reconstruction network (DIRN) that replaces the regularization terms and their parameters with a stacked convolutional neural network (CNN). Multiple DIRN blocks are then cascaded to form a deeper network. A data consistency (DC) layer is incorporated after each DIRN block to correct the k-space data of the intermediate results. An image content loss is computed after each DC layer, and a frequency-division loss is obtained by weighting the high-frequency and low-frequency losses after each DIRN block. The combination of the image content loss and the frequency-division loss constrains the network training. The proposed method was validated on two brain datasets. Visual results and quantitative evaluations show that the proposed DFDN algorithm outperforms the comparison methods in sparse MRI reconstruction.
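    A minimal numpy sketch of two ingredients described above, a k-space data consistency step and a frequency-division loss; the function names, the circular low-frequency mask, and the loss weights are illustrative assumptions rather than the authors' trained-CNN implementation:

    ```python
    import numpy as np

    def data_consistency(recon: np.ndarray, kspace: np.ndarray, mask: np.ndarray) -> np.ndarray:
        """Overwrite the sampled k-space locations of a reconstruction with the measured data."""
        k_recon = np.fft.fft2(recon)
        k_mixed = np.where(mask, kspace, k_recon)   # keep measured samples, fill in the rest
        return np.fft.ifft2(k_mixed).real

    def frequency_division_loss(recon, target, radius=0.1, w_low=1.0, w_high=1.0):
        """Weight the low- and high-frequency reconstruction errors separately."""
        k_err = np.fft.fftshift(np.fft.fft2(recon - target))
        h, w = k_err.shape
        yy, xx = np.ogrid[:h, :w]
        # Circular mask around the k-space center marks the low-frequency band.
        low = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) <= (radius * min(h, w)) ** 2
        loss_low = np.mean(np.abs(k_err[low]) ** 2)
        loss_high = np.mean(np.abs(k_err[~low]) ** 2)
        return w_low * loss_low + w_high * loss_high
    ```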