
    Evaluation of oilseed rape seed yield losses caused by Leptosphaeria biglobosa in central China

    This document is the Accepted Manuscript version of the following article: Xiang Cai, Yongju Huang, Daohong Jiang, Bruce D. L. Fitt, Guoqing Li, and Long Yang, "Evaluation of oilseed rape seed yield losses caused by Leptosphaeria biglobosa in central China", European Journal of Plant Pathology, first published 9 June 2017. Under embargo until 9 June 2018. The final publication is available at Springer via http://dx.doi.org/10.1007/s10658-017-1266-x.

    Phoma stem canker of oilseed rape (Brassica napus), caused by Leptosphaeria maculans/L. biglobosa, is a globally important disease. Severe phoma stem canker symptoms have been observed on winter oilseed rape in China, but the seed yield loss caused by this disease remains unknown. In May 2012 and May 2013, 17 and 13 crops, respectively, were surveyed in seven counties of Hubei Province, central China. Stems with phoma stem canker symptoms were sampled for pathogen isolation and identification. Only L. biglobosa was identified by culture morphology and species-specific PCR; no L. maculans was found. To evaluate yield losses, yield components (number of branches per plant, number of pods per plant, 1000-seed weight, number of seeds per pod) were assessed on healthy and diseased plants sampled from crops in four counties and on plants from inoculated pot experiments (plants of three cultivars were inoculated at the green bud stage by injecting L. biglobosa conidia into the stem between the first and second leaf scars). The field surveys showed that diseased plants had 14–61% fewer branches and 32–83% fewer pods than healthy plants. The estimated seed yield loss varied from 10% to 21% in 2012 and from 13% to 37% in 2013. In the pot experiments, there were no differences in numbers of branches or pods, but there were differences in number of seeds per pod between inoculated and control plants. For the three cultivars tested, the inoculated plants had yield losses of 29–56% compared with the controls. This study indicates that L. biglobosa can cause substantial seed yield loss in China. Peer reviewed. Final Accepted Version.
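The yield components named in the abstract combine multiplicatively into per-plant seed yield. The following sketch uses that standard relationship with made-up numbers (the pod counts, seeds per pod, and seed weights are illustrative, not the paper's data):

```python
# Hypothetical illustration of estimating per-plant seed yield loss from the
# yield components in the abstract; all input numbers are invented.
def yield_per_plant(pods, seeds_per_pod, thousand_seed_weight_g):
    """Seed yield per plant in grams: pods x seeds/pod x weight per seed."""
    return pods * seeds_per_pod * (thousand_seed_weight_g / 1000.0)

healthy = yield_per_plant(pods=250, seeds_per_pod=20, thousand_seed_weight_g=3.8)
diseased = yield_per_plant(pods=150, seeds_per_pod=18, thousand_seed_weight_g=3.5)
loss_pct = 100.0 * (healthy - diseased) / healthy
```

With these invented inputs a diseased plant yields roughly half as much seed as a healthy one, in the same spirit as the 29–56% losses reported for the inoculated cultivars.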

    InferEM: Inferring the Speaker's Intention for Empathetic Dialogue Generation

    Current approaches to empathetic response generation typically encode the entire dialogue history directly and feed the encoding into a decoder to generate friendly feedback. These methods focus on modelling contextual information but neglect to capture the direct intention of the speaker. We argue that the last utterance in the dialogue empirically conveys the intention of the speaker. Consequently, we propose a novel model named InferEM for empathetic response generation. We separately encode the last utterance and fuse it with the entire dialogue through a multi-head attention based intention fusion module to capture the speaker's intention. Besides, we utilize previous utterances to predict the last utterance, which simulates human psychology: guessing in advance what the interlocutor may say. To balance the optimization rates of utterance prediction and response generation, a multi-task learning strategy is designed for InferEM. Experimental results demonstrate the plausibility and validity of InferEM in improving empathetic expression. Comment: 5 pages, 4 figures.
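The fusion step described above can be sketched with standard multi-head attention, where the encoded last utterance acts as the query over the encoded dialogue history. This is a minimal numpy illustration of the mechanism, not the authors' implementation; the dimensions, random projections, and head count are assumptions:

```python
import numpy as np

# Minimal sketch of multi-head attention fusion: the last-utterance encoding
# queries the dialogue-history encodings, so the fused vector is weighted
# toward context relevant to the speaker's intention.
def multi_head_attention(query, context, num_heads=4):
    d = query.shape[-1]
    assert d % num_heads == 0
    dh = d // num_heads
    rng = np.random.default_rng(0)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    # Project, then split the feature dimension across heads.
    Q = (query @ Wq).reshape(-1, num_heads, dh).transpose(1, 0, 2)
    K = (context @ Wk).reshape(-1, num_heads, dh).transpose(1, 0, 2)
    V = (context @ Wv).reshape(-1, num_heads, dh).transpose(1, 0, 2)
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(dh)
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)          # per-head softmax
    return (weights @ V).transpose(1, 0, 2).reshape(-1, d)

last_utt = np.random.default_rng(1).standard_normal((1, 64))  # last-utterance encoding
history = np.random.default_rng(2).standard_normal((10, 64))  # dialogue-history encodings
fused = multi_head_attention(last_utt, history)
```

In InferEM this fused representation would then feed the decoder; here it is simply a (1, 64) vector mixing the history according to attention weights.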

    GA2MIF: Graph and Attention Based Two-Stage Multi-Source Information Fusion for Conversational Emotion Detection

    Multimodal Emotion Recognition in Conversation (ERC) plays an influential role in human-computer interaction and conversational robotics, since it can motivate machines to provide empathetic services. Multimodal data modeling is an up-and-coming research area, inspired by the human capability to integrate multiple senses. Several graph-based approaches claim to capture interactive information between modalities, but the heterogeneity of multimodal data prevents these methods from reaching optimal solutions. In this work, we introduce a multimodal fusion approach named Graph and Attention based Two-stage Multi-source Information Fusion (GA2MIF) for emotion detection in conversation. Our proposed method avoids taking a heterogeneous graph as input to the model and eliminates complex redundant connections during graph construction. GA2MIF handles contextual modeling and cross-modal modeling by leveraging Multi-head Directed Graph ATtention networks (MDGATs) and Multi-head Pairwise Cross-modal ATtention networks (MPCATs), respectively. Extensive experiments on two public datasets (IEMOCAP and MELD) demonstrate that GA2MIF validly captures intra-modal long-range contextual information and inter-modal complementary information, and outperforms prevalent state-of-the-art (SOTA) models by a remarkable margin. Comment: 14 pages.
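The directed graph-attention idea behind MDGATs can be illustrated with a single attention layer whose edges only point from past utterances to later ones. This is our own simplified sketch (the window size and masking scheme are assumptions, not details from the paper):

```python
import numpy as np

# One directed graph-attention step: each utterance node attends only to
# nodes with an incoming edge (here, itself plus a fixed window of past
# utterances), so information flows in conversation order.
def directed_graph_attention(X, window=2):
    n, d = X.shape
    mask = np.full((n, n), -np.inf)
    for i in range(n):
        lo = max(0, i - window)
        mask[i, lo:i + 1] = 0.0        # allowed edges: past window + self
    scores = X @ X.T / np.sqrt(d) + mask
    w = np.exp(scores - scores.max(1, keepdims=True))
    w /= w.sum(1, keepdims=True)       # softmax over each node's neighbours
    return w @ X                       # context-aware node features

X = np.random.default_rng(0).standard_normal((6, 16))  # 6 utterance encodings
H = directed_graph_attention(X)
```

Note the first utterance has no past neighbours, so its output equals its input; later nodes blend in up to two predecessors.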

    GraphMFT: A Graph Network based Multimodal Fusion Technique for Emotion Recognition in Conversation

    Multimodal machine learning is an emerging area of research that has received a great deal of scholarly attention in recent years. Up to now, there are few studies on multimodal Emotion Recognition in Conversation (ERC). Since Graph Neural Networks (GNNs) possess a powerful capacity for relational modeling, they have an inherent advantage in multimodal learning. GNNs leverage a graph constructed from multimodal data to perform intra- and inter-modal information interaction, which effectively facilitates the integration and complementation of multimodal data. In this work, we propose a novel Graph network based Multimodal Fusion Technique (GraphMFT) for emotion recognition in conversation. Multimodal data can be modeled as a graph, where each data object is regarded as a node and both intra- and inter-modal dependencies between data objects are regarded as edges. GraphMFT utilizes multiple improved graph attention networks to capture intra-modal contextual information and inter-modal complementary information. In addition, GraphMFT attempts to address the challenges of existing graph-based multimodal conversational emotion recognition models such as MMGCN. Empirical results on two public multimodal datasets show that our model outperforms state-of-the-art (SOTA) approaches with accuracies of 67.90% and 61.30%. Comment: Accepted by Neurocomputing.
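The node-and-edge construction described above can be made concrete with a small helper. This is a generic sketch of the idea, not GraphMFT's actual graph builder; the modality names and edge choices (temporal neighbours within a modality, same-utterance links across modalities) are our assumptions:

```python
# Model a conversation as a multimodal graph: each (utterance, modality) pair
# is a node; intra-modal edges link consecutive utterances within a modality,
# inter-modal edges link the same utterance across modalities.
def build_multimodal_graph(num_utterances, modalities=("text", "audio", "video")):
    nodes = [(t, m) for t in range(num_utterances) for m in modalities]
    intra = [((t, m), (t + 1, m))
             for m in modalities for t in range(num_utterances - 1)]
    inter = [((t, a), (t, b))
             for t in range(num_utterances)
             for i, a in enumerate(modalities) for b in modalities[i + 1:]]
    return nodes, intra + inter

nodes, edges = build_multimodal_graph(4)
```

For a 4-utterance dialogue this yields 12 nodes, 9 intra-modal edges, and 12 inter-modal edges; a GNN would then pass messages along both edge types.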

    GraphCFC: A Directed Graph Based Cross-Modal Feature Complementation Approach for Multimodal Conversational Emotion Recognition

    Emotion Recognition in Conversation (ERC) plays a significant part in Human-Computer Interaction (HCI) systems, since it can provide empathetic services. Multimodal ERC can mitigate the drawbacks of uni-modal approaches. Recently, Graph Neural Networks (GNNs) have been widely used in a variety of fields due to their superior performance in relation modeling. In multimodal ERC, GNNs are capable of extracting both long-distance contextual information and inter-modal interactive information. Unfortunately, since existing methods such as MMGCN directly fuse multiple modalities, redundant information may be generated and diverse information may be lost. In this work, we present a directed Graph based Cross-modal Feature Complementation (GraphCFC) module that can efficiently model contextual and interactive information. GraphCFC alleviates the heterogeneity-gap problem in multimodal fusion by utilizing multiple subspace extractors and a Pair-wise Cross-modal Complementary (PairCC) strategy. We extract various types of edges from the constructed graph for encoding, enabling GNNs to extract crucial contextual and interactive information more accurately during message passing. Furthermore, we design a GNN structure called GAT-MLP, which provides a new unified network framework for multimodal learning. Experimental results on two benchmark datasets show that GraphCFC outperforms state-of-the-art (SOTA) approaches. Comment: 13 pages.
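The abstract names a GAT-MLP layer but gives no internals, so the following is only a plausible reading: a graph-attention aggregation followed by a small MLP with a residual connection. All shapes, weights, and the adjacency pattern are invented for illustration:

```python
import numpy as np

# Rough sketch of a "GAT then MLP" layer: attention aggregates neighbour
# features over the adjacency mask, then an MLP with a residual connection
# refines each node representation.
def gat_mlp_layer(X, adj, W1, W2):
    d = X.shape[1]
    scores = np.where(adj > 0, X @ X.T / np.sqrt(d), -np.inf)
    w = np.exp(scores - scores.max(1, keepdims=True))
    w /= w.sum(1, keepdims=True)              # attention over neighbours
    agg = w @ X                               # graph-attention aggregation
    hidden = np.maximum(agg @ W1, 0.0)        # MLP with ReLU
    return agg + hidden @ W2                  # residual connection

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))
adj = np.eye(5) + np.eye(5, k=1) + np.eye(5, k=-1)   # chain graph + self-loops
out = gat_mlp_layer(X, adj, rng.standard_normal((8, 16)) * 0.1,
                    rng.standard_normal((16, 8)) * 0.1)
```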

    Hyperspectral Image Classification Using a Spectral-Spatial Sparse Coding Model

    We present a sparse coding based spectral-spatial classification model for hyperspectral image (HSI) datasets. The proposed method consists of an efficient sparse coding step in which an l1/lq regularized multi-class logistic regression technique is utilized to achieve a compact representation of hyperspectral image pixels for land cover classification. We applied the proposed algorithm to an HSI dataset collected at the Kennedy Space Center and compared it to a recently proposed method, the Gaussian process maximum likelihood (GP-ML) classifier. Experimental results show that, when training data is limited, the proposed method achieves significantly better performance than the GP-ML classifier with a compact pixel representation, leading to more efficient HSI classification systems.
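To make the sparse-coding idea concrete, here is a standard ISTA solver for the generic l1 sparse-coding problem min_a ½‖x − Da‖² + λ‖a‖₁. This is a textbook technique shown for illustration, not the paper's l1/lq logistic-regression formulation; the dictionary and signal are synthetic:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm: shrink toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(D, x, lam=0.1, iters=200):
    """Iterative shrinkage-thresholding for min_a 0.5||x - Da||^2 + lam||a||_1."""
    L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ a - x)
        a = soft_threshold(a - grad / L, lam / L)
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)         # unit-norm dictionary atoms
true = np.zeros(50); true[[3, 17]] = [1.5, -2.0]
x = D @ true                           # signal built from two atoms
code = ista(D, x)
```

The soft-thresholding step zeroes most coefficients exactly, which is what yields the compact pixel representation the abstract refers to.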

    Multiple Unpinned Dirac Points in Group-Va Single-layers with Phosphorene Structure

    Emergent Dirac fermion states underlie many intriguing properties of graphene, and the search for them constitutes one strong motivation to explore two-dimensional (2D) allotropes of other elements. Phosphorene, the ultrathin layer form of black phosphorus, has been a subject of intense investigation recently, and it was found that other group-Va elements can also form 2D layers with a similar puckered lattice structure. Here, by a close examination of the evolution of their electronic band structures, we discover two types of Dirac fermion states emerging in the low-energy spectrum. One pair of (type-I) Dirac points sits on high-symmetry lines, while two pairs of (type-II) Dirac points are located at generic k-points, with different anisotropic dispersions determined by the reduced symmetries at their locations. Such fully unpinned (type-II) 2D Dirac points are discovered for the first time. In the absence of spin-orbit coupling, we find that each Dirac node is protected from gap opening by the sublattice symmetry, which is in turn ensured by any one of three point group symmetries. Spin-orbit coupling generally gaps the Dirac nodes and, in the type-I case, drives the system into a quantum spin Hall insulator phase. We suggest possible ways to realize the unpinned Dirac points in strained phosphorene. Comment: 30 pages, 6 figures.
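The anisotropic dispersions and the gap-opening role of spin-orbit coupling can be summarized with the generic low-energy two-band model for a 2D Dirac point. This is the textbook form, not an equation taken from the paper; the velocities v_x, v_y and the mass m are illustrative parameters:

```latex
% Generic anisotropic Dirac Hamiltonian around a Dirac point at momentum k_0:
H(\mathbf{q}) = \hbar\,( v_x q_x \sigma_x + v_y q_y \sigma_y ),
\qquad \mathbf{q} = \mathbf{k} - \mathbf{k}_0 ,
% giving the anisotropic conical dispersion
E_\pm(\mathbf{q}) = \pm \hbar \sqrt{ v_x^2 q_x^2 + v_y^2 q_y^2 } .
% A spin-orbit-induced mass term m\,\sigma_z opens a gap of 2|m|:
E_\pm(\mathbf{q}) = \pm \sqrt{ \hbar^2 ( v_x^2 q_x^2 + v_y^2 q_y^2 ) + m^2 } .
```

With v_x ≠ v_y the cone is anisotropic, matching the statement that the dispersions at the type-II points are set by the reduced symmetry of their generic k-point locations.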

    Sparse Coding Based Dense Feature Representation Model for Hyperspectral Image Classification

    We present a sparse coding based dense feature representation model for hyperspectral image (HSI) classification (a preliminary version of this paper was presented at the SPIE Remote Sensing Conference, Dresden, Germany, 2013). The proposed method learns a new representation for each pixel in the HSI through four steps: sub-band construction, dictionary learning, encoding, and feature selection. The new representation usually has a very high dimensionality, requiring a large amount of computational resources, so we apply the l1/lq regularized multiclass logistic regression technique to reduce its size. We integrated the method with a linear support vector machine (SVM) and a composite kernels SVM (CKSVM) to discriminate different types of land cover. We evaluated the proposed algorithm on three well-known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit, and image fusion and recursive filtering. Experimental results show that the proposed method achieves better overall and average classification accuracies with a much more compact representation, leading to more efficient sparse models for HSI classification.
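The four-step pipeline can be walked through on toy data. Every component below is a deliberately crude stand-in for the paper's algorithms (random atoms instead of learned dictionaries, variance ranking instead of l1/lq-regularized selection), chosen only to show how the stages chain together:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 32))      # 100 pixels x 32 spectral bands

# Step 1 -- sub-band construction: split the spectrum into contiguous groups.
subbands = np.split(X, 4, axis=1)       # four 8-band sub-bands

features = []
for S in subbands:
    # Step 2 -- dictionary learning (stand-in): sample pixels as atoms.
    D = S[rng.choice(len(S), size=16, replace=False)]
    # Step 3 -- encoding: soft-assignment code of each pixel against the atoms.
    sim = S @ D.T
    code = np.maximum(sim - sim.mean(axis=1, keepdims=True), 0.0)
    features.append(code)

F = np.hstack(features)                 # dense high-dimensional representation
# Step 4 -- feature selection (stand-in): keep the highest-variance dimensions.
keep = np.argsort(F.var(axis=0))[-32:]
F_small = F[:, keep]
```

The intermediate representation F has 64 dimensions here; the selection step halves it, mirroring how the paper shrinks its high-dimensional representation before classification.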

    The DKU-MSXF Speaker Verification System for the VoxCeleb Speaker Recognition Challenge 2023

    This paper is the system description of the DKU-MSXF system for Track 1, Track 2 and Track 3 of the VoxCeleb Speaker Recognition Challenge 2023 (VoxSRC-23). For Track 1, we utilize a ResNet-based network structure for training. By constructing a cross-age QMF training set, we achieve a substantial improvement in system performance. For Track 2, we inherit the pre-trained model from Track 1 and conduct mixed training by incorporating the VoxBlink-clean dataset. In comparison to Track 1, the models incorporating VoxBlink-clean data exhibit a relative performance improvement of more than 10%. For Track 3, the semi-supervised domain adaptation task, a novel pseudo-labeling method based on triple thresholds and sub-center purification is adopted for domain adaptation. The final submission achieves an mDCF of 0.1243 in Track 1, an mDCF of 0.1165 in Track 2, and an EER of 4.952% in Track 3. Comment: arXiv admin note: text overlap with arXiv:2210.0509
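The abstract does not spell out what the "triple thresholds" are, so the following sketch is purely a guess at one plausible scheme: keep a pseudo-label only if the top score is high, the margin over the runner-up is large, and the resulting speaker cluster is big enough. Every name, threshold, and score below is hypothetical:

```python
# Hypothetical threshold-based pseudo-labelling for semi-supervised domain
# adaptation; the three criteria (top score, margin, cluster size) are our
# own guesses, not the paper's rule.
def pseudo_label(scores, top_thr=0.7, margin_thr=0.2, min_cluster=2):
    labels = {}
    for sample, s in scores.items():
        ranked = sorted(s.items(), key=lambda kv: -kv[1])
        (best, b1), (_, b2) = ranked[0], ranked[1]
        if b1 >= top_thr and b1 - b2 >= margin_thr:   # confidence + margin
            labels[sample] = best
    counts = {}
    for lab in labels.values():
        counts[lab] = counts.get(lab, 0) + 1
    # Drop labels whose cluster is too small to trust.
    return {s: l for s, l in labels.items() if counts[l] >= min_cluster}

scores = {
    "utt1": {"spk_a": 0.90, "spk_b": 0.10},
    "utt2": {"spk_a": 0.80, "spk_b": 0.30},
    "utt3": {"spk_a": 0.55, "spk_b": 0.45},   # low confidence, dropped
    "utt4": {"spk_b": 0.95, "spk_a": 0.05},   # lone cluster, dropped
}
kept = pseudo_label(scores)
```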