    IndicBART: A Pre-trained Model for Indic Natural Language Generation

    In this paper, we study pre-trained sequence-to-sequence models for a group of related languages, with a focus on Indic languages. We present IndicBART, a multilingual, sequence-to-sequence pre-trained model focusing on 11 Indic languages and English. IndicBART utilizes the orthographic similarity between Indic scripts to improve transfer learning between similar Indic languages. We evaluate IndicBART on two NLG tasks: Neural Machine Translation (NMT) and extreme summarization. Our experiments on these tasks show that a model specific to related languages like IndicBART is competitive with large pre-trained models like mBART50 despite being significantly smaller. It also performs well in very low-resource translation scenarios where languages are not included in pre-training or fine-tuning. Script sharing, multilingual training, and better utilization of limited model capacity contribute to the good performance of the compact IndicBART model. Comment: Published at ACL 2022, 15 pages.
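
    The "script sharing" mentioned above exploits the fact that the Brahmi-derived Indic scripts occupy parallel 128-codepoint Unicode blocks, so text in related scripts can be folded onto a single script (such as Devanagari) before tokenization. The Python sketch below illustrates that codepoint-offset mapping; it is a simplified illustration of the principle, not the authors' actual preprocessing code.

```python
# Illustration of the script-sharing trick: Brahmi-derived Indic scripts sit in
# parallel 128-codepoint Unicode blocks, so related scripts can be folded onto one
# script (here Devanagari) by a codepoint offset before tokenization.
# Simplified sketch of the principle only, not IndicBART's actual pipeline.

DEVANAGARI_START = 0x0900   # Devanagari block: U+0900..U+097F
BLOCK_SIZE = 0x80           # each Indic script block spans 128 codepoints
INDIC_RANGE_END = 0x0D7F    # ...through the Malayalam block (U+0D00..U+0D7F)

def to_devanagari(text: str) -> str:
    """Map characters from other Indic scripts onto their parallel Devanagari codepoints."""
    out = []
    for ch in text:
        cp = ord(ch)
        if DEVANAGARI_START <= cp <= INDIC_RANGE_END:
            out.append(chr(DEVANAGARI_START + (cp - DEVANAGARI_START) % BLOCK_SIZE))
        else:
            out.append(ch)  # leave Latin text, digits, and punctuation untouched
    return "".join(out)

# Bengali "bhasha" (language) folds onto its Devanagari counterpart:
print(to_devanagari("ভাষা"))  # -> भाषा
```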

    Temporal Data Extraction and Query System for Epilepsy Signal Analysis

    The 2016 Epilepsy Innovation Institute (Ei2) community survey reported that unpredictability is the most challenging aspect of seizure management. Effective and precise detection, prediction, and localization of epileptic seizures is a fundamental computational challenge. Utilizing epilepsy data from multiple epilepsy monitoring units can enhance the quantity and diversity of datasets, which can lead to more robust epilepsy data analysis tools. The contributions of this dissertation are two-fold: one is the implementation of a temporal query for epilepsy data; the other is a machine learning approach for seizure detection, seizure prediction, and seizure localization. The three key components of our temporal query interface are: 1) a pipeline for automatically extracting European Data Format (EDF) information and epilepsy annotation data from cross-site sources; 2) data quantity monitoring for epilepsy temporal data; and 3) a web-based annotation query interface for preliminary research and for building customized epilepsy datasets. The system extracted and stored about 450,000 epilepsy-related events from more than 2,497 subjects across seven institutes up to September 2019. Leveraging the epilepsy temporal events query system, we developed machine learning models for seizure detection, prediction, and localization. Using 135 features extracted from EEG signals, we trained a channel-based eXtreme Gradient Boosting model to detect seizures on 8-second EEG segments. A long-term EEG recording evaluation shows that the model detects about 90.34% of seizures on an existing EEG dataset with 961 hours of data; it achieved 89.88% accuracy, 92.32% sensitivity, and 84.76% AUC in the segment-level evaluation. We also introduced a transfer learning approach, consisting of 1) a base deep learning model pre-trained on the ImageNet dataset and 2) customized fully connected layers, to train on patient-specific pre-ictal and inter-ictal data from our database. Two convolutional neural network architectures were evaluated using 53 pre-ictal segments and 265 continuous hours of inter-ictal EEG data; the evaluation shows that our model reached 86.79% sensitivity and a 3.38% false-positive rate. Another transfer learning model, for seizure localization, uses a pre-trained ResNeXt50 structure and was trained on an augmented image dataset labeled by fingerprint. This model achieved 88.22% accuracy, 34.99% sensitivity, a 1.02% false-positive rate, and a 34.3% positive likelihood rate.
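
    As a rough illustration of the segment-level detector described above (an eXtreme Gradient Boosting model over 135 features per 8-second EEG segment), the sketch below trains an XGBoost classifier on placeholder feature vectors; the features, labels, and hyperparameters are assumptions for illustration, not the dissertation's actual configuration.

```python
# Rough sketch of a segment-level seizure detector in the spirit described above:
# an XGBoost classifier over 135 features computed per 8-second EEG segment.
# The random features, labels, and hyperparameters are placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

N_FEATURES = 135  # one feature vector per channel segment (spectral, statistical, ...)
rng = np.random.default_rng(0)

X = rng.normal(size=(5000, N_FEATURES))   # placeholder segment features
y = rng.integers(0, 2, size=5000)         # 1 = seizure segment, 0 = non-seizure

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1, eval_metric="logloss")
clf.fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)[:, 1]
print("segment-level AUC:", roc_auc_score(y_te, proba))
```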

    An adaptive ensemble approach to ambient intelligence assisted people search

    Some machine learning algorithms have shown a better overall recognition rate for facial recognition than humans, provided that the models are trained with massive image databases of human faces. However, it is still a challenge to use existing algorithms to perform localized people search tasks where the recognition must be done in real time, and where only a small face database is accessible. A localized people search is essential to enable robot–human interactions. In this article, we propose a novel adaptive ensemble approach to improve facial recognition rates while maintaining low computational costs, by combining lightweight local binary classifiers with global pre-trained binary classifiers. In this approach, the robot is placed in an ambient intelligence environment that makes it aware of local context changes. Our method addresses the extreme imbalance of false-positive results that arises in local dataset classification. Furthermore, it reduces the errors caused by affine deformation in face frontalization and by poor camera focus. Our approach shows a higher recognition rate than a pre-trained global classifier on a benchmark database at various image resolutions, and demonstrates good efficacy in real-time tasks.
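
    A minimal sketch of one plausible reading of the local/global fusion idea follows: each identity gets a match score from a lightweight locally trained classifier and from a global pre-trained classifier, and the fusion weight adapts to feedback from the local context. The update rule and constants below are illustrative assumptions, not the paper's method.

```python
# One plausible reading of the local/global score fusion: per-identity match scores
# from a lightweight locally trained classifier and a global pre-trained classifier
# are combined with a weight that adapts to feedback from the local context.
# The update rule and constants are illustrative assumptions, not the paper's method.

def fused_score(local_score: float, global_score: float, local_weight: float) -> float:
    """Convex combination of the two classifiers' match scores for one identity."""
    return local_weight * local_score + (1.0 - local_weight) * global_score

def adapt_weight(local_weight: float, local_was_correct: bool, step: float = 0.05) -> float:
    """Shift trust toward the local classifier when it proves reliable, away when it errs."""
    delta = step if local_was_correct else -step
    return min(0.95, max(0.05, local_weight + delta))

w = 0.3                                     # start by trusting the global model more
w = adapt_weight(w, local_was_correct=True) # local classifier confirmed by context feedback
print(fused_score(local_score=0.82, global_score=0.64, local_weight=w))
```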

    Cross-lingual Pre-training Based Transfer for Zero-shot Neural Machine Translation

    Transfer learning between different language pairs has shown its effectiveness for Neural Machine Translation (NMT) in low-resource scenarios. However, existing transfer methods involving a common target language are far from successful in the extreme scenario of zero-shot translation, due to the language-space mismatch between the transferor (the parent model) and the transferee (the child model) on the source side. To address this challenge, we propose an effective transfer learning approach based on cross-lingual pre-training. Our key idea is to make all source languages share the same feature space, thus enabling a smooth transition to zero-shot translation. To this end, we introduce one monolingual pre-training method and two bilingual pre-training methods to obtain a universal encoder for different languages. Once the universal encoder is constructed, the parent model built on this encoder is trained with large-scale annotated data and then applied directly to the zero-shot translation scenario. Experiments on two public datasets show that our approach significantly outperforms a strong pivot-based baseline and various multilingual NMT approaches. Comment: Accepted as a conference paper at AAAI 2020 (oral presentation).
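
    The sketch below shows the shared-encoder idea in miniature: a toy Transformer NMT model whose encoder defines a single source-language feature space, trained as the parent and then reused, frozen, for a zero-shot source language. Model sizes and the omitted training loop are placeholders, not the paper's configuration.

```python
# Toy sketch of the parent->child setup: every source language is encoded into one
# shared ("universal") feature space, the parent model is trained on the high-resource
# pair, and the unseen child source language reuses the same encoder at zero-shot time.
# Module sizes and the missing training loop are placeholders, not the paper's setup.
import torch
import torch.nn as nn

class ToyNMT(nn.Module):
    def __init__(self, vocab_size: int = 32000, d_model: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.universal_encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=2)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids: torch.Tensor, tgt_ids: torch.Tensor) -> torch.Tensor:
        memory = self.universal_encoder(self.embed(src_ids))  # shared source space
        hidden = self.decoder(self.embed(tgt_ids), memory)
        return self.out(hidden)

parent = ToyNMT()
# ... train `parent` on large-scale annotated data for the parent language pair ...

# Zero-shot: a new source language passes through the same (frozen) universal encoder,
# so nothing is retrained on the unseen source side.
for p in parent.universal_encoder.parameters():
    p.requires_grad = False
logits = parent(torch.randint(0, 32000, (2, 10)), torch.randint(0, 32000, (2, 8)))
print(logits.shape)  # (batch, tgt_len, vocab)
```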

    Gravity Spy: Integrating Advanced LIGO Detector Characterization, Machine Learning, and Citizen Science

    (abridged for arXiv) With the first direct detection of gravitational waves, the Advanced Laser Interferometer Gravitational-wave Observatory (LIGO) has initiated a new field of astronomy by providing an alternate means of sensing the universe. The extreme sensitivity required to make such detections is achieved through exquisite isolation of all sensitive components of LIGO from non-gravitational-wave disturbances. Nonetheless, LIGO is still susceptible to a variety of instrumental and environmental sources of noise that contaminate the data. Of particular concern are noise features known as glitches, which are transient and non-Gaussian in nature, and occur at a high enough rate that accidental coincidence between the two LIGO detectors is non-negligible. In this paper we describe an innovative project that combines crowdsourcing with machine learning to aid in the challenging task of categorizing all of the glitches recorded by the LIGO detectors. Through the Zooniverse platform, we engage and recruit volunteers from the public to categorize images of glitches into pre-identified morphological classes and to discover new classes that appear as the detectors evolve. In addition, machine learning algorithms are used to categorize images after being trained on human-classified examples of the morphological classes. Leveraging the strengths of both classification methods, we create a combined method with the aim of improving the efficiency and accuracy of each individual classifier. The resulting classification and characterization should help LIGO scientists to identify causes of glitches and subsequently eliminate them from the data or the detector entirely, thereby improving the rate and accuracy of gravitational-wave observations. We demonstrate these methods using a small subset of data from LIGO's first observing run. Comment: 27 pages, 8 figures, 1 table.
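
    As an illustration of how crowdsourced and machine classifications might be fused, the sketch below combines a volunteer vote distribution with a classifier's softmax output by a normalized product; this is a simple assumed fusion rule, not Gravity Spy's actual aggregation scheme.

```python
# Illustrative fusion of crowdsourced votes with a machine-learning classifier for
# glitch morphologies: both yield a distribution over the pre-identified classes, and
# the combined posterior is their normalized elementwise product (naive-Bayes style).
# This is one simple way to combine the two sources, not Gravity Spy's actual scheme.
import numpy as np

CLASSES = ["Blip", "Whistle", "Scattered_Light", "None_of_the_Above"]

def volunteer_distribution(votes, alpha=1.0):
    """Turn raw volunteer vote counts into a smoothed class distribution."""
    counts = np.array([votes.get(c, 0) for c in CLASSES], dtype=float) + alpha
    return counts / counts.sum()

def combine(p_volunteers, p_model):
    """Fuse the two distributions by normalized elementwise product."""
    joint = p_volunteers * p_model
    return joint / joint.sum()

p_vol = volunteer_distribution({"Blip": 7, "Whistle": 1})
p_ml = np.array([0.60, 0.25, 0.10, 0.05])  # placeholder CNN softmax output
p_combined = combine(p_vol, p_ml)
print(CLASSES[int(np.argmax(p_combined))], p_combined.round(3))
```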