COVID-19 Diagnosis from Cough Acoustics using ConvNets and Data Augmentation
With the periodic rise and fall of COVID-19 and countries being afflicted by
successive waves, an efficient, economical, and effortless diagnostic procedure
for the virus has been the utmost need of the hour. COVID-19-positive
individuals may even be asymptomatic, making diagnosis difficult; yet among
infected subjects, the asymptomatic ones are not entirely free of effects
caused by the virus. They might not show any observable symptoms like
symptomatic subjects do, but they may differ from uninfected individuals in
the way they cough. These differences in cough sounds are minute and
indiscernible to the human ear; however, they can be captured using machine
learning-based statistical
models. In this paper, we present a deep learning approach to analyze the
acoustic dataset provided in Track 1 of the DiCOVA 2021 Challenge containing
cough sound recordings belonging to both COVID-19 positive and negative
examples. To classify each recording as COVID-19 positive or negative, we
propose a ConvNet model. Our model achieved an AUC of 72.23% on the blind test
set provided by the challenge organizers for an unbiased evaluation of the
models. Incorporating data augmentation further increased the AUC-ROC from
72.23% to 87.07%. The model also outperformed the DiCOVA 2021 Challenge's
baseline by 23%, claiming the top position on the challenge leaderboard. This
paper proposes the use of Mel-frequency cepstral coefficients (MFCCs) as the
feature input for the proposed model.
Comment: DiCOVA, ranked 1st. This work has been submitted to the IEEE for
possible publication. Copyright may be transferred without notice, after
which this version may no longer be accessible.
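
As a rough illustration of the pipeline this abstract describes, the sketch below extracts MFCC features from a cough recording and scores them with a small 2-D ConvNet. The architecture, feature dimensions, and file name are illustrative assumptions; the abstract does not specify them.

```python
# Minimal sketch, assuming librosa for MFCC extraction and PyTorch for the
# ConvNet. Layer sizes and hyperparameters are placeholders, not the paper's.
import librosa
import numpy as np
import torch
import torch.nn as nn

def extract_mfcc(path, n_mfcc=40, max_frames=200):
    """Load a cough recording and compute a fixed-size MFCC matrix."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Pad or truncate along the time axis so every example has the same shape.
    if mfcc.shape[1] < max_frames:
        mfcc = np.pad(mfcc, ((0, 0), (0, max_frames - mfcc.shape[1])))
    return mfcc[:, :max_frames].astype(np.float32)

class CoughConvNet(nn.Module):
    """Small 2-D ConvNet over the (n_mfcc x time) feature map."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 10 * 50, 64), nn.ReLU(),  # 40x200 input -> 10x50 after pooling
            nn.Linear(64, 1),  # single logit; sigmoid gives P(COVID-positive)
        )

    def forward(self, x):  # x: (batch, 1, 40, 200)
        return self.head(self.features(x))

# Usage: score one recording ("cough.wav" is a placeholder path).
model = CoughConvNet()
x = torch.from_numpy(extract_mfcc("cough.wav")).unsqueeze(0).unsqueeze(0)
prob = torch.sigmoid(model(x)).item()
```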
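The data augmentation credited with the AUC-ROC gain could, for example, operate at the waveform level before feature extraction. The transforms below (pitch shift, time stretch, additive noise) are a hedged sketch of common audio augmentations, not the paper's confirmed recipe.

```python
# Assumed waveform-level augmentation; parameters are illustrative only.
import librosa
import numpy as np

def augment(y, sr, rng=np.random.default_rng(0)):
    """Return a randomly perturbed copy of a cough waveform."""
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=int(rng.integers(-2, 3)))
    y = librosa.effects.time_stretch(y, rate=float(rng.uniform(0.9, 1.1)))
    y = y + 0.005 * rng.standard_normal(len(y))  # light additive Gaussian noise
    return y
```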
Textual entailment as an evaluation metric for abstractive text summarization
Automated text summarization systems need to be mindful of the reader and the communication goals, since the summary may determine whether the original text is worth reading in full. A summary can also help enhance document indexing for information retrieval, and it is generally less biased than a human-written summary. A crucial part of building intelligent systems is evaluating them; consequently, the choice of evaluation metric(s) is of utmost importance. Current standard evaluation metrics such as BLEU and ROUGE, although fairly effective for evaluating extractive text summarization systems, become futile when it comes to comparing semantic information between two texts, i.e., in abstractive summarization. We propose textual entailment as a potential metric to evaluate abstractive summaries. The results show the contribution of textual entailment as a strong automated evaluation model for such summaries. Entailment scores were calculated between the source text and the generated summaries, and between the reference and predicted summaries, and an overall summarizer score was derived to give a fair idea of how effective the generated summaries are. We put forward some novel methods that use the entailment scores and the final summarizer scores for a reasonable evaluation across various scenarios. A Final Entailment Metric Score (FEMS) was generated to provide an insightful basis for comparing the generated summaries.
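
A minimal sketch of the entailment-based scoring described here: the generated summary is scored for entailment against both the source document and the reference summary, and the two scores are combined. The NLI model name and the simple averaging are assumptions; the abstract does not give the exact FEMS formula.

```python
# Assumed setup: a Hugging Face NLI model used as the entailment scorer.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def entailment_score(premise, hypothesis):
    """Probability that `premise` entails `hypothesis` under the NLI model."""
    scores = nli({"text": premise, "text_pair": hypothesis},
                 top_k=None, truncation=True)  # truncate long documents
    return next(s["score"] for s in scores if s["label"] == "ENTAILMENT")

def summarizer_score(source, reference, generated):
    doc_score = entailment_score(source, generated)     # faithfulness to source
    ref_score = entailment_score(reference, generated)  # agreement with reference
    return 0.5 * (doc_score + ref_score)  # assumed simple average, not the paper's FEMS
```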