INDIAN LANGUAGE TEXT MINING
India is home to many languages owing to its cultural and geographical diversity. The Constitution of India provides for each Indian state to choose its own official language for communication at the state level for official purposes. In India, the consumption of Indian-language content has grown with the spread of electronic devices and technology, and the amount of textual data available in electronic form for the various Indian regional languages is constantly increasing. However, little work has been done on text processing for Indian languages, so there is a large gap between the stored data and the knowledge that could be constructed from it. This transition will not occur automatically; that is where text mining comes into the picture. This research is concerned with the study and analysis of text mining for Indian regional languages. Text mining refers to a knowledge discovery process in which the source data under consideration is text. It is a new and exciting research area that tries to solve the information overload problem by using techniques from information retrieval, information extraction, and natural language processing (NLP), and connects them with the algorithms and methods of KDD, data mining, machine learning, and statistics. Some applications of text mining are document classification, information retrieval, document clustering, information extraction, and performance evaluation. In this paper we attempt to show the need for text mining for Indian languages.
XL-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages
Contemporary works on abstractive text summarization have focused primarily
on high-resource languages like English, mostly due to the limited availability
of datasets for low/mid-resource ones. In this work, we present XL-Sum, a
comprehensive and diverse dataset comprising 1 million professionally annotated
article-summary pairs from BBC, extracted using a set of carefully designed
heuristics. The dataset covers 44 languages ranging from low to high-resource,
for many of which no public dataset is currently available. XL-Sum is highly
abstractive, concise, and of high quality, as indicated by human and intrinsic
evaluation. We fine-tune mT5, a state-of-the-art pretrained multilingual model,
with XL-Sum and experiment on multilingual and low-resource summarization
tasks. XL-Sum yields competitive results compared to those obtained using
similar monolingual datasets: we show ROUGE-2 scores higher than 11 on the 10
languages we benchmark on, with some of them exceeding 15, as obtained by
multilingual training. Additionally, training on low-resource languages
individually also provides competitive performance. To the best of our
knowledge, XL-Sum is the largest abstractive summarization dataset in terms of
the number of samples collected from a single source and the number of
languages covered. We are releasing our dataset and models to encourage future
research on multilingual abstractive summarization. The resources can be found
at \url{https://github.com/csebuetnlp/xl-sum}.

Comment: Findings of the Association for Computational Linguistics, ACL 2021 (camera-ready)
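The ROUGE-2 scores cited above measure bigram overlap between a generated summary and its reference. A minimal sketch of ROUGE-2 F1 follows; the function names are illustrative and not taken from the XL-Sum codebase, which uses standard ROUGE tooling:

```python
from collections import Counter

def bigrams(tokens):
    """Return the multiset of adjacent token pairs."""
    return Counter(zip(tokens, tokens[1:]))

def rouge2_f1(candidate, reference):
    """ROUGE-2 F1: harmonic mean of bigram precision and recall."""
    cand, ref = bigrams(candidate.split()), bigrams(reference.split())
    overlap = sum((cand & ref).values())  # clipped bigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge2_f1("the cat sat down", "the cat sat")` gives 0.8 (two shared bigrams, precision 2/3, recall 1). Reported scores such as "higher than 11" correspond to this value scaled by 100.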
ChartSumm: A Comprehensive Benchmark for Automatic Chart Summarization of Long and Short Summaries
Automatic chart to text summarization is an effective tool for the visually
impaired people along with providing precise insights of tabular data in
natural language to the user. A large and well-structured dataset is always a
key part for data driven models. In this paper, we propose ChartSumm: a
large-scale benchmark dataset consisting of a total of 84,363 charts along with
their metadata and descriptions covering a wide range of topics and chart types
to generate short and long summaries. Extensive experiments with strong
baseline models show that even though these models generate fluent and
informative summaries, achieving decent scores on various automatic
evaluation metrics, they often suffer from hallucination, miss important
data points, and explain complex trends in the charts incorrectly. We also
investigated the potential of expanding ChartSumm to other languages using
automated translation tools. These issues make our dataset a challenging
benchmark for future research.

Comment: Accepted as a long paper at the Canadian AI 202
A Comprehensive Review of Sentiment Analysis on Indian Regional Languages: Techniques, Challenges, and Trends
Sentiment analysis (SA) is the process of understanding emotion within a text. It helps identify the opinion, attitude, and tone of a text, categorizing it as positive, negative, or neutral. SA is used frequently today, as the advent of social media gives more and more people a chance to share their thoughts. Sentiment analysis benefits industries around the globe, such as finance, advertising, marketing, travel, and hospitality. Although the majority of work in this field has been done on global languages like English, in recent years the importance of SA in local languages has also been widely recognized, leading to considerable research on Indian regional languages. This paper comprehensively reviews SA in the following major Indian regional languages: Marathi, Hindi, Tamil, Telugu, Malayalam, Bengali, Gujarati, and Urdu. Furthermore, this paper presents techniques, challenges, findings, recent research trends, and future scope for improving the accuracy of results.
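The positive/negative/neutral categorization described above is often bootstrapped, especially in low-resource settings, with a simple polarity lexicon before moving to learned models. A minimal sketch, where the lexicon entries and function name are illustrative placeholders rather than resources from the surveyed work:

```python
# Lexicon-based sentiment classifier: count polarity-bearing words
# and label the text by the sign of the balance.
POSITIVE = {"good", "great", "excellent", "happy", "love"}
NEGATIVE = {"bad", "poor", "terrible", "sad", "hate"}

def classify_sentiment(text):
    """Return 'positive', 'negative', or 'neutral' for the input text."""
    tokens = text.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

For a real Indian regional language, the lexicon would need language-specific entries and morphology-aware tokenization, which is precisely where much of the surveyed work's difficulty lies.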
A novel concept-level approach for ultra-concise opinion summarization
The Web 2.0 has resulted in a shift in how users consume and interact with information, and has introduced a wide range of new textual genres, such as reviews and microblogs, through which users communicate, exchange, and share opinions. The exploitation of all this user-generated content is of great value both for users and for companies, to assist them in their decision-making processes. Given this context, automatic methods that can help manage online information more quickly need to be analyzed and developed. Therefore, this article proposes and evaluates a novel concept-level approach to ultra-concise abstractive opinion summarization. Our approach is characterized by the integration of syntactic sentence simplification, sentence regeneration, and internal concept representation into the summarization process, and is thus able to generate abstractive summaries, which is one of the most challenging issues for this task. In order to analyze different settings for our approach, the sentence regeneration module was made optional, leading to two versions of the system (one with sentence regeneration and one without). For testing them, a corpus of 400 English texts, gathered from reviews and tweets belonging to two different domains, was used.
Although both versions were shown to be reliable methods for generating this type of summary, the results obtained indicate that the version without sentence regeneration yielded better results, improving on a number of state-of-the-art systems by 9%, whereas the version with sentence regeneration proved to be more robust to noisy data.

This research work has been partially funded by the University of Alicante, Generalitat Valenciana, the Spanish Government, and the European Commission through the projects "Tratamiento inteligente de la información para la ayuda a la toma de decisiones" (GRE12-44), "Explotación y tratamiento de la información disponible en Internet para la anotación y generación de textos adaptados al usuario" (GRE13-15), DIIM2.0 (PROMETEOII/2014/001), ATTOS (TIN2012-38536-C03-03), LEGOLANG-UAGE (TIN2012-31224), SAM (FP7-611312), and FIRST (FP7-287607).
- …