10 research outputs found

    Deception Detection Across Domains, Languages and Modalities

    Full text link
    With the rise of deception and misinformation, especially on social media, it has become crucial to develop machine learning methods that automatically identify deception. In this dissertation, we identify key challenges underlying text-based deception detection in a cross-domain setting, where no training data is available in the target domain. We analyze the differences between domains and, based on this analysis, develop methods to improve cross-domain deception detection. We additionally develop approaches that exploit cross-lingual properties to support deception detection across languages, using either multilingual NLP models or translation models. Finally, to better understand multimodal (text, image, and speech) deception detection, we create strategies to help determine which modality is most beneficial for detecting the truthful and deceptive classes.
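
    A minimal illustrative sketch of the multilingual-model route mentioned above (not code from the dissertation): a multilingual encoder is fine-tuned on English deception labels and then applied, zero-shot, to text in another language. The model name, the binary label mapping, and the example sentence are assumptions made for the example.

        # Hedged sketch: zero-shot cross-lingual deception detection with a
        # multilingual encoder. Assumes a binary truthful/deceptive label set.
        import torch
        from transformers import AutoTokenizer, AutoModelForSequenceClassification

        tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
        model = AutoModelForSequenceClassification.from_pretrained(
            "xlm-roberta-base", num_labels=2)  # 0 = truthful, 1 = deceptive (assumed)

        # After fine-tuning on English deception data, the same weights can score
        # text in another language without any target-language training data.
        text = "Texto posiblemente engañoso publicado en redes sociales."
        inputs = tokenizer(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            probs = torch.softmax(model(**inputs).logits, dim=-1)
        print({"truthful": probs[0, 0].item(), "deceptive": probs[0, 1].item()})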

    KAM-CoT: Knowledge Augmented Multimodal Chain-of-Thoughts Reasoning

    Full text link
    Large Language Models (LLMs) have demonstrated impressive performance on natural language processing tasks by leveraging chain-of-thought (CoT) prompting, which enables step-by-step thinking. Extending LLMs with multimodal capabilities has attracted recent interest, but it incurs computational cost and requires substantial hardware resources. To address these challenges, we propose KAM-CoT, a framework that integrates CoT reasoning, Knowledge Graphs (KGs), and multiple modalities for a comprehensive understanding of multimodal tasks. KAM-CoT adopts a two-stage training process with KG grounding to generate effective rationales and answers. By incorporating external knowledge from KGs during reasoning, the model gains a deeper contextual understanding, reducing hallucinations and enhancing the quality of answers. This knowledge-augmented CoT reasoning empowers the model to handle questions requiring external context, providing more informed answers. Experimental findings show that KAM-CoT outperforms state-of-the-art methods. On the ScienceQA dataset, we achieve an average accuracy of 93.87%, surpassing GPT-3.5 (75.17%) by 18% and GPT-4 (83.99%) by 10%. Remarkably, KAM-CoT achieves these results with only 280M trainable parameters at a time, demonstrating its cost-efficiency and effectiveness. Comment: AAAI 2024.
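
    A hedged sketch of the knowledge-augmented, two-stage rationale-then-answer idea described above; it is not KAM-CoT itself (which trains a compact multimodal model with KG grounding). The retrieve_triples() helper, the hard-coded facts, and the flan-t5-base checkpoint are assumptions made for illustration.

        # Hedged sketch: ground a two-stage CoT (rationale, then answer) in
        # externally retrieved knowledge-graph facts. Not the KAM-CoT model.
        from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

        tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
        model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

        def generate(prompt, max_new_tokens=128):
            ids = tokenizer(prompt, return_tensors="pt").input_ids
            out = model.generate(ids, max_new_tokens=max_new_tokens)
            return tokenizer.decode(out[0], skip_special_tokens=True)

        def retrieve_triples(question):
            # Hypothetical KG lookup; a real system would query ConceptNet,
            # Wikidata, or a task-specific graph.
            return ["(plant, performs, photosynthesis)",
                    "(photosynthesis, produces, oxygen)"]

        question = "Which gas do plants release during photosynthesis?"
        facts = " ".join(retrieve_triples(question))

        # Stage 1: generate a rationale grounded in the retrieved facts.
        rationale = generate(f"Facts: {facts}\nQuestion: {question}\nExplain step by step:")
        # Stage 2: generate the final answer conditioned on the rationale.
        answer = generate(f"Facts: {facts}\nQuestion: {question}\nRationale: {rationale}\nAnswer:")
        print(answer)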

    Silo NLP's Participation at WAT2022

    Get PDF
    This paper provides the system description of Silo NLP's submission to the Workshop on Asian Translation (WAT2022). We participated in the Indic multimodal tasks (English->Hindi, English->Malayalam, and English->Bengali multimodal translation). For text-only translation, we trained Transformers from scratch and fine-tuned mBART-50 models. For multimodal translation, we used the same mBART architecture and extracted object tags from the images to use as visual features concatenated with the text sequence. Our submission tops many tasks, including English->Hindi multimodal translation (evaluation test), English->Malayalam text-only and multimodal translation (evaluation test), English->Bengali multimodal translation (challenge test), and English->Bengali text-only translation (evaluation test). Peer reviewed.
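
    A hedged sketch of the object-tag idea described above: detected object labels are appended to the English source sentence before it is fed to mBART-50. The checkpoint, the hard-coded tag list, and the "##" separator are assumptions for illustration, not the submission's exact setup.

        # Hedged sketch: append object tags to the English source before
        # translating with mBART-50 (English -> Hindi here).
        from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

        model_name = "facebook/mbart-large-50-many-to-many-mmt"
        model = MBartForConditionalGeneration.from_pretrained(model_name)
        tokenizer = MBart50TokenizerFast.from_pretrained(
            model_name, src_lang="en_XX", tgt_lang="hi_IN")

        caption = "A man riding a horse on the beach"
        object_tags = ["man", "horse", "beach"]            # stand-in for detector output
        source = caption + " ## " + " ".join(object_tags)  # separator is illustrative

        batch = tokenizer(source, return_tensors="pt")
        generated = model.generate(
            **batch, forced_bos_token_id=tokenizer.lang_code_to_id["hi_IN"])
        print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])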

    Bengali Visual Genome 1.0

    No full text
    Data
    ----
    Bengali Visual Genome (BVG for short) 1.0 has similar goals as Hindi Visual Genome (HVG) 1.1: to support the Bengali language. It is a multimodal dataset in Bengali, consisting of text and images, suitable for English-to-Bengali multimodal machine translation, image captioning, and multimodal research. We follow the same selection of short English segments (captions) and the associated images from Visual Genome as HVG 1.1 has. For BVG, we manually translated these captions from English to Bengali, taking the associated images into account. The manual translation was performed by native Bengali speakers without referring to any machine translation system. The training set contains 29K segments. Further 1K and 1.6K segments are provided in development and test sets, respectively, which follow the same (random) sampling from the original Hindi Visual Genome. A third test set, the "challenge test set", consists of 1.4K segments. It was created for the WAT2019 multi-modal task by searching for (particularly) ambiguous English words based on embedding similarity and manually selecting those where the image helps to resolve the ambiguity. The surrounding words in the sentence, however, often also include sufficient cues to identify the correct meaning of the ambiguous word.

    Dataset Formats
    ---------------
    The multimodal dataset contains both text and images. The text parts of the dataset (train and test sets) are in simple tab-delimited plain text files. All the text files have seven columns:
    Column 1 - image_id
    Column 2 - X
    Column 3 - Y
    Column 4 - Width
    Column 5 - Height
    Column 6 - English Text
    Column 7 - Bengali Text
    The image part contains the full images with the corresponding image_id as the file name. The X, Y, Width, and Height columns indicate the rectangular region in the image described by the caption.

    Data Statistics
    ---------------
    The statistics of the current release are given below.

    Parallel Corpus Statistics
    --------------------------
    Dataset          Segments   English Words   Bengali Words
    ---------------  ---------  --------------  --------------
    Train            28930      143115          113978
    Dev                998        4922            3936
    Test              1595        7853            6408
    Challenge Test    1400        8186            6657
    ---------------  ---------  --------------  --------------
    Total            32923      164076          130979
    The word counts are approximate, prior to tokenization.

    Citation
    --------
    If you use this corpus, please cite the following paper:
    @inproceedings{hindi-visual-genome:2022,
      title     = "{Bengali Visual Genome: A Multimodal Dataset for Machine Translation and Image Captioning}",
      author    = {Sen, Arghyadeep and Parida, Shantipriya and Kotwal, Ketan and Panda, Subhadarshi and Bojar, Ond{\v{r}}ej and Dash, Satya Ranjan},
      editor    = {Satapathy, Suresh Chandra and Peer, Peter and Tang, Jinshan and Bhateja, Vikrant and Ghosh, Anumoy},
      booktitle = {Intelligent Data Engineering and Analytics},
      publisher = {Springer Nature Singapore},
      address   = {Singapore},
      pages     = {63--70},
      isbn      = {978-981-16-6624-7},
      doi       = {10.1007/978-981-16-6624-7_7},
    }
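
    A minimal sketch for reading the seven-column tab-delimited files described above; the file name is an assumption about how the release is unpacked.

        # Read one of the tab-delimited BVG text files (seven columns per line).
        import csv

        columns = ["image_id", "x", "y", "width", "height", "english", "bengali"]
        with open("bengali-visual-genome-train.txt", encoding="utf-8", newline="") as f:
            for row in csv.reader(f, delimiter="\t"):
                record = dict(zip(columns, row))
                # (x, y, width, height) locate the captioned region inside image_id.
                print(record["image_id"], record["english"], "->", record["bengali"])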

    Open Machine Translation for Low Resource South American Languages (AmericasNLP 2021 Shared Task Contribution)

    No full text
    This paper describes the team Tamalli's submission to the AmericasNLP 2021 shared task on Open Machine Translation for low-resource South American languages. Our goal was to evaluate different Machine Translation (MT) techniques, statistical and neural-based, under several configuration settings. We obtained the second-best results for the language pairs Spanish-Bribri, Spanish-Asháninka, and Spanish-Rarámuri in the category "Development set not used for training". Our experiments will serve as a point of reference for researchers working on MT with low-resource languages.

    Hausa Visual Genome 1.0

    No full text
    Data
    ----
    Hausa Visual Genome 1.0 is a multimodal dataset consisting of text and images suitable for English-to-Hausa multimodal machine translation tasks and multimodal research. We follow the same selection of short English segments (captions) and the associated images from Visual Genome as the dataset Hindi Visual Genome 1.1 has. We automatically translated the English captions to Hausa and manually post-edited them, taking the associated images into account. The training set contains 29K segments. Further 1K and 1.6K segments are provided in development and test sets, respectively, which follow the same (random) sampling from the original Hindi Visual Genome. Additionally, a challenge test set of 1400 segments is available for the multi-modal task. This challenge test set was created in Hindi Visual Genome by searching for (particularly) ambiguous English words based on embedding similarity and manually selecting those where the image helps to resolve the ambiguity.

    Dataset Formats
    ---------------
    The multimodal dataset contains both text and images. The text parts of the dataset (train and test sets) are in simple tab-delimited plain text files. All the text files have seven columns:
    Column 1 - image_id
    Column 2 - X
    Column 3 - Y
    Column 4 - Width
    Column 5 - Height
    Column 6 - English Text
    Column 7 - Hausa Text
    The image part contains the full images with the corresponding image_id as the file name. The X, Y, Width, and Height columns indicate the rectangular region in the image described by the caption.

    Data Statistics
    ---------------
    The statistics of the current release are given below.

    Parallel Corpus Statistics
    --------------------------
    Dataset          Segments   English Words   Hausa Words
    ---------------  ---------  --------------  ------------
    Train            28930      143106          140981
    Dev                998        4922            4857
    Test              1595        7853            7736
    Challenge Test    1400        8186            8752
    ---------------  ---------  --------------  ------------
    Total            32923      164067          162326
    The word counts are approximate, prior to tokenization.

    Citation
    --------
    If you use this corpus, please cite the following paper:
    @InProceedings{abdulmumin-EtAl:2022:LREC,
      author    = {Abdulmumin, Idris and Dash, Satya Ranjan and Dawud, Musa Abdullahi and Parida, Shantipriya and Muhammad, Shamsuddeen and Ahmad, Ibrahim Sa'id and Panda, Subhadarshi and Bojar, Ond{\v{r}}ej and Galadanci, Bashir Shehu and Bello, Bello Shehu},
      title     = "{Hausa Visual Genome: A Dataset for Multi-Modal English to Hausa Machine Translation}",
      booktitle = {Proceedings of the Language Resources and Evaluation Conference},
      month     = {June},
      year      = {2022},
      address   = {Marseille, France},
      publisher = {European Language Resources Association},
      pages     = {6471--6479},
      url       = {https://aclanthology.org/2022.lrec-1.694},
    }
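
    The sketch below shows how the X, Y, Width, and Height columns can be used to crop the captioned region from the full image. The file names, directory layout, and .jpg extension are assumptions about how the release is unpacked.

        # Crop the region described by a caption, using the first line of the
        # (assumed) training file and the corresponding full image.
        import csv
        from PIL import Image

        with open("hausa-visual-genome-train.txt", encoding="utf-8", newline="") as f:
            image_id, x, y, w, h, english, hausa = next(csv.reader(f, delimiter="\t"))

        img = Image.open(f"images/{image_id}.jpg")  # path and extension assumed
        region = img.crop((int(x), int(y), int(x) + int(w), int(y) + int(h)))
        region.save("captioned_region.jpg")
        print(english, "->", hausa)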

    Hausa Visual Genome: A Dataset for Multi-Modal English to Hausa Machine Translation

    Full text link
    Multi-modal Machine Translation (MMT) enables the use of visual information to enhance the quality of translations. The visual information can serve as a valuable piece of context that decreases the ambiguity of input sentences. Despite the increasing popularity of such a technique, good and sizeable datasets are scarce, limiting the full extent of its potential. Hausa, a Chadic language, is a member of the Afro-Asiatic language family. It is estimated that about 100 to 150 million people speak the language, with more than 80 million indigenous speakers, more than any other Chadic language. Despite this large number of speakers, Hausa is considered low-resource in natural language processing (NLP), owing to the absence of sufficient resources for most NLP tasks. While some datasets exist, they are either scarce, machine-generated, or in the religious domain. Therefore, there is a need to create training and evaluation data for implementing machine learning tasks and bridging the research gap in the language. This work presents the Hausa Visual Genome (HaVG), a dataset that contains the description of an image or a section within the image in Hausa and its equivalent in English. To prepare the dataset, we started by automatically translating the English descriptions of the images in the Hindi Visual Genome (HVG) into Hausa. Afterward, the synthetic Hausa data was carefully post-edited with the respective images in view. The dataset comprises 32,923 images and their descriptions, divided into training, development, test, and challenge test sets. The Hausa Visual Genome is the first dataset of its kind and can be used for Hausa-English machine translation, multi-modal research, and image description, among various other natural language processing and generation tasks. Comment: Accepted at the Language Resources and Evaluation Conference 2022 (LREC 2022).

    Clinicopathologic Profile and Treatment Outcomes of Colorectal Cancer in Young Adults: A Multicenter Study From India

    No full text
    PURPOSE: Colorectal cancer (CRC) in young adults is a rising concern in developing countries such as India. This study investigates clinicopathologic profiles, treatment patterns, and outcomes of CRC in young adults, focusing on adolescent and young adult (AYA) CRC in a low- and middle-income country (LMIC).
    METHODS: A retrospective registry study from January 2018 to December 2020 involved 126 young adults (age 40 years and younger) with CRC. Patient demographics, clinical features, tumor characteristics, treatment modalities, and survival outcomes were analyzed after obtaining institutional ethics committees' approval.
    RESULTS: Among 126 AYA patients, 62.70% had colon cancer and 37.30% had rectal cancer. Most patients (67%) were age 30-39 years, with no significant gender predisposition. Females had a higher metastatic burden. Abdominal pain with obstructive features was common. Adenocarcinoma (65%) with signet ring differentiation (26%) suggested aggressive behavior. Limited access to molecular testing hindered mutation identification. Capecitabine-based chemotherapy was favored because of logistical constraints. Adjuvant therapy showed comparable recurrence-free survival in young adults and older patients. For localized colon cancer, the 2-year progression-free survival was 74%; for localized rectal cancer, the median progression-free survival was 18 months. Palliative therapy resulted in a median overall survival of 33 months (95% CI, 18 to 47). Limited access to targeted agents affected treatment options, with only 27.5% of patients with metastatic disease receiving them. Chemotherapy was generally well tolerated, with hematologic side effects being the most common.
    CONCLUSION: This collaborative study in an LMIC offers crucial insights into CRC in AYA patients in India. Differences in disease characteristics, treatment patterns, and limited access to targeted agents highlight the need for further research and resource allocation to improve outcomes in this population.