    Comparative Effectiveness of Cognitive-Behavioral Therapy and Dialectical Behavior Therapy on Emotion Regulation, Positive and Negative Affection, Aggressive and Self-Harm Behaviors of 13-16-Year-Old Female Students

    Get PDF
    This study aimed to compare the effectiveness of cognitive-behavioral therapy (CBT) with dialectical behavior therapy (DBT) on emotion regulation, positive and negative affect, and aggressive and self-harm behaviors of 13- to 16-year-old female students. The results showed that both CBT and DBT had a significant effect on increasing emotion regulation and positive affect and on decreasing negative affect, aggressive behavior, and self-harm. There was no significant difference between the two treatments in increasing positive affect and decreasing negative affect, but the effect of DBT on increasing emotion regulation and reducing self-harm and aggressive behaviors was significantly greater than that of CBT.

    Pre-Training Multi-Modal Dense Retrievers for Outside-Knowledge Visual Question Answering

    Full text link
    This paper studies a category of visual question answering tasks in which accessing external knowledge is necessary for answering the questions. This category is called outside-knowledge visual question answering (OK-VQA). A major step in developing OK-VQA systems is to retrieve relevant documents for the given multi-modal query. The current state-of-the-art asymmetric dense retrieval model for this task uses an architecture with a multi-modal query encoder and a uni-modal document encoder. Such an architecture requires a large amount of training data for effective performance. We propose an automatic data generation pipeline for pre-training passage retrieval models for OK-VQA tasks. The proposed approach leads to a 26.9% improvement in Precision@5 compared to the current state-of-the-art asymmetric architecture. Additionally, the proposed pre-training approach performs well in zero-shot retrieval scenarios.
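    A minimal sketch of the asymmetric dual-encoder setup this abstract describes: a multi-modal query encoder and a uni-modal (text-only) document encoder score passages by dot product in a shared embedding space. The encoder internals, dimensions, and toy data below are illustrative assumptions, not the paper's actual architecture.

```python
# Illustrative asymmetric dense retriever: multi-modal queries vs. text-only
# documents, scored by dot product in a shared embedding space.
import torch
import torch.nn as nn


class MultiModalQueryEncoder(nn.Module):
    """Fuses question-text and image features into a single query vector."""

    def __init__(self, text_dim=768, image_dim=512, embed_dim=256):
        super().__init__()
        self.proj = nn.Linear(text_dim + image_dim, embed_dim)

    def forward(self, text_feat, image_feat):
        return self.proj(torch.cat([text_feat, image_feat], dim=-1))


class TextDocumentEncoder(nn.Module):
    """Encodes a passage from its text features alone."""

    def __init__(self, text_dim=768, embed_dim=256):
        super().__init__()
        self.proj = nn.Linear(text_dim, embed_dim)

    def forward(self, text_feat):
        return self.proj(text_feat)


query_encoder = MultiModalQueryEncoder()
doc_encoder = TextDocumentEncoder()

# Toy batch: 2 queries (text + image features) ranked against 5 passages.
q_text, q_image = torch.randn(2, 768), torch.randn(2, 512)
passages = torch.randn(5, 768)

q = query_encoder(q_text, q_image)       # (2, 256)
d = doc_encoder(passages)                # (5, 256)
scores = q @ d.T                         # (2, 5) relevance scores
ranking = scores.topk(k=5, dim=-1).indices
print(ranking)                           # ranked passage indices per query
```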

    A Symmetric Dual Encoding Dense Retrieval Framework for Knowledge-Intensive Visual Question Answering

    Full text link
    Knowledge-Intensive Visual Question Answering (KI-VQA) refers to answering a question about an image whose answer does not lie in the image. This paper presents a new pipeline for KI-VQA tasks, consisting of a retriever and a reader. First, we introduce DEDR, a symmetric dual encoding dense retrieval framework in which documents and queries are encoded into a shared embedding space using uni-modal (textual) and multi-modal encoders. We introduce an iterative knowledge distillation approach that bridges the gap between the representation spaces of these two encoders. Extensive evaluation on two well-established KI-VQA datasets, i.e., OK-VQA and FVQA, suggests that DEDR outperforms state-of-the-art baselines by 11.6% and 30.9% on OK-VQA and FVQA, respectively. Utilizing the passages retrieved by DEDR, we further introduce MM-FiD, an encoder-decoder multi-modal fusion-in-decoder model, for generating a textual answer for KI-VQA tasks. MM-FiD encodes the question, the image, and each retrieved passage separately and uses all passages jointly in its decoder. Compared to competitive baselines in the literature, this approach leads to 5.5% and 8.5% improvements in question answering accuracy on OK-VQA and FVQA, respectively.
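    An illustrative sketch of the knowledge-distillation idea described above: a KL-divergence loss pulls one encoder's query-document score distribution toward the other's so the two representation spaces align (in the iterative scheme, the teacher and student roles would alternate). The shapes, random features, and single-step loss are assumptions for demonstration, not DEDR's actual training recipe.

```python
# Illustrative only: aligning the score distributions of a textual encoder and
# a multi-modal encoder with a KL-divergence distillation loss. Here the
# textual side is fixed as the teacher for a single step; an iterative scheme
# would alternate the roles.
import torch
import torch.nn.functional as F

embed_dim, batch, num_docs = 256, 4, 8

# Stand-in embeddings; in a symmetric dual encoder, queries and documents
# within each path are produced by the same encoder.
q_text = torch.randn(batch, embed_dim)
d_text = torch.randn(num_docs, embed_dim)
q_mm = torch.randn(batch, embed_dim, requires_grad=True)
d_mm = torch.randn(num_docs, embed_dim, requires_grad=True)

scores_text = q_text @ d_text.T   # teacher scores, shape (batch, num_docs)
scores_mm = q_mm @ d_mm.T         # student scores, shape (batch, num_docs)

# Distillation loss: match the student's score distribution to the teacher's.
kd_loss = F.kl_div(
    F.log_softmax(scores_mm, dim=-1),
    F.softmax(scores_text, dim=-1),
    reduction="batchmean",
)
kd_loss.backward()                # gradients flow into the student encoder side
print(float(kd_loss))
```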

    PEACH: Pre-Training Sequence-to-Sequence Multilingual Models for Translation with Semi-Supervised Pseudo-Parallel Document Generation

    Full text link
    Multilingual pre-training significantly improves many multilingual NLP tasks, including machine translation. Most existing methods are based on variants of masked language modeling and text-denoising objectives on monolingual data. Multilingual pre-training on monolingual data, however, ignores the availability of parallel data in many language pairs. Other works integrate the available human-generated parallel translation data into their pre-training; this kind of parallel data is certainly helpful, but it is limited even for high-resource language pairs. This paper introduces a novel semi-supervised method, SPDG, that generates high-quality pseudo-parallel data for multilingual pre-training. First, a denoising model is pre-trained on monolingual data to reorder, add, remove, and substitute words, enhancing the quality of the pre-training documents. Then, we generate different pseudo-translations for each pre-training document by using dictionaries for word-by-word translation and applying the pre-trained denoising model. The resulting pseudo-parallel data is then used to pre-train our multilingual sequence-to-sequence model, PEACH. Our experiments show that PEACH outperforms existing approaches used in training mT5 and mBART on various translation tasks, including supervised, zero-shot, and few-shot scenarios. Moreover, PEACH's ability to transfer knowledge between similar languages makes it particularly useful for low-resource languages. Our results demonstrate that, with high-quality dictionaries for generating accurate pseudo-parallel data, PEACH can be valuable for low-resource languages.
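    A hedged sketch of the pseudo-parallel generation step described above: each source sentence is translated word by word with a bilingual dictionary, and the draft is then handed to a pre-trained denoising model for reordering and cleanup. The dictionary, example sentence, and `denoise` placeholder below are hypothetical; they illustrate the shape of the pipeline, not SPDG's implementation.

```python
# Hypothetical word-by-word dictionary step followed by a denoising pass,
# producing one pseudo-parallel sentence pair for pre-training.
def word_by_word_translate(sentence, bilingual_dict):
    """Replace each source word with a dictionary translation, keeping
    unknown words unchanged (the denoiser is expected to repair them)."""
    return " ".join(bilingual_dict.get(w, w) for w in sentence.split())


def denoise(draft):
    # Placeholder for the pre-trained sequence-to-sequence denoising model
    # that would reorder, add, remove, and substitute words in the draft.
    return draft


en_de = {"the": "die", "cat": "katze", "sleeps": "schläft"}  # toy dictionary
source = "the cat sleeps on the mat"

draft = word_by_word_translate(source, en_de)  # "die katze schläft on die mat"
pseudo_target = denoise(draft)                 # cleaned pseudo-translation
print((source, pseudo_target))                 # one pseudo-parallel pair
```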

    The efficacy and applicability of chimeric antigen receptor (CAR) T cell-based regimens for primary bone tumors: a comprehensive review of current evidence

    No full text
    Primary bone tumors (PBTs), although rare, can pose significant mortality and morbidity risks due to their high incidence of lung metastasis. Survival rates of patients with PBTs vary based on the tumor type, the therapeutic interventions, and the time of diagnosis. Despite advances in the management of patients with these tumors over the past four decades, survival rates do not appear to have improved significantly, underscoring the need for novel therapeutic interventions. Surgical resection with wide margins, radiotherapy, and systemic chemotherapy are the main lines of treatment for PBTs. Neoadjuvant and adjuvant chemotherapy, along with emerging immunotherapeutic approaches such as chimeric antigen receptor (CAR) T cell therapy, have the potential to improve treatment outcomes for patients with PBTs. CAR T cell therapy is already an established option in hematologic malignancies, with FDA approval of several CD19-targeting CAR T cell products. This review aims to highlight the potential of immunotherapeutic strategies, specifically CAR T cell therapy, in the management of PBTs.