557 research outputs found

    Effects of Ox-LDL on Macrophages NAD(P)H Autofluorescence Changes by Two-photon Microscopy

    Uptake of ox-LDL by macrophages plays a critical role in the onset of atherosclerosis. Because it causes little damage to the observed cells and offers a better signal-to-background ratio, two-photon excitation fluorescence microscopy was used to observe the NAD(P)H autofluorescence of macrophages under different culture conditions: bare cover glass, and cover glass coated with fibronectin or poly-D-lysine. The results show that the optimal condition is a fibronectin-coated surface, on which macrophage profiles can be clearly identified in NAD(P)H autofluorescence images collected by two-photon microscopy. Moreover, differences in morphology and autofluorescence intensity under the different conditions were observed as well. In the future, the effects of ox-LDL on macrophages will be investigated with the proposed system to study the etiology of atherosclerosis. Comment: Submitted on behalf of TIMA Editions (http://irevues.inist.fr/tima-editions

    Examining the online reading behavior and performance of fifth-graders: evidence from eye-movement data

    Online reading is developing at an increasingly rapid rate, but the debate over whether learning is more effective with hypertexts than with traditional linear texts persists. In addition, several researchers have stated that online reading comprehension always starts with a question, but little empirical evidence has been gathered to investigate this claim. This study used eye-tracking technology and a retrospective think-aloud technique to examine the online reading behaviors of fifth-graders (N = 50). The participants were asked to read four texts on a website. The study employed a three-way mixed design: 2 (reading ability: high vs. low) × 2 (reading goals: with vs. without) × 2 (text type: hypertext vs. linear text). The dependent variables were eye-movement indices and the frequency of online reading strategy use. The results show that fifth-graders, irrespective of their reading ability, found it difficult to navigate the nonlinear structure of hypertexts when searching for and integrating information. When they read with goals, they adjusted their reading speed and the focus of their attention. Their offline reading ability also influenced their online reading performance. These results suggest that online reading skills and strategies have to be taught in order to enhance the online reading abilities of elementary-school students.

    UniMSE: Towards Unified Multimodal Sentiment Analysis and Emotion Recognition

    Multimodal sentiment analysis (MSA) and emotion recognition in conversation (ERC) are key research topics for computers to understand human behaviors. From a psychological perspective, emotions are the expression of affect or feelings during a short period, while sentiments are formed and held over a longer period. However, most existing works study sentiment and emotion separately and do not fully exploit the complementary knowledge behind the two. In this paper, we propose a multimodal sentiment knowledge-sharing framework (UniMSE) that unifies the MSA and ERC tasks in terms of features, labels, and models. We perform modality fusion at the syntactic and semantic levels and introduce contrastive learning between modalities and samples to better capture the differences and consistency between sentiments and emotions. Experiments on four public benchmark datasets, MOSI, MOSEI, MELD, and IEMOCAP, demonstrate the effectiveness of the proposed method, which achieves consistent improvements over state-of-the-art methods. Comment: Accepted to EMNLP 2022 main conference
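
A common way to realize the inter-modality contrastive learning the abstract mentions is an InfoNCE-style objective that pulls paired representations together and pushes unpaired ones apart. The sketch below is an illustrative assumption, not UniMSE's actual formulation; the function name and plain-list vectors are hypothetical.

```python
import math

# Hedged sketch of an InfoNCE-style contrastive loss: the anchor should
# score higher with its positive (paired) representation than with any
# negative. This is one standard instantiation, not the paper's exact one.

def info_nce(anchor, positive, negatives, temperature=0.1):
    """All inputs are plain feature vectors (lists of floats)."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    # Similarity of the anchor to the positive and to each negative
    logits = [cos(anchor, positive) / temperature] + [
        cos(anchor, n) / temperature for n in negatives
    ]
    # Softmax cross-entropy with the positive pair as the target class
    m = max(logits)
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))
```

When the anchor matches its positive and is orthogonal to the negatives, the loss approaches zero; mismatched pairs drive it up, which is the signal used to align modalities.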

    UniSA: Unified Generative Framework for Sentiment Analysis

    Sentiment analysis is a crucial task that aims to understand people's emotional states and predict emotional categories based on multimodal information. It consists of several subtasks, such as emotion recognition in conversation (ERC), aspect-based sentiment analysis (ABSA), and multimodal sentiment analysis (MSA). However, unifying all subtasks in sentiment analysis presents numerous challenges, including modality alignment, unified input/output forms, and dataset bias. To address these challenges, we propose a Task-Specific Prompt method to jointly model subtasks and introduce a multimodal generative framework called UniSA. Additionally, we organize the benchmark datasets of main subtasks into a new Sentiment Analysis Evaluation benchmark, SAEval. We design novel pre-training tasks and training methods to enable the model to learn generic sentiment knowledge among subtasks to improve the model's multimodal sentiment perception ability. Our experimental results show that UniSA performs comparably to the state-of-the-art on all subtasks and generalizes well to various subtasks in sentiment analysis.Comment: Accepted to ACM MM 202

    Self-Explanation Prompting Improves Dialogue Understanding in Large Language Models

    Task-oriented dialogue (TOD) systems help users carry out various activities via multi-turn dialogues, but Large Language Models (LLMs) often struggle to comprehend these intricate contexts. In this study, we propose a novel "Self-Explanation" prompting strategy to enhance the comprehension abilities of LLMs in multi-turn dialogues. This task-agnostic approach requires the model to analyze each dialogue utterance before executing the task, thereby improving performance across various dialogue-centric tasks. Experimental results on six benchmark datasets confirm that our method consistently outperforms other zero-shot prompts and matches or exceeds the efficacy of few-shot prompts, demonstrating its potential as a powerful tool for enhancing LLMs' comprehension in complex dialogue tasks.
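
The strategy described above can be approximated by a prompt that asks the model to explain every utterance before attempting the task. The template below is a hypothetical sketch: the function name, wording, and example turns are assumptions for illustration, not the paper's exact prompt.

```python
# Hypothetical "Self-Explanation"-style prompt builder: instruct the model
# to first explain each dialogue turn, then perform the downstream task.

def build_self_explanation_prompt(dialogue_turns, task_instruction):
    """dialogue_turns: list of (speaker, utterance) pairs."""
    lines = ["You are given a multi-turn dialogue."]
    for i, (speaker, text) in enumerate(dialogue_turns, start=1):
        lines.append(f"Turn {i} ({speaker}): {text}")
    lines.append(
        "First, explain in one sentence what each turn conveys. "
        "Then, using those explanations, complete the task below."
    )
    lines.append(f"Task: {task_instruction}")
    return "\n".join(lines)

prompt = build_self_explanation_prompt(
    [("user", "I need a table for two tonight."),
     ("system", "Sure, what time?")],
    "Track the user's booking goal.",
)
```

Because the instruction is task-agnostic, the same wrapper can front any dialogue-centric task (state tracking, response selection, summarization) by swapping the task instruction.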

    Metronidazole-Induced Irreversible Optic Neuropathy

    Metronidazole-induced optic neuropathy is a rare complication, and most patients have excellent visual recovery. In this study, we report a patient who presented with sudden onset of severe visual loss after a 1-week course of metronidazole. Myelitis developed simultaneously. The patient's vision and accompanying neurological deficits did not improve even though metronidazole was discontinued immediately and various treatments were given.

    Improving Factual Consistency of Text Summarization by Adversarially Decoupling Comprehension and Embellishment Abilities of LLMs

    Despite the recent progress in text summarization made by large language models (LLMs), they often generate summaries that are factually inconsistent with the original articles, known as "hallucinations" in text generation. Unlike earlier small models (e.g., BART, T5), current LLMs make fewer silly mistakes but more sophisticated ones, such as imposing cause and effect, adding false details, and overgeneralizing. These hallucinations are hard to detect with traditional methods, which poses great challenges for improving the factual consistency of text summarization. In this paper, we propose an adversarially DEcoupling method to disentangle the Comprehension and EmbellishmeNT abilities of LLMs (DECENT). Furthermore, we adopt an efficient probing-based training to compensate for LLMs' lack of sensitivity to truth and falsehood during training. In this way, LLMs are less confused about embellishing versus understanding; thus, they can execute instructions more accurately and are better able to recognize hallucinations. Experimental results show that DECENT significantly improves the reliability of LLM-based text summarization.

    SpokenWOZ: A Large-Scale Speech-Text Benchmark for Spoken Task-Oriented Dialogue Agents

    Task-oriented dialogue (TOD) models have made significant progress in recent years. However, previous studies primarily focus on datasets written by annotators, which has resulted in a gap between academic research and real-world spoken conversation scenarios. While several small-scale spoken TOD datasets have been proposed to address robustness issues such as ASR errors, they ignore the unique challenges of spoken conversation. To tackle these limitations, we introduce SpokenWOZ, a large-scale speech-text dataset for spoken TOD, containing 8 domains, 203k turns, 5.7k dialogues, and 249 hours of audio from human-to-human spoken conversations. SpokenWOZ further incorporates common spoken characteristics such as word-by-word processing and reasoning in spoken language. Based on these characteristics, we present cross-turn slot detection and reasoning slot detection as new challenges. We conduct experiments on various baselines, including text-modal models, newly proposed dual-modal models, and LLMs such as ChatGPT. The results show that current models still have substantial room for improvement in spoken conversation: the most advanced dialogue state tracker achieves only 25.65% joint goal accuracy, and the SOTA end-to-end model correctly completes the user request in only 52.1% of dialogues. The dataset, code, and leaderboard are available: https://spokenwoz.github.io/SpokenWOZ-github.io/
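
The joint goal accuracy figure cited above is the standard dialogue state tracking metric: a turn counts as correct only if every predicted slot-value pair matches the gold state exactly. A minimal sketch, with the slot names and values being illustrative assumptions:

```python
# Minimal sketch of joint goal accuracy (JGA): a turn is correct only if
# the full predicted dialogue state equals the gold state.

def joint_goal_accuracy(predicted_states, gold_states):
    """predicted_states / gold_states: lists of {slot: value} dicts, one per turn."""
    correct = sum(p == g for p, g in zip(predicted_states, gold_states))
    return correct / len(gold_states)

# Toy example (slot names are hypothetical): one exact match, one miss
gold = [{"restaurant-area": "north"},
        {"restaurant-area": "north", "restaurant-time": "19:00"}]
pred = [{"restaurant-area": "north"},
        {"restaurant-area": "north", "restaurant-time": "18:00"}]
print(joint_goal_accuracy(pred, gold))  # 0.5
```

The all-or-nothing scoring is what makes 25.65% a telling number: a single wrong slot in a turn, such as the mis-heard time above, zeroes out that turn.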

    Clinical radiomics-based machine learning versus three-dimension convolutional neural network analysis for differentiation of thymic epithelial tumors from other prevascular mediastinal tumors on chest computed tomography scan

    Purpose: To compare the diagnostic performance of radiomic analysis with a machine learning (ML) model against a convolutional neural network (CNN) in differentiating thymic epithelial tumors (TETs) from other prevascular mediastinal tumors (PMTs). Methods: A retrospective study was performed on patients with PMTs who underwent surgical resection or biopsy at National Cheng Kung University Hospital, Tainan, Taiwan, E-Da Hospital, Kaohsiung, Taiwan, and Kaohsiung Veterans General Hospital, Kaohsiung, Taiwan between January 2010 and December 2019. Clinical data including age, sex, myasthenia gravis (MG) symptoms, and pathologic diagnosis were collected. The datasets were divided into UECT (unenhanced computed tomography) and CECT (contrast-enhanced computed tomography) for analysis and modelling. A radiomics model and a 3D CNN model were used to differentiate TETs from non-TET PMTs (including cyst, malignant germ cell tumor, lymphoma, and teratoma). The macro F1-score and receiver operating characteristic (ROC) analysis were used to evaluate the prediction models. Results: In the UECT dataset, there were 297 patients with TETs and 79 patients with other PMTs. The radiomic analysis with an ML model using LightGBM with Extra Tree (macro F1-score = 83.95%, ROC-AUC = 0.9117) performed better than the 3D CNN model (macro F1-score = 75.54%, ROC-AUC = 0.9015). In the CECT dataset, there were 296 patients with TETs and 77 patients with other PMTs. The radiomic analysis with an ML model using LightGBM with Extra Tree (macro F1-score = 85.65%, ROC-AUC = 0.9464) again performed better than the 3D CNN model (macro F1-score = 81.01%, ROC-AUC = 0.9275). Conclusion: Our study revealed that an individualized prediction model integrating clinical information and radiomic features using machine learning demonstrated better predictive performance in differentiating TETs from other PMTs on chest CT than a 3D CNN model.
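
The macro F1-score reported above averages per-class F1 so that the minority class (non-TET PMTs, outnumbered roughly 4:1 here) counts equally. A self-contained sketch of the metric; the toy labels are invented for illustration, not study data:

```python
# Illustrative macro F1 computation: per-class precision/recall/F1,
# averaged with equal weight per class regardless of class frequency.

def macro_f1(y_true, y_pred):
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Toy labels (hypothetical): one non-TET case misclassified as TET
y_true = ["TET", "TET", "non-TET", "non-TET"]
y_pred = ["TET", "TET", "TET", "non-TET"]
```

With the class imbalance in this cohort, a classifier that predicted "TET" for everyone would score high plain accuracy but a poor macro F1, which is why the equal-weight average is the more informative summary here.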