
    Analysis of the early response to chemotherapy in lung cancer using apparent diffusion coefficient single-slice histogram

    Purpose: To evaluate the application of apparent diffusion coefficient (ADC) values derived from diffusion-weighted imaging (DWI) using single-slice histogram analysis for assessing chemotherapy response in lung cancer. Methods: A total of 22 patients with advanced lung cancer undergoing chemotherapy at Nanjing Drum Tower Hospital (Nanjing, China) were included in the study. We obtained DWI before and during chemotherapy, performed single-slice histogram analysis of ADC values, and assessed responses after 3 months of chemotherapy. ADC histogram parameters were compared between the responder and non-responder groups. Results: After therapy, 13 patients were classified as responders and 9 as non-responders. Baseline peak ADC (ADCpeak) and lowest ADC (ADClowest) values did not differ significantly between responders and non-responders. After chemotherapy, the 13 responders showed significant increases in ADClowest and ADCpeak compared with pre-treatment values (p < 0.001). ADClowest also increased significantly in the 9 non-responders (p < 0.05), whereas ADCpeak did not. The change in ADCpeak was significantly larger in the responder group than in the non-responder group (p = 0.024). The change in ADClowest after treatment was larger in the responder group than in the non-responder group, though not significantly. Conclusion: ADC values derived from single-slice histogram analysis may provide a useful and clinically feasible method for monitoring early chemotherapy response in patients with lung cancer. Keywords: Lung cancer, Chemotherapy, Apparent diffusion coefficient values, Diffusion-weighted imaging, Single-slice histogram analysis
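
    The histogram parameters reported above (ADClowest and ADCpeak) can be extracted from a single ADC map slice in a few lines of NumPy. The sketch below is a hypothetical illustration only; the array shapes, ROI mask, and bin count are assumptions, not the authors' protocol.

```python
# Hypothetical sketch: single-slice ADC histogram parameters (not the authors' exact pipeline).
import numpy as np

def adc_histogram_params(adc_slice, roi_mask, bins=128):
    """Return (ADClowest, ADCpeak) for one ADC map slice restricted to a tumor ROI."""
    voxels = adc_slice[roi_mask]                      # ADC values (x10^-3 mm^2/s) inside the ROI
    counts, edges = np.histogram(voxels, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    adc_lowest = float(voxels.min())                  # lowest ADC value in the ROI
    adc_peak = float(centers[np.argmax(counts)])      # ADC value at the histogram peak (mode)
    return adc_lowest, adc_peak

# Example with synthetic data: compare pre- and mid-treatment slices of the same lesion.
rng = np.random.default_rng(0)
pre = rng.normal(1.1, 0.2, (64, 64))
post = rng.normal(1.4, 0.2, (64, 64))                # responders typically show increased ADC
mask = np.ones((64, 64), dtype=bool)
low0, peak0 = adc_histogram_params(pre, mask)
low1, peak1 = adc_histogram_params(post, mask)
print(f"Delta ADClowest = {low1 - low0:.2f}, Delta ADCpeak = {peak1 - peak0:.2f}")
```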

    Comparison of the safety and efficacy of propofol and dexmedetomidine as sedatives when used as a modified topical formulation

    Purpose: To evaluate the safety and efficacy of propofol and dexmedetomidine as sedatives in patients with anticipated difficult airways, used as a modified topical preparation.Methods: A total of 432 patients were enrolled in this study. They were classified as ASA I and ASA II. The patients were equally divided into group A (propofol group) and group B (dexmedetomidine group). A modified Awake Fiberoptic Intubation (AFOI) was carried out for these patients, followed by airway assessment and evaluation of clinical outcome based on intubation scores, adverse events, and postoperative data.Results: Patients in both groups had successful intubation at the first attempt. There was no significant difference in baseline characteristics between the two groups. The SARI scores which characterized the overall score for tracheal intubation were 4.6 and 4.2 for groups A and B, respectively. With respect to rescue infusion and consciousness, 11 patients (5.09 %) in group A required rescue, as against 5 patients (2.31 %) in group B. Seven (7) patients (3.24 %) in group A (propofol group) had severe airway obstruction, while only 4 patients (1.85) in group B had the same adverse reaction. Patients in group B had more satisfactory and favourable outcomes than those in group A who were treated with modified AFOI.Conclusion: The use of dexmedetomidine based on modified topical anaesthesia is safe and comfortable in terms of patient convenience and difficult airway management. Thus, dexmedetomidine is a safe, feasible and effective method for managing difficult airway when applied using the modified AFOI
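
    The rescue-infusion rates quoted above follow from the group sizes (432 / 2 = 216 per group, so 11/216 = 5.09 % and 5/216 = 2.31 %). The sketch below shows one standard way such counts could be compared with a contingency-table test; it is illustrative only and is not the statistical analysis reported by the authors.

```python
# Illustrative only: compare the reported rescue-infusion counts (11/216 vs 5/216)
# with a chi-square test; this is not the authors' reported statistical analysis.
from scipy.stats import chi2_contingency

group_a = [11, 216 - 11]   # propofol: rescued, not rescued
group_b = [5, 216 - 5]     # dexmedetomidine: rescued, not rescued
chi2, p, dof, expected = chi2_contingency([group_a, group_b])
print(f"rate A = {11/216:.2%}, rate B = {5/216:.2%}, p = {p:.3f}")
```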

    RAHNet: Retrieval Augmented Hybrid Network for Long-tailed Graph Classification

    Graph classification is a crucial task in many real-world multimedia applications, where graphs can represent various multimedia data types such as images, videos, and social networks. Previous efforts have mainly applied graph neural networks (GNNs) in settings where the class distribution is balanced. However, real-world data typically exhibit long-tailed class distributions, which biases GNNs towards the head classes and limits their generalization over the tail classes. Recent approaches mainly focus on re-balancing the classes during model training, which fails to explicitly introduce new knowledge and sacrifices the performance of the head classes. To address these drawbacks, we propose a novel framework called Retrieval Augmented Hybrid Network (RAHNet) to jointly learn a robust feature extractor and an unbiased classifier in a decoupled manner. In the feature extractor training stage, we develop a graph retrieval module to search for relevant graphs that directly enrich the intra-class diversity for the tail classes. Moreover, we innovatively optimize a category-centered supervised contrastive loss to obtain discriminative representations, which is more suitable for long-tailed scenarios. In the classifier fine-tuning stage, we balance the classifier weights with two weight regularization techniques, i.e., Max-norm and weight decay. Experiments on various popular benchmarks verify the superiority of the proposed method against state-of-the-art approaches. Comment: Accepted by the ACM International Conference on Multimedia (MM) 2023
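
    The classifier fine-tuning stage names two generic weight regularizers, Max-norm and weight decay. The sketch below shows a minimal PyTorch rendering of that combination; the layer sizes, max-norm radius, and optimizer settings are assumptions for illustration, not RAHNet's actual configuration.

```python
# Minimal sketch of the two classifier-weight regularizers named in the abstract
# (Max-norm + weight decay); layer sizes and the max-norm radius are assumptions.
import torch
import torch.nn as nn

classifier = nn.Linear(128, 50)          # graph embedding dim -> number of classes (assumed)
optimizer = torch.optim.SGD(classifier.parameters(), lr=0.1, weight_decay=5e-4)  # weight decay

def apply_max_norm(linear, max_norm=1.0):
    """Renormalize each class weight vector so its L2 norm never exceeds max_norm."""
    with torch.no_grad():
        norms = linear.weight.norm(dim=1, keepdim=True).clamp(min=1e-12)
        scale = (max_norm / norms).clamp(max=1.0)
        linear.weight.mul_(scale)

# Inside the fine-tuning loop: step with weight decay, then constrain the weights.
x, y = torch.randn(32, 128), torch.randint(0, 50, (32,))
loss = nn.functional.cross_entropy(classifier(x), y)
loss.backward()
optimizer.step()
optimizer.zero_grad()
apply_max_norm(classifier)
```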

    ALEX: Towards Effective Graph Transfer Learning with Noisy Labels

    Graph Neural Networks (GNNs) have garnered considerable interest due to their exceptional performance in a wide range of graph machine learning tasks. Nevertheless, the majority of GNN-based approaches have been examined using well-annotated benchmark datasets, leading to suboptimal performance in real-world graph learning scenarios. To bridge this gap, the present paper investigates the problem of graph transfer learning in the presence of label noise, which transfers knowledge from a noisy source graph to an unlabeled target graph. We introduce a novel technique termed Balance Alignment and Information-aware Examination (ALEX) to address this challenge. ALEX first employs singular value decomposition to generate different views with crucial structural semantics, which help provide robust node representations using graph contrastive learning. To mitigate both label shift and domain shift, we estimate a prior distribution to build subgraphs with balanced label distributions. Building on this foundation, an adversarial domain discriminator is incorporated for the implicit domain alignment of complex multi-modal distributions. Furthermore, we project node representations into a different space, optimizing the mutual information between the projected features and labels. Subsequently, the inconsistency of similarity structures is evaluated to identify noisy samples with potential overfitting. Comprehensive experiments on various benchmark datasets demonstrate the clear superiority of the proposed ALEX in different settings. Comment: Accepted by the ACM International Conference on Multimedia (MM) 2023
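
    As a rough illustration of the first step, SVD-based view generation can be read as building a low-rank, structurally denoised reconstruction of the adjacency matrix to serve as a second view for contrastive learning. The sketch below is a generic rendering of that idea, not ALEX's exact construction; the rank k and graph size are assumptions.

```python
# Hedged sketch of SVD-based view generation for graph contrastive learning: a low-rank
# reconstruction of the adjacency matrix acts as a structurally denoised second view.
# Generic illustration only, not ALEX's exact procedure; rank k is an assumption.
import numpy as np

def low_rank_view(adj, k=16):
    """Return a rank-k approximation of a (symmetric) adjacency matrix."""
    u, s, vt = np.linalg.svd(adj, full_matrices=False)
    return (u[:, :k] * s[:k]) @ vt[:k, :]

adj = (np.random.rand(100, 100) < 0.05).astype(float)
adj = np.maximum(adj, adj.T)             # symmetrize the random graph
view = low_rank_view(adj, k=16)          # second "view" fed to the contrastive objective
print(view.shape, np.abs(adj - view).mean())
```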

    Towards Long-Tailed Recognition for Graph Classification via Collaborative Experts

    Graph classification, which aims to learn graph-level representations for effective class assignment, has achieved notable progress, but this progress relies heavily on high-quality datasets with balanced class distributions. In fact, most real-world graph data naturally follow a long-tailed form, where the head classes occupy many more samples than the tail classes; it is therefore essential to study graph-level classification over long-tailed data, yet this remains largely unexplored. Moreover, most existing long-tailed learning methods in vision fail to jointly optimize representation learning and classifier training, and neglect the mining of hard-to-classify classes. Directly applying existing methods to graphs may lead to sub-optimal performance, since a model trained on graphs is more sensitive to the long-tailed distribution due to complex topological characteristics. Hence, in this paper, we propose a novel long-tailed graph-level classification framework via Collaborative Multi-expert Learning (CoMe) to tackle the problem. To equilibrate the contributions of head and tail classes, we first develop balanced contrastive learning from the view of representation learning, and then design an individual-expert classifier training scheme based on hard class mining. In addition, we execute gated fusion and disentangled knowledge distillation among the multiple experts to promote collaboration in the multi-expert framework. Comprehensive experiments on seven widely-used benchmark datasets demonstrate the superiority of our method CoMe over state-of-the-art baselines. Comment: Accepted by IEEE Transactions on Big Data (TBD 2024)
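
    To make the multi-expert idea concrete, the sketch below shows one generic reading of gated fusion: a small gate produces per-sample weights over several expert classifiers and mixes their logits. The number of experts, dimensions, and gate design are assumptions for illustration and are not CoMe's actual architecture.

```python
# Generic sketch of gated fusion over multiple expert classifiers (an illustrative reading
# of the abstract; expert count, dimensions, and gate design are assumptions).
import torch
import torch.nn as nn

class GatedExperts(nn.Module):
    def __init__(self, feat_dim=128, num_classes=50, num_experts=3):
        super().__init__()
        self.experts = nn.ModuleList([nn.Linear(feat_dim, num_classes) for _ in range(num_experts)])
        self.gate = nn.Linear(feat_dim, num_experts)   # per-sample weights over experts

    def forward(self, h):
        logits = torch.stack([e(h) for e in self.experts], dim=1)      # (B, E, C)
        weights = torch.softmax(self.gate(h), dim=-1).unsqueeze(-1)    # (B, E, 1)
        return (weights * logits).sum(dim=1)                           # fused logits (B, C)

model = GatedExperts()
out = model(torch.randn(32, 128))
print(out.shape)   # torch.Size([32, 50])
```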

    Analysis of Yarrowia lipolytica Growth, Catabolism, and Terpenoid Biosynthesis during Utilization of Lipid-derived Feedstock

    This study employs biomass growth analyses and 13C-isotope tracing to investigate lipid feedstock utilization by Yarrowia lipolytica. Compared to glucose, oil feedstock in the minimal medium increases the yeast's biomass yields and cell sizes, but decreases its protein content (<20% of total biomass) and the abundance of enzymes for product synthesis. Labeling results indicate a segregated metabolic network (glycolysis vs. the TCA cycle) during co-catabolism of sugars (glucose or glycerol) with fatty acid substrates, which facilitates resource allocation for biosynthesis without catabolite repression. This study also examined the performance of a β-carotene producing strain in different growth media. Canola oil-containing yeast-peptone (YP) medium resulted in the best β-carotene titer (121 ± 13 mg/L), two-fold higher than the glucose-based YP medium. These results highlight the potential of Y. lipolytica for the valorization of waste-derived lipid feedstocks.

    Abdomen anatomic characteristics on CT scans as predictive markers for short-term complications following radical resection of colorectal cancer

    Background: Prediction and management of short-term postoperative complications in patients with colorectal cancer are essential for postoperative rehabilitation. Several parameters of abdominal anatomic characteristics can easily be measured on CT scan images. This study aimed to assess whether there is a relationship between abdominal anatomic characteristics and short-term postoperative complications. Materials and methods: We conducted a retrospective study. Eighty patients each were recruited into the complication and non-complication groups using propensity score matching. Demographics, perioperative laboratory results, and surgical information were collected and compared between groups with univariate analysis. Significant variables were entered into subsequent logistic regression and ROC analyses for further identification. Results: Univariate analysis showed that preoperative white blood cell count, preoperative neutrophil count, rectus abdominis thickness (RAT), subcutaneous fat thickness (SFT), and abdomen depth (AD) differed significantly between the complication and non-complication groups. Logistic regression analysis demonstrated that higher RAT (p = 0.002), SFT (p < 0.001), and AD (p < 0.001) independently predicted the incidence of short-term postoperative complications. Conclusions: In this study of patients undergoing radical resection of colorectal cancer, abdominal anatomic characteristics including higher RAT, SFT, and AD are associated with an increased risk of short-term postoperative complications.
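
    The analysis pipeline described above (univariate screening, then logistic regression and ROC analysis) maps onto a short scikit-learn workflow. The sketch below uses synthetic data and arbitrary coefficients purely to show the shape of such an analysis; it does not reproduce the study cohort or its results.

```python
# Illustrative workflow only (synthetic data, not the study cohort): logistic regression on
# CT-derived measurements (RAT, SFT, AD) followed by ROC analysis, mirroring the methods above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 160                                               # 80 per group, as in the matched design
X = rng.normal(size=(n, 3))                           # columns: RAT, SFT, AD (standardized)
y = (X @ np.array([0.8, 1.2, 1.0]) + rng.normal(scale=1.0, size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print("odds ratios:", np.exp(model.coef_).round(2), "AUC:", round(auc, 2))
```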

    WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing

    Self-supervised learning (SSL) has achieved great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As the speech signal contains multi-faceted information including speaker identity, paralinguistics, and spoken content, learning universal representations for all speech tasks is challenging. To tackle this problem, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM jointly learns masked speech prediction and denoising in pre-training. By this means, WavLM not only keeps the speech content modeling capability through masked speech prediction, but also improves its potential on non-ASR tasks through speech denoising. In addition, WavLM employs a gated relative position bias in the Transformer structure to better capture the sequence ordering of input speech. We also scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks. The code and pre-trained models are available at https://aka.ms/wavlm. Comment: Submitted to the Journal of Selected Topics in Signal Processing (JSTSP)
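
    For readers who want to use the released models as frame-level feature extractors for downstream tasks, the sketch below shows a minimal quick start via the Hugging Face `transformers` port. The "microsoft/wavlm-large" checkpoint name is an assumption about the commonly distributed weights; the official code and checkpoints are linked at https://aka.ms/wavlm.

```python
# Quick-start sketch: extracting WavLM representations for downstream tasks via the
# Hugging Face `transformers` port (the "microsoft/wavlm-large" checkpoint name is an
# assumption; the official release is linked at https://aka.ms/wavlm).
import torch
from transformers import AutoFeatureExtractor, WavLMModel

extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-large")
model = WavLMModel.from_pretrained("microsoft/wavlm-large")

waveform = torch.randn(16000)                          # 1 second of 16 kHz audio (dummy input)
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state         # frame-level features for downstream heads
print(hidden.shape)
```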