53 research outputs found

    Leptin Receptor Overlapping Transcript (LEPROT) Is Associated with the Tumor Microenvironment and a Prognostic Predictor in Pan-Cancer

    Background: Leptin receptor overlapping transcript (LEPROT) is reported to be involved in metabolic regulation and energy balance, as well as in the molecular signaling of breast cancer and osteosarcoma. LEPROT is expressed in various tissues and has been implicated in cancer development, but with contradictory roles, and comprehensive knowledge of its effects on cancer development and progression across pan-cancer is still lacking. Methods: The expression of LEPROT in cancers was compared with that in corresponding normal tissues across pan-cancer types. The relationship between LEPROT expression and methylation was then examined. The correlations of LEPROT with the tumor microenvironment (TME), including immune checkpoints, tumor immune cell infiltration (TII), and cancer-associated fibroblasts (CAFs), were also investigated. Co-expression and functional enrichment analyses were conducted to identify the genes most relevant to LEPROT and the mechanisms of its effects in cancers. Finally, the correlations of LEPROT with patient survival and immunotherapy response were explored. Results: LEPROT expression was significantly aberrant in 15/19 (78.9%) cancers compared with corresponding normal tissues: LEPROT was downregulated in 12 cancers and upregulated in 3. LEPROT expression was overall negatively correlated with its methylation alterations. Moreover, LEPROT was profoundly correlated with the TME, including immune checkpoints, TIIs, and CAFs. According to the co-expression and functional enrichment analyses, the interactions of LEPROT with the TME may be mediated by the interleukin 6 signal transducer/Janus kinase/signal transducer and activator of transcription (IL6ST/JAK/STAT) signaling pathway. LEPROT may have prognostic value for predicting patient survival and immunotherapy response in a context-dependent way.
    Conclusions: LEPROT affects cancer development by interfering with the TME and regulating inflammatory or immune signals. LEPROT may also serve as a potential prognostic marker or a target in cancer therapy. This is the first study to investigate the roles of LEPROT across pan-cancer.

    'Don't Get Too Technical with Me': A Discourse Structure-Based Framework for Science Journalism

    Science journalism is the task of reporting the technical findings of a scientific paper as a less technical news article for the general public. We aim to support this real-world task with an automated system (i.e., automatic science journalism) by 1) introducing a newly constructed, real-world dataset (SciTechNews), with tuples of a publicly available scientific paper, its corresponding news article, and an expert-written short summary snippet; 2) proposing a novel technical framework that integrates a paper's discourse structure with its metadata to guide generation; and 3) demonstrating with extensive automatic and human experiments that our framework outperforms baseline methods (e.g., Alpaca and ChatGPT) in elaborating a content plan meaningful for the target audience, simplifying the selected information, and producing a coherent final report in a layman's style.
    Comment: Accepted to EMNLP 202

    HongTu: Scalable Full-Graph GNN Training on Multiple GPUs (via communication-optimized CPU data offloading)

    Full-graph training of graph neural networks (GNNs) has emerged as a promising training method for its effectiveness, but it requires extensive memory and computation resources. To accelerate training, researchers have proposed multi-GPU processing. However, the scalability of existing frameworks is limited because they must keep the training data for every layer in GPU memory. To efficiently train on large graphs, we present HongTu, a scalable full-graph GNN training system running on GPU-accelerated platforms. HongTu stores vertex data in CPU memory and offloads training to GPUs. HongTu employs a memory-efficient full-graph training framework that reduces runtime memory consumption through partition-based training and recomputation-caching-hybrid intermediate data management. To address the increased host-GPU communication caused by duplicated neighbor access among partitions, HongTu employs a deduplicated communication framework that converts redundant host-GPU communication into efficient inter/intra-GPU data access. Further, HongTu uses a cost-model-guided graph reorganization method to minimize communication overhead. Experimental results on a server with 4×A100 GPUs show that HongTu effectively supports billion-scale full-graph GNN training while reducing host-GPU data communication by 25%-71%. Compared to the full-graph GNN system DistGNN running on 16 CPU nodes, HongTu achieves speedups ranging from 7.8X to 20.2X. For small graphs whose training data fits into the GPUs, HongTu achieves performance comparable to existing GPU-based GNN systems.
    Comment: 28 pages, 11 figures, SIGMOD202
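The deduplicated-communication idea can be sketched in a few lines: rather than copying a boundary vertex from host memory once per partition that reads it, copy it once and let later partitions reuse the copy already resident on a GPU. The function and data layout below are illustrative assumptions for exposition, not HongTu's actual interface:

```python
# Hypothetical sketch of deduplicated communication planning: each host->GPU
# copy happens once per vertex; every further read becomes a GPU-side access.

def plan_transfers(partition_neighbors):
    """partition_neighbors: list of sets of vertex ids each partition reads."""
    host_to_gpu = []   # (vertex, first partition that fetches it over PCIe)
    gpu_reuse = []     # (vertex, partition holding it, partition reusing it)
    owner = {}         # vertex -> partition already holding it on GPU
    for pid, needed in enumerate(partition_neighbors):
        for v in sorted(needed):
            if v in owner:
                gpu_reuse.append((v, owner[v], pid))   # inter/intra-GPU access
            else:
                owner[v] = pid
                host_to_gpu.append((v, pid))           # one host->GPU copy
    return host_to_gpu, gpu_reuse

# toy partitions with overlapping neighbor sets
parts = [{0, 1, 2}, {1, 2, 3}, {2, 4}]
h2g, reuse = plan_transfers(parts)
```

On this toy input a naive plan would move 3 + 3 + 2 = 8 vertex payloads over the host-GPU link; the deduplicated plan moves 5 and serves the remaining 3 reads from GPU memory.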

    LLM-Powered Conversational Voice Assistants: Interaction Patterns, Opportunities, Challenges, and Design Guidelines

    Conventional Voice Assistants (VAs) rely on traditional language models to discern user intent and respond to queries, leading to interactions that often lack broader contextual understanding, an area in which Large Language Models (LLMs) excel. However, current LLMs are largely designed for text-based interactions, so it is unclear how user interactions will evolve if the modality is changed to voice. In this work, we investigate whether LLMs can enrich VA interactions via an exploratory study with participants (N=20) using a ChatGPT-powered VA in three scenarios (medical self-diagnosis, creative planning, and debate) with varied constraints, stakes, and objectivity. We observe that the LLM-powered VA elicits richer interaction patterns that vary across tasks, showing its versatility. Notably, LLMs absorb the majority of VA intent-recognition failures. We additionally discuss the potential of harnessing LLMs for more resilient and fluid user-VA interactions and provide design guidelines for tailoring LLMs for voice assistance.

    Efficient Memory Management for GPU-based Deep Learning Systems

    GPUs (graphics processing units) are used for many data-intensive applications, and deep learning systems are among their most important consumers today. As deep learning applications adopt deeper and larger models to achieve higher accuracy, memory management becomes an important research topic for deep learning systems, given that GPUs have limited memory. Many approaches have been proposed for this issue, e.g., model compression and memory swapping, but they either degrade model accuracy or require substantial manual intervention. In this paper, we propose two orthogonal approaches to reduce memory cost from the system perspective. Our approaches are transparent to the models and thus do not affect model accuracy. They exploit the iterative nature of the deep learning training algorithm to derive the lifetime and read/write order of all variables. With the lifetime semantics, we implement a memory pool with minimal fragmentation. Because the underlying optimization problem is NP-complete, we propose a heuristic algorithm that reduces memory by up to 13.3% compared with NVIDIA's default memory pool, with equal time complexity. With the read/write semantics, variables that are not in use can be swapped out from GPU to CPU to reduce the memory footprint. We propose multiple swapping strategies that automatically decide which variable to swap and when to swap it out (in), reducing the memory cost by up to 34.2% without communication overhead.
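The lifetime-based memory pool can be illustrated with a small sketch: two tensors whose lifetimes do not overlap may occupy the same offset in the pool. Offset assignment with known lifetimes is NP-complete, so the sketch below uses a simple largest-first, first-fit heuristic; the function and numbers are illustrative and not the paper's actual algorithm:

```python
# Sketch of lifetime-aware pooling: tensors with lifetimes [alloc, free)
# that never coexist in time may share the same pool offset.

def assign_offsets(tensors):
    """tensors: list of (size, alloc_step, free_step). Returns the peak pool
    size and the (offset, size, start, end) placements, processed
    largest-first as a common heuristic for this NP-complete problem."""
    placed = []
    for size, start, end in sorted(tensors, key=lambda t: -t[0]):
        offset = 0
        for o, s, ps, pe in sorted(placed):      # scan in offset order
            in_time = start < pe and ps < end    # lifetimes overlap
            in_space = offset < o + s and o < offset + size
            if in_time and in_space:
                offset = o + s                   # first-fit: bump past it
        placed.append((offset, size, start, end))
    peak = max(o + s for o, s, _, _ in placed)
    return peak, placed

# the two 4-unit tensors have disjoint lifetimes and share offset 0,
# so the pool peaks at 6 units instead of the naive 4 + 4 + 2 = 10
peak, layout = assign_offsets([(4, 0, 2), (4, 2, 4), (2, 0, 4)])
```

The first-fit scan is safe because placed blocks are visited in increasing offset order: once a candidate offset clears all earlier blocks, bumping past a later block can never re-introduce a conflict with an earlier one.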

    LightRW: FPGA Accelerated Graph Dynamic Random Walks

    Graph dynamic random walks (GDRWs) have recently emerged as a powerful paradigm for graph analytics and learning applications, including graph embedding and graph neural networks. Although many existing studies optimize the performance of GDRWs on multi-core CPUs, massive random memory accesses and costly synchronization cause severe resource underutilization, and GDRW processing is usually the key performance bottleneck in many graph applications. This paper studies an alternative architecture, the FPGA, to address these issues, since FPGAs support hardware customization that enables fine-grained pipeline execution and specialized memory access optimizations. Specifically, we propose LightRW, a novel FPGA-based accelerator for GDRWs. LightRW embraces a series of optimizations to enable fine-grained pipeline execution on chip and to exploit the massive parallelism of the FPGA while significantly reducing memory accesses. Because the sampling methods commonly used in GDRWs do not efficiently support fine-grained pipeline execution, we develop a parallelized reservoir sampling method that samples multiple vertices per cycle. To address random memory accesses, we propose a degree-aware configurable caching method that buffers hot vertices on chip, and a dynamic burst access engine that efficiently retrieves neighbors. Experimental results show that our optimization techniques significantly improve the performance of GDRWs on FPGAs. Moreover, LightRW delivers up to 9.55x and 9.10x speedup over the state-of-the-art CPU-based MetaPath and Node2vec random walks, respectively. This work is open-sourced on GitHub at https://github.com/Xtra-Computing/LightRW.
    Comment: Accepted to SIGMOD 202
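Reservoir sampling lends itself to parallelization because independent per-lane reservoirs can be merged after the fact. The sketch below shows a software analogue of that property using A-Res weighted reservoir sampling (keep the k items with the largest keys u**(1/w)); it is a plain-Python illustration of the mergeability that a multi-vertex-per-cycle pipeline exploits, not LightRW's hardware design:

```python
import heapq
import random

def weighted_reservoir(stream, k, rng=random):
    """One lane of A-Res weighted reservoir sampling over (item, weight)
    pairs: keep the k items with the largest keys u**(1/w)."""
    heap = []  # min-heap of (key, item), size <= k
    for item, weight in stream:
        key = rng.random() ** (1.0 / weight)
        if len(heap) < k:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, item))
    return heap

def merge_reservoirs(lanes, k):
    """Combine per-lane reservoirs by keeping the k largest keys overall."""
    return heapq.nlargest(k, (kv for lane in lanes for kv in lane))

random.seed(0)
neighbors = [(v, 1.0 + (v % 3)) for v in range(1000)]  # toy weighted stream
lanes = [weighted_reservoir(neighbors[i::4], 8) for i in range(4)]
sample = merge_reservoirs(lanes, 8)
```

Because each lane only ever compares its own keys, the four lanes here could run fully independently and still produce a valid combined weighted sample after the merge.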

    Structure-Based Investigation on the Binding and Activation of Typical Pesticides With Thyroid Receptor

    A broad range of pesticides have been reported to interfere with the normal function of the thyroid endocrine system, but the precise mechanisms of action have not yet been thoroughly elucidated. In this study, 21 pesticides were assessed for their binding interactions with the thyroid hormone receptor and their potential to disrupt thyroid homeostasis. In GH3 luciferase reporter gene assays, 5 of the tested pesticides had agonistic effects, in the order procymidone > imidacloprid > mancozeb > fluroxypyr > atrazine. Eleven pesticides inhibited T3-induced luciferase activity to varying degrees, demonstrating antagonistic activity, and 4 pesticides showed mixed effects at different concentrations. A surface plasmon resonance (SPR) biosensor was used to directly measure the binding of these pesticides to the human thyroid hormone receptor (hTR). Thirteen pesticides bound directly to hTR, with KD values ranging from 4.80E-08 M to 9.44E-07 M. The association and dissociation of the hTR/pesticide complexes revealed two distinctive binding modes for the agonists and antagonists, while the pesticides with mixed agonist and antagonist activity displayed yet another binding mode. In addition, molecular docking simulations indicated that the interaction energies calculated by CDOCKER for the agonists and antagonists correlated well with the KD values measured by SPR. These results help to explain the differences in the TR activities of the tested pesticides.
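For readers unfamiliar with SPR readouts, the reported KD values follow from the fitted kinetic rate constants as K_D = k_off / k_on. A minimal sketch, using illustrative rate constants rather than values from the study:

```python
def dissociation_constant(k_off, k_on):
    """Equilibrium dissociation constant K_D = k_off / k_on, the quantity
    an SPR instrument derives from the fitted dissociation (k_off) and
    association (k_on) phases of the sensorgram."""
    return k_off / k_on

# Illustrative rate constants only (not values from the study):
# k_off = 4.8e-3 s^-1 and k_on = 1.0e5 M^-1 s^-1 give K_D = 4.8e-8 M,
# the high-affinity end of the reported 4.80E-08 M to 9.44E-07 M range.
kd = dissociation_constant(4.8e-3, 1.0e5)
```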

    "Mango Mango, How to Let The Lettuce Dry Without A Spinner?": Exploring User Perceptions of Using An LLM-Based Conversational Assistant Toward Cooking Partner

    The rapid advancement of Large Language Models (LLMs) has created numerous possibilities for integration with conversational assistants (CAs) that help people in their daily tasks, particularly given their extensive flexibility. However, users' real-world experiences of interacting with these assistants remain underexplored. In this research, we chose cooking, a complex daily task, as a scenario to investigate people's successful and unsatisfactory experiences while receiving assistance from an LLM-based CA, Mango Mango. We discovered that participants value the system's ability to provide extensive information beyond the recipe, offer customized instructions based on context, and assist them in dynamically planning the task. However, they expect the system to be more adaptive to oral conversation and to provide more suggestive responses that keep users actively involved. Recognizing that users began treating our LLM-based CA as a personal assistant or even a partner rather than just a recipe-reading tool, we propose several design considerations for future development.
    Comment: Under submission to CHI202