
    DNA Methylation and Non-small Cell Lung Cancer

    Genomic DNA methylation is a major form of epigenetic modification. Hypermethylation can affect the binding of transcription factors to DNA and alter chromatin structure, resulting in the silencing of tumor suppressor genes, which plays an important role in cancer initiation and progression. In recent years, the study of DNA methylation in lung cancer, mostly non-small cell lung cancer, has made great progress, and DNA methylation has become a new target for early detection, risk assessment, prognosis, and cancer therapy.

    DrugChat: Towards Enabling ChatGPT-Like Capabilities on Drug Molecule Graphs

    A ChatGPT-like system for drug compounds could be a game-changer in pharmaceutical research, accelerating drug discovery, enhancing our understanding of structure-activity relationships, guiding lead optimization, aiding drug repurposing, reducing the failure rate, and streamlining clinical trials. In this work, we make an initial attempt towards enabling ChatGPT-like capabilities on drug molecule graphs by developing a prototype system, DrugChat. DrugChat works in a similar way to ChatGPT: users upload a compound molecule graph and ask various questions about this compound, and DrugChat answers these questions in a multi-turn, interactive manner. The DrugChat system consists of a graph neural network (GNN), a large language model (LLM), and an adaptor. The GNN takes a compound molecule graph as input and learns a representation for this graph. The adaptor transforms the graph representation produced by the GNN into another representation that is acceptable to the LLM. The LLM takes the compound representation transformed by the adaptor and users' questions about this compound as inputs and generates answers. All these components are trained end-to-end. To train DrugChat, we collected instruction-tuning datasets containing 10,834 drug compounds and 143,517 question-answer pairs. The code and data are available at \url{https://github.com/UCSD-AI4H/drugchat}.
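
    To make the described pipeline concrete, the following is a minimal sketch of a GNN-to-adaptor-to-LLM flow in PyTorch. The toy message-passing GNN, the linear adaptor, and the HuggingFace-style llm/tokenizer interface are illustrative assumptions, not DrugChat's actual implementation.

        # Illustrative sketch only; module sizes and the LLM interface are assumptions.
        import torch
        import torch.nn as nn

        class ToyGNN(nn.Module):
            """Mean-aggregation message passing over a dense adjacency matrix."""
            def __init__(self, node_dim, hidden_dim, num_layers=2):
                super().__init__()
                dims = [node_dim] + [hidden_dim] * num_layers
                self.layers = nn.ModuleList(
                    [nn.Linear(i, o) for i, o in zip(dims[:-1], dims[1:])])

            def forward(self, node_feats, adj):              # (N, node_dim), (N, N)
                deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
                h = node_feats
                for layer in self.layers:
                    h = torch.relu(layer((adj @ h) / deg))   # aggregate neighbors, transform
                return h.mean(dim=0)                         # graph-level embedding

        class GraphToLLMAdaptor(nn.Module):
            """Projects the graph embedding into the LLM's token-embedding space."""
            def __init__(self, graph_dim, llm_dim):
                super().__init__()
                self.proj = nn.Linear(graph_dim, llm_dim)

            def forward(self, graph_repr):
                return self.proj(graph_repr).view(1, 1, -1)  # one "soft token"

        def answer(gnn, adaptor, llm, tokenizer, node_feats, adj, question):
            """Prepend the projected graph token to the question embeddings and generate."""
            graph_token = adaptor(gnn(node_feats, adj))              # (1, 1, llm_dim)
            q_ids = tokenizer(question, return_tensors="pt").input_ids
            q_emb = llm.get_input_embeddings()(q_ids)                # (1, seq_len, llm_dim)
            inputs = torch.cat([graph_token, q_emb], dim=1)
            out = llm.generate(inputs_embeds=inputs, max_new_tokens=64)
            return tokenizer.decode(out[0], skip_special_tokens=True)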

    Sustainable decisions on product upgrade confrontations with remanufacturing operations

    In recent decades, remanufacturing has been perceived as an environmentally friendly option due to its reduced consumption of materials, energy, etc. Whether remanufacturing operations are undertaken by the original equipment manufacturers (OEMs) or outsourced to remanufacturers, given the size and growth of the remanufactured-product market, many OEMs intend to fend off the potential cannibalization of new-product sales by launching upgraded versions that differentiate their quality levels from those of remanufactured products. To understand whether and how the product upgrading strategy affects optimal outcomes when remanufacturing operations are undertaken by OEMs or by third-party remanufacturers (TPRs), we develop two models that highlight the OEM's product upgrading strategy under the scenarios where (1) the OEM keeps its remanufacturing operations in-house (Model O) or (2) remanufacturing operations are undertaken by a TPR (Model T). Among other results, we find that, from an economic performance perspective, it is more beneficial for the OEM to perform remanufacturing operations in-house; however, from an environmental sustainability perspective, such behavior is not always good for the environment. In particular, when the level of product upgrading is pronounced, remanufacturing operations undertaken by the OEM are always detrimental to the environment, because the OEM over-indulges in remanufacturing, as seen in Model O.

    BLO-SAM: Bi-level Optimization Based Overfitting-Preventing Finetuning of SAM

    The Segment Anything Model (SAM), a foundation model pretrained on millions of images and segmentation masks, has significantly advanced semantic segmentation, a fundamental task in computer vision. Despite its strengths, SAM encounters two major challenges. Firstly, it struggles with segmenting specific objects autonomously, as it relies on users to manually input prompts like points or bounding boxes to identify targeted objects. Secondly, SAM faces challenges in excelling at specific downstream tasks, like medical imaging, due to a disparity between the distribution of its pretraining data, which predominantly consists of general-domain images, and the data used in downstream tasks. Current solutions to these problems, which involve finetuning SAM, often lead to overfitting, a notable issue in scenarios with very limited data, like in medical imaging. To overcome these limitations, we introduce BLO-SAM, which finetunes SAM based on bi-level optimization (BLO). Our approach allows for automatic image segmentation without the need for manual prompts by optimizing a learnable prompt embedding. Furthermore, it significantly reduces the risk of overfitting by training the model's weight parameters and the prompt embedding on two separate subsets of the training dataset, each at a different level of optimization. We apply BLO-SAM to diverse semantic segmentation tasks in general and medical domains. The results demonstrate BLO-SAM's superior performance over various state-of-the-art image semantic segmentation methods.
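
    A simplified sketch of the two-split idea follows: the model's tunable weights are updated on one subset while the learnable prompt embedding is updated on a disjoint subset. The names (model, prompt_embed, split_a, split_b) are placeholders, and the paper's actual bi-level scheme, which differentiates through the lower-level problem, is more involved than this plain alternation.

        # Simplified alternating approximation of the two-level training described above.
        import torch

        def blo_style_finetune(model, prompt_embed, split_a, split_b, loss_fn,
                               epochs=10, lr_w=1e-4, lr_p=1e-3):
            opt_w = torch.optim.AdamW(model.parameters(), lr=lr_w)  # lower level: weights
            opt_p = torch.optim.AdamW([prompt_embed], lr=lr_p)      # upper level: prompt
            for _ in range(epochs):
                for (img_a, mask_a), (img_b, mask_b) in zip(split_a, split_b):
                    # Lower level: update weight parameters on split A, prompt held fixed.
                    opt_w.zero_grad()
                    loss_fn(model(img_a, prompt_embed.detach()), mask_a).backward()
                    opt_w.step()

                    # Upper level: update the prompt embedding on split B, weights held fixed.
                    opt_p.zero_grad()
                    loss_fn(model(img_b, prompt_embed), mask_b).backward()
                    opt_p.step()
            return model, prompt_embed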

    SmartIntentNN: Towards Smart Contract Intent Detection

    Researchers have been focusing on smart contract vulnerability detection, but we find that developers' intent in writing smart contracts is a more noteworthy security concern, because smart contracts with malicious intent have caused significant financial loss to users. A more unfortunate fact is that we can currently only rely on manual audits to check for unfriendly smart contracts. In this paper, we propose \textsc{SmartIntentNN}, Smart Contract Intent Neural Network, a deep learning-based tool that aims to automate the detection of developers' intent in smart contracts, saving human resources and overhead. The demo video is available at \url{https://youtu.be/ho1SMtYm-wI}. Comment: 4 pages, 3 figures, conference tool track. arXiv admin note: substantial text overlap with arXiv:2211.1072

    Deep Smart Contract Intent Detection

    Nowadays, security activities in smart contracts concentrate on vulnerability detection. Despite early success, we find that developers' intent in writing smart contracts is a more noteworthy security concern, because smart contracts with malicious intent have caused significant financial loss to users. Unfortunately, current approaches to identifying such malicious smart contracts rely on smart contract security audits, which entail huge manpower consumption and financial expenditure. To resolve this issue, we propose a novel deep learning-based approach, SmartIntentNN, to conduct automated smart contract intent detection. SmartIntentNN consists of three primary parts: a pre-trained sentence encoder to generate the contextual representations of smart contracts, a K-means clustering method to highlight intent-related representations, and a bidirectional LSTM-based (long short-term memory) multi-label classification network to predict the intents in smart contracts. To evaluate the performance of SmartIntentNN, we collect more than 40,000 real smart contracts and perform a series of comparison experiments with our selected baseline approaches. The experimental results demonstrate that SmartIntentNN outperforms all baselines, achieving an F1-score of up to 0.8212. Comment: 12 pages, 9 figures, conference
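
    To illustrate the three-stage design, here is a rough sketch: precomputed sentence-encoder embeddings of a contract's functions are re-weighted by a K-means step so that atypical code stands out, and a bidirectional LSTM produces multi-label intent probabilities. The re-weighting rule, dimensions, and hyperparameters are illustrative guesses rather than the paper's exact configuration.

        # Rough sketch of the encoder -> K-means highlight -> BiLSTM pipeline;
        # the highlight rule and all hyperparameters are illustrative.
        import torch
        import torch.nn as nn
        from sklearn.cluster import KMeans

        def highlight(embeddings, n_clusters=2):
            """Scale each function embedding by its distance to the nearest K-means
            centroid, so atypical (potentially intent-bearing) code stands out."""
            km = KMeans(n_clusters=n_clusters, n_init=10).fit(embeddings)
            dists = km.transform(embeddings).min(axis=1, keepdims=True)  # (n_funcs, 1)
            return embeddings * (1.0 + dists / (dists.max() + 1e-8))

        class IntentClassifier(nn.Module):
            """BiLSTM over the highlighted function embeddings of one contract,
            followed by a sigmoid head for multi-label intent prediction."""
            def __init__(self, emb_dim=512, hidden=256, num_intents=10):
                super().__init__()
                self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
                self.head = nn.Linear(2 * hidden, num_intents)

            def forward(self, x):                    # x: (batch, n_funcs, emb_dim)
                out, _ = self.lstm(x)
                return torch.sigmoid(self.head(out[:, -1]))  # (batch, num_intents)

        # Usage with precomputed embeddings `embs` of shape (n_funcs, 512):
        #   x = torch.tensor(highlight(embs), dtype=torch.float32).unsqueeze(0)
        #   intent_probs = IntentClassifier()(x)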

    ComPtr: Towards Diverse Bi-source Dense Prediction Tasks via A Simple yet General Complementary Transformer

    Deep learning (DL) has advanced the field of dense prediction, while gradually dissolving the inherent barriers between different tasks. However, most existing works focus on designing architectures and constructing visual cues only for a specific task, which ignores the potential uniformity introduced by the DL paradigm. In this paper, we attempt to construct a novel \underline{ComP}lementary \underline{tr}ansformer, \textbf{ComPtr}, for diverse bi-source dense prediction tasks. Specifically, unlike existing methods that over-specialize in a single task or a subset of tasks, ComPtr starts from the more general concept of bi-source dense prediction. Based on the basic dependence on information complementarity, we propose consistency enhancement and difference awareness components, with which ComPtr can excavate and collect, respectively, important visual semantic cues from different image sources for diverse tasks. ComPtr treats different inputs equally and builds an efficient dense interaction model in the form of sequence-to-sequence on top of the transformer. This task-generic design provides a smooth foundation for constructing a unified model that can simultaneously deal with various bi-source information. In extensive experiments across several representative vision tasks, i.e., remote sensing change detection, RGB-T crowd counting, RGB-D/T salient object detection, and RGB-D semantic segmentation, the proposed method consistently obtains favorable performance. The code will be available at \url{https://github.com/lartpang/ComPtr}.
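
    As one deliberately simplified reading of the consistency/difference idea, the block below builds one branch from cues the two sources agree on and another from their gap, then fuses the token sequence with a standard transformer encoder. The concrete operators and sizes are illustrative, not ComPtr's actual components.

        # Illustrative two-stream fusion block; not ComPtr's actual design.
        import torch
        import torch.nn as nn

        class BiSourceFusion(nn.Module):
            def __init__(self, dim=256, heads=8, layers=2):
                super().__init__()
                self.consistency = nn.Linear(2 * dim, dim)  # from concatenated agreement cues
                self.difference = nn.Linear(dim, dim)       # from the feature gap
                enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                                       batch_first=True)
                self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)

            def forward(self, tokens_a, tokens_b):          # (batch, seq, dim) each
                agree = torch.cat([tokens_a * tokens_b, tokens_a + tokens_b], dim=-1)
                cons = self.consistency(agree)              # consistency-enhanced features
                diff = self.difference(tokens_a - tokens_b) # difference-aware features
                return self.encoder(cons + diff)            # sequence-to-sequence fusion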