
    CityTFT: Temporal Fusion Transformer for Urban Building Energy Modeling

    Urban Building Energy Modeling (UBEM) is an emerging method to investigate urban design and energy systems against the increasing energy demand at urban and neighborhood levels. However, current UBEM methods are mostly physics-based and time-consuming across multiple climate change scenarios. This work proposes CityTFT, a data-driven UBEM framework, to accurately model energy demands in urban environments. Empowered by the underlying TFT framework and an augmented loss function, CityTFT can predict heating and cooling triggers in unseen climate dynamics with an F1 score of 99.98% and an RMSE of 13.57 kWh on load prediction.
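The two metrics reported above can be illustrated with a minimal sketch: F1 scores the binary heating/cooling trigger classification, while RMSE measures load-prediction error in kWh. The data below is made up for illustration only, not taken from the paper.

```python
import math

def f1_score(y_true, y_pred):
    """Binary F1: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def rmse(y_true, y_pred):
    """Root-mean-square error, here in kWh."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Illustrative (made-up) trigger labels and load predictions.
triggers_true = [1, 0, 1, 1, 0, 1]
triggers_pred = [1, 0, 1, 0, 0, 1]
loads_true = [120.0, 80.0, 95.0]
loads_pred = [110.0, 85.0, 100.0]

print(round(f1_score(triggers_true, triggers_pred), 4))  # → 0.8571
print(round(rmse(loads_true, loads_pred), 4))            # → 7.0711
```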

    MeInfoText 2.0: gene methylation and cancer relation extraction from biomedical literature

    Background: DNA methylation is regarded as a potential biomarker in the diagnosis and treatment of cancer. The relations between aberrant gene methylation and cancer development have been identified by a number of recent scientific studies. In a previous work, we used co-occurrences to mine those associations and compiled the MeInfoText 1.0 database. To reduce the amount of manual curation and improve the accuracy of relation extraction, we have now developed MeInfoText 2.0, which uses a machine learning-based approach to extract gene methylation-cancer relations. Description: Two maximum entropy models are trained to predict whether aberrant gene methylation is related to any type of cancer mentioned in the literature. After evaluation based on 10-fold cross-validation, the average precision/recall rates of the two models are 94.7%/90.1% and 91.8%/90.0%, respectively. MeInfoText 2.0 provides the gene methylation profiles of different types of human cancer. The extracted relations with maximum probability, evidence sentences, and specific gene information are also retrievable. The database is available at http://bws.iis.sinica.edu.tw:8081/MeInfoText2/. Conclusion: The previous version, MeInfoText, was developed using association rules, whereas MeInfoText 2.0 is based on a new framework that combines machine learning, dictionary lookup, and pattern matching for epigenetics information extraction. Experimental results show that MeInfoText 2.0 outperforms existing tools in many respects. To the best of our knowledge, this is the first study to use a hybrid approach to extract gene methylation-cancer relations. It is also the first attempt to develop a gene methylation and cancer relation corpus.
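A maximum entropy model with binary outcomes is equivalent to logistic regression. As a rough sketch of the kind of classifier described above, the toy model below is trained by batch gradient descent on invented feature vectors (e.g. a pattern-match flag and a co-occurrence count); the features, data, and training setup are illustrative assumptions, not the paper's actual pipeline.

```python
import math

def train_maxent(X, y, lr=0.5, epochs=1000):
    """Binary maximum-entropy (logistic-regression) model trained
    by batch gradient descent on toy feature vectors."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        grad_w = [0.0] * n_feat
        grad_b = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # model probability of "related"
            err = p - yi                      # gradient of log-loss w.r.t. z
            for j in range(n_feat):
                grad_w[j] += err * xi[j]
            grad_b += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b / len(X)
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Hypothetical features: [methylation pattern matched?, shares a sentence?]
X = [[1, 0], [1, 1], [0, 0], [0, 1]]
y = [1, 1, 0, 0]  # 1 = methylation-cancer relation asserted
w, b = train_maxent(X, y)
print([predict(w, b, xi) for xi in X])  # recovers the training labels: [1, 1, 0, 0]
```

In practice a real relation extractor would use thousands of sparse lexical and syntactic features and a regularized solver, but the objective is the same.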

    Radiomics-Informed Deep Learning for Classification of Atrial Fibrillation Sub-Types from Left-Atrium CT Volumes

    Atrial Fibrillation (AF) is characterized by rapid, irregular heartbeats, and can lead to fatal complications such as heart failure. The disease is divided into two sub-types based on severity, which can be automatically classified through CT volumes for disease screening of severe cases. However, existing classification approaches rely on generic radiomic features that may not be optimal for the task, whilst deep learning methods tend to over-fit to the high-dimensional volume inputs. In this work, we propose a novel radiomics-informed deep-learning method, RIDL, that combines the advantages of deep learning and radiomic approaches to improve AF sub-type classification. Unlike existing hybrid techniques that mostly rely on naïve feature concatenation, we observe that radiomic feature selection methods can serve as an information prior, and propose supplementing low-level deep neural network (DNN) features with locally computed radiomic features. This reduces DNN over-fitting and allows local variations between radiomic features to be better captured. Furthermore, we ensure complementary information is learned by deep and radiomic features by designing a novel feature de-correlation loss. Combined, our method addresses the limitations of deep learning and radiomic approaches and outperforms state-of-the-art radiomic, deep learning, and hybrid approaches, achieving 86.9% AUC for the AF sub-type classification task. Code is available at https://github.com/xmed-lab/RIDL. Comment: Accepted by MICCAI2
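One common way to formulate a feature de-correlation loss is to penalize the squared Pearson cross-correlation between every deep-feature dimension and every radiomic-feature dimension over a batch; the sketch below implements that generic formulation in plain Python and is an assumption for illustration, not RIDL's actual loss (see the linked repository for that).

```python
import math

def pearson(a, b):
    """Pearson correlation of two equal-length lists of batch values."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb) if sa > 0 and sb > 0 else 0.0

def decorrelation_loss(deep_feats, radiomic_feats):
    """Sum of squared cross-correlations between each deep-feature
    dimension and each radiomic-feature dimension; minimizing this
    pushes the two feature sets toward complementary information."""
    d_dims = list(zip(*deep_feats))      # columns: one list per dimension
    r_dims = list(zip(*radiomic_feats))
    return sum(pearson(d, r) ** 2 for d in d_dims for r in r_dims)

# Perfectly correlated (redundant) features are penalized ...
deep = [[1.0], [2.0], [3.0]]
rad_corr = [[2.0], [4.0], [6.0]]
# ... while uncorrelated (complementary) ones are not.
rad_uncorr = [[1.0], [-2.0], [1.0]]
print(decorrelation_loss(deep, rad_corr))    # → 1.0 (|r| = 1)
print(decorrelation_loss(deep, rad_uncorr))  # → 0.0
```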

    Identification of key bioactive anti-migraine constituents of Asari radix et rhizoma using network pharmacology and nitroglycerin-induced migraine rat model

    Purpose: To elucidate the bioactive constituents of Asari radix et rhizoma (ARR) in treating migraine based on network pharmacology and a nitroglycerin-induced migraine rat model. Methods: The potential bioactive constituents of ARR were identified with the aid of literature retrieval and virtual screening, and the migraine-related hub genes were identified using protein-protein interaction and topology analyses. Then, the interaction between the potential bioactive constituents and hub genes was determined with molecular docking and topology, leading to the prediction of the anti-migraine constituents of ARR. Moreover, a rat model of nitroglycerin-induced migraine was used to confirm the prediction by measuring the frequency of head-scratching and head-shaking behavior (FHHB) in the rats. In addition, levels of nitric oxide (NO) and calcitonin gene-related peptide (CGRP) in the blood, and of norepinephrine (NE) and 5-hydroxytryptamine (5-HT) in the brain, were measured using appropriate commercial kits. Results: Network pharmacology revealed that naringenin-7-O-β-D-glucopyranoside and higenamine might be the key anti-migraine bioactive constituents of ARR. On addition of naringenin-7-O-β-D-glucopyranoside or higenamine to ARR, there was marked enhancement of the mitigating effect of ARR on nitroglycerin-induced abnormalities in levels of NO, CGRP, 5-HT and NE, as well as FHHB in rats (p < 0.05 or 0.01). Conclusion: These findings indicate that naringenin-7-O-β-D-glucopyranoside and higenamine might be the key bioactive anti-migraine constituents of ARR. However, in addition to naringenin-7-O-β-D-glucopyranoside and higenamine, there were many other anti-migraine constituents in ARR. Therefore, there is a need for further investigations into the actual contributions of these two constituents of ARR in treating migraine.

    (E)-3-Methyl-5-(4-methylphenoxy)-1-phenyl-1H-pyrazole-4-carbaldehyde O-[(2-chloro-1,3-thiazol-5-yl)methyl]oxime

    In the title compound, C22H19ClN4O2S, the planes of the benzene ring, the substituted phenyl ring and the thiazole ring make dihedral angles of 18.4 (3), 88.9 (2) and 63.0 (3)°, respectively, with the pyrazole ring.

    Combined Scaling for Open-Vocabulary Image Classification

    We present a combined scaling method - named BASIC - that achieves 85.7% top-1 accuracy on the ImageNet ILSVRC-2012 validation set without learning from any labeled ImageNet example. This accuracy surpasses the best published similar models - CLIP and ALIGN - by 9.3%. Our BASIC model also shows significant improvements on robustness benchmarks. For instance, on 5 test sets with natural distribution shifts such as ImageNet-{A,R,V2,Sketch} and ObjectNet, our model achieves 84.3% top-1 average accuracy, only a small drop from its original ImageNet accuracy. To achieve these results, we scale up the contrastive learning framework of CLIP and ALIGN in three dimensions: data size, model size, and batch size. Our dataset has 6.6B noisy image-text pairs, which is 4x larger than ALIGN and 16x larger than CLIP. Our largest model has 3B weights, which is 3.75x larger in parameters and 8x larger in FLOPs than ALIGN and CLIP. Finally, our batch size is 65,536, which is 2x more than CLIP and 4x more than ALIGN. We encountered two main challenges with the scaling rules of BASIC. First, the main challenge in implementing the combined scaling rules of BASIC is the limited memory of accelerators, such as GPUs and TPUs. To overcome the memory limit, we propose two simple methods which make use of gradient checkpointing and model parallelism. Second, while increasing the dataset size and the model size has been the de facto method to improve the performance of deep learning models like BASIC, the effect of a large contrastive batch size on such contrastive-trained image-text models is not well understood. To shed light on the benefits of large contrastive batch sizes, we develop a theoretical framework which shows that larger contrastive batch sizes lead to smaller generalization gaps for image-text models such as BASIC.
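The contrastive framework scaled up here pairs each image with its caption and trains both encoders with a symmetric cross-entropy (InfoNCE) loss over the batch, which is why batch size matters: every other example in the batch serves as a negative. The sketch below computes that loss for toy pre-normalized 2-D embeddings; the temperature value and data are illustrative assumptions, not BASIC's training configuration.

```python
import math

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of L2-normalized
    image/text embedding pairs (matched pairs share an index)."""
    n = len(img_emb)
    # Cosine-similarity logits, scaled by temperature.
    logits = [[sum(a * b for a, b in zip(img_emb[i], txt_emb[j])) / temperature
               for j in range(n)] for i in range(n)]

    def cross_entropy(rows):
        # Mean -log softmax probability assigned to the matched (diagonal) pair.
        total = 0.0
        for i, row in enumerate(rows):
            m = max(row)  # shift for numerical stability
            log_z = m + math.log(sum(math.exp(x - m) for x in row))
            total += log_z - row[i]
        return total / n

    cols = [list(c) for c in zip(*logits)]   # text-to-image direction
    return 0.5 * (cross_entropy(logits) + cross_entropy(cols))

# Two toy unit-norm pairs: matched pairs aligned, mismatched ones orthogonal,
# so the loss is near zero; shuffling the captions makes it large.
imgs = [[1.0, 0.0], [0.0, 1.0]]
txts = [[1.0, 0.0], [0.0, 1.0]]
print(clip_contrastive_loss(imgs, txts))            # small: pairs well separated
print(clip_contrastive_loss(imgs, txts[::-1]))      # large: every pair mismatched
```

With only two examples each image has a single negative; at a batch size of 65,536 each has 65,535, which is the intuition behind studying how batch size affects the generalization gap.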