
    Screening for TRPV1 Temperature-Sensing Domains with Peptide Insertion

    Get PDF

    An association study of NRAMP1, VDR, MBL and their interaction with the susceptibility to tuberculosis in a Chinese population

    Get PDF
    Summary
    Objectives: To investigate natural-resistance-associated macrophage protein 1 (NRAMP1), mannose-binding lectin (MBL) and vitamin D receptor (VDR) gene polymorphisms, and their interactions, in relation to susceptibility to pulmonary tuberculosis (PTB) in a Chinese population.
    Methods: A case-control study was conducted in PTB patients (n=151) and age- and sex-matched healthy controls (HCs) (n=453). Genetic polymorphisms of NRAMP1 (INT4, D543N and 3′UTR), MBL (HL, PQ, XY and AB) and VDR (FokI and TaqI) were analyzed using PCR-restriction fragment length polymorphism (RFLP) and PCR-single-strand conformation polymorphism (SSCP) techniques. Multifactor dimensionality reduction (MDR) analysis was carried out to assess the effects of interactions between SNPs.
    Results: The distributions of NRAMP1 3′UTR (TGTG/del), MBL HL (H/L) and FokI (F/f) differed significantly between PTB patients and HCs (p<0.05). The MBL haplotypes HPYA (OR: 1.88; 95% CI: 1.22-2.91), LPXA (OR: 3.17; 95% CI: 1.69-5.96), LQYA (OR: 3.52; 95% CI: 1.50-8.23) and LPYB (OR: 12.37; 95% CI: 3.75-40.85) were risk haplotypes for PTB. The 3′UTR-HL-FokI combinations TGTG-H-f (OR: 1.70; 95% CI: 1.10-2.62) and del-H-f (OR: 3.48; 95% CI: 1.45-8.37) were also high-risk haplotypes associated with tuberculosis.
    Conclusions: Our study suggests that genotypes of several polymorphic genes are associated with TB; the mechanisms by which these genotypes and gene-gene interactions contribute to tuberculosis susceptibility warrant further investigation.
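    The odds ratios and 95% confidence intervals quoted above follow the standard 2×2 contingency-table calculation with the log (Woolf) method for the interval. As a minimal illustrative sketch only, with hypothetical counts and a hypothetical function name rather than the authors' analysis code:

```python
import math

def odds_ratio_ci(exposed_cases, exposed_controls,
                  unexposed_cases, unexposed_controls, z=1.96):
    """Odds ratio and 95% CI (log/Woolf method) from a 2x2 table."""
    or_value = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)
    # Standard error of log(OR) from the four cell counts
    se = math.sqrt(1 / exposed_cases + 1 / exposed_controls
                   + 1 / unexposed_cases + 1 / unexposed_controls)
    lower = math.exp(math.log(or_value) - z * se)
    upper = math.exp(math.log(or_value) + z * se)
    return or_value, (lower, upper)

# Hypothetical counts for one haplotype in cases vs. controls (not from the study)
print(odds_ratio_ci(40, 60, 111, 393))
```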

    Divalent cations activate TRPV1 through promoting conformational change of the extracellular region

    Get PDF
    Divalent cations Mg and Ba selectively and directly potentiate transient receptor potential vanilloid type 1 (TRPV1) heat activation by lowering the activation threshold into the room temperature range. We found that Mg potentiates channel activation only from the extracellular side; on the intracellular side, Mg inhibits channel current. By dividing the extracellularly accessible region of the channel protein into small segments and perturbing the structure of each segment with sequence replacement mutations, we observed that the S1-S2 linker, the S3-S4 linker, and the pore turret are all required for Mg potentiation. Sequence replacements in these regions substantially reduced or eliminated Mg-induced activation at room temperature while sparing capsaicin activation. Heat activation was affected by many, but not all, of these structural alterations. These observations indicate that the extracellular linkers and the turret may interact with each other. Site-directed fluorescence resonance energy transfer measurements further revealed that, like heat, Mg also induces structural changes in the pore turret. Interestingly, turret movement induced by Mg precedes channel activation, suggesting that Mg-induced conformational change in the extracellular region most likely serves as the cause of channel activation rather than a coincidental or accommodating structural adjustment.

    Improving Deep Regression with Ordinal Entropy

    Full text link
    In computer vision, it is often observed that formulating a regression problem as a classification task yields better performance. We investigate this curious phenomenon and provide a derivation showing that classification with the cross-entropy loss outperforms regression with a mean squared error loss in its ability to learn high-entropy feature representations. Based on this analysis, we propose an ordinal entropy loss that encourages higher-entropy feature spaces while maintaining ordinal relationships, improving the performance of regression tasks. Experiments on synthetic and real-world regression tasks demonstrate the importance and benefits of increasing entropy for regression.
    Comment: Accepted to ICLR 2023. Project page: https://github.com/needylove/OrdinalEntrop
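    As a rough illustration of an entropy-encouraging, ordinality-preserving regularizer, the PyTorch sketch below pushes features apart in proportion to how far apart their regression targets are. It is one plausible interpretation of the idea, not the authors' released implementation; the weighting scheme and function name are assumptions.

```python
import torch
import torch.nn.functional as F

def ordinal_entropy_loss(features, targets, eps=1e-8):
    """Illustrative sketch: encourage spread-out (high-entropy) features while
    keeping pairwise feature distances ordered like pairwise target distances."""
    features = F.normalize(features, dim=1)
    # Pairwise distances in feature space and in target (label) space
    f_dist = torch.cdist(features, features, p=2)
    t_dist = torch.cdist(targets.view(-1, 1).float(), targets.view(-1, 1).float(), p=1)
    t_dist = t_dist / (t_dist.max() + eps)   # normalize target distances to [0, 1]
    # Diversity term: larger average feature distance ~ higher feature entropy,
    # weighted so samples with distant targets are pushed further apart.
    diversity = (t_dist * f_dist).sum() / (t_dist.sum() + eps)
    return -diversity                         # minimizing this maximizes the spread

# Usage with hypothetical batch tensors
feats = torch.randn(8, 128)
targets = torch.rand(8)
loss = ordinal_entropy_loss(feats, targets)
```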

    VisorGPT: Learning Visual Prior via Generative Pre-Training

    Full text link
    Various stuff and things in visual data possess specific traits, which can be learned by deep neural networks and are implicitly represented as a visual prior, e.g., object location and shape, in the model. Such a prior potentially impacts many vision tasks. For example, in conditional image synthesis, spatial conditions that fail to adhere to the prior can result in visually inaccurate synthetic results. This work aims to learn the visual prior explicitly and enable customization of sampling. Inspired by advances in language modeling, we propose to learn the Visual prior via Generative Pre-Training, dubbed VisorGPT. By discretizing visual locations of objects, e.g., bounding boxes, human pose, and instance masks, into sequences, VisorGPT can model the visual prior through likelihood maximization. In addition, prompt engineering is investigated to unify various visual locations and enable customized sampling of sequential outputs from the learned prior. Experimental results demonstrate that VisorGPT can effectively model the visual prior, which can be employed for many vision tasks, such as customizing accurate human pose for conditional image synthesis models like ControlNet. Code will be released at https://github.com/Sierkinhane/VisorGPT.
    Comment: Project web-page: https://sierkinhane.github.io/visor-gpt
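    As a minimal sketch of the general discretization idea (quantize coordinates into a fixed number of bins and serialize them as tokens suitable for likelihood training with a GPT-style model), the following Python snippet shows one possible encoding of bounding boxes. The token format, bin count, and function name are assumptions for illustration, not VisorGPT's actual scheme.

```python
from typing import List, Tuple

def boxes_to_tokens(boxes: List[Tuple[float, float, float, float]],
                    labels: List[str],
                    image_size: Tuple[int, int],
                    num_bins: int = 512) -> List[str]:
    """Quantize (x0, y0, x1, y1) box coordinates into num_bins bins and
    serialize (label, coordinates) as a flat token sequence."""
    w, h = image_size
    tokens = ["<boxes>"]
    for (x0, y0, x1, y1), label in zip(boxes, labels):
        quantized = [
            int(round(x0 / w * (num_bins - 1))),
            int(round(y0 / h * (num_bins - 1))),
            int(round(x1 / w * (num_bins - 1))),
            int(round(y1 / h * (num_bins - 1))),
        ]
        tokens.append(label)
        tokens.extend(f"<bin_{q}>" for q in quantized)
    tokens.append("</boxes>")
    return tokens

# Hypothetical example: two objects in a 640x480 image
print(boxes_to_tokens([(32, 48, 200, 300), (400, 100, 620, 460)],
                      ["person", "dog"], (640, 480)))
```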
    • …