
    Cis- and Trans-Activation of Hormone Receptors: The LH Receptor

    Get PDF
    The luteinizing hormone receptor (LHR) belongs to the G protein-coupled receptor family, as do the other glycoprotein hormone receptors for FSH, TSH, and CG. The LHR comprises two halves of ~350 amino acids: an extracellular hormone-binding exodomain and a seven-transmembrane-spanning endodomain responsible for signal generation. Hormone binds to the exodomain with high affinity, and the resulting conformational changes in the hormone/exodomain complex modulate the endodomain to generate hormone signals. Hormone binding to an LHR produces hormonal signals (cis-activation), but it is not known whether a liganded LHR can activate other, unoccupied LHRs (trans-activation). The LHR activates both adenylyl cyclase and phospholipase Cβ. This dissertation shows that trans-activation of the LHR leads to the activation of adenylyl cyclase to induce cAMP, but not to the activation of phospholipase Cβ to induce inositol phosphate signaling. Trans-activation offers a mechanism of signal amplification at the receptor level and also provides a mechanism of multiple signal generation, allowing a liganded LHR to cis-activate phospholipase Cβ and trans-activate adenylyl cyclase. In addition, coexpression of Gi2 with a constitutively activating LHR mutant (Asp578Gly), the most common mutation in male-limited precocious puberty, shows that Gi2 can completely inhibit cAMP induction by the LHR mutant. Experiments using the carboxyl-terminal region of G protein α subunits demonstrate that the LHR has overlapping binding sites for the Gα subunits of Gs and Gi2.

    Loss-based Objective and Penalizing Priors for Model Selection Problems

    Full text link
    Many Bayesian model selection problems, such as variable selection or cluster analysis, start by setting prior model probabilities on a structured model space. Based on a chosen loss function between models, model selection is often performed with a Bayes estimator that minimizes the posterior expected loss. The prior model probabilities and the choice of loss both strongly affect the model selection results, especially for data with small sample sizes, so calibrating them properly so that they reflect no prior model preference is crucial in objective Bayesian analysis. We propose risk equilibrium priors as an objective choice of prior model probabilities that depends only on the model space and the choice of loss. Under the risk equilibrium priors, the Bayes action becomes indifferent before observing data, and the family of risk equilibrium priors includes existing popular objective priors in Bayesian variable selection problems. We generalize the result to the elicitation of objective priors for Bayesian cluster analysis with Binder's loss. We also propose risk penalization priors, under which the Bayes action chooses the simplest model before seeing data. The concepts of risk equilibrium and risk penalization priors allow us to interpret prior properties in light of the effect of loss functions, and also provide new insight into the sensitivity of Bayes estimators under the same prior but different losses. We illustrate the proposed concepts with variable selection simulation studies and a cluster analysis of a galaxy dataset. Comment: 31 pages, 3 figures.
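
    The Bayes-estimator machinery referred to above is simple to state concretely. The sketch below is illustrative only (not the paper's code); the prior, likelihood, and loss values are made up. It selects the model that minimizes the posterior expected loss over a four-model space under a 0-1 loss.

```python
import numpy as np

# Minimal sketch: choosing the Bayes action that minimizes posterior expected
# loss over a small model space. All numbers are illustrative placeholders.

prior = np.array([0.25, 0.25, 0.25, 0.25])        # prior model probabilities
likelihood = np.array([0.02, 0.10, 0.30, 0.05])   # marginal likelihoods p(y | M_k) for observed data y

# Posterior model probabilities via Bayes' rule
posterior = prior * likelihood
posterior /= posterior.sum()

# loss[a, k] = loss of reporting model a when model k is true
# (0-1 loss here; the paper also considers structured losses such as Binder's loss)
loss = 1.0 - np.eye(4)

# Bayes action: the model with the smallest posterior expected loss
risk = loss @ posterior
bayes_action = int(np.argmin(risk))
print(f"posterior = {np.round(posterior, 3)}, Bayes action = model {bayes_action}")
```

    A risk equilibrium prior, as described in the abstract, would be a choice of `prior` for which the prior risks `loss @ prior` are equal across all actions, so that the Bayes action is indifferent before any data are observed; with 0-1 loss this reduces to the uniform prior used in the sketch.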

    Differentiable Learning of Generalized Structured Matrices for Efficient Deep Neural Networks

    Full text link
    This paper investigates efficient deep neural networks (DNNs) in which dense unstructured weight matrices are replaced with structured ones that possess desired properties. The challenge arises because the optimal weight matrix structure in popular neural network models is obscure in most cases and may vary from layer to layer even within the same network. Prior structured matrices proposed for efficient DNNs were mostly hand-crafted, without a generalized framework for learning them systematically. To address this issue, we propose a generalized and differentiable framework for learning efficient structures of weight matrices by gradient descent. We first define a new class of structured matrices that covers a wide range of structured matrices in the literature by adjusting its structural parameters. Then, a frequency-domain differentiable parameterization scheme based on the Gaussian-Dirichlet kernel is adopted to learn the structural parameters by proximal gradient descent. Finally, we introduce an effective initialization method for the proposed scheme. Our method learns efficient DNNs with structured matrices, achieving lower complexity and/or higher performance than prior approaches that employ low-rank, block-sparse, or block-low-rank matrices.
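
    As a rough illustration of learning matrix structure by gradient descent with a proximal step, the toy example below uses an L1 penalty and soft-thresholding in place of the paper's Gaussian-Dirichlet frequency-domain parameterization; all shapes, data, and hyperparameters are hypothetical.

```python
import torch

# Simplified sketch (not the paper's method): learning a structured weight
# matrix by proximal gradient descent, with soft-thresholding as the proximal
# operator of an L1 penalty standing in for learnable structural parameters.

torch.manual_seed(0)
X = torch.randn(256, 64)                     # toy inputs
W_true = torch.zeros(64, 32)
W_true[:16, :] = torch.randn(16, 32)         # ground truth with block structure
Y = X @ W_true

W = torch.zeros(64, 32, requires_grad=True)
lr, lam = 0.5, 1e-3

for step in range(1000):
    loss = ((X @ W - Y) ** 2).mean()
    loss.backward()
    with torch.no_grad():
        W -= lr * W.grad                                                      # gradient step on the smooth loss
        W.copy_(torch.sign(W) * torch.clamp(W.abs() - lr * lam, min=0.0))     # proximal (soft-threshold) step
        W.grad.zero_()

nonzero_frac = (W.abs() > 1e-3).float().mean().item()
print(f"final loss = {loss.item():.4f}, nonzero fraction = {nonzero_frac:.2f}")
```

    The proximal step is what drives entries exactly to zero, which is what lets a structural pattern emerge rather than merely shrinking all weights.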

    Optimization of gear teeth in the wind turbine drive train with gear contact’s uncertainty using the reliability-based design optimization

    Get PDF
    Although gear teeth offer many advantages, there is a high possibility of tooth failure at each gear stage of the drive train system. In this research, the authors developed an appropriate gear tooth design using the basic theory of gear failure and reliability-based design optimization. A design variable characterized by a probability distribution was applied to the static stress analysis model and the dynamics analysis model to determine an objective function and constraint equations and to solve the reliability-based design optimization. For the optimization, the authors simulated the torsional drive train system, which includes rotational coordinates. First, the authors established a static stress analysis model that gives information about the endurance limit and bending strength. By expressing the gear mesh stiffness as a Fourier series, the equations of motion, including the gear mesh models and kinematic relations in the drive train system, were obtained in the form of Lagrange equations and constraint equations. For the numerical analysis, the Newmark-beta method was used to obtain dynamic responses, including the gear mesh contact forces. From results such as the gear mesh contact force, the authors calculated the probability of failure, matched each probability to the corresponding gear teeth, and proposed a reasonable and economical design of the gear teeth.
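
    For readers unfamiliar with the time-integration step mentioned above, the following sketch applies the Newmark-beta method (average-acceleration variant) to a single torsional degree of freedom. It is not the authors' drive-train model; the inertia, damping, stiffness, and torque values are placeholders.

```python
import numpy as np

# Newmark-beta integration for m*u'' + c*u' + k*u = f(t), a single torsional
# degree of freedom standing in for the full drive-train model.
# All parameter values are hypothetical.

m, c, k = 1.0, 0.05, 100.0          # inertia, damping, stiffness
beta, gamma = 0.25, 0.5             # average-acceleration (unconditionally stable) variant
dt, n_steps = 1e-3, 5000
f = lambda t: np.sin(8.0 * t)       # external torque (placeholder)

u, v = 0.0, 0.0
a = (f(0.0) - c * v - k * u) / m    # consistent initial acceleration

k_eff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
for i in range(1, n_steps + 1):
    t = i * dt
    f_eff = (f(t)
             + m * (u / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
             + c * (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                    + dt * (gamma / (2 * beta) - 1) * a))
    u_new = f_eff / k_eff
    a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
    v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
    u, v, a = u_new, v_new, a_new

print(f"displacement at t = {n_steps * dt:.1f} s: {u:.5f}")
```

    In the full problem, the scalars m, c, and k would become the system matrices assembled from the Lagrange equations and the Fourier-series gear mesh stiffness, and the resulting contact forces would feed the failure-probability calculation described above.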

    GPS-GLASS: Learning Nighttime Semantic Segmentation Using Daytime Video and GPS data

    Full text link
    Semantic segmentation for autonomous driving should be robust against various in-the-wild environments. Nighttime semantic segmentation is especially challenging due to a lack of annotated nighttime images and a large domain gap from daytime images with sufficient annotation. In this paper, we propose a novel GPS-based training framework for nighttime semantic segmentation. Given GPS-aligned pairs of daytime and nighttime images, we perform cross-domain correspondence matching to obtain pixel-level pseudo supervision. Moreover, we conduct flow estimation between daytime video frames and apply GPS-based scaling to acquire another pixel-level pseudo supervision. Using these pseudo supervisions with a confidence map, we train a nighttime semantic segmentation network without any annotation from nighttime images. Experimental results demonstrate the effectiveness of the proposed method on several nighttime semantic segmentation datasets. Our source code is available at https://github.com/jimmy9704/GPS-GLASS. Comment: ICCVW 2023.
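
    As a rough sketch of how confidence-weighted pixel-level pseudo supervision can be applied (the authors' actual implementation lives in the linked repository; the tensors below are random placeholders), a per-pixel cross-entropy against pseudo labels can be weighted by a confidence map:

```python
import torch
import torch.nn.functional as F

# Simplified sketch of confidence-weighted pseudo supervision.
# Real pseudo labels would come from cross-domain correspondence matching and
# from GPS-scaled flow estimation; here they are random placeholders.

B, C, H, W = 2, 19, 64, 64                              # batch, classes, height, width
logits = torch.randn(B, C, H, W, requires_grad=True)    # nighttime network output
pseudo_labels = torch.randint(0, C, (B, H, W))          # pixel-level pseudo labels
confidence = torch.rand(B, H, W)                        # per-pixel confidence map in [0, 1]

# Per-pixel cross-entropy against the pseudo labels, weighted by confidence
per_pixel_ce = F.cross_entropy(logits, pseudo_labels, reduction="none")
loss = (confidence * per_pixel_ce).sum() / confidence.sum().clamp(min=1e-6)
loss.backward()
print(f"confidence-weighted pseudo-label loss: {loss.item():.4f}")
```

    In GPS-GLASS the pseudo labels and confidence map would be produced by the correspondence-matching and flow-based branches described in the abstract, rather than by random tensors.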

    Training IBM Watson Using Automatically Generated Question-Answer Pairs

    Get PDF
    IBM Watson is a cognitive computing system capable of question answering in natural languages. It is believed that IBM Watson can understand large corpora and answer relevant questions more effectively than any other question-answering system currently available. To unleash the full power of Watson, however, we need to train its instance with a large number of well-prepared question-answer pairs. Obviously, manually generating such pairs in large quantities is prohibitively time-consuming and significantly limits the efficiency of Watson's training. Recently, a large-scale dataset of over 30 million question-answer pairs was reported. Under the assumption that using such an automatically generated dataset could relieve the burden of manual question-answer generation, we used this dataset to train an instance of Watson and examined the training efficiency and accuracy. According to our experiments, the auto-generated dataset was effective for training Watson, complementing manually crafted question-answer pairs. To the best of the authors' knowledge, this work is the first attempt to use a large-scale dataset of automatically generated question-answer pairs for training IBM Watson. We anticipate that the insights and lessons obtained from our experiments will be useful for researchers who want to expedite Watson training leveraged by automatically generated question-answer pairs.
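
    The abstract does not describe the ingestion format used for Watson training, but the basic preparation step of subsampling a large auto-generated corpus into question-answer records might look like the hypothetical sketch below; the file names, field names, and sample size are all made up for illustration.

```python
import csv
import json
import random

# Hypothetical data-preparation sketch (the actual Watson training format is
# not specified in the abstract): subsample an auto-generated QA corpus stored
# as JSON Lines and write question-answer pairs to a CSV file.

random.seed(42)
SAMPLE_SIZE = 10_000                 # made-up training subset size
pairs = []

with open("auto_generated_qa.jsonl", encoding="utf-8") as src:   # hypothetical input file
    for line in src:
        record = json.loads(line)
        question, answer = record["question"].strip(), record["answer"].strip()
        if question and answer:
            pairs.append((question, answer))

subset = random.sample(pairs, min(SAMPLE_SIZE, len(pairs)))

with open("watson_training_pairs.csv", "w", newline="", encoding="utf-8") as dst:
    writer = csv.writer(dst)
    writer.writerow(["question", "answer"])
    writer.writerows(subset)

print(f"wrote {len(subset)} question-answer pairs")
```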
