
    Production of membrane proteins in yeast

    Background: Yeast is an important and versatile organism for studying membrane proteins. It is easy to cultivate and can perform higher-eukaryote-like post-translational modifications. S. cerevisiae has a fully sequenced genome and several collections of deletion strains are available, whilst P. pastoris can reach very high cell densities (230 g/l). Results: We have used both S. cerevisiae and P. pastoris to over-produce the following membrane proteins, each carrying a carboxyl-terminal His6 or His10 fusion. CD81: a 26 kDa tetraspanin protein (TAPA-1) that may play an important role in the regulation of lymphoma cell growth and may also act as the viral receptor for hepatitis C virus. CD82: a 30 kDa tetraspanin protein that associates with CD4 or CD8 cells and delivers co-stimulatory signals for the TCR/CD3 pathway. MC4R: a 37 kDa seven-transmembrane G-protein-coupled receptor, present on neurons in the hypothalamus region of the brain and predicted to have a role in the feast-or-fast signalling pathway. Adt2p: a 34 kDa six-transmembrane protein that catalyses the exchange of ADP and ATP across the yeast mitochondrial inner membrane. Conclusion: We show that yeasts are flexible production organisms for a range of different membrane proteins. The yields are such that future structure-activity relationship studies can be initiated via reconstitution, crystallization for X-ray diffraction, or NMR experiments.

    Evaluation of Blur and Gaussian Noise Degradation in Images Using Statistical Model of Natural Scene and Perceptual Image Quality Measure

    In this paper we present a new method for classifying the type of image degradation, based on Riesz-transform coefficients and the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), which employs spatial coefficients. Our method uses additional statistical parameters that give statistically better results for blur, and for all tested degradations together, than the previous method. We also present a new method for determining the level of blur and Gaussian-noise degradation in images using a statistical model of natural scenes, and we define parameters for evaluating the levels of Gaussian noise and blur. In real-world applications a reference image is usually not available; the proposed method therefore enables classification of image degradation by type, and estimation of Gaussian-noise and blur levels, for any degraded image.
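The "spatial coefficients" underlying BRISQUE are the mean-subtracted contrast-normalised (MSCN) coefficients: each pixel is centred by its local mean and scaled by its local standard deviation. A minimal NumPy sketch of the idea follows; the box filter here is a simplification standing in for BRISQUE's Gaussian window, and the window size and the random test image are illustrative assumptions.

```python
import numpy as np

def box_mean(a, k=7):
    """Local mean over a k x k window via a summed-area table."""
    pad = k // 2
    p = np.pad(a.astype(np.float64), pad, mode="reflect")
    s = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
    s[1:, 1:] = p.cumsum(0).cumsum(1)
    h, w = a.shape
    return (s[k:k + h, k:k + w] - s[:h, k:k + w]
            - s[k:k + h, :w] + s[:h, :w]) / (k * k)

def mscn_coefficients(image, k=7):
    """Mean-subtracted contrast-normalised (MSCN) coefficients.
    For a pristine natural image their histogram is close to
    Gaussian; blur and noise reshape it, which is what
    natural-scene-statistics degradation models exploit."""
    mu = box_mean(image, k)                        # local mean
    var = box_mean(np.square(image), k) - mu ** 2  # local variance
    sd = np.sqrt(np.maximum(var, 0.0))             # local std dev
    return (image - mu) / (sd + 1.0)               # +1 avoids /0

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 255.0, size=(64, 64))
coeffs = mscn_coefficients(img)
print(coeffs.shape)  # (64, 64)
```

Statistics of these coefficients (and of parameters fitted to their distribution) are what a classifier would then use to separate blur from Gaussian noise.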

    Can 3 mg·kg−1 of Caffeine Be Used as An Effective Nutritional Supplement to Enhance the Effects of Resistance Training in Rugby Union Players?

    The present study uniquely examined the effect of 3 mg·kg−1 chronic caffeine consumption on training adaptations induced by 7 weeks of resistance training, and assessed the potential for habituation to caffeine’s ergogenicity. Thirty non-specifically resistance-trained, university-standard male rugby union players (age: 20 ± 2 years; height: 181 ± 7 cm; body mass: 92 ± 17 kg), who were moderate habitual caffeine consumers (118 ± 110 mg), completed the study. Using a within-subject, double-blind, placebo-controlled experimental design, the acute effects of caffeine intake on upper- and lower-limb maximal voluntary concentric and eccentric torque were measured using isokinetic dynamometry (IKD) prior to and immediately following the resistance-training intervention. Participants were split into strength-matched groups and completed a resistance-training programme for seven weeks, consuming either caffeine or a placebo before each session. Irrespective of group, acute caffeine consumption improved peak eccentric torque of the elbow extensors, and there was an increase (p < 0.037) in the total work performed by the participants that consumed caffeine across the course of the intervention. These results infer that caffeine may be beneficial in evoking acute improvements in muscular strength, with acute effects prevalent following chronic exposure to the experimental dose. However, individuals that consumed caffeine during the intervention did not elicit superior post-intervention training-induced adaptations in muscular strength.

    Fairness in Algorithmic Decision Making: An Excursion Through the Lens of Causality

    As virtually all aspects of our lives are increasingly impacted by algorithmic decision making systems, it is incumbent upon us as a society to ensure such systems do not become instruments of unfair discrimination on the basis of gender, race, ethnicity, religion, etc. We consider the problem of determining whether the decisions made by such systems are discriminatory, through the lens of causal models. We introduce two definitions of group fairness grounded in causality: fair on average causal effect (FACE), and fair on average causal effect on the treated (FACT). We use the Rubin-Neyman potential outcomes framework for the analysis of cause-effect relationships to robustly estimate FACE and FACT. We demonstrate the effectiveness of our proposed approach on synthetic data. Our analyses of two real-world data sets, the Adult income data set from the UCI repository (with gender as the protected attribute), and the NYC Stop and Frisk data set (with race as the protected attribute), show that the evidence of discrimination obtained by FACE and FACT, or lack thereof, is often in agreement with the findings from other studies. We further show that FACT, being somewhat more nuanced compared to FACE, can yield findings of discrimination that differ from those obtained using FACE. Comment: 7 pages, 2 figures, 2 tables. To appear in Proceedings of the International Conference on World Wide Web (WWW), 201
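In the potential-outcomes framing, a FACE-style criterion asks whether the average outcome would differ had individuals belonged to the other protected group. A minimal sketch of that contrast follows; it uses a naive, unadjusted difference in means where the paper applies robust potential-outcomes estimators, and the toy data is made up for illustration.

```python
import numpy as np

def face_estimate(outcome, group):
    """Naive difference-in-means estimate of the average causal
    effect of protected-group membership on the decision outcome.

    outcome: array of algorithmic decisions (e.g. 0/1 loan granted)
    group:   array of protected-attribute values (0/1)

    A serious estimate would adjust for covariates (matching,
    weighting, etc.) under the Rubin-Neyman framework; this raw
    contrast is only the starting point.
    """
    outcome = np.asarray(outcome, dtype=float)
    group = np.asarray(group)
    return outcome[group == 1].mean() - outcome[group == 0].mean()

decisions = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical decisions
gender    = [1, 1, 1, 1, 0, 0, 0, 0]   # hypothetical groups
print(face_estimate(decisions, gender))  # 0.75 - 0.25 = 0.5
```

A value near zero would be consistent with "fair on average causal effect"; FACT restricts the same contrast to the treated subpopulation.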

    A unified approach to quantifying algorithmic unfairness: Measuring individual &amp; group unfairness via inequality indices

    Discrimination via algorithmic decision making has received considerable attention. Prior work largely focuses on defining conditions for fairness, but does not define satisfactory measures of algorithmic unfairness. In this paper, we focus on the following question: Given two unfair algorithms, how should we determine which of the two is more unfair? Our core idea is to use existing inequality indices from economics to measure how unequally the outcomes of an algorithm benefit different individuals or groups in a population. Our work offers a justified and general framework to compare and contrast the (un)fairness of algorithmic predictors. This unifying approach enables us to quantify unfairness both at the individual and the group level. Further, our work reveals overlooked tradeoffs between different fairness notions: using our proposed measures, the overall individual-level unfairness of an algorithm can be decomposed into a between-group and a within-group component. Earlier methods are typically designed to tackle only between-group unfairness, which may be justified for legal or other reasons. However, we demonstrate that minimizing exclusively the between-group component may, in fact, increase the within-group, and hence the overall unfairness. We characterize and illustrate the tradeoffs between our measures of (un)fairness and the prediction accuracy
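The inequality-index idea can be sketched with a generalized entropy index over per-individual "benefits", which decomposes into a between-group term (inequality of group means) plus a within-group remainder. The alpha value, the benefit vector, and the simple subtraction-based decomposition below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def generalized_entropy_index(b, alpha=2):
    """Generalized entropy index of per-individual benefits b.
    0 = perfectly equal benefits; larger = more unfair."""
    b = np.asarray(b, dtype=float)
    r = b / b.mean()
    return np.mean(r ** alpha - 1) / (alpha * (alpha - 1))

def between_group_component(b, groups, alpha=2):
    """Replace each individual's benefit by their group's mean
    benefit, then measure inequality of the result: this captures
    the between-group share of the total unfairness."""
    b = np.asarray(b, dtype=float)
    groups = np.asarray(groups)
    group_means = np.array([b[groups == g].mean() for g in groups])
    return generalized_entropy_index(group_means, alpha)

benefits = np.array([3.0, 1.0, 1.0, 1.0])  # hypothetical benefits
groups   = np.array([0, 0, 1, 1])          # hypothetical groups
total   = generalized_entropy_index(benefits)       # 1/6  ~ 0.167
between = between_group_component(benefits, groups) # 1/18 ~ 0.056
within  = total - between                           # 1/9  ~ 0.111
```

The tradeoff the abstract describes is visible here: an intervention that drives `between` to zero can redistribute benefits inside groups in a way that increases `within`, and hence `total`.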

    LiFT: A Scalable Framework for Measuring Fairness in ML Applications

    Many internet applications are powered by machine-learned models, which are usually trained on labeled datasets obtained through either implicit or explicit user feedback signals or human judgments. Since societal biases may be present in the generation of such datasets, it is possible for the trained models to be biased, thereby resulting in potential discrimination and harms for disadvantaged groups. Motivated by the need for understanding and addressing algorithmic bias in web-scale ML systems and the limitations of existing fairness toolkits, we present the LinkedIn Fairness Toolkit (LiFT), a framework for scalable computation of fairness metrics as part of large ML systems. We highlight the key requirements in deployed settings, and present the design of our fairness measurement system. We discuss the challenges encountered in incorporating fairness tools in practice and the lessons learned during deployment at LinkedIn. Finally, we provide open problems based on practical experience. Comment: Accepted for publication in CIKM 202