
    γ-Catenin at Adherens Junctions: Mechanism and Biologic Implications in Hepatocellular Cancer after β-Catenin Knockdown

    β-Catenin is important in liver homeostasis as part of Wnt signaling and adherens junctions (AJs), while its aberrant activation is observed in hepatocellular carcinoma (HCC). We have previously reported that hepatocyte-specific β-catenin knockout (KO) mice lack adhesive defects because γ-catenin compensates at AJs. Because γ-catenin is a desmosomal protein, we asked whether its increase in KO mice might deregulate desmosomes. No changes in desmosomal proteins or ultrastructure were observed other than increased plakophilin-3. To further elucidate the role and regulation of γ-catenin, we turned to an in vitro model and show that γ-catenin increases in HCC cells upon β-catenin knockdown (KD). Here, γ-catenin is unable to rescue β-catenin/T cell factor (TCF) reporter activity; however, it sufficiently compensates at AJs, as assessed by scratch wound, centrifugal assay for cell adhesion (CAFCA), and hanging drop assays. The γ-catenin increase is observed only after β-catenin protein decrease and not after blockade of its transactivation; it is associated with enhanced serine/threonine phosphorylation and is abrogated by protein kinase A (PKA) inhibition. In fact, several PKA-binding sites were detected in γ-catenin by in silico analysis. Intriguingly, γ-catenin KD led to increased β-catenin levels and transactivation. Thus, γ-catenin compensates for β-catenin loss at AJs without affecting desmosomes but is unable to fulfill β-catenin's functions in Wnt signaling, and its stabilization after β-catenin loss is brought about by PKA. The catenin-sensing mechanism may depend on absolute β-catenin levels rather than its activity. Anti-β-catenin therapies for HCC that lower total β-catenin may target aberrant Wnt signaling without negatively impacting intercellular adhesion, provided the mechanisms leading to γ-catenin stabilization are spared.

    Individual Differences in Learning Social and Non-Social Network Structures

    How do people acquire knowledge about which individuals belong to different cliques or communities? And to what extent does this learning process differ from the process of learning higher-order information about complex associations between non-social bits of information? Here, we employ a paradigm in which the order of stimulus presentation forms temporal associations between the stimuli, collectively constituting a complex network. We examined individual differences in the ability to learn the community structure of networks composed of social versus non-social stimuli. Although participants were able to learn the community structure of both social and non-social networks, their performance in social network learning was uncorrelated with their performance in non-social network learning. In addition, social traits, including social orientation and perspective-taking, uniquely predicted the learning of social community structure but not the learning of non-social community structure. Taken together, our results suggest that the process of learning higher-order community structure in social networks is partially distinct from the process of learning higher-order community structure in non-social networks. Our study design provides a promising approach for identifying neurophysiological drivers of social network versus non-social network learning, extending our knowledge about the impact of individual differences on these learning processes.
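A minimal sketch of the kind of paradigm described above, assuming a toy modular graph (three fully connected communities of five nodes each, with single bridges between neighboring communities); the node counts, connectivity, and walk length are illustrative choices, not the study's actual design. Stimuli are treated as graph nodes and the presentation order is a random walk, so consecutive stimuli are always neighbors and the sequence as a whole embeds the community structure.

```python
import random

# Hypothetical modular graph: 3 communities of 5 nodes each (15 "stimuli").
# Within a community every pair is connected; the first node of each community
# also links to the last node of the previous one (illustrative numbers only).
def build_modular_graph(n_communities=3, size=5):
    edges = {i: set() for i in range(n_communities * size)}
    for c in range(n_communities):
        nodes = list(range(c * size, (c + 1) * size))
        for i in nodes:
            for j in nodes:
                if i != j:
                    edges[i].add(j)
        prev_last = (c * size - 1) % (n_communities * size)
        edges[nodes[0]].add(prev_last)
        edges[prev_last].add(nodes[0])
    return edges

def random_walk_sequence(edges, length=1000, seed=0):
    """Presentation order: a random walk, so consecutive stimuli are graph neighbors."""
    rng = random.Random(seed)
    node = rng.choice(list(edges))
    seq = [node]
    for _ in range(length - 1):
        node = rng.choice(sorted(edges[node]))
        seq.append(node)
    return seq

if __name__ == "__main__":
    graph = build_modular_graph()
    order = random_walk_sequence(graph)
    print(order[:20])  # first 20 stimulus indices of the presentation sequence
```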

    Molecular network analysis of phosphotyrosine and lipid metabolism in hepatic PTP1b deletion mice

    Metabolic syndrome describes a set of obesity-related disorders that increase diabetes, cardiovascular, and mortality risk. Studies of liver-specific protein-tyrosine phosphatase 1b (PTP1b) deletion mice (L-PTP1b−/−) suggest that hepatic PTP1b inhibition would mitigate metabolic syndrome through amelioration of hepatic insulin resistance, endoplasmic-reticulum stress, and whole-body lipid metabolism. However, the altered molecular-network states underlying these phenotypes are poorly understood. We used mass spectrometry to quantify protein-phosphotyrosine network changes in L-PTP1b−/− mouse livers relative to control mice on normal and high-fat diets. We applied a phosphosite-set-enrichment analysis to identify known and novel pathways exhibiting PTP1b- and diet-dependent phosphotyrosine regulation. Detection of a PTP1b-dependent, but functionally uncharacterized, set of phosphosites on lipid-metabolic proteins motivated global lipidomic analyses that revealed altered polyunsaturated-fatty-acid (PUFA) and triglyceride metabolism in L-PTP1b−/− mice. To connect phosphosite and lipid measurements in a unified model, we developed a multivariate-regression framework that accounts for measurement noise and systematically missing proteomics data. This analysis resulted in quantitative models that predict roles for phosphoproteins involved in oxidation–reduction in altered PUFA and triglyceride metabolism. Funding: Pfizer Inc.; National Institutes of Health (U.S.) grants 5R24DK090963, U54-CA112967, R37 CA49152, and R01-DK080756; National Mouse Metabolic Phenotyping Center at UMass (grant U24-DK093000); National Science Foundation (U.S.) Graduate Research Fellowship.
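The abstract describes its regression framework only at a high level; the following is a rough sketch, under loose assumptions, of one way to fit a linear model of a lipid measurement on phosphosite intensities when some sites are missing in some samples (mean imputation plus missingness indicators). It is not the authors' actual framework, and all data below are synthetic.

```python
import numpy as np

# Illustrative only: synthetic stand-ins for phosphosite intensities (X) and a
# lipid measurement (y); the paper's framework models measurement noise and
# systematically missing peptides more explicitly than this sketch does.
rng = np.random.default_rng(0)
n_samples, n_sites = 12, 5
X = rng.normal(size=(n_samples, n_sites))
y = X @ np.array([1.5, 0.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.3, size=n_samples)

# Introduce missing observations: some sites are unobserved in some samples.
X[rng.random(X.shape) < 0.2] = np.nan

# Simple handling choice: mean-impute each site and add a missingness indicator
# column so the fit can absorb systematic absence (one of many possible options).
means = np.nanmean(X, axis=0)
missing = np.isnan(X).astype(float)
X_imputed = np.where(np.isnan(X), means, X)
design = np.hstack([X_imputed, missing, np.ones((n_samples, 1))])

coef, *_ = np.linalg.lstsq(design, y, rcond=None)
print("site coefficients:", np.round(coef[:n_sites], 2))
```

In practice one would prefer a formulation that treats missingness and measurement noise explicitly (for example an errors-in-variables or Bayesian model), which is closer in spirit to what the abstract describes.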

    LSST Science Book, Version 2.0

    A survey that can cover the sky in optical bands over wide fields to faint magnitudes with a fast cadence will enable many of the exciting science opportunities of the next decade. The Large Synoptic Survey Telescope (LSST) will have an effective aperture of 6.7 meters and an imaging camera with a field of view of 9.6 deg^2, and will be devoted to a ten-year imaging survey over 20,000 deg^2 south of +15 deg. Each pointing will be imaged 2000 times with fifteen-second exposures in six broad bands from 0.35 to 1.1 microns, to a total point-source depth of r~27.5. The LSST Science Book describes the basic parameters of the LSST hardware, software, and observing plans. The book discusses educational and outreach opportunities, then goes on to describe a broad range of science that LSST will revolutionize: mapping the inner and outer Solar System, stellar populations in the Milky Way and nearby galaxies, the structure of the Milky Way disk and halo and other objects in the Local Volume, transient and variable objects both at low and high redshift, and the properties of normal and active galaxies at low and high redshift. It then turns to far-field cosmological topics, exploring properties of supernovae to z~1, strong and weak lensing, the large-scale distribution of galaxies and baryon oscillations, and how these different probes may be combined to constrain cosmological models and the physics of dark energy. Comment: 596 pages. Also available at full resolution at http://www.lsst.org/lsst/sciboo
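A quick back-of-envelope reading of the survey parameters quoted above (survey area, field of view, visits per pointing, and exposure time); it ignores field overlap, weather, filter changes, and readout/slew overheads, so it is only an order-of-magnitude sketch rather than the project's own accounting.

```python
# Back-of-envelope survey scale from the numbers quoted in the abstract.
survey_area_deg2 = 20_000
fov_deg2 = 9.6
visits_per_pointing = 2_000
exposure_s = 15

n_fields = survey_area_deg2 / fov_deg2          # ~2,083 pointings
n_exposures = n_fields * visits_per_pointing    # ~4.2 million exposures
open_shutter_hours = n_exposures * exposure_s / 3600

print(f"pointings:         {n_fields:,.0f}")
print(f"exposures:         {n_exposures:,.0f}")
print(f"open-shutter time: {open_shutter_hours:,.0f} hours over the 10-year survey")
```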

    LSST: from Science Drivers to Reference Design and Anticipated Data Products

    (Abridged) We describe here the most ambitious survey currently planned in the optical, the Large Synoptic Survey Telescope (LSST). A vast array of science will be enabled by a single wide-deep-fast sky survey, and LSST will have unique survey capability in the faint time domain. The LSST design is driven by four main science themes: probing dark energy and dark matter, taking an inventory of the Solar System, exploring the transient optical sky, and mapping the Milky Way. LSST will be a wide-field ground-based system sited at Cerro Pachón in northern Chile. The telescope will have an 8.4 m (6.5 m effective) primary mirror, a 9.6 deg^2 field of view, and a 3.2 Gigapixel camera. The standard observing sequence will consist of pairs of 15-second exposures in a given field, with two such visits in each pointing in a given night. With these repeats, the LSST system is capable of imaging about 10,000 square degrees of sky in a single filter in three nights. The typical 5σ point-source depth in a single visit in r will be ~24.5 (AB). The project is in the construction phase and will begin regular survey operations by 2022. The survey area will be contained within 30,000 deg^2 with δ < +34.5°, and will be imaged multiple times in six bands, ugrizy, covering the wavelength range 320–1050 nm. About 90% of the observing time will be devoted to a deep-wide-fast survey mode which will uniformly observe an 18,000 deg^2 region about 800 times (summed over all six bands) during the anticipated 10 years of operations, and yield a coadded map to r~27.5. The remaining 10% of the observing time will be allocated to projects such as a Very Deep and Fast time-domain survey. The goal is to make LSST data products, including a relational database of about 32 trillion observations of 40 billion objects, available to the public and scientists around the world. Comment: 57 pages, 32 color figures, version with high-resolution figures available from https://www.lsst.org/overvie
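The single-visit and coadded depths quoted above can be sanity-checked with the standard result that, for sky-background-limited point sources, stacking N visits deepens the 5σ limit by roughly 1.25·log10(N) magnitudes; the r-band visit count used below is an assumed rough split of the ~800 total visits across the six bands, not a figure from the abstract.

```python
import math

# Consistency check of the quoted depths: stacking N background-limited visits
# improves the 5-sigma point-source limit by ~1.25*log10(N) magnitudes.
single_visit_r = 24.5   # 5-sigma point-source depth per visit in r (AB), from the abstract
n_r_visits = 200        # assumed r-band share of the ~800 total visits (illustrative)

coadd_gain = 1.25 * math.log10(n_r_visits)
print(f"coadded r-band depth ~ {single_visit_r + coadd_gain:.1f} AB "
      f"(abstract quotes r ~ 27.5)")
```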

    Population Health Metrics Research Consortium gold standard verbal autopsy validation study: design, implementation, and development of analysis datasets

    Background: Verbal autopsy methods are critically important for evaluating the leading causes of death in populations without adequate vital registration systems. With a myriad of analytical and data collection approaches, it is essential to create a high quality validation dataset from different populations to evaluate comparative method performance and make recommendations for future verbal autopsy implementation. This study was undertaken to compile a set of strictly defined gold standard deaths for which verbal autopsies were collected to validate the accuracy of different methods of verbal autopsy cause of death assignment. Methods: Data collection was implemented in six sites in four countries: Andhra Pradesh, India; Bohol, Philippines; Dar es Salaam, Tanzania; Mexico City, Mexico; Pemba Island, Tanzania; and Uttar Pradesh, India. The Population Health Metrics Research Consortium (PHMRC) developed stringent diagnostic criteria including laboratory, pathology, and medical imaging findings to identify gold standard deaths in health facilities as well as an enhanced verbal autopsy instrument based on World Health Organization (WHO) standards. A cause list was constructed based on the WHO Global Burden of Disease estimates of the leading causes of death, potential to identify unique signs and symptoms, and the likely existence of sufficient medical technology to ascertain gold standard cases. Blinded verbal autopsies were collected on all gold standard deaths. Results: Over 12,000 verbal autopsies on deaths with gold standard diagnoses were collected (7,836 adults, 2,075 children, 1,629 neonates, and 1,002 stillbirths). Difficulties in finding sufficient cases to meet gold standard criteria as well as problems with misclassification for certain causes meant that the target list of causes for analysis was reduced to 34 for adults, 21 for children, and 10 for neonates, excluding stillbirths. To ensure strict independence for the validation of methods and assessment of comparative performance, 500 test-train datasets were created from the universe of cases, covering a range of cause-specific compositions. Conclusions: This unique, robust validation dataset will allow scholars to evaluate the performance of different verbal autopsy analytic methods as well as instrument design. This dataset can be used to inform the implementation of verbal autopsies to more reliably ascertain cause of death in national health information systems.
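One simple way to build test-train splits "covering a range of cause-specific compositions", as described above, is to draw a cause-fraction vector for each split (here from an uninformative Dirichlet) and resample cases to match it; the cause labels, case counts, and sampling scheme below are placeholders for illustration, not the PHMRC procedure itself.

```python
import numpy as np

# Placeholder cause list and case pools; the real study used the gold standard
# deaths described above, reduced to 34/21/10 target causes by age group.
rng = np.random.default_rng(42)
causes = ["cause_A", "cause_B", "cause_C", "cause_D"]
cases_by_cause = {c: list(range(i * 100, (i + 1) * 100)) for i, c in enumerate(causes)}

def make_split(test_size=200):
    """Draw a random cause composition, then resample a test set matching it."""
    fractions = rng.dirichlet(np.ones(len(causes)))
    counts = rng.multinomial(test_size, fractions)
    sampled = [rng.choice(cases_by_cause[c], size=n, replace=True).tolist()
               for c, n in zip(causes, counts)]
    return fractions, [case for group in sampled for case in group]

splits = [make_split() for _ in range(5)]   # the study generated 500 such splits
print(np.round(splits[0][0], 2))            # cause fractions of the first split
```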

    Composing The Reflected Best-Self Portrait: Building Pathways For Becoming Extraordinary In Work Organizations


    Storylines: an alternative approach to representing uncertainty in physical aspects of climate change

    As climate change research becomes increasingly applied, the need for actionable information is growing rapidly. A key aspect of this requirement is the representation of uncertainties. The conventional approach to representing uncertainty in physical aspects of climate change is probabilistic, based on ensembles of climate model simulations. In the face of deep uncertainties, the known limitations of this approach are becoming increasingly apparent. An alternative is thus emerging, which may be called a ‘storyline’ approach. We define a storyline as a physically self-consistent unfolding of past events, or of plausible future events or pathways. No a priori probability of the storyline is assessed; emphasis is placed instead on understanding the driving factors involved, and the plausibility of those factors. We introduce a typology of four reasons for using storylines to represent uncertainty in physical aspects of climate change: (i) improving risk awareness by framing risk in an event-oriented rather than a probabilistic manner, which corresponds more directly to how people perceive and respond to risk; (ii) strengthening decision-making by allowing one to work backward from a particular vulnerability or decision point, combining climate change information with other relevant factors to address compound risk and develop appropriate stress tests; (iii) providing a physical basis for partitioning uncertainty, thereby allowing the use of more credible regional models in a conditioned manner; and (iv) exploring the boundaries of plausibility, thereby guarding against false precision and surprise. Storylines also offer a powerful way of linking physical with human aspects of climate change.