
    Relative effects on global warming of halogenated methanes and ethanes of social and industrial interest

    The relative potential global warming effects of several halocarbons (chlorofluorocarbons (CFCs) 11, 12, 113, 114, and 115; hydrochlorofluorocarbons (HCFCs) 22, 123, 124, 141b, and 142b; hydrofluorocarbons (HFCs) 125, 134a, 143a, and 152a; carbon tetrachloride; and methyl chloroform) were calculated by two atmospheric modeling groups. These calculations used atmospheric chemistry models to determine the chemical profiles and radiative-convective models to determine the radiative processes. The resulting relative greenhouse warmings, when normalized to the effect of CFC-11, agree reasonably well once differences between modeled lifetimes are accounted for. Differences among the results are discussed, the sensitivity of the relative warming values to the assumed trace gas levels is determined, and transient relative global warming effects are analyzed.
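The normalization to CFC-11 described in this abstract can be sketched numerically. In a deliberately minimal model (an assumption for illustration, not the groups' actual radiative-convective code), a pulse emission decays exponentially with the compound's atmospheric lifetime, and the relative warming is the ratio of time-integrated radiative forcings:

```python
import math

def integrated_forcing(efficiency, lifetime, horizon):
    """Time-integrated radiative forcing of a unit pulse emission,
    assuming simple exponential decay with a fixed atmospheric lifetime."""
    return efficiency * lifetime * (1.0 - math.exp(-horizon / lifetime))

def relative_warming(eff_x, life_x, eff_ref, life_ref, horizon=100.0):
    """Warming effect of compound x normalized to a reference compound
    (CFC-11 in the study), over a time horizon in years."""
    return (integrated_forcing(eff_x, life_x, horizon) /
            integrated_forcing(eff_ref, life_ref, horizon))

# Illustrative parameters only (efficiency in arbitrary units, lifetime in years):
print(round(relative_warming(0.16, 14.0, 0.26, 52.0), 3))
```

By construction the reference compound maps to 1.0, which makes clear why, as the abstract notes, agreement between groups hinges largely on the lifetimes their models produce.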

    Relative effects on stratospheric ozone of halogenated methanes and ethanes of social and industrial interest

    Four atmospheric modeling groups have calculated the relative effects on stratospheric ozone of several halocarbons (chlorofluorocarbons (CFCs) 11, 12, 113, 114, and 115; hydrochlorofluorocarbons (HCFCs) 22, 123, 124, 141b, and 142b; hydrofluorocarbons (HFCs) 125, 134a, 143a, and 152a; carbon tetrachloride; and methyl chloroform). The effect on stratospheric ozone was calculated for each compound and normalized to the effect of CFC-11. These models include representations of homogeneous physical and chemical processes in the middle atmosphere but do not account for the heterogeneous chemistry or polar dynamics that are important in the springtime loss of ozone over Antarctica. The relative effects calculated across this range of models compare reasonably well. Within the limits of the uncertainties of these model results, compounds now under consideration as functional replacements for fully halogenated compounds have modeled stratospheric ozone reductions of 10 percent or less of that of CFC-11. Sensitivity analyses examined the dependence of the calculated relative effects on the levels of other trace gases, on the transport assumed in the models, and on latitude and season. Relative effects on polar ozone are discussed in the context of evolving information on the special processes affecting ozone, especially during the polar winter-springtime. Lastly, the time dependency of the relative effects was calculated.

    Probing the Coevolution of Supermassive Black Holes and Galaxies Using Gravitationally Lensed Quasar Hosts

    In the present-day universe, supermassive black hole masses (MBH) appear to be strongly correlated with their galaxy's bulge luminosity, among other properties. In this study, we explore the analogous relationship between MBH, derived using the virial method, and the stellar R-band bulge luminosity (Lr) or stellar bulge mass (M*) at epochs of 1 < z < 4.5, using a sample of 31 gravitationally lensed AGNs and 20 non-lensed AGNs. At redshifts z > 1.7 (10-12 Gyr ago), we find that the observed MBH-Lr relation is nearly the same (to within ~0.3 mag) as it is today. When the observed Lr are corrected for luminosity evolution, this means that the black holes grew in mass faster than their hosts, with the MBH/M* mass ratio being a factor of >4 (+2/-1) times larger at z > 1.7 than it is today. By the redshift range 1 < z < 1.7 (8-10 Gyr ago), the MBH/M* ratio is at most two times higher than today, but it may be consistent with no evolution. Combining these results, we conclude that the ratio MBH/M* rises with look-back time, although it may saturate at ~6 times the local value. Scenarios in which moderately luminous quasar hosts at z > 1.7 were fully formed bulges that passively faded to the present epoch are ruled out. Comment: ApJ accepted; includes referee comments and statistics to better quantify the statistical significance of the results. 23 pages, 11 figures, 4 tables.
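The virial method mentioned in this abstract combines a broad-line-region size with an emission-line velocity width. A minimal sketch follows; the virial factor, BLR radius, and line width used here are illustrative assumptions, not the study's calibration:

```python
# Single-epoch virial mass sketch: M_BH = f * R_BLR * (dV)^2 / G.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
PC = 3.086e16      # parsec, m

def virial_mass(r_blr_pc, line_width_km_s, f=1.0):
    """Black hole mass in solar masses, from a broad-line-region
    radius (pc) and an emission-line velocity width (km/s)."""
    r = r_blr_pc * PC
    v = line_width_km_s * 1.0e3
    return f * r * v * v / (G * M_SUN)

print(f"{virial_mass(0.01, 3000.0):.2e}")  # order 10^7 solar masses
```

In practice the BLR radius is not measured directly for each object but inferred from a radius-luminosity scaling, which is where much of the method's systematic uncertainty enters.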

    Familial history of diabetes and clinical characteristics in Greek subjects with type 2 diabetes

    <p>Abstract</p> <p>Background</p> <p>Many studies have shown an excess maternal transmission of type 2 diabetes (T2D). The aim of the present study was therefore to estimate the prevalence of a familial history of T2D in Greek patients and to evaluate its potential effect on the patients' metabolic control and the presence of diabetic complications.</p> <p>Methods</p> <p>A total of 1,473 T2D patients were recruited. Those with diabetic mothers, diabetic fathers, diabetic relatives other than parents, and no known diabetic relatives were considered separately.</p> <p>Results</p> <p>The prevalence of diabetes in the mother, the father, and relatives other than parents was 27.7, 11.0, and 10.7%, respectively. Patients with paternal diabetes had a higher prevalence of hypertension (64.8 vs. 57.1%, P = 0.05) and lower LDL-cholesterol levels (115.12 ± 39.76 vs. 127.13 ± 46.53 mg/dl, P = 0.006) than patients with diabetes in the mother. Patients with familial diabetes were significantly younger (P < 0.001), with a lower age at diabetes diagnosis (P < 0.001), than those without diabetic relatives. Patients with a diabetic parent had a higher body mass index (BMI) (31.22 ± 5.87 vs. 30.67 ± 5.35 kg/m<sup>2</sup>, P = 0.08) and a higher prevalence of dyslipidemia (49.8 vs. 44.6%, P = 0.06) and retinopathy (17.9 vs. 14.5%, P = 0.08) compared with patients with no diabetic relatives. No differences in the degree of metabolic control or the prevalence of chronic complications were observed.</p> <p>Conclusion</p> <p>The present study showed an excess maternal transmission of T2D in a sample of Greek diabetic patients. However, no differential influence of maternal versus paternal diabetes on the clinical characteristics of diabetic patients was found, except for LDL-cholesterol levels and the presence of hypertension. The presence of a family history of diabetes resulted in an earlier onset of the disease in the offspring.</p>

    Teratology Primer-2nd Edition (7/9/2010)

    Foreword: What is Teratology? “What a piece of work is an embryo!” as Hamlet might have said. “In form and moving how express and admirable! In complexity how infinite!” It starts as a single cell, which by repeated divisions gives rise to many genetically identical cells. These cells receive signals from their surroundings and from one another as to where they are in this ball of cells — front or back, right or left, headwards or tailwards — and what they are destined to become. Each cell commits itself to being one of many types; the cells migrate, combine into tissues, or get out of the way by dying at predetermined times and places. The tissues signal one another to take their own pathways; they bend, twist, and form organs. An organism emerges. This wondrous transformation from single-celled simplicity to myriad-celled complexity is programmed by genes that, in the greatest mystery of all, are turned on and off at specified times and places to coordinate the process. It is a wonder that this marvelously emergent operation, where there are so many opportunities for mistakes, ever produces a well-formed and functional organism. And sometimes it doesn’t. Mistakes occur. Defective genes may disturb development in ways that lead to death or to malformations. Extrinsic factors may do the same. “Teratogenic” refers to factors that cause malformations, whether they be genes or environmental agents. The word comes from the Greek “teras,” for “monster,” a term applied in ancient times to babies with severe malformations, which were considered portents or, in the Latin, “monstra.” Malformations can happen in many ways. For example, when the neural plate rolls up to form the neural tube, it may not close completely, resulting in a neural tube defect — anencephaly if the opening is in the head region, or spina bifida if it is lower down. The embryonic processes that form the face may fail to fuse, resulting in a cleft lip.
Later, the shelves that will form the palate may fail to move from the vertical to the horizontal, where they should meet in the midline and fuse, resulting in a cleft palate. Or they may meet, but fail to fuse, with the same result. The forebrain may fail to induce the overlying tissue to form the eye, so there is no eye (anophthalmia). The tissues between the toes may fail to break down as they should, and the toes remain webbed. Experimental teratology flourished in the 19th century, and embryologists knew well that the development of bird and frog embryos could be deranged by environmental “insults,” such as lack of oxygen (hypoxia). But the mammalian uterus was thought to be an impregnable barrier that would protect the embryo from such threats. By exclusion, mammalian malformations must be genetic, it was thought. In the early 1940s, several events changed this view. In Australia an astute ophthalmologist, Norman Gregg, established a connection between maternal rubella (German measles) and the triad of cataracts, heart malformations, and deafness. In Cincinnati Josef Warkany, an Austrian pediatrician, showed that depriving female rats of vitamin B (riboflavin) could cause malformations in their offspring — one of the early experimental demonstrations of a teratogen. Warkany was trying to produce congenital cretinism by putting the rats on an iodine-deficient diet. The diet did indeed cause malformations, but not because of the iodine deficiency; depleting the diet of iodine had also depleted it of riboflavin! Several other teratogens were found in experimental animals, including nitrogen mustard (an anticancer drug), trypan blue (a dye), and hypoxia (lack of oxygen). The pendulum was swinging back; it seemed that malformations were not genetically, but environmentally caused. In Montreal, in the early 1950s, Clarke Fraser’s group wanted to bring genetics back into the picture.
They had found that treating pregnant mice with cortisone caused cleft palate in the offspring, and showed that the frequency was high in some strains and low in others. The only difference was in the genes. So began “teratogenetics,” the study of how genes influence the embryo’s susceptibility to teratogens. The McGill group went on to develop the idea that an embryo’s genetically determined normal pattern of development could influence its susceptibility to a teratogen: the multifactorial threshold concept. For instance, an embryo must move its palate shelves from vertical to horizontal before a certain critical point or they will not meet and fuse. A teratogen that causes cleft palate by delaying shelf movement beyond this point is more likely to do so in an embryo whose genes normally move its shelves late. As studies of the basis for abnormal development progressed, patterns began to appear, and the principles of teratology were developed. These stated, in summary, that the probability of a malformation being produced by a teratogen depends on the dose of the agent, the stage at which the embryo is exposed, and the genotype of the embryo and mother. The number of mammalian teratogens grew, and those who worked with them began to meet from time to time to talk about what they were finding, leading, in 1960, to the formation of the Teratology Society. There were, of course, concerns about whether these experimental teratogens would be a threat to human embryos, but it was thought, by me at least, that they were all “sledgehammer blows” that would be teratogenic in people only at doses far above those to which human embryos would be exposed. So not to worry, or so we thought. Then came thalidomide, a totally unexpected catastrophe. The discovery that ordinary doses of this supposedly “harmless” sleeping pill and anti-nauseant could cause severe malformations in human babies galvanized this new field of teratology.
Scientists who had been quietly working in their laboratories suddenly found themselves spending much of their time in conferences and workshops, sitting on advisory committees, acting as consultants for pharmaceutical companies, regulatory agencies, and lawyers, as well as redesigning their research plans. The field of teratology and developmental toxicology expanded rapidly. The following pages will show how far we have come, and how many important questions still remain to be answered. A lot of effort has gone into developing ways to predict how much of a hazard a particular experimental teratogen would be to the human embryo (chapters 9–19). It was recognized that animal studies might not prove a drug was “safe” for the human embryo (in spite of great pressure from legislators and the public to do so), since species can vary in their responses to teratogenic exposures. A number of human teratogens have been identified, and some, suspected of teratogenicity, have been exonerated — at least of a detectable risk (chapters 21–32). Regulations for testing drugs before market release have greatly improved (chapter 14). Other chapters deal with how much population studies (chapter 11), post-marketing surveillance (chapter 13), and systems biology (chapter 16) add to our understanding. And, in a major advance, the maternal role of folate in preventing neural tube defects and other birth defects is being exploited (chapter 32). Encouraging women to take folic acid supplements and adding folate to flour have produced dramatic falls in the frequency of neural tube defects in many parts of the world. Progress has been made not only in the use of animal studies to predict human risks, but also in illuminating how, and under what circumstances, teratogens act to produce malformations (chapters 2–8). These studies have contributed greatly to our knowledge of abnormal and also normal development.
Now we are beginning to see exactly when and where the genes turn on and off in the embryo, to appreciate how they guide development, and to gain exciting new insights into how genes and teratogens interact. The prospects for progress in the war on birth defects were never brighter.
F. Clarke Fraser
McGill University (Emeritus)
Montreal, Quebec, Canada

    Long-term exposure to hypoxia inhibits tumor progression of lung cancer in rats and mice

    <p>Abstract</p> <p>Background</p> <p>Hypoxia has been identified as a major negative factor for tumor progression in clinical observations and in animal studies. However, the precise role of hypoxia in tumor progression has not been fully explained. In this study, we extensively investigated the effect of long-term exposure to hypoxia on tumor progression <it>in vivo</it>.</p> <p>Methods</p> <p>Rats bearing transplanted tumors consisting of A549 human lung cancer cells (lung cancer tumor) were exposed to hypoxia for different durations and different levels of oxygen. The tumor growth and metastasis were evaluated. We also treated A549 lung cancer cells (A549 cells) with chronic hypoxia and then implanted the hypoxia-pretreated cancer cells into mice. The effect of exposure to hypoxia on metastasis of Lewis lung carcinoma in mice was also investigated.</p> <p>Results</p> <p>We found that long-term exposure to hypoxia a) significantly inhibited lung cancer tumor growth in xenograft and orthotopic models in rats, b) significantly reduced lymphatic metastasis of the lung cancer in rats and decreased lung metastasis of Lewis lung carcinoma in mice, c) reduced lung cancer cell proliferation and cell cycle progression <it>in vitro</it>, d) decreased growth of the tumors from hypoxia-pretreated A549 cells, e) decreased Na<sup>+</sup>-K<sup>+</sup> ATPase α1 expression in hypoxic lung cancer tumors, and f) increased expression of hypoxia inducible factors (HIF1α and HIF2α) but decreased microvessel density in the lung cancer tumors.
In contrast to lung cancer, the growth of the tumor from HCT116 human colon cancer cells (colon cancer tumor) was a) significantly enhanced under the same hypoxia conditions, accompanied by b) no significant change in expression of Na<sup>+</sup>-K<sup>+</sup> ATPase α1, c) increased HIF1α expression (no HIF2α was detected), and d) increased microvessel density in the tumor tissues.</p> <p>Conclusions</p> <p>This study demonstrated that long-term exposure to hypoxia repressed tumor progression of the lung cancer from A549 cells and that decreased expression of Na<sup>+</sup>-K<sup>+</sup> ATPase was involved in the hypoxic inhibition of tumor progression. The results from this study provide new insights into the role of hypoxia in tumor progression and therapeutic strategies for cancer treatment.</p>

    Positional Cloning of “Lisch-like”, a Candidate Modifier of Susceptibility to Type 2 Diabetes in Mice

    In 404 Lepob/ob F2 progeny of a C57BL/6J (B6) x DBA/2J (DBA) intercross, we mapped a DBA-related quantitative trait locus (QTL) to distal Chr1 at 169.6 Mb, centered about D1Mit110, for diabetes-related phenotypes that included blood glucose, HbA1c, and pancreatic islet histology. The interval was refined to 1.8 Mb in a series of B6.DBA congenic/subcongenic lines also segregating for Lepob. The phenotypes of B6.DBA congenic mice include reduced β-cell replication rates accompanied by reduced β-cell mass, a reduced insulin/glucose ratio in blood, reduced glucose tolerance, and persistent mild hypoinsulinemic hyperglycemia. Nucleotide sequence and expression analysis of 14 genes in this interval identified a predicted gene that we have designated “Lisch-like” (Ll) as the most likely candidate. The gene spans 62.7 kb on Chr1qH2.3, encoding a 10-exon, 646–amino acid polypeptide, homologous to Lsr on Chr7qB1 and to Ildr1 on Chr16qB3. The largest isoform of Ll is predicted to be a transmembrane molecule with an immunoglobulin-like extracellular domain and a serine/threonine-rich intracellular domain that contains a 14-3-3 binding domain. Morpholino knockdown of the zebrafish paralog of Ll resulted in a generalized delay in endodermal development in the gut region and dispersion of insulin-positive cells. Mice segregating for an ENU-induced null allele of Ll have phenotypes comparable to those of the B6.DBA congenic lines. The human ortholog, C1orf32, lies in the middle of a 30-Mb region of Chr1q23-25 that has been repeatedly associated with type 2 diabetes.

    Measuring the burden of arboviral diseases: the spectrum of morbidity and mortality from four prevalent infections

    <p>Abstract</p> <p>Background</p> <p>Globally, arthropod-borne virus infections are increasingly common causes of severe febrile disease that can progress to long-term physical or cognitive impairment or result in early death. Because of the large populations at risk, it has been suggested that these outcomes represent a substantial health deficit not captured by current global disease burden assessments.</p> <p>Methods</p> <p>We reviewed newly available data on disease incidence and outcomes to critically evaluate the disease burden (as measured by disability-adjusted life years, or DALYs) caused by yellow fever virus (YFV), Japanese encephalitis virus (JEV), chikungunya virus (CHIKV), and Rift Valley fever virus (RVFV). We searched available literature and official reports on these viruses combined with the terms "outbreak(s)," "complication(s)," "disability," "quality of life," "DALY," and "QALY," focusing on reports since 2000. We screened 210 published studies, with 38 selected for inclusion. Data on average incidence, duration, age at onset, mortality, and severity of acute and chronic outcomes were used to create DALY estimates for 2005, using the approach of the current Global Burden of Disease framework.</p> <p>Results</p> <p>Given the limitations of available data, nondiscounted, unweighted DALYs attributable to YFV, JEV, CHIKV, and RVFV were estimated to fall between 300,000 and 5,000,000 for 2005. YFV was the most prevalent infection of the four viruses evaluated, although a higher proportion of the world's population lives in countries at risk for CHIKV and JEV. Early mortality and long-term, related chronic conditions provided the largest DALY components for each disease. 
The better-known, short-term viral febrile syndromes caused by these viruses contributed relatively lower proportions of the overall DALY scores.</p> <p>Conclusions</p> <p>Limitations in health systems in endemic areas undoubtedly lead to underestimation of arbovirus incidence and related complications. However, improving diagnostics and better understanding of the late secondary results of infection now give a first approximation of the current disease burden from these widespread serious infections. Arbovirus control and prevention remains a high priority, both because of the current disease burden and the significant threat of the re-emergence of these viruses among much larger groups of susceptible populations.</p>
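The DALY construction this abstract relies on can be sketched as years of life lost plus years lived with disability. A minimal nondiscounted, unweighted version follows, matching the abstract's description in spirit; all numbers are illustrative, not estimates from the study:

```python
def dalys(cases, deaths, life_expectancy_at_death, duration_years, disability_weight):
    """Nondiscounted, unweighted DALYs = years of life lost (YLL)
    + years lived with disability (YLD)."""
    yll = deaths * life_expectancy_at_death            # premature mortality
    yld = cases * duration_years * disability_weight   # morbidity burden
    return yll + yld

# Illustrative inputs only (not values from the study):
print(dalys(cases=50_000, deaths=5_000, life_expectancy_at_death=40.0,
            duration_years=0.5, disability_weight=0.2))  # 205000.0
```

Because YLL scales with remaining life expectancy and YLD with duration and severity, early mortality and long-term chronic sequelae dominate the totals, as the abstract reports.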

    Creative destruction in science

    Drawing on the concept of a gale of creative destruction in a capitalistic economy, we argue that initiatives to assess the robustness of findings in the organizational literature should aim to simultaneously test competing ideas operating in the same theoretical space. In other words, replication efforts should seek not just to support or question the original findings, but also to replace them with revised, stronger theories with greater explanatory power. Achieving this will typically require adding new measures, conditions, and subject populations to research designs, in order to carry out conceptual tests of multiple theories in addition to directly replicating the original findings. To illustrate the value of the creative destruction approach for theory pruning in organizational scholarship, we describe recent replication initiatives re-examining culture and work morality, working parents’ reasoning about day care options, and gender discrimination in hiring decisions. Significance statement: It is becoming increasingly clear that many, if not most, published research findings across scientific fields are not readily replicable when the same method is repeated. Although extremely valuable, failed replications risk leaving a theoretical void, reducing confidence that the original theoretical prediction is true, but not replacing it with positive evidence in favor of an alternative theory. We introduce the creative destruction approach to replication, which combines theory pruning methods from the field of management with emerging best practices from the open science movement, with the aim of making replications as generative as possible. In effect, we advocate for a Replication 2.0 movement in which the goal shifts from checking on the reliability of past findings to actively engaging in competitive theory testing and theory building.
Scientific transparency statement: The materials, code, and data for this article are posted publicly on the Open Science Framework, with links provided in the article.