
    Leukemia Inhibitory Factor in Rat Fetal Lung Development: Expression and Functional Studies

    Background: Leukemia inhibitory factor (LIF) and interleukin-6 (IL-6) are members of the family of glycoprotein 130 (gp130)-type cytokines. These cytokines share gp130 as a common signal transducer, which explains why they show some functional redundancy. Recently, it was demonstrated that IL-6 promotes fetal lung branching. Additionally, LIF has been implicated in developmental processes of other branching organs. Thus, in this study, the LIF expression pattern and its effects on fetal rat lung morphogenesis were assessed. Methodology/Principal Findings: Expression levels of LIF and its receptor subunit LIFRα were evaluated by immunohistochemistry and western blot in fetal rat lungs of different gestational ages, ranging from 13.5 to 21.5 days post-conception. Throughout all gestational ages studied, LIF was constitutively expressed in the pulmonary epithelium, whereas LIFRα was first expressed mainly in the mesenchyme but, after the pseudoglandular stage, was also observed in epithelial cells. These results point to a LIF-mediated epithelium-mesenchyme cross-talk, which is known to be important for the lung branching process. Regarding functional studies, fetal lung explants were cultured with increasing doses of LIF or LIF-neutralizing antibodies for 4 days. MAPK, AKT, and STAT3 phosphorylation in the treated lung explants was analyzed. LIF supplementation significantly inhibited lung growth in spite of an increase in p44/42 phosphorylation. On the other hand, LIF inhibition significantly stimulated lung growth via the p38 and Akt pathways.

    Adaptation in integrated assessment modeling: where do we stand?

    Adaptation is an important element on the climate change policy agenda. Integrated assessment models, which are key tools for assessing climate change policies, have begun to address adaptation, either by including it implicitly in damage cost estimates or by making it an explicit control variable. We analyze how modelers have chosen to describe adaptation within an integrated framework, and suggest several ways they could improve its treatment by considering more of its bottom-up characteristics. Until this happens, we suggest, models may be too optimistic about the net benefits adaptation can provide, and therefore may underestimate the amount of mitigation they judge to be socially optimal. Under some conditions, better modeling of adaptation costs and benefits could have important implications for defining mitigation targets. © Springer Science+Business Media B.V. 2009

    Awareness in Practice: Tensions in Access to Sensitive Attribute Data for Antidiscrimination

    Organizations cannot address demographic disparities that they cannot see. Recent research on machine learning and fairness has emphasized that awareness of sensitive attributes, such as race and sex, is critical to the development of interventions. However, on the ground, the existence of these data cannot be taken for granted. This paper uses the domains of employment, credit, and healthcare in the United States to surface conditions that have shaped the availability of sensitive attribute data. For each domain, we describe how and when private companies collect or infer sensitive attribute data for antidiscrimination purposes. An inconsistent story emerges: some companies are required by law to collect sensitive attribute data, while others are prohibited from doing so. Still others, in the absence of legal mandates, have determined that collection and imputation of these data are appropriate to address disparities. This story has important implications for fairness research and its future applications. If companies that mediate access to life opportunities are unable or hesitant to collect or infer sensitive attribute data, then proposed techniques to detect and mitigate bias in machine learning models might never be implemented outside the lab. We conclude that today's legal requirements and corporate practices, while highly inconsistent across domains, offer lessons for how to approach the collection and inference of sensitive data in appropriate circumstances. We urge stakeholders, including machine learning practitioners, to actively help chart a path forward that takes both policy goals and technical needs into account.

    Measurement of the charge asymmetry in top-quark pair production in the lepton-plus-jets final state in pp collision data at √s = 8 TeV with the ATLAS detector


    ATLAS Run 1 searches for direct pair production of third-generation squarks at the Large Hadron Collider


    The Objective Function: Science and Society in the Age of Machine Intelligence

    Machine intelligence, or the use of complex computational and statistical practices to make predictions and classifications based on data representations of phenomena, has been applied to domains as disparate as criminal justice, commerce, medicine, media and the arts, and mechanical engineering, among others. How has machine intelligence become able to glide so freely across, and to make such waves for, these domains? In this dissertation, I take up that question by ethnographically engaging with how the authority of machine learning has been constructed such that it can influence so many domains, and I investigate the consequences of its being able to do so. By examining the workplace practices of the applied machine learning researchers who produce machine intelligence, those they work with, and the artifacts they produce (algorithmic systems, public demonstrations of machine intelligence, academic research articles, and conference presentations), a wider set of implications about the legacies of positivism and objectivity, the construction of expertise, and the exercise of power takes shape. The dissertation begins by arguing that machine intelligence proceeds from a "naïve" form of empiricism with ties to positivist intellectual traditions of the 17th and 18th centuries. This naïve empiricism eschews other forms of knowledge and theory formation in order for applied machine learning researchers to enact data performances that bring objects of analysis into existence as entities capable of being subjected to machine intelligence. By "data performances," I mean generative enactments that bring into existence that which machine intelligence purports to analyze or describe. The enactment of data performances is analyzed as an agential cut into a representational field that produces both stable claims about the world and the interpretive frame in which those claims can hold true.
The dissertation also examines how machine intelligence depends upon a range of accommodations from other institutions and organizations, from data collection and processing to organizational commitments to support the work of applied machine learning researchers. Throughout the dissertation, methods are developed for analyzing the expert practices by which machine learning researchers transform situated, positional knowledge into machine intelligence and re-present it as objective knowledge. These methods trace the chains of dependencies between data collection, processing, and analysis to reveal where and how hidden assumptions about the phenomena being analyzed are advanced. The second half of the dissertation focuses on how the authority of machine intelligence to control or ensure compliance is developed. This authority rests not only on applications of machine intelligence that constrain the freedom of others to act in accordance with their own desires, but also on the ways in which attempts to critique or curtail the authority of machine intelligence are assimilated into the logics and practices of machine intelligence itself. Attempts to limit the authority of machine intelligence, particularly AI ethics and algorithmic fairness, are explored ethnographically to conclude that, even in recognizing and attempting to take responsibility for the harms it risks producing in the world, machine intelligence nevertheless remains resistant to forms of accountability that are external to its own practices. This ensures that machine intelligence remains, contrary to its presentation as futuristic or transformative, a deeply conservative project that conserves the power of those who already wield it.