
    Twenty-five and Still Going Strong: An Inside Look at the Hamel Center for Undergraduate Research


    Constructing effective ethical frameworks for biobanking

    This paper is about the actual and potential development of an ethics appropriate to the practices and institutions of biobanking, the question being how best to develop a framework within which the relevant ethical questions are first identified and then addressed in the right ways. It begins by showing why a standard approach in bioethics – upholding a principle of individual autonomy via the practice of gaining donors’ informed consent – is an inadequate ethical framework for biobanking. In donating material to a biobank, the individual donor relinquishes a degree of control and knowledge over the way their material is used in large-scale and typically open-ended projects, and the identifying nature of genetic material means that third parties have rights and interests which must be taken into account alongside those of the individual donor. After discussing the problems for informed consent in the biobanking context, the paper considers three emerging alternative approaches which, broadly speaking, conceptualize the subject of biobanking ethics in communal or co-operative terms: one sees participants in biobanking research as ‘shareholders’, while another expands the notion of participation to include the wider public beneficiaries of biobanking as ‘stakeholders’. It concludes by outlining a third view, on which the biobanking institution itself is conceived as an ethical subject whose defining function can do useful normative work in guiding and evaluating its activities.

    Gene expression in large pedigrees: analytic approaches.

    Background: We currently have the ability to quantify transcript abundance of messenger RNA (mRNA), genome-wide, using microarray technologies. Analyzing genotype, phenotype, and expression data from 20 pedigrees, the members of our Genetic Analysis Workshop (GAW) 19 gene expression group published 9 papers, tackling some timely and important problems and questions. To study the complexity and interrelationships of genetics and gene expression, we used established statistical tools, developed newer statistical tools, and developed and applied extensions to these tools.

    Methods: To study gene expression correlations in the pedigree members (without incorporating genotype or trait data into the analysis), 2 papers used principal components analysis, weighted gene co-expression network analysis, meta-analyses, gene enrichment analyses, and linear mixed models. To explore the relationship between genetics and gene expression, 2 papers studied expression quantitative trait locus allelic heterogeneity through conditional association analyses, and epistasis through interaction analyses. A third paper assessed the feasibility of applying allele-specific binding to filter potential regulatory single-nucleotide polymorphisms (SNPs). Analytic approaches included linear mixed models based on measured genotypes in pedigrees, permutation tests, and covariance kernels. To incorporate both genotype and phenotype data with gene expression, 4 groups employed linear mixed models, nonparametric weighted U statistics, structural equation modeling, Bayesian unified frameworks, and multiple regression.

    Results and discussion: Regarding the analysis of pedigree data, we found that gene expression is familial, indicating that at least 1 factor for pedigree membership, or multiple factors for degree of relationship, should be included in analyses, and we developed a method to adjust for familiality prior to conducting weighted gene co-expression network analysis. For SNP association and conditional analyses, we found that FaST-LMM (Factored Spectrally Transformed Linear Mixed Model) and SOLAR-MGA (Sequential Oligogenic Linkage Analysis Routines - Major Gene Analysis) have similar type 1 and type 2 errors and can be used almost interchangeably. To improve the power and precision of association tests, prior knowledge of DNase-I hypersensitivity sites or other relevant biological annotations can be incorporated into the analyses. On a biological level, eQTL (expression quantitative trait loci) are genetically complex, exhibiting both allelic heterogeneity and epistasis. Including both genotype and phenotype data together with measurements of gene expression was found to be generally advantageous, both in generating improved levels of significance and in providing more interpretable biological models.

    Conclusions: Pedigrees can be used to conduct analyses of, and to enhance, gene expression studies.
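    The abstract does not give the group's code, but a minimal sketch of the kind of model it describes is shown below: a transcript's expression is regressed on a candidate SNP while a random intercept for pedigree membership absorbs familial correlation. The file name, the column names (expr, snp, age, sex, pedigree), and the use of statsmodels in place of FaST-LMM or SOLAR-MGA are illustrative assumptions; a full pedigree analysis would replace the simple group intercept with a kinship-based covariance.

        # Sketch: SNP-expression association with a random intercept per pedigree,
        # a simplification of the kinship-based linear mixed models described above.
        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical long-format table: one row per individual.
        data = pd.read_csv("gaw19_expression_example.csv")  # columns: expr, snp, age, sex, pedigree

        model = smf.mixedlm("expr ~ snp + age + sex", data, groups=data["pedigree"])
        result = model.fit(reml=True)
        print(result.summary())   # fixed-effect estimate for the SNP
        print(result.cov_re)      # variance of the pedigree random effect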

    Improvements in turfgrass color and density resulting from comprehensive soil diagnostics

    There are roughly 220 golf courses in Arkansas, and as many as 50% of these courses were constructed with common bermudagrass fairways. Although resilient, common bermudagrass loses density and quality over time. In this experiment, the physical and chemical properties of the soil were analyzed to determine the causes of the decline in turf quality observed on several fairways of a local golf course. Once a particular fairway was selected for study and preliminary soil sampling was conducted, GS+, a geostatistical computer program, was used to map the locations of certain chemical deficiencies. A moderate to severe Mg deficiency was detected throughout the fairway. Twelve different fertility treatments were designed to enhance the overall density, texture, and color of the turf. Magnesium sulfate (MgSO4), Primo™ (a plant growth regulator), and Nitron (an organic nitrogen source) all produced significant improvements in turf quality. Extensive and comprehensive soil testing proved very beneficial; “hidden” nutrient deficiencies were discovered, which allowed site-specific treatments to be included in the test.
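    The GS+ workflow itself is not described in the abstract, but the mapping step it refers to - interpolating point soil-test values across a fairway - can be approximated with a simple inverse-distance-weighted surface, as in the sketch below. The sample coordinates and Mg values are invented for illustration.

        # Sketch: inverse-distance-weighted (IDW) interpolation of soil-test Mg values
        # onto a grid, approximating the kind of deficiency map a geostatistical
        # package such as GS+ would produce from point samples.
        import numpy as np

        # Hypothetical soil samples: (x, y) position in meters and Mg level (ppm).
        xy = np.array([[10, 5], [40, 12], [75, 8], [110, 15], [150, 6]], dtype=float)
        mg = np.array([38.0, 22.0, 17.0, 25.0, 41.0])  # invented values

        def idw(px, py, points, values, power=2.0):
            """Inverse-distance-weighted estimate at grid point (px, py)."""
            d = np.hypot(points[:, 0] - px, points[:, 1] - py)
            if np.any(d < 1e-9):              # exactly on a sample point
                return float(values[np.argmin(d)])
            w = 1.0 / d**power
            return float(np.sum(w * values) / np.sum(w))

        # Evaluate on a coarse grid covering the fairway and print as rows.
        for py in range(0, 20, 5):
            print(" ".join(f"{idw(px, py, xy, mg):5.1f}" for px in range(0, 160, 20)))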

    Collateral damage: Sizing and assessing the subprime CDO crisis

    This paper conducts an in-depth analysis of structured finance asset-backed securities collateralized debt obligations (SF ABS CDOs), the subset of CDOs that traded on the ABS CDO desks at the major investment banks and were a major contributor to the global financial panic of August 2007. Despite their importance, we have yet to determine the exact size and composition of the SF ABS CDO market or get a good sense of the write-downs these CDOs will generate. In this paper the authors identify these SF ABS CDOs with data from Intex©, the source data and valuation software for the universe of publicly traded ABS/MBS securities and SF ABS CDOs. They estimate that 727 publicly traded SF ABS CDOs were issued between 1999 and 2007, totaling $641 billion. Once identified, they describe how and why multisector structured finance CDOs became subprime CDOs, and show why they were so susceptible to catastrophic losses. The authors then track the flows of subprime bonds into CDOs to document the enormous cross-referencing of subprime securities into CDOs. They calculate that $201 billion of the underlying collateral of these CDOs was referenced by synthetic credit default swaps (CDSs) and show how some 5,500 BBB-rated subprime bonds were placed or referenced into these CDOs some 37,000 times, transforming $64 billion of BBB subprime bonds into $140 billion of CDO assets. For the valuation exercise, the authors estimate that total write-downs on SF ABS CDOs will be $420 billion, 65 percent of original issuance balance, with over 70 percent of these losses having already been incurred. They then extend the work of Barnett-Hart (2009) to analyze the determinants of expected losses on the deals and AAA bonds and examine the performance of the dealers, collateral managers, and rating agencies. Finally, the authors discuss the implications of their findings for the “subprime CDO crisis” and discuss the many areas for future work.

    Debt; Securities; Asset-backed financing; Banks and banking
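    The headline figures quoted above imply a few simple ratios - write-downs as a share of issuance and the multiplication of BBB exposure through re-referencing - which the short calculation below reproduces. It uses only the numbers stated in the abstract, not the underlying Intex data.

        # Back-of-the-envelope ratios implied by the figures quoted in the abstract.
        issuance_bn      = 641.0   # total SF ABS CDO issuance, 1999-2007 ($ billions)
        writedowns_bn    = 420.0   # estimated total write-downs ($ billions)
        bbb_bonds_bn     = 64.0    # BBB subprime bonds placed or referenced ($ billions)
        cdo_assets_bn    = 140.0   # CDO assets created from those bonds ($ billions)
        synthetic_ref_bn = 201.0   # collateral referenced via credit default swaps ($ billions)

        print(f"write-downs / issuance   : {writedowns_bn / issuance_bn:.1%}")    # ~65.5%
        print(f"CDO assets / BBB bonds   : {cdo_assets_bn / bbb_bonds_bn:.2f}x")  # ~2.19x
        print(f"synthetic share of total : {synthetic_ref_bn / issuance_bn:.1%}") # ratio of quoted figures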

    The trust preferred CDO market: from start to (expected) finish

    This paper investigates the development, issuance, structuring, and expected performance of the trust preferred securities collateralized debt obligation (TruPS CDO) market. Developed as a way to provide capital markets access to smaller banks, thrifts, insurance companies, and real estate investment trusts (REITs) by pooling the issuance of TruPS into marketable CDOs, the market grew to $60 billion of issuance from its inception in 2000 through its abrupt halt in 2007. As evidenced by rating agency downgrades, current performance, and estimates from the authors' own model, TruPS CDOs are likely to perform poorly. Using data and valuation software from the leading provider of such information, the authors estimate that large numbers of the subordinated bonds, and some senior bonds, will be either fully or partially written down, even if no further defaults occur going forward. The primary reason for these losses is that the underlying collateral of TruPS CDOs consists of small, unrated banks whose primary asset is commercial real estate (CRE). During the years of greatest issuance, 2003 to 2007, the booming real estate market and the record low number of bank failures masked the underlying risks that are now manifest. Another reason for the poor performance of bank TruPS CDOs is that smaller banks became primary investors in their mezzanine tranches, something that is also complicating regulators' resolutions of failed banks. To understand how this came about, the authors explore in detail the symbiotic relationship between dealers and rating agencies and how they modeled and sold TruPS CDOs. In their concluding comments, the authors provide several lessons learned for policymakers, regulators, and market participants.

    Asset-backed financing
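    As a schematic of why the subordinated and mezzanine TruPS CDO bonds are the first to be written down, the sketch below allocates a collateral loss bottom-up through a simplified capital structure. The tranche sizes and the loss amount are invented, and real deals add overcollateralization tests and interest diversion that this ignores.

        # Sketch: allocating collateral losses bottom-up through a simplified CDO
        # capital structure; equity and subordinated notes absorb losses before the
        # mezzanine tranche, and the senior tranche is hit last.
        def allocate_losses(tranches, loss):
            """tranches: list of (name, size) ordered senior -> junior."""
            remaining = loss
            writedowns = {}
            for name, size in reversed(tranches):   # start with the most junior
                hit = min(size, remaining)
                writedowns[name] = hit
                remaining -= hit
            return writedowns

        # Hypothetical $500 million deal (all figures invented for illustration).
        deal = [("senior", 350.0), ("mezzanine", 100.0), ("subordinated", 35.0), ("equity", 15.0)]
        print(allocate_losses(deal, loss=120.0))
        # equity and subordinated notes are wiped out, the mezzanine tranche loses 70,
        # and the senior tranche is untouched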

    "Q i-jtb the Raven": Taking Dirty OCR Seriously

    This article argues that scholars must understand mass digitized texts as assemblages of new editions, subsidiary editions, and impressions of their historical sources, and that these various parts require sustained bibliographic analysis and description. To adequately theorize any research conducted in large-scale text archives—including research that draws on primary or secondary sources discovered through keyword search—we must avoid the myth of surrogacy proffered by page images and instead consider directly the text files they overlay. Focusing on the OCR (optical character recognition) from which most large-scale historical text data derives, this article argues that the results of this "automatic" process are in fact new editions of their source texts that offer unique insights into both the historical texts they remediate and the more recent era of their remediation. The constitution and provenance of digitized archives are, to some extent at least, knowable and describable. Just as details of type, ink, or paper, or paratext such as printer's records, can help us establish the histories under which a printed book was created, details of format, interface, and even grant proposals can help us establish the histories of corpora created under conditions of mass digitization.
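    As a small illustration of the article's point that "dirty" OCR defeats exact keyword search (the title's "Q i-jtb the Raven" standing in for "Quoth the Raven"), the snippet below shows an exact match failing while a simple similarity measure still registers the resemblance between the OCR output and the phrase a scholar would search for.

        # Exact keyword search misses dirty OCR, but approximate matching still
        # registers the resemblance between the OCR text and the intended phrase.
        from difflib import SequenceMatcher

        ocr_line = 'Q i-jtb the Raven, "Nevermore."'
        query = "Quoth the Raven"

        print(query in ocr_line)  # False: the exact search finds nothing
        score = SequenceMatcher(None, query.lower(), ocr_line.lower()).ratio()
        print(f"similarity = {score:.2f}")  # roughly 0.5, versus zero exact hits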

    Phosphorus, food and 'messy' problems: A systemic inquiry into the management of a critical global resource

    This paper presents a process of systemic inquiry into the roles, relationships, and perceptions involved in the management of phosphorus resources in the context of global food security. Phosphorus, like water, energy, and nitrogen, is critical for food production. All modern food production and consumption systems depend on continual inputs of phosphate fertilizers derived from phosphate rock. Yet phosphate rock is a finite resource under the control of only a handful of countries - mainly China, Morocco, and the US. Production from current global phosphate reserves could peak within 30 years, within decades of peak oil. Given this situation, it is surprising that phosphorus is not considered a priority in the dominant discourses on global food security or global environmental change. Checkland's Soft Systems Methodology offers a framework to guide an inquiry or 'learning process' into the nature of the problem situation and system failure, incorporating the results of an analysis of stakeholder interviews, a substance flows analysis, and an institutional analysis. The soft systems inquiry reveals not only that there is no stakeholder consensus on the nature of the problem, but also that there are no international institutional arrangements, much less an international organisation, responsible for monitoring and facilitating the long-term sustainability of phosphorus resources for food production. Further, without such an actor and associated institutional arrangements, there is no 'feedback loop' that can correct the system. Given the critical nature of phosphorus to all modern economies, this is a concerning finding and warrants further analysis, deliberation, and the enabling of change.
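    The substance flows analysis mentioned above is, at its core, a mass balance; a minimal sketch with invented figures is given below to show the form such an accounting takes, tracking how much of the phosphorus entering a food system actually reaches food.

        # Sketch of a substance flow balance for phosphorus in a food system.
        # All figures are invented for illustration (units: kilotonnes P per year).
        inputs = {"phosphate fertilizer": 100.0, "imported feed and food": 20.0}
        losses = {"mining-to-field losses": 15.0, "erosion and runoff": 40.0,
                  "food chain waste": 25.0, "excreta not recycled": 20.0}

        total_in = sum(inputs.values())
        total_lost = sum(losses.values())
        to_food = total_in - total_lost
        print(f"inputs        : {total_in:.0f} kt P/yr")
        print(f"losses        : {total_lost:.0f} kt P/yr")
        print(f"reaching food : {to_food:.0f} kt P/yr ({to_food / total_in:.0%} of inputs)")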