141 research outputs found

    Behavior of externally bonded fiber reinforced polymer systems for strengthening concrete girders in shear

    Deficiencies in shear resistance are a primary concern in concrete members due to the sudden and unpredictable nature of shear failures. Shear deficiencies in concrete structures can arise from improper design, long-term deterioration, man-made damage, increases in loads, or over-strengthening in flexure. A number of shear strengthening techniques offer a cost-effective means for restoring or enhancing the shear capacity of a concrete member. The use of externally bonded fiber reinforced polymers (FRP) is one such technique that has gained recent recognition for its high strength-to-weight ratio and simplicity of application. The development of this technique relies on experimental testing to better understand the behavior and failure mechanisms. Previous experimental investigations and analytical models/design guidelines were studied to identify and understand the parameters influencing the shear strengthening effect of externally bonded FRP. The knowledge acquired from the literature was used to design and carry out a full-scale experimental investigation for further evaluating the effectiveness of externally bonded FRP for shear strengthening. Test specimens consisted of reinforced concrete (RC) and prestressed concrete (PC) girders. Parameters of interest included the effects of pre-existing damage (cracks), transverse steel (stirrup) reinforcement ratio, FRP strengthening scheme, and methods of FRP anchorage. The experimental results were compared with predictions from existing analytical models and design guidelines. Lastly, an alternative analytical approach was developed which takes into consideration multiple parameters shown to influence the FRP shear strengthening effectiveness but which have not been collectively incorporated in previous models --Abstract, page iii

    Electronic health records: high-quality electronic data for higher-quality clinical research

    In the decades prior to the introduction of electronic health records (EHRs), the best source of electronic information to support clinical research was claims data. The use of claims data in research has been criticised for capturing only demographics, diagnoses and procedures recorded for billing purposes that may not fully reflect the patient's condition. Many important details of the patient's clinical status are not recorded. EHRs can overcome many limitations of claims data in research, by capturing a more complete picture of the observations and actions of a clinician recorded when patients are seen. EHRs can provide important details about vital signs, diagnostic test results, social and family history, prescriptions and physical examination findings. As a result, EHRs present a new opportunity to use data collected through the routine operation of a clinical practice to generate and test hypotheses about the relationships among patients, diseases, practice styles, therapeutic modalities and clinical outcomes. This article describes the clinical research information infrastructure at four institutions: the University of Pennsylvania, Regenstrief Institute/Indiana University, Partners Healthcare System and the University of Virginia. We present models for applying EHR data successfully within the clinical research enterprise

    Combining clinical and genomics queries using i2b2 – Three methods

    We are fortunate to be living in an era of twin biomedical data surges: a burgeoning representation of human phenotypes in the medical records of our healthcare systems, and high-throughput sequencing making rapid technological advances. The difficulty of representing genomic data and its annotations has almost by itself led to the recognition of a biomedical “Big Data” challenge, and the complexity of healthcare data only compounds the problem, to the point that coherent representation of both systems on the same platform seems insuperably difficult. We investigated whether complex, integrative genomic and clinical queries can be supported in the Informatics for Integrating Biology and the Bedside (i2b2) translational software package. Three different data integration approaches were developed: the first is based on the Sequence Ontology, the second on the tranSMART engine, and the third on CouchDB. These novel methods for representing and querying complex genomic and clinical data on the i2b2 platform are available today for advancing precision medicine
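
    To make that kind of integrative query concrete, the sketch below joins a hypothetical clinical diagnosis table with a hypothetical variant-annotation table to find patients who carry a given variant and also have a given diagnosis. This is an illustration only, not the i2b2 API or any of the three published methods; every table, column, and code name is invented.

        # Sketch only: the idea of a combined clinical + genomic query.
        # Not the i2b2 API; all table, column, and code names are hypothetical.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE diagnoses (patient_id TEXT, icd_code TEXT);
            CREATE TABLE variants  (patient_id TEXT, gene TEXT, consequence TEXT);
            INSERT INTO diagnoses VALUES ('p1', 'E11'), ('p2', 'E11'), ('p3', 'I10');
            INSERT INTO variants  VALUES ('p1', 'TCF7L2', 'missense_variant'),
                                         ('p3', 'TCF7L2', 'missense_variant');
        """)

        # Patients with a type 2 diabetes code (E11) AND a missense variant in
        # TCF7L2 -- the phenotype-plus-genotype intersection such queries target.
        # 'missense_variant' is a Sequence Ontology term, echoing the first method.
        rows = conn.execute("""
            SELECT DISTINCT d.patient_id
            FROM diagnoses d
            JOIN variants v ON v.patient_id = d.patient_id
            WHERE d.icd_code = 'E11'
              AND v.gene = 'TCF7L2'
              AND v.consequence = 'missense_variant'
        """).fetchall()
        print(rows)  # [('p1',)]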

    iTools: A Framework for Classification, Categorization and Integration of Computational Biology Resources

    The advancement of the computational biology field hinges on progress in three fundamental directions – the development of new computational algorithms, the availability of informatics resource management infrastructures, and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources – data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community, and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existing computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources: the first is based on an ontology of computational biology resources, and the second is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open-source project, both in its source-code development and in its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long-term resource management. We demonstrate several applications of iTools as a framework for integrated bioinformatics. iTools and the complete details about its specifications, usage and interfaces are available at the iTools web page http://iTools.ccb.ucla.edu

    DNA Polymerase Epsilon Deficiency Causes IMAGe Syndrome with Variable Immunodeficiency.

    During genome replication, polymerase epsilon (Pol ε) acts as the major leading-strand DNA polymerase. Here we report the identification of biallelic mutations in POLE, encoding the Pol ε catalytic subunit POLE1, in 15 individuals from 12 families. Phenotypically, these individuals had clinical features closely resembling IMAGe syndrome (intrauterine growth restriction [IUGR], metaphyseal dysplasia, adrenal hypoplasia congenita, and genitourinary anomalies in males), a disorder previously associated with gain-of-function mutations in CDKN1C. POLE1-deficient individuals also exhibited distinctive facial features and variable immune dysfunction with evidence of lymphocyte deficiency. All subjects shared the same intronic variant (c.1686+32C>G) as part of a common haplotype, in combination with different loss-of-function variants in trans. The intronic variant alters splicing, and together the biallelic mutations lead to cellular deficiency of Pol ε and delayed S-phase progression. In summary, we establish POLE as a second gene in which mutations cause IMAGe syndrome. These findings add to a growing list of disorders due to mutations in DNA replication genes that manifest growth restriction alongside adrenal dysfunction and/or immunodeficiency, consolidating these as replisome phenotypes and highlighting a need for future studies to understand the tissue-specific developmental roles of the encoded proteins

    The National COVID Cohort Collaborative (N3C): Rationale, design, infrastructure, and deployment.

    OBJECTIVE: Coronavirus disease 2019 (COVID-19) poses societal challenges that require expeditious data and knowledge sharing. Though organizational clinical data are abundant, these are largely inaccessible to outside researchers. Statistical, machine learning, and causal analyses are most successful with large-scale data beyond what is available in any given organization. Here, we introduce the National COVID Cohort Collaborative (N3C), an open science community focused on analyzing patient-level data from many centers. MATERIALS AND METHODS: The Clinical and Translational Science Award Program and scientific community created N3C to overcome technical, regulatory, policy, and governance barriers to sharing and harmonizing individual-level clinical data. We developed solutions to extract, aggregate, and harmonize data across organizations and data models, and created a secure data enclave to enable efficient, transparent, and reproducible collaborative analytics. RESULTS: Organized in inclusive workstreams, we created legal agreements and governance for organizations and researchers; data extraction scripts to identify and ingest positive, negative, and possible COVID-19 cases; a data quality assurance and harmonization pipeline to create a single harmonized dataset; population of the secure data enclave with data, machine learning, and statistical analytics tools; dissemination mechanisms; and a synthetic data pilot to democratize data access. CONCLUSIONS: The N3C has demonstrated that a multisite collaborative learning health network can overcome barriers to rapidly build a scalable infrastructure incorporating multiorganizational clinical data for COVID-19 analytics. We expect this effort to save lives by enabling rapid collaboration among clinicians, researchers, and data scientists to identify treatments and specialized care and thereby reduce the immediate and long-term impacts of COVID-19
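
    As a loose illustration of what harmonizing records from different source systems involves, the sketch below maps site-specific lab codes onto one shared concept so results from different organizations can be pooled, and flags anything unmapped for data-quality review. The mappings, codes, and records are invented; this is not N3C's actual pipeline or vocabulary.

        # Sketch only: toy harmonization of lab results from sites that use
        # different local codes into one shared concept. All values are invented.

        # (site, local code) -> shared concept for a SARS-CoV-2 PCR test.
        LOCAL_TO_SHARED = {
            ("site_a", "LAB-COVID-PCR"): "sars_cov_2_pcr",
            ("site_b", "94500-6"): "sars_cov_2_pcr",
        }

        records = [
            {"site": "site_a", "code": "LAB-COVID-PCR", "result": "positive"},
            {"site": "site_b", "code": "94500-6", "result": "negative"},
            {"site": "site_b", "code": "2345-7", "result": "98"},  # unmapped
        ]

        harmonized, unmapped = [], []
        for rec in records:
            concept = LOCAL_TO_SHARED.get((rec["site"], rec["code"]))
            if concept is None:
                unmapped.append(rec)  # route to data-quality review
            else:
                harmonized.append({**rec, "concept": concept})

        print(len(harmonized), "harmonized;", len(unmapped), "flagged for review")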

    The James Webb Space Telescope Mission

    Twenty-six years ago a small committee report, building on earlier studies, expounded a compelling and poetic vision for the future of astronomy, calling for an infrared-optimized space telescope with an aperture of at least 4 m. With the support of their governments in the US, Europe, and Canada, 20,000 people realized that vision as the 6.5 m James Webb Space Telescope. A generation of astronomers will celebrate their accomplishments for the life of the mission, potentially as long as 20 years, and beyond. This report and the scientific discoveries that follow are extended thank-you notes to the 20,000 team members. The telescope is working perfectly, with much better image quality than expected. In this and accompanying papers, we give a brief history, describe the observatory, outline its objectives and current observing program, and discuss the inventions and people who made it possible. We cite detailed reports on the design and the measured performance on orbit. Comment: Accepted by PASP for the special issue on The James Webb Space Telescope Overview, 29 pages, 4 figures

    Non-Standard Errors

    In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that this type of uncertainty is underestimated by participants
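
    As a rough illustration of the distinction the abstract draws (not taken from the paper; all numbers are made up), the sketch below contrasts a conventional standard error, which captures sampling uncertainty within one analysis, with a non-standard error, understood here as the dispersion of point estimates that different teams report for the same hypothesis on the same data.

        # Sketch only: standard error vs. "non-standard error". All figures invented.
        import statistics

        # One team's analysis: a sample drawn in the data-generating process (DGP).
        sample = [2.1, 1.8, 2.4, 2.0, 1.9, 2.3, 2.2, 1.7]
        standard_error = statistics.stdev(sample) / len(sample) ** 0.5

        # Evidence-generating process (EGP): many teams test the same hypothesis on
        # the same data but make different analytical choices, so estimates differ.
        team_estimates = [2.0, 1.6, 2.5, 2.1, 1.4, 2.8, 1.9, 2.2]
        non_standard_error = statistics.stdev(team_estimates)

        print(f"standard error (one analysis):     {standard_error:.3f}")
        print(f"non-standard error (across teams): {non_standard_error:.3f}")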