
    An Assessment to Benchmark the Seismic Performance of a Code-Conforming Reinforced-Concrete Moment-Frame Building

    This report describes a state-of-the-art performance-based earthquake engineering methodology that is used to assess the seismic performance of a four-story reinforced concrete (RC) office building that is generally representative of low-rise office buildings constructed in highly seismic regions of California. This “benchmark” building is considered to be located at a site in the Los Angeles basin, and its seismic lateral system is a ductile RC special moment-resisting frame designed according to modern building codes and standards. The building’s performance is quantified in terms of structural behavior up to collapse, structural and nonstructural damage and associated repair costs, and the risk of fatalities and their associated economic costs. To account for different building configurations that may be designed in practice to meet requirements of building size and use, eight structural design alternatives are used in the performance assessments. Our performance assessments account for important sources of uncertainty in the ground motion hazard, the structural response, structural and nonstructural damage, repair costs, and life-safety risk. The ground motion hazard characterization employs a site-specific probabilistic seismic hazard analysis and the evaluation of controlling seismic sources (through disaggregation) at seven ground motion levels (encompassing return periods ranging from 7 to 2475 years). Innovative procedures for ground motion selection and scaling are used to develop acceleration time history suites corresponding to each of the seven ground motion levels. Structural modeling utilizes both “fiber” models and “plastic hinge” models. Structural modeling uncertainties are investigated through comparison of these two modeling approaches, and through variations in structural component modeling parameters (stiffness, deformation capacity, degradation, etc.). Structural and nonstructural damage (fragility) models are based on a combination of test data, observations from post-earthquake reconnaissance, and expert opinion. Structural damage and repair costs are modeled for the RC beams, columns, and slab-column connections. Damage and associated repair costs are considered for some nonstructural building components, including wallboard partitions, interior paint, exterior glazing, ceilings, sprinkler systems, and elevators. The risk of casualties and the associated economic costs are evaluated based on the risk of structural collapse, combined with recent models on earthquake fatalities in collapsed buildings and accepted economic modeling guidelines for the value of human life in loss and cost-benefit studies. The principal results of this work pertain to the building collapse risk, damage and repair cost, and life-safety risk; these are discussed in turn below. When accounting for uncertainties in structural modeling and record-to-record variability (i.e., conditional on a specified ground shaking intensity), the structural collapse probabilities of the various designs range from 2% to 7% for earthquake ground motions that have a 2% probability of exceedance in 50 years (2475-year return period). When integrated with the ground motion hazard for the southern California site, the collapse probabilities result in mean annual frequencies of collapse in the range of [0.4 to 1.4]×10⁻⁴ for the various benchmark building designs.
In the development of these results, we made the following observations that are expected to be broadly applicable: (1) The ground motions selected for performance simulations must consider spectral shape (e.g., through use of the epsilon parameter) and should appropriately account for correlations between motions in both horizontal directions; (2) Lower-bound component models, which are commonly used in performance-based assessment procedures such as FEMA 356, can significantly bias collapse analysis results; it is more appropriate to use median component behavior, including all aspects of the component model (strength, stiffness, deformation capacity, cyclic deterioration, etc.); (3) Structural modeling uncertainties related to component deformation capacity and post-peak degrading stiffness can impact the variability of calculated collapse probabilities and mean annual rates to a similar degree as record-to-record variability of ground motions. Therefore, including the effects of such structural modeling uncertainties significantly increases the mean annual collapse rates. We found this increase to be roughly four to eight times relative to rates evaluated for the median structural model; (4) Nonlinear response analyses revealed at least six distinct collapse mechanisms, the most common of which was a story mechanism in the third story (differing from the multi-story mechanism predicted by nonlinear static pushover analysis); (5) Soil-foundation-structure interaction effects did not significantly affect the structural response, which was expected given the relatively flexible superstructure and stiff soils. The potential for financial loss is considerable. Overall, the calculated expected annual losses (EAL) are in the range of $52,000 to $97,000 for the various code-conforming benchmark building designs, or roughly 1% of the replacement cost of the building ($8.8M).
These losses are dominated by the expected repair costs of the wallboard partitions (including interior paint) and by the structural members. Loss estimates are sensitive to details of the structural models, especially the initial stiffness of the structural elements. Losses are also found to be sensitive to structural modeling choices, such as ignoring the tensile strength of the concrete (40% change in EAL) or the contribution of the gravity frames to overall building stiffness and strength (15% change in EAL). Although there are a number of factors identified in the literature as likely to affect the risk of human injury during seismic events, the casualty modeling in this study focuses on those factors (building collapse, building occupancy, and spatial location of building occupants) that directly inform the building design process. The expected annual number of fatalities is calculated for the benchmark building, assuming that an earthquake can occur at any time of any day with equal probability and using fatality probabilities conditioned on structural collapse and based on empirical data. The expected annual number of fatalities for the code-conforming buildings ranges between 0.05×10⁻² and 0.21×10⁻², and is equal to 2.30×10⁻² for a non-code-conforming design. The expected loss of life during a seismic event is perhaps the decision variable that owners and policy makers will be most interested in mitigating. The fatality estimation carried out for the benchmark building provides a methodology for comparing this important value for various building designs, and enables informed decision making during the design process. The expected annual loss associated with fatalities caused by building earthquake damage is estimated by converting the expected annual number of fatalities into economic terms. Assuming the value of a human life is $3.5M, the fatality rate translates to an EAL due to fatalities of $3,500 to $5,600 for the code-conforming designs, and $79,800 for the non-code-conforming design. Compared to the EAL due to repair costs of the code-conforming designs, which are on the order of $66,000, the monetary value associated with life loss is small, suggesting that the governing factor in this respect will be the maximum permissible life-safety risk deemed by the public (or its representative government) to be appropriate for buildings. Although the focus of this report is on one specific building, it can be used as a reference for other types of structures. This report is organized in such a way that the individual core chapters (4, 5, and 6) can be read independently. Chapter 1 provides background on the performance-based earthquake engineering (PBEE) approach. Chapter 2 presents the implementation of the PBEE methodology of the PEER framework, as applied to the benchmark building. Chapter 3 sets the stage for the choices of location and basic structural design. The subsequent core chapters focus on the hazard analysis (Chapter 4), the structural analysis (Chapter 5), and the damage and loss analyses (Chapter 6). Although the report is self-contained, readers interested in additional details can find them in the appendices.
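
The report's headline risk metrics follow from two steps that the abstract states only in words: the collapse fragility is integrated against the site hazard curve to obtain a mean annual frequency of collapse, and the expected annual number of fatalities is multiplied by an assumed value of a human life to obtain a fatality EAL. The Python sketch below illustrates both calculations with invented hazard and fragility numbers; the lognormal fragility form, the seven-point hazard discretization, and all numerical values are illustrative assumptions, not data from the report.

import numpy as np
from scipy.stats import norm

# Illustrative seven-point hazard curve: spectral-acceleration levels (g) and
# mean annual frequencies of exceedance (values invented for this sketch).
sa_levels = np.array([0.1, 0.2, 0.35, 0.5, 0.7, 0.9, 1.1])
mafe = np.array([1/7, 1/25, 1/75, 1/224, 1/475, 1/975, 1/2475])

# Assumed lognormal collapse fragility: median collapse capacity and dispersion.
median_sa_collapse, beta = 2.0, 0.6
p_collapse = norm.cdf(np.log(sa_levels / median_sa_collapse) / beta)

# Mean annual frequency of collapse: integrate P(collapse | Sa) against the
# hazard curve, approximated here by first-order differences.
d_lambda = -np.diff(np.append(mafe, 0.0))
lambda_collapse = np.sum(p_collapse * d_lambda)

# Fatality EAL: expected annual fatalities times an assumed value of a human
# life, mirroring the $3.5M-per-life conversion used in the report.
expected_annual_fatalities = 0.10e-2   # illustrative, within the reported range
eal_fatalities = expected_annual_fatalities * 3.5e6

print(f"mean annual collapse frequency ~ {lambda_collapse:.2e}")
print(f"fatality EAL ~ ${eal_fatalities:,.0f}")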

    Mapping gene associations in human mitochondria using clinical disease phenotypes

    Nuclear genes encode most mitochondrial proteins, and their mutations cause diverse and debilitating clinical disorders. To date, 1,200 of these mitochondrial genes have been recorded, while no standardized catalog of the associated clinical phenotypes exists. Such a catalog would be useful to develop methods to analyze human phenotypic data, to determine genotype-phenotype relations among many genes and diseases, and to support the clinical diagnosis of mitochondrial disorders. Here we establish a clinical phenotype catalog of 174 mitochondrial disease genes and study associations of diseases and genes. Phenotypic features such as clinical signs and symptoms were manually annotated from full-text medical articles and classified based on the hierarchical MeSH ontology. This classification of phenotypic features of each gene allowed for the comparison of diseases between different genes. In turn, we were then able to measure the phenotypic associations of disease genes, for which we calculated a quantitative value based on their shared phenotypic features. The results showed that genes sharing more similar phenotypes have a stronger tendency for functional interactions, demonstrating the usefulness of phenotype similarity values in disease gene network analysis. We then constructed a functional network of mitochondrial genes and discovered a higher connectivity for non-disease than for disease genes, and a tendency of disease genes to interact with each other. Utilizing these differences, we propose 168 candidate genes that resemble the characteristic interaction patterns of mitochondrial disease genes. Through their network associations, the candidates are further prioritized for the study of specific disorders such as optic neuropathies and Parkinson disease. Most mitochondrial disease phenotypes involve several clinical categories including neurologic, metabolic, and gastrointestinal disorders, which might indicate the effects of gene defects within the mitochondrial system. The accompanying knowledgebase (http://www.mitophenome.org/) supports the study of clinical diseases and associated genes.
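
    A minimal sketch of the phenotype-similarity scoring described above, assuming each gene is represented by its set of annotated MeSH phenotype terms and using a Jaccard index as the shared-feature measure; the gene names, term sets, and choice of similarity metric are illustrative assumptions, not values or methods taken from the study.

# Hypothetical MeSH-term annotations per gene (illustrative only).
gene_phenotypes = {
    "GENE_A": {"Ataxia", "Optic Atrophy", "Lactic Acidosis"},
    "GENE_B": {"Ataxia", "Seizures", "Lactic Acidosis"},
    "GENE_C": {"Cardiomyopathy", "Muscle Weakness"},
}

def phenotype_similarity(terms_a, terms_b):
    """Jaccard index over shared phenotype terms (one possible similarity measure)."""
    if not terms_a or not terms_b:
        return 0.0
    return len(terms_a & terms_b) / len(terms_a | terms_b)

# Pairwise similarity values of the kind used to link disease genes in a network.
genes = sorted(gene_phenotypes)
for i, g1 in enumerate(genes):
    for g2 in genes[i + 1:]:
        s = phenotype_similarity(gene_phenotypes[g1], gene_phenotypes[g2])
        print(f"{g1} vs {g2}: similarity = {s:.2f}")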

    Mortality in Pharmacologically Treated Older Adults with Diabetes: The Cardiovascular Health Study, 1989–2001

    BACKGROUND: Diabetes mellitus (DM) confers an increased risk of mortality in young and middle-aged individuals and in women. It is uncertain, however, whether excess DM mortality continues beyond age 75 years, is related to type of hypoglycemic therapy, and whether women continue to be disproportionately affected by DM into older age. METHODS AND FINDINGS: From the Cardiovascular Health Study, a prospective study of 5,888 adults, we examined 5,372 participants aged 65 y or above without DM (91.2%), 322 with DM treated with oral hypoglycemic agents (OHGAs) (5.5%), and 194 with DM treated with insulin (3.3%). Participants were followed (1989–2001) for total, cardiovascular disease (CVD), coronary heart disease (CHD), and non-CVD/noncancer mortality. Compared with non-DM participants, those treated with OHGAs or insulin had adjusted hazard ratios (HRs) for total mortality of 1.33 (95% confidence interval [CI], 1.10 to 1.62) and 2.04 (95% CI, 1.62 to 2.57); CVD mortality, 1.99 (95% CI, 1.54 to 2.57) and 2.16 (95% CI, 1.54 to 3.03); CHD mortality, 2.47 (95% CI, 1.89 to 3.24) and 2.75 (95% CI, 1.95 to 3.87); and infectious and renal mortality, 1.35 (95% CI, 0.70 to 2.59) and 6.55 (95% CI, 4.18 to 10.26), respectively. The interaction of age (65–74 y versus ≥75 y) with DM was not significant. Women treated with OHGAs had a similar HR for total mortality to men, but a higher HR when treated with insulin. CONCLUSIONS: DM mortality risk remains high among older adults in the current era of medical care. Mortality risk and type of mortality differ between OHGA and insulin treatment. Women treated with insulin therapy have an especially high mortality risk. Given the high absolute CVD mortality in older people, those with DM warrant aggressive CVD risk factor reduction
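
    Adjusted hazard ratios of the kind reported above are typically estimated with a Cox proportional hazards model. The sketch below shows that generic workflow on a small synthetic cohort using the lifelines package; the simulated data, variable names, and covariate set are assumptions for illustration and are not the study's actual data or analysis code.

# Generic Cox proportional-hazards sketch on synthetic data (lifelines package);
# not the Cardiovascular Health Study data or its exact model specification.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
dm_ohga = rng.binomial(1, 0.06, n)                      # DM treated with OHGAs
dm_insulin = (1 - dm_ohga) * rng.binomial(1, 0.035, n)  # DM treated with insulin
age = rng.uniform(65, 90, n)
female = rng.binomial(1, 0.57, n)

# Exponential survival times whose hazard increases with DM treatment and age.
log_hazard = 0.3 * dm_ohga + 0.7 * dm_insulin + 0.05 * (age - 65)
time = rng.exponential(scale=12 * np.exp(-log_hazard))
event = (time < 12).astype(int)            # administrative censoring at 12 years
time = np.minimum(time, 12)

df = pd.DataFrame({"time": time, "event": event, "dm_ohga": dm_ohga,
                   "dm_insulin": dm_insulin, "age": age, "female": female})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()   # the exp(coef) column gives adjusted hazard ratios with 95% CIs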

    Improving the sensitivity to gravitational-wave sources by modifying the input-output optics of advanced interferometers

    We study frequency-dependent (FD) input-output schemes for signal-recycling interferometers, the baseline design of Advanced LIGO and the current configuration of GEO 600. Complementary to a recent proposal by Harms et al. to use FD input squeezing and ordinary homodyne detection, we explore a scheme which uses ordinary squeezed vacuum, but FD readout. Both schemes, which are sub-optimal among all possible input-output schemes, provide a global noise suppression by the power squeeze factor, while being realizable by using detuned Fabry-Perot cavities as input/output filters. At high frequencies, the two schemes are shown to be equivalent, while at low frequencies our scheme gives better performance than that of Harms et al., and is nearly fully optimal. We then study the sensitivity improvement achievable by these schemes in the Advanced LIGO era (with 30-m filter cavities and current estimates of filter-mirror losses and thermal noise), for neutron star binary inspirals, and for narrowband GW sources such as low-mass X-ray binaries and known radio pulsars. Optical losses are shown to be a major obstacle for the actual implementation of these techniques in Advanced LIGO. On time scales of third-generation interferometers, like EURO/LIGO-III (~2012), with kilometer-scale filter cavities, a signal-recycling interferometer with the FD readout scheme explored in this paper can have performance comparable to that of existing proposals. [abridged]
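
    As a schematic summary of the noise suppression described above (not an equation quoted from the paper): in the lossless idealization, an input squeeze parameter r reduces the quantum-noise-limited spectral density of either frequency-dependent scheme by the power squeeze factor,

\[
  S_h^{\mathrm{FD}}(\Omega) \;\approx\; e^{-2r}\, S_h^{\mathrm{vac}}(\Omega),
\]

    where S_h^{vac} is the corresponding spectral density with ordinary vacuum input; the optical losses noted in the abstract degrade this factor in a real instrument.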

    Upper limits on the strength of periodic gravitational waves from PSR J1939+2134

    The first science run of the LIGO and GEO gravitational wave detectors presented the opportunity to test methods of searching for gravitational waves from known pulsars. Here we present new direct upper limits on the strength of waves from the pulsar PSR J1939+2134 using two independent analysis methods, one in the frequency domain using frequentist statistics and one in the time domain using Bayesian inference. Both methods show that the strain amplitude at Earth from this pulsar is less than a few times 10⁻²².
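
    For context, both analysis methods assume the standard two-polarization signal model for a continuous wave from a triaxial, steadily rotating neutron star (a textbook expression for such searches, not one quoted from this paper):

\[
  h(t) = F_{+}(t;\psi)\, h_0\, \frac{1+\cos^2\iota}{2}\, \cos\Phi(t)
       + F_{\times}(t;\psi)\, h_0\, \cos\iota\, \sin\Phi(t),
\]

    where h_0 is the strain amplitude being bounded, \iota the inclination of the spin axis to the line of sight, \psi the polarization angle, F_{+,\times} the detector antenna-pattern functions, and \Phi(t) the signal phase (at twice the rotation frequency for this emission mechanism).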

    Pan-Cancer Analysis of lncRNA Regulation Supports Their Targeting of Cancer Genes in Each Tumor Context

    Long noncoding RNAs (lncRNAs) are commonly dysregulated in tumors, but only a handful are known to play pathophysiological roles in cancer. We inferred lncRNAs that dysregulate cancer pathways, oncogenes, and tumor suppressors (cancer genes) by modeling their effects on the activity of transcription factors, RNA-binding proteins, and microRNAs in 5,185 TCGA tumors and 1,019 ENCODE assays. Our predictions included hundreds of candidate onco- and tumor-suppressor lncRNAs (cancer lncRNAs) whose somatic alterations account for the dysregulation of dozens of cancer genes and pathways in each of 14 tumor contexts. To demonstrate proof of concept, we showed that perturbations targeting OIP5-AS1 (an inferred tumor suppressor) and TUG1 and WT1-AS (inferred onco-lncRNAs) dysregulated cancer genes and altered proliferation of breast and gynecologic cancer cells. Our analysis indicates that, although most lncRNAs are dysregulated in a tumor-specific manner, some, including OIP5-AS1, TUG1, NEAT1, MEG3, and TSIX, synergistically dysregulate cancer pathways in multiple tumor contexts.

    Genomic, Pathway Network, and Immunologic Features Distinguishing Squamous Carcinomas

    This integrated, multiplatform PanCancer Atlas study co-mapped and identified distinguishing molecular features of squamous cell carcinomas (SCCs) from five sites associated with smoking.

    Pan-cancer Alterations of the MYC Oncogene and Its Proximal Network across the Cancer Genome Atlas

    Although the MYC oncogene has been implicated in cancer, a systematic assessment of alterations of MYC, related transcription factors, and co-regulatory proteins, forming the proximal MYC network (PMN), across human cancers is lacking. Using computational approaches, we define genomic and proteomic features associated with MYC and the PMN across the 33 cancers of The Cancer Genome Atlas. Pan-cancer, 28% of all samples had at least one of the MYC paralogs amplified. In contrast, the MYC antagonists MGA and MNT were the most frequently mutated or deleted members, proposing a role as tumor suppressors. MYC alterations were mutually exclusive with PIK3CA, PTEN, APC, or BRAF alterations, suggesting that MYC is a distinct oncogenic driver. Expression analysis revealed MYC-associated pathways in tumor subtypes, such as immune response and growth factor signaling; chromatin, translation, and DNA replication/repair were conserved pan-cancer. This analysis reveals insights into MYC biology and is a reference for biomarkers and therapeutics for cancers with alterations of MYC or the PMN.
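
    Mutual exclusivity between alteration calls, as claimed above for MYC versus PIK3CA, PTEN, APC, or BRAF, is commonly assessed with a pairwise co-occurrence test. The sketch below uses Fisher's exact test on a simulated binary alteration matrix; the sample counts and alteration frequencies are invented, and this is not the specific statistical procedure used by the authors.

# Pairwise mutual-exclusivity check between two genes' binary alteration calls
# using Fisher's exact test (illustrative only).
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(1)
n_samples = 1000
myc_altered = rng.binomial(1, 0.28, n_samples)   # ~28% with a MYC-paralog amplification
# Simulate a second gene whose alterations preferentially avoid MYC-altered samples.
other_altered = np.where(myc_altered == 1,
                         rng.binomial(1, 0.05, n_samples),
                         rng.binomial(1, 0.20, n_samples))

table = [[np.sum((myc_altered == 1) & (other_altered == 1)),
          np.sum((myc_altered == 1) & (other_altered == 0))],
         [np.sum((myc_altered == 0) & (other_altered == 1)),
          np.sum((myc_altered == 0) & (other_altered == 0))]]

odds_ratio, p_value = fisher_exact(table, alternative="less")  # OR < 1 suggests exclusivity
print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p_value:.3g}")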

    Spatial Organization and Molecular Correlation of Tumor-Infiltrating Lymphocytes Using Deep Learning on Pathology Images

    Beyond sample curation and basic pathologic characterization, the digitized H&E-stained images of TCGA samples remain underutilized. To highlight this resource, we present mappings of tumor-infiltrating lymphocytes (TILs) based on H&E images from 13 TCGA tumor types. These TIL maps are derived through computational staining using a convolutional neural network trained to classify patches of images. Affinity propagation revealed local spatial structure in TIL patterns and correlation with overall survival. TIL map structural patterns were grouped using standard histopathological parameters. These patterns are enriched in particular T cell subpopulations derived from molecular measures. TIL densities and spatial structure were differentially enriched among tumor types, immune subtypes, and tumor molecular subtypes, implying that spatial infiltrate state could reflect particular tumor cell aberration states. Obtaining spatial lymphocytic patterns linked to the rich genomic characterization of TCGA samples demonstrates one use for the TCGA image archives with insights into the tumor-immune microenvironment.
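
    A sketch of the clustering step described above, in which affinity propagation groups patch-level TIL calls to reveal local spatial structure, using scikit-learn on a synthetic patch grid; the grid, probabilities, and threshold are assumptions for illustration, not the authors' pipeline.

# Cluster TIL-positive image patches on their spatial coordinates with
# affinity propagation (scikit-learn); all data here are synthetic.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)

# Synthetic slide: a 40x40 grid of patches with a TIL probability per patch
# and two artificial "hot spots" of lymphocyte infiltration.
xs, ys = np.meshgrid(np.arange(40), np.arange(40))
coords = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
hotspots = np.array([[10.0, 10.0], [30.0, 25.0]])
dist = np.min(np.linalg.norm(coords[:, None, :] - hotspots[None, :, :], axis=2), axis=1)
til_prob = np.clip(np.exp(-dist / 5.0) + 0.05 * rng.normal(size=len(coords)), 0, 1)

# Keep only TIL-positive patches and group them by spatial proximity.
positive = coords[til_prob > 0.5]
ap = AffinityPropagation(damping=0.9, random_state=0).fit(positive)
print(f"{len(positive)} TIL-positive patches grouped into "
      f"{len(ap.cluster_centers_indices_)} spatial clusters")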

    State of the climate in 2018

    In 2018, the dominant greenhouse gases released into Earth’s atmosphere—carbon dioxide, methane, and nitrous oxide—continued their increase. The annual global average carbon dioxide concentration at Earth’s surface was 407.4 ± 0.1 ppm, the highest in the modern instrumental record and in ice core records dating back 800 000 years. Combined, greenhouse gases and several halogenated gases contribute just over 3 W m−2 to radiative forcing and represent a nearly 43% increase since 1990. Carbon dioxide is responsible for about 65% of this radiative forcing. With a weak La Niña in early 2018 transitioning to a weak El Niño by the year’s end, the global surface (land and ocean) temperature was the fourth highest on record, with only 2015 through 2017 being warmer. Several European countries reported record high annual temperatures. There were also more high, and fewer low, temperature extremes than in nearly all of the 68-year extremes record. Madagascar recorded a record daily temperature of 40.5°C in Morondava in March, while South Korea set its record high of 41.0°C in August in Hongcheon. Nawabshah, Pakistan, recorded its highest temperature of 50.2°C, which may be a new daily world record for April. Globally, the annual lower troposphere temperature was third to seventh highest, depending on the dataset analyzed. The lower stratospheric temperature was approximately fifth lowest. The 2018 Arctic land surface temperature was 1.2°C above the 1981–2010 average, tying for third highest in the 118-year record, following 2016 and 2017. June’s Arctic snow cover extent was almost half of what it was 35 years ago. Across Greenland, however, regional summer temperatures were generally below or near average. Additionally, a satellite survey of 47 glaciers in Greenland indicated a net increase in area for the first time since records began in 1999. Increasing permafrost temperatures were reported at most observation sites in the Arctic, with the overall increase of 0.1°–0.2°C between 2017 and 2018 being comparable to the highest rate of warming ever observed in the region. On 17 March, Arctic sea ice extent marked the second smallest annual maximum in the 38-year record, larger than only 2017. The minimum extent in 2018 was reached on 19 September and again on 23 September, tying 2008 and 2010 for the sixth lowest extent on record. The 23 September date tied 1997 as the latest sea ice minimum date on record. First-year ice now dominates the ice cover, comprising 77% of the March 2018 ice pack compared to 55% during the 1980s. Because thinner, younger ice is more vulnerable to melting out in summer, this shift in sea ice age has contributed to the decreasing trend in minimum ice extent. Regionally, Bering Sea ice extent was at record lows for almost the entire 2017/18 ice season. For the Antarctic continent as a whole, 2018 was warmer than average. On the highest points of the Antarctic Plateau, the automatic weather station Relay (74°S) broke or tied six monthly temperature records throughout the year, with August breaking its record by nearly 8°C. However, cool conditions in the western Bellingshausen Sea and Amundsen Sea sector contributed to a low melt season overall for 2017/18. High SSTs contributed to low summer sea ice extent in the Ross and Weddell Seas in 2018, underpinning the second lowest Antarctic summer minimum sea ice extent on record. 
Despite conducive conditions for its formation, the ozone hole at its maximum extent in September was near the 2000–18 mean, likely due to an ongoing slow decline in stratospheric chlorine monoxide concentration. Across the oceans, globally averaged SST decreased slightly since the record El Niño year of 2016 but was still far above the climatological mean. On average, SST is increasing at a rate of 0.10° ± 0.01°C decade−1 since 1950. The warming appeared largest in the tropical Indian Ocean and smallest in the North Pacific. The deeper ocean continues to warm year after year. For the seventh consecutive year, global annual mean sea level became the highest in the 26-year record, rising to 81 mm above the 1993 average. As anticipated in a warming climate, the hydrological cycle over the ocean is accelerating: dry regions are becoming drier and wet regions rainier. Closer to the equator, 95 named tropical storms were observed during 2018, well above the 1981–2010 average of 82. Eleven tropical cyclones reached Saffir–Simpson scale Category 5 intensity. North Atlantic Major Hurricane Michael’s landfall intensity of 140 kt was the fourth strongest for any continental U.S. hurricane landfall in the 168-year record. Michael caused more than 30 fatalities and $25 billion (U.S. dollars) in damages. In the western North Pacific, Super Typhoon Mangkhut led to 160 fatalities and $6 billion (U.S. dollars) in damages across the Philippines, Hong Kong, Macau, mainland China, Guam, and the Northern Mariana Islands. Tropical Storm Son-Tinh was responsible for 170 fatalities in Vietnam and Laos. Nearly all the islands of Micronesia experienced at least moderate impacts from various tropical cyclones. Across land, many areas around the globe received copious precipitation, notable at different time scales. Rodrigues and Réunion Island near southern Africa each reported their third wettest year on record. In Hawaii, 1262 mm of precipitation at Waipā Gardens (Kauai) on 14–15 April set a new U.S. record for 24-h precipitation. In Brazil, the city of Belo Horizonte received nearly 75 mm of rain in just 20 minutes, nearly half its monthly average. Globally, fire activity during 2018 was the lowest since the start of the record in 1997, with a combined burned area of about 500 million hectares. This reinforced the long-term downward trend in fire emissions driven by changes in land use in frequently burning savannas. However, wildfires burned 3.5 million hectares across the United States, well above the 2000–10 average of 2.7 million hectares. Combined, U.S. wildfire damages for the 2017 and 2018 wildfire seasons exceeded $40 billion (U.S. dollars).