
    An Assessment to Benchmark the Seismic Performance of a Code-Conforming Reinforced-Concrete Moment-Frame Building

    This report describes a state-of-the-art performance-based earthquake engineering methodology that is used to assess the seismic performance of a four-story reinforced concrete (RC) office building that is generally representative of low-rise office buildings constructed in highly seismic regions of California. This “benchmark” building is considered to be located at a site in the Los Angeles basin, and its seismic lateral system is a ductile RC special moment-resisting frame designed according to modern building codes and standards. The building’s performance is quantified in terms of structural behavior up to collapse, structural and nonstructural damage and associated repair costs, and the risk of fatalities and their associated economic costs. To account for different building configurations that may be designed in practice to meet requirements of building size and use, eight structural design alternatives are used in the performance assessments. Our performance assessments account for important sources of uncertainty in the ground motion hazard, the structural response, structural and nonstructural damage, repair costs, and life-safety risk. The ground motion hazard characterization employs a site-specific probabilistic seismic hazard analysis and the evaluation of controlling seismic sources (through disaggregation) at seven ground motion levels (encompassing return periods ranging from 7 to 2475 years). Innovative procedures for ground motion selection and scaling are used to develop acceleration time history suites corresponding to each of the seven ground motion levels. Structural modeling utilizes both “fiber” models and “plastic hinge” models. Structural modeling uncertainties are investigated through comparison of these two modeling approaches, and through variations in structural component modeling parameters (stiffness, deformation capacity, degradation, etc.).
Structural and nonstructural damage (fragility) models are based on a combination of test data, observations from post-earthquake reconnaissance, and expert opinion. Structural damage and repair costs are modeled for the RC beams, columns, and slab-column connections. Damage and associated repair costs are considered for some nonstructural building components, including wallboard partitions, interior paint, exterior glazing, ceilings, sprinkler systems, and elevators. The risk of casualties and the associated economic costs are evaluated based on the risk of structural collapse, combined with recent models on earthquake fatalities in collapsed buildings and accepted economic modeling guidelines for the value of human life in loss and cost-benefit studies. The principal results of this work pertain to the building collapse risk, damage and repair cost, and life-safety risk. These are discussed successively as follows. When accounting for uncertainties in structural modeling and record-to-record variability (i.e., conditional on a specified ground shaking intensity), the structural collapse probabilities of the various designs range from 2% to 7% for earthquake ground motions that have a 2% probability of exceedance in 50 years (2475-year return period). When integrated with the ground motion hazard for the southern California site, the collapse probabilities result in mean annual frequencies of collapse in the range of [0.4 to 1.4]×10^-4 for the various benchmark building designs.
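The hazard-fragility integration behind mean annual collapse frequencies of this kind can be sketched as follows. This is a minimal illustration, not the study's own computation: it assumes a power-law fit to the hazard curve and a lognormal collapse fragility, and every parameter value below is hypothetical.

```python
import numpy as np
from math import erf, sqrt, log

def lognormal_cdf(x, median, beta):
    # P(X <= x) for a lognormal variable with the given median and log-std beta
    return 0.5 * (1.0 + erf(log(x / median) / (beta * sqrt(2.0))))

# Hypothetical site hazard curve: annual frequency of exceeding Sa = x (in g)
k0, k = 4.0e-4, 2.5
im = np.linspace(0.05, 5.0, 500)          # intensity-measure grid, Sa in g
haz = k0 * im ** (-k)                     # annual exceedance frequency

# Hypothetical collapse fragility: median capacity 2.0 g, dispersion 0.6
frag = np.array([lognormal_cdf(x, 2.0, 0.6) for x in im])

# lambda_collapse = integral of P(collapse | im) * |d lambda(im)|,
# discretized with midpoint fragility values
dlam = -np.diff(haz)                      # positive drops in exceedance frequency
frag_mid = 0.5 * (frag[:-1] + frag[1:])
lam_collapse = float(np.sum(frag_mid * dlam))
print(f"mean annual frequency of collapse ~ {lam_collapse:.2e}")
```

With different (real) hazard and fragility inputs, the same discretized integral yields the per-design collapse frequencies quoted above.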
In the development of these results, we made the following observations that are expected to be broadly applicable: (1) The ground motions selected for performance simulations must consider spectral shape (e.g., through use of the epsilon parameter) and should appropriately account for correlations between motions in both horizontal directions; (2) Lower-bound component models, which are commonly used in performance-based assessment procedures such as FEMA 356, can significantly bias collapse analysis results; it is more appropriate to use median component behavior, including all aspects of the component model (strength, stiffness, deformation capacity, cyclic deterioration, etc.); (3) Structural modeling uncertainties related to component deformation capacity and post-peak degrading stiffness can impact the variability of calculated collapse probabilities and mean annual rates to a similar degree as record-to-record variability of ground motions. Therefore, including the effects of such structural modeling uncertainties significantly increases the mean annual collapse rates. We found this increase to be roughly four to eight times relative to rates evaluated for the median structural model; (4) Nonlinear response analyses revealed at least six distinct collapse mechanisms, the most common of which was a story mechanism in the third story (differing from the multi-story mechanism predicted by nonlinear static pushover analysis); (5) Soil-foundation-structure interaction effects did not significantly affect the structural response, which was expected given the relatively flexible superstructure and stiff soils. The potential for financial loss is considerable. 
Overall, the calculated expected annual losses (EAL) are in the range of $52,000 to $97,000 for the various code-conforming benchmark building designs, or roughly 1% of the replacement cost of the building ($8.8M). These losses are dominated by the expected repair costs of the wallboard partitions (including interior paint) and by the structural members.
Loss estimates are sensitive to details of the structural models, especially the initial stiffness of the structural elements. Losses are also found to be sensitive to structural modeling choices, such as ignoring the tensile strength of the concrete (40% change in EAL) or the contribution of the gravity frames to overall building stiffness and strength (15% change in EAL). Although there are a number of factors identified in the literature as likely to affect the risk of human injury during seismic events, the casualty modeling in this study focuses on those factors (building collapse, building occupancy, and spatial location of building occupants) that directly inform the building design process. The expected annual number of fatalities is calculated for the benchmark building, assuming that an earthquake can occur at any time of any day with equal probability and using fatality probabilities conditioned on structural collapse and based on empirical data. The expected annual number of fatalities for the code-conforming buildings ranges between 0.05×10^-2 and 0.21×10^-2, and is equal to 2.30×10^-2 for a non-code-conforming design. The expected loss of life during a seismic event is perhaps the decision variable that owners and policy makers will be most interested in mitigating. The fatality estimation carried out for the benchmark building provides a methodology for comparing this important value for various building designs, and enables informed decision making during the design process. The expected annual loss associated with fatalities caused by building earthquake damage is estimated by converting the expected annual number of fatalities into economic terms.
Assuming the value of a human life is $3.5M, the fatality rate translates to an EAL due to fatalities of $3,500 to $5,600 for the code-conforming designs, and $79,800 for the non-code-conforming design. Compared to the EAL due to repair costs of the code-conforming designs, which are on the order of $66,000, the monetary value associated with life loss is small, suggesting that the governing factor in this respect will be the maximum permissible life-safety risk deemed by the public (or its representative government) to be appropriate for buildings. Although the focus of this report is on one specific building, it can be used as a reference for other types of structures. This report is organized in such a way that the individual core chapters (4, 5, and 6) can be read independently. Chapter 1 provides background on the performance-based earthquake engineering (PBEE) approach. Chapter 2 presents the implementation of the PBEE methodology of the PEER framework, as applied to the benchmark building. Chapter 3 sets the stage for the choices of location and basic structural design. The subsequent core chapters focus on the hazard analysis (Chapter 4), the structural analysis (Chapter 5), and the damage and loss analyses (Chapter 6). Although the report is self-contained, readers interested in additional details can find them in the appendices.

    Choice of autogenous conduit for lower extremity vein graft revisions

    Background: Surgical revision to repair stenosis is necessary in about 20% of lower extremity vein grafts (LEVGs). Alternate conduit, especially arm vein, is often necessary to achieve a policy of all-autogenous revisions. Although basilic vein harvest necessitates deep exposure in proximity to major nerves, it typically uses a large vein unaffected by prior intravenous lines and as such appears ideally suited for revisions in which a segmental interposition conduit is needed for revision within the graft or for extension to a more proximal inflow or distal outflow site. In this report, we describe our experience with the use of the basilic vein for LEVG revisions compared with other sources of autogenous conduit. Methods: All patients who underwent LEVG were placed in a duplex scan surveillance program. LEVGs that developed a focal area of increased velocity or uniformly low velocities throughout the graft with appropriate lesions confirmed with angiography were candidates for revision. All patients who underwent graft revision with basilic vein segments from January 1, 1990, to September 1, 2001, were identified, and their courses were reviewed for subsequent adverse events (further revision or occlusion) and complications of harvest. These revisions were compared with revisions in which cephalic and saphenous vein were used. Results: One hundred thirty basilic veins were used to revise 122 LEVGs. The mean follow-up period after revision was 28 ± 27 months. Ninety-three grafts (71%) remained patent with no further revision, and 37 grafts (29%) either needed additional revisions (22 grafts) or were occluded (15 grafts). Only four of these adverse events (11%) were directly attributed to the basilic vein segment. Ten of 43 grafts revised with cephalic vein (23%) were either revised or occluded, of which three were related to the cephalic vein segment (P = not significant, compared with basilic vein).
Twenty-four of 81 grafts revised with saphenous vein (30%) were either revised or occluded, of which 11 were attributed to the saphenous vein segment (P < .01, compared with basilic vein). Two patients (1.5%) had complications from basilic vein harvest (one hematoma, one arterial injury). No neurologic injuries resulted from basilic vein harvest. Conclusion: The basilic vein is a reliable and durable conduit when used to segmentally revise LEVGs. Stenoses rarely occur within interposed basilic vein segments, and excellent freedom from subsequent revision or occlusion is possible. We conclude the basilic vein can be safely harvested with minimal complications and is ideally suited for use as a short segment interposition graft for LEVG revision. (J Vasc Surg 2002;36:238-44.)

    Accretion vs colliding wind models for the gamma-ray binary LS I +61 303: an assessment

    LS I +61 303 is a puzzling Be/X-ray binary with variable gamma-ray emission at up to TeV energies. The nature of the compact object and the origin of the high-energy emission are unclear. One family of models invokes particle acceleration in shocks from the collision between the B-star wind and a relativistic pulsar wind, while another centers on a relativistic jet powered by accretion. Recent high-resolution radio observations showing a putative "cometary tail" pointing away from the Be star near periastron have been cited as support for the pulsar-wind model. We wish here to carry out a quantitative assessment of these competing models for this extraordinary source. We apply a 3D SPH code for dynamical simulations of both the pulsar-wind-interaction and accretion-jet models. The former yields a description of the shape of the wind-wind interaction surface. The latter provides an estimation of the accretion rate. The results allow critical evaluation of how the two distinct models confront the data in various wavebands under a range of conditions. When one accounts for the 3D dynamical wind interaction under realistic constraints for the relative strength of the B-star and pulsar winds, the resulting form of the interaction front does not match the putative "cometary tail" claimed from radio observations. On the other hand, dynamical simulations of the accretion-jet model indicate that the orbital phase variation of accretion power includes a secondary broad peak well away from periastron, thus providing a plausible way to explain the observed TeV gamma-ray emission toward apastron. We conclude that the colliding-wind model is not clearly established for LS I +61 303, while the accretion-jet model can reproduce many key characteristics of the observed TeV gamma-ray emission. Comment: Accepted for publication in A&A. The resolution of the figures is lower than in the journal paper to minimize file sizes. Seven pages, 5 figures.
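The accretion-rate estimate in such models is typically anchored in the classical Bondi-Hoyle scaling, which can be evaluated directly. The sketch below is a back-of-the-envelope illustration only; all input values are invented placeholders, not numbers from the simulations in this paper.

```python
import math

# Bondi-Hoyle accretion rate: Mdot = 4*pi * G^2 * M^2 * rho / v_rel^3 (cgs units)
G = 6.674e-8                  # gravitational constant, cm^3 g^-1 s^-2
M = 1.4 * 1.989e33            # compact-object mass, g (1.4 solar masses, assumed)
rho = 1e-15                   # local Be-star wind density, g/cm^3 (assumed)
v_rel = 1e8                   # relative wind speed, cm/s (1000 km/s, assumed)

mdot = 4.0 * math.pi * G**2 * M**2 * rho / v_rel**3
print(f"Mdot ~ {mdot:.2e} g/s")
```

Because of the v_rel^-3 dependence, the accretion power varies strongly along an eccentric orbit as the local wind density and relative velocity change, which is what produces the orbital-phase modulation discussed above.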

    Comparison of axillofemoral and aortofemoral bypass for aortoiliac occlusive disease

    Purpose: A comparison of aortofemoral bypass grafting (AOFBG) and axillofemoral bypass grafting (AXFBG) for occlusive disease performed by the same surgeons during a defined interval forms the basis for this report. Methods: Data regarding all patients who underwent AOFBG or AXFBG for lower-extremity ischemia caused by aortoiliac occlusive disease were prospectively entered into a computerized vascular registry. The decision to perform AOFBG rather than AXFBG was based on assessment of surgical risk and the surgeon's preference. This report describes results for surgical morbidity, mortality, patency, limb salvage, and patient survival for procedures performed from January 1988 through December 1993. Results: We performed 108 AXFBGs and 139 AOFBGs. AXFBG patients were older (mean age, 68 years compared with 58 years for AOFBG, p < 0.001), more often had heart disease (84% compared with 38%, p < 0.001), and more often underwent surgery for limb-salvage indications (80% compared with 42%, p < 0.001). No significant differences were found in operative mortality (AXFBG, 3.4%; AOFBG, <1.0%, p = NS), but major postoperative complications occurred more frequently after AOFBG (AXFBG, 9.2%; AOFBG, 19.4%; p < 0.05). Follow-up ranged from 1 to 83 months (mean, 27 months). Five-year life-table primary patency, limb salvage, and survival rates were 74%, 89%, and 45% for AXFBG and 80%, 79%, and 72% for AOFBG, respectively. Although the patient survival rate was statistically lower with AXFBG, primary patency and limb salvage rates did not differ when compared with AOFBG. Conclusion: When reserved for high-risk patients with limited life expectancy, the patency and limb salvage results of AXFBG are equivalent to those of AOFBG. (J VASC SURG 1996;23:263-71.)

    Pecos River Watershed Protection Plan Update

    Implementation of the Pecos River Watershed Protection Plan (WPP) began in November 2009 upon acceptance of the WPP by EPA. The primary goals of implementing the plan are to improve the health of the Pecos River watershed and instream water quality in the river and its tributaries. Considerable implementation progress has been made across the watershed; however, the need for continued implementation remains. This Pecos River WPP Update tracks the progress of implementation, saltcedar eradication efforts, education and outreach activities, and water quality monitoring in the watershed. It also documents any issues or adaptive management decisions on the measures within the WPP, along with any modifications to the goals and strategies identified in the WPP.

    B843: The Ecology, Economics, and Management of Potato Cropping Systems: A Report of the First Four Years of the Maine Potato Ecosystem Project

    The bulletin reports on the first four years of the Maine Potato Ecosystem Project, a long-term, multidisciplinary study of alternative crop management strategies. The study site is a 15-acre tract on the northern boundary of the University of Maine's Aroostook Farm in Presque Isle, Maine, divided into 96 main plots that are grouped into four blocks. Each block is an area where soil survey data show similar soil characteristics. Thus, given the same production inputs, the crop output is expected to be the same on each plot within a block. Within each block there are 24 plots to which the different treatments have been randomly assigned. A treatment is a particular combination of the following factors: (1) pest management—conventional, reduced input, or biological; (2) potato variety—Atlantic or Superior; and (3) soil management—amended or unamended.

    Peculiar Motions in the Region of the Ursa Major Supercluster of Galaxies

    We have investigated the peculiar motions of clusters of galaxies in the Ursa Major (UMa) supercluster and its neighborhood. Based on SDSS (Sloan Digital Sky Survey) data, we have compiled a sample of early-type galaxies and used their fundamental plane to determine the cluster distances and peculiar velocities. The samples of early-type galaxies in the central regions (within R_200) of 12 UMa clusters of galaxies, in three main subsystems of the supercluster -- the filamentary structures connecting the clusters -- and in nine clusters from the nearest UMa neighborhood have similar parameters. The fairly high overdensity (3 by the galaxy number and 15 by the cluster number) suggests that the supercluster as a whole is gravitationally bound, while no significant peculiar motions have been found: the peculiar velocities do not exceed the measurement errors by more than a factor of 1.5-2. The mean random peculiar velocities of clusters and the systematic deviations from the overall Hubble expansion in the supercluster are consistent with theoretical estimates. For the possible approach of the three UMa subsystems to be confirmed, the measurement accuracy must be increased by a factor of 2-3. Comment: 21 pages, 4 tables, 7 figures.
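The peculiar-velocity step described above amounts, to first order, to subtracting the Hubble-flow velocity at the fundamental-plane distance from a cluster's observed recession velocity. A minimal sketch, with every number invented for illustration:

```python
# First-order peculiar velocity: v_pec = cz_obs - H0 * d
H0 = 70.0                     # Hubble constant, km/s/Mpc (assumed value)
cz_obs = 17500.0              # observed recession velocity of a cluster, km/s
d_fp = 246.0                  # fundamental-plane distance estimate, Mpc

v_pec = cz_obs - H0 * d_fp    # km/s
print(f"v_pec = {v_pec:.0f} km/s")   # prints "v_pec = 280 km/s"
```

Since v_pec is a small difference of two large numbers, its uncertainty is dominated by the distance error (H0 times the distance uncertainty), which is why the abstract stresses that the measured peculiar velocities do not exceed the measurement errors.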

    Spatial correlations in attribute communities

    Community detection is an important tool for exploring and classifying the properties of large complex networks and should be of great help for spatial networks. Indeed, in addition to their location, nodes in spatial networks can have attributes such as the language for individuals, or any other socio-economical feature that we would like to identify in communities. We discuss in this paper a crucial aspect that was not considered in previous studies: the possible existence of correlations between space and attributes. Introducing a simple toy model in which both space and node attributes are considered, we discuss the effect of space-attribute correlations on the results of various community detection methods proposed for spatial networks in this paper and in previous studies. When space is irrelevant, our model is equivalent to the stochastic block model, which has been shown to display a detectability/non-detectability transition. In the regime where space dominates the link formation process, most methods can fail to recover the communities, an effect which is particularly marked when space-attribute correlations are strong. In this latter case, community detection methods which remove the spatial component of the network can miss a large part of the community structure and can lead to incorrect results. Comment: 10 pages and 7 figures.
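A toy model of this general kind (stochastic-block-model link probabilities between attribute communities, modulated by a spatial decay) can be sketched as follows. This is not the paper's model; the functional form and all parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
pos = rng.random((n, 2))                 # node positions in the unit square
attr = rng.integers(0, 2, n)             # binary node attribute (community label)

p_in, p_out = 0.08, 0.02                 # SBM-like within/between probabilities
d0 = 0.2                                 # spatial decay scale

adj = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(i + 1, n):
        p_attr = p_in if attr[i] == attr[j] else p_out
        dist = float(np.linalg.norm(pos[i] - pos[j]))
        p = p_attr * np.exp(-dist / d0)  # space modulates the attribute term
        if rng.random() < p:
            adj[i, j] = adj[j, i] = 1

print("edges:", adj.sum() // 2)
```

Making node positions depend on the attribute (e.g. drawing each community's positions from a different region) would introduce the space-attribute correlations whose effect on detectability the paper studies.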

    Competing biosecurity and risk rationalities in the Chittagong poultry commodity chain, Bangladesh

    This paper anthropologically explores how key actors in the Chittagong live bird trading network perceive biosecurity and risk in relation to avian influenza between production sites, market maker scenes and outlets. These actors pay attention to the past and the present, rather than the future, downplaying the need for strict risk management, as outbreaks have not been reported frequently for a number of years. This is analysed as ‘temporalities of risk perception regarding biosecurity’, through Black Swan theory, the idea that unexpected events with major effects are often inappropriately rationalized (Taleb, The Black Swan: The Impact of the Highly Improbable, Random House, New York, 2007). This incorporates a sociocultural perspective on risk, emphasizing the contexts in which risk is understood, lived, embodied and experienced. The actors' risk calculation is explained in terms of social consent, practical intelligibility and convergence of constraints and motivation. This pragmatic and practical orientation towards risk stands in contrast to how risk is calculated in the avian influenza preparedness paradigm. It is argued that disease risk on the ground has become a normalized part of everyday business, as implied in Black Swan theory. Risk that is calculated retrospectively is unlikely to encourage investment in biosecurity and, thereby, points to the danger of unpredictable outlier events.

    The Hunt for Exomoons with Kepler (HEK): I. Description of a New Observational Project

    Two decades ago, empirical evidence concerning the existence and frequency of planets around stars, other than our own, was absent. Since this time, the detection of extrasolar planets from Jupiter-sized to most recently Earth-sized worlds has blossomed and we are finally able to shed light on the plurality of Earth-like, habitable planets in the cosmos. Extrasolar moons may also be frequent habitable worlds but their detection or even systematic pursuit remains lacking in the current literature. Here, we present a description of the first systematic search for extrasolar moons as part of a new observational project called "The Hunt for Exomoons with Kepler" (HEK). The HEK project distills the entire list of known transiting planet candidates found by Kepler (2326 at the time of writing) down to the most promising candidates for hosting a moon. Selected targets are fitted using a multimodal nested sampling algorithm coupled with a planet-with-moon light curve modelling routine. By comparing the Bayesian evidence of a planet-only model to that of a planet-with-moon, the detection process is handled in a Bayesian framework. In the case of null detections, upper limits derived from posteriors marginalised over the entire prior volume will be provided to inform the frequency of large moons around viable planetary hosts, eta-moon. After discussing our methodologies for target selection, modelling, fitting and vetting, we provide two example analyses. Comment: 21 pages, 8 figures, 4 tables, accepted in Ap
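The model-comparison step described above reduces to a difference of log-evidences once the nested-sampling runs are done. The sketch below uses invented log-evidence values purely to show the arithmetic; the threshold quoted in the comment is a common rule of thumb, not a claim about HEK's adopted criterion.

```python
# Hypothetical log-evidences from nested-sampling fits of the two models
logZ_planet = -1204.3         # planet-only model (invented value)
logZ_moon = -1199.1           # planet-with-moon model (invented value)

# Log Bayes factor in favour of the moon model
log_bayes = logZ_moon - logZ_planet
print(f"ln K = {log_bayes:.1f}")   # prints "ln K = 5.2"
# On the Jeffreys-style scale, ln K of about 5 or more is often read as
# strong evidence for the more complex model.
```

Working with log-evidences directly avoids overflow, since the raw evidences Z are astronomically small numbers.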