10,766 research outputs found
An Assessment to Benchmark the Seismic Performance of a Code-Conforming Reinforced-Concrete Moment-Frame Building
This report describes a state-of-the-art performance-based earthquake engineering methodology
that is used to assess the seismic performance of a four-story reinforced concrete (RC) office
building that is generally representative of low-rise office buildings constructed in highly seismic
regions of California. This "benchmark" building is considered to be located at a site in the Los
Angeles basin, and it was designed with a ductile RC special moment-resisting frame as its
seismic lateral system, in accordance with modern building codes and standards. The
building's performance is quantified in terms of structural behavior up to collapse, structural and
nonstructural damage and associated repair costs, and the risk of fatalities and their associated
economic costs. To account for different building configurations that may be designed in
practice to meet requirements of building size and use, eight structural design alternatives are
used in the performance assessments.
Our performance assessments account for important sources of uncertainty in the ground
motion hazard, the structural response, structural and nonstructural damage, repair costs, and
life-safety risk. The ground motion hazard characterization employs a site-specific probabilistic
seismic hazard analysis and the evaluation of controlling seismic sources (through
disaggregation) at seven ground motion levels (encompassing return periods ranging from 7 to
2475 years). Innovative procedures for ground motion selection and scaling are used to develop
acceleration time history suites corresponding to each of the seven ground motion levels.
Structural modeling utilizes both "fiber" models and "plastic hinge" models. Structural
modeling uncertainties are investigated through comparison of these two modeling approaches,
and through variations in structural component modeling parameters (stiffness, deformation
capacity, degradation, etc.). Structural and nonstructural damage (fragility) models are based on
a combination of test data, observations from post-earthquake reconnaissance, and expert
opinion. Structural damage and repair costs are modeled for the RC beams, columns, and slab-column connections. Damage and associated repair costs are considered for some nonstructural
building components, including wallboard partitions, interior paint, exterior glazing, ceilings,
sprinkler systems, and elevators. The risk of casualties and the associated economic costs are
evaluated based on the risk of structural collapse, combined with recent models on earthquake
fatalities in collapsed buildings and accepted economic modeling guidelines for the value of
human life in loss and cost-benefit studies.
The principal results of this work pertain to the building collapse risk, damage and repair
costs, and life-safety risk. These are discussed in turn below.
When accounting for uncertainties in structural modeling and record-to-record variability
(i.e., conditional on a specified ground shaking intensity), the structural collapse probabilities of
the various designs range from 2% to 7% for earthquake ground motions that have a 2%
probability of exceedance in 50 years (2475 years return period). When integrated with the
ground motion hazard for the southern California site, the collapse probabilities result in mean
annual frequencies of collapse in the range of [0.4 to 1.4]x10^-4 for the various benchmark
building designs. In the development of these results, we made the following observations that
are expected to be broadly applicable:
(1) The ground motions selected for performance simulations must consider spectral
shape (e.g., through use of the epsilon parameter) and should appropriately account for
correlations between motions in both horizontal directions;
(2) Lower-bound component models, which are commonly used in performance-based
assessment procedures such as FEMA 356, can significantly bias collapse analysis results; it is
more appropriate to use median component behavior, including all aspects of the component
model (strength, stiffness, deformation capacity, cyclic deterioration, etc.);
(3) Structural modeling uncertainties related to component deformation capacity and
post-peak degrading stiffness can impact the variability of calculated collapse probabilities and
mean annual rates to a similar degree as record-to-record variability of ground motions.
Therefore, including the effects of such structural modeling uncertainties significantly increases
the mean annual collapse rates. We found this increase to be roughly four to eight times relative
to rates evaluated for the median structural model;
(4) Nonlinear response analyses revealed at least six distinct collapse mechanisms, the
most common of which was a story mechanism in the third story (differing from the multi-story
mechanism predicted by nonlinear static pushover analysis);
(5) Soil-foundation-structure interaction effects did not significantly affect the structural
response, which was expected given the relatively flexible superstructure and stiff soils.
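The hazard-integration step mentioned above, combining a collapse fragility curve with the site hazard curve to obtain a mean annual frequency of collapse, can be sketched as follows. The lognormal fragility parameters and the hazard-curve values here are illustrative placeholders, not data from the benchmark study:

```python
import math

# Illustrative collapse fragility: lognormal in spectral acceleration Sa (g).
# MEDIAN_SA and BETA are placeholder values, not the report's results.
MEDIAN_SA = 2.0   # g, Sa at which collapse probability is 50%
BETA = 0.6        # lognormal std. dev. (record-to-record + modeling uncertainty)

def p_collapse(sa):
    """Probability of collapse given Sa, from a lognormal CDF."""
    return 0.5 * (1.0 + math.erf(math.log(sa / MEDIAN_SA) / (BETA * math.sqrt(2.0))))

# Illustrative hazard curve: (Sa level in g, mean annual frequency of exceedance).
hazard = [(0.1, 1e-1), (0.3, 2e-2), (0.6, 4e-3),
          (1.0, 1e-3), (1.5, 3e-4), (2.5, 5e-5)]

def mean_annual_freq_collapse(hazard, fragility):
    """Numerically integrate P(collapse | Sa) against the hazard curve,
    lambda_c = -integral P(C|sa) d(lambda(sa)), by the trapezoidal rule."""
    lam = 0.0
    for (sa1, l1), (sa2, l2) in zip(hazard, hazard[1:]):
        p_mid = 0.5 * (fragility(sa1) + fragility(sa2))
        lam += p_mid * (l1 - l2)  # exceedance frequency drops as Sa grows
    return lam

lam_c = mean_annual_freq_collapse(hazard, p_collapse)
print(f"mean annual frequency of collapse ~ {lam_c:.2e}")
```

With a denser hazard discretization and fragilities fitted from nonlinear response analyses, the same loop yields the kind of collapse rates quoted above.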
The potential for financial loss is considerable. Overall, the calculated expected annual
losses (EAL) are in the range of 97,000 for the various code-conforming benchmark
building designs, or roughly 1% of the replacement cost of the building. At a value of
3.5M per life, the fatality rate translates to an EAL due to fatalities of 5,600 for the
code-conforming designs; compared with the total EAL of 66,000, the monetary value
associated with life loss is small, suggesting that the governing factor in this respect will
be the maximum permissible life-safety risk deemed by the public (or its representative
government) to be appropriate for buildings.
Although the focus of this report is on one specific building, it can be used as a reference
for other types of structures. This report is organized in such a way that the individual core
chapters (4, 5, and 6) can be read independently. Chapter 1 provides background on the
performance-based earthquake engineering (PBEE) approach. Chapter 2 presents the
implementation of the PBEE methodology of the PEER framework, as applied to the benchmark
building. Chapter 3 sets the stage for the choices of location and basic structural design. The subsequent core chapters focus on the hazard analysis (Chapter 4), the structural analysis
(Chapter 5), and the damage and loss analyses (Chapter 6). Although the report is self-contained,
readers interested in additional details can find them in the appendices.
PEER Testbed Study on a Laboratory Building: Exercising Seismic Performance Assessment
From 2002 to 2004 (years five and six of a ten-year funding cycle), the PEER Center organized
the majority of its research around six testbeds. Two buildings and two bridges, a campus, and a
transportation network were selected as case studies to "exercise" the PEER performance-based
earthquake engineering methodology. All projects involved interdisciplinary teams of
researchers, each producing data to be used by other colleagues in their research. The testbeds
demonstrated that it is possible to create the data necessary to populate the PEER performance-based framing equation, linking the hazard analysis, the structural analysis, the development of
damage measures, loss analysis, and decision variables.
This report describes one of the building testbeds: the UC Science Building. The project
was chosen to focus attention on the consequences of losses of laboratory contents, particularly
downtime. The UC Science testbed evaluated the earthquake hazard and the structural
performance of a well-designed recently built reinforced concrete laboratory building using the
OpenSees platform. Researchers conducted shake table tests on samples of critical laboratory
contents in order to develop fragility curves used to analyze the probability of losses based on
equipment failure. The UC Science testbed undertook an extreme case in performance
assessment: linking performance of contents to operational failure. The research shows the
interdependence of building structure, systems, and contents in performance assessment, and
highlights where further research is needed.
The Executive Summary provides a short description of the overall testbed research
program, while the main body of the report includes summary chapters from individual
researchers. More extensive research reports are cited in the reference section of each chapter.
In vino veritas: Theory and evidence on social drinking
It is a persistent phenomenon in many societies that a large proportion of alcohol consumption takes place in the company of other people. While the phenomenon of social or public drinking is well discussed in disciplines such as social psychology and anthropology, economists have paid little attention to the social environment of alcohol consumption. This paper tries to close this gap by explaining social drinking as a trust-facilitating device. Since alcohol consumption tends to make some people (unwillingly) tell the truth, social drinking can serve as a signaling device in social contact games. Empirical support is obtained from a cross-country analysis of trust and a newly developed index of moderate alcohol consumption. Keywords: social and public drinking, alcohol consumption, social contact games, trust, signaling
PowerPack: Energy Profiling and Analysis of High-Performance Systems and Applications
Energy efficiency is a major concern in modern high-performance computing system design. In the past few years, there has been mounting evidence that power usage limits system scale and computing density, and thus, ultimately, system performance. However, despite the impact of power and energy on the computer systems community, few studies provide insight into where and how power is consumed on high-performance systems and applications. In previous work, we designed a framework called PowerPack that was the first tool to isolate the power consumption of devices including disks, memory, NICs, and processors in a high-performance cluster and to correlate these measurements to application functions. In this work, we extend our framework to support systems with multicore, multiprocessor-based nodes, and then provide in-depth analyses of the energy consumption of parallel applications on clusters of these systems. These analyses include the impacts of chip multiprocessing on power and energy efficiency, and its interaction with application executions. In addition, we use PowerPack to study the power dynamics and energy efficiencies of dynamic voltage and frequency scaling (DVFS) techniques on clusters. Our experiments reveal conclusively how intelligent DVFS scheduling can enhance system energy efficiency while maintaining performance.
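The component-level accounting described in the abstract, integrating sampled power traces per device and attributing energy to application phases, can be sketched as below. The function names, sample data, and phase boundaries are hypothetical illustrations, not PowerPack's actual API:

```python
# Sketch of per-device energy accounting in the spirit of PowerPack:
# integrate (time, power) samples per device, then attribute energy to
# application phases via timestamps. All names and data are hypothetical.

def energy_joules(samples):
    """Trapezoidal integration of (time_s, power_w) samples -> energy in J."""
    total = 0.0
    for (t1, p1), (t2, p2) in zip(samples, samples[1:]):
        total += 0.5 * (p1 + p2) * (t2 - t1)
    return total

# Hypothetical power traces sampled during a run (seconds, watts).
traces = {
    "cpu":    [(0.0, 80.0), (1.0, 95.0), (2.0, 90.0), (3.0, 82.0)],
    "memory": [(0.0, 12.0), (1.0, 15.0), (2.0, 14.0), (3.0, 12.0)],
    "disk":   [(0.0,  6.0), (1.0,  6.5), (2.0,  6.2), (3.0,  6.0)],
}

# Application phases marked by timestamps (e.g., compute vs. I/O).
phases = {"compute": (0.0, 2.0), "io": (2.0, 3.0)}

def phase_energy(samples, start, end):
    """Energy for the sub-trace falling inside [start, end] (sample-aligned)."""
    window = [(t, p) for (t, p) in samples if start <= t <= end]
    return energy_joules(window)

for dev, trace in traces.items():
    for phase, (s, e) in phases.items():
        print(f"{dev:6s} {phase:7s} {phase_energy(trace, s, e):6.1f} J")
```

In a real profiler the traces come from external power meters at kHz-scale sampling, and phase boundaries from instrumentation hooks in the application.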
Dematerialization and capital maintenance: Two sides of the sustainability coin
The reductionist trend of equating sustainable development with CO2 control needs to be reversed, notwithstanding the significance of climate change. Conventional, 'compartmentalized' data systems impede an integrated vision and treatment of the paradigm. New accounts and balances focus on the interaction between environment and economy. 'Greened' national accounts measure economic sustainability in terms of (produced and natural) capital maintenance; balances of material flows assess ecological sustainability as the dematerialization of production and consumption. Both concepts aim to preserve environmental assets. They differ, however, with regard to the scope, strength, and evaluation of sustainability. First results for Germany indicate weak sustainability of the economy, owing to an increasing capital base. Strong sustainability is not in sight, though, since material throughput has not been reduced sufficiently. An 'Alliance for Sustainable Development' is proposed to implement and sustain the paradigm. Keywords: dematerialization, capital maintenance, sustainability, environmental accounting, eco-tax, alliance for sustainable development
INFORMAL SECTOR AND SOCIAL POLICY Compendium of Personal and Technical Reflections Cornell-SEWA-WIEGO Exposure and Dialogue Program Oaxaca, Mexico
Keywords: Food Security and Poverty, International Development, International Relations/Trade
Sharing Social Network Data: Differentially Private Estimation of Exponential-Family Random Graph Models
Motivated by a real-life problem of sharing social network data that contain
sensitive personal information, we propose a novel approach to release and
analyze synthetic graphs in order to protect privacy of individual
relationships captured by the social network while maintaining the validity of
statistical results. A case study using a version of the Enron e-mail corpus
dataset demonstrates the application and usefulness of the proposed techniques
in solving the challenging problem of maintaining privacy and supporting
open access to network data to ensure reproducibility of existing studies and
discovering new scientific insights that can be obtained by analyzing such
data. We use a simple yet effective randomized response mechanism to generate
synthetic networks under ε-edge differential privacy, and then use
likelihood-based inference for missing data and Markov chain Monte Carlo
techniques to fit exponential-family random graph models to the generated
synthetic networks.
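The randomized response mechanism referred to in the abstract flips each potential edge independently with probability 1/(1 + e^eps), which satisfies eps-edge differential privacy. A minimal sketch on a toy graph (not the Enron data) might look like:

```python
import math
import random

def randomized_response_graph(adj, eps, rng=random.Random(0)):
    """Release a synthetic graph under eps-edge differential privacy.

    adj: symmetric 0/1 adjacency matrix (list of lists, zero diagonal).
    Each upper-triangle entry is kept with probability e^eps / (1 + e^eps)
    and flipped with probability 1 / (1 + e^eps); the likelihood ratio of
    the two outcomes is e^eps, which is exactly the eps-DP guarantee per edge.
    """
    n = len(adj)
    p_flip = 1.0 / (1.0 + math.exp(eps))
    out = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            bit = adj[i][j]
            if rng.random() < p_flip:
                bit = 1 - bit          # flip the edge indicator
            out[i][j] = out[j][i] = bit  # keep the output symmetric
    return out

# Toy 4-node graph: a path 0-1-2-3.
g = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
synthetic = randomized_response_graph(g, eps=2.0)
```

Fitting an ERGM to such a synthetic network then requires the missing-data likelihood and MCMC machinery the abstract describes, which this sketch does not cover.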