PEER Testbed Study on a Laboratory Building: Exercising Seismic Performance Assessment
From 2002 to 2004 (years five and six of a ten-year funding cycle), the PEER Center organized
the majority of its research around six testbeds. Two buildings and two bridges, a campus, and a
transportation network were selected as case studies to “exercise” the PEER performance-based
earthquake engineering methodology. All projects involved interdisciplinary teams of
researchers, each producing data to be used by other colleagues in their research. The testbeds
demonstrated that it is possible to create the data necessary to populate the PEER performance-based framing equation, linking the hazard analysis, the structural analysis, the development of
damage measures, loss analysis, and decision variables.
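The framing equation referred to here chains those four analyses probabilistically; in the PEER literature it is usually written as a triple integral (notation recalled here for reference, with G denoting a complementary cumulative distribution function):

```latex
\lambda(DV) \;=\; \iiint G\langle DV \mid DM \rangle \,\mathrm{d}G\langle DM \mid EDP \rangle \,\mathrm{d}G\langle EDP \mid IM \rangle \,\mathrm{d}\lambda(IM)
```

Here IM is the intensity measure (hazard analysis), EDP the engineering demand parameter (structural analysis), DM the damage measure, and DV the decision variable such as loss or downtime.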
This report describes one of the building testbeds—the UC Science Building. The project
was chosen to focus attention on the consequences of losses of laboratory contents, particularly
downtime. The UC Science testbed evaluated the earthquake hazard and the structural
performance of a well-designed, recently built reinforced concrete laboratory building using the
OpenSees platform. Researchers conducted shake table tests on samples of critical laboratory
contents in order to develop fragility curves used to analyze the probability of losses based on
equipment failure. The UC Science testbed undertook an extreme case in performance
assessment—linking performance of contents to operational failure. The research shows the
interdependence of building structure, systems, and contents in performance assessment, and
highlights where further research is needed.
The Executive Summary provides a short description of the overall testbed research
program, while the main body of the report includes summary chapters from individual
researchers. More extensive research reports are cited in the reference section of each chapter.
Guidelines for Statistical Testing
This document provides an introduction to statistical testing. Statistical testing of software is here defined as testing in which the test cases are produced by a random process meant to produce different test cases with the same probabilities with which they would arise in actual use of the software. Statistical testing of software has these main advantages. For the purpose of reliability assessment and product acceptance, it directly supports estimates of reliability, and thus decisions on whether the software is ready for delivery or for use in a specific system; this feature is unique to statistical testing. For the purpose of improving the software, it tends to discover the defects that would cause the most frequent failures before those that would cause less frequent failures, thus focusing correction efforts in the most cost-effective way and delivering better software for a given debugging effort; statistical testing has been reported to achieve dramatic improvements on this front. From the point of view of costs, it facilitates the automation of the test process, thus allowing more testing at acceptable cost than manual testing would allow. This document explains the basic theory underlying statistical testing and provides guidance for its application. The material is organised to facilitate use both as an introduction for software engineers who are new to this approach to testing and as a reference source during application. Statistical testing is applicable to practically all kinds of software, so this document is not markedly specialised for space applications, though the examples are mostly space-related and the discussion of the software lifecycle is meant to apply to common practice among ESA suppliers.
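The sampling scheme described here (test cases drawn with the probabilities of actual use) can be sketched in a few lines of Python. The operational profile, the `sut` function, and the failure behaviour below are illustrative assumptions, not taken from the document:

```python
import random

def sample_operational_profile(profile, rng):
    """Draw one input class according to its usage probability."""
    classes, weights = zip(*profile.items())
    return rng.choices(classes, weights=weights, k=1)[0]

def statistical_test(sut, profile, n_tests, seed=0):
    """Run n_tests random test cases drawn from the operational
    profile and return the observed failure fraction, which
    estimates the failure probability per use."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_tests):
        case = sample_operational_profile(profile, rng)
        if not sut(case):
            failures += 1
    return failures / n_tests

# Hypothetical system under test: fails only on the rare "corrupt" class.
def sut(case):
    return case != "corrupt"

# Hypothetical operational profile: usage probabilities of input classes.
profile = {"nominal": 0.90, "boundary": 0.09, "corrupt": 0.01}
rate = statistical_test(sut, profile, n_tests=10_000)
```

Because test cases are drawn from the usage distribution, the observed failure rate directly estimates operational reliability, which is the property the document highlights as unique to statistical testing.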
Estimating the reproducibility of social learning research published between 1955 and 2018
Reproducibility is integral to science, but difficult to achieve. Previous research has quantified low rates of data availability and results reproducibility across the biological and behavioural sciences. Here, we surveyed 560 empirical publications, published between 1955 and 2018 in the social learning literature, a research topic that spans animal behaviour, behavioural ecology, cultural evolution and evolutionary psychology. Data were recoverable online or through direct data requests for 30% of this sample. Data recovery declines exponentially with time since publication, halving every 6 years, and up to every 9 years for human experimental data. When data for a publication can be recovered, we estimate a high probability of subsequent data usability (87%), analytical clarity (97%) and agreement of published results with reproduced findings (96%). This corresponds to an overall rate of recovering data and reproducing results of 23%, largely driven by the unavailability or incompleteness of data. We thus outline clear measures to improve the reproducibility of research on the ecology and evolution of social behaviour.
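The quoted rates can be sanity-checked with a little arithmetic: multiplying the stage probabilities reproduces the reported overall rate to within rounding (the abstract reports 23%; the naive product gives about 24%, the small difference presumably reflecting how the published estimate combines the stages):

```python
# Stage probabilities as quoted in the abstract.
recovery = 0.30   # data recoverable for 30% of the sample
usable   = 0.87   # data usable once recovered
clear    = 0.97   # analysis is clear enough to rerun
agree    = 0.96   # published results match the reproduction

# Naive chained estimate of recovering data AND reproducing results.
overall = recovery * usable * clear * agree   # about 0.24

# Exponential decay with a 6-year half-life: fraction of data
# still expected to be recoverable t years after publication.
def recoverable_fraction(t_years, half_life=6.0):
    return 0.5 ** (t_years / half_life)
```

Twelve years after publication (two half-lives), this model predicts only a quarter of datasets remain recoverable, which illustrates why data unavailability dominates the overall rate.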
Seismic Performance of Anchored Brick Veneer
A study was conducted on the out-of-plane seismic performance of anchored brick veneer
with wood-frame backup wall systems, to evaluate prescriptive design requirements and
current construction practices. Prescriptive requirements for the design and construction
of anchored brick veneer are currently provided by the Masonry Standards Joint
Committee (MSJC) Building Code, the International Residential Code (IRC) for One- and
Two-Family Dwellings, and the Brick Industry Association (BIA) Technical Notes.
Laboratory tests were conducted on brick-tie-wood subassemblies, comprising two bricks
with a corrugated sheet metal tie either nail- or screw-attached to a wood stud, permitting
an evaluation of the stiffness, strength, and failure modes for a local portion of a veneer
wall system, rather than just of a single tie by itself. Then, full-scale brick veneer wall
specimens (two one-story solid walls, as well as a one-and-a-half story wall with a
window opening and a gable region) were tested under static and dynamic out-of-plane
loading on a shake table. The shake table tests captured the performance of brick veneer
wall systems, including interaction and load-sharing between the brick veneer, corrugated
sheet metal ties, and wood-frame backup. Finally, all of these test results were used to
develop finite element models of brick veneer wall systems, including nonlinear inelastic
properties for the tie connections. The experimental and analytical studies showed that
the out-of-plane seismic performance of residential anchored brick veneer walls is
generally governed by: tensile stiffness and strength properties of the tie connections, as
controlled by tie installation details; overall grid spacing of the tie connections, especially
for tie installation along the edges and in the upper regions of walls; and overall wall
geometric variations. Damage limit states for single-story residential brick veneer wall
systems were established from the experimental and analytical studies as a function of
tensile failure of key tie connections, and the seismic fragility of this form of construction
was then evaluated. Based on the overall findings, it is recommended that codes
incorporate specific requirements for tie connection installation along all brick veneer
wall edges, as well as for tie connection installation at reduced spacings in the upper
regions of wall panels and near stiffer regions of the backup. Residential anchored brick
veneer construction should, as a minimum, be built in accordance with the current
prescriptive code requirements and recommendations throughout low-to-moderate
seismicity regions of the central and eastern U.S., whereas non-compliant methods of
construction commonly substituted in practice are generally not acceptable.
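A common way to express the seismic fragility evaluated in the final step is a lognormal fragility curve: the probability of reaching a damage limit state as a function of demand. The sketch below is generic, with a hypothetical median capacity and dispersion rather than values from this study:

```python
import math

def lognormal_fragility(demand, median, beta):
    """P(damage state reached | demand): lognormal CDF with median
    capacity `median` and logarithmic standard deviation `beta`."""
    if demand <= 0:
        return 0.0
    z = (math.log(demand) - math.log(median)) / beta
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Illustrative (not measured) parameters: median spectral acceleration
# of 1.0 g at tensile failure of key tie connections, beta = 0.4.
for sa in (0.5, 1.0, 2.0):
    p = lognormal_fragility(sa, median=1.0, beta=0.4)
    print(f"Sa = {sa:.1f} g -> P(tie failure) = {p:.2f}")
```

By construction the curve passes through 50% at the median capacity, and the dispersion `beta` controls how sharply the failure probability rises with demand.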
Reliability Guided Resource Allocation for Large-scale Supercomputing Systems
In high performance computing systems, parallel applications request a large number of resources for long time periods. In this scenario, if a resource fails during the application runtime, it would cause all applications using this resource to fail. The probability of application failure is tied to the inherent reliability of the resources used by the application. Our investigation of high performance computing systems operating in the field has revealed a significant difference in the measured operational reliability of individual computing nodes. By adding awareness of the individual system nodes' reliability to the scheduler, along with the predicted reliability needs of parallel applications, reliable resources can be matched with the most demanding applications to reduce the probability of application failure arising from resource failure. In this thesis, the researcher describes a new approach developed for resource allocation that can enhance the reliability and reduce the costs of failures of large-scale parallel applications that use high performance computing systems. This approach is based on a multi-class Erlang loss system that allows us to partition system resources based on predicted resource reliability, and to size each of these partitions to bound the probability of blocking requests to each partition while simultaneously improving the reliability of the most demanding parallel applications running on the system. Using this model, the partition mean time to failure (MTTF) is maximized, and the probability of blocking of resource requests directed to each partition by a scheduling system can be controlled. This new technique can be used to determine the size of the system needed to service peak loads with a bounded probability of blocking of resource requests. This approach would be useful for high performance computing system operators seeking to improve the reliability, efficiency, and cost-effectiveness of their systems.
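The Erlang loss model described above bounds blocking via the classic Erlang B formula, which is computed stably by recursion. The sketch below sizes a single hypothetical partition; the offered load and blocking target are illustrative assumptions, not figures from the thesis:

```python
def erlang_b(offered_load, servers):
    """Erlang B blocking probability for `offered_load` (in Erlangs)
    offered to `servers` identical resources, using the stable
    recursion B(0) = 1, B(n) = a*B(n-1) / (n + a*B(n-1))."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# Hypothetical sizing: smallest partition keeping blocking below 1%
# for an offered load of 20 concurrent node-requests (Erlangs).
load, target = 20.0, 0.01
size = 1
while erlang_b(load, size) > target:
    size += 1
print(size, round(erlang_b(load, size), 4))
```

In the multi-class setting, the same computation would be repeated per reliability partition, trading partition size against the blocking bound granted to each application class.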