
    Supporting the Everyday Work of Scientists: Automating Scientific Workflows

    This paper describes an action research project that we undertook with National Research Council Canada (NRC) scientists. Based on discussions about their difficulties in using software to collect data and manage processes, we identified three requirements for increasing research productivity: ease of use for end-users; managing scientific workflows; and facilitating software interoperability. Based on these requirements, we developed a software framework, Sweet, to assist in the automation of scientific workflows.

Throughout the iterative development process, and through a series of structured interviews, we evaluated how the framework was used in practice, and identified increases in productivity and effectiveness and their causes. While the framework provides resources for writing application wrappers, it was easier to code the applications' functionality directly into the framework using OSS components. Ease of use for the end-user and flexible and fully parameterized workflow representations were key elements of the framework's success.
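To make the idea of a fully parameterized workflow concrete, here is a minimal sketch of the pattern the abstract describes; the `Workflow` and `Step` names are hypothetical illustrations for this listing, not Sweet's actual API.

```python
# Hypothetical sketch of a parameterized scientific workflow in the spirit
# described above; class and method names are illustrative, not Sweet's API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]   # takes parameters, returns outputs

@dataclass
class Workflow:
    steps: list[Step] = field(default_factory=list)

    def execute(self, params: dict) -> dict:
        # Each step receives the accumulated parameter/result dictionary,
        # so the whole pipeline stays fully parameterized and replayable.
        for step in self.steps:
            params.update(step.run(params))
        return params

# Usage: chain a data-acquisition step and an analysis step.
wf = Workflow([
    Step("acquire", lambda p: {"data": [p["gain"] * x for x in range(5)]}),
    Step("analyze", lambda p: {"mean": sum(p["data"]) / len(p["data"])}),
])
print(wf.execute({"gain": 2.0})["mean"])
```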

    An Assessment to Benchmark the Seismic Performance of a Code-Conforming Reinforced-Concrete Moment-Frame Building

    This report describes a state-of-the-art performance-based earthquake engineering methodology that is used to assess the seismic performance of a four-story reinforced concrete (RC) office building that is generally representative of low-rise office buildings constructed in highly seismic regions of California. This “benchmark” building is considered to be located at a site in the Los Angeles basin, and it was designed with a ductile RC special moment-resisting frame as its seismic lateral system, according to modern building codes and standards. The building’s performance is quantified in terms of structural behavior up to collapse, structural and nonstructural damage and associated repair costs, and the risk of fatalities and their associated economic costs. To account for different building configurations that may be designed in practice to meet requirements of building size and use, eight structural design alternatives are used in the performance assessments.

Our performance assessments account for important sources of uncertainty in the ground motion hazard, the structural response, structural and nonstructural damage, repair costs, and life-safety risk. The ground motion hazard characterization employs a site-specific probabilistic seismic hazard analysis and the evaluation of controlling seismic sources (through disaggregation) at seven ground motion levels (encompassing return periods ranging from 7 to 2475 years). Innovative procedures for ground motion selection and scaling are used to develop acceleration time history suites corresponding to each of the seven ground motion levels. Structural modeling utilizes both “fiber” models and “plastic hinge” models. Structural modeling uncertainties are investigated through comparison of these two modeling approaches, and through variations in structural component modeling parameters (stiffness, deformation capacity, degradation, etc.). Structural and nonstructural damage (fragility) models are based on a combination of test data, observations from post-earthquake reconnaissance, and expert opinion. Structural damage and repair costs are modeled for the RC beams, columns, and slab-column connections. Damage and associated repair costs are considered for some nonstructural building components, including wallboard partitions, interior paint, exterior glazing, ceilings, sprinkler systems, and elevators. The risk of casualties and the associated economic costs are evaluated based on the risk of structural collapse, combined with recent models on earthquake fatalities in collapsed buildings and accepted economic modeling guidelines for the value of human life in loss and cost-benefit studies.

The principal results of this work pertain to the building collapse risk, damage and repair cost, and life-safety risk. These are discussed successively as follows. When accounting for uncertainties in structural modeling and record-to-record variability (i.e., conditional on a specified ground shaking intensity), the structural collapse probabilities of the various designs range from 2% to 7% for earthquake ground motions that have a 2% probability of exceedance in 50 years (2475-year return period). When integrated with the ground motion hazard for the southern California site, the collapse probabilities result in mean annual frequencies of collapse in the range of [0.4 to 1.4]×10⁻⁴ for the various benchmark building designs.
In the development of these results, we made the following observations that are expected to be broadly applicable: (1) The ground motions selected for performance simulations must consider spectral shape (e.g., through use of the epsilon parameter) and should appropriately account for correlations between motions in both horizontal directions; (2) Lower-bound component models, which are commonly used in performance-based assessment procedures such as FEMA 356, can significantly bias collapse analysis results; it is more appropriate to use median component behavior, including all aspects of the component model (strength, stiffness, deformation capacity, cyclic deterioration, etc.); (3) Structural modeling uncertainties related to component deformation capacity and post-peak degrading stiffness can impact the variability of calculated collapse probabilities and mean annual rates to a similar degree as record-to-record variability of ground motions. Therefore, including the effects of such structural modeling uncertainties significantly increases the mean annual collapse rates. We found this increase to be roughly four to eight times relative to rates evaluated for the median structural model; (4) Nonlinear response analyses revealed at least six distinct collapse mechanisms, the most common of which was a story mechanism in the third story (differing from the multi-story mechanism predicted by nonlinear static pushover analysis); (5) Soil-foundation-structure interaction effects did not significantly affect the structural response, which was expected given the relatively flexible superstructure and stiff soils.

The potential for financial loss is considerable. Overall, the calculated expected annual losses (EAL) are in the range of $52,000 to $97,000 for the various code-conforming benchmark building designs, or roughly 1% of the replacement cost of the building ($8.8M). These losses are dominated by the expected repair costs of the wallboard partitions (including interior paint) and by the structural members. Loss estimates are sensitive to details of the structural models, especially the initial stiffness of the structural elements. Losses are also found to be sensitive to structural modeling choices, such as ignoring the tensile strength of the concrete (40% change in EAL) or the contribution of the gravity frames to overall building stiffness and strength (15% change in EAL).

Although there are a number of factors identified in the literature as likely to affect the risk of human injury during seismic events, the casualty modeling in this study focuses on those factors (building collapse, building occupancy, and spatial location of building occupants) that directly inform the building design process. The expected annual number of fatalities is calculated for the benchmark building, assuming that an earthquake can occur at any time of any day with equal probability and using fatality probabilities conditioned on structural collapse and based on empirical data. The expected annual number of fatalities for the code-conforming buildings ranges between 0.05×10⁻² and 0.21×10⁻², and is equal to 2.30×10⁻² for a non-code-conforming design. The expected loss of life during a seismic event is perhaps the decision variable that owners and policy makers will be most interested in mitigating. The fatality estimation carried out for the benchmark building provides a methodology for comparing this important value for various building designs, and enables informed decision making during the design process. The expected annual loss associated with fatalities caused by building earthquake damage is estimated by converting the expected annual number of fatalities into economic terms. Assuming the value of a human life is $3.5M, the fatality rate translates to an EAL due to fatalities of $3,500 to $5,600 for the code-conforming designs, and $79,800 for the non-code-conforming design. Compared to the EAL due to repair costs of the code-conforming designs, which are on the order of $66,000, the monetary value associated with life loss is small, suggesting that the governing factor in this respect will be the maximum permissible life-safety risk deemed by the public (or its representative government) to be appropriate for buildings.

Although the focus of this report is on one specific building, it can be used as a reference for other types of structures. This report is organized in such a way that the individual core chapters (4, 5, and 6) can be read independently. Chapter 1 provides background on the performance-based earthquake engineering (PBEE) approach. Chapter 2 presents the implementation of the PBEE methodology of the PEER framework, as applied to the benchmark building. Chapter 3 sets the stage for the choices of location and basic structural design. The subsequent core chapters focus on the hazard analysis (Chapter 4), the structural analysis (Chapter 5), and the damage and loss analyses (Chapter 6). Although the report is self-contained, readers interested in additional details can find them in the appendices.
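The collapse-risk numbers above come from integrating a collapse fragility curve against the site hazard curve, λ_collapse = ∫ P(C|im) |dλ(im)|. Below is a minimal numerical sketch of that integration; the power-law hazard curve and lognormal fragility parameters are illustrative placeholders, not values from the report.

```python
# Sketch of the hazard-fragility integration described above; the hazard and
# fragility parameters are assumed placeholders, not the benchmark values.
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import lognorm

im = np.linspace(0.01, 3.0, 300)                # intensity measure, e.g. Sa(T1) in g
hazard = 1e-4 * (im / 0.5) ** -2.5              # lambda(im): mean annual rate of exceedance
fragility = lognorm(s=0.6, scale=1.0).cdf(im)   # P(collapse | im)

# lambda_collapse = integral of P(C|im) against the hazard-curve slope.
d_lambda = -np.gradient(hazard, im)
lambda_collapse = trapezoid(fragility * d_lambda, im)
print(f"mean annual frequency of collapse ~ {lambda_collapse:.1e}")
```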

    Transactional Economics: Victor Goldberg's Framing Contract Law

    Professor Mark Gergen: Thank you. It is an honor to speak to this group and to be on a panel with Stewart Macaulay, Keith Rowley, and Victor Goldberg. I have an enormous amount of respect for the three. Keith had the misfortune of being a student of mine in Federal Income Tax. Framing Contract Law offers a wealth of information about familiar cases. Victor argues that in construing contracts, courts should be attentive to how people engineer contracts to minimize transaction costs. He shows that courts often err in this regard, imposing unnecessary costs. To make his case, Victor delves deeply into the background of cases, many of which will be familiar to anyone who has taught contracts, and turns up much that is new and interesting. I am going to follow Victor's lead by focusing on two cases that he discusses. I will briefly summarize what he says about the cases. I will then use the cases as a springboard to make my points, which are different from Victor's points.

    The Pleistocene Glacial Record at Two Quarries in Decatur County, Iowa

    The Pleistocene stratigraphy and sedimentology of two quarry exposures near Grand River and Decatur City in Decatur County, Iowa, document a sequence of Pleistocene sediments overlying striated Pennsylvanian limestone that represents at least two pre-Illinoian glacial advances into the ancestral Grand River valley. Two pre-Illinoian diamictons separated by a clast pavement were observed at the Decatur City quarry; a single diamicton was present at the Grand River quarry. At both quarries, the diamictons exhibit comparable lithologic properties and are genetically interpreted as basal tills. The pre-Illinoian tills are tentatively correlated with the Alburnett Formation in eastern Iowa, primarily on the basis of clay mineralogy data. Fluvial erosional and depositional processes succeeded till deposition at both quarry sites. The tills are overlain by a fining-upward fluvial sequence upon which a well-developed Yarmouth-Sangamon paleosol is developed. A Sangamon Soil developed on a pebbly diamicton overlies the fluvial sediments. The pebbly diamicton probably originated as colluvium from pre-Illinoian tills at higher landscape positions during late Sangamonian pedimentation. Lastly, periglacial conditions during mid-to-late Wisconsinan time resulted in multiple episodes of loess deposition corresponding to, in ascending order, the Pisgah Formation, Farmdale Soil, and Peoria Loess, all Wisconsinan stratigraphic units.

    Wideband TV white space transceiver design and implementation

    For transceivers operating in television white space (TVWS), frequency agility and strict spectral-mask fulfilment are vital. In the UK, TVWS covers a 320 MHz wide frequency band in the UHF range, and the aim of this paper is to present a wideband digital up- and down-converter for this scenario. Sampling at radio frequency (RF), a two-stage digital conversion is presented, which consists of a polyphase filter for implicit upsampling and decimation, and a filter-bank-based multicarrier approach to resolve the 8 MHz channels within the TVWS band. We demonstrate that the up- and down-conversion of 40 such channels is hardly more costly than that of a single channel. Appropriate filter design can satisfy the mandated spectral mask and control the reconstruction error. An FPGA implementation is discussed, capable of running the wideband transceiver on a single Virtex-7 device with sufficient word length to preserve the spectral mask requirements of the system.
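As a rough illustration of why 40 channels cost little more than one, here is a hedged sketch of a maximally decimated polyphase filter-bank channelizer, the standard structure behind approaches of this kind; the channel count, prototype filter, and commutator-phase details are assumptions for demonstration, not the paper's design values.

```python
# Illustrative polyphase filter-bank channelizer (analysis side). M and the
# prototype filter are assumptions; commutator-phase and alignment details
# are omitted for brevity.
import numpy as np
from scipy.signal import firwin

M = 8                                    # number of channels
proto = firwin(8 * M, 1.0 / M)           # lowpass prototype, cutoff at 1/M of Nyquist
poly = proto.reshape(-1, M).T            # polyphase branches: row k holds proto[k::M]

def channelize(x):
    """Split x into M equally spaced channels, each decimated by M."""
    n = (len(x) // M) * M
    branches = x[:n].reshape(-1, M).T    # serial-to-parallel: row k gets x[k::M]
    filtered = np.stack([np.convolve(b, p, mode="same")
                         for b, p in zip(branches, poly)])
    # A single FFT across the branches resolves all M channels at once,
    # which is why adding channels costs almost nothing beyond the FFT.
    return np.fft.ifft(filtered, axis=0)

channels = channelize(np.random.randn(4096))
print(channels.shape)                    # (8, 512)
```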

    Rapid Independent Trait Evolution despite a Strong Pleiotropic Genetic Correlation

    This is the publisher's version. It can also be found here: http://dx.doi.org/10.1086/661907

Genetic correlations are the most commonly studied of all potential constraints on adaptive evolution. We present a comprehensive test of constraints caused by genetic correlation, comparing empirical results to predictions from theory. The additive genetic correlation between the filament and the corolla tube in wild radish flowers is very high in magnitude, is estimated with good precision, and is caused by pleiotropy. Thus, evolutionary changes in the relative lengths of these two traits should be constrained. Still, artificial selection produced rapid evolution of these traits in opposite directions, so that in one replicate, relative to controls, the difference between them increased by six standard deviations in only nine generations. This would result in a 54% increase in relative fitness on the basis of a previous estimate of natural selection in this population, and it would produce the phenotypes found in the most extreme species in the family Brassicaceae in less than 100 generations. These responses were within theoretical expectations and were much slower than if the genetic correlation were zero; thus, there was evidence for constraint. These results, coupled with comparable results from other species, show that evolution can be rapid despite the constraints caused by genetic correlations.
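The constraint at issue is captured by the multivariate breeder's equation, Δz̄ = Gβ. A small numeric sketch (with an assumed G-matrix, not the wild radish estimates) shows how a strong positive genetic correlation slows, but does not prevent, response to antagonistic selection on the two traits.

```python
# Sketch of the multivariate breeder's equation, delta_z = G @ beta, which
# formalizes the constraint discussed above. The G-matrix values are
# illustrative, not the wild radish estimates.
import numpy as np

var, r = 1.0, 0.85                       # additive variances and genetic correlation
G = np.array([[var, r * var],
              [r * var, var]])           # G-matrix for filament and corolla tube
beta = np.array([0.5, -0.5])             # antagonistic selection gradients

delta_z = G @ beta                       # per-generation response
print(delta_z)                           # [ 0.075 -0.075]: ~6.7x slower than r = 0,
                                         # but the traits still diverge each generation
```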

    Towards a Singularity-Free Inflationary Universe?

    We consider the problem of constructing a non-singular inflationary universe in stringy gravity via branch changing, from a previously superexponentially expanding phase to an FRW-like phase. Our approach is based on a phase-space analysis of the dynamics, and we obtain a no-go theorem which rules out the efficient scenario of branch changing catalyzed by dilaton potential and stringy fluid sources. We furthermore consider the effects of string-loop corrections to the gravitational action in the form recently suggested by Damour and Polyakov. These corrections also fail to produce the desired branch change. However, focusing on the possibility that these corrections may decouple the dilaton, we deduce that they may lead to an inflationary expansion in the presence of a cosmological constant, which asymptotically approaches the Einstein-de Sitter solution.

    Structure and spectroscopy of CuH prepared via borohydride reduction

    Copper(I) hydride (cuprous hydride, CuH) was the first binary metal hydride to be discovered (in 1844) and is singular in that it is synthesized in solution, at ambient temperature. There are several synthetic paths to CuH, one of which involves reduction of an aqueous solution of CuSO₄·5H₂O by borohydride ions. The product from this procedure has not been extensively characterized. Using a combination of diffraction methods (X-ray and neutron) and inelastic neutron scattering spectroscopy, we show that the CuH from the borohydride route has the same bulk structure as CuH produced by other routes. Our work shows that the product consists of a core of CuH with a shell of water and that this may be largely replaced by ethanol. This offers the possibility of modifying the properties of CuH produced by aqueous routes.

    The Samurai Project: verifying the consistency of black-hole-binary waveforms for gravitational-wave detection

    We quantify the consistency of numerical-relativity black-hole-binary waveforms for use in gravitational-wave (GW) searches with current and planned ground-based detectors. We compare previously published results for the (ℓ = 2, |m| = 2) modes of the gravitational waves from an equal-mass nonspinning binary, calculated by five numerical codes. We focus on the 1000M (about six orbits, or 12 GW cycles) before the peak of the GW amplitude and the subsequent ringdown. We find that the phase and amplitude agree within each code's uncertainty estimates. The mismatch between the (ℓ = 2, |m| = 2) modes is better than 10⁻³ for binary masses above 60 M☉ with respect to the Enhanced LIGO detector noise curve, and for masses above 180 M☉ with respect to Advanced LIGO, Virgo, and Advanced Virgo. Between the waveforms with the best agreement, the mismatch is below 2×10⁻⁴. We find that the waveforms would be indistinguishable in all ground-based detectors (and for the masses we consider) if detected with a signal-to-noise ratio of less than ≈14, or less than ≈25 in the best cases.
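The mismatch figures quoted above are one minus the noise-weighted match between waveforms, maximized over relative time and phase shifts. Below is a hedged sketch of that computation; the power spectral density and waveforms are toy placeholders, not real detector noise curves or numerical-relativity data.

```python
# Sketch of the noise-weighted match O(h1,h2) = <h1|h2> / sqrt(<h1|h1><h2|h2>),
# maximized over time and phase shifts. PSD and waveforms are toy assumptions.
import numpy as np

def overlap(h1, h2, psd, df):
    """Match between frequency-domain waveforms, maximized over time/phase shift."""
    def inner(a, b):
        # <a|b> = 4 Re sum a(f) b*(f) / S_n(f) df
        return 4.0 * df * np.real(np.sum(a * np.conj(b) / psd))
    norm = np.sqrt(inner(h1, h1) * inner(h2, h2))
    # An inverse FFT of the integrand evaluates <h1|h2> at every time shift.
    corr = 4.0 * df * np.fft.ifft(h1 * np.conj(h2) / psd) * len(h1)
    return np.max(np.abs(corr)) / norm

f = np.linspace(20.0, 1024.0, 4096)
df = f[1] - f[0]
psd = 1e-46 * (1 + (f / 150.0) ** -4 + (f / 150.0) ** 2)   # toy noise curve
h1 = np.exp(1j * 2 * np.pi * (f / 50.0) ** 1.5)            # toy chirp-like phasing
h2 = np.exp(1j * (2 * np.pi * (f / 50.0) ** 1.5 + 0.01 * f / 100.0))
print(f"mismatch ~ {1.0 - overlap(h1, h2, psd, df):.2e}")
```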