261 research outputs found

    Effects of selenium and thyroid hormone deficiency on peritoneal macrophage adhesion and occurrence of natural IgM antibodies in juvenile rats

    Both selenium, as an effector and regulator of antioxidative enzyme activity, and thyroid hormones are potent immunomodulators. In addition, selenium incorporated into iodothyronine deiodinases is involved in thyroid function and thus indirectly regulates the immune response. Studies of the mutual influence of selenium and thyroid hormones on the immune response are scarce, so we analyzed the effects of an iodothyronine deiodinase blocker, propylthiouracil (PTU), and of selenium deficiency on the function of peritoneal macrophages and on the titer of naturally occurring anti-sheep red blood cell (SRBC) IgM antibodies in juvenile rats. The experiment was carried out on 64 male Wistar rats allotted to 4 groups: control (selenium adequate, PTU−); selenium adequate, PTU+; selenium deficient, PTU−; and selenium deficient, PTU+. The selenium adequate and selenium deficient groups were fed diets containing 0.334 and 0.031 mg Se/kg, respectively. PTU+ groups received PTU (150 mg/L) in drinking water. After 3 weeks, thyroxine (T4), triiodothyronine (T3), and thyroid stimulating hormone (TSH) were determined. Animals with "intermediate" concentrations of T3 (1.56-1.69 nmol/L) and T4 (41-50 nmol/L) were excluded from further analysis. Thus, PTU+ groups included hypothyroid animals (T3 ≤ 1.55 nmol/L; T4 ≤ 40 nmol/L), while PTU− groups included euthyroid rats (T3 ≥ 1.70 nmol/L; T4 ≥ 50 nmol/L). Compared with the control group, both selenium deficient groups had significantly lower activities of the glutathione peroxidases GPx1 and GPx3. Neither selenium deficiency nor PTU influenced the adherence of peritoneal macrophages. Selenium deficiency significantly decreased peroxide synthesis in macrophages and significantly increased the titer of anti-SRBC IgM. Hypothyroidism, alone or in combination with selenium deficiency, had no influence on these parameters.
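
    A minimal sketch of the inclusion rule implied by those cut-offs (Python, written purely for illustration; the function name and the treatment of borderline values are assumptions, not taken from the paper):

        def thyroid_status(t3_nmol_per_l, t4_nmol_per_l):
            """Classify an animal by the T3/T4 cut-offs quoted above; animals in
            the intermediate band were excluded from the analysis."""
            if t3_nmol_per_l <= 1.55 and t4_nmol_per_l <= 40:
                return "hypothyroid"
            if t3_nmol_per_l >= 1.70 and t4_nmol_per_l >= 50:
                return "euthyroid"
            return "intermediate (excluded)"

        print(thyroid_status(1.40, 35))  # hypothyroid
        print(thyroid_status(1.80, 60))  # euthyroid
        print(thyroid_status(1.60, 45))  # intermediate (excluded)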

    Bayesian Updating of Earthquake Vulnerability Functions with Application to Mortality Rates

    Vulnerability functions often rely on data from expert opinion, post-earthquake investigations, or analytical simulations, and combining these sources of information can be particularly challenging. In this paper a Bayesian statistical framework is presented for combining such disparate information. The framework is illustrated through application to earthquake mortality data obtained from the 2005 Pakistan earthquake and from PAGER. Three models are tested: an exponential model, a Bernoulli-exponential model, and a Bernoulli-gamma model, in which the Bernoulli component represents zero mortality rates and the exponential or gamma component represents non-zero rates. A novel Bayesian model for the Bernoulli-exponential and Bernoulli-gamma probability densities is introduced. The exponential distribution is found to represent the zero casualties very poorly, whereas the Bernoulli-exponential and Bernoulli-gamma models capture both the zero and non-zero mortality rates. The Bernoulli-gamma model fits the 2005 Pakistan data best, with uncertainties smaller than those obtained from either the 2005 Pakistan data or the PAGER data alone. This research was partially supported by the Global Earthquake Model, by the National Science Foundation Grant CMMI 1233694, and by the Shah Family Graduate Fellowship.
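
    A simplified illustration of fitting a Bernoulli-gamma (zero-inflated) model to mortality rates, using plain maximum likelihood rather than the paper's Bayesian formulation; the function names and the demo data are invented (Python, with scipy assumed available):

        import numpy as np
        from scipy import stats

        def fit_bernoulli_gamma(rates):
            """Fit a zero-inflated (Bernoulli-gamma) model by maximum likelihood:
            p is the probability of a non-zero mortality rate; (shape, scale)
            describe the gamma distribution of the non-zero rates."""
            rates = np.asarray(rates, dtype=float)
            nonzero = rates[rates > 0]
            p = nonzero.size / rates.size                 # Bernoulli parameter
            shape, _, scale = stats.gamma.fit(nonzero, floc=0)
            return p, shape, scale

        def log_likelihood(rates, p, shape, scale):
            """Mixture log-likelihood, useful for comparing against a plain
            exponential fit, which handles the many zero observations poorly."""
            rates = np.asarray(rates, dtype=float)
            n_zero = int((rates == 0).sum())
            nonzero = rates[rates > 0]
            ll = n_zero * np.log(1.0 - p) if n_zero else 0.0
            ll += nonzero.size * np.log(p)
            ll += stats.gamma.logpdf(nonzero, shape, scale=scale).sum()
            return ll

        # Hypothetical mortality rates per settlement: many zeros, a few non-zero values.
        demo = np.array([0, 0, 0, 0.001, 0, 0.004, 0, 0, 0.02, 0])
        p, shape, scale = fit_bernoulli_gamma(demo)
        print(p, shape, scale, log_likelihood(demo, p, shape, scale))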

    Assembly-Based Vulnerability of Buildings and Its Use in Performance Evaluation

    Assembly-based vulnerability (ABV) is a framework for evaluating the seismic vulnerability and performance of buildings on a building-specific basis. It considers the damage to individual building components and accounts for the building's seismic setting, structural and nonstructural design, and use. A simulation approach to implementing ABV first applies a ground-motion time history to a structural model to determine structural response. The response is then applied to assembly fragility functions to simulate damage to each structural and nonstructural element in the building, and to its contents. Probabilistic construction cost estimation and scheduling are used to estimate repair cost and loss-of-use duration as random variables. ABV also provides a framework for accumulating post-earthquake damage observations in a statistically systematic and consistent manner. The framework and simulation approach are novel in that they are fully probabilistic, address damage at a highly detailed and building-specific level, and do not rely extensively on expert opinion. ABV is illustrated using an example pre-Northridge welded-steel-moment-frame office building.
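
    A minimal sketch of one ABV-style simulation pass; the assembly inventory, fragility parameters, unit costs, and the drift value standing in for the structural-response step are all invented placeholders, not values from the paper:

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)

        # Placeholder assembly inventory: (name, quantity, fragility median drift,
        # lognormal standard deviation, unit repair cost in $). All values invented.
        assemblies = [
            ("partition_wall",   40, 0.004, 0.5, 1500.0),
            ("glazing_panel",    60, 0.010, 0.6,  800.0),
            ("beam_column_conn", 24, 0.020, 0.4, 9000.0),
        ]

        def simulate_repair_cost(peak_drift):
            """One realization of the ABV-style chain: structural response ->
            assembly fragility -> damaged units -> probabilistic repair cost."""
            total = 0.0
            for name, qty, median, beta, unit_cost in assemblies:
                # Lognormal fragility: P(damage | peak interstory drift)
                p_damage = norm.cdf(np.log(peak_drift / median) / beta)
                n_damaged = rng.binomial(qty, p_damage)
                # Lognormal uncertainty on unit repair cost (construction estimating)
                total += n_damaged * unit_cost * rng.lognormal(0.0, 0.2)
            return total

        # Peak drift would come from a time-history analysis; 0.012 is a stand-in.
        costs = np.array([simulate_repair_cost(0.012) for _ in range(1000)])
        print(f"median repair cost ~ ${np.median(costs):,.0f}")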

    Benefit-Cost Analysis of FEMA Hazard Mitigation Grants

    Mitigation ameliorates the impact of natural hazards on communities by reducing loss of life and injury, property and environmental damage, and social and economic disruption. The potential to reduce these losses brings many benefits, but every mitigation activity has a cost that must be considered in our world of limited resources. In principle, benefit-cost analysis (BCA) can be used to assess a mitigation activity's expected net benefits (discounted future benefits less discounted costs), but in practice this often proves difficult. This paper reports on a study that refined BCA methodologies and applied them to a national statistical sample of FEMA mitigation activities over a ten-year period for earthquake, flood, and wind hazards. The results indicate that the overall benefit-cost ratio for FEMA mitigation grants is about 4 to 1, though the ratio varies according to hazard and mitigation type.
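
    A minimal sketch of the net-present-value arithmetic behind a benefit-cost ratio; this is not FEMA's or the study's methodology, and the grant figures, discount rate, and horizon are hypothetical:

        def present_value(annual_benefit, discount_rate, years):
            """Discounted sum of a constant annual benefit stream."""
            return sum(annual_benefit / (1.0 + discount_rate) ** t
                       for t in range(1, years + 1))

        def benefit_cost_ratio(annual_loss_avoided, mitigation_cost,
                               discount_rate=0.05, horizon_years=50):
            """BCR = discounted expected future losses avoided / up-front cost."""
            return present_value(annual_loss_avoided, discount_rate,
                                 horizon_years) / mitigation_cost

        # Hypothetical grant: a $1,000,000 retrofit expected to avoid $250,000 of
        # losses per year over a 50-year horizon at a 5% discount rate.
        print(round(benefit_cost_ratio(250_000, 1_000_000), 2))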

    The occupation of a box as a toy model for the seismic cycle of a fault

    We illustrate how a simple statistical model can describe the quasiperiodic occurrence of large earthquakes. The model idealizes the loading of elastic energy in a seismic fault by the stochastic filling of a box. The emptying of the box once it is full is analogous to the generation of a large earthquake, in which the fault relaxes after having been loaded to its failure threshold. The duration of the filling process is analogous to the seismic cycle, the time interval between two successive large earthquakes on a particular fault. The simplicity of the model enables us to derive the statistical distribution of its seismic cycle. We use this distribution to fit the series of earthquakes with magnitude around 6 that occurred at the Parkfield segment of the San Andreas fault in California. Using this fit, we estimate the probability of the next large earthquake at Parkfield and devise a simple forecasting strategy. Comment: Final version of the published paper, with an erratum and an unpublished appendix with some proofs.
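
    A minimal simulation sketch under one plausible reading of the filling rule (a "ball" of elastic energy lands in a uniformly random cell at each step and is lost if the cell is already occupied); the box size, the rule itself, and the forecasting numbers are assumptions, not taken from the paper:

        import numpy as np

        rng = np.random.default_rng(1)

        def one_cycle(n_cells):
            """One seismic cycle: balls land in uniformly random cells until the
            box is full, at which point the 'fault' relaxes and the box empties.
            Returns the cycle length in time steps."""
            occupied = np.zeros(n_cells, dtype=bool)
            t = 0
            while not occupied.all():
                t += 1
                occupied[rng.integers(n_cells)] = True
            return t

        cycles = np.array([one_cycle(20) for _ in range(50_000)])

        # Simple forecast: conditional probability that the next large event occurs
        # within the next 5 time units, given that `elapsed` units have already
        # passed without one (both numbers hypothetical).
        elapsed = 30
        waiting = cycles[cycles > elapsed]
        print((waiting <= elapsed + 5).mean())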

    Prediction of Large Events on a Dynamical Model of a Fault

    We present results for long-term and intermediate-term prediction algorithms applied to a simple mechanical model of a fault. We use long-term prediction methods, based for example on the distribution of repeat times between large events, to establish a benchmark for predictability in the model. In comparison, intermediate-term prediction techniques, analogous to the pattern recognition algorithms CN and M8 introduced and studied by Keilis-Borok et al., are more effective at predicting coming large events. We consider the implications of several different quality functions Q that can be used to optimize the algorithms with respect to features such as space, time, and magnitude windows, and find that our results are not overly sensitive to variations in these algorithm parameters. We also study the intrinsic uncertainties associated with seismicity catalogs of restricted lengths. Comment: 33 pages, plain.tex with special macros included.
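
    A toy sketch of the parameter-optimization loop such algorithms imply: a crude trailing-window alarm, one assumed form of the quality function Q, and a synthetic catalog; none of this reproduces the paper's CN/M8-style predictors:

        import numpy as np

        def toy_predictor(activity, window, threshold):
            """Declare an alarm whenever the count of moderate events in the
            trailing time window reaches the threshold (a crude stand-in for
            pattern-recognition features)."""
            counts = np.convolve(activity, np.ones(window), mode="full")[: len(activity)]
            return counts >= threshold

        def quality(alarm, large_event):
            """One possible quality function Q: fraction of time spent in alarm
            plus fraction of large events missed (lower is better). The paper's
            Q functions may be defined differently."""
            frac_alarm = alarm.mean()
            frac_missed = 1.0 - (alarm & large_event).sum() / max(large_event.sum(), 1)
            return frac_alarm + frac_missed

        # Hypothetical synthetic catalog: moderate-event counts per time step and
        # a boolean series marking rare large events.
        rng = np.random.default_rng(2)
        activity = rng.poisson(0.3, size=2000)
        large_event = rng.random(2000) < 0.005

        # Scan time-window and threshold parameters, analogous to optimizing the
        # algorithms over space/time/magnitude windows.
        best = min(((w, th, quality(toy_predictor(activity, w, th), large_event))
                    for w in (10, 20, 50) for th in (3, 5, 8)),
                   key=lambda item: item[2])
        print(best)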

    Embedding damage detection algorithms in a wireless sensing unit for operational power efficiency

    A low-cost wireless sensing unit is designed and fabricated for deployment as the building block of wireless structural health monitoring systems. The finite operational lives of portable power supplies, such as batteries, necessitate optimizing the wireless sensing unit design for overall energy efficiency. This conflicts with the need for wireless radios with far-reaching communication ranges, which require significant amounts of power. As a result, a penalty is incurred by transmitting raw time-history records using scarce system resources such as battery power and bandwidth. Alternatively, a computational core that can accommodate local processing of data is designed and implemented in the wireless sensing unit. The role of the computational core is to interrogate collected raw time-history data and to transmit the analysis results, rather than the time-history records themselves, over the wireless channel. To illustrate the ability of the computational core to execute such embedded engineering analyses, a two-tiered time-series damage detection algorithm is implemented as an example. Using a lumped-mass laboratory structure, local execution of the embedded damage detection method is shown to save energy by avoiding utilization of the wireless channel to transmit raw time-history data.
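
    A sketch of the kind of on-board feature extraction such a computational core enables, using autoregressive (AR) coefficients as a damage-sensitive feature; this is a generic illustration, not the paper's specific two-tiered algorithm, and all signals and parameters are invented:

        import numpy as np

        def ar_coefficients(signal, order=8):
            """Least-squares AR(p) fit: each sample regressed on the previous p
            samples. The coefficient vector is a compact damage-sensitive feature
            a node could transmit instead of the raw time history."""
            x = np.asarray(signal, dtype=float)
            X = np.column_stack([x[order - k - 1: len(x) - k - 1] for k in range(order)])
            y = x[order:]
            coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
            return coeffs

        def damage_metric(baseline, candidate, order=8):
            """Distance between AR feature vectors of a healthy baseline record
            and a new record; larger values suggest a change in the structure."""
            return float(np.linalg.norm(ar_coefficients(candidate, order)
                                        - ar_coefficients(baseline, order)))

        # Hypothetical acceleration records from a lumped-mass laboratory structure;
        # the 'damaged' record has a lower natural frequency (softened structure).
        rng = np.random.default_rng(3)
        t = np.linspace(0.0, 10.0, 2000)
        healthy = np.sin(2 * np.pi * 3.0 * t) + 0.1 * rng.standard_normal(t.size)
        damaged = np.sin(2 * np.pi * 2.6 * t) + 0.1 * rng.standard_normal(t.size)
        print(damage_metric(healthy, damaged))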