696 research outputs found

    Induction of cellular immunity by immunization with novel hybrid peptides complexed to heat shock protein 70


    Big-Bang Nucleosynthesis with Unstable Gravitino and Upper Bound on the Reheating Temperature

    We study the effects of an unstable gravitino on big-bang nucleosynthesis. If the gravitino mass is smaller than \sim 10 TeV, primordial gravitinos produced after inflation are likely to decay after big-bang nucleosynthesis starts, and the light-element abundances may be significantly affected by the hadro- and photo-dissociation processes as well as by the p ↔ n conversion process. We calculate the light-element abundances and derive an upper bound on the reheating temperature after inflation. In our analysis, we calculate the decay parameters of the gravitino (i.e., its lifetime and branching ratios) in detail. In addition, we perform a systematic study of the hadron spectrum produced by gravitino decay, taking account of all the hadrons produced by the decay products of the gravitino (including the daughter superparticles). We discuss the model dependence of the upper bound on the reheating temperature.
    Comment: 32 pages, 11 figures
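The decay-after-nucleosynthesis claim in this abstract can be checked at the order-of-magnitude level. A minimal sketch, keeping only the single gravitino → photon + photino channel with Γ = m³/(32π M_P²); this single-channel form and the numerical constants are assumptions for illustration, whereas the paper computes the full set of decay channels:

```python
import math

HBAR_GEV_S = 6.582e-25   # hbar in GeV * s
M_PLANCK = 2.435e18      # reduced Planck mass in GeV

def gravitino_lifetime_s(m32_gev):
    """Order-of-magnitude gravitino lifetime in seconds, keeping only the
    gravitino -> photon + photino channel: Gamma = m^3 / (32 pi M_P^2).
    The paper's full calculation includes all decay channels."""
    gamma_gev = m32_gev ** 3 / (32.0 * math.pi * M_PLANCK ** 2)
    return HBAR_GEV_S / gamma_gev
```

For m_{3/2} = 100 GeV this gives a lifetime of order 10^8 s, and even at m_{3/2} = 10 TeV it is still of order 10^2 s, well after nucleosynthesis begins at t ~ 1 s, consistent with the abstract's statement.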

    Impact of Seismic Risk on Lifetime Property Values

    This report presents a methodology for establishing the uncertain net asset value, NAV, of a real-estate investment opportunity considering both market risk and seismic risk for the property. It also presents a decision-making procedure to assist in making real-estate investment choices under conditions of uncertainty and risk aversion. It is shown that market risk, as measured by the coefficient of variation of NAV, is at least 0.2 and may exceed 1.0. In a situation of such high uncertainty, where potential gains and losses are large relative to a decision-maker's risk tolerance, it is appropriate to adopt a decision-analysis approach to real-estate investment decision-making. A simple equation for doing so is presented. The decision-analysis approach uses the certainty equivalent, CE, as opposed to NAV, as the basis for investment decision-making. That is, when faced with multiple investment alternatives, one should choose the alternative that maximizes CE. It is shown that CE is less than the expected value of NAV by an amount proportional to the variance of NAV and the inverse of the decision-maker's risk tolerance, ρ. The procedure for establishing NAV and CE is illustrated in parallel demonstrations by the CUREE and Kajima research teams. The CUREE demonstration is performed using a real 1960s-era hotel building in Van Nuys, California. The building, a 7-story non-ductile reinforced-concrete moment-frame building, is analyzed using the assembly-based vulnerability (ABV) method, developed in Phase III of the CUREE-Kajima Joint Research Program. The building is analyzed three ways: in its condition prior to the 1994 Northridge Earthquake, with a hypothetical shearwall upgrade, and with earthquake insurance.
    This is the first application of ABV to a real building, and the first time ABV has incorporated stochastic structural analyses that consider uncertainties in the mass, damping, and force-deformation behavior of the structure, along with uncertainties in ground motion, component damageability, and repair costs. New fragility functions are developed for the reinforced-concrete flexural members using published laboratory test data, and new unit repair costs for these components are developed by a professional construction cost estimator. Four investment alternatives are considered: do not buy; buy; buy and retrofit; and buy and insure. It is found that the best alternative for most reasonable values of discount rate, risk tolerance, and market risk is to buy and leave the building as-is. However, risk tolerance and market risk (variability of income) both materially affect the decision. That is, for certain ranges of each parameter, the best investment alternative changes. This indicates that expected-value decision-making is inappropriate for some decision-makers and investment opportunities. It is also found that the majority of the economic seismic risk results from shaking of S_a < 0.3g, i.e., shaking with return periods on the order of 50 to 100 yr that causes primarily architectural damage, rather than from the strong, rare events of which common probable maximum loss (PML) measurements are indicative. The Kajima demonstration is performed using three Tokyo buildings. A nine-story steel-reinforced-concrete building built in 1961 is analyzed as two designs: as-is, and with a steel-braced-frame structural upgrade. The third building is a 29-story 1999 steel-frame structure. The three buildings are intended to meet collapse-prevention, life-safety, and operational performance levels, respectively, in shaking with 10% exceedance probability in 50 years. The buildings are assessed using levels 2 and 3 of Kajima's three-level analysis methodology.
    These are semi-assembly-based approaches, which subdivide a building into categories of components, estimate the loss of these component categories for given ground motions, and combine the losses for the entire building. The two methods are used to estimate annualized losses and to create curves that relate loss to exceedance probability. The results are incorporated in the input to a sophisticated program developed by the Kajima Corporation, called Kajima D, which forecasts cash flows for office, retail, and residential projects for purposes of property screening, due diligence, negotiation, financial structuring, and strategic planning. The result is an estimate of NAV for each building. A parametric study of CE for each building is presented, along with a simplified model for calculating CE as a function of the mean NAV and the coefficient of variation of NAV. The equation agrees with that developed in parallel by the CUREE team. Both the CUREE and Kajima teams collaborated with a number of real-estate investors to understand their seismic risk-management practices, and to formulate and assess the viability of the proposed decision-making methodologies. Investors were interviewed to elicit their risk tolerance, ρ, using scripts developed and presented here in English and Japanese. Results of 10 such interviews are presented, which show that a strong relationship exists between a decision-maker's annual revenue, R, and his or her risk tolerance: ρ ≈ 0.0075R^1.34. The interviews show that earthquake risk is a marginal consideration in current investment practice. Probable maximum loss (PML) is the only earthquake-risk parameter these investors consider, and they typically do not use seismic risk at all in their financial analysis of an investment opportunity.
    For competitive reasons, a public investor interviewed here would not wish to account for seismic risk in his financial analysis unless rating agencies required him to do so or such consideration otherwise became standard practice. However, in cases where seismic risk is high enough to significantly reduce return, a private investor expressed the desire to account for seismic risk via expected annualized loss (EAL) if it were inexpensive to do so, i.e., if the cost of calculating the EAL were not substantially greater than that of PML alone. The study results point to a number of interesting opportunities for future research, namely: improve the market-risk stochastic model, including comparison of actual long-term income with initial income projections; improve the risk-attitude interview; account for uncertainties in repair method and in the relationship between repair cost and loss; relate the damage state of structural elements with points on the force-deformation relationship; examine simpler dynamic analysis as a means to estimate vulnerability; examine the relationship between simplified engineering demand parameters and performance; enhance category-based vulnerability functions by compiling a library of building-specific ones; and work with lenders and real-estate industry analysts to determine the conditions under which seismic risk should be reflected in investors' financial analyses.
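The decision rule described in this abstract (choose the alternative that maximizes CE, with CE below expected NAV by a term proportional to Var[NAV] and 1/ρ, and ρ fitted from investor interviews as ρ ≈ 0.0075R^1.34) can be sketched as follows. The factor of 1/2 is the standard exponential-utility approximation and is an assumption here, as are the illustrative alternatives in the usage note; the report's exact constant may differ:

```python
def risk_tolerance(annual_revenue):
    """Empirical fit reported from the investor interviews:
    rho ≈ 0.0075 * R^1.34 (rho in the same monetary units as R)."""
    return 0.0075 * annual_revenue ** 1.34

def certainty_equivalent(mean_nav, var_nav, rho):
    """CE lies below E[NAV] by a term proportional to Var[NAV] and 1/rho.
    The 1/2 factor is the usual exponential-utility approximation
    (an assumption, not stated explicitly in the abstract)."""
    return mean_nav - var_nav / (2.0 * rho)

def best_alternative(alternatives, rho):
    """Pick the (name, mean_NAV, var_NAV) alternative with the highest CE."""
    return max(alternatives,
               key=lambda a: certainty_equivalent(a[1], a[2], rho))[0]
```

Because CE depends on ρ, the ranking of alternatives can flip as risk tolerance or income variability changes, which is exactly why the report finds expected-value decision-making inappropriate for some decision-makers and investment opportunities.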

    Vacuum Stability Bound on Extended GMSB Models

    Extensions of GMSB models are explored to explain the recent reports of a Higgs boson mass around 124-126 GeV. Some models predict a large μ term, which can spoil the vacuum stability of the universe. We study two GMSB extensions: i) a model with a large trilinear coupling of the top squark, and ii) one with extra vector-like matter. In both models, the vacuum stability condition provides upper bounds on the gluino mass when combined with the muon g-2 constraint. The whole parameter region is expected to be covered by the LHC at sqrt{s} = 14 TeV. The analysis is also applied to mSUGRA models with the vector-like matter.
    Comment: 22 pages, 4 figures

    Surface Potential of MIBC at Air/Water Interface: a Molecular Dynamics Study

    The interfacial behavior of alcohols differs significantly from that in the bulk because of their amphiphilic structure. Such behavior can dramatically change the interfacial properties within the nano-scale interfacial layer and has significant applications in industrial processes such as mineral flotation. In this study, the adsorption of MIBC (methyl isobutyl carbinol), a popular frother, was investigated by molecular dynamics. The surface potential was obtained at different surface concentrations and compared to experimental data. The simulation results compared well with theoretical data using a single adjustable parameter. It was found that disordered water molecules contribute more to the surface potential than the MIBC molecules. The study demonstrates the application of MD in investigating the efficiency of frother systems.
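Extracting a surface potential from an MD trajectory is commonly done by double integration of the simulated charge-density profile across the interface (the one-dimensional Poisson equation). A minimal sketch of that standard analysis, not necessarily the exact procedure of this study; the reduced units (ε₀ = 1) and the test density profile are illustrative assumptions:

```python
import numpy as np

def surface_potential(z, rho, eps0=1.0):
    """Potential profile phi(z) from a 1-D charge-density profile rho(z),
    via double integration of Poisson's equation:
        phi(z) - phi(z0) = -(1/eps0) * int dz' int dz'' rho(z'')
    Uses simple cumulative (rectangle-rule) integration on a uniform grid."""
    dz = z[1] - z[0]
    efield = np.cumsum(rho) * dz / eps0   # E(z) from Gauss's law
    phi = -np.cumsum(efield) * dz         # phi(z) = -integral of E dz
    return phi - phi[0]                   # reference the potential to z[0]
```

Applied to an oriented dipolar layer (excess positive charge on one side of the interface, negative on the other), this yields the potential jump across the interface; per the abstract, it is the disordered water, more than the MIBC itself, that dominates this jump.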

    Muon g-2, Dark Matter Detection and Accelerator Physics

    We examine the recently observed deviation of the muon g-2 from the Standard Model prediction within the framework of gravity-mediated SUGRA models with R-parity invariance. Universal soft breaking (mSUGRA) models, and models with non-universal Higgs and third-generation squark/slepton masses at M_G, are considered. All relic density constraints from stau-neutralino co-annihilation and large \tan\beta NLO corrections for b \to s\gamma decay are included, and we consider two possibilities for the light Higgs: m_h > 114 GeV and m_h > 120 GeV. The combined m_h, b \to s\gamma and a_{\mu} bounds give rise to lower bounds on \tan\beta and m_{1/2}, while the lower bound on a_{\mu} gives rise to an upper bound on m_{1/2}. These bounds are sensitive to A_0, e.g. for m_h > 114 GeV, the 95% C.L. bound is \tan\beta > 7(5) for A_0 = 0(-4m_{1/2}), and for m_h > 120 GeV, \tan\beta > 15(10). The positive sign of the a_{\mu} deviation implies \mu > 0, eliminating the extreme cancellations in the dark-matter neutralino-proton detection cross section, so that almost all of the SUSY parameter space should be accessible to future planned detectors. Most of the allowed parts of parameter space occur in the co-annihilation region, where m_0 is strongly correlated with m_{1/2}. The lower bound on a_{\mu} then greatly reduces the allowed parameter space. Thus, using 90% C.L. bounds on a_{\mu}, we find for A_0 = 0 that \tan\beta \geq 10, and for \tan\beta \leq 40 that m_{1/2} = (290 - 550) GeV and m_0 = (70 - 300) GeV. The tri-lepton signal and other SUSY signals would then be beyond the Tevatron Run II (except for the light Higgs); only the \tilde{\tau}_1, h, and (for part of the parameter space) the \tilde{e}_1 will be accessible to a 500 GeV NLC, while the LHC would be able to see the full SUSY mass spectrum.
    Comment: 10 pages, LaTeX, 6 figures

    Leptogenesis from \widetilde{N}-dominated early universe

    We investigate in detail leptogenesis by the decay of a coherent right-handed sneutrino \widetilde{N} that has dominated the energy density of the early universe, as originally proposed by HM and TY. Once the \widetilde{N}-dominated universe is realized, the amount of generated lepton asymmetry (and hence baryon asymmetry) is determined only by the properties of the right-handed neutrino, regardless of the history before it dominates the universe. Moreover, thanks to the entropy production by the decay of the right-handed sneutrino, thermally produced relics are sufficiently diluted. In particular, the cosmological gravitino problem can be avoided even when the reheating temperature of inflation is higher than 10^{10} GeV, in a wide range of the gravitino mass, m_{3/2} \simeq 10 MeV - 100 TeV. If the gravitino mass is in the range m_{3/2} \simeq 10 MeV - 1 GeV, as in some gauge-mediated supersymmetry-breaking models, the dark matter in our universe can be dominantly composed of gravitinos. Quantum fluctuation of \widetilde{N} during inflation causes an isocurvature fluctuation which may be detectable in the future.
    Comment: 16 pages