422 research outputs found
Comparative pulse shape discrimination study for Ca(Br, I) scintillators using machine learning and conventional methods
In particle physics experiments, pulse shape discrimination (PSD) is a
powerful tool for eliminating the major background from signals. However, the
analysis methods have been a bottleneck to improving PSD performance. In this
study, two machine learning methods -- multilayer perceptron and convolutional
neural network -- were applied to PSD, and their PSD performance was compared
with that of conventional analysis methods. Three calcium-based halide
scintillators were grown using the vertical Bridgman--Stockbarger method and
used for the evaluation of PSD. Compared with conventional analysis methods,
the machine learning methods achieved better PSD performance for all the
scintillators. For scintillators with low light output, the machine learning
methods were more effective for PSD accuracy than the conventional methods in
the low-energy region.
Comment: 9 pages, 9 figures
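For context on what a "conventional analysis method" typically looks like in this setting: PSD for scintillators is commonly implemented as a charge-comparison (tail-to-total charge) ratio. The sketch below assumes that method; the abstract does not specify which conventional methods the paper used, and the decay constants and integration window here are invented for illustration.

```python
import math

# Hedged sketch of a conventional charge-comparison PSD parameter
# (tail charge / total charge) on a baseline-subtracted digitized pulse.
# The tail window and pulse decay constants are invented, not from the paper.

def psd_ratio(waveform, tail_start=20):
    """Fraction of the pulse's total charge arriving after sample tail_start."""
    total = sum(waveform)
    return sum(waveform[tail_start:]) / total

# Two synthetic pulses: a single fast decay vs a fast + slow mixture,
# mimicking the different scintillation time profiles that PSD exploits.
fast = [math.exp(-t / 10.0) for t in range(200)]
mixed = [0.5 * math.exp(-t / 10.0) + 0.5 * math.exp(-t / 80.0) for t in range(200)]

print(psd_ratio(fast) < psd_ratio(mixed))  # the slow component enlarges the tail fraction
```

A machine learning discriminator, such as the multilayer perceptron or convolutional network the abstract compares, would instead consume the full digitized waveform rather than this single engineered ratio.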
Impact of Seismic Risk on Lifetime Property Values
This report presents a methodology for establishing the uncertain net asset value, NAV, of a real-estate investment opportunity considering both market risk and seismic risk for the property. It also presents a decision-making procedure to assist in making real-estate investment choices under conditions of uncertainty and risk-aversion. It is shown that market risk, as measured by the coefficient of variation of NAV, is at least 0.2 and may exceed 1.0. In a situation of such high uncertainty, where potential gains and losses are large relative to a decision-maker's risk tolerance, it is appropriate to adopt a decision-analysis approach to real-estate investment decision-making. A simple equation for doing so is presented. The decision-analysis approach uses the certainty equivalent, CE, as opposed to NAV as the basis for investment decision-making. That is, when faced with multiple investment alternatives, one should choose the alternative that maximizes CE. It is shown that CE is less than the expected value of NAV by an amount proportional to the variance of NAV and the inverse of the decision-maker's risk tolerance, ρ.
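The decision rule summarized above can be made concrete with a small numerical sketch. This is an illustration only, not the report's code: it assumes the standard exponential-utility form CE = E[NAV] - Var(NAV)/(2ρ), and all alternatives, NAV statistics, and the risk tolerance are invented.

```python
# Illustrative sketch of the certainty-equivalent decision rule described above.
# Assumed form (standard exponential-utility result, consistent with the text):
#   CE = E[NAV] - Var(NAV) / (2 * rho)
# All numbers below are hypothetical, not taken from the report.

def certainty_equivalent(mean_nav, var_nav, rho):
    """Expected NAV penalized by its variance, scaled by risk tolerance rho."""
    return mean_nav - var_nav / (2.0 * rho)

# (name, E[NAV] in $M, coefficient of variation of NAV) -- invented values
alternatives = [
    ("do not buy", 0.0, 0.0),
    ("buy as-is", 5.0, 0.8),
    ("buy and retrofit", 4.5, 0.5),
]
rho = 20.0  # assumed decision-maker's risk tolerance, $M

def ce_of(alt):
    name, mean_nav, cov = alt
    var_nav = (cov * mean_nav) ** 2  # Var = (coefficient of variation * mean)^2
    return certainty_equivalent(mean_nav, var_nav, rho)

best = max(alternatives, key=ce_of)
print(best[0])  # with these invented numbers, "buy as-is" maximizes CE
```

Note how sensitive the choice is to ρ: rerunning with ρ = 5 instead of 20 enlarges the variance penalty enough that "buy and retrofit" wins, echoing the report's finding that risk tolerance can flip the best alternative.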
The procedure for establishing NAV and CE is illustrated in parallel demonstrations by CUREE and Kajima research teams. The CUREE demonstration is performed using a real 1960s-era hotel building in Van Nuys, California. The building, a 7-story non-ductile reinforced-concrete moment-frame building, is analyzed using the assembly-based vulnerability (ABV) method, developed in Phase III of the CUREE-Kajima Joint Research Program. The building is analyzed three ways: in its condition prior to the 1994 Northridge Earthquake, with a hypothetical shearwall upgrade, and with earthquake insurance. This is the first application of ABV to a real building, and the first time ABV has incorporated stochastic structural analyses that consider uncertainties in the mass, damping, and force-deformation behavior of the structure, along with uncertainties in ground motion, component damageability, and repair costs. New fragility functions are developed for the reinforced concrete flexural members using published laboratory test data, and new unit repair costs for these components are developed by a professional construction cost estimator. Four investment alternatives are considered: do not buy; buy; buy and retrofit; and buy and insure. It is found that the best alternative for most reasonable values of discount rate, risk tolerance, and market risk is to buy and leave the building as-is. However, risk tolerance and market risk (variability of income) both materially affect the decision. That is, for certain ranges of each parameter, the best investment alternative changes. This indicates that expected-value decision-making is inappropriate for some decision-makers and investment opportunities. 
It is also found that the majority of the economic seismic risk results from shaking of S_a < 0.3g, i.e., shaking with return periods on the order of 50 to 100 yr that causes primarily architectural damage, rather than from the strong, rare events of which common probable maximum loss (PML) measurements are indicative.
The Kajima demonstration is performed using three Tokyo buildings. A nine-story, steel-reinforced-concrete building built in 1961 is analyzed as two designs: as-is, and with a steel-braced-frame structural upgrade. The third building is a 29-story steel-frame structure built in 1999. The three buildings are intended to meet collapse-prevention, life-safety, and operational performance levels, respectively, in shaking with 10% exceedance probability in 50 years. The buildings are assessed using levels 2 and 3 of Kajima's three-level analysis methodology. These are semi-assembly based approaches, which subdivide a building into categories of components, estimate the loss of these component categories for given ground motions, and combine the losses for the entire building. The two methods are used to estimate annualized losses and to create curves that relate loss to exceedance probability. The results are incorporated in the input to a sophisticated program developed by the Kajima Corporation, called Kajima D, which forecasts cash flows for office, retail, and residential projects for purposes of property screening, due diligence, negotiation, financial structuring, and strategic planning. The result is an estimate of NAV for each building. A parametric study of CE for each building is presented, along with a simplified model for calculating CE as a function of mean NAV and coefficient of variation of NAV. The equation agrees with that developed in parallel by the CUREE team.
Both the CUREE and Kajima teams collaborated with a number of real-estate investors to understand their seismic risk-management practices, and to formulate and to assess the viability of the proposed decision-making methodologies. Investors were interviewed to elicit their risk tolerance, ρ, using scripts developed and presented here in English and Japanese. Results of 10 such interviews are presented, which show that a strong relationship exists between a decision-maker's annual revenue, R, and his or her risk tolerance, ρ ≈ 0.0075R^1.34. The interviews show that earthquake risk is a marginal consideration in current investment practice. Probable maximum loss (PML) is the only earthquake risk parameter these investors consider, and they typically do not use seismic risk at all in their financial analysis of an investment opportunity. For competitive reasons, a public investor interviewed here would not wish to account for seismic risk in his financial analysis unless rating agencies required him to do so or such consideration otherwise became standard practice. However, in cases where seismic risk is high enough to significantly reduce return, a private investor expressed the desire to account for seismic risk via expected annualized loss (EAL) if it were inexpensive to do so, i.e., if the cost of calculating the EAL were not substantially greater than that of PML alone.
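The interview-derived fit quoted above, ρ ≈ 0.0075R^1.34, is easy to evaluate directly. A minimal sketch, assuming R and ρ share the currency units used in the report; the sample revenues below are invented:

```python
# Empirical fit from the 10 investor interviews described above:
#   rho ≈ 0.0075 * R**1.34
# Units: whatever currency unit annual revenue R is expressed in.
# The sample revenues below are invented for illustration.

def risk_tolerance(annual_revenue):
    """Estimate risk tolerance rho from annual revenue R via the reported fit."""
    return 0.0075 * annual_revenue ** 1.34

for R in (10.0, 100.0, 1000.0):
    print(R, risk_tolerance(R))
```

Because the exponent exceeds 1, risk tolerance grows faster than revenue under this fit: a firm with 10 times the revenue is modeled as roughly 22 times more risk-tolerant.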
The study results point to a number of interesting opportunities for future research, namely: improve the market-risk stochastic model, including comparison of actual long-term income with initial income projections; improve the risk-attitude interview; account for uncertainties in repair method and in the relationship between repair cost and loss; relate the damage state of structural elements with points on the force-deformation relationship; examine simpler dynamic analysis as a means to estimate vulnerability; examine the relationship between simplified engineering demand parameters and performance; enhance category-based vulnerability functions by compiling a library of building-specific ones; and work with lenders and real-estate industry analysts to determine the conditions under which seismic risk should be reflected in investors' financial analyses
Detection of soluble interleukin-2 receptor and soluble intercellular adhesion molecule-1 in the effusion of otitis media with effusion
We measured sIL-2R, TNF-α and sICAM-1 in the sera and middle ear effusions (MEEs) of patients with otitis media with effusion (OME). Although there was no significant difference between the sIL-2R levels of the serous and mucoid MEEs, they were significantly higher than serum sIL-2R levels of OME patients and healthy controls. TNF-α levels of the mucoid MEEs were significantly higher than those of the serous type. However, TNF-α was rarely detected in the sera of OME patients or healthy controls. We observed significant differences between the serous and mucoid MEEs with respect to their sICAM-1 levels, which were also higher than serum sICAM-1 levels of OME patients and healthy controls. Our findings suggested that IL-2, TNF-α and ICAM-1 could be significantly involved in the pathogenesis of OME through the cytokine network
Run Scenarios for the Linear Collider
Scenarios are developed for runs at a Linear Collider, in the case that there
is a rich program of new physics.
Comment: 12 pages, 10 tables, LaTeX; Snowmass 2001 plenary report
CP Violation Beyond the Standard Model and Tau Pair Production in e^+e^- Collisions
We show that the CP-violating dipole form factors of the tau lepton can be of
the order of in units of the length scale set by the inverse Z
boson mass. We propose a few observables which are sensitive to these form
factors at LEP2 and higher e^+e^- collision energies.
Comment: 11 pages, LaTeX + 2 figures
Bounds on second generation scalar leptoquarks from the anomalous magnetic moment of the muon
We calculate the contribution of second generation scalar leptoquarks to the
anomalous magnetic moment of the muon (AMMM). In the near future, E-821 at
Brookhaven will reduce the experimental error on this parameter by a factor of
20 relative to its current value. With this new experimental limit we obtain a
lower limit on the mass of the second generation scalar leptoquark when its
Yukawa-like coupling to quarks and leptons is taken to be of the order of the
electroweak coupling.
Comment: 5 pages, plain TeX, 1 figure (not included; available upon request)
Sneutrino Mass Measurements at e+e- Linear Colliders
It is generally accepted that experiments at an e+e- linear collider will be
able to extract the masses of the selectron as well as the associated
sneutrinos with a precision of ~ 1% by determining the kinematic end points of
the energy spectrum of daughter electrons produced in their two body decays to
a lighter neutralino or chargino. Recently, it has been suggested that by
studying the energy dependence of the cross section near the production
threshold, this precision can be improved by an order of magnitude, assuming an
integrated luminosity of 100 fb^-1. It is further suggested that these
threshold scans also allow the masses of even the heavier second and third
generation sleptons and sneutrinos to be determined to better than 0.5%. We
re-examine the prospects for determining sneutrino masses. We find that the
cross sections for the second and third generation sneutrinos are too small for
a threshold scan to be useful. An additional complication arises because the
cross section for a sneutrino pair to decay into any visible final state(s)
necessarily depends on an unknown branching fraction, so that the overall
normalization is unknown. This reduces the precision with which the sneutrino
mass can be extracted. We propose a different strategy to optimize the
extraction of m(\tilde{\nu}_\mu) and m(\tilde{\nu}_\tau) via the energy
dependence of the cross section. We find that even with an integrated
luminosity of 500 fb^-1, these can be determined with a precision no better
than several percent at the 90% CL. We also examine the measurement of
m(\tilde{\nu}_e) and show that it can be extracted with a precision of about
0.5% (0.2%) with an integrated luminosity of 120 fb^-1 (500 fb^-1).
Comment: RevTeX, 46 pages, 15 eps figures
Probing Slepton Mass Non-Universality at e^+e^- Linear Colliders
There are many models with non-universal soft SUSY breaking sfermion mass
parameters at the grand unification scale. Even in the mSUGRA model scalar mass
unification might occur at a scale closer to M_Planck, and renormalization
effects would cause a mass splitting at M_GUT. We identify an experimentally
measurable quantity Delta that correlates strongly with delta m^2 =
m^2_{selectron_R}(M_GUT) - m^2_{selectron_L}(M_GUT), and which can be measured
at electron-positron colliders provided both selectrons and the chargino are
kinematically accessible. We show that if these sparticle masses can be
measured with a precision of 1% at a 500 GeV linear collider, the resulting
precision in the determination of Delta may allow experiments to distinguish
scalar mass unification at the GUT scale from the corresponding
unification at Q ~ M_Planck. Experimental determination of Delta would also
provide a distinction between the mSUGRA model and the recently proposed
gaugino-mediation model. Moreover, a measurement of Delta (or a related
quantity Delta') would allow for a direct determination of delta m^2.
Comment: 15 pages, RevTeX, 4 postscript figures
Testing Color Evaporation in Photon-Photon Production of J/Psi at CERN LEP II
The DELPHI Collaboration has recently reported the measurement of J/Psi
production in photon-photon collisions at LEP II. These newly available data
provide an additional proof of the importance of colored c bar{c} pairs for the
production of charmonium because these data can only be explained by
considering resolved photon processes. We show here that the inclusion of color
octet contributions to the J/Psi production in the framework of the color
evaporation model is able to reproduce these data. In particular, the
transverse-momentum distribution of the J/Psi mesons is well described by this
model.
Comment: 10 pages, 5 figures, RevTeX
Tau-Sleptons and Tau-Sneutrino in the MSSM with Complex Parameters
We present a phenomenological study of tau-sleptons stau_1,2 and
tau-sneutrino in the Minimal Supersymmetric Standard Model with complex
parameters A_tau, mu and M_1. We analyse production and decays of stau_1,2 and
tau-sneutrino at a future e^+ e^- collider. We present numerical predictions
for the important decay rates, paying particular attention to their dependence
on the complex parameters. The branching ratios of the fermionic decays of
stau_1 and tau-sneutrino show a significant phase dependence for tan(beta) <
10. For tan(beta) > 10 the branching ratios for the stau_2 decays into Higgs
bosons depend very sensitively on the phases. We show how information on the
phase phi(A_tau) and the other fundamental stau parameters can be obtained from
measurements of the stau masses, polarized cross sections and bosonic and
fermionic decay branching ratios, for small and large tan(beta) values. We
estimate the expected errors for these parameters. Given favorable conditions,
the error of A_tau is about 10% to 20%, while the errors of the remaining stau
parameters are in the range of approximately 1% to 3%. We also show that the
induced electric dipole moment of the tau-lepton is well below the current
experimental limit.
Comment: LaTeX, 25 pages, 11 figures (included); v2: extended discussion on error determination; version to appear in Phys. Rev.
- …