
    E2/M1 ratio from the Mainz $p(\vec\gamma, p)\pi^0$ data

    We suggest that the error quoted in the Mainz determination of the E2/M1 ratio (at the resonance energy) should be enlarged. A term dropped in expressions used by this group could be significant.

    Self-similar time-dependent shock structures

    Diffusive shock acceleration as an astrophysical mechanism for accelerating charged particles has the advantage of being highly efficient. This means, however, that the theory is of necessity nonlinear; the reaction of the accelerated particles on the shock structure and the acceleration process must be self-consistently included in any attempt to develop a complete theory of diffusive shock acceleration. Considerable effort has been invested in attempting, at least partially, to do this, and it has become clear that in general either the maximum particle energy must be restricted by introducing additional loss processes into the problem or the acceleration must be treated as a time-dependent problem (Drury, 1984). It is concluded that stationary modified shock structures can only exist for strong shocks if additional loss processes limit the maximum energy a particle can attain. This is certainly possible, and if it occurs the energy loss from the shock will lead to much greater shock compressions. It is, however, equally possible that no such processes exist, and we must then ask what sort of nonstationary shock structure develops. The same argument which excludes stationary structures also rules out periodic solutions and indeed any solution where the width of the shock remains bounded. It follows that the width of the shock must increase secularly with time, and it is natural to examine the possibility of self-similar time-dependent solutions.
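    As a hedged illustration of what a self-similar time-dependent solution means here (a generic scaling ansatz of ours, not the paper's specific solution):

    \[
      f(x, t) = g(\xi), \qquad \xi = \frac{x}{t^{\alpha}}, \qquad \alpha > 0,
    \]

    so that the shock width grows as $t^{\alpha}$, consistent with a secularly broadening shock, and substituting the ansatz reduces the time-dependent transport problem to one in the single similarity variable $\xi$.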

    Accounting for Seismic Risk in Financial Analysis of Property Investment

    A methodology is presented for making property investment decisions using loss analysis and the principles of decision analysis. It proposes that the investor choose among competing investment alternatives on the basis of the certainty equivalent of their net asset value, which depends on the uncertain discounted future net income, the uncertain discounted future earthquake losses, the initial equity, and the investor’s risk tolerance. The earthquake losses are modelled using a seismic vulnerability function, the site seismic hazard function, and the assumption that strong shaking at a site follows a Poisson process. A building-specific vulnerability approach, called assembly-based vulnerability (ABV), is used. ABV involves a simulation approach that includes dynamic structural analyses and damage analyses using fragility functions and probability distributions on unit repair costs and downtimes for all vulnerable structural and nonstructural components in a building. The methodology is demonstrated using results from a seven-storey reinforced-concrete hotel in Los Angeles.
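    A minimal numerical sketch of this decision rule (Python; all parameter values and the exponential-utility form of the certainty equivalent are our own illustrative assumptions, and the ABV vulnerability and site hazard models are not reproduced):

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative parameters (assumptions, not values from the paper)
    equity    = 2.0e6   # initial equity, $
    income_pv = 5.0e6   # discounted future net income, $ (certain here for simplicity)
    rate      = 0.05    # annual discount rate
    horizon   = 30      # holding period, years
    lam       = 0.02    # Poisson rate of damaging shaking, events/year
    loss_mu, loss_sig = np.log(4.0e5), 0.8  # lognormal loss per event, $
    rho       = 1.0e6   # investor risk tolerance, $

    def simulate_nav(n_sims=50_000):
        """Net asset value = PV(income) - PV(earthquake losses) - equity (simplified)."""
        nav = np.empty(n_sims)
        for i in range(n_sims):
            n_events = rng.poisson(lam * horizon)               # Poisson occurrence
            times = rng.uniform(0.0, horizon, n_events)         # event times, years
            losses = rng.lognormal(loss_mu, loss_sig, n_events) # loss per event, $
            nav[i] = income_pv - np.sum(losses * np.exp(-rate * times)) - equity
        return nav

    nav = simulate_nav()
    # Certainty equivalent under an assumed exponential utility u(x) = -exp(-x/rho):
    ce = -rho * np.log(np.mean(np.exp(-nav / rho)))
    print(f"E[NAV] = {nav.mean():,.0f}   CE = {ce:,.0f}")

    The certainty equivalent falls below the expected NAV by a risk discount that grows as the risk tolerance rho shrinks, which is what lets the rule rank alternatives for investors with different risk attitudes.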

    Estimator Selection: End-Performance Metric Aspects

    Recently, a framework for application-oriented optimal experiment design has been introduced. In this context, the distance of the estimated system from the true one is measured in terms of a particular end-performance metric. This treatment leads to unknown-system estimates superior to those of classical experiment designs, which are based on the usual pointwise functional distances of the estimated system from the true one. Within this new framework, the system estimator is separated from the experiment design by choosing and fixing the estimation method to either a maximum likelihood (ML) approach or a Bayesian estimator such as the minimum mean square error (MMSE) estimator. Since the MMSE estimator delivers a system estimate with lower mean square error (MSE) than the ML estimator for finite-length experiments, it is usually considered the best choice in practice in signal processing and control applications. Within the application-oriented framework, a related meaningful question is: are there end-performance metrics for which the ML estimator outperforms the MMSE estimator when the experiment is of finite length? In this paper, we answer this question affirmatively based on a simple linear Gaussian regression example.
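    A minimal sketch of the kind of comparison meant here (our own toy scalar setup, not the authors' example): for y_t = theta * x_t + e_t with a Gaussian prior on theta, ML is least squares and the MMSE estimate is the posterior mean, and different metrics can rank them differently:

    import numpy as np

    rng = np.random.default_rng(1)

    N = 10            # finite experiment length (ML and MMSE differ here)
    sigma_e = 1.0     # noise standard deviation
    sigma_th = 1.0    # prior standard deviation of theta (zero prior mean)
    n_mc = 200_000    # Monte Carlo repetitions

    theta = rng.normal(0.0, sigma_th, n_mc)             # true parameters ~ prior
    x = rng.normal(0.0, 1.0, (n_mc, N))                 # regressors
    y = theta[:, None] * x + rng.normal(0.0, sigma_e, (n_mc, N))

    sxx = np.sum(x * x, axis=1)
    sxy = np.sum(x * y, axis=1)

    theta_ml = sxy / sxx                                 # least squares (= ML)
    theta_mmse = sxy / (sxx + sigma_e**2 / sigma_th**2)  # posterior mean

    # Classical metric: parameter MSE. The MMSE estimate minimizes Bayes MSE
    # by construction, so it comes out lower here.
    print("MSE, ML  :", np.mean((theta_ml - theta) ** 2))
    print("MSE, MMSE:", np.mean((theta_mmse - theta) ** 2))

    # One possible asymmetric end-performance metric (our assumption, standing in
    # for an application-specific cost): underestimating |theta| is penalized more
    # heavily. Rankings under such metrics need not match the MSE ranking.
    def end_cost(est, true, w=10.0):
        under = np.abs(est) < np.abs(true)
        return np.mean(np.where(under, w, 1.0) * (est - true) ** 2)

    print("end-cost, ML  :", end_cost(theta_ml, theta))
    print("end-cost, MMSE:", end_cost(theta_mmse, theta))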

    On the convergence rate of distributed gradient methods for finite-sum optimization under communication delays

    Motivated by applications in machine learning and statistics, we study distributed optimization problems over a network of processors, where the goal is to optimize a global objective composed of a sum of local functions. In these problems, due to the large scale of the data sets, the data and computation must be distributed over the processors, resulting in the need for distributed algorithms. In this paper, we consider a popular distributed gradient-based consensus algorithm, which requires only local computation and communication. An important problem in this area is to analyze the convergence rate of such algorithms in the presence of the communication delays that are inevitable in distributed systems. We prove the convergence of the gradient-based consensus algorithm in the presence of uniform, but possibly arbitrarily large, communication delays between the processors. Moreover, we obtain an upper bound on the rate of convergence of the algorithm as a function of the network size, the topology, and the inter-processor communication delays.
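    As a hedged illustration of such an algorithm (a simplified variant of our own: scalar decision variable, quadratic local functions, a fixed ring topology, and a uniform delay tau applied to all communicated iterates; not the paper's exact update):

    import numpy as np

    rng = np.random.default_rng(2)

    n, tau, T, step = 5, 3, 2000, 0.02   # processors, delay, iterations, step size

    # Local objectives f_i(x) = 0.5 * a_i * (x - b_i)^2; the minimizer of their
    # sum is the a-weighted mean of the b_i.
    a = rng.uniform(0.5, 2.0, n)
    b = rng.uniform(-1.0, 1.0, n)
    x_star = np.sum(a * b) / np.sum(a)

    # Doubly stochastic averaging weights on a ring network.
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = 0.25
        W[i, (i + 1) % n] = 0.25

    hist = [np.zeros(n)] * tau    # communicated iterates arrive tau steps late
    x = np.zeros(n)
    for t in range(T):
        x_delayed = hist[0]                        # values from tau iterations ago
        x = W @ x_delayed - step * a * (x - b)     # delayed averaging + local gradient
        hist = hist[1:] + [x.copy()]

    # With a constant step size the iterates settle in a neighborhood of x_star:
    print("max_i |x_i - x_star| =", np.max(np.abs(x - x_star)))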

    Dosimetry for radiobiological studies of the human hematopoietic system

    A system for estimating individual bone marrow doses in therapeutic radiation exposures of leukemia patients was studied. These measurements are used to make dose-response correlations and to study the effect of dose protraction on peripheral blood cell levels. Three irradiators designed to produce a uniform field of high-energy gamma radiation for total-body exposures of large animals and man are also used for radiobiological studies.

    Cost-Effectiveness of Stronger Woodframe Buildings

    We examine the cost-effectiveness of improvements to woodframe buildings, including retrofits, redesign measures, and improved construction quality, in 19 hypothetical woodframe dwellings. We estimated cost-effectiveness for each improvement and each zip code in California. The dwellings were designed under the CUREE-Caltech Woodframe Project. Costs and seismic vulnerability were determined on a component-by-component basis using the assembly-based vulnerability (ABV) method, within a nonlinear time-history structural-analysis framework and using full-size test-specimen data. Probabilistic site hazard was calculated by zip code, considering site soil classification, and integrated with vulnerability to determine the expected annualized repair cost. The approach provides insight into the uncertainty of loss at varying shaking levels. We calculated the present value of benefit to determine cost-effectiveness in terms of the benefit-cost ratio (BCR). We find that one retrofit exhibits BCRs as high as 8 and exceeds a BCR of 1 in half of California zip codes. Four retrofit or redesign measures are cost-effective in at least some locations. Higher construction quality is estimated to save thousands of dollars per house. Results are illustrated by maps for the Los Angeles and San Francisco regions and are available for every zip code in California.
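    A hedged sketch of the underlying benefit-cost arithmetic (illustrative numbers, ours; in the study the expected annualized losses come from the ABV loss modeling):

    def benefit_cost_ratio(eal_before, eal_after, cost, rate=0.05, years=30):
        """BCR = present value of avoided expected annual losses / up-front cost.

        Standard annuity discounting; a full study would also treat the
        uncertainty in the EAL estimates themselves.
        """
        annuity = (1 - (1 + rate) ** -years) / rate   # PV factor for $1/year
        pv_benefit = (eal_before - eal_after) * annuity
        return pv_benefit / cost

    # Hypothetical numbers for one retrofit in one zip code:
    print(benefit_cost_ratio(eal_before=1200.0, eal_after=300.0, cost=3000.0))
    # -> about 4.6: cost-effective (BCR > 1) under these assumed inputs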

    Statistics of football dynamics

    We investigate the dynamics of football matches. Our goal is to characterize statistically the temporal sequence of ball movements in this collective sport, searching for traits of complex behavior. Data were collected from a variety of matches in South American, European, and World championships throughout 2005 and 2006. We show that the statistics of ball touches exhibit power-law tails and can be described by q-gamma distributions. To explain this behavior we propose a model that provides information on the characteristics of football dynamics. Furthermore, we discuss the statistics of the duration of out-of-play intervals, which is not directly related to the previous scenario.
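    For reference, one common form of the q-gamma density in this literature (a Tsallis-type generalization; the paper's exact parameterization may differ):

    \[
      f(t) = C\, t^{\gamma - 1} \left[ 1 + (q - 1)\,\frac{t}{\theta} \right]^{-\frac{1}{q-1}},
    \]

    which reduces to the ordinary gamma density $\propto t^{\gamma-1} e^{-t/\theta}$ as $q \to 1$ and decays as the power law $t^{\gamma - 1 - 1/(q-1)}$ for large $t$, consistent with the observed power-law tails.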