
    Fair value accounting and financial stability.

    Market prices give timely signals that can aid decision making. However, in the presence of distorted incentives and illiquid markets, there are other less benign effects that inject artificial volatility into prices and distort real decisions. In a world of marking-to-market, asset price changes show up immediately on the balance sheets of financial intermediaries and elicit responses from them. Banks and other intermediaries have always responded to changes in the economic environment, but marking-to-market sharpens and synchronises their responses, adding impetus to the feedback effects in financial markets. For junior assets trading in liquid markets (such as traded stocks), marking-to-market is superior to historical cost in terms of the trade-offs. But for senior, long-lived and illiquid assets and liabilities (such as bank loans and insurance liabilities), the harm caused by distortions can outweigh the benefits. We review the competing effects and weigh the arguments.
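
    The amplification mechanism this abstract describes can be made concrete with a toy simulation. The sketch below is not the authors' model: it assumes a single leverage-targeting intermediary and a linear price-impact rule, and every parameter value (target leverage, impact, market depth) is hypothetical. A mark-to-market loss pushes leverage above target, forcing sales that move the price and trigger further sales.

    ```python
    # Toy sketch of the mark-to-market feedback loop (not the paper's
    # model). All parameter values are hypothetical.
    def mtm_feedback(shock=-0.05, target_lev=10.0, impact=0.2,
                     depth=2000.0, rounds=10):
        price = 1.0
        units = 1000.0                                # asset units held
        debt = units * price * (1 - 1 / target_lev)   # leverage on target
        price *= 1 + shock           # exogenous shock, marked to market
        for t in range(rounds):
            assets = units * price
            equity = assets - debt
            if equity <= 0:          # insolvent: the fire sale ends here
                break
            excess = assets - target_lev * equity  # value that must be sold
            if excess <= 1e-9:       # leverage back on target
                break
            units -= excess / price  # sell assets...
            debt -= excess           # ...and pay down debt
            price *= 1 - impact * excess / depth   # the sale moves the price
            lev = units * price / (units * price - debt)
            print(f"round {t}: price={price:.4f}, leverage={lev:.2f}")
        return price

    print("final price:", round(mtm_feedback(), 4))
    ```

    Under historical-cost accounting the shock would force no sale at all in this toy, which is the contrast the abstract draws.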

    Optimal design and use of retry in fault tolerant real-time computer systems

    A new method for determining an optimal retry policy and for using retry in fault characterization is presented. An optimal retry policy for a given fault characteristic, which determines the maximum allowable retry durations so as to minimize the total task completion time, is derived. A combined fault characterization and retry decision, in which the fault characteristics are estimated simultaneously with the determination of the optimal retry policy, is also carried out. Two solution approaches are developed, one based on point estimation and the other on Bayes sequential decision; maximum likelihood estimators are used for the first approach, and backward induction for testing hypotheses in the second. Numerical examples are presented in which all the durations associated with faults have monotone hazard functions, e.g., exponential, Weibull and gamma distributions, which are standard distributions commonly used in fault modeling and analysis.
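
    As a concrete illustration of the retry trade-off, here is a minimal numerical sketch, not the paper's derivation: a fault is assumed transient with probability p, clearing after an exponentially distributed time, and otherwise permanent; retrying up to a limit r either succeeds or wastes r before falling back to a reconfiguration cost c. All parameter values are hypothetical.

    ```python
    # Sketch: pick the retry limit r minimizing expected time lost to a
    # fault, under the simple transient/permanent model described above.
    import numpy as np
    from scipy.optimize import minimize_scalar

    def expected_overhead(r, p=0.9, lam=2.0, c=5.0):
        """Expected time lost when the retry limit is r (assumed model)."""
        survive = np.exp(-lam * r)          # P(transient has not cleared by r)
        e_min = (1 - survive) / lam         # E[min(X, r)], X ~ Exp(lam)
        transient = e_min + survive * c     # retry, maybe still reconfigure
        permanent = r + c                   # retry is always wasted
        return p * transient + (1 - p) * permanent

    res = minimize_scalar(expected_overhead, bounds=(0, 10), method="bounded")
    print(f"optimal retry limit r* = {res.x:.3f}, overhead = {res.fun:.3f}")
    ```

    The bounded scalar minimization stands in for the paper's analytical treatment; the same expected-overhead function could be rewritten with Weibull or gamma clearing times.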

    Integrated analysis of error detection and recovery

    An integrated modeling and analysis of error detection and recovery is presented. When fault latency and/or error latency exist, the system may suffer from multiple faults or error propagation, which seriously deteriorates the fault-tolerant capability. Several detection models are developed that enable analysis of the effect of detection mechanisms on the subsequent error-handling operations and the overall system reliability. Following detection of the faulty unit and reconfiguration of the system, the contaminated processes or tasks have to be recovered. The error recovery strategies employed depend on the detection mechanisms and the available redundancy. Several recovery methods, including rollback recovery, are considered, and the recovery overhead is evaluated as an index of the capabilities of the detection and reconfiguration mechanisms.
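
    To make the recovery-overhead evaluation concrete, the following Monte Carlo sketch, under assumed parameters rather than the paper's models, estimates the mean cost of rollback recovery when an error stays latent for an exponentially distributed time before detection. It deliberately ignores the multi-fault and propagation cases the paper analyzes.

    ```python
    # Sketch: mean rollback-recovery overhead when detection lags the
    # error by a latent period (all units and parameters hypothetical).
    import random

    def mean_recovery_overhead(trials=100_000, mean_latency=0.5,
                               ckpt_interval=2.0, restore_cost=0.3):
        total = 0.0
        for _ in range(trials):
            since_ckpt = random.uniform(0.0, ckpt_interval)   # error strikes here
            latency = random.expovariate(1.0 / mean_latency)  # time to detection
            # simplification: one rollback to the last clean checkpoint loses
            # the work since that checkpoint plus the work done while latent
            total += since_ckpt + latency + restore_cost
        return total / trials

    print(f"estimated mean recovery overhead: {mean_recovery_overhead():.3f}")
    ```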

    Analysis of backward error recovery for concurrent processes with recovery blocks

    Three different methods of implementing recovery blocks (RBs) are considered: the asynchronous, synchronous, and pseudo recovery point implementations. Pseudo recovery points (PRPs) are proposed so that unbounded rollback may be avoided while maintaining process autonomy. Probabilistic models for analyzing these three methods are developed under standard assumptions in computer performance analysis, i.e., exponential distributions for the related random variables. These models are used to estimate the interval between two successive recovery lines for asynchronous RBs, the mean loss in computation power for the synchronized method, and the additional overhead and rollback distance when PRPs are used.
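
    The rollback-distance estimate can be illustrated by simulation. The sketch below is an assumed model, not the paper's: two processes place recovery points at exponential intervals and interact at exponential intervals, and after a fault each rollback that straddles an interaction forces the partner to roll back too, until a consistent recovery line is found. This is the unbounded (domino) behaviour that PRPs are meant to limit.

    ```python
    # Sketch: mean rollback distance for asynchronous recovery points
    # under an assumed two-process Poisson model (parameters hypothetical).
    import random

    def rollback_distance(rp_rate=1.0, interact_rate=0.5, horizon=1000.0):
        """Distance rolled back after a fault at time `horizon`."""
        def times(rate):
            t, out = 0.0, [0.0]
            while True:
                t += random.expovariate(rate)
                if t >= horizon:
                    return out
                out.append(t)
        rps = [times(rp_rate), times(rp_rate)]   # recovery points per process
        interactions = times(interact_rate)[1:]  # shared interaction times
        points = [max(rps[0]), max(rps[1])]      # roll back to latest points
        while True:
            lo = min(points)
            # an interaction between the two rollback points is remembered
            # by one process but forgotten by the other: roll the later back
            conflict = False
            for i in (0, 1):
                if any(lo < x < points[i] for x in interactions):
                    points[i] = max(t for t in rps[i] if t < points[i])
                    conflict = True
            if not conflict:
                return horizon - min(points)

    dists = [rollback_distance() for _ in range(2000)]
    print(f"mean rollback distance: {sum(dists) / len(dists):.2f}")
    ```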

    Modeling and measurement of fault-tolerant multiprocessors

    The workload effects on computer performance are first addressed for a highly reliable unibus multiprocessor used in real-time control. As an approach to studying these effects, a modified stochastic Petri net (SPN) is used to describe the synchronous operation of the multiprocessor system. From this model the vital components affecting performance can be determined. However, because of the complexity of solving the modified SPN, a simpler model, i.e., a closed priority queuing network, is constructed that represents the same critical aspects. The use of this model for a specific application requires partitioning the workload into job classes. It is shown that the steady-state solution of the queuing model directly produces useful results. The use of this model in evaluating an existing system, the Fault Tolerant Multiprocessor (FTMP) at the NASA AIRLAB, is outlined with some experimental results. Also addressed is the technique of measuring fault latency, an important microscopic system parameter. Most related works have assumed zero or negligible fault latency and then performed approximate analyses. To eliminate this deficiency, a new methodology for indirectly measuring fault latency is presented.
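
    The steady-state solution of a closed queuing network is commonly obtained by exact Mean Value Analysis. The sketch below is an illustrative single-class MVA that ignores the priority scheduling of the paper's model; the service demands and job population are hypothetical.

    ```python
    # Sketch: exact single-class Mean Value Analysis for a closed
    # queuing network (no priorities; demands/population hypothetical).
    def mva(demands, n_jobs):
        """demands[k] = mean service demand (s) at station k."""
        q = [0.0] * len(demands)              # mean queue lengths at N = 0
        for n in range(1, n_jobs + 1):
            # an arriving job sees the queue of the (n-1)-job network
            r = [d * (1 + qk) for d, qk in zip(demands, q)]  # residence times
            x = n / sum(r)                    # system throughput (jobs/s)
            q = [x * rk for rk in r]          # Little's law per station
        return x, r, q

    x, r, q = mva(demands=[0.4, 0.3, 0.2], n_jobs=8)
    print(f"throughput = {x:.3f} jobs/s, response time = {sum(r):.3f} s")
    ```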

    Gauge potential singularities and the gluon condensate at finite temperatures

    The continuum limit of SU(2) lattice gauge theory is carefully investigated at zero and at finite temperatures. It is found that the continuum gauge field has singularities originating from the center degrees of freedom uncovered in Landau gauge. Our numerical results show that the density of these singularities properly extrapolates to a non-vanishing continuum limit. The action density of the non-trivial Z_2 links is tentatively identified with the gluon condensate. We find, for temperatures larger than the deconfinement temperature, that the thermal fluctuations of the embedded Z_2 gauge theory result in an increase of the gluon condensate with increasing temperature.
    Comment: 3 pages, 2 figures, talk presented by K. Langfeld at the 19th International Symposium on Lattice Field Theory (LATTICE2001), Berlin, 19.-24.8.2001, to appear in the proceedings
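
    For illustration only, the center projection behind the Z_2 link counting can be sketched as follows, on random (Haar-distributed) SU(2) links and without the Landau gauge fixing the study relies on, so the resulting density is the trivial random value of about one half rather than the physical one.

    ```python
    # Sketch: Z_2 center projection of SU(2) links (random links, no
    # gauge fixing; lattice size hypothetical).
    import numpy as np

    rng = np.random.default_rng(0)
    L = 8                                    # lattice extent (hypothetical)
    # an SU(2) matrix is a unit quaternion (a0, a1, a2, a3) with tr U = 2*a0;
    # normalized 4D Gaussians are Haar-uniform on SU(2)
    a = rng.normal(size=(L, L, L, L, 4, 4))  # sites x directions x components
    a /= np.linalg.norm(a, axis=-1, keepdims=True)
    z2 = np.sign(a[..., 0])                  # center projection: sign(tr U)
    print(f"non-trivial Z_2 link density: {np.mean(z2 < 0):.3f}")  # ~0.5 here
    ```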

    Dynamical evolution of the mass function and radial profile of the Galactic globular cluster system

    Evolution of the mass function (MF) and radial distribution (RD) of the Galactic globular cluster (GC) system is calculated using an advanced, realistic Fokker-Planck (FP) model that accounts for dynamical friction, disc/bulge shocks and eccentric cluster orbits. We perform hundreds of FP calculations with different initial cluster conditions, and then search a wide parameter space for the best-fitting initial GC MF and RD that evolve into the observed present-day Galactic GC MF and RD. By allowing both the MF and the RD of the initial GC system to vary, which is attempted for the first time in the present Letter, we find that our best-fitting models have a higher peak mass for a lognormal initial MF and a higher cut-off mass for a power-law initial MF than previous estimates, but our initial total masses in GCs, M_{T,i} = 1.5-1.8x10^8 Msun, are comparable to previous results. Significant findings include that our best-fitting lognormal MF shifts downward by 0.35 dex over 13 Gyr, and that our power-law initial MF models fit the observed MF and RD well only when the initial MF is truncated at >~10^5 Msun. We also find that our results are insensitive to the initial distribution of orbit eccentricity and inclination, but are rather sensitive to the initial concentration of the clusters and to how the initial tidal radius is defined. If the clusters are assumed to form at the apocentre while filling the tidal radius there, M_{T,i} can be as high as 6.9x10^8 Msun, which amounts to ~75 per cent of the current mass of the stellar halo.
    Comment: To appear in the May 2008 issue of MNRAS, 386, L6
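
    The quoted 0.35 dex downward shift suggests a toy version of the fitting procedure. The sketch below is in no way the paper's Fokker-Planck machinery: it grid-searches the peak of a lognormal initial MF, with assumed shapes and peak values throughout, so that the shifted MF matches a stand-in "observed" present-day MF.

    ```python
    # Toy sketch: recover the initial lognormal MF peak from the quoted
    # 0.35 dex shift (all peak/width values hypothetical).
    import numpy as np

    log_m = np.linspace(3.0, 7.0, 401)             # log10(M/Msun) grid

    def lognormal_mf(peak, sigma=0.6):
        """Hypothetical lognormal MF shape on the log-mass grid."""
        return np.exp(-0.5 * ((log_m - peak) / sigma) ** 2)

    observed = lognormal_mf(peak=5.2)              # stand-in for the data
    evolve = lambda peak: peak - 0.35              # quoted 13 Gyr shift

    # search initial peaks; keep the one whose evolved MF matches best
    peaks = np.arange(5.0, 6.01, 0.01)
    chi2 = [np.sum((lognormal_mf(evolve(p)) - observed) ** 2) for p in peaks]
    best = peaks[int(np.argmin(chi2))]
    print(f"best-fitting initial peak: 10^{best:.2f} Msun")   # -> 5.55
    ```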