On finite-time ruin probabilities in a generalized dual risk model with dependence
In this paper, we study the finite-time ruin probability in a reasonably generalized dual risk model, where we allow any non-negative, non-decreasing cumulative operational cost function and an arbitrary capital-gains arrival process. By establishing an enlightening link between this dual risk model and its corresponding insurance risk model, we obtain explicit expressions for the finite-time survival probability in the dual risk model under various general assumptions on the distribution of the capital gains. To make the model more realistic and general, we also introduce dependence structures among the capital gains, among the inter-arrival times, and between the two, and we give the corresponding ruin probability expressions. The concept of alarm time, as introduced in Das and Kratz (2012), is applied to the dual risk model within the context of risk capital allocation. Extensive numerical illustrations are provided.
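As a concrete illustration of the quantity studied here, the following sketch estimates a finite-time ruin probability by Monte Carlo in a basic compound-Poisson dual risk model (constant expense rate, Poisson gain arrivals, exponential gain sizes). All parameters are assumed for illustration; the generalized cost functions and dependence structures of the paper are not modeled.

```python
"""Monte Carlo sketch of the finite-time ruin probability in a *basic*
compound-Poisson dual risk model: surplus U(t) = u - c*t + sum of gains.
Illustrative parameters only, not the paper's generalized model."""
import numpy as np

rng = np.random.default_rng(1)

def ruined_by(u, c, lam, gain_mean, T):
    """One sample path: True if the surplus hits 0 before time T."""
    t, surplus = 0.0, u
    while True:
        wait = rng.exponential(1.0 / lam)          # next gain arrival
        if t + wait >= T:                          # horizon reached first
            return surplus - c * (T - t) < 0.0     # expenses run until T
        t += wait
        surplus -= c * wait                        # linear expense outflow
        if surplus < 0.0:                          # ruin between gains
            return True
        surplus += rng.exponential(gain_mean)      # capital gain

paths = 100_000
psi = sum(ruined_by(u=5.0, c=1.0, lam=0.8, gain_mean=1.5, T=50.0)
          for _ in range(paths)) / paths
print(f"estimated finite-time ruin probability: {psi:.4f}")
```

Between gains the surplus decreases linearly, so ruin inside an inter-arrival interval is equivalent to the surplus being negative at the end of that interval, which is what the check above exploits.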
How Many Subpopulations is Too Many? Exponential Lower Bounds for Inferring Population Histories
Reconstruction of population histories is a central problem in population genetics. Existing coalescent-based methods, like the seminal work of Li and Durbin (Nature, 2011), attempt to solve this problem using sequence data but have no rigorous guarantees. Determining the amount of data needed to correctly reconstruct population histories is a major challenge. Using a variety of tools from information theory, the theory of extremal polynomials, and approximation theory, we prove new sharp information-theoretic lower bounds on the problem of reconstructing population structure -- the history of multiple subpopulations that merge, split, and change sizes over time. Our lower bounds are exponential in the number of subpopulations, even when reconstructing recent histories. We demonstrate the sharpness of our lower bounds by providing algorithms for distinguishing and learning population histories with matching dependence on the number of subpopulations. Along the way, and of independent interest, we essentially determine the optimal number of samples needed to learn an exponential mixture distribution information-theoretically, proving the upper bound by analyzing natural (and efficient) algorithms for this problem.
Comment: 38 pages. Appeared in RECOMB 201
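The exponential-mixture learning step mentioned above can be made concrete with a standard estimator. The sketch below fits a two-component exponential mixture by EM on synthetic data; this is a generic method under assumed parameters, not the algorithm analyzed in the paper.

```python
"""Minimal EM sketch for a two-component exponential mixture
p(x) = w*Exp(r1) + (1-w)*Exp(r2), the distribution family whose sample
complexity the paper pins down. Data and parameters are assumptions."""
import numpy as np

rng = np.random.default_rng(0)
# synthetic data from a known mixture: weight 0.3, means 1 and 5
n = 50_000
z = rng.random(n) < 0.3
x = np.where(z, rng.exponential(1.0, n), rng.exponential(5.0, n))

w, r1, r2 = 0.5, 1.0, 0.5          # initial guesses (rates = 1/mean)
for _ in range(200):
    # E-step: posterior responsibility of component 1 for each point
    p1 = w * r1 * np.exp(-r1 * x)
    p2 = (1 - w) * r2 * np.exp(-r2 * x)
    g = p1 / (p1 + p2)
    # M-step: weighted maximum-likelihood updates
    w = g.mean()
    r1 = g.sum() / (g * x).sum()
    r2 = (1 - g).sum() / ((1 - g) * x).sum()
print(f"w ≈ {w:.3f}, means ≈ {1/r1:.3f}, {1/r2:.3f}")
```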
Experimental analysis of computer system dependability
This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: the design phase, the prototype phase, and the operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by a discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools, including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
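For instance, the importance-sampling idea mentioned above can be sketched in a few lines: to estimate a small failure probability for a highly reliable component, one samples from a "tilted" distribution under which failures are common and reweights each sample by the likelihood ratio. The rates and mission time below are assumed for illustration.

```python
"""Hedged sketch of importance sampling for rare-event dependability
estimation: p = P(T < t) for component lifetime T ~ Exp(rate).
All numbers are assumed; this is not a tool from the survey."""
import numpy as np

rng = np.random.default_rng(42)
rate, t = 1e-4, 10.0               # true failure rate, mission time
p_true = 1 - np.exp(-rate * t)     # ~1e-3, rare under naive sampling

n = 10_000
rate_is = 0.1                      # tilted rate: failures become common
samples = rng.exponential(1.0 / rate_is, n)
# likelihood ratio of true density to sampling density
lr = (rate * np.exp(-rate * samples)) / (rate_is * np.exp(-rate_is * samples))
p_hat = np.mean((samples < t) * lr)
print(f"true {p_true:.3e}  IS estimate {p_hat:.3e}")
```

Naive Monte Carlo with 10,000 samples would see roughly ten failures here; the tilted sampler sees thousands, and the likelihood ratio keeps the estimator unbiased.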
Design for dependability: A simulation-based approach
This research addresses issues in simulation-based, system-level dependability analysis of fault-tolerant computer systems. The issues and difficulties of providing a general simulation-based approach for system-level analysis are discussed, and a methodology that addresses these issues is presented. The proposed methodology is designed to permit the study of a wide variety of architectures under various fault conditions. It permits detailed functional modeling of architectural features such as sparing policies, repair schemes, and routing algorithms, as well as other fault-tolerance mechanisms, and it allows the execution of actual application software. One key benefit of this approach is that the behavior of a system under faults does not have to be pre-defined, as is normally done. Instead, a system can be simulated in detail and injected with faults to determine its failure modes. The thesis describes how object-oriented design is used to incorporate this methodology into a general-purpose design and fault injection package called DEPEND. A software model is presented that uses abstractions of application programs to study the behavior and effect of software on hardware faults in the early design stage, when actual code is not available. Finally, an acceleration technique that combines hierarchical simulation, time-acceleration algorithms, and hybrid simulation to reduce simulation time is introduced.
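A hypothetical miniature of this simulation-based fault-injection idea is sketched below: rather than pre-defining the failure behavior, a module with cold spares is simulated, injected with random faults, and its failure mode observed. The classes and the sparing policy are illustrative assumptions, far simpler than DEPEND itself.

```python
"""Toy object-oriented fault-injection sketch in the spirit of (but far
simpler than) DEPEND. Classes and policy are illustrative assumptions."""
import random

random.seed(7)

class SparedModule:
    def __init__(self, spares):
        self.spares = spares
        self.failed = False

    def inject_fault(self):
        """A fault consumes a spare; the module fails when none remain."""
        if self.spares > 0:
            self.spares -= 1      # repair scheme: switch in a cold spare
        else:
            self.failed = True

def run(mission_time, fault_rate, spares):
    """One mission: inject Poisson faults, return True on system failure."""
    m, t = SparedModule(spares), 0.0
    while True:
        t += random.expovariate(fault_rate)   # next fault arrival
        if t > mission_time:
            return False                      # survived the mission
        m.inject_fault()
        if m.failed:
            return True                       # failure mode observed

trials = 20_000
p = sum(run(mission_time=1000.0, fault_rate=0.002, spares=2)
        for _ in range(trials)) / trials
print(f"estimated mission failure probability: {p:.4f}")
```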
Sample-path large deviations for tandem and priority queues with Gaussian inputs
This paper considers Gaussian flows multiplexed in a queueing network. A single node being a useful but often incomplete setting, we examine more advanced models. We focus on a (two-node) tandem queue, fed by a large number of Gaussian inputs. With service rates and buffer sizes at both nodes scaled appropriately, Schilder's sample-path large-deviations theorem can be applied to calculate the asymptotics of the overflow probability of the second queue. More specifically, we derive a lower bound on the exponential decay rate of this overflow probability and present an explicit condition for the lower bound to match the exact decay rate. Examples show that this condition holds for a broad range of frequently used Gaussian inputs. The last part of the paper concentrates on a model for a single node, equipped with a priority scheduling policy. We show that the analysis of the tandem queue carries over directly to this priority queueing system.
Comment: Published at http://dx.doi.org/10.1214/105051605000000133 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org)
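A crude discrete-time Monte Carlo version of this setup is sketched below: many i.i.d. Gaussian sources feed a two-node tandem queue via Lindley recursions, and the overflow probability of the second queue is estimated by simulation. The rates and buffer level are assumed, and simulation is of course no substitute for the large-deviations analysis of the paper.

```python
"""Discrete-time sketch of a two-node tandem queue fed by many i.i.d.
Gaussian sources; overflow of the *second* queue is estimated by crude
Monte Carlo. All rates and the buffer level are assumed."""
import numpy as np

rng = np.random.default_rng(3)
n_src, mu, sigma = 50, 1.0, 0.5        # sources; per-source mean/std per slot
c1, c2, b2 = 55.0, 52.0, 25.0          # service rates (c1 > c2), node-2 buffer
slots, runs = 2_000, 200

overflows = 0
for _ in range(runs):
    q1 = q2 = 0.0
    hit = False
    for _ in range(slots):
        a = rng.normal(n_src * mu, sigma * np.sqrt(n_src))  # aggregate input
        d = min(q1 + a, c1)            # node-1 departures this slot
        q1 = max(q1 + a - c1, 0.0)     # Lindley recursion, node 1
        q2 = max(q2 + d - c2, 0.0)     # node 2 fed by node-1 output
        if q2 > b2:
            hit = True
            break
    overflows += hit
print(f"P(second queue exceeds {b2} within horizon) ≈ {overflows / runs:.3f}")
```

Note that node 1 smooths the input seen by node 2 (departures are capped at c1), which is exactly the structural feature that makes the tandem analysis harder than the single-node case.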
Large scale analytic calculations in quantum field theories
We present a survey of the mathematical structure of zero- and single-scale quantities and the associated calculation methods and function spaces in higher-order perturbative calculations in relativistic renormalizable quantum field theories.
Comment: 25 pages LaTeX, 1 style file
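One of the standard function spaces for single-scale quantities of this kind is that of nested harmonic sums; assuming this is the function space intended, the sketch below evaluates such sums over the rationals directly from their recursive definition.

```python
"""Hedged sketch: nested harmonic sums S_{a1,...,am}(N), a standard
function space for single-scale quantities. Negative indices encode
alternating signs; the definition is textbook material, the
implementation details are our own."""
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def S(indices, N):
    """S_{a1,...,am}(N) = sum_{k=1}^N sign(a1)^k / k^|a1| * S_{a2,...}(k)."""
    if not indices:
        return Fraction(1)
    a, rest = indices[0], indices[1:]
    total = Fraction(0)
    for k in range(1, N + 1):
        sign = -1 if (a < 0 and k % 2) else 1
        total += Fraction(sign, k ** abs(a)) * S(rest, k)
    return total

print(S((1,), 4))     # plain harmonic sum: 1 + 1/2 + 1/3 + 1/4 = 25/12
print(S((2, 1), 3))   # nested sum S_{2,1}(3) as an exact rational
```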