
    Estimating customer impatience in a service system with unobserved balking

    This paper studies a service system in which arriving customers are provided with information about the delay they will experience. Based on this information they decide to wait for service or to leave the system. The main objective is to estimate the customers' patience-level distribution and the corresponding potential arrival rate, using knowledge of the actual queue-length process only. The main complication, and the distinguishing feature of our setup, lies in the fact that customers who decide not to join are not observed; remarkably, we nevertheless devise a procedure to estimate the load they would generate. We express our system in terms of a multi-server queue with a Poisson stream of customers, which allows us to evaluate the corresponding likelihood function. Estimating the unknown parameters by a maximum likelihood procedure, we prove strong consistency and derive the asymptotic distribution of the estimation error. Several applications and extensions of the method are discussed. The performance of our approach is further assessed through a series of numerical experiments. By fitting parameters of hyperexponential and generalized-hyperexponential distributions, our method provides a robust estimation framework for any continuous patience-level distribution.
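
    To make the balking mechanism concrete: writing λ for the potential Poisson arrival rate and F for the patience-level distribution function (generic symbols, not necessarily the paper's notation), a customer who is quoted a delay d joins only if their patience exceeds d, so the observed arrival stream is a thinned Poisson process:

        \[
        \mathbb{P}(\text{join} \mid \text{quoted delay } d) = \bar F(d) = 1 - F(d),
        \qquad
        \lambda_{\mathrm{eff}}(d) = \lambda\,\bar F(d),
        \]
        and for a hyperexponential patience level,
        \[
        \bar F(d) = \sum_{i=1}^{k} p_i\, e^{-\mu_i d}, \qquad \sum_{i=1}^{k} p_i = 1 .
        \]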

    The Variance Gamma Scaled Self-Decomposable Process in Actuarial Modelling

    A scaled self-decomposable stochastic process put forward by Carr, Geman, Madan and Yor (2007) is used to model long-term equity returns and option prices. This parsimonious model is compared to a number of other one-dimensional continuous-time stochastic processes (models) that are commonly used in finance and the actuarial sciences. The comparisons are conducted along three dimensions: the models' ability to fit monthly time-series data on a number of different equity indices; the models' ability to fit the tails of the time series; and the models' ability to calibrate to index option prices across strike prices and maturities. The last criterion is becoming increasingly important given the popularity of capital-guaranteed products that contain long-term embedded options which can be (at least partially) hedged by purchasing short-term index options and rolling them over, or by purchasing longer-term index options. Thus we test whether the models can reproduce a typical implied volatility surface seen in the market. Keywords: variance gamma, regime-switching lognormal, long-term equity returns.
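
    For readers who want to see the heavy-tailed behaviour the tail-fit comparison relies on, a variance gamma return series can be simulated by evaluating an arithmetic Brownian motion at gamma-distributed random times. The sketch below is illustrative only; the function name and parameter values are ours, not estimates reported in the paper.

        import numpy as np

        # Illustrative only: simulate variance gamma (VG) log-returns by evaluating an
        # arithmetic Brownian motion at gamma-distributed random times (gamma subordination).
        # Parameter values are arbitrary placeholders, not estimates from the paper.
        rng = np.random.default_rng(0)

        def vg_increments(n, dt, sigma=0.12, nu=0.20, theta=-0.05):
            """n VG increments over step dt: theta*G + sigma*W(G), G ~ Gamma(dt/nu, nu)."""
            g = rng.gamma(shape=dt / nu, scale=nu, size=n)        # gamma time-change increments
            return theta * g + sigma * np.sqrt(g) * rng.standard_normal(n)

        monthly = vg_increments(n=10_000, dt=1.0 / 12.0)
        excess_kurtosis = ((monthly - monthly.mean()) ** 4).mean() / monthly.var() ** 2 - 3.0
        print("mean %.5f  std %.5f  excess kurtosis %.2f"
              % (monthly.mean(), monthly.std(), excess_kurtosis))

    With these placeholder parameters the sampled returns exhibit markedly positive excess kurtosis, which is the feature a Gaussian benchmark cannot reproduce.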

    Constant False Alarm Rate Target Detection in Synthetic Aperture Radar Imagery

    Target detection plays a significant role in many synthetic aperture radar (SAR) applications, ranging from surveillance of military tanks and enemy territories to crop monitoring in agriculture. Target detection faces two major problems: first, how to remotely acquire high-resolution images of targets; second, how to efficiently extract information about the features of clutter-embedded targets. The first problem is addressed by the use of high-penetration radar such as synthetic aperture radar. The second problem is tackled by efficient algorithms for accurate and fast detection. Many methods of target detection for SAR imagery are available, such as constant false alarm rate (CFAR) detection, the generalized likelihood ratio test (GLRT), multiscale autoregressive methods, and wavelet-transform-based methods. The CFAR method has been used extensively because of its attractive features, namely simple computation and fast detection of targets. The CFAR algorithm incorporates a precise statistical description of the background clutter, which determines how accurately target detection is achieved. The primary goal of this project is to investigate the statistical distribution of SAR background clutter from homogeneous and heterogeneous ground areas and to analyze the suitability of the statistical distributions mathematically modelled for SAR clutter. The detection threshold has to be computed accurately from the clutter distribution so as to efficiently distinguish targets from SAR clutter. Several distributions, namely the lognormal, Weibull, K, KK, G0, and generalized gamma (GGD) distributions, are considered for clutter amplitude modeling in SAR images. The CFAR detection algorithm based on the appropriate background clutter distribution is applied to moving and stationary target acquisition and recognition (MSTAR) images. The experimental results show that the CFAR detector based on the GGD outperforms CFAR detectors based on the lognormal, Weibull, K, KK, and G0 distributions in terms of accuracy and computation time.
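
    As a rough illustration of the thresholding step described above (and not of the project's exact detector), one can fit a generalized gamma distribution to background clutter samples and invert it at 1 - Pfa. The synthetic data, Pfa value, and variable names below are our own choices.

        import numpy as np
        from scipy import stats

        # Illustrative CFAR sketch (not the project's exact detector): fit a generalized
        # gamma distribution (GGD) to background clutter amplitudes and set the detection
        # threshold so that the probability of false alarm equals a chosen Pfa.
        rng = np.random.default_rng(1)
        clutter = rng.gamma(shape=2.0, scale=0.5, size=5000)    # synthetic stand-in for SAR clutter
        pfa = 1e-3

        a, c, loc, scale = stats.gengamma.fit(clutter, floc=0)  # GGD fit, location pinned at 0
        threshold = stats.gengamma.ppf(1.0 - pfa, a, c, loc=loc, scale=scale)

        pixels = np.concatenate([clutter, [6.0, 8.5]])          # two bright "target" returns appended
        detections = np.flatnonzero(pixels > threshold)         # expect ~pfa*5000 false alarms plus the targets
        print("threshold %.3f, detected indices:" % threshold, detections)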

    Stochastic Signal Processing and Power Control for Wireless Communication Systems

    This dissertation is concerned with dynamical modeling, estimation and identification of wireless channels from received signal measurements. Optimal power control algorithms and mobile location and velocity estimation methods are developed based on the proposed models. The ultimate performance limits of any communication system are determined by the channel it operates in. In this dissertation, we propose new stochastic wireless channel models which capture both the space and time variations of wireless systems. The proposed channel models are based on stochastic differential equations (SDEs) driven by Brownian motions. These models are more realistic than the time-invariant models encountered in the literature, which do not capture and track the time-varying characteristics of the propagation environment. The statistics of the proposed models are shown to be time varying and to converge in steady state to their static counterparts. Cellular and ad hoc wireless channel models are developed. In an urban propagation environment, the parameters of the channel models can be determined by approximating the band-limited Doppler power spectral density (DPSD) by rational transfer functions. However, since the DPSD is not available online, a filter-based expectation maximization algorithm and a Kalman filter are proposed to estimate the channel parameters and states, respectively. The algorithm is recursive, allowing the inphase and quadrature components and the parameters to be estimated online from received signal measurements. The algorithms are tested using experimental data, and the results demonstrate the method's viability for both cellular and ad hoc networks. Power control increases system capacity and quality of communications, and reduces battery power consumption. A stochastic power control algorithm is developed using so-called predictable power control strategies. An iterative distributed algorithm is then deduced using stochastic approximations. The latter only requires each mobile to know its received signal-to-interference ratio at the receiver.
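
    A minimal sketch of the kind of SDE-driven channel model described here, assuming a first-order mean-reverting (Ornstein-Uhlenbeck type) equation for the inphase and quadrature components simulated with the Euler-Maruyama scheme; the model order and all parameter values are illustrative rather than taken from the dissertation.

        import numpy as np

        # Minimal sketch, assuming a first-order (Ornstein-Uhlenbeck type) SDE driven by
        # Brownian motion for the inphase/quadrature components of a flat-fading channel,
        # simulated with the Euler-Maruyama scheme. All parameter values are placeholders.
        rng = np.random.default_rng(2)
        beta, sigma = 50.0, 1.0            # mean-reversion rate (1/s) and diffusion coefficient
        dt, n = 1e-4, 20_000               # time step (s) and number of steps

        i_comp = np.zeros(n)
        q_comp = np.zeros(n)
        for k in range(1, n):
            dw_i, dw_q = rng.standard_normal(2) * np.sqrt(dt)   # independent Brownian increments
            i_comp[k] = i_comp[k - 1] - beta * i_comp[k - 1] * dt + sigma * dw_i
            q_comp[k] = q_comp[k - 1] - beta * q_comp[k - 1] * dt + sigma * dw_q

        envelope = np.hypot(i_comp, q_comp)                     # fading envelope |I + jQ|
        print("steady-state envelope mean %.3f (Rayleigh theory %.3f)"
              % (envelope[n // 2:].mean(), np.sqrt(np.pi * sigma**2 / (4 * beta))))

    In steady state the envelope of the two components is Rayleigh-distributed, which matches the convergence of the time-varying statistics to their static counterparts noted in the abstract.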

    Variable Selection in Accelerated Failure Time (AFT) Frailty Models: An Application of Penalized Quasi-Likelihood

    Variable selection is one of the standard ways of selecting models in large-scale datasets. It has applications in many fields of research, especially in large multi-center clinical trials. One of the prominent methods of variable selection is the penalized likelihood, which is both consistent and efficient. However, penalized selection is significantly more challenging in the presence of random (frailty) covariates. It is even more complicated when censoring is involved, since the marginal log-likelihood may not have a closed-form solution. Therefore, we apply the penalized quasi-likelihood (PQL) approach, which approximates the solution of such a likelihood. In addition, we introduce an adaptive penalty function that performs selection on both fixed and frailty effects in a left-censored dataset for a parametric AFT frailty model. We also compare our penalty function with other established procedures in terms of their performance in accurately choosing the significant coefficients and shrinking the non-significant coefficients to zero.
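
    In generic notation (ours, not necessarily the thesis's, and with the exact form of the penalty on the frailty part an assumption on our side), the AFT frailty model and one possible penalized criterion can be sketched as follows:

        \[
        \log T_{ij} = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + \mathbf{z}_{ij}^{\top}\mathbf{b}_i + \sigma\,\varepsilon_{ij},
        \qquad \mathbf{b}_i \sim N(\mathbf{0}, \boldsymbol{\Theta}),
        \]
        with left-censored observations entering through their likelihood contributions, and estimation carried out by maximizing
        \[
        \ell_{\mathrm{PQL}}(\boldsymbol{\beta}, \mathbf{b}, \boldsymbol{\Theta}, \sigma)
        \;-\; n\sum_{j} p_{\lambda_j}\!\bigl(|\beta_j|\bigr)
        \;-\; n\sum_{k} p_{\gamma_k}\!\bigl(\Theta_{kk}^{1/2}\bigr),
        \]
        where the adaptive weights \(\lambda_j\) and \(\gamma_k\) allow both fixed-effect coefficients and frailty components to be shrunk to zero.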

    Multivariate hydrological frequency analysis and risk mapping

    In hydrological frequency analysis, it is difficult to apply standard statistical methods to derive multivariate probability distributions of the characteristics of hydrologic or hydraulic variables except under the following restrictive assumptions: (1) variables are assumed independent, (2) variables are assumed to have the same marginal distributions, and (3) variables are assumed to follow, or are transformed to, the normal distribution. This study relaxes these assumptions when deriving multivariate distributions of the characteristics of correlated hydrologic and hydraulic variables. The copula methodology is applied to perform multivariate frequency analysis of rainfall, flood, low-flow, water quality, and channel flow, using data from the Amite river basin in Louisiana. Finally, the risk methodology is applied to analyze flood risks. Through the study, it was found that (1) the copula method performed reasonably well in deriving multivariate hydrological frequency models compared with other conventional methods, i.e., the multivariate normal approach, the N-K model approach, and the independence transformation approach; (2) some degree of nonstationarity was present in the rainfall and streamflow time series, but according to the nonstationarity test the stationarity assumption may be approximately valid in most cases; (3) the multivariate frequency analysis coupled with nonstationarity indicated that the stationary assumption was valid for both bivariate and trivariate analysis; and (4) risk, defined jointly by the flooding event and the damage caused by that scenario, differed from risk defined by the T-year return-period design event and the probability of total damage, with the comparison indicating that a single characteristic, i.e., the T-year event or the probability of total damage alone, is not adequate to define the risk.
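
    For concreteness, in a bivariate setting with marginals F_X, F_Y and copula C (generic notation; the particular copula families fitted in the study are not restated here), the joint distribution and the "AND" joint return period used in copula-based frequency analysis take the form:

        \[
        F_{X,Y}(x,y) = C\bigl(F_X(x), F_Y(y)\bigr),
        \]
        \[
        T^{\wedge}_{X,Y}(x,y) = \frac{\mu}{\mathbb{P}(X > x,\ Y > y)}
        = \frac{\mu}{1 - F_X(x) - F_Y(y) + C\bigl(F_X(x), F_Y(y)\bigr)},
        \]
        where \(\mu\) is the mean interarrival time between events; for instance, the one-parameter Gumbel-Hougaard family \(C(u,v) = \exp\!\left\{-\bigl[(-\ln u)^{\theta} + (-\ln v)^{\theta}\bigr]^{1/\theta}\right\}\) captures positive upper-tail dependence between the two variables.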

    Robust Estimation of Parametric Models for Insurance Loss Data

    Parametric statistical models for insurance claim severity are continuous, right-skewed, and frequently heavy-tailed. The data sets that such models are usually fitted to contain outliers that are difficult to identify and separate from genuine data. Moreover, due to commonly used actuarial “loss control strategies,” the random variables we observe and wish to model are affected by truncation (due to deductibles), censoring (due to policy limits), scaling (due to coinsurance proportions) and other transformations. In current practice, statistical inference for loss models is almost exclusively likelihood (MLE) based, which typically results in non-robust parameter estimators, pricing models, and risk measures. To alleviate the lack of robustness of MLE-based inference in risk modeling, two broad classes of parameter estimators - the Method of Trimmed Moments (MTM) and the Method of Winsorized Moments (MWM) - have recently been developed. MTM and MWM estimators are sufficiently general and flexible, and possess excellent large- and small-sample properties, but they were designed for complete (not transformed) data. In this dissertation, we first redesign MTM estimators to be applicable to claim severity models that are fitted to truncated, censored, and insurance payment data. Asymptotic properties of such estimators are thoroughly investigated and their practical performance is illustrated using Norwegian fire claims data. In addition, we explore several extensions of MTM and MWM estimators for complete data. In particular, we introduce truncated, censored, and insurance payment-type estimators and study their asymptotic properties. Our analysis establishes new connections between data truncation, trimming, and censoring, which paves the way for more effective modeling of non-linearly transformed loss data.
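
    A minimal sketch of the method-of-trimmed-moments idea for complete data, using a one-parameter exponential severity model; the trimming proportions, synthetic data, and variable names are our own, and the dissertation's estimators for truncated and censored payments are substantially more involved.

        import numpy as np
        from scipy import integrate

        # Illustrative method-of-trimmed-moments (MTM) sketch for complete data and a
        # one-parameter exponential severity model. Trimming proportions, synthetic
        # losses, and names are our own choices.
        a, b = 0.05, 0.10                                  # lower / upper trimming proportions
        rng = np.random.default_rng(3)
        losses = rng.exponential(scale=2.5, size=2000)     # synthetic "claim severities", true theta = 2.5

        # Sample trimmed mean: discard the a-fraction smallest and b-fraction largest losses.
        x = np.sort(losses)
        n = x.size
        trimmed_mean = x[int(n * a): n - int(n * b)].mean()

        # Population trimmed mean of Exp(theta) is theta * c(a, b), where
        # c(a, b) = (1 - a - b)^(-1) * integral from a to 1-b of the quantile -log(1 - u) du.
        c, _ = integrate.quad(lambda u: -np.log(1.0 - u), a, 1.0 - b)
        c /= 1.0 - a - b

        theta_mtm = trimmed_mean / c                       # match sample to population trimmed moment
        theta_mle = losses.mean()                          # MLE (sample mean), non-robust benchmark
        print("MTM estimate %.3f   MLE %.3f   true 2.500" % (theta_mtm, theta_mle))

    Because a fixed fraction of the largest observations is discarded before the moments are matched, a handful of grossly inflated claims cannot drag the estimate, which is the robustness property the abstract refers to.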