
    Estimation of change point in binomial random variables

    Statistically, a change point is the location or time point such that observations follow one distribution up to that point and another distribution afterwards. Change point problems arise in daily life and in disciplines such as economics, finance, medicine, geology and literature, among others. In this paper, the change point in binomial observations whose mean depends on explanatory variables is estimated. The maximum likelihood method was used to estimate the change point, while the conditional means were estimated using an artificial neural network. The consistency and asymptotic normality of the neural network parameter estimates were also proved. We used simulated data to estimate the change point and also estimated the LD50 for the Bliss beetles data. Keywords: maximum likelihood estimate, binomial distribution, change point, artificial neural network
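    As a hedged illustration of the estimation idea described above, the sketch below simulates binomial counts with a single change in the success probability and locates the change point by maximizing the profile binomial log-likelihood. It omits the paper's neural-network model for covariate-dependent means; the sample sizes, probabilities and names used are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(42)

# Simulated binomial counts with a change in success probability at tau = 60
n_trials, n_obs, tau_true = 20, 100, 60
p_before, p_after = 0.3, 0.6
y = np.concatenate([
    rng.binomial(n_trials, p_before, tau_true),
    rng.binomial(n_trials, p_after, n_obs - tau_true),
])

def profile_loglik(y, n, tau):
    """Binomial log-likelihood with separate MLEs of p before and after tau."""
    p1 = y[:tau].mean() / n
    p2 = y[tau:].mean() / n
    return (binom.logpmf(y[:tau], n, p1).sum()
            + binom.logpmf(y[tau:], n, p2).sum())

candidates = range(2, n_obs - 1)            # keep at least two points per segment
loglik = [profile_loglik(y, n_trials, t) for t in candidates]
tau_hat = list(candidates)[int(np.argmax(loglik))]
print("estimated change point:", tau_hat)
```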

    Parametric modeling of probability of bank loan default in Kenya

    Commercial banks in Kenya are key players not only in the financial market but also in spurring the economic growth that the country has witnessed in the recent past. Besides Safaricom and East African Breweries, the other top ten most profitable companies in Kenya are commercial banks. The biggest part of these huge profits emanates from the interest charged on the loans they advance to their customers. If these loans become non-performing, these blue-chip companies will come tumbling down and the entire economy will be threatened. This makes the study of the probability of a customer defaulting very useful when analyzing credit risk policies. In this paper, we use a raw data set that contains demographic information about the borrowers. The data sets have been used to identify which risk factors associated with the borrowers contribute towards default. These risk factors are gender, age, marital status, occupation and term of loan. Results show that male customers have higher odds (1.91) of defaulting compared to their female counterparts, single customers have a higher likelihood (odds of 1.48) of defaulting compared to married customers, younger customers have higher odds of defaulting than elderly customers, financial-sector customers have an equal likelihood of default to support-staff customers, and long-term loans have a lower likelihood of defaulting compared to short-term loans. Key words: the logistic model, the logit transformation, parameter estimation
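    A minimal sketch of the kind of logit fit the abstract describes is given below. The column names mirror the paper's risk factors, but the data are simulated and the variable coding is an assumption; the exponentiated coefficients are the odds ratios the abstract reports.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Illustrative borrower data; values are simulated, not the study's data set.
n = 500
df = pd.DataFrame({
    "default": rng.integers(0, 2, n),            # 1 = defaulted, 0 = repaid
    "gender": rng.choice(["male", "female"], n),
    "marital": rng.choice(["single", "married"], n),
    "age": rng.integers(21, 65, n),
    "term_months": rng.choice([12, 24, 36, 48, 60], n),
})

# Logit model: log-odds of default as a linear function of the risk factors
model = smf.logit("default ~ gender + marital + age + term_months", data=df).fit(disp=0)

# Exponentiated coefficients give odds ratios, the scale used in the abstract
print(np.exp(model.params))
```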

    Parametric change point estimation, testing and confidence interval applied in business

    In many applications, such as finance, industry and medicine, it is important to consider that the model parameters may undergo changes at an unknown moment in time. This paper deals with estimation, testing and confidence intervals for a change point in a univariate variable which is assumed to be normally distributed. To detect a possible change point, we use a Schwarz Information Criterion (SIC) statistic whose asymptotic distribution under the null hypothesis is determined. The percentile bootstrap method is used to construct the confidence interval of the estimated change point. The developed tools and methods are applied to the 1987–1988 US trade deficit data. Our results show that a significant change in the US trade deficit occurred in November 1987. Further, it is shown that the percentile bootstrap confidence intervals are not always symmetrical. Key words: change point, Schwarz information criterion, percentile bootstrap
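    The sketch below illustrates the two ingredients named in the abstract on simulated normal data: a change point chosen by minimizing an SIC over candidate split positions, and a percentile bootstrap interval for that estimate. The penalty terms, segment parameterization and resampling scheme here are simplifying assumptions and may differ from the paper's exact formulation.

```python
import numpy as np
from scipy.stats import norm

def sic_no_change(x):
    """SIC for a single normal model (2 parameters: mean, variance)."""
    n = len(x)
    ll = norm.logpdf(x, x.mean(), x.std()).sum()
    return -2 * ll + 2 * np.log(n)

def sic_change(x, k):
    """SIC for a change at position k (4 parameters: two means, two variances)."""
    n = len(x)
    ll = (norm.logpdf(x[:k], x[:k].mean(), x[:k].std()).sum()
          + norm.logpdf(x[k:], x[k:].mean(), x[k:].std()).sum())
    return -2 * ll + 4 * np.log(n)

def estimate_change_point(x):
    ks = range(2, len(x) - 2)
    sics = [sic_change(x, k) for k in ks]
    k_hat = list(ks)[int(np.argmin(sics))]
    return k_hat if min(sics) < sic_no_change(x) else None

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 40), rng.normal(1.5, 1, 40)])
k_hat = estimate_change_point(x)

# Percentile bootstrap confidence interval, resampling within each segment
boot = []
for _ in range(500):
    xb = np.concatenate([rng.choice(x[:k_hat], k_hat, replace=True),
                         rng.choice(x[k_hat:], len(x) - k_hat, replace=True)])
    kb = estimate_change_point(xb)
    if kb is not None:
        boot.append(kb)
print("estimate:", k_hat, "95% CI:", np.percentile(boot, [2.5, 97.5]))
```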

    Classification rates: non-parametric versus parametric models using binary data

    Estimation of the conditional mean and of the marginal effects of small changes in the covariates is of interest in the financial, economic and even educational sectors. The standard approach has been to specify a parametric model such as probit or logit and then estimate the coefficients by the maximum likelihood method. This is only applicable when the distributional form from which the data have been drawn is known. Non-parametric methods have been proposed when the functional form assumptions cannot be ascertained. This research sought to establish whether non-parametric modeling achieves a higher correct classification ratio than a parametric model. The local likelihood technique was used to fit the data sets. The same sets of data were modeled using a parametric logit, and the abilities of the two models to correctly predict the binary outcome were compared; a sketch of such a comparison is given below. The results showed that non-parametric estimation gives a better prediction rate (classification ratio) for binary data than parametric estimation. This was established both empirically and through simulation. For the empirical results, two different data sets were used: the first consisted of loan applications of customers and the second of approved loans. In both data sets the classification ratio for the non-parametric method was found to be 1, while that for the parametric method was found to be 0.87 (only 87 out of the 100 observations were correctly classified) and 0.83 respectively. Simulation was done for sample sizes of 25, 50, 75, 100, 150, 200, 250, 300 and 500. The simulated results further showed that the accuracy of both models decreases as sample size increases. Key words: parametric, non-parametric, local likelihood, logit, confusion matrix and classification ratio
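    The sketch below compares a parametric logit with a nonparametric classifier on simulated binary data and reports the classification ratio and confusion matrix used in the abstract. A k-nearest-neighbour classifier stands in for the paper's local-likelihood estimator, and the simulated decision boundary is an illustrative assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(7)

# Simulated binary outcome with a nonlinear decision boundary
n = 300
X = rng.normal(size=(n, 2))
p = 1 / (1 + np.exp(-(np.sin(3 * X[:, 0]) + X[:, 1] ** 2 - 1)))
y = rng.binomial(1, p)

split = 200
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

# Parametric benchmark: logit fitted by maximum likelihood
logit = LogisticRegression().fit(X_tr, y_tr)

# Nonparametric stand-in for the local-likelihood fit used in the paper
knn = KNeighborsClassifier(n_neighbors=15).fit(X_tr, y_tr)

for name, clf in [("logit", logit), ("nonparametric", knn)]:
    y_hat = clf.predict(X_te)
    print(name, "classification ratio:", accuracy_score(y_te, y_hat))
    print(confusion_matrix(y_te, y_hat))
```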

    Grand Challenges in global eye health: a global prioritisation process using Delphi method

    Background: We undertook a Grand Challenges in Global Eye Health prioritisation exercise to identify the key issues that must be addressed to improve eye health in the context of an ageing population, to eliminate persistent inequities in health-care access, and to mitigate widespread resource limitations. Methods: Drawing on methods used in previous Grand Challenges studies, we used a multi-step recruitment strategy to assemble a diverse panel of individuals from a range of disciplines relevant to global eye health, from all regions globally, to participate in a three-round, online, Delphi-like prioritisation process to nominate and rank challenges in global eye health. Through this process, we developed both global and regional priority lists. Findings: Between Sept 1 and Dec 12, 2019, 470 individuals completed round 1 of the process, of whom 336 completed all three rounds (round 2 between Feb 26 and March 18, 2020, and round 3 between April 2 and April 25, 2020). Of the 336 participants, 156 (46%) were women and 180 (54%) were men. The proportion of participants who worked in each region ranged from 104 (31%) in sub-Saharan Africa to 21 (6%) in central Europe, eastern Europe, and central Asia. Of 85 unique challenges identified after round 1, 16 challenges were prioritised at the global level: six focused on detection and treatment of conditions (cataract, refractive error, glaucoma, diabetic retinopathy, services for children, and screening for early detection), two focused on addressing shortages in human resource capacity, five on other health service and policy factors (including strengthening policies, integration, health information systems, and budget allocation), and three on improving access to care and promoting equity. Interpretation: This list of Grand Challenges serves as a starting point for immediate action by funders to guide investment in research and innovation in eye health. It challenges researchers, clinicians, and policy makers to build collaborations to address specific challenges. Funding: The Queen Elizabeth Diamond Jubilee Trust, Moorfields Eye Charity, National Institute for Health Research Moorfields Biomedical Research Centre, Wellcome Trust, Sightsavers, The Fred Hollows Foundation, The Seva Foundation, British Council for the Prevention of Blindness, and Christian Blind Mission. Translations: For the French, Spanish, Chinese, Portuguese, Arabic and Persian translations of the abstract, see the Supplementary Materials section.