A Simple Continuous Measure of Credit Risk
This paper introduces a simple continuous measure of credit risk that associates to each firm a risk parameter related to the firm's risk-neutral default intensity. These parameters can be computed from quoted bond prices and allow assignment of credit ratings much finer than those provided by various rating agencies. We estimate the risk measures on a daily basis for a sample of US firms and compare them with the corresponding ratings provided by Moody's and the distance to default measures calculated using the Merton (1974) model. The three measures group the sample of firms into various risk classes in a similar but far from identical way, possibly reflecting the models' different forecasting horizons. Among the three measures, the highest rank correlation is found between our continuous measure and Moody's ratings. The techniques in this paper can be used to extract the entire distribution of inter-temporal risk-neutral default intensities which is useful for time-to-default estimators as well as for pricing credit derivatives.
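The mapping from a quoted bond price to a risk-neutral default intensity can be illustrated under the simplest reduced-form assumptions (constant intensity, zero recovery, a single zero-coupon bond); this is a toy inversion, not the paper's estimation procedure, and the numbers are made up:

```python
import math

def implied_default_intensity(price, face, r, T):
    """Back out a constant risk-neutral default intensity from the price of
    a zero-coupon bond, assuming zero recovery, so that
        price = face * exp(-(r + intensity) * T).
    A toy illustration, not the paper's estimator."""
    return -math.log(price / face) / T - r

# A 2-year zero quoted at 90 against a 3% risk-free rate
lam = implied_default_intensity(price=90.0, face=100.0, r=0.03, T=2.0)
```

The intensity acts as a continuous analogue of a rating notch: lower prices (all else equal) imply higher intensities.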
A Discrete Evolutionary Model for Chess Players' Ratings
The Elo system for rating chess players, also used in other games and sports,
was adopted by the World Chess Federation over four decades ago. Although not
without controversy, it is accepted as generally reliable and provides a method
for assessing players' strengths and ranking them in official tournaments.
It is generally accepted that the distribution of players' rating data is
approximately normal but, to date, no stochastic model of how the distribution
might have arisen has been proposed. We propose such an evolutionary stochastic
model, which models the arrival of players into the rating pool, the games they
play against each other, and how the results of these games affect their
ratings. Using a continuous approximation to the discrete model, we derive the
distribution of players' ratings at time t as a normal distribution, whose
variance increases over time as a logarithmic function of t. We validate
the model using published rating data from 2007 to 2010, showing that the
parameters obtained from the data can be recovered through simulations of the
stochastic model.
The distribution of players' ratings is only approximately normal and has
been shown to have a small negative skew. We show how to modify our
evolutionary stochastic model to take this skewness into account, and we
validate the modified model using the published official rating data.
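The game step that drives the model's rating dynamics is the standard Elo update, which can be sketched directly (K = 32 here is an illustrative choice, not the value for every rating band):

```python
def elo_update(r_a, r_b, score_a, k=32):
    """One Elo step for player A: score_a is 1 for a win, 0.5 for a draw,
    0 for a loss. A's expected score is a logistic function of the
    rating difference, and the update moves A toward the observed result."""
    expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    return r_a + k * (score_a - expected_a)

# Two equally rated players: a win moves the winner up by K/2 points
new_rating = elo_update(1500.0, 1500.0, score_a=1.0)   # 1516.0
```

Iterating this update over many randomly paired games is exactly the kind of simulation the abstract describes for recovering the model parameters.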
Estimation in the continuous time mover-stayer model with an application to bond ratings migration
The usual tool for modeling bond ratings migration is a discrete, time-homogeneous
Markov chain. Such a model assumes that all bonds are homogeneous with respect to their movement behavior among rating categories
and that the movement behavior does not change over time. However, among
recognized sources of heterogeneity in ratings migration is age of a bond (time
elapsed since issuance). It has been observed that young bonds have a lower propensity to change ratings, and thus to default, than more seasoned bonds. The aim of this paper is to introduce a continuous, time-nonhomogeneous model for bond ratings migration, which also incorporates a simple form of population heterogeneity. The specific form of heterogeneity postulated by
the proposed model appears to be suitable for modeling the effect of age of a bond on its propensity to change ratings. This model, called a mover-stayer model, is an extension of a time-nonhomogeneous Markov chain.
This paper derives the maximum likelihood estimators for the parameters of a continuous time mover-stayer model based on a sample of independent continuously monitored histories of the process, and develops the likelihood
ratio test for discriminating between the Markov chain and the mover-stayer model. The methods are illustrated using a sample of rating histories of young corporate issuers. For this sample, the likelihood ratio test rejects a Markov chain in favor of a mover-stayer model. For young bonds with
lowest rating the default probabilities predicted by the mover-stayer model
are substantially lower than those predicted by the Markov chain.
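As a rough illustration of the mover-stayer idea, a time-homogeneous version of the transition matrix can be written as a mixture of permanent stayers and a Markov "mover" chain; the paper itself treats the time-nonhomogeneous case and derives the MLEs, and the generator and stayer shares below are invented for the example:

```python
import numpy as np
from scipy.linalg import expm

def mover_stayer_transition(Q, s, t):
    """Transition matrix over horizon t of a time-homogeneous mover-stayer
    model: a fraction s[i] of obligors in state i are 'stayers' who never
    move, while 'movers' follow a Markov chain with generator Q."""
    n = Q.shape[0]
    movers = expm(Q * t)                 # mover-chain transition matrix
    S = np.diag(s)
    return S + (np.eye(n) - S) @ movers

# Hypothetical 3-state system; the last state (default) is absorbing
Q = np.array([[-0.20, 0.15, 0.05],
              [ 0.10, -0.30, 0.20],
              [ 0.00, 0.00, 0.00]])
s = np.array([0.4, 0.3, 0.0])            # share of stayers per state
P = mover_stayer_transition(Q, s, t=1.0)
```

Because stayers inflate the diagonal, the mixture's default probabilities from non-default states are lower than the pure Markov chain's, which is exactly the direction of the empirical finding in the abstract.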
Testing Homogeneity of Time-Continuous Rating Transitions
Banks could achieve substantial improvements in their portfolio credit risk assessment by estimating rating transition matrices within a time-continuous Markov model, thereby using continuous-time rating transitions provided by internal rating systems instead of discrete-time rating information. A non-parametric test for the hypothesis of time-homogeneity is developed. The alternative hypothesis is multiple structural change of transition intensities, i.e. time-varying transition probabilities. The partial-likelihood ratio for the multivariate counting process of rating transitions is shown to be asymptotically χ²-distributed. A Monte Carlo simulation finds both size and power to be adequate for our example. We analyze transitions in credit ratings in a rating system with 8 rating states and 2743 transitions for 3699 obligors observed over seven years. The test rejects the homogeneity hypothesis at all conventional levels of significance. Keywords: portfolio credit risk, rating transitions, Markov model, time-homogeneity, partial likelihood
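The flavor of such a test can be sketched for a single transition type and a single candidate change point: compare a constant-intensity fit against separate intensities in two observation windows with a likelihood ratio. This is a toy reduction of the abstract's test, which handles the full multivariate counting process and multiple structural changes:

```python
import math
from scipy.stats import chi2

def lr_homogeneity_test(n1, T1, n2, T2):
    """Likelihood-ratio test of a constant transition intensity against a
    single structural change: n_k transitions observed over total exposure
    time T_k in window k. Uses the Poisson-process log-likelihood at the
    MLE rate k/e, and the asymptotic chi-squared(1) reference distribution."""
    n, T = n1 + n2, T1 + T2
    def loglik(k, e):
        return k * math.log(k / e) - k if k > 0 else 0.0
    lr = 2.0 * (loglik(n1, T1) + loglik(n2, T2) - loglik(n, T))
    return lr, chi2.sf(lr, df=1)
```

Equal empirical rates in the two windows give a statistic of zero; a sharp rate change produces a large statistic and a small p-value.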
An Evaluation of Effectiveness of Fuzzy Logic Model in Predicting the Business Bankruptcy
In the face of the current global financial crisis, the future existence of firms is uncertain. The characteristics and dynamics of the current world, and the interdependences between the financial and economic markets within it, demand continuous research into new methods of bankruptcy prediction. The purpose of this article is to present a fuzzy logic-based system that predicts bankruptcy one, two and three years before the possible failure of companies. The proposed fuzzy model uses financial ratios, and the dynamics of those ratios, as inputs. In order to design and implement the model, the authors used financial statements of 132 stock equity companies (25 bankrupt and 107 non-bankrupt). The paper also presents the testing and validation of the created fuzzy logic models. Keywords: bankruptcy, crisis, prediction, fuzzy logic, ratings
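The abstract does not spell out its membership functions, but the basic fuzzy-logic ingredient can be illustrated generically: a triangular membership function that converts a crisp financial ratio into a degree of membership in a linguistic category (the "low liquidity" set and its parameters below are hypothetical):

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership function: 0 outside [a, c], rising
    linearly to 1 at the peak b, then falling linearly back to 0."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzy set 'low liquidity' over a current-ratio input:
# a ratio of 0.8 belongs to the set to degree ~0.57
low_liquidity = tri_membership(0.8, a=0.0, b=0.5, c=1.2)
```

In a full system, several such memberships feed a rule base (e.g. "IF liquidity is low AND leverage is high THEN bankruptcy risk is high") whose aggregated output is defuzzified into a prediction.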
Transcranial direct current stimulation (tDCS) in the treatment of depression: Systematic review and meta-analysis of efficacy and tolerability
BACKGROUND
Transcranial direct current stimulation (tDCS) is a potential alternative treatment option for major depressive episodes (MDE).
OBJECTIVES
We address the efficacy and safety of tDCS in MDE.
METHODS
The outcome measures were Hedges' g for continuous depression ratings, and categorical response and remission rates.
RESULTS
A random effects model indicated that tDCS was superior to sham tDCS (k=11, N=393, g=0.30, 95% CI [0.04, 0.57], p=0.027). Adjunctive antidepressant medication and cognitive control training negatively impacted the treatment effect. The pooled log odds ratios (LOR) for response and remission were positive, but statistically non-significant (response: k=9, LOR=0.36, 95% CI [-0.16, 0.88], p=0.176; remission: k=9, LOR=0.25, 95% CI [-0.42, 0.91], p=0.468). We estimated that for a study to detect the pooled continuous effect (g=0.30) at 80% power (alpha=0.05), a total N of at least 346 would be required (with the total N required to detect the upper and lower bound being 49 and 12,693, respectively).
CONCLUSIONS
tDCS may be efficacious for the treatment of MDE. The data do not support the use of tDCS in treatment-resistant depression, or as an add-on augmentation treatment. Larger studies over longer treatment periods are needed.
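The abstract's power calculation can be sanity-checked with the usual normal-approximation sample-size formula for a two-arm comparison; the approximation gives 350, close to the reported 346 (which presumably reflects an exact t-based calculation):

```python
from math import ceil
from scipy.stats import norm

def total_n_for_effect(g, alpha=0.05, power=0.80):
    """Normal-approximation total sample size for a two-arm trial to detect
    standardized effect g: n per group = 2 * ((z_{1-alpha/2} + z_power) / g)^2.
    A back-of-the-envelope check, not the meta-analysis's exact method."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_per_group = ceil(2 * (z / g) ** 2)
    return 2 * n_per_group

n_total = total_n_for_effect(0.30)   # close to the ~346 reported
```

The same formula makes the abstract's wide range (49 to 12,693 for the CI bounds) intuitive: required N scales with 1/g².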
Boosted Beta Regression
Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fit a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established - yet unstable - approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression, estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures.
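A bare-bones version of the classical maximum-likelihood approach that the authors contrast with boosting can be sketched as follows (logit link for the mean, constant precision φ, no variable selection or variance modeling; the parameterization mu·φ, (1−mu)·φ is the standard one):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

def beta_regression_fit(X, y, phi_init=5.0):
    """ML beta regression: y_i ~ Beta(mu_i*phi, (1-mu_i)*phi) with
    logit(mu_i) = X_i @ beta. Returns (coefficients, precision phi).
    A minimal sketch of the classical estimator, not the boosted method."""
    def negloglik(params):
        beta, log_phi = params[:-1], params[-1]
        mu = expit(X @ beta)
        phi = np.exp(log_phi)
        a, b = mu * phi, (1.0 - mu) * phi
        return -np.sum(gammaln(phi) - gammaln(a) - gammaln(b)
                       + (a - 1.0) * np.log(y) + (b - 1.0) * np.log(1.0 - y))
    x0 = np.zeros(X.shape[1] + 1)
    x0[-1] = np.log(phi_init)
    res = minimize(negloglik, x0, method="BFGS")
    return res.x[:-1], np.exp(res.x[-1])
```

Boosting replaces the joint optimization with componentwise gradient updates, which is what lets estimation and variable selection happen in one pass.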
Stochastic satisficing account of confidence in uncertain value-based decisions
Every day we make choices under uncertainty; choosing what route to work or which queue in a supermarket to take, for example. It is unclear how outcome variance, e.g. uncertainty about waiting time in a queue, affects decisions and confidence when outcome is stochastic and continuous. How does one evaluate and choose between an option with unreliable but high expected reward, and an option with more certain but lower expected reward? Here we used an experimental design where two choices’ payoffs took continuous values, to examine the effect of outcome variance on decision and confidence. We found that our participants’ probability of choosing the good (high expected reward) option decreased when the good or the bad options’ payoffs were more variable. Their confidence ratings were affected by outcome variability, but only when choosing the good option. Unlike perceptual detection tasks, confidence ratings correlated only weakly with decisions’ time, but correlated with the consistency of trial-by-trial choices. Inspired by the satisficing heuristic, we propose a “stochastic satisficing” (SSAT) model for evaluating options with continuous uncertain outcomes. In this model, options are evaluated by their probability of exceeding an acceptability threshold, and confidence reports scale with the chosen option’s thus-defined satisficing probability. Participants’ decisions were best explained by an expected reward model, while the SSAT model provided the best prediction of decision confidence. We further tested and verified the predictions of this model in a second experiment. Our model and experimental results generalize models of metacognition from perceptual detection tasks to continuous value-based decisions. Finally, we discuss how the stochastic satisficing account of decision confidence serves psychological and social purposes associated with the evaluation, communication and justification of decision-making.
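The SSAT quantity itself is easy to write down if one assumes Gaussian-distributed payoffs (an illustrative assumption; the threshold and option parameters below are made up):

```python
from scipy.stats import norm

def satisficing_probability(mu, sigma, threshold):
    """Probability that a Gaussian payoff N(mu, sigma^2) exceeds an
    acceptability threshold -- the quantity the SSAT account ties to
    confidence. The Gaussian form is an illustrative assumption."""
    return norm.sf(threshold, loc=mu, scale=sigma)

# Reliable-but-modest option vs risky-but-rich option, threshold = 5
p_safe = satisficing_probability(mu=6.0, sigma=1.0, threshold=5.0)
p_risky = satisficing_probability(mu=8.0, sigma=6.0, threshold=5.0)
```

Note that the riskier option has the higher expected reward yet the lower satisficing probability, which is how the model lets outcome variance depress confidence even when expected value favors the choice.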
When Can We Trust Our Memories? Quantitative and Qualitative Indicators of Recognition Accuracy
In this dissertation, I present a quartet of experiments that studied confidence ratings and remember/know/guess judgments as indicators of recognition accuracy. The goal of these experiments was to examine the validity of these quantitative and qualitative measures of metacognitive monitoring and to interpret them using the continuous dual-process model of signal detection (Wixted & Mickes, 2010).
In Experiment 1, subjects heard or read items belonging to categorized lists and took an old/new recognition test over studied and new items while making remember/know/guess judgments after each recognition decision. Consistent with prior literature, remember judgments were more likely to be accurate than know judgments, and knows more accurate than guesses. Subjects were more likely to commit remember false alarms to nonstudied category members of higher response frequency for a category (e.g., eagle) than to items of lower response frequency (e.g., ostrich), although the overall proportion of false remembering was lower than the proportion often found using associative false memory procedures (e.g., Roediger & McDermott, 1995). Presentation modality did not affect recognition performance.
In Experiment 2, subjects provided both confidence ratings and remember/know/guess judgments following recognition decisions in an otherwise similar procedure. Overall, accuracy correlated with both confidence and remember/know/guess judgment, and remembered memories rated with high confidence were more accurate than either high confidence or remembered memories alone. These results suggested that confident retrieval of episodic and contextual information supported accurate recognition decisions. I also calculated confidence-accuracy correlations using four methods and found that confidence and accuracy were correlated for remembered and known memories, but that no correlation was found for guesses.
In Experiment 3, subjects studied category items in different screen positions (instead of in the center of the screen, as in the prior experiments). On the subsequent recognition test, subjects judged whether presented items were old or new and also reported the screen position in which each item had been presented (i.e., a test of source memory). Confidence ratings followed these recognition + source decisions. A similar relationship was found between confidence ratings and remember/know/guess judgments when predicting both old/new recognition accuracy and source accuracy. This result contradicts predictions made by the continuous dual-process model, which states that only remember judgments, and not confidence ratings, should indicate source accuracy.
Experiment 4 was conducted to replicate and extend results of Experiment 3 and to examine the effects of the order of judgments provided during the test. In this experiment, subjects were asked to make old/new recognition decisions, old/new confidence ratings, source decisions, source confidence ratings, and remember/know/guess judgments, with test order counterbalanced among four between-subjects conditions. In this study, I found that the relationship between confidence and old/new and source accuracy as a function of remember/know/guess judgment was similar regardless of condition, reproducing the observations of Experiment 3. These results were also inconsistent with predictions made by the continuous dual-process model and suggested that the results of Experiment 3 were not due to confounding effects of judgment order.
Taken together, the results of these four experiments suggest that confidence and remember/know/guess judgments are valuable when used jointly and that both contribute individually as indicators of recognition accuracy. The results show that the continuous dual-process model of signal detection is a helpful way to consider the interaction of confidence ratings and remember/know/guess judgments, but they also imply that additional research is necessary to evaluate how the present results fit with the model. In particular, Experiments 3 and 4 failed to obtain Wixted and Mickes' (2010) finding of higher source accuracy for remember responses than for knows and guesses regardless of level of confidence.
The practical message is that researchers and rememberers should consider both quantitative and qualitative characteristics of a memory when attempting to infer its accuracy.