Loss Given Default Modelling: Comparative Analysis
In this study we investigate several of the most popular Loss Given Default (LGD) models (LSM, Tobit, Three-Tiered Tobit, Beta Regression, Inflated Beta Regression, Censored Gamma Regression) in order to compare their performance. We show that, for a given input data set, the quality of the model calibration depends mainly on the proper choice (and availability) of explanatory variables (model factors), not on the fitting model. Model factors were chosen based on the amplitude of their correlation with the historical LGDs of the calibration data set. Non-quantitative parameters (industry, ranking, type of collateral) were assigned numerical values equal to the average LGD of each category. We show that different debt instruments depend on different sets of model factors (from three factors for Revolving Credit or Subordinated Bonds to eight factors for Senior Secured Bonds). Calibrating LGD models on distressed business-cycle periods provides a better fit than using data from the total available time span. Calibration algorithms and details of their implementation in the R statistical package are presented. We also demonstrate how LGD models can be used for stress testing. The results of this study should be of use to risk managers concerned with Basel accord compliance.
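As a minimal illustration of the factor-selection and encoding steps described in this abstract, the sketch below ranks candidate factors by the amplitude of their correlation with historical LGDs and encodes a categorical factor by its average LGD. It uses Python/NumPy rather than the R code the abstract refers to, and all data are synthetic; variable names and the planted coefficients are hypothetical.

```python
import numpy as np

# Hypothetical synthetic data: 200 historical workouts, 8 candidate factors.
rng = np.random.default_rng(0)
factors = rng.normal(size=(200, 8))            # candidate explanatory variables
lgd = 0.5 + 0.3 * factors[:, 2] - 0.2 * factors[:, 5] + 0.1 * rng.normal(size=200)
lgd = np.clip(lgd, 0.0, 1.0)                   # LGD is bounded in [0, 1]

# Rank factors by the amplitude (absolute value) of their correlation with LGD.
corr = np.array([np.corrcoef(factors[:, j], lgd)[0, 1] for j in range(8)])
ranked = np.argsort(-np.abs(corr))             # strongest factors first
selected = ranked[:3]                          # keep, e.g., the top three factors

# A non-quantitative factor (e.g. industry) enters via its historical mean LGD.
industry = rng.integers(0, 4, size=200)        # 4 hypothetical industry codes
industry_score = np.array([lgd[industry == k].mean() for k in range(4)])[industry]
```

With the planted coefficients above, factors 2 and 5 dominate the correlation ranking; in a real calibration the ranking would of course be driven by the historical data.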
Compositional analysis of InAs-GaAs-GaSb heterostructures by low-loss electron energy loss spectroscopy
As an alternative to Core-Loss Electron Energy Loss Spectroscopy, Low-Loss EELS is suitable for compositional analysis of complex heterostructures, such as the InAs-GaAs-GaSb system, since in this energy range the edges corresponding to these elements are better defined than in Core-Loss. Furthermore, analysis of the bulk plasmon peak, which lies in this energy range, also provides information about the composition. In this work, compositional information on an InAs-GaAs-GaSb heterostructure has been obtained from Low-Loss EEL spectra.
A differentiable BLEU loss. Analysis and first results
In natural language generation tasks, such as neural machine translation and image captioning, there is usually a mismatch between the optimized loss and the de facto evaluation criterion, namely token-level maximum likelihood and corpus-level BLEU score. This article tries to reduce this gap by defining differentiable computations of the BLEU and GLEU scores. We test this approach on simple tasks, obtaining valuable lessons about its potential applications but also its pitfalls, chiefly that these loss functions push each token in the hypothesis sequence toward the average of the tokens in the reference, resulting in a poor training signal.
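A minimal sketch of the underlying idea (not the article's actual formulation): replace hard n-gram counts with expected counts under the model's per-step distributions, so that unigram precision becomes a smooth function of the probabilities. The toy example also reproduces the pitfall the abstract mentions: a hypothesis that merely spreads its mass over the reference tokens scores as well as an exact match.

```python
import numpy as np

def soft_unigram_precision(probs, ref_ids, vocab_size):
    """Soft unigram precision: expected token counts under the model's
    per-step distributions, clipped by the reference counts (BLEU-style).
    With autograd tensors in place of NumPy arrays, this is differentiable."""
    expected = probs.sum(axis=0)                          # E[count(v)] over steps
    ref_counts = np.bincount(ref_ids, minlength=vocab_size)
    clipped = np.minimum(expected, ref_counts)            # clip as in BLEU
    return clipped.sum() / probs.shape[0]

vocab_size = 5
ref = np.array([1, 2, 3])

# An exact one-hot match scores 1.0 ...
onehot = np.eye(vocab_size)[ref]
exact = soft_unigram_precision(onehot, ref, vocab_size)   # 1.0

# ... but so does the degenerate hypothesis in which every step spreads its
# probability mass uniformly over the reference tokens: the averaging pitfall.
avg = np.tile(np.bincount(ref, minlength=vocab_size) / len(ref), (len(ref), 1))
blurred = soft_unigram_precision(avg, ref, vocab_size)    # also 1.0
```

The tie between the two hypotheses illustrates why such a loss can yield a poor training signal: the gradient has no reason to prefer the exact sequence over the token-averaged one.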
Loss analysis of air-core photonic crystal fibers
Using a multipole-moment approach, we analyze the loss of an air-core photonic crystal fiber and demonstrate that it is possible to reduce the transmission loss due to photon radiation leakage through the photonic crystal cladding to a level below 0.01 dB/km with eight rings of air holes. An analogy is drawn between air-core photonic crystal fiber modes and Bragg fiber modes. The influence of material absorption in the silica glass is discussed.
Loss Functions for Top-k Error: Analysis and Insights
In order to push the performance on realistic computer vision tasks, the number of classes in modern benchmark datasets has increased significantly in recent years. This increase comes along with increased ambiguity between class labels, raising the question of whether top-1 error is the right performance measure. In this paper, we provide an extensive comparison and evaluation of established multiclass methods, comparing their top-k performance from both a practical and a theoretical perspective. Moreover, we introduce novel top-k loss functions as modifications of the softmax and multiclass SVM losses and provide efficient optimization schemes for them. In the experiments, we compare all of the proposed and established methods for top-k error optimization on various datasets. An interesting insight of this paper is that the softmax loss yields competitive top-k performance for all k simultaneously. For a specific top-k error, our new top-k losses typically lead to further improvements while being faster to train than the softmax.
Comment: In Computer Vision and Pattern Recognition (CVPR), 201
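As a reference point for the evaluation measure this abstract targets, top-k error can be computed directly from class scores; the following is a generic NumPy sketch (not the authors' implementation), with a small hypothetical score matrix.

```python
import numpy as np

def topk_error(scores, labels, k):
    """Fraction of examples whose true label is NOT among the k top-scoring classes."""
    topk = np.argsort(-scores, axis=1)[:, :k]      # indices of the k best classes
    hit = (topk == labels[:, None]).any(axis=1)    # true label found in the top k?
    return 1.0 - hit.mean()

scores = np.array([[0.1, 0.5, 0.4],    # true class 2 ranks second
                   [0.7, 0.2, 0.1],    # true class 0 ranks first
                   [0.2, 0.3, 0.5]])   # true class 1 ranks second
labels = np.array([2, 0, 1])

err1 = topk_error(scores, labels, 1)   # 2/3: only the middle example is top-1 correct
err2 = topk_error(scores, labels, 2)   # 0.0: every true label is within the top two
```

The gap between err1 and err2 on this toy data shows why, as the class labels grow ambiguous, top-1 error can understate how well a classifier actually ranks the correct class.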
Zero-loss/deflection map analysis
Experimental plots of the fraction of detected electrons removed from the zero-loss peak, versus the fraction of incident electrons scattered outside the objective aperture, can serve as a robust fingerprint of object contrast in an energy-filtered transmission electron microscope (EFTEM). Examples of this, along with the first in a series of models for interpreting the resulting patterns, were presented at the August 2010 meeting of the Microscopy Society of America in Portland, Oregon, and published in {\em Microscopy and MicroAnalysis} {\bf 16}, Supplement 2, pages 1534-1535 by Cambridge University Press.
Comment: 3 pages (3 figs, 4 refs) RevTeX, cf.
http://www.umsl.edu/~fraundorfp/zldeflmaps.htm
Tortious Acts Affecting Markets
The present paper examines an injurer who causes a temporary blackout to a firm as the primary victim, but who also affects the firm's customers and competitors. Reflecting existing legal practice, the paper investigates the efficiency properties of a negligence rule that grants recovery of private losses, but only to the primary victim. This regime is shown to provide efficient incentives for precaution provided that the primary loss exceeds the social loss from accidents. The main contribution of the paper is an explicit analysis of markets affected by a temporary blackout of one firm. The analysis reveals that the private loss does indeed exceed the social loss if the market is less than fully competitive. Moreover, the net social loss remains positive regardless of which market structure prevails.
