211 research outputs found
Quantitative Validation: An Overview and Framework for PD Backtesting and Benchmarking.
The aim of credit risk models is to identify and quantify future outcomes of a set of risk measurements. In other words, the model's purpose is to provide as good an approximation as possible of the true underlying risk relationship between a set of inputs and a target variable. These parameters are used for regulatory capital calculations to determine the capital needed to serve as a buffer protecting depositors in adverse economic conditions. In order to manage model risk, financial institutions need to set up validation processes to monitor the quality of their models on an ongoing basis. Validation is important to inform all stakeholders (e.g. board of directors, senior management, regulators, investors, borrowers, …) and thus allow them to make better decisions. Validation can be considered from both a quantitative and a qualitative point of view. Backtesting and benchmarking are key quantitative validation tools. In backtesting, the predicted risk measurements (PD, LGD, CCF) are contrasted with observed measurements using a workbench of available test statistics to evaluate the calibration, discrimination and stability of the model. Timely detection of reduced performance is crucial since it directly impacts profitability and risk management strategies. The aim of benchmarking is to compare internal risk measurements with external risk measurements so as to better gauge the quality of the internal rating system. This paper focuses on the quantitative PD validation process within a Basel II context. We set forth a traffic light indicator approach that employs all relevant statistical tests to quantitatively validate the PD model in use, and document this complete approach with a real-life case study.

Keywords: Framework; Benchmarking; Credit; Credit scoring; Control
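To make the backtesting idea concrete, here is a minimal sketch of a grade-level binomial backtest mapped to a traffic-light indicator. The grade counts, PD value, and the 5%/1% thresholds are illustrative assumptions, not the paper's actual workbench:

```python
# Illustrative sketch: one-sided binomial backtest of an estimated PD
# against the observed default count, mapped to a traffic light.
# The 5% / 1% cut-offs are assumed thresholds, not from the paper.
from math import comb

def binomial_pvalue(pd_est: float, n: int, defaults: int) -> float:
    """P(X >= defaults) for X ~ Binomial(n, pd_est): the probability of
    seeing at least this many defaults if the estimated PD were correct."""
    return sum(comb(n, k) * pd_est**k * (1 - pd_est)**(n - k)
               for k in range(defaults, n + 1))

def traffic_light(pd_est: float, n: int, defaults: int) -> str:
    p = binomial_pvalue(pd_est, n, defaults)
    if p > 0.05:
        return "green"   # observed defaults consistent with the estimated PD
    if p > 0.01:
        return "amber"   # borderline underestimation: monitor the grade
    return "red"         # strong evidence the PD is underestimated

# Example: a rating grade with 1,000 obligors and an estimated PD of 2%
print(traffic_light(0.02, 1000, 20))  # → green (close to the expected 20)
print(traffic_light(0.02, 1000, 45))  # → red (far above expectation)
```

The same test is run per rating grade and per period; the worst colour across grades is a natural headline indicator for the rating system.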
From spinal central pattern generators to cortical network: integrated BCI for walking rehabilitation
Success in locomotor rehabilitation programs can be improved with the use of brain-computer interfaces (BCIs). Although a wealth of research has demonstrated that locomotion is largely controlled by spinal mechanisms, the brain is of utmost importance in monitoring locomotor patterns and therefore contains information regarding central pattern generator functioning. In addition, there is a tight coordination between the upper and lower limbs, which can also be useful in controlling locomotion. The current paper critically investigates different approaches that are applicable to this field: the use of electroencephalogram (EEG), upper limb electromyogram (EMG), or a hybrid of the two neurophysiological signals to control assistive exoskeletons used in locomotion based on programmable central pattern generators (PCPGs) or dynamic recurrent neural networks (DRNNs). Plantar surface tactile stimulation devices combined with virtual reality may provide the sensation of walking while in a supine position, for use in training on brain signals generated during locomotion. These methods may exploit mechanisms of brain plasticity and assist in the neurorehabilitation of gait in a variety of clinical conditions, including stroke, spinal trauma, multiple sclerosis, and cerebral palsy.
Discharge of parental authority: considerations regarding the compatibility of the new provision of the Dutch Civil Code with the European Convention on Human Rights.
Coherent privaatrecht
Limits on diffuse fluxes of high energy extraterrestrial neutrinos with the AMANDA-B10 detector
Data from the AMANDA-B10 detector taken during the austral winter of 1997
have been searched for a diffuse flux of high energy extraterrestrial
muon-neutrinos, as predicted from, e.g., the sum of all active galaxies in the
universe. This search yielded no excess events above those expected from the
background atmospheric neutrinos, leading to upper limits on the
extraterrestrial neutrino flux. For an assumed E^-2 spectrum, a 90% classical
confidence level upper limit has been placed at a level E^2 Phi(E) = 8.4 x
10^-7 GeV cm^-2 s^-1 sr^-1 (for a predominant neutrino energy range 6-1000 TeV)
which is the most restrictive bound placed by any neutrino detector. When
specific predicted spectral forms are considered, it is found that some are
excluded.

Comment: Submitted to Physical Review Letters
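The "90% classical confidence level" quoted above refers to a frequentist (Neyman) upper limit on the mean signal count. A minimal sketch of that construction for a simple Poisson counting experiment, with all inputs illustrative (the paper's limit additionally folds in detector response and systematic effects):

```python
# Minimal sketch of a classical (Neyman) 90% CL Poisson upper limit:
# the smallest signal mean s such that, with expected background b,
# observing <= n_obs events has probability at most 10%.
from math import exp, factorial

def poisson_cdf(n_obs: int, mu: float) -> float:
    """P(X <= n_obs) for X ~ Poisson(mu)."""
    return sum(exp(-mu) * mu**k / factorial(k) for k in range(n_obs + 1))

def classical_upper_limit(n_obs: int, background: float, cl: float = 0.90) -> float:
    """Bisect for s with P(X <= n_obs | s + background) = 1 - cl."""
    lo, hi = 0.0, 1000.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n_obs, mid + background) > 1 - cl:
            lo = mid
        else:
            hi = mid
    return lo

# With zero observed events and zero background, the 90% CL limit is the
# familiar -ln(0.10) ~ 2.30 signal events.
print(round(classical_upper_limit(0, 0.0), 2))  # → 2.3
```

Dividing the resulting event limit by the detector's exposure to an assumed E^-2 spectrum is what turns a count limit into a flux limit such as the one quoted above.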
Search for Point Sources of High Energy Neutrinos with AMANDA
This paper describes the search for astronomical sources of high-energy
neutrinos using the AMANDA-B10 detector, an array of 302 photomultiplier tubes,
used for the detection of Cherenkov light from upward traveling
neutrino-induced muons, buried deep in ice at the South Pole. The absolute
pointing accuracy and angular resolution were studied by using coincident
events between the AMANDA detector and two independent telescopes on the
surface, the GASP air Cherenkov telescope and the SPASE extensive air shower
array. Using data collected from April to October of 1997 (130.1 days of
livetime), a general survey of the northern hemisphere revealed no
statistically significant excess of events from any direction. The sensitivity
for a flux of muon neutrinos is based on the effective detection area for
through-going muons. Averaged over the Northern sky, the effective detection
area exceeds 10,000 m^2 for E_{mu} ~ 10 TeV. Neutrinos generated in the
atmosphere by cosmic ray interactions were used to verify the predicted
performance of the detector. For a source with a differential energy spectrum
proportional to E_{nu}^{-2} and declination larger than +40 degrees, we obtain
E^2(dN_{nu}/dE) <= 10^{-6} GeV cm^{-2} s^{-1} for an energy threshold of 10 GeV.

Comment: 46 pages, 22 figures, 4 tables, submitted to Ap.
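The sensitivity statement above amounts to an event-rate integral: expected events = livetime × ∫ dE (dN/dE) × P(ν→μ) × A_eff. The toy bookkeeping below uses an illustrative constant effective area, an assumed rough power-law scaling for the ν→μ conversion probability, and assumed integration bounds; it is not the paper's analysis, which relies on full detector simulation:

```python
# Toy event-rate bookkeeping for an E^-2 point-source flux.
# All numbers are illustrative assumptions, not the paper's inputs.

def expected_events(norm: float = 1e-6,        # E^2 dN/dE in GeV cm^-2 s^-1
                    a_eff_cm2: float = 1e8,    # 10,000 m^2, held constant
                    live_s: float = 130.1 * 86400,  # livetime from the abstract
                    e_min: float = 10.0,       # GeV, assumed bounds
                    e_max: float = 1e6,
                    steps: int = 2000) -> float:
    """N = T * integral dE (dN/dE) * P_nu_mu(E) * A_eff, with
    dN/dE = norm * E**-2 [GeV^-1 cm^-2 s^-1] and E in GeV."""
    def integrand(e: float) -> float:
        flux = norm * e**-2
        p_nu_mu = 1.3e-6 * (e / 1e3)**0.8   # rough nu->mu scaling (assumed)
        return flux * p_nu_mu * a_eff_cm2
    # trapezoid rule on a log-spaced energy grid
    xs = [e_min * (e_max / e_min)**(i / steps) for i in range(steps + 1)]
    total = 0.0
    for lo, hi in zip(xs, xs[1:]):
        total += 0.5 * (integrand(lo) + integrand(hi)) * (hi - lo)
    return live_s * total

print(f"expected events: {expected_events():.1f}")
```

Requiring this expected count to exceed the event upper limit from the background-only observation is what sets the quoted flux sensitivity.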