
    The probability of default in internal ratings based (IRB) models in Basel II: an application of the rough sets methodology

    The new Capital Accord of June 2004 (Basel II) opens the way for, and encourages, credit entities to implement their own models for measuring financial risks. In this paper we focus on internal ratings based (IRB) models for the assessment of credit risk, and specifically on the approach to one of their components: the probability of default (PD). The traditional methods used to model credit risk, such as discriminant analysis and logit and probit models, rest on a series of statistical restrictions; the rough sets methodology is presented as an alternative that avoids the limitations of these classical statistical methods. We apply the rough sets methodology to a database of 106 companies applying for credit, with the aim of obtaining the ratios that best discriminate between healthy and failed companies, together with a series of decision rules that help to detect potentially defaulting operations, as a first step in modelling the probability of default. Lastly, we compare the results obtained against those achieved with classical discriminant analysis, and conclude that, in our case, the rough sets methodology gives better classification results.
    Funding: Junta de Andalucía P06-SEJ-0153
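
    To make the approach concrete, the sketch below shows, in Python, how rough-set-style decision rules can be read off a table of discretized financial ratios: cases are grouped by their condition-attribute values and a rule is marked "certain" only when every matching case shares the same outcome. This is an illustrative sketch, not the authors' implementation; the ratio names, the discretization, and the tiny example data set are hypothetical.

    from collections import Counter, defaultdict

    # Hypothetical cases: discretized ratio values paired with the observed outcome.
    data = [
        ({"liquidity": "low", "leverage": "high"}, "failed"),
        ({"liquidity": "high", "leverage": "low"}, "healthy"),
        ({"liquidity": "low", "leverage": "high"}, "failed"),
        ({"liquidity": "high", "leverage": "high"}, "healthy"),
    ]

    def decision_rules(rows, attributes):
        # Group cases by their attribute values; a rule is "certain" when all
        # matching cases share one outcome, "possible" otherwise.
        groups = defaultdict(list)
        for conditions, outcome in rows:
            groups[tuple(conditions[a] for a in attributes)].append(outcome)
        rules = []
        for key, outcomes in groups.items():
            label, _ = Counter(outcomes).most_common(1)[0]
            certainty = "certain" if len(set(outcomes)) == 1 else "possible"
            conds = " AND ".join(f"{a}={v}" for a, v in zip(attributes, key))
            rules.append(f"IF {conds} THEN outcome={label} ({certainty})")
        return rules

    for rule in decision_rules(data, ["liquidity", "leverage"]):
        print(rule)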

    Semantics-Preserving Dimensionality Reduction: Rough and Fuzzy-Rough-Based Approaches

    Semantics-preserving dimensionality reduction refers to the problem of selecting those input features that are most predictive of a given outcome, a problem encountered in many areas such as machine learning, pattern recognition, and signal processing. It has found successful application in tasks involving data sets with huge numbers of features (on the order of tens of thousands), which would otherwise be impossible to process further. Recent examples include text processing and Web content classification. One of the many successful applications of rough set theory has been to this feature selection area. This paper reviews those techniques that preserve the underlying semantics of the data, using crisp and fuzzy rough set-based methodologies. Several approaches to feature selection based on rough set theory are experimentally compared. Additionally, a new area in feature selection, feature grouping, is highlighted and a rough set-based feature grouping technique is detailed.
    Index Terms: dimensionality reduction, feature selection, feature transformation, rough selection, fuzzy-rough selection.
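
    As an illustration of the dependency-based selection that crisp rough set approaches rely on, the sketch below greedily adds the feature that most increases the rough-set dependency degree, in the spirit of QuickReduct-style algorithms. The function names and the data layout (a list of per-sample dictionaries) are assumptions made for the example, not code from the paper.

    from collections import defaultdict

    def dependency(samples, labels, features):
        # Rough-set dependency degree: the fraction of samples whose values on
        # `features` determine the label (positive region size / universe size).
        if not features:
            return 0.0
        groups = defaultdict(set)
        for row, label in zip(samples, labels):
            groups[tuple(row[f] for f in features)].add(label)
        consistent = sum(
            1 for row in samples
            if len(groups[tuple(row[f] for f in features)]) == 1
        )
        return consistent / len(samples)

    def greedy_reduct(samples, labels, all_features):
        # Repeatedly add the feature giving the largest increase in dependency;
        # stop when no remaining feature improves it.
        selected, best = [], 0.0
        while True:
            candidate, gain = None, best
            for f in all_features:
                if f in selected:
                    continue
                d = dependency(samples, labels, selected + [f])
                if d > gain:
                    candidate, gain = f, d
            if candidate is None:
                return selected
            selected.append(candidate)
            best = gain

    # Toy usage: feature "b" alone already determines the label here.
    samples = [{"a": 0, "b": 1, "c": 0}, {"a": 0, "b": 0, "c": 1}, {"a": 1, "b": 0, "c": 1}]
    labels = ["yes", "no", "no"]
    print(greedy_reduct(samples, labels, ["a", "b", "c"]))  # ['b']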

    Data mining an EEG dataset with an emphasis on dimensionality reduction

    The human brain is a complex system that exhibits rich spatiotemporal dynamics. Among the non-invasive techniques for probing human brain dynamics, electroencephalography (EEG) provides a direct measure of cortical activity with millisecond temporal resolution. Early attempts to analyse EEG data relied on visual inspection of the records, but since the introduction of EEG recordings the volume of data generated from a study involving a single patient has increased exponentially, so automation based on pattern classification techniques has been applied with considerable success. In this study, a multi-step approach to the classification of EEG signals has been adopted. We analysed EEG time series recorded from healthy volunteers with open eyes and intracranial EEG recordings from patients with epilepsy during ictal (seizure) periods. A discrete wavelet transform was applied to the EEG data to extract temporal information in the form of changes in the frequency domain over time; that is, it can capture non-stationary signals embedded in the noisy background of the human brain. Principal components analysis (PCA) and rough sets were used to reduce the data dimensionality, and a multi-classifier scheme consisting of LVQ2.1 neural networks was developed for the classification task. The experimental results validated the proposed methodology.
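
    A minimal sketch of this kind of pipeline, assuming PyWavelets and scikit-learn, is shown below: each EEG segment is summarised by simple statistics of its DWT sub-bands, and the resulting feature matrix is reduced with PCA before being handed to a classifier. The wavelet, decomposition level, segment length, and synthetic data are assumptions for illustration, not the settings used in the study (which also used rough sets for reduction and LVQ2.1 networks for classification).

    import numpy as np
    import pywt
    from sklearn.decomposition import PCA

    def wavelet_features(segment, wavelet="db4", level=4):
        # Summarise each DWT sub-band by mean absolute value, standard deviation,
        # and energy; these become the segment's feature vector.
        coeffs = pywt.wavedec(segment, wavelet, level=level)
        feats = []
        for c in coeffs:
            feats.extend([np.mean(np.abs(c)), np.std(c), np.sum(c ** 2)])
        return np.array(feats)

    # Hypothetical data: 100 single-channel EEG segments of 1,024 samples each.
    rng = np.random.default_rng(0)
    segments = rng.standard_normal((100, 1024))

    X = np.vstack([wavelet_features(s) for s in segments])
    X_reduced = PCA(n_components=5).fit_transform(X)  # low-dimensional inputs for a classifier
    print(X_reduced.shape)  # (100, 5)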

    Combining rough and fuzzy sets for feature selection

    • …