The principle of parsimony and value-at-risk
Runner-up prize (accésit), Premio Estudios Financieros 1998
Value-at-risk (VaR) is now a body of financial and statistical doctrine intended to measure the risk that financial (and non-financial) institutions incur in their day-to-day market activity; more specifically, it seeks to quantify the risk of losses on operations both on and off the balance sheet arising from adverse movements in market prices.
Since the beginning of the year, banks have had to adjust their own funds so that they genuinely cushion the harmful effects of the risk exposure assumed in the financial and foreign-exchange markets, regardless of the accounting treatment given to each operation; that is, all operations must be taken into account, including those in derivatives markets. We believe this timely measure merits a few lines devoted to bringing up to date the advances that have been made in measuring VaR.
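The loss quantification described above can be illustrated with the simplest estimator in the VaR family, historical simulation: the VaR at a given confidence level is read off as a quantile of the empirical loss distribution. The function and data below are a minimal sketch, not any particular regulatory method.

```python
# Sketch: historical-simulation Value-at-Risk (VaR). The VaR at confidence
# level c is the loss threshold exceeded only (1 - c) of the time in the
# historical sample of returns. Illustrative toy data, not real market data.

def historical_var(returns, confidence=0.99):
    """VaR from a sample of daily returns (positive result = a loss)."""
    losses = sorted(-r for r in returns)          # losses, ascending
    k = int(confidence * len(losses))             # quantile index
    k = min(k, len(losses) - 1)
    return losses[k]

# Hypothetical daily portfolio returns over ten trading days
daily_returns = [0.01, -0.02, 0.005, -0.015, 0.0,
                 0.012, -0.03, 0.02, -0.01, 0.003]
var_95 = historical_var(daily_returns, confidence=0.95)
```

With this tiny sample the 95% VaR is simply the worst observed daily loss; with the years of data used in practice, the quantile interpolation and the weighting of observations become the delicate modelling choices.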
Evolutionary and statistical learning algorithms for determining weed maps using remote-sensing techniques
This work addresses binary classification problems using a hybrid methodology that combines logistic regression with evolutionary product-unit neural network models. The model coefficients are estimated in two stages: first, the exponents of the product-unit basis functions are learned by training the neural network models with evolutionary computation; then, once the number of potential functions and their exponents have been estimated, the maximum-likelihood method is applied to the feature space formed by the initial covariates together with the new basis functions obtained from training the product-unit models. This hybrid methodology, in both model design and coefficient estimation, is applied to a real agronomic problem: predicting the presence of the weed Ridolfia segetum Moris in sunflower fields. The results obtained with this model improve on those of standard logistic regression in terms of the percentage of correctly classified patterns on the generalization set.
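The two-stage scheme can be sketched in a few lines. In this toy version the product-unit exponents are fixed by hand (standing in for the evolutionary search of stage one), and stage two is a plain gradient-ascent maximum-likelihood fit of logistic regression on the augmented feature space; all data and exponent values are invented for illustration.

```python
import math

# Stage 1 (stand-in): product-unit basis B(x) = prod_i x_i ** w_i, with
# exponents that the paper would obtain by evolutionary computation.
def product_unit(x, exponents):
    out = 1.0
    for xi, wi in zip(x, exponents):
        out *= xi ** wi                     # inputs assumed positive
    return out

def augment(X, basis_exponents):
    """Feature space = initial covariates + product-unit basis functions."""
    return [list(x) + [product_unit(x, w) for w in basis_exponents] for x in X]

# Stage 2: maximum likelihood for logistic regression (gradient ascent).
def fit_logistic(X, y, lr=0.1, epochs=2000):
    w = [0.0] * (len(X[0]) + 1)             # bias + one weight per feature
    for _ in range(epochs):
        for x, t in zip(X, y):
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            p = 1.0 / (1.0 + math.exp(-z))
            g = t - p                       # gradient of the log-likelihood
            w[0] += lr * g
            for i, xi in enumerate(x):
                w[i + 1] += lr * g * xi
    return w

def predict(w, x):
    z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return 1 if z >= 0 else 0

# Toy data: class 1 when x0 * x1 > 1, a multiplicative interaction that a
# single product unit with exponents (1, 1) captures directly.
X = [(0.5, 0.5), (0.6, 0.8), (2.0, 1.5), (1.5, 1.2), (0.4, 1.0), (2.5, 0.9)]
y = [0, 0, 1, 1, 0, 1]
Xa = augment(X, basis_exponents=[(1.0, 1.0)])
w = fit_logistic(Xa, y)
preds = [predict(w, x) for x in Xa]
```

The point of the hybrid is visible in the augmented matrix: the linear model of stage two sees the multiplicative interaction as just one more covariate.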
Projection based ensemble learning for ordinal regression
The classification of patterns into naturally ordered labels is referred to as ordinal regression. This paper proposes an ensemble methodology specifically adapted to this type of problem, based on computing different classification tasks through the formulation of different order hypotheses. Each single model is trained to distinguish one given class (k) from all the remaining ones, but grouping the latter into the classes with a rank lower than k and those with a rank higher than k; it can therefore be considered a reformulation of the well-known one-versus-all scheme. The base algorithm for the ensemble can be any threshold (or even probabilistic) method, such as those selected in this paper: kernel discriminant analysis, support vector machines and logistic regression (all reformulated to deal with ordinal regression problems). The method is shown to be competitive with other state-of-the-art methodologies (both ordinal and nominal), using six measures and a total of fifteen ordinal datasets. Furthermore, an additional set of experiments studies the potential scalability and interpretability of the proposed method when logistic regression is used as the base methodology for the ensemble.
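A minimal sketch of the decomposition family this work builds on: the classic scheme of K-1 binary subproblems, each answering "is the label greater than k?", with predictions decoded by counting votes. This is the standard ordinal decomposition, not the paper's exact one-versus-all reformulation, and the base learner here is a trivial 1-D threshold rule chosen purely for illustration.

```python
# Ordinal decomposition into K-1 binary "is y > k?" subproblems.

def make_binary_targets(labels, k):
    """Binary subproblem k: is the ordinal label greater than k?"""
    return [1 if y > k else 0 for y in labels]

def fit_threshold(xs, ts):
    """Toy 1-D base learner: pick the cut minimizing training errors."""
    best = (None, len(xs) + 1)
    for c in sorted(xs):
        errs = sum((x > c) != bool(t) for x, t in zip(xs, ts))
        if errs < best[1]:
            best = (c, errs)
    return best[0]

def predict_ordinal(x, cuts):
    """Decode: the predicted rank is the number of 'y > k' votes."""
    return sum(x > c for c in cuts)

xs = [0.1, 0.2, 0.4, 0.5, 0.7, 0.9]
ys = [0, 0, 1, 1, 2, 2]          # three naturally ordered classes
cuts = [fit_threshold(xs, make_binary_targets(ys, k)) for k in range(2)]
preds = [predict_ordinal(x, cuts) for x in xs]
```

The vote-counting decoder is what makes the scheme ordinal: swapping two adjacent binary decisions changes the prediction by one rank, never by an arbitrary jump.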
Cooperative coevolution of artificial neural network ensembles for pattern classification
This paper presents a cooperative coevolutive approach for designing neural network ensembles. Cooperative coevolution is a recent paradigm in evolutionary computation that allows the effective modeling of cooperative environments. Although, theoretically, a single neural network with a sufficient number of neurons in the hidden layer would suffice to solve any problem, in practice many real-world problems make it too hard to construct an appropriate network that solves them. In such problems, neural network ensembles are a successful alternative. Nevertheless, the design of neural network ensembles is a complex task. In this paper, we propose a general framework for designing neural network ensembles by means of cooperative coevolution. The proposed model has two main objectives: first, the improvement of the combination of the trained individual networks; second, the cooperative evolution of such networks, encouraging collaboration among them instead of a separate training of each network. In order to favor the cooperation of the networks, each network is evaluated throughout the evolutionary process using a multiobjective method. For each network, different objectives are defined, considering not only its performance on the given problem but also its cooperation with the rest of the networks. In addition, a population of ensembles is evolved, improving the combination of networks and obtaining subsets of networks that form ensembles performing better than the combination of all the evolved networks. The proposed model is applied to ten real-world classification problems of a very different nature from the UCI machine learning repository and the proben1 benchmark set. In all of them the performance of the model is better than the performance of standard ensembles in terms of generalization error. Moreover, the size of the obtained ensembles is also smaller.
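The core idea of evaluating each member by its cooperation with the rest, not only by its own accuracy, can be sketched with a deliberately simple stand-in: members are 1-D threshold classifiers, and "cooperation" is reduced to a single diversity term (mean disagreement with the others). This is one toy surrogate for the paper's multiobjective evaluation, with all data and the alpha weight invented for illustration.

```python
import random

def accuracy(member, xs, ys):
    """Training accuracy of one threshold classifier: predict 1 iff x > member."""
    return sum((x > member) == bool(y) for x, y in zip(xs, ys)) / len(xs)

def diversity(member, others, xs):
    """Mean disagreement with the other ensemble members."""
    if not others:
        return 0.0
    dis = sum(sum((x > member) != (x > other) for x in xs) for other in others)
    return dis / (len(others) * len(xs))

def fitness(member, population, xs, ys, alpha=0.25):
    """Cooperative fitness: own performance plus a diversity bonus."""
    others = [m for m in population if m is not member]
    return accuracy(member, xs, ys) + alpha * diversity(member, others, xs)

random.seed(0)
xs = [0.1, 0.3, 0.45, 0.55, 0.7, 0.9]
ys = [0, 0, 0, 1, 1, 1]
population = [random.random() for _ in range(8)]
# One generation of truncation selection on the cooperative fitness:
scored = sorted(population, key=lambda m: fitness(m, population, xs, ys),
                reverse=True)
survivors = scored[:4]
best = survivors[0]
```

Selecting on this joint score keeps members that are individually weaker but complementary, which is the behaviour the paper's multiobjective evaluation is designed to encourage at full scale.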
Error-Correcting Output Codes in the Framework of Deep Ordinal Classification
Automatic classification tasks on structured data have been revolutionized by Convolutional Neural Networks (CNNs), but the focus has been on binary and nominal classification tasks. Only recently, ordinal classification (where class labels present a natural ordering) has been tackled through the framework of CNNs. Also, ordinal classification datasets commonly present a high imbalance in the number of samples of each class, making it an even harder problem. Focus should be shifted from classic classification metrics towards per-class metrics (like AUC or Sensitivity) and rank agreement metrics (like Cohen’s Kappa or Spearman’s rank correlation coefficient). We present a new CNN architecture based on the Ordinal Binary Decomposition (OBD) technique using Error-Correcting Output Codes (ECOC). We aim to show experimentally, using four different CNN architectures and two ordinal classification datasets, that the OBD+ECOC methodology significantly improves the mean results on the relevant ordinal and class-balancing metrics. The proposed method is able to outperform a nominal approach as well as already existing ordinal approaches, achieving a mean performance of RMSE=1.0797 for the Retinopathy dataset and RMSE=1.1237 for the Adience dataset averaged over 4 different architectures
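The target coding behind the OBD approach can be shown compactly: class k among K ordered classes is encoded as the cumulative codeword with k ones, each output unit answering "is the label greater than j?", and a real-valued prediction vector is decoded to the nearest codeword, which is where the error-correcting behaviour comes from. A minimal sketch (the paper applies this on top of CNN outputs):

```python
# Ordinal Binary Decomposition target coding with nearest-codeword decoding.

def obd_codeword(k, num_classes):
    """Cumulative code for class k: position j answers 'is the label > j?'."""
    return [1 if k > j else 0 for j in range(num_classes - 1)]

def decode(outputs, num_classes):
    """ECOC-style decoding: pick the class whose codeword is nearest."""
    def dist(code):
        return sum((o - c) ** 2 for o, c in zip(outputs, code))
    return min(range(num_classes), key=lambda k: dist(obd_codeword(k, num_classes)))

K = 5
codes = [obd_codeword(k, K) for k in range(K)]
# A noisy output vector still decodes to the intended ordinal class:
pred = decode([0.9, 0.8, 0.3, 0.1], K)   # closest to [1, 1, 0, 0]
```

Because adjacent classes differ in exactly one code position, a single flipped output moves the decoded class by at most one rank, which is exactly the error behaviour ordinal metrics such as RMSE reward.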
Borderline kernel based over-sampling
Nowadays, the imbalanced nature of some real-world data is receiving a lot of attention from the pattern recognition and machine learning communities, in both theoretical and practical aspects, giving rise to different promising approaches to handling it. However, preprocessing methods operate in the original input space, introducing distortions when combined with kernel classifiers, which operate in the feature space induced by a kernel function. This paper explores the notion of the empirical feature space (a Euclidean space which is isomorphic to the feature space and therefore preserves its structure) to derive a kernel-based synthetic over-sampling technique based on borderline instances, which are considered crucial for establishing the decision boundary. The proposed methodology therefore maintains the main properties of the kernel mapping while reinforcing the decision boundaries induced by a kernel machine. The results show that the proposed method achieves better results than the same borderline over-sampling method applied in the original input space.
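The borderline over-sampling idea itself can be sketched in the original input space (for brevity; the paper's contribution is performing it in the empirical feature space): minority points whose nearest neighbours are mostly majority points are flagged as borderline, and synthetic minority samples are interpolated between borderline pairs. All data below are invented toy points.

```python
import random

def neighbours(p, points, k=3):
    """k nearest neighbours of p (p itself, at distance 0, is sliced off)."""
    return sorted(points, key=lambda q: sum((a - b) ** 2
                                            for a, b in zip(p, q)))[1:k + 1]

def borderline_minority(minority, majority, k=3):
    """Minority points with at least half majority points among k neighbours."""
    coords = [tuple(p) for p in majority] + [tuple(p) for p in minority]
    label = {tuple(p): 0 for p in majority} | {tuple(p): 1 for p in minority}
    out = []
    for p in minority:
        nn = neighbours(tuple(p), coords, k)
        if sum(label[q] == 0 for q in nn) * 2 >= k:
            out.append(tuple(p))
    return out

def synthesize(borderline, n, rng):
    """Interpolate n new minority samples between borderline pairs."""
    samples = []
    for _ in range(n):
        a, b = rng.sample(borderline, 2)
        t = rng.random()
        samples.append(tuple(ai + t * (bi - ai) for ai, bi in zip(a, b)))
    return samples

rng = random.Random(1)
majority = [(0.0, 0.0), (0.2, 0.1), (0.1, 0.3), (0.3, 0.2), (0.4, 0.0)]
minority = [(0.5, 0.5), (0.45, 0.4), (2.0, 2.0)]
border = borderline_minority(minority, majority, k=3)   # (2, 2) is safe, not borderline
new_pts = synthesize(border, 4, rng)
```

Note that the isolated minority point (2, 2) is excluded: only the instances near the class boundary are reinforced, which is the design choice that distinguishes borderline over-sampling from uniform SMOTE-style synthesis.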
Hybridization of evolutionary algorithms and local search by means of a clustering method
This paper presents a hybrid evolutionary algorithm (EA) to solve nonlinear-regression problems. Although EAs have proven their ability to explore large search spaces, they are comparatively inefficient at fine-tuning the solution. This drawback is usually avoided by means of local optimization algorithms applied to the individuals of the population; algorithms that use local optimization procedures are usually called hybrid algorithms. On the other hand, it is well known that the clustering process enables the creation of groups (clusters) of mutually close points that hopefully correspond to relevant regions of attraction, and local-search procedures can then be started once in every such region. This paper proposes the combination of an EA, a clustering process, and a local-search procedure, applied to the evolutionary design of product-unit neural networks. In the methodology presented, only a few individuals are subject to local optimization. Moreover, the local optimization algorithm is only applied at specific stages of the evolutionary process. Our results show a favorable performance when the regression method proposed is compared to other standard methods.
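The hybridization pattern described above can be sketched end to end on a toy problem: evolve (here, just sample) a population, cluster it so each cluster marks a region of attraction, then apply local search only to the best individual of each cluster. A bimodal 1-D function stands in for the product-unit network design problem; the k-means and hill-climbing routines are simple stand-ins for the paper's actual components.

```python
import random

def f(x):
    """Toy bimodal objective with minima at x = -1 and x = 1."""
    return (x + 1.0) ** 2 * (x - 1.0) ** 2

def cluster(population, num_clusters=2, iters=10):
    """Tiny 1-D k-means returning a list of clusters (lists of points)."""
    centers = random.sample(population, num_clusters)
    for _ in range(iters):
        groups = [[] for _ in centers]
        for x in population:
            i = min(range(len(centers)), key=lambda j: abs(x - centers[j]))
            groups[i].append(x)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return [g for g in groups if g]

def local_search(x, step=0.05, iters=200):
    """Hill climbing with a decaying step: the fine-tuning the EA lacks."""
    for _ in range(iters):
        for cand in (x - step, x + step):
            if f(cand) < f(x):
                x = cand
        step *= 0.98
    return x

random.seed(3)
population = [random.uniform(-2.0, 2.0) for _ in range(20)]
# Local optimization is applied to only a few individuals: one per cluster.
refined = [local_search(min(g, key=f)) for g in cluster(population)]
best = min(refined, key=f)
```

Running local search once per cluster rather than on every individual is the efficiency argument of the paper: each region of attraction is fine-tuned exactly once.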