A comparison of the Normal and Laplace distributions in the models of fuzzy probability distribution for portfolio selection
The purpose of this work is to apply the fuzzy Laplace distribution to the possibilistic mean-variance model presented by Li et al., which applied the fuzzy normal distribution. The theorem necessary to introduce the Laplace distribution into the model is demonstrated. The behavior of the fuzzy normal and fuzzy Laplace distributions is analyzed for portfolio selection with a VaR constraint and risk-free investment, using real data. The results show no difference in asset selection or in return rate; however, there was a change in the risk rate, which was higher under the Laplace distribution than under the normal distribution.
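A hedged illustration of why a Laplace assumption can yield a higher risk figure than a normal one: the sketch below compares the lower-tail quantile (a VaR-style risk number) of a Normal and a Laplace distribution matched in mean and variance. The 1% level and unit variance are illustrative choices, not values taken from the paper.

```python
import math
from statistics import NormalDist

def var_normal(mu, sigma, p):
    # p-quantile (lower tail) of a Normal(mu, sigma) return distribution
    return mu + sigma * NormalDist().inv_cdf(p)

def var_laplace(mu, sigma, p):
    # Laplace matched in mean and variance: variance = 2*b**2, so b = sigma/sqrt(2)
    b = sigma / math.sqrt(2)
    return mu + b * math.log(2 * p)  # closed-form Laplace quantile, valid for p < 0.5

mu, sigma, p = 0.0, 1.0, 0.01
print(var_normal(mu, sigma, p))   # ~ -2.33
print(var_laplace(mu, sigma, p))  # ~ -2.77: heavier tail, larger risk figure
```

With equal mean and variance, the Laplace quantile lies further out in the tail, which is consistent with the higher risk rate the abstract reports for the Laplace model.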
Robust portfolio management with multiple financial analysts
Portfolio selection theory, developed by Markowitz (1952), is one of the best known and widely applied methods for allocating funds among possible investment choices, where investment decision making is a trade-off between the expected return and risk of the portfolio. Many portfolio selection models have been developed on the basis of Markowitz’s theory. Most of them assume that complete investment information is available and that it can be accurately extracted from the historical data. However, this complete information never exists in reality. There are many kinds of ambiguity and vagueness which cannot be dealt with in the historical data but still need to be considered in portfolio selection. For example, to address the issue of uncertainty caused by estimation errors, the robust counterpart approach of Ben-Tal and Nemirovski (1998) has been employed frequently in recent years. Robustification, however, often leads to a more conservative solution. As a consequence, one of the most common critiques against the robust counterpart approach is the excessively pessimistic character of the robust asset allocation.
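The conservatism of the robust counterpart approach can be seen in a minimal sketch, assuming a simple box-uncertainty model on expected returns. All figures are invented for illustration and are not from the thesis.

```python
# Nominal expected returns and box-uncertainty radii for three assets.
mu_hat = [0.12, 0.10, 0.08]   # point estimates of expected return
delta  = [0.06, 0.02, 0.01]   # estimation-error radius per asset

# Nominal choice: trust the point estimates.
nominal = max(range(3), key=lambda i: mu_hat[i])

# Robust counterpart (single-asset case): optimize the worst case
# over mu in [mu_hat - delta, mu_hat + delta].
robust = max(range(3), key=lambda i: mu_hat[i] - delta[i])

print(nominal, robust)  # robustification changes the chosen asset
```

The robust choice gives up nominal return (0.10 vs 0.12) in exchange for a better worst case (0.08 vs 0.06); this trade-off is exactly the pessimism that the common critique targets.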
This thesis attempts to develop new approaches that improve on the performance of the robust counterpart approach by incorporating additional sources of investment information, so that the optimal portfolio can be more reliable and, at the same time, achieve a greater return. [Continues.]
Fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions
In the finance market, a short-term investment strategy is usually applied in portfolio selection in order to reduce investment risk; however, the economy is uncertain and the investment period is short. Further, an investor has incomplete information for selecting a portfolio with crisp proportions for each chosen security. In this paper we present a new method of constructing a fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions, based on possibilistic mean–standard deviation models. Furthermore, we consider both excess and shortage of investment in different economic periods by using a fuzzy constraint on the sum of the fuzzy proportions, and we also address the risks of securities investment and the vagueness of incomplete information during periods of economic depression. Finally, we present a numerical example of a portfolio selection problem to illustrate the proposed model, and a sensitivity analysis is performed on the results.
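The possibilistic mean and variance that such models build on can be sketched for a triangular fuzzy return with center a, left spread alpha, and right spread beta, in the sense of Carlsson and Fullér; the closed forms are checked against a direct numerical evaluation of the defining integrals.

```python
def possibilistic_mean(a, alpha, beta):
    # Carlsson-Fuller possibilistic mean of a triangular fuzzy number
    return a + (beta - alpha) / 6.0

def possibilistic_var(a, alpha, beta):
    # Carlsson-Fuller possibilistic variance of the same fuzzy number
    return (alpha + beta) ** 2 / 24.0

def numeric_mean(a, alpha, beta, n=100000):
    # midpoint-rule evaluation of M(A) = int_0^1 g*(a1(g) + a2(g)) dg
    h, s = 1.0 / n, 0.0
    for k in range(n):
        g = (k + 0.5) * h
        a1 = a - (1 - g) * alpha   # lower end of the g-cut
        a2 = a + (1 - g) * beta    # upper end of the g-cut
        s += g * (a1 + a2) * h
    return s

a, alpha, beta = 0.10, 0.03, 0.06   # an illustrative fuzzy return rate
print(possibilistic_mean(a, alpha, beta))  # 0.105
print(possibilistic_var(a, alpha, beta))
print(numeric_mean(a, alpha, beta))        # agrees with the closed form
```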
Adjustable Security Proportions in the Fuzzy Portfolio Selection under Guaranteed Return Rates
Based on the preference for high returns over low returns, this study discusses adjustable security proportions for excess investment and shortage investment based on selected guaranteed return rates in a fuzzy environment, in which the return rates of the selected securities are characterized by fuzzy variables. We suppose some securities are for excess investment because their return rates are higher than the guaranteed return rates, while the other securities, whose return rates are lower than the guaranteed return rates, are considered for shortage investment. We then derive the proposed expected fuzzy returns using possibility theory, where fuzzy returns are quantified by the possibilistic mean and risks are measured by the possibilistic variance, and use a linear programming model to maximize the expected value of the portfolio return under investment-risk constraints. Finally, we present two numerical examples showing that the expected return rate under a lower guaranteed return rate is better than under a higher guaranteed return rate at different levels of investment risk. In shortage investments, the investment proportions of the selected securities are almost zero under higher investment risks, whereas in excess investments the portfolio is constructed from those securities.
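A two-security version of the kind of program described above can be sketched as follows, assuming purely for illustration that triangular fuzzy spreads add linearly under nonnegative weights (so the portfolio's possibilistic risk is the weighted sum of the asset spreads); the numbers are invented.

```python
# Possibilistic means and spreads of two hypothetical securities.
m = [0.10, 0.06]          # possibilistic mean return rates
s = [0.20, 0.05]          # possibilistic standard deviations (spreads)
cap = 0.10                # investment-risk constraint on the portfolio

# Grid search over fully invested, nonnegative weights: maximize the
# expected return subject to w1*s1 + w2*s2 <= cap.
best = max(
    ((w, 1 - w) for w in (i / 1000 for i in range(1001))
     if w * s[0] + (1 - w) * s[1] <= cap),
    key=lambda p: p[0] * m[0] + p[1] * m[1],
)
print(best)  # ~ (0.333, 0.667): the risk constraint binds at w1 = 1/3
```

In a real application this would be solved as a linear program rather than by grid search; the sketch only shows how the risk cap limits exposure to the high-return, high-spread security.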
Approximate Reasoning in Hydrogeological Modeling
The accurate determination of hydraulic conductivity is an important element of successful groundwater flow and transport modeling. However, the exhaustive measurement of this hydrogeological parameter is quite costly and, as a result, unrealistic. Alternatively, relationships between hydraulic conductivity and other hydrogeological variables that are less costly to measure have been used to estimate this crucial variable whenever needed. Until now, however, the majority of these relationships have been assumed to be crisp and precise, contrary to what intuition dictates. The research presented herein addresses the imprecision inherent in hydraulic conductivity estimation, framing this process in a fuzzy logic framework. Because traditional hydrogeological practices are not suited to handling fuzzy data, various approaches to incorporating fuzzy data at different steps in the groundwater modeling process have been developed previously. Such approaches have at times been both redundant and contradictory, with multiple approaches proposed for both fuzzy kriging and groundwater modeling. This research proposes a consistent rubric for handling fuzzy data throughout the entire groundwater modeling process. This entails the estimation of fuzzy data from alternative hydrogeological parameters; the sampling of realizations from fuzzy hydraulic conductivity data, including, most importantly, the appropriate aggregation of expert-provided fuzzy hydraulic conductivity estimates with traditionally derived hydraulic conductivity measurements; and the utilization of this information in the numerical simulation of groundwater flow and transport.
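One simple way to sample realizations from a fuzzy hydraulic conductivity estimate, offered as a sketch and not necessarily the scheme used in this research, is to draw an alpha level and then a value uniformly from the corresponding alpha-cut of a triangular membership function. The fuzzy estimate of log10(K) below is hypothetical.

```python
import random

def sample_fuzzy(center, left, right, rng):
    # Draw an alpha level, then a value uniformly from the alpha-cut
    # [center - (1-alpha)*left, center + (1-alpha)*right]
    # of a triangular membership function.
    alpha = rng.random()
    lo = center - (1 - alpha) * left
    hi = center + (1 - alpha) * right
    return rng.uniform(lo, hi)

rng = random.Random(0)
# hypothetical expert estimate of log10(K): "about -4, plausibly -6 to -3"
samples = [sample_fuzzy(-4.0, 2.0, 1.0, rng) for _ in range(10000)]
print(min(samples), max(samples))  # all realizations stay within the support
```

Drawing the level first concentrates realizations near the core of the membership function, so values the expert considers more possible are sampled more often.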
Multimodel Approaches for Plasma Glucose Estimation in Continuous Glucose Monitoring. Development of New Calibration Algorithms
ABSTRACT
Diabetes Mellitus (DM) embraces a group of metabolic diseases whose main characteristic is the presence of high glucose levels in blood. It is one of the diseases with the greatest social and health impact, both for its prevalence and for the consequences of the chronic complications it implies.
One of the research lines aimed at improving the quality of life of people with diabetes has a technical focus. It involves several lines of research, including the development and improvement of devices to estimate plasma glucose "online": continuous glucose monitoring systems (CGMS), both invasive and non-invasive. These devices estimate plasma glucose from sensor measurements in compartments alternative to blood. Currently, commercially available CGMS are minimally invasive and offer an estimation of plasma glucose from measurements in the interstitial fluid.
CGMS are a key component of the technical approach to building the artificial pancreas, which aims at closing the loop in combination with an insulin pump. Yet the accuracy of current CGMS is still poor, and this may partly depend on the low performance of the implemented Calibration Algorithm (CA). In addition, sensor-to-patient sensitivity differs between patients and also over time for the same patient.
It is clear, then, that the development of new efficient calibration algorithms for CGMS is an interesting and challenging problem.
The indirect measurement of plasma glucose through interstitial glucose is a main confounder of CGMS accuracy. Many components take part in the glucose transport dynamics. Indeed, physiology might suggest the existence of different local behaviors in the glucose transport process.
For this reason, local modeling techniques may be the best option for the structure of the desired CA. Thus, similar input samples are represented by the same local model. The integration of all local models, taking into account the input regions where each is valid, yields the final model of the whole data set.
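The local modeling idea above can be sketched minimally: samples are assigned to the nearest of two fixed, illustrative cluster centers, a local linear model is fitted per region, and prediction uses the model of the region the input falls in. The data are synthetic, not CGM measurements.

```python
def fit_line(xs, ys):
    # ordinary least-squares fit of y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# synthetic data with two local regimes (illustrative only)
low  = [(x, 1.0 + 0.5 * x) for x in range(0, 10)]    # regime around x ~ 5
high = [(x, -5.0 + 2.0 * x) for x in range(10, 20)]  # regime around x ~ 15

centers = (5.0, 15.0)                     # fixed region centers for the sketch
models = [fit_line(*zip(*low)), fit_line(*zip(*high))]

def predict(x):
    # pick the local model whose region (nearest center) contains x
    i = 0 if abs(x - centers[0]) <= abs(x - centers[1]) else 1
    a, b = models[i]
    return a + b * x

print(predict(4.0), predict(12.0))  # each regime is answered by its own model
```

In a full calibration algorithm the regions would come from clustering the input data rather than being fixed by hand, and the local outputs would be blended by membership degree rather than by a hard nearest-center rule.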
Barceló Rico, F. (2012). Multimodel Approaches for Plasma Glucose Estimation in Continuous Glucose Monitoring. Development of New Calibration Algorithms [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/17173
Comparing hard and overlapping clusterings
Similarity measures for comparing clusterings are an important component of, e.g., evaluating clustering algorithms, consensus clustering, and clustering stability assessment. These measures have been studied for over 40 years in the domain of exclusive hard clusterings (exhaustive and mutually exclusive object sets). In recent years, the literature has proposed measures to handle more general clusterings (e.g., fuzzy/probabilistic clusterings). This paper provides an overview of these new measures and discusses their drawbacks. We ultimately develop a corrected-for-chance measure (13AGRI) capable of comparing exclusive hard, fuzzy/probabilistic, non-exclusive hard, and possibilistic clusterings. We prove that 13AGRI and the adjusted Rand index (ARI, by Hubert and Arabie) are equivalent in the exclusive hard domain. The reported experiments show that only 13AGRI could provide both a fine-grained evaluation across clusterings with different numbers of clusters and a constant evaluation between random clusterings, showing all four desirable properties considered here. We identified a high correlation between 13AGRI applied to fuzzy clusterings and ARI applied to hard exclusive clusterings over 14 real data sets from the UCI repository, which corroborates the validity of 13AGRI for fuzzy clustering evaluation. 13AGRI also showed good results as a clustering stability statistic for solutions produced by the expectation-maximization algorithm for Gaussian mixture models.
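For reference, the classical adjusted Rand index of Hubert and Arabie for exclusive hard clusterings (the measure 13AGRI reduces to in that domain) can be computed from the contingency table as follows; this is the standard construction, not the 13AGRI measure itself.

```python
from collections import Counter
from math import comb

def adjusted_rand_index(u, v):
    # ARI of two exclusive hard clusterings given as label sequences
    n = len(u)
    sum_ij = sum(comb(c, 2) for c in Counter(zip(u, v)).values())
    sum_a = sum(comb(c, 2) for c in Counter(u).values())
    sum_b = sum(comb(c, 2) for c in Counter(v).values())
    expected = sum_a * sum_b / comb(n, 2)   # chance-level agreement
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)

print(adjusted_rand_index([0, 0, 1, 1], [0, 0, 1, 1]))  # 1.0 for identical clusterings
print(adjusted_rand_index([0, 0, 1, 1], [0, 1, 0, 1]))  # negative: below chance
```

The chance correction is what makes the constant evaluation between random clusterings possible, which is the property the paper's experiments probe.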
Robustness and Outliers
Unexpected deviations from assumed models, as well as the presence of certain amounts of outlying data, are common in most practical statistical applications. This fact can lead to undesirable solutions when non-robust statistical techniques are applied. This is often the case in cluster analysis, too. The search for homogeneous groups with large heterogeneity between them can be spoiled by the lack of robustness of standard clustering methods. For instance, the presence of even a few outlying observations may result in heterogeneous clusters artificially joined together, or in the detection of spurious clusters merely made up of outlying observations. In this chapter we will analyze the effects of different kinds of outlying data in cluster analysis and explore several alternative methodologies designed to avoid or minimize their undesirable effects. Funding: Ministerio de Economía, Industria y Competitividad (MTM2014-56235-C2-1-P); Junta de Castilla y León (programa de apoyo a proyectos de investigación, Ref. VA212U13).
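The pull that a single outlier exerts on a non-robust location estimate, and the effect of trimming it away (the device underlying trimmed clustering methods), can be seen in a one-dimensional toy example:

```python
data = [1.0, 1.1, 0.9, 1.05, 0.95, 50.0]   # a tight cluster plus one gross outlier

mean_all = sum(data) / len(data)           # pulled far away from the cluster
trimmed = sorted(data)[:len(data) - 1]     # trim the most extreme observation
mean_trim = sum(trimmed) / len(trimmed)    # back near the cluster center

print(mean_all, mean_trim)  # ~9.17 vs ~1.0
```

Trimmed clustering methods apply the same idea per cluster: a fixed fraction of the least-well-fitting observations is discarded before cluster centers are estimated, so a few outliers cannot join clusters together or spawn spurious ones.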
Computational intelligence techniques in asset risk analysis
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The problem of asset risk analysis is positioned within the computational intelligence paradigm. We suggest an algorithm for reformulating asset pricing, which involves incorporating imprecise information into the pricing factors through fuzzy variables, as well as a calibration procedure for their possibility distributions. Fuzzy mathematics is then used to process the imprecise factors and obtain an asset evaluation. This evaluation is further automated using neural networks with sign restrictions on their weights. While networks of this type had previously been used with only up to two network inputs and hypothetical data, here we apply thirty-six inputs and empirical data. To achieve successful training, we modify the Levenberg-Marquardt backpropagation algorithm. The intermediate result achieved is that the fuzzy asset evaluation inherits features of the factor imprecision and provides the basis for risk analysis. Next, we formulate a risk measure and a risk robustness measure based on the fuzzy asset evaluation under different characteristics of the pricing factors as well as different calibrations. Our database, extracted from DataStream, includes thirty-five companies traded on the London Stock Exchange. For each company, the risk and robustness measures are evaluated, and an asset risk analysis is carried out through these values, indicating the implications they have for company performance. A comparative company risk analysis is also provided. Then, we employ both risk measures to formulate a two-step asset ranking method. The assets are initially rated according to the investors' risk preference. In addition, an algorithm is suggested to incorporate the asset robustness information and refine the ranking further, benefiting market analysts. The rationale provided by the ranking technique serves as a point of departure in designing an asset risk classifier.
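A sign restriction on weights, of the kind mentioned above, can be enforced during training by projecting any violating weight back to zero after each update. The following is a toy projected-gradient least-squares fit; the data and restrictions are illustrative, and this is plain gradient descent rather than the modified Levenberg-Marquardt algorithm of the thesis.

```python
def project_signs(w, signs):
    # clip each weight that violates its sign restriction back to zero
    return [max(x, 0.0) if s > 0 else min(x, 0.0) for x, s in zip(w, signs)]

# Fit y = w1*x1 + w2*x2 subject to w1 >= 0, w2 <= 0.
# The data are generated from the feasible target w = (2, -1).
X = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
Y = [2.0, -1.0, 1.0, 3.0]

w, lr = [0.0, 0.0], 0.05
for _ in range(2000):
    g = [0.0, 0.0]
    for (x1, x2), y in zip(X, Y):
        e = w[0] * x1 + w[1] * x2 - y      # residual for this sample
        g[0] += 2 * e * x1                 # gradient of squared error
        g[1] += 2 * e * x2
    w = project_signs([w[0] - lr * g[0], w[1] - lr * g[1]], [+1, -1])

print(w)  # converges to ~[2.0, -1.0], satisfying the sign restrictions
```

Projection after each step keeps every intermediate iterate feasible, which is one simple way to keep sign-restricted network weights valid throughout training.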
We identify the fuzzy neural network structure of the classifier and develop an evolutionary training algorithm. The algorithm starts by suggesting preliminary heuristics for constructing a sufficient training set of assets with various characteristics, revealed by the values of the pricing factors and the asset risk values. The training algorithm then works at two levels: the inner level targets weight optimization, while the outer level efficiently guides the exploration of the search space. The latter is achieved by automatically decomposing the training set into subsets of decreasing complexity and then incrementing backward the corresponding subpopulations of partially trained networks. The empirical results prove that the developed algorithm is capable of training the identified fuzzy network structure. This is a problem of such complexity that it prevents single-level evolution from attaining meaningful results. The final outcome is an automatic asset classifier based on the investors' perceptions of acceptable risk. All the steps described above constitute our approach to reformulating asset risk analysis within the approximate reasoning framework through the fusion of various computational intelligence techniques.