41 research outputs found

    Informational Paradigm, management of uncertainty and theoretical formalisms in the clustering framework: A review

    Fifty years have gone by since the publication of the first paper on clustering based on fuzzy set theory. In 1965, L.A. Zadeh published “Fuzzy Sets” [335]. After only one year, the first effects of this seminal paper began to emerge, with the pioneering paper on clustering by Bellman, Kalaba and Zadeh [33], in which they proposed a prototypal clustering algorithm based on fuzzy set theory.

    A multiobjective credibilistic portfolio selection model. Empirical study in the Latin American Integrated Market

    This paper extends the stochastic mean-semivariance model to a fuzzy multiobjective model in which, apart from return and risk, liquidity is also considered to measure the performance of a portfolio. The uncertainty of each asset's future return and liquidity is modeled using L-R type fuzzy numbers that belong to the power reference function family. The decision process of this novel approach takes into account not only the multidimensional nature of the portfolio selection problem but also realistic constraints imposed by investors. In particular, it optimizes the expected return, the semivariance and the expected liquidity of a given portfolio, considering a cardinality constraint and upper- and lower-bound constraints. The resulting constrained portfolio optimization problem is solved using the NSGA-II algorithm. As a novelty, in order to select the optimal portfolio, this study defines the credibilistic Sortino ratio as the ratio between the credibilistic risk premium and the credibilistic semivariance. An empirical study is included to show the effectiveness and efficiency of the model in practical applications, using a data set of assets from the Latin American Integrated Market.
    García García, F.; Gonzalez-Bueno, J.; Guijarro, F.; Oliver-Muncharaz, J. (2020). A multiobjective credibilistic portfolio selection model. Empirical study in the Latin American Integrated Market. Entrepreneurship and Sustainability Issues. 8(2):1027-1046. https://doi.org/10.9770/jesi.2020.8.2(62)
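The ratio defined above can be sketched numerically. The sketch below assumes crisp credibilistic estimates of the expected return, the risk-free rate and the semivariance are already available (the L-R fuzzy-number machinery of the paper is not reproduced), and the function name is illustrative; note that the abstract's definition divides by the semivariance itself, whereas the classical Sortino ratio divides by its square root.

```python
def credibilistic_sortino(expected_return, risk_free_rate, semivariance):
    """Credibilistic Sortino-style ratio: credibilistic risk premium
    divided by the credibilistic semivariance, following the definition
    in the abstract above. Inputs are assumed to be crisp estimates."""
    if semivariance <= 0:
        raise ValueError("semivariance must be positive")
    return (expected_return - risk_free_rate) / semivariance
```

Among the portfolios on the NSGA-II efficient front, the one maximizing this ratio would then be selected.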

    A method for adaptive credibilistic fuzzy data clustering based on an evolutionary algorithm

    Computational intelligence methods are widely used to solve many complex problems, including traditional data mining as well as newer directions such as dynamic data mining, data stream mining, big data mining, web mining, text mining, and so on. One of the main areas of computational intelligence is evolutionary algorithms, which are essentially mathematical models of the evolution of biological organisms. This paper proposes an adaptive fuzzy clustering method based on evolutionary cat swarm optimization. Using the proposed approach, the clustering task can be solved in online mode.

    Credibilistic fuzzy data clustering based on an evolutionary crazy cats method

    The problem of fuzzy clustering of large datasets that arrive for processing in both batch and online modes, based on a credibilistic approach, is considered. To find the global extremum of the credibilistic fuzzy clustering goal function, a modification of the crazy cat swarm algorithm is introduced that combines the advantages of evolutionary algorithms and global random search. It is shown that the different search modes are generated by a unified mathematical procedure, special cases of which are known algorithms for both local and global optimization. The proposed approach is easy to implement and is characterized by high speed and reliability in problems of multi-extremal fuzzy clustering.
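The idea of a unified procedure whose parameters span local and global search modes can be illustrated with a toy example. The sketch below is not the crazy cat swarm algorithm itself: it is a minimal, hypothetical hybrid of uniform random sampling (global exploration, analogous to a seeking mode) and Gaussian perturbation (local refinement, analogous to a tracing mode) applied to a multi-extremal objective.

```python
import math
import random

def multi_extremal(x):
    """Toy multi-extremal objective; its global minimum is at x = 0."""
    return x * x + 3.0 * (1.0 - math.cos(5.0 * x))

def hybrid_search(f, lo=-10.0, hi=10.0, n_global=200, n_local=200, seed=0):
    """Global phase: uniform random sampling over [lo, hi].
    Local phase: small Gaussian perturbations around the incumbent."""
    rng = random.Random(seed)
    best = min((rng.uniform(lo, hi) for _ in range(n_global)), key=f)
    for _ in range(n_local):
        candidate = best + rng.gauss(0.0, 0.1)
        if f(candidate) < f(best):
            best = candidate
    return best
```

Shrinking the sampling interval or the perturbation scale recovers a purely local search, while widening them recovers a global random search, which is the sense in which one procedure can generate different search modes.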

    Outlier Detection Methods for Industrial Applications

    An outlier is an observation (or measurement) that differs markedly from the other values contained in a given dataset. Outliers can be due to several causes: a measurement can be incorrectly observed, recorded or entered into the process computer, or the observed datum can come from a different population with respect to the normal situation and thus be correctly measured but represent a rare event. Different definitions of outlier exist in the literature; the most commonly cited are the following:
    - "An outlier is an observation that deviates so much from other observations as to arouse suspicions that it was generated by a different mechanism" (Hawkins, 1980).
    - "An outlier is an observation (or subset of observations) which appears to be inconsistent with the remainder of the dataset" (Barnett & Lewis, 1994).
    - "An outlier is an observation that lies outside the overall pattern of a distribution" (Moore & McCabe, 1999).
    - "Outliers are those data records that do not follow any pattern in an application" (Chen et al., 2002).
    - "An outlier in a set of data is an observation or a point that is considerably dissimilar or inconsistent with the remainder of the data" (Ramaswamy et al., 2000).
    Many data mining algorithms try to minimize the influence of outliers, for instance on a final model to be developed, or to eliminate them in the data pre-processing phase. However, a data miner should be careful when automatically detecting and eliminating outliers because, if the data are correct, their elimination can cause the loss of important hidden information (Kantardzic, 2003). Some data mining applications focus on outlier detection, where it is the essential result of the data analysis (Sane & Ghatol, 2006). Outlier detection techniques find applications in credit card fraud, network robustness analysis, network intrusion detection, financial applications and marketing (Han & Kamber, 2001).
    A more exhaustive list of applications that exploit outlier detection is provided below (Hodge, 2004):
    - Fraud detection: fraudulent applications for credit cards or state benefits, or fraudulent usage of credit cards or mobile phones.
    - Loan application processing: fraudulent applications or potentially problematic customers.
    - Intrusion detection, such as unauthorized access in computer networks.
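A minimal concrete instance of the definitions above is threshold-based detection: flag any observation that lies far from the bulk of the data. The sketch below uses the common z-score rule of thumb; the function name and threshold value are illustrative and are not taken from any of the cited works.

```python
import statistics

def zscore_outliers(data, threshold=3.0):
    """Return the observations whose distance from the mean exceeds
    `threshold` population standard deviations."""
    mean = statistics.mean(data)
    stdev = statistics.pstdev(data)
    if stdev == 0:
        return []  # All values identical: no outliers by this rule.
    return [x for x in data if abs(x - mean) / stdev > threshold]
```

Note that a large outlier inflates the standard deviation and can partly mask itself under this rule, which is one reason more robust methods are used in practice.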

    Developing an automatic brachial artery segmentation and bloodstream analysis tool using possibilistic C-means clustering from color doppler ultrasound images

    Automatic segmentation of the brachial artery and blood-flow dynamics is important for early detection of cardiovascular disease and other vascular endothelial malfunctions. In this paper, we propose a software tool that is noise-tolerant and fully automatic in segmenting the brachial artery from color Doppler ultrasound images. The possibilistic C-means clustering algorithm is applied to perform the automatic segmentation. We use the HSV color model to enhance the contrast of the bloodstream area in the input image. Our software also provides an index of hemoglobin distribution with respect to blood flow velocity for pathologists to perform further analysis. In experiments, the proposed method successfully extracts the target area in 59 out of 60 cases (98.3%), with field-expert verification.

    A Directed FCM Approach for Analysis of Stained Tissues

    The use of digital imagery has increased phenomenally, especially in the clinical field. These images are obtained from different modalities such as X-ray and MRI. Digital imaging of more traditional imagery, such as stained tissues, has opened up new means of investigation. Hence a need has arisen to build a system that analyzes stained tissues and extracts the salient information. This work introduces a modified FCM approach to analyze the tissues. The analysis can be controlled by the user, who selects the number of clusters, the size of the clusters and the cluster centers. The results of this analysis are reported as the percentage of change in a specific square area.
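The modified FCM described above builds on the standard fuzzy C-means alternation between membership and centroid updates. As a generic illustration (not the directed variant of this work), a minimal one-dimensional FCM can be sketched as:

```python
def fcm_1d(data, n_clusters=2, m=2.0, n_iter=50):
    """Minimal 1-D fuzzy C-means.
    Membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)).
    Centroid update:   c_j  = sum_i u_ij^m x_i / sum_i u_ij^m."""
    # Deterministic initialization: spread centers over the sorted data.
    srt = sorted(data)
    step = len(srt) // n_clusters
    centers = [srt[j * step + step // 2] for j in range(n_clusters)]
    for _ in range(n_iter):
        u = []
        for x in data:
            # Guard against a zero distance before taking ratios.
            dists = [abs(x - c) or 1e-12 for c in centers]
            u.append([1.0 / sum((d / dk) ** (2.0 / (m - 1.0)) for dk in dists)
                      for d in dists])
        centers = [
            sum(u[i][j] ** m * data[i] for i in range(len(data)))
            / sum(u[i][j] ** m for i in range(len(data)))
            for j in range(n_clusters)
        ]
    return sorted(centers)
```

With well-separated groups the centers settle near the group means; the directed variant additionally lets the user constrain the number, size and centers of the clusters.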