
    Using Clustering Method to Understand Indian Stock Market Volatility

    In this paper we use clustering methods to understand whether stock market volatility can be predicted at all, and if so, when it can be predicted. The exercise has been performed for the Indian stock market on daily data for two years. For our analysis we map the number of clusters against the number of variables, and then test the efficiency of the clustering. Our contention is that, given a fixed number of variables, one of them being the historic volatility of NIFTY returns, if an increase in the number of clusters improves clustering efficiency, then volatility cannot be predicted; volatility is then effectively random, since a given time period gets classified into different clusters. On the other hand, if efficiency falls as the number of clusters increases, then volatility can be predicted, as there is some homogeneity in the data. If we instead fix the number of clusters and increase the number of variables, this should also affect clustering efficiency; if we can hit upon an optimum number of variables while keeping the number of clusters reasonably small, we can use those variables to predict volatility. The variables we consider are the volatility of NIFTY returns, the volatility of gold returns, India VIX, CBOE VIX, and the volatilities of crude oil, DJIA, DAX, Hang Seng and Nikkei returns. We use three clustering algorithms, namely Kernel K-Means, Self-Organizing Maps and Gaussian Mixture Models, and two internal clustering validity measures, the Silhouette Index and the Dunn Index, to assess the quality of the generated clusters.
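
    A minimal sketch of the kind of exercise described above: cluster a matrix of daily volatility features and track an internal validity index as the number of clusters grows. The feature matrix below is synthetic, and the algorithm/index pair (Gaussian mixture, Silhouette Index) is just one of the combinations the abstract mentions; it is not the authors' data or exact pipeline.

```python
# Synthetic stand-in for the paper's 9 daily volatility-type features
# (NIFTY, gold, India VIX, CBOE VIX, crude oil, DJIA, DAX, Hang Seng, Nikkei).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = rng.lognormal(mean=0.0, sigma=0.5, size=(500, 9))  # 500 trading days x 9 features

for k in range(2, 8):
    labels = GaussianMixture(n_components=k, random_state=0).fit_predict(X)
    print(f"k={k}  silhouette={silhouette_score(X, labels):.3f}")

# Reading the output in the spirit of the paper: if the index keeps improving
# as k grows, the volatility data look unstructured ("random"); if it degrades,
# there is homogeneity that a predictive model could exploit.
```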

    Mapping Auto-context Decision Forests to Deep ConvNets for Semantic Segmentation

    We consider the task of pixel-wise semantic segmentation given a small set of labeled training images. Two of the most popular techniques for addressing this task are Decision Forests (DF) and Neural Networks (NN). In this work, we explore the relationship between two special forms of these techniques: stacked DFs (namely Auto-context) and deep Convolutional Neural Networks (ConvNets). Our main contribution is to show that Auto-context can be mapped to a deep ConvNet with a novel architecture, and thereby trained end-to-end. This mapping can be used as an initialization of a deep ConvNet, enabling training even in the face of very limited amounts of training data. We also demonstrate an approximate mapping back from the refined ConvNet to a second stacked DF, with improved performance over the original. We experimentally verify that these mappings outperform stacked DFs for two different applications in computer vision and biology: Kinect-based body part labeling from depth images, and somite segmentation in microscopy images of developing zebrafish. Finally, we revisit the core mapping from a Decision Tree (DT) to a NN, and show that it is also possible to map a fuzzy DT, with sigmoidal split decisions, to a NN. This addresses multiple limitations of the previous mapping, and yields new insights into the popular Rectified Linear Unit (ReLU), and the more recently proposed concatenated ReLU (CReLU), activation functions.
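
    To make the DT-to-NN mapping concrete, here is a small numpy sketch of a fuzzy decision tree with sigmoidal split decisions, written as the two-layer computation a network would perform: first-layer units compute soft split decisions, and each leaf's activation is the product of the decisions along its path. The depth-2 tree, thresholds and leaf values below are illustrative, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fuzzy_tree_predict(x, splits, leaf_values, steepness=10.0):
    """splits: (feature index, threshold) for the root, left child and right child."""
    root, left, right = [sigmoid(steepness * (x[j] - t)) for j, t in splits]
    leaf_probs = np.array([
        (1 - root) * (1 - left),   # route left, then left
        (1 - root) * left,         # route left, then right
        root * (1 - right),        # route right, then left
        root * right,              # route right, then right
    ])
    return leaf_probs @ leaf_values  # soft-routed prediction; differentiable end-to-end

x = np.array([0.2, 0.9, 0.4])
splits = [(0, 0.5), (1, 0.5), (2, 0.5)]
leaf_values = np.array([0.0, 1.0, 2.0, 3.0])
print(fuzzy_tree_predict(x, splits, leaf_values))
# As steepness grows, the sigmoids approach step functions and the fuzzy tree
# recovers the hard routing of an ordinary decision tree.
```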

    A hybrid model for bankruptcy prediction using genetic algorithm, fuzzy c-means and MARS

    Bankruptcy prediction is very important for every organization, since bankruptcy affects the economy and raises many social problems with high costs. A large number of techniques have been developed to predict bankruptcy, and they help decision makers such as investors and financial analysts. One existing bankruptcy prediction model is the hybrid model using Fuzzy C-means clustering and MARS, which uses static ratios taken from bank financial statements for prediction and has its own theoretical advantages. The performance of this existing model can be improved by selecting the best features dynamically, depending on the nature of the firm. This dynamic selection can be accomplished by a Genetic Algorithm, and it improves the performance of the prediction model. Comment: Bankruptcy prediction, financial ratio models, Genetic Algorithm, Fuzzy C-means Clustering, MARS
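
    As a rough illustration of the dynamic feature-selection step, the sketch below evolves binary feature masks with a simple genetic algorithm, scoring each mask by the cross-validated accuracy of a downstream classifier. The data are synthetic and the classifier (logistic regression) is a stand-in; in the paper the selected financial ratios would feed the Fuzzy C-means + MARS pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=1)

def fitness(mask):
    """Cross-validated accuracy of a classifier restricted to the masked features."""
    if mask.sum() == 0:
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))               # random initial masks
for generation in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]                   # keep the fittest half
    cut = X.shape[1] // 2
    children = np.concatenate([parents[:, :cut], parents[::-1, cut:]], axis=1)  # crossover
    flip = rng.random(children.shape) < 0.05                  # mutation
    pop = np.concatenate([parents, np.where(flip, 1 - children, children)])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```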

    End-to-end Learning of Deterministic Decision Trees

    Conventional decision trees have a number of favorable properties, including interpretability, a small computational footprint and the ability to learn from little training data. However, they lack a key quality that has helped fuel the deep learning revolution: that of being end-to-end trainable, and of learning from scratch those features that best allow solving a given supervised learning problem. Recent work (Kontschieder 2015) has addressed this deficit, but at the cost of losing a main attractive trait of decision trees: the fact that each sample is routed along only a small subset of tree nodes. We here propose a model and an Expectation-Maximization training scheme for decision trees that are fully probabilistic at train time, but become deterministic at test time after a deterministic annealing process. We also analyze the learned oblique split parameters on image datasets and show that Neural Networks can be trained at each split node. In summary, we present the first end-to-end learning scheme for deterministic decision trees and present results on par with or superior to published standard oblique decision tree algorithms.
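
    A toy sketch of the train-soft, test-hard idea: split decisions are sigmoidal during training, and a deterministic-annealing schedule sharpens them until they behave like ordinary hard splits at test time. The oblique split parameters and the schedule below are illustrative only, not the paper's.

```python
import numpy as np

def split_probability(x, w, b, temperature):
    """Probability of routing x to the right child of the oblique split w.x + b > 0."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b) / temperature))

x = np.array([0.3, -0.8, 0.5])
w = np.array([1.0, 0.5, -2.0])
b = 0.1

for temperature in (1.0, 0.3, 0.1, 0.01):   # annealing schedule
    print(temperature, split_probability(x, w, b, temperature))
# As the temperature goes to zero the routing probability saturates to 0 or 1,
# so the tree that is probabilistic at train time becomes deterministic at test time.
```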

    Truecluster: robust scalable clustering with model selection

    Data-based classification is fundamental to most branches of science. While recent years have brought enormous progress in various areas of statistical computing and clustering, some general challenges in clustering remain: model selection, robustness, and scalability to large datasets. We consider the important problem of deciding on the optimal number of clusters, given an arbitrary definition of space and clusteriness. We show how to construct a cluster information criterion that allows objective model selection. Differing from other approaches, our truecluster method does not require specific assumptions about underlying distributions, dissimilarity definitions or cluster models. Truecluster puts arbitrary clustering algorithms into a generic unified (sampling-based) statistical framework. It is scalable to big datasets and provides robust cluster assignments and case-wise diagnostics. Truecluster will make clustering more objective, allow for automation, and save time and costs. Free R software is available. Comment: Article (10 figures). Changes in 2nd version: dropped supplements in favor of better integrated presentation, better literature coverage, put into proper English. Author's website available via http://www.truecluster.co
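
    The paper's cluster information criterion is not reproduced here, but the sketch below shows the general flavor of a sampling-based wrapper around an arbitrary clustering algorithm: prefer the number of clusters whose assignments are most stable across resamples. The data, the base algorithm (k-means) and the stability measure (adjusted Rand index) are stand-ins chosen for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

X, _ = make_blobs(n_samples=400, centers=3, random_state=0)
rng = np.random.default_rng(0)

def stability(k, n_pairs=10, subsample=200):
    """Average agreement between clusterings fitted on independent subsamples."""
    scores = []
    for _ in range(n_pairs):
        a = rng.choice(len(X), size=subsample, replace=False)
        b = rng.choice(len(X), size=subsample, replace=False)
        km_a = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X[a])
        km_b = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X[b])
        # Compare the two fitted models by their assignments on the full dataset.
        scores.append(adjusted_rand_score(km_a.predict(X), km_b.predict(X)))
    return float(np.mean(scores))

for k in range(2, 7):
    print(f"k={k}  stability={stability(k):.3f}")
```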

    Segmentation of articular cartilage and early osteoarthritis based on the fuzzy soft thresholding approach driven by modified evolutionary ABC optimization and local statistical aggregation

    Articular cartilage assessment, with the aim of identifying cartilage loss, is a crucial task in the clinical practice of orthopedics. Conventional software (SW) instruments allow just a visualization of the knee structure, without post-processing that would offer objective cartilage modeling. In this paper, we propose a multiregional segmentation method that aims to provide a mathematical model reflecting the physiological morphological structure of the cartilage and the spots corresponding to early cartilage loss, which are poorly recognizable by the naked eye in magnetic resonance imaging (MRI). The proposed segmentation model is composed of two pixel-classification parts. First, the image histogram is decomposed using a sequence of triangular fuzzy membership functions whose localization is driven by a modified artificial bee colony (ABC) optimization algorithm, utilizing a random sequence of candidate solutions based on real cartilage features. In the second part of the segmentation model, each pixel's original membership in its segmentation class may be modified using local statistical aggregation, which takes into account the spatial relationships with adjacent pixels. In this way, image noise and artefacts, which are commonly present in MR images, may be identified and eliminated, making the model robust with regard to distorting signals. We analyzed the proposed model on 2D spatial MR image records. We show different MR clinical cases of articular cartilage segmentation with identification of cartilage loss. In the final part of the analysis, we compare our model's performance against selected conventional methods applied to MR image records corrupted by additive image noise.
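
    The first stage of the model can be pictured with the short sketch below: pixel intensities are mapped to class memberships through triangular fuzzy membership functions placed over the intensity range, and each pixel is assigned to the class with the largest membership. In the paper the triangle positions are optimized by the modified ABC algorithm and followed by local statistical aggregation; here they are simply fixed, and the image is synthetic.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular fuzzy membership function with feet a, c and peak b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64)).astype(float)   # synthetic stand-in for an MR slice

# Three illustrative segmentation classes defined by (a, b, c) triangle vertices.
classes = [(-1.0, 40.0, 110.0), (40.0, 128.0, 215.0), (145.0, 220.0, 257.0)]
memberships = np.stack([triangular(image, *abc) for abc in classes])  # shape (3, 64, 64)
segmentation = memberships.argmax(axis=0)
print("pixels per class:", np.bincount(segmentation.ravel(), minlength=3))
```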

    Survey of state-of-the-art mixed data clustering algorithms

    Mixed data comprises both numeric and categorical features, and mixed datasets occur frequently in many domains, such as health, finance, and marketing. Clustering is often applied to mixed datasets to find structures and to group similar objects for further analysis. However, clustering mixed data is challenging because it is difficult to directly apply mathematical operations, such as summation or averaging, to the feature values of these datasets. In this paper, we present a taxonomy for the study of mixed data clustering algorithms by identifying five major research themes. We then present a state-of-the-art review of the research works within each research theme. We analyze the strengths and weaknesses of these methods with pointers for future research directions. Lastly, we present an in-depth analysis of the overall challenges in this field, highlight open research questions and discuss guidelines to make progress in the field. Comment: 20 pages, 2 columns, 6 tables, 209 references
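
    The difficulty the survey starts from, that sums and averages are not meaningful across numeric and categorical features, is often sidestepped by defining a dissimilarity instead. A Gower-style measure, sketched below with made-up customer records, mixes range-scaled numeric differences with simple categorical mismatches; it is one common building block among the families of methods a survey like this covers.

```python
import numpy as np

def gower_distance(a, b, numeric_idx, categorical_idx, ranges):
    """Average of range-scaled numeric differences and categorical mismatches."""
    parts = []
    for j in numeric_idx:
        parts.append(abs(a[j] - b[j]) / ranges[j])      # scaled numeric difference
    for j in categorical_idx:
        parts.append(0.0 if a[j] == b[j] else 1.0)      # categorical mismatch
    return float(np.mean(parts))

# age, income, marital status, product segment (illustrative records)
x = (34, 52000.0, "married", "basic")
y = (51, 48000.0, "single", "basic")
ranges = {0: 60.0, 1: 100000.0}                          # feature ranges over the dataset
print(gower_distance(x, y, numeric_idx=[0, 1], categorical_idx=[2, 3], ranges=ranges))
```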

    Learning from Imprecise and Fuzzy Observations: Data Disambiguation through Generalized Loss Minimization

    Methods for analyzing or learning from "fuzzy data" have attracted increasing attention in recent years. In many cases, however, existing methods (for precise, non-fuzzy data) are extended to the fuzzy case in an ad-hoc manner, and without carefully considering the interpretation of a fuzzy set when being used for modeling data. Distinguishing between an ontic and an epistemic interpretation of fuzzy set-valued data, and focusing on the latter, we argue that a "fuzzification" of learning algorithms based on an application of the generic extension principle is not appropriate. In fact, the extension principle fails to properly exploit the inductive bias underlying statistical and machine learning methods, although this bias, at least in principle, offers a means for "disambiguating" the fuzzy data. Alternatively, we therefore propose a method which is based on the generalization of loss functions in empirical risk minimization, and which performs model identification and data disambiguation simultaneously. Elaborating on the fuzzification of specific types of losses, we establish connections to well-known loss functions in regression and classification. We compare our approach with related methods and illustrate its use in logistic regression for binary classification.
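
    A coarse sketch of the generalized-loss idea, restricted to the crisp special case of set-valued labels: each example carries a set of candidate labels, the empirical risk takes the smallest loss over the candidates, and minimizing it fits the model and disambiguates the data at the same time. The data, the logistic model and the optimizer below are illustrative stand-ins, not the paper's formulation for genuinely fuzzy observations.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
y = (X @ np.array([2.0, -1.0]) + 0.3 * rng.normal(size=60) > 0).astype(float)
# Make a third of the labels imprecise: their candidate set is {0, 1}.
candidates = [{yi} if i % 3 else {0.0, 1.0} for i, yi in enumerate(y)]

def log_loss(z, label):
    """Logistic loss with labels in {0, 1}."""
    return np.log1p(np.exp(-(2.0 * label - 1.0) * z))

def generalized_risk(w):
    z = X @ w
    # Optimistic (minimum) loss over each example's candidate labels.
    return sum(min(log_loss(zi, c) for c in cand) for zi, cand in zip(z, candidates)) / len(X)

w_hat = minimize(generalized_risk, x0=np.zeros(2)).x
print("estimated weights:", w_hat)
```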

    Numeric Input Relations for Relational Learning with Applications to Community Structure Analysis

    Most work in the area of statistical relational learning (SRL) is focussed on discrete data, even though a few approaches for hybrid SRL models have been proposed that combine numerical and discrete variables. In this paper we distinguish numerical random variables for which a probability distribution is defined by the model from numerical input variables that are only used for conditioning the distribution of discrete response variables. We show how numerical input relations can very easily be used in the Relational Bayesian Network framework, and that existing inference and learning methods need only minor adjustments to be applied in this generalized setting. The resulting framework provides natural relational extensions of classical probabilistic models for categorical data. We demonstrate the usefulness of RBN models with numeric input relations by several examples. In particular, we use the augmented RBN framework to define probabilistic models for multi-relational (social) networks in which the probability of a link between two nodes depends on numeric latent feature vectors associated with the nodes. A generic learning procedure can be used to obtain a maximum-likelihood fit of model parameters and latent feature values for a variety of models that can be expressed in the high-level RBN representation. Specifically, we propose a model that allows us to interpret learned latent feature values as community centrality degrees by which we can identify nodes that are central for one community, that are hubs between communities, or that are isolated nodes. In a multi-relational setting, the model also provides a characterization of how different relations are associated with each community.
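
    The RBN representation itself is not reproduced here, but the sketch below shows the flavor of latent-feature link model the abstract describes: each node carries a nonnegative feature vector with one entry per community, and the probability of a link grows with the dot product of the endpoints' vectors, so large single entries mark community-central nodes, balanced entries mark hubs, and near-zero vectors mark isolated nodes. All numbers are illustrative.

```python
import numpy as np

def link_probability(u_i, u_j, bias=-2.0):
    """Probability of an edge given the two nodes' latent community feature vectors."""
    return 1.0 / (1.0 + np.exp(-(bias + u_i @ u_j)))

# Latent community-centrality vectors for four nodes over two communities.
features = {
    "a": np.array([2.0, 0.0]),   # central in community 1
    "b": np.array([1.5, 0.2]),   # mostly community 1
    "c": np.array([0.8, 0.9]),   # hub between the two communities
    "d": np.array([0.0, 0.1]),   # effectively isolated
}

for i in features:
    for j in features:
        if i < j:
            print(i, "-", j, round(link_probability(features[i], features[j]), 3))
```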