
    Novel Metaknowledge-based Processing Technique for Multimedia Big Data clustering challenges

    Past research has challenged us with the task of revealing relational patterns between text-based data and then clustering them for predictive analysis using the Golay Code technique. We focus on a novel approach to extracting metaknowledge from multimedia datasets. Our collaboration has been an ongoing effort to study the relational patterns between data points based on metafeatures extracted from metaknowledge in multimedia datasets. The metafeatures selected are those suited to the mining technique we applied, the Golay Code algorithm. In this research paper we summarize findings on optimizing the metaknowledge representation into a 23-bit representation of structured and unstructured multimedia data in order to… Comment: IEEE Multimedia Big Data (BigMM 2015)
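    The following is a minimal, hypothetical sketch (in Python) of the 23-bit Golay Code clustering idea summarized above, assuming each data point has already been reduced to 23 binary metafeatures; the function names and example vectors are illustrative assumptions, not the authors' implementation. Every 23-bit vector lies within Hamming distance 3 of exactly one codeword of the perfect binary Golay (23, 12) code, so the index of that codeword can serve as a cluster label shared by nearby bit patterns.

        # Sketch: assign 23-bit metafeature vectors to clusters via the binary Golay (23, 12) code.
        GOLAY_GEN_POLY = 0b110001110101  # generator polynomial x^11 + x^10 + x^6 + x^5 + x^4 + x^2 + 1

        def _poly_mul_mod2(a: int, b: int) -> int:
            """Carry-less (GF(2)) multiplication of two bit-encoded polynomials."""
            result = 0
            while b:
                if b & 1:
                    result ^= a
                a <<= 1
                b >>= 1
            return result

        # Pre-compute all 2**12 codewords as non-systematic encodings m(x) * g(x).
        CODEWORDS = [_poly_mul_mod2(m, GOLAY_GEN_POLY) for m in range(1 << 12)]

        def golay_cluster_id(metafeatures_23bit: int) -> int:
            """Index of the nearest codeword; it is unique within Hamming distance 3."""
            best_idx, best_dist = 0, 24
            for idx, codeword in enumerate(CODEWORDS):
                dist = bin(metafeatures_23bit ^ codeword).count("1")
                if dist < best_dist:
                    best_idx, best_dist = idx, dist
            return best_idx

        # Points whose metafeature vectors differ in only a few bits share a cluster.
        point_a = CODEWORDS[42]          # illustrative starting point
        point_b = point_a ^ 0b100        # flip one metafeature bit
        assert golay_cluster_id(point_a) == golay_cluster_id(point_b) == 42

    The brute-force nearest-codeword search keeps the sketch short; a production implementation would use syndrome decoding or a decoding table for constant-time cluster assignment.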

    Towards a Comprehensible and Accurate Credit Management Model: Application of four Computational Intelligence Methodologies

    The paper presents methods for classifying applicants into different categories of credit risk using four computational intelligence techniques. The selected methodologies involved in the rule-based categorization task are (1) feedforward neural networks trained with second-order methods, (2) inductive machine learning, (3) hierarchical decision trees produced by grammar-guided genetic programming, and (4) fuzzy rule-based systems produced by grammar-guided genetic programming. The data used are both numerical and linguistic in nature, and they represent a real-world problem faced by a specific private EU bank: deciding whether a loan should be granted, given the financial details of the customers applying for it. We examine the proposed classification models on a sample of enterprises that applied for a loan, each described by financial decision variables (ratios) and classified into one of four predetermined classes. Attention is given to the comprehensibility and ease of use of the acquired decision models. Results show that the proposed methods can make the classification task easier and, in some cases, may significantly reduce the amount of required credit data. We consider that these methodologies may also enable the extraction of a comprehensible credit management model, or even the incorporation of a related decision support system into banking.
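    As a purely illustrative sketch of the rule-based categorization task described above, the snippet below trains an ordinary decision tree (a stand-in, not one of the paper's four methodologies) on synthetic financial ratios and prints the extracted rules; the feature names, class labels and data are assumptions.

        # Sketch: comprehensible, rule-based assignment of applicants to four credit-risk classes.
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier, export_text

        rng = np.random.default_rng(0)
        feature_names = ["liquidity_ratio", "debt_to_equity", "profit_margin"]  # hypothetical ratios
        X = rng.normal(size=(200, 3))         # synthetic financial decision variables
        y = rng.integers(0, 4, size=200)      # four predetermined credit-risk classes

        # A shallow tree keeps the rule set small, echoing the emphasis on comprehensibility.
        model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
        print(export_text(model, feature_names=feature_names))   # human-readable decision rules
        print(model.predict([[0.4, -1.2, 0.9]]))                 # classify a new applicant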

    Classifying sequences by the optimized dissimilarity space embedding approach: a case study on the solubility analysis of the E. coli proteome

    We evaluate a version of the recently proposed classification system named Optimized Dissimilarity Space Embedding (ODSE) that operates in the input space of sequences of generic objects. The ODSE system was originally presented as a classification system for patterns represented as labeled graphs. However, since ODSE is founded on the dissimilarity space representation of the input data, the classifier can easily be adapted to any input domain where a meaningful dissimilarity measure can be defined. Here we demonstrate the effectiveness of the ODSE classifier for sequences by considering an application dealing with the recognition of the solubility degree of the Escherichia coli proteome. Solubility, or analogously aggregation propensity, is an important property of protein molecules, intimately related to the mechanisms underlying the chemico-physical process of folding. Each protein in our dataset is associated with a solubility degree and is represented as a sequence of symbols denoting the 20 amino acid residues. The computational results obtained here, which we stress were achieved with no context-dependent tuning of the ODSE system, confirm the validity and generality of the ODSE-based approach for structured data classification. Comment: 10 pages, 49 references
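    The dissimilarity-space idea at the heart of ODSE can be sketched as follows; this is not the full ODSE system (the prototype set here is fixed rather than optimized), and the toy sequences, labels and choice of Levenshtein distance are assumptions for illustration. Each sequence is embedded as its vector of distances to a few prototype sequences, after which any ordinary vector-space classifier applies.

        # Sketch: classify symbol sequences via a dissimilarity-space embedding.
        from sklearn.svm import SVC

        def levenshtein(a, b):
            """Plain edit distance between two symbol sequences."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                curr = [i]
                for j, cb in enumerate(b, 1):
                    curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
                prev = curr
            return prev[-1]

        def embed(seq, prototypes):
            """Dissimilarity-space representation: distances to each prototype sequence."""
            return [levenshtein(seq, p) for p in prototypes]

        prototypes = ["MKT", "GAVLI", "PPGG"]                          # fixed representation set (toy)
        train_seqs = ["MKTA", "MKTT", "GAVL", "GAVLL", "PPGGA", "PPG"]
        train_labels = [0, 0, 1, 1, 2, 2]                              # e.g. solubility classes

        X = [embed(s, prototypes) for s in train_seqs]
        clf = SVC(kernel="rbf").fit(X, train_labels)
        print(clf.predict([embed("GAVLII", prototypes)]))              # classify a held-out sequence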

    Big data analytics: Computational intelligence techniques and application areas

    Big Data has a significant impact on developing functional smart cities and supporting modern societies. In this paper, we investigate the importance of Big Data in modern life and the economy, and discuss the challenges arising from Big Data utilization. Different computational intelligence techniques have been considered as tools for Big Data analytics. We also explore the powerful combination of Big Data and Computational Intelligence (CI) and identify a number of areas where novel applications in real-world smart city problems can be developed by utilizing these tools and techniques. We present a case study on intelligent transportation in the context of a smart city, and a novel data modelling methodology based on a biologically inspired universal generative modelling approach called the Hierarchical Spatial-Temporal State Machine (HSTSM). We further discuss various implications of policy, protection, valuation and commercialization related to Big Data, its applications and deployment.