
    A Multiple Cascade-Classifier System for a Robust and Partially Unsupervised Updating of Land-Cover Maps

    A system for the regular updating of land-cover maps is proposed that is based on the use of multitemporal remote-sensing images. Such a system is able to address the updating problem under the realistic but critical constraint that, for the image to be classified (i.e., the most recent of the considered multitemporal data set), no ground-truth information is available. The system is composed of an ensemble of partially unsupervised classifiers integrated in a multiple-classifier architecture. Each classifier of the ensemble exhibits the following novel peculiarities: i) it is developed in the framework of the cascade-classification approach to exploit the temporal correlation existing between images acquired at different times in the considered area; ii) it is based on a partially unsupervised methodology capable of performing the classification process under the aforementioned critical constraint. Both a parametric maximum-likelihood classification approach and a non-parametric radial basis function (RBF) neural-network classification approach are used as basic methods for the development of partially unsupervised cascade classifiers. In addition, in order to generate an effective ensemble of classification algorithms, hybrid maximum-likelihood and RBF neural-network cascade classifiers are defined by exploiting the peculiarities of the cascade-classification methodology. The results yielded by the different classifiers are combined by using standard unsupervised combination strategies. This allows the definition of a robust and accurate partially unsupervised classification system capable of analyzing a wide range of remote-sensing data (e.g., images acquired by passive sensors, SAR images, multisensor and multisource data). Experimental results obtained on a real multitemporal and multisource data set confirm the effectiveness of the proposed system.
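
    A minimal sketch of the unsupervised combination step described above, assuming each cascade classifier in the ensemble already produces per-pixel class posteriors; the function name and the mean/vote strategies are illustrative, not the exact combination rules used in the paper.

```python
import numpy as np

def combine_posteriors(posteriors, strategy="mean"):
    """Unsupervised combination of per-pixel class posteriors.

    posteriors : list of arrays, each of shape (n_pixels, n_classes),
                 one per cascade classifier in the ensemble.
    strategy   : "mean" (average rule) or "vote" (majority voting).
    Returns the combined class label for every pixel.
    """
    stacked = np.stack(posteriors)          # (n_classifiers, n_pixels, n_classes)
    if strategy == "mean":
        return stacked.mean(axis=0).argmax(axis=1)
    # majority vote over each classifier's hard decision
    votes = stacked.argmax(axis=2)          # (n_classifiers, n_pixels)
    n_classes = stacked.shape[2]
    counts = np.apply_along_axis(np.bincount, 0, votes, minlength=n_classes)
    return counts.argmax(axis=0)

# Hypothetical usage with two ensemble members (e.g., an ML and an RBF cascade)
p_ml = np.array([[0.7, 0.3], [0.4, 0.6]])
p_rbf = np.array([[0.6, 0.4], [0.2, 0.8]])
print(combine_posteriors([p_ml, p_rbf], strategy="mean"))   # -> [0 1]
```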

    Machine Learning and Integrative Analysis of Biomedical Big Data.

    Recent developments in high-throughput technologies have accelerated the accumulation of massive amounts of omics data from multiple sources: genome, epigenome, transcriptome, proteome, metabolome, etc. Traditionally, data from each source (e.g., the genome) are analyzed in isolation using statistical and machine learning (ML) methods. Integrative analysis of multi-omics and clinical data is key to new biomedical discoveries and advancements in precision medicine. However, data integration poses new computational challenges and exacerbates those associated with single-omics studies. Specialized computational approaches are required to effectively and efficiently perform integrative analysis of biomedical data acquired from diverse modalities. In this review, we discuss state-of-the-art ML-based approaches for tackling five specific computational challenges associated with integrative analysis: the curse of dimensionality, data heterogeneity, missing data, class imbalance, and scalability issues.
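
    As a rough illustration of two of these challenges (the curse of dimensionality and class imbalance) in an early-integration setting, the sketch below concatenates features from two synthetic omics layers, reduces their dimensionality, and applies class weighting; the data shapes and the pipeline are assumptions, not drawn from the review.

```python
# Early (feature-level) integration of two synthetic omics layers, followed by
# dimensionality reduction and class weighting. All sizes are illustrative.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_genome = rng.normal(size=(100, 500))           # e.g., SNP-derived features
X_transcriptome = rng.normal(size=(100, 2000))   # e.g., expression levels
y = rng.integers(0, 2, size=100)                 # phenotype labels (imbalanced in practice)

X = np.hstack([X_genome, X_transcriptome])       # early integration by concatenation

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=20),                        # mitigate the curse of dimensionality
    LogisticRegression(class_weight="balanced", max_iter=1000),  # handle class imbalance
)
model.fit(X, y)
print(model.predict(X[:5]))
```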

    An ontology enhanced parallel SVM for scalable spam filter training

    Spam, under a variety of shapes and forms, continues to inflict increasing damage. Various approaches, including Support Vector Machine (SVM) techniques, have been proposed for spam filter training and classification. However, SVM training is a computationally intensive process. This paper presents a MapReduce-based parallel SVM algorithm for scalable spam filter training. By distributing, processing and optimizing subsets of the training data across multiple participating computer nodes, the parallel SVM reduces the training time significantly. Ontology semantics are employed to minimize the impact of accuracy degradation when distributing the training data among a number of SVM classifiers. Experimental results show that ontology-based augmentation improves the accuracy of the parallel SVM beyond its original sequential counterpart.
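
    A minimal single-machine sketch of the data-partitioning idea, assuming each "map" task trains a linear SVM on a subset and a "reduce" step retrains on the pooled support vectors; the ontology-based augmentation is omitted, and the scheme is illustrative rather than the authors' MapReduce implementation.

```python
# Partitioned SVM training: map tasks keep only their support vectors, the
# reduce step retrains on the pooled support vectors. Names are illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

def map_train(X_part, y_part):
    svm = SVC(kernel="linear").fit(X_part, y_part)
    return X_part[svm.support_], y_part[svm.support_]   # keep only support vectors

def reduce_train(partial_results):
    X_sv = np.vstack([x for x, _ in partial_results])
    y_sv = np.concatenate([y for _, y in partial_results])
    return SVC(kernel="linear").fit(X_sv, y_sv)          # final spam/ham model

X, y = make_classification(n_samples=3000, n_features=50, random_state=0)
parts = np.array_split(np.arange(len(y)), 4)             # 4 simulated worker nodes
final_model = reduce_train([map_train(X[idx], y[idx]) for idx in parts])
print(final_model.score(X, y))
```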

    Forecasting creditworthiness in retail banking: a comparison of cascade correlation neural networks, CART and logistic regression scoring models

    Research on credit scoring systems, including their relevance to forecasting and decision making in the financial sector, has concentrated on developed countries, whilst developing countries have been largely neglected. The focus of our investigation is the Cameroonian commercial banking sector, with implications for fellow members of the Banque des Etats de L’Afrique Centrale (BEAC) family which apply the same system. We investigate the approaches currently used to assess personal loans and construct appropriate scoring models. Three statistical scoring techniques are applied, namely Logistic Regression (LR), Classification and Regression Tree (CART) and Cascade Correlation Neural Network (CCNN). To compare the scoring models' performance we use the Average Correct Classification (ACC) rate, error rates, the ROC curve and the GINI coefficient as evaluation criteria. The results demonstrate that default cases can be reduced from 15.69% under the current system to 3.34% under the best scoring model, namely CART. The predictive capabilities of all three models are rated as at least very good using the GINI coefficient, and as excellent using the ROC curve for both CART and CCNN. It should be emphasised that, in terms of prediction rate, CCNN is superior to the other techniques investigated in this paper. A sensitivity analysis of the variables also identifies the borrower's account functioning, previous occupation, guarantees, car ownership and loan purpose as key variables in the forecasting and decision-making process, which are at the heart of overall credit policy.
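
    The evaluation criteria named above can be sketched as follows on synthetic data, using the conventional relation GINI = 2*AUC - 1 for a scoring model; the decision tree stands in for CART, and the figures produced are not those of the study.

```python
# ACC rate, ROC AUC and GINI coefficient for a CART-style scoring model,
# computed on synthetic, imbalanced data for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier          # CART-style model
from sklearn.metrics import accuracy_score, roc_auc_score

X, y = make_classification(n_samples=1000, weights=[0.85], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

cart = DecisionTreeClassifier(max_depth=4).fit(X_tr, y_tr)
scores = cart.predict_proba(X_te)[:, 1]                   # default probability scores

acc = accuracy_score(y_te, cart.predict(X_te))            # ACC rate
auc = roc_auc_score(y_te, scores)                         # area under the ROC curve
gini = 2 * auc - 1                                        # GINI coefficient
print(f"ACC={acc:.3f}  AUC={auc:.3f}  GINI={gini:.3f}")
```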

    Regression modeling for digital test of ΣΔ modulators

    The cost of Analogue and Mixed-Signal circuit testing is an important bottleneck in the industry, due to time-consuming verification of specifications that require state-of-the-art Automatic Test Equipment. In this paper, we apply the concept of Alternate Test to achieve digital testing of converters. By training an ensemble of regression models that maps simple digital defect-oriented signatures onto the Signal to Noise and Distortion Ratio (SNDR), an average error of 1.7% is achieved. Beyond the inference of functional metrics, we show that the approach can provide interesting diagnosis information. Funding: Ministerio de Educación y Ciencia TEC2007-68072/MIC; Junta de Andalucía TIC 5386, CT 30.
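
    A minimal sketch of the Alternate Test idea on synthetic data: a regression ensemble is trained to map digital signatures onto SNDR, and its average relative error is reported; the signature model, the random-forest choice and all values are assumptions, not the paper's setup.

```python
# Learn a regression map from digital defect-oriented signatures to SNDR and
# check the average relative prediction error. Values are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
signatures = rng.normal(size=(500, 8))                    # simple digital signatures
sndr_db = 70 + signatures @ rng.normal(size=8) + rng.normal(scale=0.5, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(signatures, sndr_db, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

rel_err = np.abs(model.predict(X_te) - y_te) / np.abs(y_te)
print(f"average relative error: {100 * rel_err.mean():.2f}%")
```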