6 research outputs found

    Leveraging the Potentials of Dedicated Collaborative Interactive Learning: Conceptual Foundations to Overcome Uncertainty by Human-Machine Collaboration

    When a learning system learns from data that has previously been assigned to categories, we say that the learning system learns in a supervised way. By supervised, we mean that a higher entity, for example a human, has arranged the data into categories. Fully categorizing the data is cost-intensive and time-consuming. Moreover, the categories (labels) provided by humans may be subject to uncertainty, as humans are prone to error. This is where dedicated collaborative interactive learning (D-CIL) comes into play: the learning system can decide from which data it learns, copes with uncertainty regarding the categories, and does not require a fully labeled dataset. Against this background, we lay the conceptual foundations for two central challenges at this early development stage of D-CIL: task complexity and uncertainty. We present an approach to crowdsourcing traffic sign labels with self-assessment that will support leveraging the potentials of D-CIL.
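    The abstract gives no algorithmic details, so the following Python sketch is only illustrative of the two ideas it mentions: letting the learner choose which data to learn from (margin-based uncertainty sampling) and weighting crowdsourced labels by the annotators' self-assessed confidence. All function names and the choice of classifier are assumptions, not the authors' method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_queries(model, X_unlabeled, n_queries=10):
    """Pick the unlabeled samples a fitted classifier is least certain about."""
    proba = np.sort(model.predict_proba(X_unlabeled), axis=1)
    margin = proba[:, -1] - proba[:, -2]           # gap between the top two classes
    return np.argsort(margin)[:n_queries]          # smallest margin = most uncertain

def fit_with_self_assessment(X, y, confidence):
    """Weight each crowdsourced label by the annotator's self-assessed confidence."""
    model = LogisticRegression(max_iter=1000)
    model.fit(X, y, sample_weight=confidence)
    return model
```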

    A Fault-Tolerant Regularizer for RBF Networks


    Integrating Local and Global Error Statistics for Multi-Scale RBF Network Training: An Assessment on Remote Sensing Data

    Background: This study discusses the theoretical underpinnings of a novel multi-scale radial basis function (MSRBF) neural network along with its application to classification and regression tasks in remote sensing. The novelty of the proposed MSRBF network lies in the integration of both local and global error statistics in the node selection process.
    Methodology and Principal Findings: The method was tested on a binary classification task, detection of impervious surfaces using a Landsat satellite image, and on a regression problem, simulation of waveform LiDAR data. In the classification scenario, results indicate that the MSRBF is superior to existing radial basis function and back-propagation neural networks in terms of classification accuracy and training-testing consistency, especially for smaller datasets. The latter is particularly important, as reference data acquisition is always an issue in remote sensing applications. In the regression case, the MSRBF provided improved accuracy and consistency when contrasted with a multi-kernel RBF network.
    Conclusion and Significance: Results highlight the potential of a novel training methodology that is not restricted to a specific algorithmic type and therefore significantly advances machine learning algorithms for classification and regression tasks. The MSRBF is expected to find numerous applications within and outside the remote sensing field.
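    The abstract does not spell out the node-selection criterion, so the Python sketch below only illustrates the general idea of blending a global error statistic (overall RMSE after adding a candidate centre) with a local one (RMSE in the candidate's neighbourhood). The weighting factor alpha, the neighbourhood definition, and all function names are assumptions rather than the paper's method.

```python
import numpy as np

def rbf_design(X, centres, width):
    """Gaussian RBF design matrix for samples X and the given centres."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def score_candidate(X, y, centres, candidate, width, alpha=0.5):
    """Blend global and local error statistics for one candidate RBF centre."""
    Phi = rbf_design(X, np.vstack([centres, candidate]), width)
    weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    resid = y - Phi @ weights
    global_err = np.sqrt(np.mean(resid ** 2))              # global RMSE
    near = np.linalg.norm(X - candidate, axis=1) < width   # local neighbourhood
    local_err = np.sqrt(np.mean(resid[near] ** 2)) if near.any() else global_err
    return alpha * global_err + (1.0 - alpha) * local_err  # lower is better
```

    A greedy training loop would score all remaining candidate centres with this function and add the lowest-scoring one until the error stops improving.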

    An incremental training method for the probabilistic RBF network

    The probabilistic radial basis function (PRBF) network constitutes a probabilistic version of the RBF network for classification that extends the typical mixture model approach to classification by allowing the sharing of mixture components among all classes. The typical learning method of the PRBF for a classification task employs the expectation–maximization (EM) algorithm and depends strongly on the initial parameter values. In this paper, we propose a technique for incremental training of the PRBF network for classification. The proposed algorithm starts with a single component and incrementally adds more components at appropriate positions in the data space. The addition of a new component is based on criteria for detecting a region in the data space that is crucial for the classification task. After the addition of all components, the algorithm splits every component of the network into subcomponents, each one corresponding to a different class. Experimental results using several well-known classification datasets indicate that the incremental method provides solutions of superior classification performance compared to the hierarchical PRBF training method. We also conducted comparative experiments with the support vector machine method and present the obtained results along with a qualitative comparison of the two approaches.
    Index Terms: classification, decision boundary, mixture models, neural networks, probabilistic modeling, radial basis function networks.
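    To make the overall shape of such an incremental scheme concrete, here is a minimal Python sketch. It is not the authors' algorithm: the criterion for locating a "crucial" region is a stand-in (the centroid of currently misclassified points), scikit-learn's GaussianMixture is used purely for convenience, and the final per-class splitting step is only indicated in a comment.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def incremental_prbf(X, y, max_components=10):
    """Incrementally grow a Gaussian mixture used as an RBF-style classifier."""
    centres = [X.mean(axis=0)]                      # start with a single component
    gmm = None
    for _ in range(max_components):
        gmm = GaussianMixture(n_components=len(centres),
                              means_init=np.array(centres),
                              random_state=0).fit(X)
        comp = gmm.predict(X)
        # Label each component with the majority class among the points it claims.
        comp_label = np.array([
            np.bincount(y[comp == c], minlength=y.max() + 1).argmax()
            if (comp == c).any() else -1
            for c in range(len(centres))
        ])
        errors = comp_label[comp] != y
        if not errors.any():
            break
        # Stand-in "crucial region" criterion: place the next component at the
        # centroid of the currently misclassified points.
        centres.append(X[errors].mean(axis=0))
    # Final step described in the abstract: split every component into one
    # subcomponent per class (e.g. by refitting class-conditional mixtures);
    # omitted here for brevity.
    return gmm
```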