    Financial predictions using cost sensitive neural networks for multi-class learning

    Interest in the localisation of wireless sensor networks has grown in recent years, and a variety of machine-learning methods have been proposed to improve the optimisation of the complex behaviour of wireless networks. Network administrators have found that traditional classification algorithms can be limited on imbalanced datasets; indeed, the problem of imbalanced data learning has received particular interest. The purpose of this study was to examine design modifications to neural networks that address cost-optimisation decisions and financial predictions. The goal was to compare four learning-based techniques using a cost-sensitive neural network ensemble for multiclass imbalanced data learning. The problem is formulated as a combinatorial cost optimisation, minimising cost using meta-learning classification rules for Naïve Bayes, J48, Multilayer Perceptron, and Radial Basis Function models. With these models, optimisation faults and cost evaluations for network training are considered.
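    The core idea behind cost-sensitive classification as described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes a hypothetical 3-class problem, an arbitrary cost matrix, and class probabilities that could come from any of the named base models (Naïve Bayes, J48, a Multilayer Perceptron, or an RBF network). Instead of predicting the most probable class, a cost-sensitive decision rule predicts the class with the lowest expected misclassification cost.

    ```python
    import numpy as np

    # Hypothetical cost matrix for a 3-class problem.
    # cost[i, j] = cost of predicting class j when the true class is i;
    # correct predictions (the diagonal) cost nothing. Class 1 is assumed
    # to be rare and expensive to miss, as in imbalanced financial data.
    cost = np.array([
        [0.0, 1.0, 1.0],   # true class 0
        [5.0, 0.0, 1.0],   # true class 1 (costly to misclassify as 0)
        [1.0, 1.0, 0.0],   # true class 2
    ])

    def min_expected_cost_class(probs, cost):
        """Pick the class minimising expected misclassification cost.

        probs: shape (n_classes,), posterior probabilities P(true = i | x)
               from any base classifier.
        cost:  shape (n_classes, n_classes) cost matrix.
        Expected cost of predicting j is sum_i probs[i] * cost[i, j].
        """
        expected = probs @ cost
        return int(np.argmin(expected))

    # Example: plain argmax would choose class 0 here, but the high cost
    # of missing class 1 shifts the cost-sensitive decision to class 1.
    probs = np.array([0.5, 0.3, 0.2])
    prediction = min_expected_cost_class(probs, cost)
    ```

    The same decision rule can wrap an ensemble: average the members' probability estimates, then minimise expected cost over the averaged distribution.
    
    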

    Simple, Efficient and Convenient Decentralized Multi-Task Learning for Neural Networks

    Artificial intelligence relying on machine learning is increasingly used on small, personal, network-connected devices such as smartphones and vocal assistants, and these applications will likely evolve with the development of the Internet of Things. The learning process requires a lot of data, often real users’ data, and computing power. Decentralized machine learning can help protect users’ privacy by keeping sensitive training data on users’ devices, and it has the potential to alleviate the cost borne by service providers by off-loading some of the learning effort to user devices. Unfortunately, most approaches proposed so far for distributed learning with neural networks are mono-task and do not transfer easily to multi-task problems, in which users seek to solve related but distinct learning tasks; the few existing multi-task approaches have serious limitations. In this paper, we propose a novel learning method for neural networks that is decentralized, multi-task, and keeps users’ data local. Our approach works with different learning algorithms and on various types of neural networks. We formally analyze the convergence of our method and evaluate its efficiency in different situations, on various kinds of neural networks, with different learning algorithms, thus demonstrating its benefits in terms of learning quality and convergence.
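    A common pattern for decentralized multi-task learning of the kind this abstract describes can be sketched as follows. This is an illustrative assumption, not the paper's algorithm: each user's network is split into a shared representation that is averaged with neighbours' copies (gossip-style consensus) and a task-specific head that never leaves the device, so both the training data and the task heads stay local. All sizes, the ring topology, and the parameter split are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Three users, each holding shared parameters (exchanged with peers)
    # and a task-specific head (kept local, one per user's own task).
    n_users = 3
    shared = [rng.normal(size=4) for _ in range(n_users)]   # exchanged
    heads = [rng.normal(size=2) for _ in range(n_users)]    # stay local

    # Ring topology: each user communicates only with two neighbours,
    # so no central server ever sees the parameters or the data.
    neighbours = {i: [(i - 1) % n_users, (i + 1) % n_users]
                  for i in range(n_users)}

    def gossip_round(shared, neighbours):
        """One decentralized round: each user replaces its shared
        parameters with the average of its own copy and its
        neighbours' copies. Task heads are untouched."""
        return [np.mean([shared[i]] + [shared[j] for j in neighbours[i]],
                        axis=0)
                for i in range(len(shared))]

    # In a full training loop, each user would also take local gradient
    # steps on its private data between gossip rounds.
    shared = gossip_round(shared, neighbours)
    ```

    With three users on a ring, every user's neighbourhood already covers all peers, so a single round reaches consensus on the shared parameters; larger or sparser topologies converge over repeated rounds while each task head continues to specialise on its own device.
    
    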