51,417 research outputs found

    Financial predictions using cost sensitive neural networks for multi-class learning

    Interest in the localisation of wireless sensor networks has grown in recent years, and a variety of machine-learning methods have been proposed to improve the optimisation of the complex behaviour of wireless networks. Network administrators have found that traditional classification algorithms may perform poorly on imbalanced datasets, and the problem of imbalanced data learning has therefore received particular interest. The purpose of this study was to examine design modifications to neural networks in order to address the problem of cost optimisation decisions and financial predictions. The goal was to compare four learning-based techniques using a cost-sensitive neural network ensemble for multiclass imbalanced data learning. The problem is formulated as a combinatorial cost optimisation, in terms of minimising the cost, using meta-learning classification rules for Naïve Bayes, J48, Multilayer Perceptron, and Radial Basis Function models. With these models, optimisation faults and cost evaluations for network training are considered.
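    The core idea of cost-sensitive multiclass learning described above can be sketched as a loss function weighted by a misclassification-cost matrix. This is a minimal illustrative sketch, not the paper's actual formulation; the function name, cost values, and the particular linear penalty are assumptions for illustration.

```python
import math

def cost_sensitive_nll(probs, true_class, cost_matrix):
    """Negative log-likelihood of the true class, plus a penalty on the
    probability mass assigned to each wrong class, weighted by the cost
    of that confusion.  cost_matrix[i][j] is the cost of predicting j
    when the truth is i (hypothetical values below)."""
    loss = 0.0
    for j, p in enumerate(probs):
        if j == true_class:
            loss += -math.log(max(p, 1e-12))        # usual likelihood term
        else:
            loss += cost_matrix[true_class][j] * p  # penalise costly errors
    return loss

# Confusing class 0 with class 2 is five times costlier than with class 1,
# so shifting probability mass toward class 2 raises the loss.
costs = [[0, 1, 5],
         [1, 0, 1],
         [1, 1, 0]]
cheap = cost_sensitive_nll([0.7, 0.2, 0.1], 0, costs)
dear  = cost_sensitive_nll([0.7, 0.1, 0.2], 0, costs)
```

    With uniform costs this reduces to an ordinary likelihood plus a constant-weight penalty; the cost matrix is what lets the ensemble favour errors that are cheap over errors that are expensive.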

    Enhancement of speed and efficiency of an Internet based gear design optimisation

    An internet-based gear design optimisation program has been developed for geographically dispersed teams to collaborate over the internet. The optimisation program implements a genetic algorithm. A novel methodology is presented that improves the speed of execution of the optimisation program by integrating artificial neural networks into the system. The paper also proposes a method that improves the performance of the back-propagation learning algorithm by rescaling the output data patterns to lie slightly above and below the two extreme values of the full range of the neural activation function. Experimental tests show a reduction of execution time by approximately 50%, as well as an improvement in the training and generalisation errors and the rate of learning of the network.

    An Online Learning Method for Microgrid Energy Management Control*

    We propose a novel Model Predictive Control (MPC) scheme based on online learning (OL) for microgrid energy management, where the control optimisation is embedded as the last layer of the neural network. The proposed MPC scheme deals with uncertainty in the load and renewable generation power profiles and in electricity prices by employing the predictions provided by an online-trained neural network in the optimisation problem. In order to adapt to possible changes in the environment, the neural network is trained online on continuously received data. The network hyperparameters are selected by performing a hyperparameter optimisation before the deployment of the controller, using a pretraining dataset. We show the effectiveness of the proposed method for microgrid energy management through extensive experiments on real microgrid datasets. Moreover, we show that the proposed algorithm has good transfer learning (TL) capabilities among different microgrids.
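    The predict-act-then-update pattern of online-learning MPC can be sketched with a toy receding-horizon loop. The exponentially weighted average below is a deliberately simple stand-in for the paper's online-trained neural network, and the capacity-capped purchase rule is a hypothetical stand-in for the embedded control optimisation; all names and numbers are illustrative assumptions.

```python
class OnlineForecaster:
    """Toy stand-in for the online-trained network: an exponentially
    weighted average of observed loads (alpha plays the role of the
    tuned hyperparameters)."""
    def __init__(self, alpha=0.5):
        self.alpha, self.estimate = alpha, 0.0
    def update(self, observed):
        self.estimate = self.alpha * observed + (1 - self.alpha) * self.estimate
    def predict(self):
        return self.estimate

def mpc_step(forecaster, capacity):
    """Receding-horizon decision: cover the predicted load, capped by
    the line capacity (a placeholder for the real constrained solve)."""
    return min(forecaster.predict(), capacity)

f = OnlineForecaster()
decisions = []
for load in [10.0, 12.0, 8.0]:
    decisions.append(mpc_step(f, capacity=11.0))  # act on the current prediction
    f.update(load)                                # then learn from the new datum
```

    The key structural point survives the simplification: the controller acts on the model's current prediction, then the model is updated with the newly observed data before the next step.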

    Random Neural Networks and Optimisation

    In this thesis we introduce new models and learning algorithms for the Random Neural Network (RNN), and we develop RNN-based and other approaches for the solution of emergency management optimisation problems. With respect to RNN developments, two novel supervised learning algorithms are proposed. The first is a gradient descent algorithm for an RNN extension model that we have introduced, the RNN with synchronised interactions (RNNSI), which was inspired by the synchronised firing activity observed in brain neural circuits. The second algorithm is based on modelling the signal-flow equations in the RNN as a nonnegative least squares (NNLS) problem. NNLS is solved using a limited-memory quasi-Newton algorithm specifically designed for the RNN case. Regarding the investigation of emergency management optimisation problems, we examine combinatorial assignment problems that require fast, distributed and close-to-optimal solutions under information uncertainty. We consider three different problems with the above characteristics: the assignment of emergency units to incidents with injured civilians (AEUI), the assignment of assets to tasks under execution uncertainty (ATAU), and the deployment of a robotic network to establish communication with trapped civilians (DRNCTC). AEUI is solved by training an RNN tool with instances of the optimisation problem and then using the trained RNN for decision making; training is achieved using the developed learning algorithms. For the solution of the ATAU problem, we introduce two different approaches. The first is based on mapping parameters of the optimisation problem to RNN parameters, and the second on solving a sequence of minimum cost flow problems on appropriately constructed networks with estimated arc costs. For the exact solution of the DRNCTC problem, we develop a mixed-integer linear programming formulation, which is based on network flows. Finally, we design and implement distributed heuristic algorithms for the deployment of robots when the civilian locations are known or uncertain.
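    The NNLS subproblem mentioned above, minimising ||Ax - b||^2 subject to x >= 0, can be sketched with projected gradient descent. This is a simplified stand-in for the thesis's limited-memory quasi-Newton solver, using a hypothetical two-dimensional problem; only the constrained least-squares formulation itself comes from the abstract.

```python
def nnls_projected_gradient(A, b, steps=2000, lr=0.01):
    """Minimise ||Ax - b||^2 subject to x >= 0: take a gradient step on
    the squared residual, then project onto the nonnegative orthant."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        # residual r = Ax - b
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        # gradient g = 2 * A^T r
        g = [2 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [max(0.0, x[j] - lr * g[j]) for j in range(n)]  # projection step
    return x

A = [[1.0, 0.0], [0.0, 1.0]]
b = [3.0, -2.0]
x = nnls_projected_gradient(A, b)  # unconstrained optimum (3, -2) projects to (3, 0)
```

    A quasi-Newton method, as used in the thesis, replaces the fixed-step gradient update with curvature-informed steps and converges far faster, but the feasibility projection plays the same role.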

    Empirical Formulation of Highway Traffic Flow Prediction Objective Function Based on Network Topology

    Accurate highway road predictions are necessary for timely decision making by transport authorities. In this paper, we propose a traffic flow objective function for a highway road prediction model. The bi-directional flow function of individual roads is derived by considering the net inflows and outflows obtained from a topological breakdown of the highway network. Further, we optimise and compare the proposed objective function, subject to the constraints involved, using a stacked long short-term memory (LSTM) recurrent neural network model, considering different loss functions and training optimisation strategies. Finally, we report the best-fitting machine learning model parameters for the proposed flow objective function for better prediction accuracy.
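    The net inflow/outflow bookkeeping behind the proposed objective can be sketched as follows. The road identifier, flow values, and the mean-squared-error loss are illustrative assumptions; the paper derives the actual flows from the highway network's topology and compares several loss functions.

```python
def net_flow(road, inflows, outflows):
    """Net flow of a road segment: total flow entering from connected
    upstream roads minus total flow leaving to downstream roads.
    inflows/outflows map a road id to lists of per-neighbour flows."""
    return sum(inflows.get(road, [])) - sum(outflows.get(road, []))

def prediction_loss(predicted, observed):
    """Mean squared error between predicted and observed net flows --
    one candidate training objective for the LSTM model."""
    return sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed)

inflows  = {"A1": [120.0, 80.0]}   # flows arriving from two upstream roads
outflows = {"A1": [150.0]}         # flow leaving to one downstream road
nf = net_flow("A1", inflows, outflows)  # 200 in, 150 out
```

    The LSTM is then trained to predict these per-road net flows over time, with the loss above (or an alternative) scoring the fit.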

    Learning to Generate Genotypes with Neural Networks

    Neural networks and evolutionary computation have a rich intertwined history. They most commonly appear together when an evolutionary algorithm optimises the parameters and topology of a neural network for reinforcement learning problems, or when a neural network is applied as a surrogate fitness function to aid the evolutionary optimisation of expensive fitness functions. In this paper we take a different approach, asking whether a neural network can be used to provide a mutation distribution for an evolutionary algorithm, and what advantages this approach may offer. Two modern neural network models are investigated: a Denoising Autoencoder modified to produce stochastic outputs, and the Neural Autoregressive Distribution Estimator. Results show that the neural network approach to learning genotypes is able to solve many difficult discrete problems, such as MaxSat and HIFF, and regularly outperforms other evolutionary techniques.
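    The model-then-sample loop behind "learning genotypes" can be sketched with the simplest possible generative model: per-bit marginals fitted to the selected parents, sampled to produce offspring. This is a minimal stand-in for the paper's stochastic Denoising Autoencoder and NADE models, with hypothetical parent genotypes; only the overall generate-from-a-learned-distribution idea comes from the abstract.

```python
import random

def learn_marginals(population):
    """Per-bit frequency of 1s among the selected parents -- the crudest
    possible model of the parent distribution (a real run would fit an
    autoencoder or NADE here)."""
    n = len(population[0])
    return [sum(ind[j] for ind in population) / len(population) for j in range(n)]

def generate(marginals, rng):
    """Sample a new genotype from the learned distribution, playing the
    role of the network's stochastic output layer."""
    return [1 if rng.random() < p else 0 for p in marginals]

rng = random.Random(0)
parents = [[1, 1, 0], [1, 0, 0], [1, 1, 1]]
m = learn_marginals(parents)   # bit 0 is fixed at 1 in every parent
child = generate(m, rng)
```

    The neural models in the paper capture correlations between bits that per-bit marginals cannot, which is what lets them solve structured problems like HIFF.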

    Optimisation in Neurosymbolic Learning Systems

    In the last few years, Artificial Intelligence (AI) has reached the public consciousness through high-profile applications such as chatbots, image generators, speech synthesis and transcription. These are all due to the success of deep learning: machine learning algorithms that learn tasks from massive amounts of data. The neural network models used in deep learning involve many parameters, often on the order of billions. These models often fail on tasks that computers are traditionally very good at, like calculating arithmetic expressions, reasoning about many different pieces of information, planning and scheduling complex systems, and retrieving information from a database. These tasks are traditionally solved using symbolic methods in AI based on logic and formal reasoning. Neurosymbolic AI instead aims to integrate deep learning with symbolic AI. This integration has many promises, such as decreasing the amount of data required to train the neural networks, improving the explainability and interpretability of answers given by models, and verifying the correctness of trained systems. We mainly study neurosymbolic learning, where we have, in addition to data, background knowledge expressed using symbolic languages. How do we connect the symbolic and neural components to communicate this knowledge to the neural networks? We consider two answers: fuzzy and probabilistic reasoning. Fuzzy reasoning studies degrees of truth. A person can be very or somewhat tall: tallness is not a binary concept. Probabilistic reasoning, by contrast, studies the probability that something is true or will happen. A coin has a 0.5 probability of landing heads; we never say it landed on "somewhat heads". What happens when we use fuzzy (part I) or probabilistic (part II) approaches to neurosymbolic learning? Moreover, do these approaches use the background knowledge we expect them to? Our first research question studies how different forms of fuzzy reasoning combine with learning. We find surprising results like a connection to the Raven paradox, which states that we confirm "ravens are black" when we observe a green apple. In this study, we gave our neural network a training objective created from the background knowledge. However, we did not use the background knowledge when we deployed our models after training. In our second research question, we studied how to use background knowledge in deployed models. To this end, we developed a new neural network layer based on fuzzy reasoning. The remaining research questions study probabilistic approaches to neurosymbolic learning. Probabilistic reasoning is a natural fit for neural networks, which we usually train to be probabilistic. However, probabilistic approaches come at a cost: they are expensive to compute and do not scale well to large tasks. In our third research question, we study how to connect probabilistic reasoning with neural networks by sampling to estimate averages. Sampling circumvents computing reasoning outcomes for all input combinations. In the fourth and final research question, we study scaling probabilistic neurosymbolic learning to much larger problems than was possible before. Our insight is to train a neural network to predict the result of probabilistic reasoning. We perform this training process with just the background knowledge: we do not collect data. How is this related to optimisation? All research questions are related to optimisation problems. Within neurosymbolic learning, optimisation with popular methods like gradient descent undertakes a form of reasoning. There is ample opportunity to study how this optimisation perspective improves our neurosymbolic learning methods. We hope this dissertation provides some of the answers needed to make practical neurosymbolic learning a reality: where practitioners provide both data and knowledge that the neurosymbolic learning methods use as efficiently as possible to train the next generation of neural networks.
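    The fuzzy-reasoning side of the thesis, including the Raven-paradox behaviour, can be illustrated with standard fuzzy connectives over truth degrees in [0, 1]. The specific choice of the product t-norm and the Łukasiewicz implication is an assumption for illustration; the thesis compares several such operators.

```python
def product_and(a, b):
    """Product t-norm: a standard fuzzy conjunction of truth degrees."""
    return a * b

def lukasiewicz_implies(a, b):
    """Lukasiewicz implication: fully true when a <= b, otherwise
    penalised by how far the consequent falls short of the antecedent."""
    return min(1.0, 1.0 - a + b)

# Raven-paradox behaviour: "is a raven -> is black" is fully satisfied
# by a green apple, because the antecedent has truth degree 0.
apple_is_raven, apple_is_black = 0.0, 0.0
raven_rule_on_apple = lukasiewicz_implies(apple_is_raven, apple_is_black)
```

    When a training objective averages such implication scores over a dataset, non-ravens satisfy the rule for free, which is exactly the kind of surprising interaction between fuzzy semantics and learning that the first research question studies.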
