
    Data augmentation and semi-supervised learning for deep neural networks-based text classifier

    User feedback is essential for understanding user needs. In this paper, we use free text obtained from a survey on sleep-related issues to build a deep neural network-based text classifier. However, training a deep neural network requires a large amount of labelled data. To reduce manual data labelling, we propose a method that combines data augmentation and pseudo-labelling: data augmentation is applied to the labelled data to increase the size of the initial training set, and the trained model is then used to annotate unlabelled data with pseudo-labels. The results show that the model with data augmentation achieves a macro-averaged F1 score of 65.2% using 4,300 training examples, whereas the model without data augmentation achieves a macro-averaged F1 score of 68.2% with around 14,000 training examples. Furthermore, with the addition of pseudo-labelling, the model achieves a macro-averaged F1 score of 62.7% using only 1,400 labelled training examples. In other words, the proposed method reduces the amount of labelled training data needed for training while achieving relatively good performance.
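    The augmentation-plus-pseudo-labelling loop this abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the word-dropout augmentation, the generic TF-IDF/logistic-regression classifier, and the 0.6 confidence threshold are all assumptions standing in for the paper's deep neural network and its actual augmentation technique.

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Toy labelled and unlabelled free-text survey responses (illustrative).
    labelled_texts = ["slept badly due to noise", "great deep sleep",
                      "woke up often at night", "rested very well"]
    labels = [0, 1, 0, 1]
    unlabelled_texts = ["kept waking during the night",
                        "felt refreshed in the morning"]

    rng = np.random.default_rng(0)

    # Step 1: augment the labelled data -- here, naive word dropout
    # (an assumed stand-in for the paper's augmentation method).
    def word_dropout(text, p=0.2):
        words = text.split()
        kept = [w for w in words if rng.random() > p] or words
        return " ".join(kept)

    aug_texts = labelled_texts + [word_dropout(t) for t in labelled_texts]
    aug_labels = labels + labels

    # Step 2: train an initial classifier on the augmented labelled set.
    vec = TfidfVectorizer()
    clf = LogisticRegression().fit(vec.fit_transform(aug_texts), aug_labels)

    # Step 3: pseudo-label the unlabelled data, keep only confident
    # predictions, and retrain on the enlarged training set.
    probs = clf.predict_proba(vec.transform(unlabelled_texts))
    confident = probs.max(axis=1) > 0.6
    pseudo = probs.argmax(axis=1)

    final_texts = aug_texts + [t for t, c in zip(unlabelled_texts, confident) if c]
    final_labels = aug_labels + [int(y) for y, c in zip(pseudo, confident) if c]
    clf = LogisticRegression().fit(vec.transform(final_texts), final_labels)
    ```

    In practice the pseudo-labelling step is often iterated, re-annotating the remaining unlabelled pool with each retrained model.
    
    
    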

    A high order feedback net (HOFNET) with variable non-linearity

    Most neural networks proposed for pattern recognition sample the incoming image at one instant and then analyse it. This means that the data to be analysed contains only the noise present at that instant. Time-independent noise is therefore captured, but only one sample of time-dependent noise is included in the analysis. If, however, the incoming image is sampled at several instants, or continuously, then the time-dependent noise can be averaged out in the subsequent analysis. This, of course, assumes that sufficient samples can be taken before the object being imaged has moved an appreciable distance in the field of view. High-speed sampling requires parallel image input and is most conveniently carried out by optoelectronic neural network image analysis systems. Optical technology is particularly good at performing certain operations, such as Fourier transforms, correlations and convolutions, while others, such as subtraction, are difficult. For an optical net it is therefore best to choose an architecture based on convenient operations, such as high order neural networks.
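    The noise argument above can be demonstrated numerically. The sketch below (illustrative parameters, not from the paper) models a scene with fixed-pattern (time-independent) noise and zero-mean time-dependent noise; averaging many sampling instants suppresses only the latter, roughly by a factor of the square root of the number of samples.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # A fixed "true" image plus fixed-pattern (time-independent) noise that is
    # identical in every sample, and time-dependent noise that changes per instant.
    true_image = np.ones((16, 16))
    fixed_noise = 0.1 * rng.standard_normal((16, 16))
    n_samples = 100

    # Sample the scene at many instants and average the samples.
    samples = [true_image + fixed_noise + 0.5 * rng.standard_normal((16, 16))
               for _ in range(n_samples)]
    averaged = np.mean(samples, axis=0)

    # Error relative to the scene-plus-fixed-noise the sensor actually sees:
    # the fixed-pattern noise survives averaging, the time-dependent part does not.
    single_err = np.abs(samples[0] - (true_image + fixed_noise)).mean()
    avg_err = np.abs(averaged - (true_image + fixed_noise)).mean()
    # avg_err is roughly single_err / sqrt(n_samples)
    ```

    This is the statistical motivation for the multi-instant sampling the abstract argues for; it says nothing about the optical implementation itself.
    
    
    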

    Neural NILM: Deep Neural Networks Applied to Energy Disaggregation

    Energy disaggregation estimates appliance-by-appliance electricity consumption from a single meter that measures the whole home's electricity demand. Recently, deep neural networks have driven remarkable improvements in classification performance in neighbouring machine learning fields such as image classification and automatic speech recognition. In this paper, we adapt three deep neural network architectures to energy disaggregation: 1) a form of recurrent neural network called `long short-term memory' (LSTM); 2) denoising autoencoders; and 3) a network which regresses the start time, end time and average power demand of each appliance activation. We use seven metrics to test the performance of these algorithms on real aggregate power data from five appliances. Tests are performed against a house not seen during training and against houses seen during training. We find that all three neural nets achieve better F1 scores (averaged over all five appliances) than either combinatorial optimisation or factorial hidden Markov models, and that our neural net algorithms generalise well to an unseen house. Comment: To appear in ACM BuildSys'15, November 4--5, 2015, Seoul
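    To make the disaggregation task concrete, here is a sketch of the combinatorial optimisation baseline the abstract compares against: at each time step, find the on/off combination of known appliance power levels whose sum best matches the aggregate reading. The appliance names and wattages are illustrative, not taken from the paper's dataset.

    ```python
    import itertools
    import numpy as np

    # Illustrative nominal power draws (watts) for three appliances.
    appliance_power = {"kettle": 2000, "fridge": 100, "microwave": 1200}
    names = list(appliance_power)
    levels = np.array([appliance_power[n] for n in names])

    def disaggregate(aggregate_watts):
        """Return the on/off state per appliance minimising the residual
        between the combination's total power and the aggregate reading."""
        best_states, best_err = None, float("inf")
        for states in itertools.product([0, 1], repeat=len(levels)):
            err = abs(aggregate_watts - np.dot(states, levels))
            if err < best_err:
                best_states, best_err = states, err
        return dict(zip(names, best_states))

    # A 2100 W aggregate reading is best explained by kettle + fridge.
    print(disaggregate(2100))  # {'kettle': 1, 'fridge': 1, 'microwave': 0}
    ```

    The exhaustive search is exponential in the number of appliances and treats each time step independently, which is part of why the paper's neural approaches (and factorial HMMs, which model temporal structure) can outperform it.
    
    
    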