4 research outputs found

    Hyperparameter tuning for deep learning in natural language processing

    Deep neural networks have advanced rapidly over the past several years. However, making efficient use of them still seems like a black art to many people. The reason for this complexity is that obtaining consistent, outstanding results from a deep architecture requires optimizing many parameters known as hyperparameters. Hyperparameter tuning is an essential task in deep learning and can produce significant changes in network performance. This paper distils over 3,000 GPU hours spent optimizing a network for a text classification task across a wide array of hyperparameters. We provide a list of hyperparameters to tune, along with each one's impact on network performance. The hope is that such a listing will give interested researchers a means to prioritize their efforts and to modify their deep architectures to obtain the best performance with the least effort.
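The kind of sweep the abstract describes can be sketched as a random hyperparameter search. This is a minimal, hypothetical illustration: the search space, parameter names, and scoring function below are assumptions for demonstration, not taken from the paper.

```python
import random

# Illustrative search space; the names and values are assumptions, not the
# hyperparameters the paper actually studied.
SEARCH_SPACE = {
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
    "batch_size": [16, 32, 64, 128],
    "dropout": [0.1, 0.3, 0.5],
    "hidden_units": [128, 256, 512],
}

def sample_config(space, rng):
    """Draw one configuration by sampling each hyperparameter independently."""
    return {name: rng.choice(values) for name, values in space.items()}

def random_search(space, evaluate, trials=20, seed=0):
    """Return the best (score, config) pair over `trials` random configurations."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        config = sample_config(space, rng)
        score = evaluate(config)
        if best is None or score > best[0]:
            best = (score, config)
    return best

# Toy stand-in for a real train-and-validate run; a genuine sweep would
# train the network and return a validation metric here.
def fake_evaluate(config):
    return -abs(config["learning_rate"] - 1e-3) - abs(config["dropout"] - 0.3)

best_score, best_config = random_search(SEARCH_SPACE, fake_evaluate)
```

In practice each `evaluate` call is a full training run, which is why such sweeps consume thousands of GPU hours and why a prioritized list of impactful hyperparameters is valuable.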

    Optimisation Method for Training Deep Neural Networks in Classification of Non-functional Requirements

    Non-functional requirements (NFRs) are regarded as critical to a software system's success. The majority of NFR detection and classification solutions have relied on supervised machine learning models, which are hindered by the lack of labelled training data and necessitate a significant amount of time spent on feature engineering. In this work we explore emerging deep learning techniques to reduce the burden of feature engineering. The goal of this study is to develop an autonomous system that can classify NFRs into multiple classes based on a labelled corpus. In the first section of the thesis, we standardise the NFR ontology and annotations to produce a corpus based on five attributes: usability, reliability, efficiency, maintainability, and portability. In the second section, we examine the design and implementation of four neural networks for classifying NFRs: the artificial neural network, convolutional neural network, long short-term memory, and gated recurrent unit. These models necessitate a large corpus. To overcome this limitation, we propose a new paradigm for data augmentation. This method uses a sort-and-concatenate strategy to combine two phrases from the same class, resulting in a two-fold increase in data size while keeping the domain vocabulary intact. We compared our method to a baseline (no augmentation) and to an existing approach, Easy Data Augmentation (EDA), with pre-trained word embeddings. All training was performed under two modifications to the data: augmentation of the entire dataset before the train/validation split versus augmentation of the train set only. Our findings show that, compared to EDA and the baseline, the NFR classification model improved greatly, and the CNN outperformed the others when trained with our suggested technique in the first setting. However, we saw only a slight boost in the second experimental setup, with train-set augmentation alone. As a result, we can determine that augmentation of the validation set is required in order to achieve acceptable results with our proposed approach. We hope that our ideas will inspire new data augmentation techniques, whether generic or task-specific. Furthermore, it would also be useful to apply this strategy to other languages.
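The sort-and-concatenate idea above can be sketched as follows. The abstract only states that two phrases from the same class are combined; the pairing policy used here (adjacent phrases after sorting) and the example sentences are assumptions for illustration.

```python
def sort_and_concatenate(samples):
    """samples: list of (text, label) pairs.

    Returns the original samples plus synthetic ones built by joining
    adjacent same-class phrases after sorting, roughly doubling the data
    while reusing only in-domain vocabulary. The adjacent-pair policy is
    an assumption; the thesis may pair phrases differently.
    """
    by_label = {}
    for text, label in samples:
        by_label.setdefault(label, []).append(text)

    augmented = list(samples)
    for label, texts in by_label.items():
        texts = sorted(texts)  # deterministic ordering before pairing
        for a, b in zip(texts, texts[1:]):  # combine adjacent pairs
            augmented.append((a + " " + b, label))
    return augmented

# Hypothetical NFR phrases, not taken from the corpus described above.
data = [
    ("the system shall respond quickly", "efficiency"),
    ("response time must stay under 2s", "efficiency"),
    ("menus should be easy to navigate", "usability"),
]
bigger = sort_and_concatenate(data)
```

Because the synthetic sentences are built only from existing phrases, no out-of-domain words are introduced, which is the "domain vocabulary intact" property the abstract highlights.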

    From texting to the norm: automatically translating SMS language (Du texto vers la norme : traduire automatiquement le langage SMS)

    New technologies such as the cellular phone have revolutionized our exchanges like never before. For users, these new communication channels represent an informal context conducive to exploring a recent form of writing that departs considerably from the academic norm: SMS language. Faced with the rise of this form of expression, various methods have been tested in the past to try to normalize SMS writing, that is, to convert it into standard French so that it can be used in downstream natural language processing tasks. Yet very few studies conducted on French adopt neural networks as a normalization solution. The present study therefore aims to produce a prototype program that automatically normalizes SMS language, using an encoder-decoder architecture built from long short-term memory (LSTM) neural networks. The neural architecture is trained and evaluated on the Belgian corpus of Fairon et al. (2006), testing both the word and the character as base units. Beyond the prototype itself, this study is above all an opportunity to explore the strengths and weaknesses of such a neural approach to SMS-language normalization. With an encouraging BLEU-4 score of nearly 0.5, given the limited size of the corpus, the word-based model outperforms the character-based one. Even so, the method produces a considerable number of errors, which we attribute largely to the modest size of the corpus, but also to the very nature of neural networks.
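The word-versus-character choice of base unit tested above changes what the encoder-decoder actually sees. A minimal sketch of the trade-off, with an invented SMS example (the tokenization helpers are illustrative, not the study's preprocessing):

```python
def word_tokens(sentence):
    """Split an SMS message into word units: short sequences, large vocabulary."""
    return sentence.split()

def char_tokens(sentence):
    """Split an SMS message into character units: long sequences, tiny vocabulary."""
    return list(sentence)

sms = "slt sa va"          # SMS shorthand for "salut ça va" (invented example)
words = word_tokens(sms)   # 3 tokens; every shorthand form needs its own entry
chars = char_tokens(sms)   # 9 tokens; spelling variants share the same alphabet
```

Character units let the model generalize across spelling variants at the cost of much longer sequences for the LSTM to remember, which is one reason the two base units can score so differently on a small corpus.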