
    Quaternion-based deep belief networks fine-tuning

    Deep learning techniques have been paramount in recent years, mainly due to their outstanding results in a number of applications. In this paper, we address the issue of fine-tuning the parameters of Deep Belief Networks by means of meta-heuristics in which real-valued decision variables are described by quaternions. Such approaches essentially perform optimization in fitness landscapes mapped to a different representation based on hypercomplex numbers, which may generate smoother surfaces. The optimization process can therefore be carried out in a new space representation that is better suited to learning the parameters. We also propose two approaches based on Harmony Search and quaternions that outperform the state-of-the-art results obtained so far on three public datasets for the reconstruction of binary images.
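
    A minimal sketch of the hypercomplex encoding described above, assuming the norm-based quaternion-to-real mapping commonly used in quaternion meta-heuristics; the squashing function, bounds, and variable name are illustrative assumptions, not the paper's exact formulation:

        import numpy as np

        # Hypothetical sketch: each real-valued decision variable (e.g., a DBN
        # learning rate) is represented by a quaternion q = (x0, x1, x2, x3),
        # and the meta-heuristic moves in quaternion space. A common route back
        # to a real value is the quaternion norm, rescaled to the variable bounds.

        def quaternion_to_real(q, lower, upper):
            """Map a quaternion to a bounded real value via its Euclidean norm."""
            norm = np.linalg.norm(q)            # |q| = sqrt(x0^2 + x1^2 + x2^2 + x3^2)
            unit = norm / (1.0 + norm)          # squash the unbounded norm into [0, 1)
            return lower + unit * (upper - lower)

        rng = np.random.default_rng(0)
        q = rng.normal(size=4)                  # one quaternion per decision variable
        eta = quaternion_to_real(q, 0.0, 1.0)   # e.g., a candidate DBN learning rate
        print(f"candidate learning rate: {eta:.4f}")

    The intuition is that perturbing the four quaternion components changes the decoded real value more gradually than perturbing the real variable directly, which is what the abstract means by a smoother fitness surface.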

    Handling dropout probability estimation in convolution neural networks using meta-heuristics

    Deep learning-based approaches have been paramount in recent years, mainly due to their outstanding results in several application domains, ranging from face and object recognition to handwritten digit identification. Convolutional Neural Networks (CNNs) have attracted considerable attention since they model the intrinsic and complex working mechanisms of the brain. However, one main shortcoming of such models is overfitting, which prevents the network from predicting unseen data effectively. In this paper, we address this problem by properly selecting a regularization parameter known as Dropout in the context of CNNs using meta-heuristic-driven techniques. As far as we know, this is the first attempt to tackle the issue with this methodology. Additionally, we take into account a default dropout parameter and a dropout-less CNN for comparison purposes. The results revealed that optimizing Dropout-based CNNs is worthwhile, mainly due to the ease of finding suitable dropout probability values without the need to set new parameters empirically.
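
    A sketch of one plausible instantiation of the idea above, using Harmony Search to select a single dropout probability; the fitness function here is a toy surrogate so the sketch runs on its own, whereas in practice it would train the CNN with the candidate probability and return its validation error, and all Harmony Search parameters below are illustrative choices:

        import numpy as np

        rng = np.random.default_rng(42)

        def fitness(p):
            """Stand-in for the real objective: train a CNN with dropout
            probability p and return its validation error. A noisy toy
            surrogate with a minimum near p = 0.5 keeps the sketch runnable."""
            return (p - 0.5) ** 2 + 0.01 * rng.standard_normal()

        # Harmony Search over one decision variable p in [0, 1].
        HMS, HMCR, PAR, BW, ITERS = 10, 0.9, 0.3, 0.05, 200
        memory = rng.uniform(0.0, 1.0, HMS)       # harmony memory of candidates
        scores = np.array([fitness(p) for p in memory])

        for _ in range(ITERS):
            if rng.random() < HMCR:               # reuse a stored harmony ...
                p = rng.choice(memory)
                if rng.random() < PAR:            # ... with optional pitch adjustment
                    p += rng.uniform(-BW, BW)
            else:                                 # or improvise a fresh value
                p = rng.uniform(0.0, 1.0)
            p = float(np.clip(p, 0.0, 1.0))
            s = fitness(p)
            worst = int(np.argmax(scores))        # replace the worst harmony if better
            if s < scores[worst]:
                memory[worst], scores[worst] = p, s

        best = memory[int(np.argmin(scores))]
        print(f"selected dropout probability: {best:.3f}")

    Because the search only ever queries the objective as a black box, the same loop works whether the fitness is this toy surrogate or a full CNN training run.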

    How meta-heuristic algorithms contribute to deep learning in the hype of big data analytics

    Deep learning (DL) is one of the most prominent emerging branches of contemporary machine learning; it mimics the cognitive patterns of the animal visual cortex to learn new abstract features automatically through deep, hierarchical layers. DL is believed to be a suitable tool for extracting insights from the very large volumes of so-called big data. Nevertheless, one of the three “V”s of big data is velocity, which implies that learning must be incremental as data accumulate rapidly; DL must therefore be both fast and accurate. By technical design, DL extends the feed-forward artificial neural network with many hidden layers of neurons, yielding a deep neural network (DNN). Training a DNN suffers from a certain inefficiency because of the very long training times required. Obtaining the most accurate DNN within a reasonable run-time is a challenge, given the potentially many parameters in the DNN model configuration and the high dimensionality of the feature space in the training dataset. Meta-heuristics have a history of successfully optimizing machine learning models. How well meta-heuristics can be used to optimize DL in the context of big data analytics is the thematic question we ponder in this paper. As a position paper, we review recent advances in applying meta-heuristics to DL, discuss their pros and cons, and point out feasible research directions for bridging the gaps between meta-heuristics and DL.
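
    To make that bridge concrete, one common pattern is to flatten a DNN configuration into a single real vector that any population-based meta-heuristic can operate on; the hyper-parameter names and bounds below are illustrative assumptions, not taken from the paper:

        import numpy as np

        # Hypothetical search space: each hyper-parameter gets a bounded slot
        # in a vector x in [0, 1]^d, which a meta-heuristic perturbs directly.
        BOUNDS = {
            "log10_learning_rate": (-4.0, -1.0),
            "num_hidden_layers":   (1, 5),
            "units_per_layer":     (32, 512),
            "dropout_probability": (0.0, 0.8),
        }

        def decode(x):
            """Map a vector in [0, 1]^d to a concrete DNN configuration."""
            cfg = {}
            for xi, (name, (lo, hi)) in zip(x, BOUNDS.items()):
                val = lo + xi * (hi - lo)
                cfg[name] = int(round(val)) if isinstance(lo, int) else val
            return cfg

        x = np.random.default_rng(1).uniform(size=len(BOUNDS))
        print(decode(x))   # one candidate configuration for the fitness function

    The fitness of a candidate vector would then be the validation performance of the decoded network, which is exactly where the long DNN training times mentioned above dominate the cost of the search.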