3 research outputs found

    Feature Reduction for Molecular Similarity Searching Based on Autoencoder Deep Learning

    The concept of molecular similarity has been commonly used in rational drug design, where structurally similar molecules are examined in molecular databases to retrieve functionally similar molecules. The most common conventional similarity methods use two-dimensional (2D) fingerprints to evaluate the similarity of molecules to a target query. However, these descriptors include redundant and irrelevant features that can degrade the performance of similarity searching. This study therefore proposed a new approach for identifying the important features of molecules in chemical datasets, based on a representation of the molecular features learned by an Autoencoder (AE), with the aim of removing irrelevant and redundant features. The proposed approach was evaluated on the MDL Drug Data Report (MDDR) standard dataset. The experimental findings showed that the proposed approach performed better than several existing benchmark similarity methods, such as the Tanimoto Similarity Method (TAN), the Adapted Similarity Measure of Text Processing (ASMTP), and the Quantum-Based Similarity Method (SQB). The improvement was most pronounced on structurally heterogeneous datasets, where the proposed approach yielded better results than methods previously used to improve molecular similarity searching.
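    The abstract gives no implementation details, but the two ingredients it names are standard. Below is a minimal PyTorch sketch (not the authors' code) of an autoencoder that compresses binary 2D fingerprints into a lower-dimensional representation, together with the Tanimoto coefficient used by the TAN baseline. The fingerprint length (1024 bits) and the layer sizes are illustrative assumptions.

        # Illustrative sketch only: fingerprint length and layer sizes are assumptions.
        import torch
        import torch.nn as nn

        class FingerprintAE(nn.Module):
            """Autoencoder that maps binary fingerprints to a compact latent code."""
            def __init__(self, n_bits=1024, n_latent=128):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Linear(n_bits, 512), nn.ReLU(), nn.Linear(512, n_latent))
                self.decoder = nn.Sequential(
                    nn.Linear(n_latent, 512), nn.ReLU(),
                    nn.Linear(512, n_bits), nn.Sigmoid())

            def forward(self, x):
                z = self.encoder(x)
                return self.decoder(z), z

        def train(model, fps, epochs=50, lr=1e-3):
            """Fit by reconstruction; fps is an (n_molecules, n_bits) 0/1 float tensor."""
            opt = torch.optim.Adam(model.parameters(), lr=lr)
            loss_fn = nn.BCELoss()  # reconstruction loss for binary fingerprints
            for _ in range(epochs):
                recon, _ = model(fps)
                loss = loss_fn(recon, fps)
                opt.zero_grad()
                loss.backward()
                opt.step()

        def tanimoto(a, b):
            """Tanimoto coefficient between two binary fingerprint vectors."""
            inter = (a * b).sum()
            return inter / (a.sum() + b.sum() - inter)

    After training, database molecules can be ranked against a query either by Tanimoto on the raw bits (the TAN baseline) or by a distance in the learned latent space, which is where the reduced, less redundant feature set would come into play.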

    Molecular similarity searching based on deep learning for feature reduction

    The concept of molecular similarity has been widely used in rational drug design, where structurally similar molecules are explored in molecular databases to retrieve functionally similar molecules. The most common conventional similarity methods rely on two-dimensional (2D) fingerprints to evaluate the similarity of molecules to a target query. However, these descriptors include redundant and irrelevant features that can reduce the effectiveness of similarity searching. Moreover, most existing similarity searching methods disregard the importance of some features over others and assume that all features are equally important. This study therefore proposed three approaches for identifying the important features of molecules in chemical datasets. The first approach represented the molecular features with an Autoencoder (AE), which removes irrelevant and redundant features. The second approach was a feature selection model based on Deep Belief Networks (DBN), in which the DBN finds the subset of features that represents the important ones. The third approach combined descriptors that complement each other: important features from several descriptors were filtered through the DBN and merged into a new descriptor used for molecular similarity searching. The proposed approaches were evaluated on the MDL Drug Data Report (MDDR) standard dataset. The test results showed that the three proposed approaches outperformed several existing benchmark similarity methods, such as Bayesian Inference Networks (BIN), the Tanimoto Similarity Method (TAN), the Adapted Similarity Measure of Text Processing (ASMTP), and the Quantum-Based Similarity Method (SQB). In terms of average recall, the three proposed approaches performed better, especially on structurally heterogeneous datasets, where they produced better results than methods previously used to improve molecular similarity searching.
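    The abstract does not specify how the DBN scores features, so the sketch below makes one plausible, clearly labeled assumption: train a single RBM layer (the building block of a DBN) with one-step contrastive divergence, then keep the fingerprint bits whose connections carry the largest total weight magnitude. The layer sizes, training schedule, and the selection rule itself are illustrative, not taken from the study.

        # Illustrative sketch only; the selection rule and all sizes are assumptions.
        import numpy as np

        rng = np.random.default_rng(0)

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        def train_rbm(v, n_hidden=256, epochs=20, lr=0.05):
            """One-step contrastive divergence (CD-1) on binary data v: (n, n_visible)."""
            n_visible = v.shape[1]
            W = 0.01 * rng.standard_normal((n_visible, n_hidden))
            b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
            for _ in range(epochs):
                h = sigmoid(v @ W + b_h)                       # positive phase
                h_s = (rng.random(h.shape) < h).astype(float)  # sample hidden units
                v_r = sigmoid(h_s @ W.T + b_v)                 # reconstruction
                h_r = sigmoid(v_r @ W + b_h)                   # negative phase
                W += lr * (v.T @ h - v_r.T @ h_r) / len(v)
                b_v += lr * (v - v_r).mean(axis=0)
                b_h += lr * (h - h_r).mean(axis=0)
            return W

        def select_features(fingerprints, k=512):
            """Keep the k bits whose RBM connections have the largest total magnitude."""
            W = train_rbm(fingerprints)
            scores = np.abs(W).sum(axis=1)
            return np.argsort(scores)[-k:]

    The third approach described above would then run this kind of selection separately on several descriptors and concatenate the surviving features into one combined descriptor.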

    Unsupervised relational feature learning for vision

    This thesis contributes to the field of machine learning, with a specific focus on methods for learning relations between inputs. Learning relationships between images is a common primitive in vision: many vision tasks, such as motion estimation, activity recognition, stereo vision, multi-view geometry, and visual odometry, rely on motion and disparity cues that are inferred from the relations across image pairs. The approaches presented in this thesis mainly deal with, but are not limited to, learning representations for motion and depth. This thesis by articles consists of five articles that present relational feature learning models along with their applications in computer vision.

    In the first article, we present an approach for encoding motion in videos. We show that detecting a spatial transformation can be viewed as detecting coincidence, or synchrony, between the given sequence of frames and a sequence of features related by the transformation we wish to detect. Learning to detect synchrony becomes possible by introducing "multiplicative interactions" into the hidden units of single-layer sparse coding models. The learned motion representations achieve competitive performance on multiple activity recognition benchmarks.

    Stereo vision is an important challenge in computer vision and is useful for many applications in the field. In the second article, we extend the energy-based learning models previously used for motion encoding to depth perception. Given the common architecture of the models for encoding motion and depth, we show that a single model can learn a unified representation for both cues. Our experimental results show that a combined representation for depth and motion achieves state-of-the-art performance on 3-D activity analysis and outperforms existing hand-engineered 3-D motion features.

    The autoencoder is a popular unsupervised method for learning efficient encodings of a given set of data samples. Regularized autoencoders, which are typically used to learn over-complete and sparse representations of the input, were shown to fail on intrinsically high-dimensional data such as videos. In the third article, we investigate the reason for this behavior. Regularized autoencoders typically learn negative hidden unit biases, and we show that this happens because the hidden units are responsible for both the sparsity and the representation of the input. As a result, the model behaves like a clustering method, which would require an exponentially large number of features to model intrinsically high-dimensional data. Based on this understanding, we propose a new activation function that decouples these roles and uses a linear encoding, which makes it possible to learn representations on data with very high intrinsic dimensionality. We also show that the gating connections in the bi-linear models and the single-layer models of the first two articles can be viewed as a way to attain a linear encoding scheme, which is what allows them to learn good representations on videos.
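    To make the "multiplicative interactions" idea from the first article concrete, the following sketch (illustrative, not the thesis code) shows a factored gated encoder: mapping units pool over products of factor projections of two frames, so they respond to the transformation relating the frames rather than to the frames' content. All names and sizes are assumptions.

        # Illustrative sketch of multiplicative interactions; names and sizes
        # are assumptions, not taken from the thesis.
        import numpy as np

        rng = np.random.default_rng(0)

        class GatedEncoder:
            def __init__(self, n_pixels, n_factors=128, n_mappings=64):
                self.U = 0.01 * rng.standard_normal((n_pixels, n_factors))    # filters on frame x
                self.V = 0.01 * rng.standard_normal((n_pixels, n_factors))    # filters on frame y
                self.W = 0.01 * rng.standard_normal((n_factors, n_mappings))  # pooling weights

            def mappings(self, x, y):
                """Encode the relation between frames x and y, not their content."""
                fx = x @ self.U       # factor responses for frame x
                fy = y @ self.V       # factor responses for frame y
                products = fx * fy    # multiplicative interaction
                return 1.0 / (1.0 + np.exp(-(products @ self.W)))

    When the two frames are related by a transformation the paired filters are tuned to, the matching factor responses fire together and their products grow large, which is the sense in which these units detect synchrony.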
    Visual odometry is the task of inferring the egomotion of a moving object from visual information such as images and videos. It is primarily used for localization and has many applications in robotics and navigation. The work in the fourth article was motivated by the idea of applying deep learning techniques, which have been successful for many vision tasks, to visual odometry. The task mainly requires inferring motion and depth from visual input, which can then be mapped to velocity and change in direction. We use the relational feature models of the first two articles to infer a combined motion and depth representation from stereo video sequences; the combined representation is then mapped to discrete velocity and change-in-direction labels by convolutional neural networks. The result is an end-to-end deep learning architecture that uses a single type of computational model and learning rule. Preliminary results show that the architecture is capable of learning the mapping from input video to egomotion.

    Activity recognition is a challenging computer vision task with many real-world applications, and it is well known that putting computer vision research into real-time applications is hard. In the fifth article, we present a real-time activity recognition system based on deep learning. Our approach uses energy-based relational feature learning models to compute local motion features directly from videos, and a bag-of-words over these local motion features to analyze the activity in a given video sequence. We implement the system on a distributed computational platform and demonstrate it on the iCub robot. Using GPUs, we achieve real-time performance, which makes the deployment of activity recognition systems in real-world scenarios possible.
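    As a rough illustration of the fifth article's recognition pipeline, the sketch below turns local motion features into a bag-of-words descriptor; the relational feature extraction itself is abstracted away, and the codebook is assumed to have been learned beforehand (e.g., with k-means). Nothing here is taken from the thesis code.

        # Illustrative sketch: quantize local motion features against a codebook
        # and histogram the assignments; the normalized histogram is the video
        # descriptor fed to a classifier. All shapes are assumptions.
        import numpy as np

        def bag_of_words(local_features, codebook):
            """local_features: (n_blocks, d) motion features from one video;
            codebook: (n_words, d) cluster centers learned offline."""
            dists = ((local_features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            words = dists.argmin(axis=1)  # nearest codeword per block
            hist = np.bincount(words, minlength=len(codebook)).astype(float)
            return hist / hist.sum()      # normalized descriptor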