
What is hidden in the darkness? Deep-learning assisted large-scale protein family curation uncovers novel protein families and folds

Driven by the development and upscaling of fast genome sequencing and assembly pipelines, the number of protein-coding sequences deposited in public protein sequence databases is increasing exponentially. Recently, the dramatic success of deep-learning-based approaches applied to protein structure prediction has done the same for protein structures. We are now entering a new era in protein sequence and structure annotation, with hundreds of millions of predicted protein structures made available through the AlphaFold database. These models cover most of the catalogued natural proteins, including those that are difficult to annotate for function or putative biological role using standard, homology-based approaches. In this work, we quantified how much of this "dark matter" of the natural protein universe was structurally illuminated by AlphaFold2 and modelled this diversity as an interactive sequence similarity network that can be navigated at https://uniprot3d.org/atlas/AFDB90v4. In the process, we discovered multiple novel protein families by searching for novelties from sequence, structure, and semantic perspectives. We added a number of them to Pfam and experimentally demonstrated that one of them belongs to a novel superfamily of toxin-antitoxin systems, TumE-TumA. This work highlights the role of large-scale, evolution-driven protein comparison efforts, in combination with structural similarities, genomic context conservation, and deep-learning-based function prediction tools, in the identification of novel protein families, aiding not only annotation and classification efforts but also the curation and prioritisation of target proteins for experimental characterisation.
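The abstract above models protein diversity as a sequence similarity network. As a rough illustration of the underlying idea only (the abstract does not specify the authors' actual tooling), the following Python sketch builds such a network from all-vs-all pairwise identity scores and treats connected components as candidate families; the protein identifiers, scores, and the 30% identity threshold are invented placeholders, and in practice the scores would come from an all-vs-all search tool such as BLAST or MMseqs2.

```python
# Minimal sketch of a sequence similarity network: proteins are nodes,
# and an edge is drawn whenever pairwise sequence identity exceeds a
# chosen threshold. Connected components approximate putative families.
import networkx as nx

# (query, target, percent identity) -- hypothetical example values
pairwise_hits = [
    ("P1", "P2", 62.0),
    ("P2", "P3", 48.5),
    ("P4", "P5", 71.3),
    ("P1", "P5", 18.0),  # below threshold, will be ignored
]

IDENTITY_THRESHOLD = 30.0  # assumed cut-off for drawing an edge

graph = nx.Graph()
for query, target, identity in pairwise_hits:
    graph.add_node(query)
    graph.add_node(target)
    if identity >= IDENTITY_THRESHOLD:
        graph.add_edge(query, target, weight=identity)

# Each connected component is a candidate protein family.
for i, component in enumerate(nx.connected_components(graph), start=1):
    print(f"component {i}: {sorted(component)}")
```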

Design and Development of Robotic Part Assembly System under Vision Guidance

Robots are widely used for part assembly across manufacturing industries to attain high productivity through automation, and automated mechanical part assembly contributes a major share of the production process. An appropriate vision-guided robotic assembly system further minimizes lead time and improves the quality of the end product through suitable object detection methods and robot control strategies. This work develops a robotic part assembly system with the aid of an industrial vision system, in three phases. The first phase focuses on feature extraction and object detection techniques. A hybrid edge detection method is developed by combining fuzzy inference rules with the wavelet transform. The performance of this edge detector is quantitatively analysed and compared with widely used edge detectors such as Canny, Sobel, Prewitt, Roberts, Laplacian of Gaussian, mathematical-morphology-based, and wavelet-transform-based detectors. A comparative study is also performed to choose a suitable corner detection method; the corner detectors considered are the curvature scale space, Wang-Brady, and Harris methods. The successful implementation of a vision-guided robotic system depends on the system configuration, such as eye-in-hand or eye-to-hand. In either configuration, the captured images of the parts may be corrupted by geometric transformations such as scaling, rotation, and translation, or by blurring due to camera or robot motion. To address this, an image reconstruction method is proposed that uses orthogonal Zernike moment invariants and selects the moment order used to reconstruct the affected image, making the object detection method more robust. In the second phase, the proposed system is developed by integrating the vision system and the robot system, and the proposed feature extraction and object detection methods are tested and found effective for the purpose. In the third phase, robot navigation based on visual feedback is proposed. The control scheme uses general moment invariants, Legendre moments, and Zernike moment invariants; Legendre and Zernike moments are determined by an indirect method and are used because they are robust to noise. The best combination of visual features is selected by measuring the Hamming distance between all possible combinations of visual features, which identifies the combination that makes image-based visual servoing control efficient. The control laws based on these three global image features perform efficiently in navigating the robot in the desired environment.
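For context on the baseline comparison described in the first phase, here is a minimal sketch using OpenCV that runs two of the listed baseline detectors (Canny and Sobel) on a synthetic part image. The thesis's hybrid fuzzy/wavelet detector is not reproduced here, and the threshold values and the synthetic image are assumptions for illustration only.

```python
# Run two baseline edge detectors (Canny, Sobel) on a synthetic image
# and compare them crudely by counting detected edge pixels.
import cv2
import numpy as np

# Synthetic "part" image: a bright rectangle on a dark background.
image = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(image, (50, 50), (150, 150), 255, thickness=-1)
image = cv2.GaussianBlur(image, (5, 5), 0)  # simulate mild motion blur

# Canny edge detection (hysteresis thresholds are assumed values).
canny_edges = cv2.Canny(image, 50, 150)

# Sobel gradient magnitude, thresholded to a binary edge map.
gx = cv2.Sobel(image, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(image, cv2.CV_64F, 0, 1, ksize=3)
magnitude = cv2.magnitude(gx, gy)
sobel_edges = (magnitude > 0.5 * magnitude.max()).astype(np.uint8) * 255

# A crude quantitative comparison: count of detected edge pixels.
print("Canny edge pixels:", int(np.count_nonzero(canny_edges)))
print("Sobel edge pixels:", int(np.count_nonzero(sobel_edges)))
```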
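Similarly, the Harris method named among the compared corner detectors can be sketched with OpenCV's cv2.cornerHarris. The block size, aperture size, Harris parameter k, and response threshold below are assumed values, not those used in the study.

```python
# Harris corner detection on the same kind of synthetic part image:
# strong corner responses should appear at the rectangle's corners.
import cv2
import numpy as np

image = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(image, (50, 50), (150, 150), 255, thickness=-1)

# cornerHarris expects a float32 single-channel image.
response = cv2.cornerHarris(np.float32(image), blockSize=2, ksize=3, k=0.04)

# Keep only strong responses; each surviving pixel is a corner candidate.
corners = np.argwhere(response > 0.01 * response.max())
print("corner candidates (row, col):")
for row, col in corners[:10]:  # print at most the first ten
    print(int(row), int(col))
```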