    An entropy based heuristic model for predicting functional sub-type divisions of protein families

    Multiple sequence alignments of protein families are often used for locating residues that are widely separated in the sequence and are considered influential in determining the functional specificity of proteins towards various substrates, ligands, DNA and other proteins. In this paper, we propose an entropy-score-based heuristic algorithm for predicting functional sub-family divisions of protein families, given only the multiple sequence alignment of the protein family as input, without any functional sub-type or key-site information for any protein sequence. Two of the experimented test-cases are reported in this paper. The first test-case is the Nucleotidyl Cyclase protein family, consisting of guanylate and adenylate cyclases. The second is a dataset of proteins taken from six superfamilies in the Structure-Function Linkage Database (SFLD). Results from these test-cases are reported in terms of sub-type divisions confirmed by phylogenetic relations from former studies in the literature.
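
    As a rough illustration of the entropy scoring that drives such a method, the Python sketch below computes per-column Shannon entropy over a multiple sequence alignment; low-entropy columns are conserved, while high-entropy columns vary and may discriminate sub-types. This is a minimal sketch, not the paper's algorithm; the function names and the toy alignment are our own.

        import math
        from collections import Counter

        def column_entropy(column):
            """Shannon entropy of one alignment column (gaps count as a symbol)."""
            counts = Counter(column)
            total = sum(counts.values())
            return -sum((n / total) * math.log2(n / total) for n in counts.values())

        def msa_entropy_profile(alignment):
            """Per-column entropy profile of an MSA given as equal-length strings."""
            return [column_entropy(col) for col in zip(*alignment)]

        # Toy alignment: columns 1-2 are fully conserved, columns 3 and 5 vary.
        msa = ["MKVLA", "MKILS", "MKVLT", "MKVLA"]
        print([round(h, 2) for h in msa_entropy_profile(msa)])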

    Optimization of morphological data in numerical taxonomy analysis using genetic algorithms feature selection method

    Studies in numerical taxonomy are carried out by measuring as many characters as possible. The workload on scientists and technicians performing the measurements increases proportionally with the number of variables (or characters) used in the study. However, part of the data may be irrelevant or even meaningless. In this study, we introduce an algorithm to obtain a subset of the data with the minimum number of characters that can still represent the original data. Morphological characters were optimized using the Genetic Algorithms Feature Selection method. The analyses were performed on an 18-character × 11-taxa data matrix with standardized continuous characters, and resulted in a minimum set of 2 characters, meaning the original tree based on the complete data can also be constructed from those two characters.
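
    The abstract does not give the GA's encoding or fitness function, so the following Python sketch shows only one plausible setup: character subsets are encoded as bit masks, and fitness rewards a subset whose pairwise-distance matrix correlates with the full-data one (a Mantel-style stand-in criterion of our own choosing) while penalizing subset size.

        import numpy as np

        rng = np.random.default_rng(0)

        def pairwise_dist(X):
            """Euclidean distance matrix between taxa (rows of X)."""
            d = X[:, None, :] - X[None, :, :]
            return np.sqrt((d ** 2).sum(-1))

        def fitness(mask, X, D_full):
            """Correlation of subset vs. full distance matrices, minus a size penalty."""
            if mask.sum() == 0:
                return -np.inf
            D_sub = pairwise_dist(X[:, mask.astype(bool)])
            iu = np.triu_indices_from(D_full, k=1)
            r = np.corrcoef(D_full[iu], D_sub[iu])[0, 1]
            return r - 0.01 * mask.sum()        # small cost per retained character

        def ga_select(X, gens=200, pop=40, mut_rate=0.05):
            n_char = X.shape[1]
            D_full = pairwise_dist(X)
            P = rng.integers(0, 2, size=(pop, n_char))
            for _ in range(gens):
                scores = np.array([fitness(m, X, D_full) for m in P])
                elite = P[np.argsort(scores)[::-1][: pop // 2]]       # keep the best half
                pa = elite[rng.integers(0, len(elite), pop - len(elite))]
                pb = elite[rng.integers(0, len(elite), pop - len(elite))]
                kids = np.where(rng.integers(0, 2, pa.shape) == 1, pa, pb)  # uniform crossover
                flips = rng.random(kids.shape) < mut_rate                   # bit-flip mutation
                P = np.vstack([elite, np.where(flips, 1 - kids, kids)])
            scores = np.array([fitness(m, X, D_full) for m in P])
            return P[scores.argmax()].astype(bool)

        # Toy stand-in for the standardized 18-character x 11-taxa matrix.
        X = rng.normal(size=(11, 18))
        print("selected characters:", np.flatnonzero(ga_select(X)))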

    Testing robustness of the relative complexity measure method: constructing robust phylogenetic trees for Galanthus L. using the relative complexity measure

    Background: Most phylogeny analysis methods based on molecular sequences use multiple sequence alignment (MSA), where the quality of the alignment, which depends on the alignment parameters, determines the accuracy of the resulting trees. Different parameter combinations chosen for the multiple alignment may result in different phylogenies. A new non-alignment-based approach, the Relative Complexity Measure (RCM), has been introduced to tackle this problem and proven to work on fungi and mitochondrial DNA. Results: In this work, we present an application of the RCM method to reconstruct robust phylogenetic trees using sequence data for the genus Galanthus obtained from different regions in Turkey. Phylogenies were analyzed using nuclear and chloroplast DNA sequences. Results showed that the tree obtained from nuclear ribosomal RNA gene sequences was more robust, while the tree obtained from the chloroplast DNA showed a higher degree of variation. Conclusions: Phylogenies generated by the Relative Complexity Measure were found to be robust, and the results of RCM were more reliable than those of the compared techniques. In particular, RCM seems to be a reasonable way to overcome MSA-based problems and a good alternative to MSA-based phylogenetic analysis. We believe our method will become a mainstream phylogeny construction method, especially for highly variable sequence families where the accuracy of the MSA depends heavily on the alignment parameters.
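
    RCM is a compression-style, alignment-free distance. The Python sketch below follows the general Lempel-Ziv flavor of such measures: sequences that share structure add few new phrases when concatenated. The phrase-counting rule and the particular distance formula here are common textbook choices, not necessarily the exact variant used in the paper.

        def lz_complexity(s):
            """Count phrases in an LZ76-style parsing: each new phrase is the
            shortest substring starting at i not seen in the preceding text."""
            i, c, n = 0, 0, len(s)
            while i < n:
                l = 1
                while i + l <= n and s[i:i + l] in s[:i + l - 1]:
                    l += 1
                c += 1
                i += l
            return c

        def rcm_distance(s, q):
            """How little each sequence adds to the other, normalized by complexity."""
            cs, cq = lz_complexity(s), lz_complexity(q)
            return max(lz_complexity(s + q) - cs, lz_complexity(q + s) - cq) / max(cs, cq)

        # Toy sequences: A and B differ by one base per repeat unit, C is unrelated.
        seqs = {"A": "ATGGCGTGCA" * 5, "B": "ATGGCGTCCA" * 5, "C": "CTTAAGGCTG" * 5}
        for x in seqs:
            print(x, [round(rcm_distance(seqs[x], seqs[y]), 2) for y in seqs])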

    Challenges in Curating 2D Multimedia Data in the Application of Machine Learning in Biodiversity Image Analysis

    Over 1 billion biodiversity collection specimens, ranging from fungi to fish to fossils, are housed in more than 1,600 natural history collections across the United States. The digitization of these specimens has risen significantly within the last few decades and is only likely to increase, as the use of digitized data gains more importance every day. Numerous experiments with automated image analysis have proven the practicality and usefulness of digitized biodiversity images with computational techniques such as neural networks and image processing. However, most computational techniques for analyzing images of biodiversity collection specimens require good curation of the data. One of the challenges in curating multimedia data of biodiversity collection specimens is the quality of the multimedia objects; in our case, two-dimensional images.

    To tackle the image quality problem, multimedia needs to be captured in a specific format and presented with appropriate descriptors. In this study we present an analysis of two image repositories, each consisting of 2D images of fish specimens from several institutions: the Integrated Digitized Biocollections (iDigBio) and the Great Lakes Invasives Network (GLIN). Approximately 70 thousand images from the GLIN repository and 450 thousand images from the iDigBio repository were processed and assessed for suitability in neural-network-based species identification and trait extraction applications. Our findings showed that images from the GLIN dataset were more suitable for image processing and machine learning purposes. Almost 40% of the species were represented by fewer than 10 images, while only 20% had more than 100 images per species.

    We identified and captured 20 metadata descriptors that define the quality and usability of an image. According to the captured metadata, 70% of the GLIN dataset images were found to be useful for further analysis based on the overall image quality score. Quality issues with the remaining images included curved specimens; non-fish objects such as tags, labels and rocks that obstructed the view of the specimen; color, focus and brightness problems; and folded, overlapping or missing parts.

    We used both the web interface and the API (Application Programming Interface) for downloading images from iDigBio. We searched for all fish genera, families and classes in three separate searches with the images-only option selected, then combined the search results and removed duplicates. Our search of the iDigBio database for fish taxa returned approximately 450 thousand records with images. Aided by the multimedia metadata included with the downloaded search results, we narrowed this down to 90 thousand fish images, excluding non-fish images, fossil samples, X-ray and CT (computed tomography) scans and several others. Only 44% of these 90 thousand images were found to be suitable for further analysis.

    In this study, we discovered some of the limitations of biodiversity image datasets and built an infrastructure for assessing the quality of biodiversity images for neural network analysis. Our experience with the fish images gathered from two different image repositories has enabled us to describe image quality metadata features. With the help of these metadata descriptors, one can simply create a dataset of the desired image quality for the purpose of analysis. Likewise, the availability of the metadata descriptors will help advance our understanding of quality issues, while helping data technicians, curators and other digitization staff become more aware of multimedia quality.
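
    For the API route, a request like the following Python sketch retrieves one page of image-bearing records from iDigBio's public v2 search API. The record query (rq) syntax and endpoint are iDigBio's; the taxon and page size shown are placeholders, not the queries used in the study.

        import requests

        params = {
            "rq": '{"class": "Actinopterygii", "hasImage": true}',  # placeholder taxon
            "limit": 100,                                           # placeholder page size
        }
        resp = requests.get("https://search.idigbio.org/v2/search/records",
                            params=params, timeout=60)
        resp.raise_for_status()
        for item in resp.json()["items"]:
            # Each item carries the raw record ("data") plus indexed terms.
            print(item["uuid"], item["data"].get("dwc:scientificName", "unknown"))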

    Application of AI-Helped Image Classification of Fish Images: An iDigBio dataset example

    Artificial Intelligence (AI) is becoming more prevalent in data science as well as in areas of computational science. Commonly used classification methods in AI can also be applied to unorganized databases if a proper model is trained. Most classification work is done on image data for purposes such as object detection and face recognition. If an object is well detected in an image, classification can be used to organize the image data. In this work, we try to identify images from an Integrated Digitized Biocollections (iDigBio) dataset and to classify these images to generate metadata to use as an AI-ready dataset in the future. The main problems with museum image datasets are the lack of metadata information on images, wrong categorization, and poor image quality. By using AI, it may be possible to overcome these problems: automated tools can help find, eliminate or fix them. For our example, we trained a model for 10 classes (e.g., complete fish, photograph, notes/labels, X-ray, CT (computerized tomography) scan, partial fish, fossil, skeleton) by using a manually tagged iDigBio image dataset. After training a model for each class, we reclassified the dataset using these trained models. Some of the results are given in Table 1. As can be seen in the table, even manually classified images can be identified as different classes, and some classes are visually very similar to each other, such as CT scans and X-rays, or fossils and skeletons. These kinds of similarities are confusing for the human eye as well as for AI.
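
    The abstract does not specify the architecture, so the Python sketch below assumes a common transfer-learning setup in PyTorch: one binary (one-vs-rest) model per class, with reclassification by the most confident model. The class names follow the abstract; everything else is illustrative.

        import torch
        import torch.nn as nn
        from torchvision import models

        def make_binary_classifier():
            """One-vs-rest model for a single class (e.g., 'complete fish' vs. rest)."""
            net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
            for p in net.parameters():                  # freeze the pretrained backbone
                p.requires_grad = False
            net.fc = nn.Linear(net.fc.in_features, 2)   # trainable 2-way head
            return net

        classes = ["complete fish", "photograph", "notes/labels", "x-ray",
                   "ct scan", "partial fish", "fossil", "skeleton"]
        models_by_class = {c: make_binary_classifier() for c in classes}

        def reclassify(image_tensor):
            """Score an image with every per-class model; keep the most confident."""
            scores = {}
            for c, net in models_by_class.items():
                net.eval()
                with torch.no_grad():
                    probs = torch.softmax(net(image_tensor.unsqueeze(0)), dim=1)
                    scores[c] = probs[0, 1].item()      # positive-class probability
            return max(scores, key=scores.get)

        # Example: a dummy 224x224 RGB tensor stands in for a preprocessed image.
        print(reclassify(torch.randn(3, 224, 224)))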

    On Image Quality Metadata, FAIR in ML, AI-Readiness and Reproducibility: Fish-AIR example

    A new science discipline has emerged within the last decade at the intersection of informatics, computer science and biology: Imageomics. Like most other -omics fields, Imageomics also uses emerging technologies to analyze biological data, but from images. One of the most widely applied data analysis methods for image datasets is Machine Learning (ML). In 2019, we started working on a United States National Science Foundation (NSF) funded project known as Biology Guided Neural Networks (BGNN), with the purpose of extracting information about biology by using neural networks and biological guidance such as species descriptions, identifications, phylogenetic trees and morphological annotations (Bart et al. 2021). Even though the variety and abundance of biological data are satisfactory for some ML analyses and the data are openly accessible, researchers still spend up to 80% of their time preparing data into a usable, AI-ready format, leaving only 20% for exploration and modeling (Long and Romanoff 2023). For this reason, we have built a dataset composed of digitized fish specimens, taken either directly from collections or from specialized repositories. The range of digital representations we cover is broad and growing, from photographs and radiographs to CT scans and even illustrations.

    We have added new groups of vocabularies to the dataset management system, including image quality metadata, extended image metadata and batch metadata. With the image quality metadata and extended image metadata, we aim to extract information from the digital objects that can help ML scientists in their research with filtering, image processing and object recognition routines. Image quality metadata provides information about the objects contained in the image, the features and condition of the specimen, and some basic visual properties of the image, while extended image metadata provides information about technical properties of the digital file and the digital multimedia object (Bakış et al. 2021, Karnani et al. 2022, Leipzig et al. 2021, Pepper et al. 2021, Wang et al. 2021) (see details on the Fish-AIR vocabulary web page). Batch metadata is used for separating different datasets and facilitates downloading and uploading data in batches with additional batch information and supplementary files.

    Additional flexibility, built into the database infrastructure using an RDF framework, will enable the system to host different taxonomic groups, which might require new metadata features (Jebbia et al. 2023). By combining these features with FAIR (Findable, Accessible, Interoperable, Reusable) principles and reproducibility, we provide Artificial Intelligence Readiness (AIR; Long and Romanoff 2023) to the dataset.

    Fish-AIR provides an easy-to-access, filtered, annotated and cleaned biological dataset for researchers from different backgrounds and facilitates the integration of biological knowledge based on digitized preserved specimens into ML pipelines. Because of the flexible database infrastructure and the addition of new datasets, researchers will also be able to access additional types of data, such as landmarks, specimen outlines, annotated parts, and quality scores, in the near future. Already, the dataset is the largest and most detailed AI-ready fish image dataset with an integrated Image Quality Management System (Jebbia et al. 2023, Wang et al. 2021).
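
    To make the three vocabulary groups concrete, here is one hypothetical record per group, written as plain Python dicts. All field names are our own illustrative stand-ins, not Fish-AIR terms; the authoritative vocabulary is on the Fish-AIR vocabulary web page.

        # All field names below are hypothetical illustrations, not Fish-AIR terms.

        image_quality_metadata = {          # what is in the picture, and how usable it is
            "specimenVisible": True,
            "specimenCurved": False,
            "partsFoldedOrMissing": False,
            "obstructions": ["label", "tag"],
            "brightnessScore": 4,           # e.g., 1 (poor) to 5 (good)
            "focusScore": 5,
            "overallQualityScore": 4,
        }

        extended_image_metadata = {         # technical properties of the digital file
            "fileFormat": "image/jpeg",
            "pixelWidth": 4000,
            "pixelHeight": 2600,
            "colorSpace": "sRGB",
        }

        batch_metadata = {                  # separates datasets, supports bulk transfer
            "batchId": "example-batch-01",
            "sourceInstitution": "example institution",
            "supplementaryFiles": ["README.txt"],
        }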

    Extracting Landmark and Trait Information from Segmented Digital Specimen Images Generated by Artificial Neural Networks

    We have been successfully developing Artificial Intelligence (AI) models for automatically classifying fish species using neural networks over the last three years within the “Biology Guided Neural Network” (BGNN) project*1. We continue our efforts in another, broader project, “Imageomics: A New Frontier of Biological Information Powered by Knowledge-Guided Machine Learning”*2. One of the main topics in the Imageomics Project is “Morphological Barcoding”. Within the Morphological Barcoding study, we are trying to build a gold-standard method to identify species in different taxonomic groups based on their external morphology. This list of characters will contain, but not be limited to, landmarks and quantitative traits such as measurements of distances, areas, angles, proportions, colors, histograms, patterns, shapes, and outlines. The taxonomic groups will be limited by the data available, and we use fish as the topic of interest in this preliminary study.

    In the current study, we have focused on extracting morphological characters that rely on anatomical features of fish, such as the location of the eye, body length, and area of the head. We developed a schematic workflow describing how we processed the data and extracted the information (Fig. 1). We performed our analysis on the segmented images produced by the Karpatne Team within the BGNN project (Bart et al. 2021). Segmentation was performed using Artificial Neural Networks - Semantic Segmentation (Long et al. 2015); the segments to be detected were the eye, head, trunk, caudal fin, pectoral fin, dorsal fin, anal fin and pelvic fin.

    Segmented images, metadata and species lists were given as input to the workflow. During the cleaning and filtering subroutines, a subset of data was created by filtering down to the desired segmented images with corresponding metadata. In the validation step, segmented images were checked by comparing the number of specimens in the original image to the separate bounding-boxed specimen images, noting violations in the segmentations, counts of segments, the relative positions of the segments to one another, traces of batch effects, and segment size and shape. Based on these validation criteria, each segmented image was assigned a score from 1 to 5, similar to the Adobe XMP Basic namespace.

    The landmarks and traits to be used in the study were extracted from the current literature, while being mindful that some features may not be extractable computationally. Using the landmark list, landmarks were extracted by adapting descriptions from the literature onto the segments, such as picking the leftmost point on the head as the tip of the snout and the top-left point on the pelvic fin as the base of the pelvic fin. These 2D vectors (coordinates) were then fine-tuned by adjusting their positions to lie on the outline of the fish, since most of the landmarks are located on the outline. Procrustes analysis*3 was performed to scale all of the measurements together, and point clouds were generated. These vectors were stored as landmark data. Segment centroids were also treated as landmarks. Extracted landmarks were validated by comparing their relative positions to one another and, where available, against their manually captured positions. A score was assigned based on these comparisons, similar to the segmentation validation score.

    Based on the trait list definitions, traits were extracted by measuring distances between two landmarks, angles between three landmarks, areas enclosed by three or more landmarks, areas of the segments, and ratios between two distances, between two areas, or between a distance and the square root of an area; these values were stored as trait data. Finally, these values were compared within their own species clusters to detect errors and check whether the values remained within bounds. Trait scores were calculated from these error estimates, similar to the segmentation scores, with the aim of selecting good-quality data for further analysis such as Principal Component Analysis.

    Our work on extracting features from segmented digital specimen images has shown that the accuracy of traits such as measurements, areas and angles depends on the accuracy of the landmarks, and the accuracy of the landmarks is highly dependent on the segmentation of the parts of the specimen. Landmarks located on the outline of the body (the combination of the head and trunk segments of the fish) were found to be more accurate than landmarks that represent inner features, such as the mouth and the pectoral fin in some taxonomic groups. However, the eye location is almost always accurate, since it is based on the centroid of the eye segment. In the remaining part of this study we will improve the score calculation for segments, images, landmarks and traits, and calculate the accuracy of the scores by comparing the statistical results obtained from analysis of the landmark and trait data.
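
    As a concrete illustration of the landmark-and-trait step, the Python sketch below extracts a "tip of snout" landmark as the leftmost pixel of a head mask, uses segment centroids as landmarks, and derives distance and angle traits. The masks, names and values are toy stand-ins, not the project's code.

        import numpy as np

        def leftmost_point(mask):
            """Leftmost pixel of a boolean segment mask (e.g., snout tip on the head)."""
            ys, xs = np.nonzero(mask)
            i = xs.argmin()
            return np.array([xs[i], ys[i]], dtype=float)

        def centroid(mask):
            """Segment centroid, itself usable as a landmark (e.g., eye center)."""
            ys, xs = np.nonzero(mask)
            return np.array([xs.mean(), ys.mean()], dtype=float)

        def distance(p, q):
            """Distance trait between two landmarks."""
            return float(np.linalg.norm(p - q))

        def angle(p, vertex, q):
            """Angle trait (degrees) at `vertex` between rays toward p and q."""
            u, v = p - vertex, q - vertex
            cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

        # Toy masks standing in for the semantic-segmentation output.
        shape = (100, 200)
        head, eye, trunk = (np.zeros(shape, bool) for _ in range(3))
        head[40:60, 10:50] = True
        eye[45:50, 20:25] = True
        trunk[35:65, 50:160] = True

        snout = leftmost_point(head)
        print("snout-eye distance:", round(distance(snout, centroid(eye)), 2))
        print("angle at eye:", round(angle(snout, centroid(eye), centroid(trunk)), 1))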