
    Bioinformatics Methods For Studying Intra-Host and Inter-Host Evolution Of Highly Mutable Viruses

    Reproducibility and robustness are two important criteria for assessing the reliability of bioinformatics analyses. Assessing tools against these criteria normally requires repeating experiments across laboratory facilities, which is costly and time consuming. In this study we propose methods that generate computational replicates, allowing the reproducibility of genomic tools to be assessed. We analyzed three groups of genomic tools: DNA-seq read alignment tools, structural variant (SV) detection tools, and RNA-seq gene expression quantification tools, testing each with different technical replicate data. We observed that some tools were affected by the technical replicate data while others remained robust, and that the choice of read alignment tool also matters for SV detection. On the other hand, the RNA-seq quantification tools we chose (Kallisto and Salmon) were unaffected by shuffled data but were affected by reverse-complemented data. Our proposed method may help the biomedical community assess the robustness and reproducibility of genomic tools and choose the most appropriate tools for their needs. Furthermore, this study offers genomic tool developers insight into the importance of balancing technical improvements with reliable results.
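    The two kinds of computational replicates the abstract mentions — shuffled read order and reverse-complemented reads — are simple to construct. A minimal sketch (function names are illustrative, not from the study's actual code): a robust aligner or quantifier should produce the same results on either replicate as on the original reads.

```python
# Sketch of two "computational replicate" transformations for DNA reads:
# (1) reverse complement, (2) shuffled read order. A robust genomic tool
# should be invariant to both.
import random

COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def reverse_complement(read: str) -> str:
    """Return the reverse complement of a DNA read."""
    return read.translate(COMPLEMENT)[::-1]

def shuffled_replicate(reads: list, seed: int = 0) -> list:
    """Reorder reads deterministically; results should be order-invariant."""
    rng = random.Random(seed)
    replicate = reads[:]
    rng.shuffle(replicate)
    return replicate

reads = ["ACGTTG", "GGATCC"]
print([reverse_complement(r) for r in reads])  # ['CAACGT', 'GGATCC']
```

    Note that the second read is its own reverse complement (a palindromic site), which is exactly the sort of case where a tool's output should not change at all.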

    Hierarchical Path Planner using Workspace Decomposition and Parallel Task-Space RRTs

    This paper presents a hierarchical path planner consisting of two stages: a global planner that uses workspace information to create collision-free paths for the robot end-effector to follow, and multiple local planners running in parallel that verify the paths in the configuration space by expanding a task-space rapidly-exploring random tree (RRT). We demonstrate the practicality of our approach by comparing it with state-of-the-art planners on several challenging path planning problems. While using a single tree, our planner outperforms other single-tree approaches in task space or configuration space (C-space), and its performance and robustness are comparable to or better than those of parallelized bidirectional C-space planners.
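    The core primitive underlying any RRT variant, including the task-space trees used here, is the extend step: sample a point, find the nearest tree node, and step toward the sample if the new configuration is collision-free. A minimal 2D sketch (not the paper's implementation; the collision checker here is a stand-in):

```python
# Minimal RRT "extend" step: grow the tree toward a sampled point by a
# bounded step, keeping the new node only if it is collision-free.
import math
import random

def nearest(tree, q):
    """Return the tree node closest to sample q (Euclidean distance)."""
    return min(tree, key=lambda n: math.dist(n, q))

def extend(tree, q_rand, step=0.5, collision_free=lambda q: True):
    """Try to grow the tree one step toward q_rand; return the new node."""
    q_near = nearest(tree, q_rand)
    d = math.dist(q_near, q_rand)
    if d == 0.0:
        return None
    t = min(step / d, 1.0)  # clamp so we never overshoot the sample
    q_new = tuple(a + t * (b - a) for a, b in zip(q_near, q_rand))
    if collision_free(q_new):
        tree.append(q_new)
        return q_new
    return None

tree = [(0.0, 0.0)]  # root at the start configuration
rng = random.Random(1)
for _ in range(100):
    extend(tree, (rng.uniform(0, 5), rng.uniform(0, 5)))
print(len(tree))  # tree has grown from its single root node
```

    The paper's parallel local planners each run such a tree independently; here only the single-tree expansion loop is shown.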

    Technology dictates algorithms: Recent developments in read alignment

    Massively parallel sequencing techniques have revolutionized biological and medical sciences by providing unprecedented insight into the genomes of humans, animals, and microbes. Modern sequencing platforms generate enormous amounts of genomic data in the form of nucleotide sequences or reads. Aligning reads onto reference genomes enables the identification of individual-specific genetic variants and is an essential step of the majority of genomic analysis pipelines. Aligned reads are essential for answering important biological questions, such as detecting mutations driving various human diseases and complex traits, as well as identifying species present in metagenomic samples. The read alignment problem is extremely challenging due to the large size of analyzed datasets and numerous technological limitations of sequencing platforms, and researchers have developed novel bioinformatics algorithms to tackle these difficulties. Importantly, computational algorithms have evolved and diversified in accordance with technological advances, leading to today's diverse array of bioinformatics tools. Our review provides a survey of algorithmic foundations and methodologies across 107 alignment methods published between 1988 and 2020, for both short and long reads. We provide rigorous experimental evaluation of 11 read aligners to demonstrate the effect of these underlying algorithms on the speed and efficiency of read aligners. We separately discuss how longer read lengths produce unique advantages and limitations for read alignment techniques. We also discuss how general alignment algorithms have been tailored to the specific needs of various domains in biology, including whole transcriptome, adaptive immune repertoire, and human microbiome studies.
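    Many of the surveyed aligners ultimately score candidate placements with dynamic programming. The classic building block is Smith-Waterman local alignment, sketched below in its score-only form (real aligners wrap this in heuristic seeding and indexing; the scoring parameters here are arbitrary illustrative defaults):

```python
# Score-only Smith-Waterman local alignment: the dynamic-programming core
# that many read aligners use (behind heuristic seeding) to score a read
# against a candidate region of the reference.
def smith_waterman(read, ref, match=2, mismatch=-1, gap=-2):
    """Return the best local alignment score of read against ref."""
    rows, cols = len(read) + 1, len(ref) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if read[i - 1] == ref[j - 1] else mismatch
            # Local alignment: a cell never goes below zero, which lets
            # an alignment start anywhere in read and reference.
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,  # match/mismatch
                          H[i - 1][j] + gap,    # gap in reference
                          H[i][j - 1] + gap)    # gap in read
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACGT", "TTACGTTT"))  # 8: perfect 4-base local match
```

    The quadratic cost of this table is precisely why the review's timeline matters: as read counts and lengths grew, aligners moved to indexes (suffix arrays, FM-indexes, minimizers) to restrict where this dynamic programming is run.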

    Mapping the landscape: Peer review in computing education research

    Peer review is a mainstay of academic publication – indeed, it is the peer-review process that provides much of a publication's credibility. As the number of computing education conferences and the number of submissions increase, the need for reviewers grows. This report does not attempt to set standards for reviewing; rather, as a first step toward meeting the need for well-qualified reviewers, it presents an overview of the ways peer review is used in various venues, both inside computing education and, for comparison, in closely related areas outside our field. It considers four key components of peer review in some depth: criteria, the review process, roles and responsibilities, and ethics and etiquette. To do so, it draws on relevant literature, guidance and forms associated with peer review, interviews with journal editors and conference chairs, and a limited survey of the computing education research community. In addition to providing an overview of practice, this report identifies a number of themes running through the discourse that have relevance for decision making about how best to conduct peer review for a given venue.

    Automatic segmentation of corpus callosum using Gaussian mixture modeling and Fuzzy C means methods

    This paper presents a comparative study of the success and performance of the Gaussian mixture modeling and Fuzzy C means methods in determining the volume and cross-sectional areas of the corpus callosum (CC) using simulated and real MR brain images. The Gaussian mixture model (GMM) utilizes a weighted sum of Gaussian distributions, applying statistical decision procedures to define image classes. In Fuzzy C means (FCM), the image classes are represented by membership functions that express, according to fuzziness information, the distance from the cluster centers. In this study, automatic segmentation of the midsagittal section of the CC was achieved from simulated and real brain images, and the volume of the CC was obtained from the sagittal section areas. To compare the success of the methods, segmentation accuracy, Jaccard similarity, and segmentation time were calculated. The results show that GMM produced slightly more accurate segmentation (midsagittal section segmentation accuracy of 98.3% for GMM versus 97.01% for FCM); however, FCM segmented faster than GMM. With this study, an accurate and automatic segmentation system was developed that gives doctors a quantitative basis for comparison in treatment planning and in diagnosing diseases that affect the size of the CC. The approach can be adapted to segment other regions of the brain and can thus serve as a practical clinical tool. (C) 2013 Elsevier Ireland Ltd. All rights reserved.
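    The Jaccard similarity used above to compare the two methods is the ratio of overlap to union between a computed segmentation mask and a reference mask. A minimal sketch on flattened binary masks (the mask values are made up for illustration):

```python
# Jaccard similarity |A ∩ B| / |A ∪ B| between two binary segmentation
# masks, as used to compare GMM and FCM segmentations against a reference.
def jaccard(mask_a, mask_b):
    """Jaccard index of two equal-length binary masks (1 = segmented)."""
    a = {i for i, v in enumerate(mask_a) if v}
    b = {i for i, v in enumerate(mask_b) if v}
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return len(a & b) / len(a | b)

# Toy flattened masks: three vs. two foreground pixels, two shared.
gmm_mask = [0, 1, 1, 1, 0, 0]
ref_mask = [0, 1, 1, 0, 0, 0]
print(jaccard(gmm_mask, ref_mask))  # 2 shared / 3 in union ≈ 0.667
```

    A Jaccard index of 1.0 means the masks coincide exactly; values near the paper's reported accuracies indicate close but imperfect overlap with the reference segmentation.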

    Spectral analysis of portal vein Doppler signals in cirrhosis patients

    In this study, we investigated the efficacy of the short-time Fourier transform (STFT) for analyzing Doppler signals from the portal veins of healthy volunteers and cirrhosis patients. Sonograms and power spectral distributions capturing changes in the portal vein Doppler spectral waveform in cirrhosis were computed and compared with those of healthy volunteers. Five parameters were used to compare the power spectrum graphics, and clear differences in the calculated parameters were detected between healthy subjects and cirrhosis patients. The results indicate that power spectra and sonograms of portal vein Doppler signals may be used to detect cirrhosis. (C) 2007 Elsevier Ltd. All rights reserved.
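    The STFT underlying this analysis slides a window along the signal and computes a power spectrum per frame, giving the time-frequency sonogram the abstract describes. A minimal stdlib sketch (window length, hop, and the Hann window choice here are illustrative defaults, not the study's settings):

```python
# Minimal short-time Fourier transform: Hann-windowed DFT power spectra
# per frame, the basis of a Doppler sonogram / power spectral distribution.
import cmath
import math

def stft_power(signal, win=64, hop=32):
    """Return a list of per-frame power spectra (first win//2 bins)."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        # Hann window suppresses spectral leakage at the frame edges.
        seg = [signal[start + n] * (0.5 - 0.5 * math.cos(2 * math.pi * n / win))
               for n in range(win)]
        spectrum = []
        for k in range(win // 2):
            X = sum(seg[n] * cmath.exp(-2j * math.pi * k * n / win)
                    for n in range(win))
            spectrum.append(abs(X) ** 2)
        frames.append(spectrum)
    return frames

# Sanity check: a pure tone at 8 cycles per 64-sample window should put
# its power in DFT bin 8 of every frame.
sig = [math.sin(2 * math.pi * 8 * n / 64) for n in range(256)]
frames = stft_power(sig)
print(max(range(32), key=lambda k: frames[0][k]))  # 8
```

    Real implementations use an FFT rather than this direct DFT sum, but the framing and windowing are the same; the study's comparison parameters would then be computed from these per-frame spectra.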

    Comparison of multilayer perceptron training algorithms for portal venous Doppler signals in cirrhosis disease

    In this study, we developed an expert diagnostic system for interpreting portal vein Doppler signals from cirrhosis patients and healthy subjects using signal processing and artificial neural network (ANN) methods. Power spectral densities (PSDs) of these signals, obtained with the short-time Fourier transform (STFT), served as input to the ANN. The four-layer multilayer perceptron (MLP) networks we built gave very promising results in classifying healthy and cirrhotic subjects. For prediction purposes, the Levenberg-Marquardt training algorithm for the backpropagation MLP network was shown to work reasonably well. The diagnostic performance of the study shows the advantages of this system: it is rapid, easy to operate, noninvasive, and inexpensive, which makes it better suited to clinical application than alternatives, especially for early population screening. The stated results show that the proposed method can provide effective interpretation and point to the feasibility of a new intelligent diagnostic assistance system. © 2005 Elsevier Ltd. All rights reserved.
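    The pipeline is: PSD feature vector in, sigmoid MLP, class probability out. The sketch below uses plain gradient backpropagation as a simpler stand-in for the paper's Levenberg-Marquardt training, with a single hidden layer and made-up toy "PSD" feature vectors:

```python
# Tiny MLP classifier trained by plain backpropagation (a simplified
# stand-in for the paper's Levenberg-Marquardt-trained network). Inputs
# mimic PSD feature vectors; labels 1/0 mimic cirrhotic/healthy.
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class MLP:
    def __init__(self, n_in, n_hid, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-1, 1) for _ in range(n_in)]
                   for _ in range(n_hid)]
        self.b1 = [0.0] * n_hid
        self.w2 = [rng.uniform(-1, 1) for _ in range(n_hid)]
        self.b2 = 0.0

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        return sigmoid(sum(w * h for w, h in zip(self.w2, self.h)) + self.b2)

    def train_step(self, x, target, lr=0.5):
        y = self.forward(x)
        dy = (y - target) * y * (1 - y)           # output-layer delta
        for j, h in enumerate(self.h):
            dh = dy * self.w2[j] * h * (1 - h)    # hidden delta (pre-update)
            self.w2[j] -= lr * dy * h
            for i, xi in enumerate(x):
                self.w1[j][i] -= lr * dh * xi
            self.b1[j] -= lr * dh
        self.b2 -= lr * dy

# Toy features: class 1 = power concentrated at low frequencies,
# class 0 = flat spectrum. These vectors are invented for illustration.
data = [([0.9, 0.8, 0.1, 0.1], 1.0), ([0.3, 0.3, 0.3, 0.3], 0.0)]
net = MLP(4, 3)
for _ in range(500):
    for x, t in data:
        net.train_step(x, t)
print(net.forward(data[0][0]), net.forward(data[1][0]))
```

    Levenberg-Marquardt replaces these first-order gradient steps with damped Gauss-Newton updates, which is why it converges in far fewer iterations on small networks like this one.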

    Classification of macular and optic nerve disease by principal component analysis

    In this study, pattern electroretinography (PERG) signals were recorded with electrophysiological testing devices from 70 subjects with optic nerve or macular disease. Characterization and interpretation of the physiological PERG signals was done by principal component analysis (PCA). The first principal component of the data matrix from the optic nerve patients represents 67.24% of total variance, while that of the macular patients' data matrix represents 76.81%. The basic differences between the two patient groups were clearly captured by the first principal component. In addition, plotting the second principal component against the first separated the two patient groups unambiguously. This research developed an auxiliary system for interpreting PERG signals. The results show that PCA of physiological waveforms is a powerful method likely to be incorporated into future medical signal processing. (C) 2006 Elsevier Ltd. All rights reserved.
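    The explained-variance fractions quoted above (67.24% and 76.81%) are the leading eigenvalue of the data covariance matrix divided by the total variance. A minimal sketch computing the first principal component by power iteration (the toy data matrix is invented; real PERG waveforms would have many more samples per row):

```python
# First principal component via power iteration on the covariance matrix,
# plus the fraction of total variance it explains - the quantity the
# abstract reports per patient group.
import random

def first_pc(data, iters=200, seed=0):
    """Return (unit first PC, explained-variance fraction) of row data."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    X = [[row[j] - means[j] for j in range(d)] for row in data]
    # Sample covariance matrix C = X^T X / (n - 1).
    C = [[sum(X[k][i] * X[k][j] for k in range(n)) / (n - 1)
          for j in range(d)] for i in range(d)]
    rng = random.Random(seed)
    v = [rng.uniform(-1, 1) for _ in range(d)]
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]  # converges to the leading eigenvector
    eig = sum(v[i] * sum(C[i][j] * v[j] for j in range(d)) for i in range(d))
    total = sum(C[i][i] for i in range(d))  # trace = total variance
    return v, eig / total

# Nearly collinear toy data: almost all variance lies along one direction.
data = [[1.0, 2.0], [2.0, 4.1], [3.0, 5.9], [4.0, 8.2]]
v, frac = first_pc(data)
print(round(frac, 3))
```

    Plotting each subject's projection onto the first two such components reproduces the two-dimensional scatter the study used to separate the optic nerve and macular groups.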