
    Ethical, legal and social implications of genetically modified organisms in the shadow of advanced genetic tools

    In order to define the term GMO, different scientific definitions and legal explanations are available. In the regulation of GM foods, the US and EU legal frameworks are based on the production methodologies themselves. Several genome editing tools are currently available for the production of GMOs. Along with different site-directed nucleases (ZFNs, TALENs, etc.), RNAi and CRISPR/Cas9 have proven to be very effective tools for genome editing. According to the current EU legislation, introduced in 2018, the CRISPR/Cas9 and RNAi techniques are regulated as methods that produce GMOs, because the methodology of the process itself resembles traditional breeding methods. In the past few years, a large number of scientific publications have confirmed that CRISPR/Cas9 and RNAi technology produce GMOs, supporting the view that the legislation policies in the EU, and especially in the USA, need to be elaborated. In addition, strong public pressure makes it difficult to develop and implement new methodologies for GMO production. For this reason, the ELSI community is responsible for investigating and questioning whether the new genetic engineering techniques produce GMO food that is safe for human consumption.

    Statistical and Machine Learning Techniques in Human Microbiome Studies: Contemporary Challenges and Solutions

    The human microbiome has emerged as a central research topic in human biology and biomedicine. Current microbiome studies generate high-throughput omics data across different body sites, populations, and life stages. Many of the challenges in microbiome research are similar to those of other high-throughput studies: the quantitative analyses need to address the heterogeneity of the data, its specific statistical properties, and the remarkable variation in microbiome composition across individuals and body sites. This has led to a broad spectrum of statistical and machine learning challenges that range from study design, data processing, and standardization to analysis, modeling, cross-study comparison, prediction, data science ecosystems, and reproducible reporting. Although many statistical and machine learning approaches and tools have been developed, new techniques are needed to deal with emerging applications and the vast heterogeneity of microbiome data. We review and discuss emerging applications of statistical and machine learning techniques in human microbiome studies and introduce the COST Action CA18131 "ML4Microbiome", which brings together microbiome researchers and machine learning experts to address current challenges such as standardization of analysis pipelines for reproducibility of data analysis results, benchmarking, and improvement or development of existing and new tools and ontologies.

    On the Accuracy of Sequence Similarity Based Protein 3D Prediction

    In an earlier article (Akcesme and Can 2015), the authors examined the relation between primary and secondary structure mismatches in substrings of seventeen residues taken from two different proteins. They showed that mismatches in the corresponding secondary structure substrings of the same length mostly lag behind the primary mismatches. In a PhD dissertation (Akcesme 2016), the author examined the possibility of secondary structure prediction from smaller conserved segments and created the software AVISENNA, which outperforms PSIPRED and all other available secondary structure prediction tools. Another article (Akcesme et al. 2017) examined how far the secondary structure of a protein can be predicted from its hosts (larger proteins that contain the query protein as a subchain) in the set of solved structures currently deposited in the PDB. Around 17% of proteins have hosts in the PDB, and their secondary structures can be predicted with a mean accuracy of 90.39%. The accuracy of host-based secondary structure prediction also sets an upper bound for homology-based tertiary structure predictions. In this article, the impact of the mentioned inaccuracy on homology-based 3D structure predictions by the three predictors I-TASSER, Phyre2, and SWISS-MODEL is studied. Inaccuracies in the predicted tertiary structures are seen in the visual comparison of the 3D structures of query proteins, their predicted 3D images from the three predictors, and their counterparts in host proteins.

    Accuracy of Identical Subsequences Based Protein Secondary Structure Prediction

    Chou and Fasman developed the first empirical method for predicting the secondary structure of proteins from their amino acid sequences. Subsequently, the more sophisticated GOR method was developed. Although it became very popular among biologists, its accuracy was only slightly better than random. A significant improvement in prediction accuracy (>70%) was achieved by 'second generation' methods such as PHD, SAM-T98, and PSIPRED, which utilized information concerning sequence conservation. Only recently, F. B. Akcesme developed a local-similarity-based method that obtains an accuracy >90% in secondary structure prediction of any new protein. In this article we examined the possibility of sequence-similarity-based secondary structure prediction of proteins. To this end, all proteins of the PDB dataset were searched for identical subsequences in the other, larger proteins of the dataset. Around 17% of proteins in the PDB dataset have identical subsequences in other, larger proteins. When the secondary structures of these proteins are assigned as the corresponding secondary structures of the identical parts in the larger proteins, the average prediction accuracy is found to be 90.39%. Therefore, we conclude that an unknown protein has a 17% chance of having an identical subsequence in a larger protein in the Protein Data Bank (PDB), and that its secondary structure can then be predicted with around 90% accuracy by this method.
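    The transfer step described above (copying the host's secondary structure onto an identical subsequence of the query) can be sketched as follows. The toy sequences and the in-memory PDB list are illustrative stand-ins, not real data; a real pipeline would scan structures and DSSP assignments from the PDB itself.

```python
# Sketch of identical-subsequence-based secondary structure transfer.
# The "pdb" list of (sequence, secondary structure) pairs is a made-up
# stand-in for the real PDB dataset used in the article.

def predict_from_host(query_seq, pdb_entries):
    """If query_seq occurs verbatim inside a larger host protein,
    copy the host's secondary structure over the matching region."""
    for host_seq, host_ss in pdb_entries:
        if len(host_seq) > len(query_seq):
            pos = host_seq.find(query_seq)
            if pos != -1:
                return host_ss[pos:pos + len(query_seq)]
    return None  # no host found (roughly 83% of cases per the article)

# Toy example: the query is a subchain of the first host entry.
pdb = [("MKTAYIAKQRQISFVK", "CCHHHHHHHHHHHCCC"),
       ("GGGGGG",           "CCCCCC")]
print(predict_from_host("AYIAKQRQ", pdb))  # HHHHHHHH
```

    The reported 90.39% accuracy then corresponds to comparing such transferred assignments against the query's own experimentally determined secondary structure, position by position.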

    Recurrent Neural Networks for Linear B-Epitope Prediction in Antigens

    Experimental methods used for characterizing epitopes, which play a vital role in the development of peptide vaccines, in the diagnosis of diseases, and in allergy research, are time consuming and need huge resources. Many online epitope prediction tools can help experimenters shortlist candidate peptides. For predicting B-cell epitopes in an antigenic sequence, Jordan recurrent neural networks (JRNNs) are found to be the most successful. To train and test the neural networks, 262,583 B-cell epitopes were retrieved from the IEDB database. 99.9% of these epitopes have lengths in the interval of 6-25 amino acids. For each of these lengths, committees of 11 expert recurrent neural networks are trained. To train these experts, non-epitopes are needed alongside the epitopes. Non-epitopes are created as random sequences of amino acids of the same length, followed by a filtering process. To distinguish epitopes from non-epitopes, the votes of the eleven experts are aggregated by majority vote. An overall accuracy of 97.23% is achieved. These experts are then used to predict the linear B-cell epitopes of the antigen ESAT-6 (tuberculosis).
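    The aggregation step of the committee scheme described above can be sketched as a simple majority vote. The votes here are hard-coded stand-ins for the binary outputs of the eleven trained JRNN experts, which the sketch does not attempt to reproduce.

```python
# Minimal sketch of the eleven-expert committee: each expert casts a
# binary vote (1 = epitope, 0 = non-epitope) and the majority decides.
# In the article the voters are trained Jordan recurrent neural networks.

def majority_vote(votes):
    """Aggregate binary expert votes; True if at least 6 of 11 say 'epitope'."""
    assert len(votes) == 11, "one committee has exactly eleven experts"
    return sum(votes) > len(votes) // 2

votes = [1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]  # 7 of 11 vote 'epitope'
print(majority_vote(votes))  # True
```

    Using an odd committee size avoids ties, so every candidate peptide receives an unambiguous epitope/non-epitope label.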

    Perceptions of students in health and molecular life sciences regarding pharmacogenomics and personalized medicine

    Background: Increasing evidence is demonstrating that a patient's unique genetic profile can be used to detect a disease's onset, prevent its progression, and optimize its treatment. This has led to increased global efforts to implement personalized medicine (PM) and pharmacogenomics (PG) in clinical practice. Here we investigated the perceptions of students from different universities in Bosnia and Herzegovina (BH) towards PG/PM, as well as the related ethical, legal, and social implications (ELSI). This descriptive, cross-sectional study is based on a survey of 559 students from the Faculties of Medicine, Pharmacy, Health Studies, Genetics and Bioengineering, and other study programs. Results: Our results showed that 50% of students had heard about personal genome testing companies and 69% would consider having a genetic test done. A majority of students (57%) agreed that PM represents a promising healthcare model, and 40% of students agreed that their study program is well designed for understanding PG/PM. The latter opinion seems to be particularly influenced by the field of study (OR = 7.23, CI 1.99–26.2, p = 0.003). Students with this opinion are also more willing to continue their postgraduate education in PM (OR = 4.68, CI 2.59–8.47, p < 0.001). Conclusions: Our results indicate a positive attitude of biomedical students in Bosnia and Herzegovina towards genetic testing and personalized medicine. Importantly, they emphasize the key importance of pharmacogenomic education for more efficient translation of precision medicine into clinical practice.
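    The odds ratios and confidence intervals reported above are standard cross-tabulation statistics. A minimal sketch of how such values are computed from a 2×2 contingency table follows; the counts below are made up for illustration and are not the survey's data.

```python
import math

def odds_ratio(a, b, c, d):
    """OR for a 2x2 table [[a, b], [c, d]]:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    return (a * d) / (b * c)

def ci95(a, b, c, d):
    """Odds ratio with a Woolf (log-scale) 95% confidence interval."""
    or_ = odds_ratio(a, b, c, d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts for illustration only.
print(ci95(40, 20, 30, 60))  # OR = 4.0 with its 95% CI (about 2.0 to 8.0)
```

    A CI that excludes 1, as in both results above, indicates a statistically significant association at the 5% level.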
