
    "Going back to our roots": second generation biocomputing

    Researchers in the field of biocomputing have, for many years, successfully "harvested and exploited" the natural world for inspiration in developing systems that are robust, adaptable and capable of generating novel and even "creative" solutions to human-defined problems. However, in this position paper we argue that the time has now come for a reassessment of how we exploit biology to generate new computational systems. Previous solutions (the "first generation" of biocomputing techniques), whilst reasonably effective, are crude analogues of actual biological systems. We believe that a new, inherently inter-disciplinary approach is needed for the development of the emerging "second generation" of bio-inspired methods. This new modus operandi will require much closer interaction between the engineering and life sciences communities, as well as a bidirectional flow of concepts, applications and expertise. We support our argument by examining, in this new light, three existing areas of biocomputing (genetic programming, artificial immune systems and evolvable hardware), as well as an emerging area (natural genetic engineering) which may provide useful pointers as to the way forward. Comment: Submitted to the International Journal of Unconventional Computing

    Machine Learning and Integrative Analysis of Biomedical Big Data.

    Recent developments in high-throughput technologies have accelerated the accumulation of massive amounts of omics data from multiple sources: genome, epigenome, transcriptome, proteome, metabolome, etc. Traditionally, data from each source (e.g., genome) is analyzed in isolation using statistical and machine learning (ML) methods. Integrative analysis of multi-omics and clinical data is key to new biomedical discoveries and advancements in precision medicine. However, data integration poses new computational challenges as well as exacerbating those associated with single-omics studies. Specialized computational approaches are required to effectively and efficiently perform integrative analysis of biomedical data acquired from diverse modalities. In this review, we discuss state-of-the-art ML-based approaches for tackling five specific computational challenges associated with integrative analysis: the curse of dimensionality, data heterogeneity, missing data, class imbalance, and scalability issues.
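    One of the five challenges this review discusses is class imbalance, common in biomedical cohorts where cases are far rarer than controls. As a minimal, hedged sketch (the review itself covers more sophisticated remedies such as SMOTE or class-weighted losses), random oversampling duplicates minority-class samples until the classes match; all names and data below are illustrative only:

```python
# Minimal sketch of random oversampling to counter class imbalance.
# Hypothetical toy data; real pipelines would use SMOTE or class weights.
import random
from collections import Counter

def random_oversample(samples, labels, seed=0):
    """Duplicate minority-class samples until every class reaches the
    majority-class count. Returns new (samples, labels) lists."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    out_x, out_y = list(samples), list(labels)
    for y, xs in by_class.items():
        for _ in range(target - len(xs)):
            out_x.append(rng.choice(xs))  # resample minority class with replacement
            out_y.append(y)
    return out_x, out_y

X = [[0.1], [0.2], [0.3], [0.9]]          # three control samples, one case
y = ["control", "control", "control", "case"]
Xb, yb = random_oversample(X, y)
print(Counter(yb))                        # both classes now have 3 samples
```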

    The nature of chemical innovation: new enzymes by evolution

    I describe how we direct the evolution of non-natural enzyme activities, using chemical intuition and information on structure and mechanism to guide us to the most promising reaction/enzyme systems. With synthetic reagents to generate new reactive intermediates and just a few amino acid substitutions to tune the active site, a cytochrome P450 can catalyze a variety of carbene and nitrene transfer reactions. The cyclopropanation, N–H insertion, C–H amination, sulfimidation, and aziridination reactions now demonstrated are all well known in chemical catalysis but have no counterparts in nature. The new enzymes are fully genetically encoded, assemble and function inside cells, and can be optimized for different substrates, activities, and selectivities. We are learning how to use nature's innovation mechanisms to marry some of the synthetic chemists' favorite transformations with the exquisite selectivity and tunability of enzymes.

    Biochemical parameter estimation vs. benchmark functions: A comparative study of optimization performance and representation design

    © 2019 Elsevier B.V. Computational Intelligence methods, which include Evolutionary Computation and Swarm Intelligence, can efficiently and effectively identify optimal solutions to complex optimization problems by exploiting the cooperative and competitive interplay among their individuals. The exploration and exploitation capabilities of these meta-heuristics are typically assessed by considering well-known suites of benchmark functions, specifically designed for numerical global optimization purposes. However, their performance could change drastically in the case of real-world optimization problems. In this paper, we investigate this issue by considering the Parameter Estimation (PE) of biochemical systems, a common computational problem in the field of Systems Biology. In order to evaluate the effectiveness of various meta-heuristics in solving the PE problem, we compare their performance by considering a set of benchmark functions and a set of synthetic biochemical models characterized by a search space with an increasing number of dimensions. Our results show that some state-of-the-art optimization methods – able to largely outperform the other meta-heuristics on benchmark functions – are characterized by considerably poor performance when applied to the PE problem. We also show that a limiting factor of these optimization methods concerns the representation of the solutions: indeed, by means of a simple semantic transformation, it is possible to turn these algorithms into competitive alternatives. We corroborate this finding by performing the PE of a model of metabolic pathways in red blood cells. Overall, in this work we argue that classic benchmark functions cannot be fully representative of all the features that make real-world optimization problems hard to solve. This is the case, in particular, for the PE of biochemical systems. We also show that optimization problems must be carefully analyzed to select an appropriate representation, in order to actually obtain the performance promised by benchmark results.
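    The representation issue the abstract describes can be illustrated with a hedged sketch. Biochemical kinetic constants often span many orders of magnitude, so one plausible "semantic transformation" is to search in log-space, decoding a candidate exponent x as 10**x. The objective, bounds, and plain random-search optimizer below are illustrative stand-ins, not the paper's actual models or meta-heuristics:

```python
# Sketch: the same naive optimizer under a linear vs. a log-space
# representation of kinetic parameters. All numbers are hypothetical.
import math
import random

TRUE_K = [1e-3, 1e2]                      # hypothetical "true" rate constants

def fitness(k):
    """Sum of squared log10 errors against the hypothetical true constants."""
    return sum((math.log10(a) - math.log10(b)) ** 2 for a, b in zip(k, TRUE_K))

def random_search(decode, low, high, n=5000, seed=1):
    """Best fitness found by uniform random search under a given decoding."""
    rng = random.Random(seed)
    best_f = float("inf")
    for _ in range(n):
        x = [rng.uniform(low, high) for _ in TRUE_K]
        best_f = min(best_f, fitness(decode(x)))
    return best_f

# Linear representation: candidates drawn uniformly in [1e-6, 1e3].
f_lin = random_search(lambda x: x, 1e-6, 1e3)
# Log representation: candidates are exponents in [-6, 3], decoded as 10**x.
f_log = random_search(lambda x: [10 ** xi for xi in x], -6.0, 3.0)
print(f_lin, f_log)   # the log-space search gets much closer to TRUE_K
```

    The linear search wastes almost all of its samples on large values, while the log-space search covers every order of magnitude uniformly, which is the intuition behind re-representing the solutions.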

    Detecting deception and suspicion in dyadic game interactions

    In this paper we focus on detection of deception and suspicion from electrodermal activity (EDA) measured on the left and right wrists during a dyadic game interaction. We aim to answer three research questions: (i) Is it possible to reliably distinguish deception from truth based on EDA measurements during a dyadic game interaction? (ii) Is it possible to reliably distinguish the state of suspicion from trust based on EDA measurements during a card game? (iii) What is the relative importance of EDA measured on the left and right wrists? To answer our research questions we conducted a study in which 20 participants played the game Cheat in pairs with one EDA sensor placed on each of their wrists. Our experimental results show that EDA measures from the left and right wrists provide more information for suspicion detection than for deception detection, and that person-dependent detection is more reliable than person-independent detection. In particular, classifying the EDA signal with a Support Vector Machine (SVM) yields accuracies of 52% and 57% for person-independent prediction of deception and suspicion respectively, and 63% and 76% for person-dependent prediction of deception and suspicion respectively. Also, we found that: (i) the optimal interval of informative EDA signal for deception detection is about 1 s while it is around 3.5 s for suspicion detection; (ii) the EDA signal relevant for deception/suspicion detection can be captured around 3.0 s after a stimulus occurrence regardless of the stimulus type (deception/truthfulness/suspicion/trust); and that (iii) features extracted from EDA from both wrists are important for classification of both deception and suspicion. To the best of our knowledge, this is the first work that uses EDA data to automatically detect both deception and suspicion in a dyadic game interaction setting.
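    The pipeline described above (window the EDA signal, extract features, classify) can be sketched as follows. This is a hedged toy: a nearest-centroid rule stands in for the paper's SVM, the two features are simple stand-ins for the paper's feature set, and the signals are invented:

```python
# Toy EDA classification sketch: feature extraction + nearest-centroid rule.
# The data and the "suspicion shows a larger phasic response" assumption
# are illustrative only, not the paper's findings.
import math

def eda_features(signal):
    """Mean level and mean absolute first difference of an EDA window."""
    mean = sum(signal) / len(signal)
    diffs = [abs(b - a) for a, b in zip(signal, signal[1:])]
    return (mean, sum(diffs) / len(diffs))

def nearest_centroid(feats, centroids):
    """Return the label whose centroid is closest in feature space."""
    return min(centroids, key=lambda lbl: math.dist(feats, centroids[lbl]))

# Hypothetical training windows per state.
train = {
    "trust":     [[1.0, 1.0, 1.1, 1.0], [0.9, 1.0, 1.0, 1.1]],
    "suspicion": [[1.0, 1.6, 2.2, 1.8], [1.1, 1.9, 2.5, 2.0]],
}
centroids = {
    lbl: tuple(sum(f[i] for f in map(eda_features, wins)) / len(wins)
               for i in range(2))
    for lbl, wins in train.items()
}
print(nearest_centroid(eda_features([1.0, 1.7, 2.3, 1.9]), centroids))
# → suspicion
```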

    Predicting synthetic rescues in metabolic networks

    An important goal of medical research is to develop methods to recover the loss of cellular function due to mutations and other defects. Many approaches based on gene therapy aim to repair the defective gene or to insert genes with compensatory function. Here, we propose an alternative, network-based strategy that aims to restore biological function by forcing the cell to either bypass the functions affected by the defective gene, or to compensate for the lost function. Focusing on the metabolism of single-cell organisms, we computationally study mutants that lack an essential enzyme, and thus are unable to grow or have a significantly reduced growth rate. We show that several of these mutants can be turned into viable organisms through additional gene deletions that restore their growth rate. In a rather counterintuitive fashion, this is achieved via additional damage to the metabolic network. Using flux balance-based approaches, we identify a number of synthetically viable gene pairs, in which the removal of one enzyme-encoding gene results in a nonviable phenotype, while the deletion of a second enzyme-encoding gene rescues the organism. The systematic network-based identification of compensatory rescue effects may open new avenues for genetic interventions. Comment: Supplementary Information is available at the Molecular Systems Biology website: http://www.nature.com/msb/journal/v4/n1/full/msb20081.htm
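    The counterintuitive idea of a synthetic rescue can be illustrated with a hedged toy model (far simpler than the paper's flux-balance analysis; the genes, mechanism, and numbers are all invented): a high-yield pathway X makes a wasteful branch W harmless, but once X is deleted, W drains a cofactor that the backup pathway B needs, so deleting W as well restores growth:

```python
# Toy 3-gene model of a synthetic rescue. Arbitrary units, invented genes;
# the paper's actual method is flux balance analysis on genome-scale networks.
def growth_rate(knockouts):
    """Toy growth rate given a set of deleted enzyme-encoding genes."""
    cofactor = 10.0
    if "geneW" not in knockouts:
        cofactor -= 10.0    # wasteful branch W drains the shared cofactor
    if "geneX" not in knockouts:
        return 8.0          # high-yield pathway X does not need the cofactor
    if "geneB" not in knockouts and cofactor > 0:
        return 3.0          # backup pathway B runs only if the cofactor is free
    return 0.0              # no route to biomass: nonviable

print(growth_rate(set()))               # 8.0  wild type grows
print(growth_rate({"geneX"}))           # 0.0  single deletion is lethal
print(growth_rate({"geneX", "geneW"}))  # 3.0  a second deletion rescues growth
```

    Here (geneX, geneW) plays the role of a synthetically viable gene pair: the second deletion damages the network further yet restores viability.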

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday August 27th till Friday August 29th, 2014. The workshop was conveniently located in "The Arsenal" building within walking distance of both hotels and town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference. Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    Biocrystals: Growth, Synthesis and Materials

