
    System-theoretic trends in econometrics


    Computational Investigations on Polymerase Actions in Gene Transcription and Replication Combining Physical Modeling and Atomistic Simulations

    Polymerases are protein enzymes that move along nucleic acid chains and catalyze template-based polymerization reactions during gene transcription and replication. Polymerases also substantially improve transcription or replication fidelity through non-equilibrium enzymatic cycles. We briefly review computational efforts toward understanding the mechano-chemical coupling and fidelity control mechanisms of polymerase elongation, regarding the polymerases as molecular information motors during the elongation process. Understanding the full polymerase functional cycle requires a spectrum of computational approaches spanning multiple time and length scales. We set aside quantum mechanics-based approaches to polymerase catalysis, which have already been surveyed extensively, and address only the statistical physics modeling approach and the all-atom molecular dynamics simulation approach. We organize this review around our own modeling and simulation work on the single-subunit T7 RNA polymerase, and summarize comparable studies on structurally similar DNA polymerases. For the multi-subunit RNA polymerases that have been intensively studied in recent years, we leave detailed discussion of the simulation achievements to other computational chemistry surveys and introduce only recently published representative studies, including our own preliminary work on structure-based modeling of yeast RNA polymerase II. Finally, we briefly survey kinetic modeling of elongation pauses and backtracking. We emphasize the fluctuation and control mechanisms of polymerase actions, highlight the non-equilibrium physical nature of the system, and offer perspectives toward understanding replication and transcription regulation from single-molecule detail to the genome-wide scale.
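    As a loose illustration of the kind of kinetic modeling of elongation pauses mentioned in the abstract (not a model from the review itself), a two-state Gillespie-style simulation with stochastic pausing can be sketched as follows; all rate constants are invented for illustration:

```python
import random

def simulate_elongation(n_nt=1000, k_step=10.0, k_pause=0.5,
                        k_resume=1.0, seed=1):
    """Toy two-state kinetic model of polymerase elongation: an active
    state that either translocates (rate k_step) or enters a pause
    (rate k_pause); the paused state resumes at rate k_resume.
    Returns the total time to elongate n_nt nucleotides."""
    rng = random.Random(seed)
    t, position, paused = 0.0, 0, False
    while position < n_nt:
        if paused:
            t += rng.expovariate(k_resume)   # dwell in the paused state
            paused = False
        else:
            total = k_step + k_pause
            t += rng.expovariate(total)      # dwell in the active state
            if rng.random() < k_step / total:
                position += 1                # translocate one nucleotide
            else:
                paused = True                # enter the pause state
    return t

total_time = simulate_elongation()
print(total_time)  # mean is roughly (1 + k_pause/k_resume)/k_step per nt
```

    With these toy rates the expected time per nucleotide is (1 + 0.5)/10 = 0.15, so 1000 nucleotides take about 150 time units.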

    Kernel Density Estimation with Linked Boundary Conditions

    Kernel density estimation on a finite interval poses an outstanding challenge because of the well-recognized bias at the boundaries of the interval. Motivated by an application in cancer research, we consider a boundary constraint linking the values of the unknown target density function at the boundaries. We provide a kernel density estimator (KDE) that successfully incorporates this linked boundary condition, leading to a non-self-adjoint diffusion process and expansions in non-separable generalized eigenfunctions. The solution is rigorously analyzed through an integral representation given by the unified transform (or Fokas method). The new KDE possesses many desirable properties, such as consistency, asymptotically negligible bias at the boundaries, and an increased rate of approximation, as measured by the AMISE. We apply our method to the motivating example in biology and provide numerical experiments with synthetic data, including comparisons with state-of-the-art KDEs (which currently cannot handle linked boundary constraints). Results suggest that the new method is fast and accurate. Furthermore, we demonstrate how to build statistical estimators of the boundary conditions satisfied by the target function without a priori knowledge. Our analysis can also be extended to more general boundary conditions that may be encountered in applications.
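    To illustrate the boundary-bias problem the paper addresses, here is the standard unconstrained Gaussian KDE (not the paper's linked-boundary estimator): on data uniform over [0, 1], the naive estimate is correct in the interior but drops to about half the true density at the endpoints, because kernel mass leaks outside the support.

```python
import numpy as np

def gaussian_kde(samples, grid, bandwidth):
    """Plain Gaussian KDE; biased near interval boundaries because
    kernel mass placed near an endpoint leaks outside the support."""
    samples = np.asarray(samples)[:, None]   # shape (n, 1)
    grid = np.asarray(grid)[None, :]         # shape (1, m)
    kernels = np.exp(-0.5 * ((grid - samples) / bandwidth) ** 2)
    kernels /= bandwidth * np.sqrt(2 * np.pi)
    return kernels.mean(axis=0)              # average kernel per grid point

# Uniform samples on [0, 1]: the true density is 1 everywhere,
# but the naive estimate falls toward 0.5 at the endpoints.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 10_000)
grid = np.linspace(0, 1, 101)
f = gaussian_kde(x, grid, bandwidth=0.05)
print(f[50], f[0])  # interior ~1.0, boundary ~0.5
```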

    Coding Theory and Algebraic Combinatorics

    This chapter introduces and elaborates on the fruitful interplay of coding theory and algebraic combinatorics, with most of the focus on the interaction of codes with combinatorial designs, finite geometries, simple groups, sphere packings, kissing numbers, lattices, and association schemes. In particular, special interest is devoted to the relationship between codes and combinatorial designs. We describe and recapitulate important results in the development of the state of the art. In addition, we give illustrative examples and constructions, and highlight recent advances. Finally, we provide a collection of significant open problems and challenges concerning future research.

    Comment: 33 pages; handbook chapter, to appear in: "Selected Topics in Information and Coding Theory", ed. by I. Woungang et al., World Scientific, Singapore, 201
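    One classical instance of the code/design interplay the chapter surveys: the supports of the minimum-weight (weight-3) codewords of the binary [7,4,3] Hamming code form a 2-(7,3,1) design, the Fano plane. A short sketch verifying this by exhaustive enumeration:

```python
from itertools import product, combinations

# Generator matrix of the binary [7,4,3] Hamming code.
G = [(1, 0, 0, 0, 0, 1, 1),
     (0, 1, 0, 0, 1, 0, 1),
     (0, 0, 1, 0, 1, 1, 0),
     (0, 0, 0, 1, 1, 1, 1)]

# Enumerate all 16 codewords as GF(2) combinations of the rows.
codewords = set()
for msg in product((0, 1), repeat=4):
    cw = tuple(sum(m * g for m, g in zip(msg, col)) % 2
               for col in zip(*G))
    codewords.add(cw)

# Supports of the weight-3 (minimum-weight) codewords.
blocks = [frozenset(i for i, bit in enumerate(cw) if bit)
          for cw in codewords if sum(cw) == 3]
print(len(blocks))  # 7 blocks of size 3 on 7 points

# 2-(7,3,1) design property: every pair of points lies in exactly
# one block -- these 7 blocks are the lines of the Fano plane.
for pair in combinations(range(7), 2):
    assert sum(1 for blk in blocks if set(pair) <= blk) == 1
```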

    High-throughput sequencing of the T-cell receptor repertoire: pitfalls and opportunities.

    T-cell specificity is determined by the T-cell receptor, a heterodimeric protein coded for by an extremely diverse set of genes produced by imprecise somatic gene recombination. Massively parallel high-throughput sequencing allows millions of different T-cell receptor genes to be characterized from a single sample of blood or tissue. However, the extraordinary heterogeneity of the immune repertoire poses significant challenges for subsequent analysis of the data. We outline the major steps in processing of repertoire data, considering low-level processing of raw sequence files and high-level algorithms, which seek to extract biological or pathological information. The latest generation of bioinformatics tools allows millions of DNA sequences to be accurately and rapidly assigned to their respective V (variable) and J (joining) gene segments, and to reconstruct an almost error-free representation of the non-templated additions and deletions that occur. High-level processing can measure the diversity of the repertoire in different samples, quantify V and J usage and identify private and public T-cell receptors. Finally, we discuss the major challenge of linking T-cell receptor sequence to function, and specifically to antigen recognition. Sophisticated machine learning algorithms are being developed that can combine the paradoxical degeneracy and cross-reactivity of individual T-cell receptors with the specificity of the overall T-cell immune response. Computational analysis will provide the key to unlock the potential of the T-cell receptor repertoire to give insight into the fundamental biology of the adaptive immune system and to provide powerful biomarkers of disease.
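    As a minimal illustration of the high-level repertoire summaries mentioned above, Shannon entropy of clonotype frequencies is one common diversity measure; the CDR3 amino acid sequences below are hypothetical:

```python
import math
from collections import Counter

def shannon_diversity(clone_counts):
    """Shannon entropy (in nats) of a clonotype frequency
    distribution; higher values indicate a more diverse repertoire."""
    total = sum(clone_counts)
    freqs = [c / total for c in clone_counts if c > 0]
    return -sum(f * math.log(f) for f in freqs)

# Hypothetical CDR3 reads collapsed into clonotype counts.
reads = ["CASSLGQGYEQYF", "CASSLGQGYEQYF", "CASSPDRGNTIYF",
         "CASRTGELFF", "CASSLGQGYEQYF", "CASRTGELFF"]
counts = Counter(reads)            # {clonotype: read count}
print(shannon_diversity(counts.values()))
```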

    Computational Intelligence for Life Sciences

    Computational Intelligence (CI) is a computer science discipline encompassing the theory, design, development and application of biologically and linguistically derived computational paradigms. Traditionally, the main elements of CI are Evolutionary Computation, Swarm Intelligence, Fuzzy Logic, and Neural Networks. CI aims at proposing new algorithms able to solve complex computational problems by taking inspiration from natural phenomena. In an intriguing turn of events, these nature-inspired methods have been widely adopted to investigate a plethora of problems related to nature itself. In this paper we present a variety of CI methods applied to three problems in life sciences, highlighting their effectiveness: we describe how protein folding can be addressed by exploiting Genetic Programming, how the inference of haplotypes can be tackled using Genetic Algorithms, and how the estimation of biochemical kinetic parameters can be performed by means of Swarm Intelligence. We show that CI methods can generate very high quality solutions, providing a sound methodology to solve complex optimization problems in life sciences.
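    A minimal sketch of how Swarm Intelligence can estimate a kinetic parameter: a global-best particle swarm fitting the rate constant of a toy exponential decay. All parameter choices (inertia, acceleration coefficients, bounds) are illustrative, not taken from the paper:

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, seed=0):
    """Minimal global-best particle swarm optimization sketch."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()                              # personal bests
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()            # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy inverse problem: recover the decay rate k of y(t) = exp(-k t)
# from noiseless "observations" generated with true k = 2.0.
t = np.linspace(0, 2, 20)
obs = np.exp(-2.0 * t)
loss = lambda p: np.sum((np.exp(-p[0] * t) - obs) ** 2)
k_est, _ = pso(loss, bounds=[(0.0, 10.0)])
print(k_est)  # close to 2.0
```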

    Model-based Cognitive Neuroscience: Multifield Mechanistic Integration in Practice

    Autonomist accounts of cognitive science suggest that cognitive model building and theory construction (can or should) proceed independently of findings in neuroscience. Common functionalist justifications of autonomy rely on there being relatively few constraints between neural structure and cognitive function (e.g., Weiskopf, 2011). In contrast, an integrative mechanistic perspective stresses the mutual constraining of structure and function (e.g., Piccinini & Craver, 2011; Povich, 2015). In this paper, I show how model-based cognitive neuroscience (MBCN) epitomizes the integrative mechanistic perspective and concentrates the most revolutionary elements of the cognitive neuroscience revolution (Boone & Piccinini, 2016). I also show how the prominent subset account of functional realization supports the integrative mechanistic perspective I take on MBCN and use it to clarify the intralevel and interlevel components of integration.