36 research outputs found

    Reversible Jump Metropolis Light Transport using Inverse Mappings

    We study Markov Chain Monte Carlo (MCMC) methods operating in primary sample space and their interactions with multiple sampling techniques. We observe that incorporating the sampling technique into the state of the Markov chain, as done in Multiplexed Metropolis Light Transport (MMLT), impedes the ability of the chain to properly explore the path space, as transitions between sampling techniques lead to disruptive alterations of path samples. To address this issue, we reformulate Multiplexed MLT in the Reversible Jump MCMC (RJMCMC) framework and introduce inverse sampling techniques that turn light paths into the random numbers that would produce them. This allows us to formulate a novel perturbation that can locally transition between sampling techniques without changing the geometry of the path, and we derive the correct acceptance probability using RJMCMC. We investigate how to generalize this concept to the non-invertible sampling techniques commonly found in practice, and introduce probabilistic inverses that extend our perturbation to cover most sampling methods found in light transport simulations. Our theory reconciles the inverses with RJMCMC, yielding an unbiased algorithm, which we call Reversible Jump MLT (RJMLT). We verify the correctness of our implementation in canonical and practical scenarios and demonstrate improved temporal coherence, a decrease in structured artifacts, and faster convergence on a wide variety of scenes.
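    At its core, primary-sample-space MLT runs a Metropolis–Hastings chain over the vector of random numbers fed to a path sampler. The sketch below illustrates that loop in Python with a toy stand-in contribution function; the function, step size, and dimensionality are illustrative assumptions, not the paper's renderer:

```python
import random

# Hypothetical target: an unnormalized "path contribution" over the
# primary sample space [0,1]^2 (a stand-in for a real renderer's f).
def contribution(u):
    x, y = u
    return (0.5 + x * (1.0 - x)) * (0.5 + y * (1.0 - y))

def mutate(u, step=0.05):
    # Small perturbation in primary sample space, wrapped to stay in [0,1).
    return [(c + random.uniform(-step, step)) % 1.0 for c in u]

def psmlt(n_steps, seed=1):
    random.seed(seed)
    u = [random.random(), random.random()]
    f = contribution(u)
    samples = []
    for _ in range(n_steps):
        v = mutate(u)
        g = contribution(v)
        # Symmetric proposal: accept with probability min(1, g/f).
        if random.random() < min(1.0, g / f):
            u, f = v, g
        samples.append(u[:])
    return samples
</```>

    With a symmetric proposal the acceptance ratio reduces to min(1, g/f); the extra Jacobian and technique-selection terms that RJMLT derives for jumps between sampling techniques are exactly what this sketch omits.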

    Ensemble metropolis light transport

    This article proposes a Markov Chain Monte Carlo (MCMC) rendering algorithm based on a family of guided transition kernels. The kernels exploit properties of ensembles of light transport paths, which are distributed according to the lighting in the scene, and use this information to make informed decisions for guiding local path sampling. Critically, our approach does not require caching distributions in world space, saving time and memory, yet it is able to make guided sampling decisions based on whole paths. We show how this can be implemented efficiently by organizing the paths in each ensemble and designing transition kernels for MCMC rendering based on a carefully chosen subset of paths from the ensemble. This algorithm is easy to parallelize and reduces variance when rendering a variety of scenes.

    Quality Assessment and Variance Reduction in Monte Carlo Rendering Algorithms

    Over the past few decades, much work has focused on physically based rendering, which attempts to produce images that are indistinguishable from natural images such as photographs. Physically based rendering algorithms simulate the complex interactions of light with physically based material, light source, and camera models by structuring the problem as high-dimensional integrals [Kaj86] which do not have a closed-form solution. Stochastic processes such as Monte Carlo methods can approximate the expectation of these integrals, producing algorithms which converge to the true rendering solution as the amount of computation increases. When a finite amount of computation is used to approximate the rendering solution, images will contain undesirable distortions in the form of noise from under-sampling in image regions with complex light interactions. An important aspect of developing algorithms in this domain is having a means of accurately comparing and contrasting the relative performance of different approaches. Image Quality Assessment (IQA) measures provide a way of condensing the high dimensionality of image data to a single scalar value which can be used as a representative measure of image quality and fidelity. These measures are largely developed in the context of image datasets containing natural images (photographs) coupled with their synthetically distorted versions, and quality assessment scores given by human observers under controlled viewing conditions.
    Inference using these measures therefore relies on whether the synthetic distortions used to develop the IQA measures are representative of the natural distortions seen in images from the domain being assessed. When we consider images generated through stochastic rendering processes, the structure of visible distortions present in un-converged images is highly complex and varies spatially with lighting and scene composition. In this domain, the simple synthetic distortions commonly used to train and evaluate IQA measures are not representative of the complex natural distortions arising from the rendering process. This raises the question of how robust IQA measures are when applied to physically based rendered images. In this thesis we summarize the classical and recent works in the area of physically based rendering using stochastic approaches such as Monte Carlo methods. We develop a modern C++ framework wrapping MPI for managing and running code on large-scale distributed computing environments. With this framework we use high-performance computing to generate a dataset of Monte Carlo images. From this we provide a study of the effectiveness of modern and classical IQA measures and their robustness when evaluating images generated through stochastic rendering processes. Finally, we build on the strengths of these IQA measures and apply modern deep-learning methods to the No-Reference IQA problem, where we wish to assess the quality of a rendered image without knowing its true value.
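    The convergence property described above, where the estimate approaches the true integral as computation grows, can be seen with a minimal Monte Carlo estimator; the integrand and sample counts below are illustrative choices:

```python
import random

def mc_estimate(f, n, seed=0):
    # Average n uniform samples of f over [0, 1]: an unbiased
    # estimator of the integral of f, with variance shrinking as 1/n.
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

# Integrate f(x) = 3x^2 over [0, 1]; the exact answer is 1.
err_small = abs(mc_estimate(lambda x: 3 * x * x, 100) - 1.0)
err_large = abs(mc_estimate(lambda x: 3 * x * x, 100_000) - 1.0)
</```>

    The residual error at any finite sample count is precisely the image-space noise that the thesis's IQA measures must judge.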

    New Directions for Contact Integrators

    Contact integrators are a family of geometric numerical schemes which guarantee the conservation of the contact structure. In this work we review the construction of both the variational and Hamiltonian versions of these methods. We illustrate some of the advantages of geometric integration in the dissipative setting by focusing on models inspired by recent studies in celestial mechanics and cosmology.
    Comment: To appear as Chapter 24 in GSI 2021, Springer LNCS 1282

    CHARMM: The biomolecular simulation program

    CHARMM (Chemistry at HARvard Molecular Mechanics) is a highly versatile and widely used molecular simulation program. It has been developed over the last three decades with a primary focus on molecules of biological interest, including proteins, peptides, lipids, nucleic acids, carbohydrates, and small molecule ligands, as they occur in solution, crystals, and membrane environments. For the study of such systems, the program provides a large suite of computational tools that include numerous conformational and path sampling methods, free energy estimators, molecular minimization, dynamics, and analysis techniques, and model-building capabilities. The CHARMM program is applicable to problems involving a much broader class of many-particle systems. Calculations with CHARMM can be performed using a number of different energy functions and models, from mixed quantum mechanical-molecular mechanical force fields, to all-atom classical potential energy functions with explicit solvent and various boundary conditions, to implicit solvent and membrane models. The program has been ported to numerous platforms in both serial and parallel architectures. This article provides an overview of the program as it exists today with an emphasis on developments since the publication of the original CHARMM article in 1983. © 2009 Wiley Periodicals, Inc. J Comput Chem, 2009. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/63074/1/21287_ftp.pd

    Allostery of the flavivirus NS3 helicase and bacterial IGPS studied with molecular dynamics simulations

    2020 Spring. Includes bibliographical references. Allostery is a biochemical phenomenon where the binding of a molecule at one site in a biological macromolecule (e.g. a protein) results in a perturbation of activity or function at another distinct active site in the macromolecule's structure. Allosteric mechanisms are seen throughout biology and play important functions during cell signaling, enzyme activation, and metabolism regulation as well as genome transcription and replication processes. Biochemical studies have identified allosteric effects for numerous proteins, yet our understanding of the molecular mechanisms underlying allostery is still lacking. Molecular-level insights obtained from all-atom molecular dynamics simulations can drive our understanding and further experimentation on the allosteric mechanisms at play in a protein. This dissertation reports three such studies of allostery using molecular dynamics simulations in conjunction with other methods. Specifically, the first chapter introduces allostery and how computational simulation of proteins can provide insight into the mechanisms of allosteric enzymes. The second and third chapters are foundational studies of the flavivirus non-structural 3 (NS3) helicase. This enzyme hydrolyzes nucleoside triphosphate molecules to power the translocation of the enzyme along single-stranded RNA as well as the unwinding of double-stranded RNA; both the hydrolysis and helicase functions (translocation and unwinding) have allosteric mechanisms where the hydrolysis active site's ligand affects the protein-RNA interactions and bound RNA enhances the hydrolysis activity. Specifically, a bound RNA oligomer is seen to affect the behavior and positioning of waters within the hydrolysis active site, which is hypothesized to originate, in part, from the RNA-dependent conformational states of the RNA-binding loop.
Additionally, the substrate states of the NTP hydrolysis reaction cycle are seen to affect protein-RNA interactions, which is hypothesized to drive unidirectional translocation of the enzyme along the RNA polymer. Finally, chapter four introduces a novel method to study the biophysical coupling between two active sites in a protein. The short-ranged residue-residue interactions within the protein's three-dimensional structure are used to identify paths that connect the two active sites. This method is used to highlight the paths and residue-residue interactions that are important to the allosteric enhancement observed for the Thermotoga maritima imidazole glycerol phosphate synthase (IGPS) protein. Results from this new quantitative analysis have provided novel insights into the allosteric paths of IGPS. For both the NS3 and IGPS proteins, results presented in this dissertation have highlighted structural regions that may be targeted for small-molecule inhibition or mutagenesis studies. Towards this end, future studies of both allosteric proteins as well as broader impacts of the presented research are discussed in the final chapter.
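    The path-finding idea in chapter four, connecting two active sites through short-ranged residue-residue interactions, can be sketched as a weighted shortest-path search over an interaction graph. The residue names, weights, and graph below are invented for illustration, not taken from the dissertation:

```python
import heapq

# Toy residue-interaction network: nodes are residue ids, edge weights
# are, e.g., -log(interaction strength), so strong couplings give short paths.
edges = {
    ("A", "B"): 1.0, ("B", "C"): 0.5, ("C", "D"): 0.5,
    ("A", "E"): 2.5, ("E", "D"): 2.5, ("B", "D"): 2.0,
}

def neighbors(node):
    for (u, v), w in edges.items():
        if u == node:
            yield v, w
        elif v == node:
            yield u, w

def shortest_path(src, dst):
    # Standard Dijkstra over the interaction graph.
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in neighbors(u):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]

path, cost = shortest_path("A", "D")  # → (['A', 'B', 'C', 'D'], 2.0)
</```>

    With this weighting, chains of strongly coupled residues emerge as the shortest routes between the two sites, which is the qualitative behavior the chapter exploits.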

    The Role of Mutations in Protein Structural Dynamics and Function: A Multi-scale Computational Approach

    Proteins are a fundamental unit in biology. Although proteins have been extensively studied, there is still much to investigate. The mechanism by which proteins fold into their native state, how evolution shapes structural dynamics, and the dynamic mechanisms of many diseases are not well understood. In this thesis, protein folding is explored using a multi-scale modeling method including (i) geometric constraint based simulations that efficiently search for native-like topologies and (ii) reservoir replica exchange molecular dynamics, which identifies the low free energy structures and refines them toward the native conformation. A test set of eight proteins and three ancestral steroid receptor proteins are folded to 2.7 Å all-atom RMSD from their experimental crystal structures. Protein evolution and disease associated mutations (DAMs) are most commonly studied by in silico multiple sequence alignment methods. Here, however, the structural dynamics are incorporated to give insight into the evolution of three ancestral proteins and the mechanism of several diseases in the human ferritin protein. The differences in conformational dynamics of these evolutionarily related, functionally diverged ancestral steroid receptor proteins are investigated by obtaining the most collective motion through essential dynamics. Strikingly, this analysis shows that evolutionarily diverged proteins of the same family do not share the same dynamic subspace. Rather, those sharing the same function are simultaneously clustered together and distant from those functionally diverged homologs. This dynamics analysis also identifies 77% of mutations (functional and permissive) necessary to evolve new function. In silico methods for prediction of DAMs rely on differences in evolution rate due to purifying selection, and therefore the accuracy of DAM prediction decreases at fast and slow evolvable sites.
Here, we investigate structural dynamics by computing the contribution of each residue to the biologically relevant fluctuations and from this define a metric: the dynamic stability index (DSI). Using DSI we study the mechanism of three diseases observed in the human ferritin protein. The T30I and R40G DAMs show a loss of dynamic stability at the C-terminal helix and nearby regulatory loop, agreeing with experimental results implicating the same regulatory loop as a cause of cataract syndrome.
    Dissertation/Thesis. Ph.D. Physics 201
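    "Obtaining the most collective motion through essential dynamics," as described above, is operationally a principal component analysis of the coordinate covariance over a trajectory. A minimal sketch on synthetic data (the trajectory is fabricated; real input would be aligned MD coordinates):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "trajectory": 500 frames of 10 coordinates whose dominant
# collective motion lies along one fixed direction (illustrative data).
direction = rng.normal(size=10)
direction /= np.linalg.norm(direction)
frames = (rng.normal(size=(500, 1)) * 3.0) @ direction[None, :]
frames += rng.normal(scale=0.1, size=(500, 10))   # small uncorrelated noise

# Essential dynamics: diagonalize the covariance of the fluctuations.
fluct = frames - frames.mean(axis=0)
cov = fluct.T @ fluct / len(frames)
eigvals, eigvecs = np.linalg.eigh(cov)

# The top eigenvector recovers the planted collective direction.
top_mode = eigvecs[:, -1]
overlap = abs(top_mode @ direction)   # close to 1.0
</```>

    Comparing the subspaces spanned by the top few eigenvectors of two proteins is what underlies the "shared dynamic subspace" clustering described in the abstract.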

    Approaches for studying allostery using network theory

    Allostery is the process whereby binding of a substrate at a site other than the active site modulates the function of a protein. Allostery is thus one of the myriad biological processes that keep cells under tight regulatory control, specifically one that acts at the level of the protein rather than through changes in gene transcription or translation of mRNA. Despite over 50 years of investigation, allostery has remained a difficult phenomenon to elucidate. Structural changes are often too subtle for many experimental methods to capture, and it has become increasingly obvious that a range of timescales are involved, from extremely fast pico- to nanosecond local fluctuations all the way up to the millisecond or even second timescales over which the biological effects of allostery are observed. As a result, computational methods have become a powerful means of studying allostery, aided greatly by the staggering increases in computational power over the last 70 years. A field that has experienced a surge in interest over the last 20 years or so is network theory, perhaps stimulated by the development of the internet and the Web, two examples of immensely important networks in our everyday life. One of the reasons for the popularity of networks in modelling is their comparative simplicity: a network consists of nodes, representing a set of objects in a system, and edges, which capture the relations between them. In this thesis, we both apply existing ideas and methods from network theory and develop new computational network methods to study allostery in proteins. We attempt to tackle this problem in three distinct ways, each representing a protein using a different form of a network. Our initial work follows on logically from previous work in the group, representing proteins as graphs where atoms are nodes and bonds are energy-weighted edges.
In effect we disregard the 3-dimensional structure of the protein and instead focus on how the bond connectivity can be used to explain potential long-range communication between allosteric and active sites in a multimeric protein. We then focus on a class of protein models known as elastic network models, in which the edges correspond to mechanical Hookean springs between either atoms or residues, in order to understand the physical, mechanistic basis of allostery.
    Open Access
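    An elastic network model of the kind described reduces to a Hessian (stiffness) matrix assembled from the spring network, whose low-frequency eigenvectors are the collective motions. A toy one-dimensional chain shows the construction; the chain length and spring constant are arbitrary illustrative choices:

```python
import numpy as np

# Minimal 1-D elastic network: N beads in a chain, unit Hooke springs
# between neighbours (a toy stand-in for a residue-level ENM).
N, k = 6, 1.0
hessian = np.zeros((N, N))
for i in range(N - 1):
    # Each spring adds k to the two diagonal entries and -k off-diagonal.
    hessian[i, i] += k
    hessian[i + 1, i + 1] += k
    hessian[i, i + 1] -= k
    hessian[i + 1, i] -= k

eigvals, eigvecs = np.linalg.eigh(hessian)
# One zero mode (rigid translation of the whole chain); the softest
# internal motion is the lowest non-zero eigenvalue's eigenvector.
zero_modes = int(np.sum(np.abs(eigvals) < 1e-9))   # → 1
softest = eigvals[zero_modes]
</```>

    A real residue-level ENM does the same thing in 3N dimensions with distance-cutoff springs, yielding six zero modes (rigid translations and rotations) instead of one.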