145 research outputs found

    Knowledge-based energy functions for computational studies of proteins

    This chapter discusses the theoretical framework and methods for developing knowledge-based potential functions essential for protein structure prediction, protein-protein interactions, and protein sequence design. We discuss in some detail the Miyazawa-Jernigan contact statistical potential, distance-dependent statistical potentials, and geometric statistical potentials. We also describe a geometric model for developing both linear and non-linear potential functions by optimization. Applications of knowledge-based potential functions in protein-decoy discrimination, in protein-protein interactions, and in protein design are then described. Several open issues of knowledge-based potential functions are finally discussed. Comment: 57 pages, 6 figures. To be published in a book by Springer.
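    The core idea behind contact statistical potentials such as Miyazawa-Jernigan is Boltzmann inversion of observed contact counts against a reference state. Below is a minimal sketch of that idea, with made-up counts, a coarse three-letter residue alphabet, and a simple quasi-chemical reference state; none of these numbers or groupings are taken from the chapter.

    ```python
    import numpy as np

    # Hypothetical symmetric contact-count matrix between residue classes,
    # as might be pooled from a structure database (3 classes for brevity).
    residue_types = ["HYD", "POL", "CHG"]          # hydrophobic, polar, charged
    n_obs = np.array([[120., 40., 20.],
                      [ 40., 35., 30.],
                      [ 20., 30., 45.]])

    # Quasi-chemical reference state: expected counts if contacts formed at
    # random, proportional to the product of marginal contact fractions.
    total = n_obs.sum()
    p = n_obs.sum(axis=1) / total                  # contact fraction per class
    n_exp = total * np.outer(p, p)

    # Knowledge-based contact energies via Boltzmann inversion (units of kT).
    energies = -np.log(n_obs / n_exp)

    for a in range(3):
        for b in range(a, 3):
            print(f"E({residue_types[a]},{residue_types[b]}) = {energies[a, b]:+.2f} kT")
    ```

    Negative energies mark contact types seen more often than the random reference (here, hydrophobic-hydrophobic), which the potential then rewards in decoy discrimination or sequence design.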

    Renormalization group flows and continual Lie algebras

    We study the renormalization group flows of two-dimensional metrics in sigma models and demonstrate that they provide a continual analogue of the Toda field equations based on the infinite dimensional algebra G(d/dt;1). The resulting Toda field equation is a non-linear generalization of the heat equation, which is integrable in target space and shares the same dissipative properties in time. We provide the general solution of the renormalization group flows in terms of free fields, via Bäcklund transformations, and present some simple examples that illustrate the validity of their formal power series expansion in terms of algebraic data. We study in detail the sausage model that arises as a geometric deformation of the O(3) sigma model, and give a new interpretation to its ultra-violet limit by gluing together two copies of Witten's two-dimensional black hole in the asymptotic region. We also provide some new solutions that describe the renormalization group flow of negatively curved spaces in different patches, which look like a cane in the infra-red region. Finally, we revisit the transition of a flat cone C/Z_n to the plane, as another special solution, and note that tachyon condensation in closed string theory exhibits a hidden relation to the infinite dimensional algebra G(d/dt;1) in the regime of gravity. Its exponential growth holds the key for the construction of conserved currents and their systematic interpretation in string theory, but they still remain unknown. Comment: LaTeX, 73pp including 14 eps figures.
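    The central identification can be stated compactly. As a sketch, assuming the standard conformal-gauge parametrization and one-loop normalization (conventions differ by overall factors), the flow takes the form:

    ```latex
    % One-loop RG flow of a two-dimensional target-space metric,
    %   \partial_t g_{\mu\nu} = -R_{\mu\nu},  with t the RG time (log of scale).
    % In conformal gauge, ds^2 = 2 e^{\Phi(z,\bar z;t)} \, dz \, d\bar z,
    % the flow reduces to a non-linear generalization of the heat equation,
    \[
      \frac{\partial}{\partial t}\, e^{\Phi(z,\bar z;t)}
      \;=\; \partial_z \partial_{\bar z}\, \Phi(z,\bar z;t),
    \]
    % which is the continual Toda field equation associated with G(d/dt;1).
    ```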

    Dust in Supernovae and Supernova Remnants I : Formation Scenarios

    Supernovae are considered prime sources of dust in space. Observations of local supernovae over the past couple of decades have detected the presence of dust in supernova ejecta. The reddening of high redshift quasars also indicates the presence of large masses of dust in early galaxies. Given the top-heavy IMF in the early galaxies, supernovae are assumed to be the major contributors to these large amounts of dust. However, the composition and morphology of the dust grains formed in supernova ejecta are yet to be understood with clarity. Moreover, the dust masses inferred from observations in the mid-infrared and submillimeter wavelength regimes differ by two orders of magnitude or more. Therefore, the mechanism responsible for the synthesis of molecules and dust in such environments plays a crucial role in studying the evolution of cosmic dust in galaxies. This review summarises our current knowledge of dust formation in supernova ejecta and tries to quantify the role of supernovae as dust producers in a galaxy. Peer reviewed

    Performance of novel VUV-sensitive Silicon Photo-Multipliers for nEXO

    Liquid xenon time projection chambers are promising detectors to search for neutrinoless double beta decay (0νββ), due to their response uniformity, monolithic sensitive volume, scalability to large target masses, and suitability for extremely low background operations. The nEXO collaboration has designed a tonne-scale time projection chamber that aims to search for 0νββ of ¹³⁶Xe with a projected half-life sensitivity of 1.35×10²⁸ yr. To reach this sensitivity, the design goal for nEXO is ≤1% energy resolution at the decay Q-value (2458.07±0.31 keV). Reaching this resolution requires the efficient collection of both the ionization and scintillation produced in the detector. The nEXO design employs Silicon Photo-Multipliers (SiPMs) to detect the vacuum ultra-violet, 175 nm scintillation light of liquid xenon. This paper reports on the characterization of the newest vacuum ultra-violet sensitive Fondazione Bruno Kessler VUVHD3 SiPMs specifically designed for nEXO, as well as new measurements of test samples of the previously characterised Hamamatsu VUV4 Multi Pixel Photon Counters (MPPCs). Various SiPM and MPPC parameters, such as dark noise, gain, direct crosstalk, correlated avalanches, and photon detection efficiency, were measured as a function of the applied over voltage and wavelength at liquid xenon temperature (163 K). The results from this study are used to provide updated estimates of the achievable energy resolution at the decay Q-value for the nEXO design.
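    To see why the photon detection efficiency feeds directly into the resolution goal, a back-of-the-envelope Poisson estimate of the scintillation channel alone is instructive. All numbers below are illustrative assumptions, not nEXO design values, and the real resolution model also folds in the anti-correlated ionization channel, SiPM noise, and correlated avalanches:

    ```python
    import math

    # Illustrative photon-statistics estimate at the 136Xe Q-value.
    Q_VALUE_KEV = 2458.07     # 136Xe double-beta decay Q-value (keV)
    W_EV = 13.7               # ~eV per quantum (ionization + scintillation) in LXe
    SCINT_FRACTION = 0.4      # assumed fraction of quanta emitted as photons
    LIGHT_COLLECTION = 0.3    # assumed geometric light-collection efficiency
    PDE = 0.18                # assumed SiPM photon detection efficiency at 175 nm

    n_quanta = Q_VALUE_KEV * 1e3 / W_EV
    n_detected = n_quanta * SCINT_FRACTION * LIGHT_COLLECTION * PDE

    # Poisson-limited resolution contribution of the scintillation channel.
    sigma_over_e = 1.0 / math.sqrt(n_detected)
    print(f"detected photons ~ {n_detected:.0f}")
    print(f"Poisson term sigma/E ~ {100 * sigma_over_e:.2f}%")
    ```

    With these made-up efficiencies the scintillation channel alone sits above the 1% goal, which is why efficient collection of both channels, and hence the SiPM parameters characterized here, matters.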

    Modern temporal network theory: A colloquium

    The power of any kind of network approach lies in the ability to simplify a complex system so that one can better understand its function as a whole. Sometimes it is beneficial, however, to include more information than in a simple graph of only nodes and links. Adding information about the times of interactions can make predictions and mechanistic understanding more accurate. The drawback, however, is that not so many methods are available, partly because temporal networks are a relatively young field, partly because such methods are more difficult to develop than their counterparts for static networks. In this colloquium, we review the methods to analyze and model temporal networks and the processes taking place on them, focusing mainly on the last three years. This includes the spreading of infectious diseases, opinions, and rumors in social networks; information packets in computer networks; various types of signaling in biology; and more. We also discuss future directions. Comment: Final accepted version
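    To make the temporal ingredient concrete, here is a minimal sketch of event-driven SI (susceptible-infected) spreading on a time-stamped contact list, one of the standard objects of temporal network studies. The contact data, transmission probability, and seed node are made up for illustration:

    ```python
    import random

    contacts = [  # (time, node_u, node_v), sorted by time
        (1, "a", "b"), (2, "b", "c"), (3, "a", "c"),
        (4, "c", "d"), (5, "d", "e"), (6, "b", "e"),
    ]

    def si_spread(contacts, seed, beta=0.5, rng=random.Random(42)):
        """Sweep contacts in time order; each contact transmits with probability beta."""
        infected = {seed}
        for t, u, v in contacts:
            # Transmission is possible only when exactly one endpoint is infected.
            if (u in infected) != (v in infected) and rng.random() < beta:
                infected |= {u, v}
        return infected

    print(si_spread(contacts, seed="a"))
    ```

    The time ordering is what a static graph misses: a node is only reachable along a time-respecting path, so reshuffling the timestamps of the very same contacts can change the outbreak entirely.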

    On the origin and evolution of the material in 67P/Churyumov-Gerasimenko

    Primitive objects like comets hold important information on the material that formed our solar system. Several comets have been visited by spacecraft and many more have been observed through Earth- and space-based telescopes. Still our understanding remains limited. Molecular abundances in comets have been shown to be similar to interstellar ices and thus indicate that common processes and conditions were involved in their formation. The samples returned by the Stardust mission to comet Wild 2 showed that the bulk refractory material was processed by high temperatures in the vicinity of the early sun. The recent Rosetta mission acquired a wealth of new data on the composition of comet 67P/Churyumov-Gerasimenko (hereafter 67P/C-G) and complemented earlier observations of other comets. The isotopic, elemental, and molecular abundances of the volatile, semi-volatile, and refractory phases brought many new insights into the origin and processing of the incorporated material. The emerging picture after Rosetta is that at least part of the volatile material was formed before the solar system and that cometary nuclei agglomerated over a wide range of heliocentric distances, different from where they are found today. Deviations from bulk solar system abundances indicate that the material was not fully homogenized at the location of comet formation, despite the radial mixing implied by the Stardust results. Post-formation evolution of the material might play an important role, which further complicates the picture. This paper discusses these major findings of the Rosetta mission with respect to the origin of the material and puts them in the context of what we know from other comets and solar system objects.

    Volume I. Introduction to DUNE

    The preponderance of matter over antimatter in the early universe, the dynamics of the supernovae that produced the heavy elements necessary for life, and whether protons eventually decay—these mysteries at the forefront of particle physics and astrophysics are key to understanding the early evolution of our universe, its current state, and its eventual fate. The Deep Underground Neutrino Experiment (DUNE) is an international world-class experiment dedicated to addressing these questions as it searches for leptonic charge-parity symmetry violation, stands ready to capture supernova neutrino bursts, and seeks to observe nucleon decay as a signature of a grand unified theory underlying the standard model. The DUNE far detector technical design report (TDR) describes the DUNE physics program and the technical designs of the single- and dual-phase DUNE liquid argon TPC far detector modules. This TDR is intended to justify the technical choices for the far detector that flow down from the high-level physics goals through requirements at all levels of the Project. Volume I contains an executive summary that introduces the DUNE science program, the far detector and the strategy for its modular designs, and the organization and management of the Project. The remainder of Volume I provides more detail on the science program that drives the choice of detector technologies and on the technologies themselves. It also introduces the designs for the DUNE near detector and the DUNE computing model, for which DUNE is planning design reports. Volume II of this TDR describes DUNE's physics program in detail. Volume III describes the technical coordination required for the far detector design, construction, installation, and integration, and its organizational structure. Volume IV describes the single-phase far detector technology. A planned Volume V will describe the dual-phase technology.

    Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)

    In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways, so not all Atg proteins can be used as specific markers for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.
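    The flux-versus-steady-state distinction can be made concrete with the widely used LC3-II turnover assay: compare LC3-II levels with and without a lysosomal inhibitor such as bafilomycin A1, and read the flux from the difference rather than from the steady-state level alone. A minimal sketch with made-up densitometry values (normalized to a loading control):

    ```python
    # Illustrative LC3-II turnover calculation; all values are invented.
    conditions = {
        #                (LC3-II, untreated)  (LC3-II, + lysosomal inhibitor)
        "control":               (1.0,                2.0),
        "treatment_A":           (3.0,                3.1),  # accumulation, blocked flux
        "treatment_B":           (1.5,                4.5),  # genuine flux increase
    }

    for name, (basal, inhibited) in conditions.items():
        flux = inhibited - basal   # LC3-II actually turned over by lysosomes
        print(f"{name:12s} steady-state={basal:.1f}  flux~{flux:.1f}")
    ```

    Here treatment_A shows more autophagosomes but almost no flux (a block downstream of biogenesis), while treatment_B shows genuine induction; this is exactly the distinction the guidelines emphasize.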

    Characteristics of signals originating near the lithium-diffused N+ contact of high purity germanium p-type point contact detectors

    A study of signals originating near the lithium-diffused n+ contact of p-type point contact (PPC) high purity germanium (HPGe) detectors is presented. The transition region between the active germanium and the fully dead layer of the n+ contact is examined. Energy depositions in this transition region are shown to result in partial charge collection. This provides a mechanism for events with a well-defined energy to contribute to the continuum of the energy spectrum at lower energies. A novel technique to quantify the contribution from this source of background is introduced. Experiments that operate germanium detectors with a very low energy threshold may benefit from the methods presented herein.
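    The mechanism can be illustrated with a toy model: if the charge-collection efficiency rises smoothly from near zero across the transition region, a monoenergetic line is mapped into a continuum of degraded energies. Everything below (the sigmoid profile, layer thicknesses, the 60Co line) is an illustrative assumption, not a fitted detector response:

    ```python
    import math
    import random

    # Toy model of partial charge collection near the n+ contact. Events at
    # depth x into the crystal collect only a fraction eff(x) of their charge,
    # so a monoenergetic line feeds the low-energy continuum.
    FULL_ENERGY_KEV = 1332.5      # e.g. a 60Co gamma line
    DEAD_DEPTH_MM = 0.5           # assumed fully dead layer thickness
    TRANSITION_MM = 0.8           # assumed transition-region thickness

    def efficiency(depth_mm):
        """Sigmoid charge-collection efficiency vs depth into the crystal."""
        z = (depth_mm - DEAD_DEPTH_MM) / (0.25 * TRANSITION_MM)
        return 1.0 / (1.0 + math.exp(-z))

    rng = random.Random(0)
    depths = [rng.uniform(0.0, DEAD_DEPTH_MM + TRANSITION_MM) for _ in range(8)]
    energies = sorted(FULL_ENERGY_KEV * efficiency(d) for d in depths)
    print([f"{e:.0f} keV" for e in energies])   # partially collected energies
    ```

    Events deep in the transition region reconstruct near the full-energy line, while shallow ones populate the continuum, which is why quantifying this population matters for low-threshold searches.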