
    Improving the mesoscopic modeling of DNA denaturation dynamics

    Although previously developed mesoscopic DNA models have successfully reproduced thermodynamic denaturation data, recent studies show that they overestimate the rate of denaturation by orders of magnitude. Using adapted Peyrard–Bishop–Dauxois (PBD) models, we have calculated the denaturation rates of several DNA hairpins and compared them with experimental data. We show that adding a barrier to the onsite potential of the PBD model gives a more accurate description of the unzipping dynamics of short DNA sequences. The new models provide refined theoretical insight into the dynamical mechanisms of unzipping, which can have implications for the understanding of transcription and replication processes. Still, this class of adapted PBD models seems to have a fundamental limitation: it is not possible to reach agreement with the available experimental results on the dynamics of long DNA sequences while maintaining the good agreement regarding thermodynamics. The reason is that the denaturation rate of long DNA chains is not dramatically lowered by the additional barrier, since the base pairs that open are more likely to remain open, facilitating the opening of the full DNA molecule. Some care has to be taken, since experimental techniques suited to studying the denaturation rates of long sequences seem not to agree with other experimental data on short DNA sequences. Further research, both theoretical and experimental, is therefore needed to resolve these inconsistencies; this will be a starting point for new minimalistic models that can describe both thermodynamics and dynamics at a predictive level. Funding: Spanish Ministry of Economy, Industry and Competitiveness (BES-2013-065453, FIS2012-38827), the University of Burgos, and the Anders Jahre fund (Project 40105000).
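
    A minimal sketch of the kind of onsite-potential modification discussed above: the standard PBD onsite term is a Morse potential, and one simple way to introduce a barrier is to add a Gaussian hump to it. The functional form and all parameter values below are illustrative assumptions, not the parameterization used in the adapted models of the paper.

```python
import numpy as np

def morse(y, D=0.05, a=4.5):
    """Standard PBD onsite (Morse) potential; D (eV) and a (1/Angstrom) are illustrative."""
    return D * (np.exp(-a * y) - 1.0) ** 2

def morse_with_barrier(y, D=0.05, a=4.5, h=0.01, y0=0.5, w=0.1):
    """Morse potential plus a Gaussian hump of height h centred at stretching y0.

    This is only one conceivable way to place a barrier in the onsite
    potential; the adapted PBD models of the paper may use a different form.
    """
    return morse(y, D, a) + h * np.exp(-((y - y0) ** 2) / (2.0 * w ** 2))

if __name__ == "__main__":
    y = np.linspace(-0.2, 2.0, 200)   # base-pair stretching coordinate
    print(morse_with_barrier(y).max())
```

    A hump of this kind mainly slows the initial escape of a closed base pair; once a pair has crossed it, reclosing is also hindered, which is the effect invoked above to explain why long chains are not slowed down as strongly as short hairpins.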

    Deciphering the cosmic star formation history and the Nature of Type Ia Supernovae by Future Supernova Surveys

    We investigate the prospects of future supernova searches to obtain meaningful constraints on the cosmic star formation history (CSFH) and the delay time of Type Ia supernovae from star formation (tau_{Ia}), based only on supernova data. Here we parameterize the CSFH by two parameters, alpha and beta, which are the evolutionary indices [proportional to (1+z)^alpha and (1+z)^beta] below and above z ~ 1, respectively, and quantitatively examine how well the three parameters (alpha, beta, and tau_{Ia}) can be constrained in ongoing and future supernova surveys. We find that type classification of the detected supernovae down to a magnitude of I_{AB} ~ 27 is essential for obtaining a useful constraint on beta. The parameter tau_{Ia} can also be constrained to within ~1-2 Gyr without knowing alpha, which is somewhat degenerate with tau_{Ia}. This could potentially be achieved by ground-based surveys, but it depends on the still highly uncertain type classification from imaging data; more reliable classification will be achieved by the SNAP mission. Supernova counts at a magnitude level of I_{AB} or K_{AB} ~ 30 will allow us to break the degeneracy between alpha and tau_{Ia} and to constrain all three parameters independently, even without knowing supernova types. This can be achieved by the SNAP and JWST missions, which offer the complementary strengths of larger statistics and reach to higher redshifts, respectively. The dependence of observable quantities on survey time intervals is also quantitatively calculated and discussed. Comment: 10 pages, 6 figures, accepted to Ap
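
    To make the parameterization concrete, the hedged sketch below turns a broken power-law CSFH with indices alpha and beta around z ~ 1 into a Type Ia supernova rate by shifting it by a single delay time tau_Ia. The flat Lambda-CDM cosmology, the matching at z = 1, the use of a single delay time rather than a distribution, and all numerical values are illustrative assumptions, not the model actually fitted in the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

H0 = 70.0            # km/s/Mpc, assumed
OM, OL = 0.3, 0.7    # flat LambdaCDM, assumed
HUBBLE_TIME_GYR = 977.8 / H0   # 1/H0 in Gyr for H0 in km/s/Mpc

def age(z):
    """Cosmic age at redshift z in Gyr for the assumed flat LambdaCDM cosmology."""
    integrand = lambda zp: 1.0 / ((1.0 + zp) * np.sqrt(OM * (1.0 + zp) ** 3 + OL))
    val, _ = quad(integrand, z, np.inf)
    return HUBBLE_TIME_GYR * val

def csfh(z, alpha=3.0, beta=-1.0):
    """Broken power-law CSFH (arbitrary normalisation):
    (1+z)^alpha below z = 1, (1+z)^beta above, matched at z = 1."""
    if z < 1.0:
        return (1.0 + z) ** alpha
    return 2.0 ** (alpha - beta) * (1.0 + z) ** beta

def snia_rate(z, tau_ia=2.0, alpha=3.0, beta=-1.0):
    """SN Ia rate at z, taken as the CSFH evaluated tau_ia Gyr earlier."""
    t_form = age(z) - tau_ia
    if t_form <= age(20.0):            # star formation assumed to start by z = 20
        return 0.0
    z_form = brentq(lambda zp: age(zp) - t_form, z, 20.0)
    return csfh(z_form, alpha, beta)

if __name__ == "__main__":
    for z in (0.5, 1.0, 1.5):
        print(z, snia_rate(z))
```

    The degeneracy discussed above is visible in this toy form: increasing tau_Ia or flattening the low-redshift slope alpha both suppress the observed SN Ia rate at low z in a similar way, which is why deep counts at high redshift are needed to separate them.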

    Gravitationally lensed quasars and supernovae in future wide-field optical imaging surveys

    Cadenced optical imaging surveys in the next decade will be capable of detecting time-varying galaxy-scale strong gravitational lenses in large numbers, increasing the size of the statistically well-defined samples of multiply-imaged quasars by two orders of magnitude and discovering the first strongly lensed supernovae. We carry out a detailed calculation of the likely yields of several planned surveys, using realistic distributions for the lens and source properties and taking magnification bias and image-configuration detectability into account. We find that upcoming wide-field synoptic surveys should detect several thousand lensed quasars. In particular, the LSST should find 8000 lensed quasars, 3000 of which will have well-measured time delays, and also ~130 lensed supernovae, compared with the ~15 lensed supernovae predicted to be found by JDEM. We predict the quad fraction to be ~15% for the lensed quasars and ~30% for the lensed supernovae. Generating a mock catalogue of around 1500 well-observed double-image lenses, we compute the available precision on the Hubble constant and the dark energy equation-of-state parameters for the time-delay distance experiment (assuming priors from Planck): the predicted marginalised 68% confidence intervals are \sigma(w_0)=0.15, \sigma(w_a)=0.41, and \sigma(h)=0.017. While this is encouraging in the sense that these uncertainties are only 50% larger than those predicted for a space-based Type Ia supernova sample, we show how the dark energy figure of merit degrades with decreasing knowledge of the lens mass distribution. (Abridged) Comment: 17 pages, 10 figures, 3 tables, accepted for publication in MNRAS; mock LSST lens catalogue may be available at http://kipac-prod.stanford.edu/collab/research/lensing/mocklen
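
    As a rough illustration of why lens time delays constrain h and the dark energy parameters, the sketch below computes the time-delay distance D_dt = (1 + z_l) D_l D_s / D_ls in a flat Lambda-CDM cosmology; the measured delay scales with D_dt, and hence as 1/h for a fixed lens potential. The cosmological parameters and redshifts here are illustrative assumptions, not those of the paper's mock catalogue or the Planck priors it adopts.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458   # speed of light, km/s

def comoving_distance(z, h=0.7, om=0.3):
    """Comoving distance in Mpc for flat LambdaCDM (illustrative parameters)."""
    H0 = 100.0 * h
    integrand = lambda zp: 1.0 / np.sqrt(om * (1.0 + zp) ** 3 + (1.0 - om))
    val, _ = quad(integrand, 0.0, z)
    return C_KM_S / H0 * val

def time_delay_distance(z_lens, z_src, h=0.7, om=0.3):
    """D_dt = (1 + z_l) * D_l * D_s / D_ls using angular-diameter distances (Mpc).

    In a flat universe D_ls = (Dc_s - Dc_l) / (1 + z_s) in terms of comoving
    distances. Since the measured delay scales with D_dt, it scales as 1/h at
    fixed lens potential, which is how lens time delays constrain h.
    """
    dc_l = comoving_distance(z_lens, h, om)
    dc_s = comoving_distance(z_src, h, om)
    d_l = dc_l / (1.0 + z_lens)
    d_s = dc_s / (1.0 + z_src)
    d_ls = (dc_s - dc_l) / (1.0 + z_src)
    return (1.0 + z_lens) * d_l * d_s / d_ls

if __name__ == "__main__":
    print(time_delay_distance(0.5, 2.0))          # Mpc
    print(time_delay_distance(0.5, 2.0, h=0.72))  # smaller for larger h
```

    The residual sensitivity of D_dt to the lens mass distribution (through the lens potential) is what drives the degradation of the dark energy figure of merit noted at the end of the abstract.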

    The Hubble Space Telescope Cluster Supernova Survey: V. Improving the Dark Energy Constraints Above z>1 and Building an Early-Type-Hosted Supernova Sample

    We present ACS, NICMOS, and Keck AO-assisted photometry of 20 Type Ia supernovae (SNe Ia) from the HST Cluster Supernova Survey. The SNe Ia were discovered over the redshift interval 0.623 < z < 1.415. Fourteen of these SNe Ia pass our strict selection cuts and are used in combination with the world's sample of SNe Ia to derive the best current constraints on dark energy. Ten of our new SNe Ia are beyond redshift z = 1, thereby nearly doubling the statistical weight of HST-discovered SNe Ia beyond this redshift. Our detailed analysis corrects for the recently identified correlation between SN Ia luminosity and host galaxy mass and corrects the NICMOS zeropoint at the count rates appropriate for very distant SNe Ia. Adding these supernovae improves the best combined constraint on the dark energy density \rho_{DE}(z) at redshifts 1.0 < z < 1.6 by 18% (including systematic errors). For a LambdaCDM universe, we find \Omega_\Lambda = 0.724 +0.015/-0.016 (68% CL including systematic errors). For a flat wCDM model, we measure a constant dark energy equation-of-state parameter w = -0.985 +0.071/-0.077 (68% CL). Curvature is constrained to ~0.7% in the owCDM model and to ~2% in a model in which dark energy is allowed to vary with parameters w_0 and w_a. Tightening the constraints on the time evolution of dark energy further will require several improvements, including high-quality multi-passband photometry of a sample of several dozen z > 1 SNe Ia. We describe how such a sample could be efficiently obtained by targeting cluster fields with WFC3 on HST. Comment: 27 pages, 11 figures. Submitted to ApJ. This first posting includes updates in response to comments from the referee. See http://www.supernova.lbl.gov for other papers in the series pertaining to the HST Cluster SN Survey. The updated supernova Union2.1 compilation of 580 SNe is available at http://supernova.lbl.gov/Unio
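
    For context, SN Ia constraints of this kind come from comparing observed distance moduli with the prediction of a given dark energy model. The hedged sketch below evaluates the distance modulus for a flat w0-wa (CPL) model; the parameter values are illustrative and the code is not the survey's actual fitting machinery.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458   # speed of light, km/s

def e_of_z(z, om=0.28, w0=-1.0, wa=0.0):
    """Dimensionless Hubble rate E(z) for a flat w0-wa (CPL) dark energy model."""
    de = (1.0 - om) * (1.0 + z) ** (3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * z / (1.0 + z))
    return np.sqrt(om * (1.0 + z) ** 3 + de)

def distance_modulus(z, h=0.7, om=0.28, w0=-1.0, wa=0.0):
    """Distance modulus mu = 5 log10(d_L / 10 pc) for a flat universe."""
    H0 = 100.0 * h
    val, _ = quad(lambda zp: 1.0 / e_of_z(zp, om, w0, wa), 0.0, z)
    d_l_mpc = (1.0 + z) * C_KM_S / H0 * val
    return 5.0 * np.log10(d_l_mpc * 1.0e6 / 10.0)

if __name__ == "__main__":
    # At z ~ 1.2, changing w from -1.0 to -0.9 shifts mu by only a few hundredths
    # of a magnitude, which is why percent-level photometry and a sizeable z > 1
    # sample are needed to constrain the equation of state and its evolution.
    print(distance_modulus(1.2, w0=-1.0) - distance_modulus(1.2, w0=-0.9))
```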

    Prognostic factors in soft tissue sarcoma: Tissue microarray for immunostaining, the importance of whole-tumor sections and time-dependence


    The dynamics of DNA denaturation

    Gaining knowledge and understanding of the structure and function of DNA, our genetic material, is crucial for dealing with diseases related to DNA. In 2004, the mapping of the complete human genome was accomplished, leading to enormous progress in treating DNA-related diseases. Attention was for a long time directed at the structure of DNA, specifically its sequence. However, knowledge of the structure of DNA is not sufficient to understand biological processes; for this, we also need to understand how this structure affects the equilibrium properties and the dynamics of the DNA molecule. In fundamental genetic processes, such as transcription and replication, DNA must undergo dynamical changes. Both processes are highly complex, and due to the lack of detailed insight, a satisfactory descriptive model is difficult to design. However, since these processes require a local opening of the DNA molecule, they resemble DNA denaturation, or DNA melting, which is a considerably simpler process to study theoretically and experimentally. Besides being interesting in itself, studying DNA denaturation is also considered a well-grounded step towards full comprehension of the mechanisms involved in transcription and replication.

    In this work, the denaturation of DNA was explored by computer simulations applying the Peyrard-Bishop-Dauxois (PBD) model. DNA chains consisting of 33% AT base pairs and 66% GC base pairs were investigated, as well as a key secondary structure of DNA and RNA, the hairpin, which is involved in many important processes of DNA and RNA. Although DNA dynamics has gained increased interest during the last decades, there is still a need for more insight and knowledge within this field. This work contains the first quantitative study in which dynamical data, such as denaturation rate constants, from the well-known mesoscopic PBD model have been compared with experiments. Our work will be valuable for improving these mesoscopic models.

    Mesoscopic modeling of DNA denaturation rates: Sequence dependence and experimental comparison

    Using rare event simulation techniques, we calculated DNA denaturation rate constants for a range of sequences and temperatures for the Peyrard-Bishop-Dauxois (PBD) model with two different parameter sets. We studied a larger variety of sequences than previous studies, which considered only DNA homopolymers and DNA sequences containing equal amounts of weak AT and strong GC base pairs. Our results show that, contrary to previous findings, an even distribution of the strong GC base pairs does not always result in the fastest possible denaturation. In addition, we applied an adaptation of the PBD model to study hairpin denaturation, for which experimental data are available. This is the first quantitative study in which dynamical results from the mesoscopic PBD model have been compared with experiments. Our results show that presently parameterized models, although giving good results regarding thermodynamic properties, overestimate denaturation rates by orders of magnitude. We believe that our dynamical approach is therefore an important tool for verifying DNA models and for developing next-generation models with higher predictive power than present ones.
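
    The rate constants mentioned above are typically assembled from rare-event path simulations as a flux through a first interface multiplied by conditional crossing probabilities. The following is a generic, hedged sketch of that bookkeeping (transition-interface-sampling style), not the specific protocol or numbers of this study.

```python
import numpy as np

def rare_event_rate(flux_0, crossing_probs):
    """Rate constant assembled in a transition-interface-sampling-like scheme:
    k = (flux through the first interface) * product of conditional crossing
    probabilities between successive interfaces.

    flux_0 and crossing_probs would come from the path simulations themselves;
    the values used below are purely illustrative.
    """
    return flux_0 * np.prod(crossing_probs)

if __name__ == "__main__":
    # Illustrative numbers only: flux in 1/ns, three interfaces along the
    # base-pair opening coordinate.
    print(rare_event_rate(2.0, [1e-2, 5e-2, 1e-1]))   # ~1e-4 per ns
```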

    Fast Decorrelating Monte Carlo Moves for Efficient Path Sampling

    Many relevant processes in chemistry, physics, and biology are rare events from a computational perspective, as they take place beyond the accessible time scale of molecular dynamics (MD). Examples are chemical reactions, nucleation, and conformational changes of biomolecules. Path sampling is an approach to break this time-scale limit via a Monte Carlo (MC) sampling of MD trajectories. Still, many trajectories are needed to predict rate constants accurately. To improve the speed of convergence, we propose two new MC moves, stone skipping and web throwing. In these moves, trajectories are constructed via a sequence of subpaths obeying superdetailed balance. By a reweighting procedure, almost all paths can be accepted. Whereas the generation of a single trajectory becomes more expensive, the reduced correlation results in a significant speedup. For a study on DNA denaturation, the increase was found to be a factor of 12.
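
    For orientation, the sketch below shows the conventional kind of path-space MC move (a one-way shooting move on a toy double-well system) that moves such as stone skipping and web throwing are designed to decorrelate faster than. It is an illustrative toy under assumed dynamics and states, not an implementation of the new moves or of the PBD system.

```python
import numpy as np

rng = np.random.default_rng(0)

def force(x):
    """Force for a toy double-well potential V(x) = (x^2 - 1)^2; not the PBD model."""
    return -4.0 * x * (x * x - 1.0)

def propagate(x0, n_steps, dt=1e-3, temp=0.5):
    """Overdamped Langevin trajectory from x0 (illustrative integrator and units)."""
    xs = np.empty(n_steps + 1)
    xs[0] = x0
    noise = np.sqrt(2.0 * temp * dt)
    for i in range(n_steps):
        xs[i + 1] = xs[i] + force(xs[i]) * dt + noise * rng.normal()
    return xs

def is_reactive(path, a=-0.8, b=0.8):
    """A 'reactive' path starts in state A (x < a) and ends in state B (x > b)."""
    return path[0] < a and path[-1] > b

def shooting_move(path, dt=1e-3, temp=0.5):
    """Standard one-way shooting: regrow the path forward from a random frame.

    This is the conventional path-sampling move whose path-to-path correlation
    the stone-skipping and web-throwing moves aim to reduce; it is not an
    implementation of those moves.
    """
    idx = rng.integers(1, len(path) - 1)
    new_tail = propagate(path[idx], len(path) - 1 - idx, dt, temp)
    trial = np.concatenate([path[: idx + 1], new_tail[1:]])
    return trial if is_reactive(trial) else path   # reject if not reactive

if __name__ == "__main__":
    # Start from a crude linear 'path' from A to B and apply a few moves.
    current = np.linspace(-1.0, 1.0, 501)
    for _ in range(10):
        current = shooting_move(current)
    print(is_reactive(current))
```

    Because a rejected or barely changed trial path contributes little new information, reducing the correlation between accepted paths, as the new moves do, directly reduces the number of trajectories needed for a converged rate estimate.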

    Teaching complex molecular simulation algorithms: Using self‐evaluation to tailor web‐based exercises at an individual level

    It is quite challenging to learn the complex mathematical algorithms used in molecular simulations, which stresses the importance of using the most advantageous teaching methods. Ideally, individuals should learn at their own pace and work on tasks that fit their level. Web-based exercises make it possible to tailor every small step of the learning process, but this requires continuous monitoring of the learner. Differentiation based on the scores after a first round of common tasks can be demotivating for all students, as they will experience the initial set of tasks as being either too easy or too hard. We designed two tests, a self-monitoring test and a rapid test (RT) in which the students explained equations relating to the current topic. The first test aimed to determine whether the students were able to evaluate their own level of knowledge, whereas the RT aimed to provide a fast way of determining the students' level. We compared both tests with traditional measures of knowledge and used a relatively new method, originally designed for the analysis of molecular simulation data, to interpret the results. Based on this analysis, we concluded that self-evaluation, rather than an RT, is a valuable tool for automatically steering individual students through a tree of web-based exercises to match their skill levels and interests.