    Affinity, stoichiometry and cooperativity of heterochromatin protein 1 (HP1) binding to nucleosomal arrays

    Heterochromatin protein 1 (HP1) participates in establishing and maintaining heterochromatin via its histone-modification-dependent chromatin interactions. In recent studies, HP1 binding to nucleosomal arrays was measured in vitro and interpreted in terms of nearest-neighbour cooperative binding. This mode of chromatin interaction could lead to the spreading of HP1 along the nucleosome chain. Here, we reanalysed the previous data by representing the nucleosome chain as a 1D binding lattice and showed how the experimental HP1 binding isotherms can be explained by a simpler model without cooperative interactions between neighbouring HP1 dimers. Based on these calculations and on spatial models of dinucleosomes and nucleosome chains, we propose that the binding stoichiometry depends on the nucleosome repeat length (NRL) rather than on protein interactions between HP1 dimers. According to our calculations, more open nucleosome arrays with long DNA linkers are characterized by a larger number of binding sites than chains with a short NRL. Furthermore, we demonstrate by Monte Carlo simulations that the NRL-dependent folding of the nucleosome chain can induce allosteric changes of HP1 binding sites. Thus, HP1-chromatin interactions can be modulated by changes in binding stoichiometry and in the type of binding to condensed (methylated) and non-condensed (unmethylated) nucleosome arrays, in the absence of direct interactions between HP1 dimers.
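    The 1D-lattice reanalysis can be illustrated with the classic McGhee-von Hippel isotherm for noncooperative binding of a ligand covering n lattice sites. The Python sketch below solves the implicit isotherm for the binding density; the association constant, footprints, and concentrations are hypothetical placeholders, not the study's fitted values.

        # Noncooperative McGhee-von Hippel binding of a ligand (e.g. an HP1
        # dimer) to a 1D nucleosome lattice.  All parameters are illustrative.
        import numpy as np
        from scipy.optimize import brentq

        def binding_density(L_free, K, n):
            """Solve nu = K*L*(1 - n*nu)*((1 - n*nu)/(1 - (n-1)*nu))**(n-1)
            for the bound-ligand density nu per lattice unit (noncooperative
            case of McGhee & von Hippel, 1974); n is the ligand footprint."""
            f = lambda nu: (K * L_free * (1 - n * nu)
                            * ((1 - n * nu) / (1 - (n - 1) * nu)) ** (n - 1) - nu)
            return brentq(f, 0.0, 1.0 / n - 1e-12)

        # A longer NRL exposing more binding sites is mimicked here by a
        # smaller footprint n; the saturation density rises accordingly.
        for n in (2, 3):
            nu = [binding_density(L, K=1e6, n=n) for L in np.logspace(-8, -4, 5)]
            print(f"n={n}:", ["%.3f" % v for v in nu])

    With K fixed, the isotherms saturate at 1/n ligands per lattice unit, so a change in accessible footprint alone shifts the apparent stoichiometry without any dimer-dimer cooperativity term.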

    Parallel high-performance grid computing: Capabilities and opportunities of a novel demanding service and business class allowing highest resource efficiency

    Especially in the life-science and health-care sectors, huge IT requirements arise from the large and complex systems to be analysed and simulated. Grid infrastructures play a rapidly growing role here for research, diagnostics, and treatment, since they provide the necessary large-scale resources efficiently. Whereas grids were first used for huge number-crunching of trivially parallelizable problems, parallel high-performance computing is increasingly required. Here, we show for the prime example of molecular dynamics simulations how the presence of large grid clusters with very fast network interconnects within grid infrastructures now enables efficient parallel high-performance grid computing and thus combines the benefits of dedicated supercomputing centres and grid infrastructures. The demands of this service class are the highest, since the user group has very heterogeneous requirements: i) two to many thousands of CPUs, ii) different memory architectures, iii) huge storage capabilities, and iv) fast communication via network interconnects are all needed in different combinations and must be handled in a highly dedicated manner to reach the highest performance efficiency. Beyond this, advanced and dedicated i) interaction with users, ii) job management, iii) accounting, and iv) billing not only combine classic with parallel high-performance grid usage, but, more importantly, also increase the efficiency of IT resource providers. Consequently, the mere "yes-we-can" becomes a huge opportunity for, e.g., the life-science and health-care sectors as well as for grid infrastructures by reaching a higher level of resource efficiency.

    Systems Biological Determination of the Epi-Genomic Structure Function Relation:

    Despite our knowledge of the sequence of the human genome, the relation of its three-dimensional dynamic architecture to its function, the storage and expression of genetic information, remains one of the central unresolved issues of our age. Meanwhile, it has become very clear that this link is crucial for the entire holistic function of the genome on all genomic coding levels, from the DNA sequence to entire chromosomes. To fulfil the dreams of better diagnostics and treatment in the 21st century (e.g. gene therapy, which inserts a gene into a new global context), we propose here a unique interdisciplinary project combining experiment with theory to analyse the (epi-)genomic structure-function relationships within the dynamic organization of the β-Globin locus, the Immunoglobulin loci, and the Tumor Necrosis Factor Alpha-regulated SAMD4 region in mouse and human active and inactive cell states, and their global genomic context. The project consists of five work packages (WP1-WP5) and corresponding tasks connected in a systems-biological approach with iterative use of data, modelling, simulation and experiments via a unique data-sharing and visualization platform: In WP1 (Längst, Rippe, Wedemann, Knoch/Grosveld; T1-T5), to investigate nucleosomal association changes in relation to the DNA sequence and the activity of ATP-driven chromatin remodelling complexes, nucleosome positions will be determined by high-throughput sequencing. The resulting nucleosomal localization probability maps will be evaluated by a novel combination of analysis tools and innovative generic data ontologies. The relation to epigenetic modifications, to the activity of ATP-driven remodelling complexes, and to the compaction degree of nucleosomes will be analysed to understand chromatin morphogenesis and fiber formation. In parallel, in WP2 (Grosveld/Knoch, Cook, Rippe, Längst; T1-T3), we determine, by high-throughput monitoring of intra-/inter-chromosomal contacts and architecture, absolute DNA-DNA interaction probability maps for the individual loci and their global context, using a novel chromosome conformation capture approach based on deep sequencing. From these, the 3D conformation of the chromatin fiber and its higher-order folding into loops and loop clusters can be derived using algorithms recently developed by us. WP3 (Cook, Grosveld/Knoch, Längst; T1-T5) focuses on the determination of transcription rates and structure by qRT-PCR, DNA and RNA fluorescence in situ hybridization using intronic probes, and high-resolution laser-scanning and single-molecule imaging with advanced image analysis tools. Transcription-dependent changes of active and inactive loci, as well as rapid synchronous transcription alterations against an unchanged background, are one main interest here. This will result in a detailed cartography of the structure-transcription-function dependency and its importance. To rationalize the experimental results theoretically, in WP4 (Wedemann, Knoch/Grosveld, Rippe; T1-T3), simulations of nucleosomal structure, chromatin fiber conformation and chromosomal architecture are carried out using parallel and grid supercomputers with ~10,000 CPUs. The impact of different nucleosome positions and epigenetic modifications on the nucleosomal structure and the chromatin fiber conformation will be assessed by novel Monte Carlo approaches. To understand the higher-order architecture, Brownian Dynamics simulations of entire cell nuclei with molecular re…
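    As a toy illustration of the "absolute DNA-DNA interaction probability maps" of WP2, the following sketch converts a matrix of raw pairwise ligation counts into a normalised probability map; the simple coverage balancing is a generic stand-in, not the project's own algorithm.

        # Toy conversion of raw 3C-type ligation counts between genomic bins
        # into a DNA-DNA interaction probability map.
        import numpy as np

        def interaction_probabilities(counts):
            """counts: square matrix of ligation read counts between bins."""
            c = (counts + counts.T) / 2.0          # enforce symmetry
            cov = c.sum(axis=0)
            cov[cov == 0] = 1.0                    # guard against empty bins
            balanced = c / np.outer(cov, cov)      # correct per-bin visibility
            return balanced / balanced.sum()       # normalise to probabilities

        raw = np.random.default_rng(0).poisson(5.0, size=(6, 6)).astype(float)
        print(interaction_probabilities(raw).round(4))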

    Are Metastases from Metastases Clinical Relevant? Computer Modelling of Cancer Spread in a Case of Hepatocellular Carcinoma

    Background: Metastasis formation remains an enigmatic process, and one of the main questions recently asked is whether metastases are able to generate further metastases. Different models have been proposed to answer this question; however, their clinical significance remains unclear. Therefore, a computer model was developed that permits quantitative comparison of the different models with clinical data and that additionally predicts the outcome of treatment interventions. Methods: The computer model is based on a discrete-event simulation approach. On the basis of a case of an untreated patient with hepatocellular carcinoma and its multiple metastases in the liver, it was evaluated whether metastases are able to metastasise and, in particular, whether late disseminated tumour cells are still capable of forming metastases. Additionally, the resection of the primary tumour was simulated. The simulation results were compared with clinical data. Results: The simulation results reveal that the number of metastases varies significantly between scenarios where metastases metastasise and scenarios where they do not. In contrast, the total tumour mass is nearly unaffected by the two different modes of metastasis formation. Furthermore, the results provide evidence that metastasis formation is an early event and that late disseminated tumour cells are still capable of forming metastases. The simulations also allow one to estimate how resection of the primary tumour delays the patient's death. Conclusion: The simulation results indicate that for this particular case of a hepatocellular carcinoma late metastases, i.e.
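    The two competing modes of metastasis formation can be mocked up in a few lines of event-driven code. The sketch below uses Gompertzian growth and a size-dependent seeding rate of the kind used in such hepatocellular carcinoma models; all parameter values are illustrative, and the thinning-based event sampling is a generic technique, not necessarily the paper's implementation.

        # Discrete-event sketch of metastatic seeding.  Growth law, rates, and
        # parameter values are illustrative placeholders; the flag
        # 'mets_metastasise' switches between the two competing scenarios.
        import heapq, math, random

        A, XMAX = 0.00286, 7.3e10        # Gompertz rate (1/day), plateau (cells)
        M, ALPHA = 5.3e-8, 0.663         # seeding coefficient and exponent
        T_END = 1200.0                   # simulated time span in days
        RATE_MAX = M * XMAX ** ALPHA     # global bound on the seeding rate

        def size(age):
            """Tumour size in cells at a given age (Gompertz, 1-cell start)."""
            return XMAX * math.exp(math.log(1.0 / XMAX) * math.exp(-A * age))

        def next_seed(t_now, birth, rng):
            """Next successful seeding time of the tumour born at `birth`,
            sampled by thinning the inhomogeneous Poisson process whose
            instantaneous rate is M * size(age)**ALPHA."""
            t = t_now
            while True:
                t += rng.expovariate(RATE_MAX)
                if t > T_END:
                    return None
                if rng.random() * RATE_MAX <= M * size(t - birth) ** ALPHA:
                    return t

        def simulate(mets_metastasise=True, seed=1):
            rng = random.Random(seed)
            births = [0.0]                       # birth times; index 0 = primary
            queue = []                           # (seeding time, parent birth)
            t0 = next_seed(0.0, 0.0, rng)
            if t0 is not None:
                heapq.heappush(queue, (t0, 0.0))
            while queue:
                t, parent = heapq.heappop(queue)
                births.append(t)                 # a new metastasis is founded
                if mets_metastasise:             # the new tumour may seed too
                    nxt = next_seed(t, t, rng)
                    if nxt is not None:
                        heapq.heappush(queue, (nxt, t))
                nxt = next_seed(t, parent, rng)  # the parent keeps seeding
                if nxt is not None:
                    heapq.heappush(queue, (nxt, parent))
            mass = sum(size(T_END - b) for b in births)
            return len(births) - 1, mass

        for flag in (True, False):
            n, cells = simulate(mets_metastasise=flag)
            print(f"secondaries seed={flag}: metastases={n}, total cells={cells:.2e}")

    In runs of this toy model, the metastasis count reacts to the flag much more strongly than the total mass, which is dominated by the primary and the earliest metastases, mirroring the study's qualitative finding.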

    Scrum as a Teaching Method in a Course of Software Architecture

    We used Scrum as a teaching method in a course on software architecture and evaluated the success of the course: group processes were monitored with questionnaires based on Theme-Centered Interaction, and the learning outcome was tracked by written tests and small project work. The success of the course was compared with that of previous years' courses with respect to formal evaluation and grades.

    Computer simulation of the 30-nanometer chromatin fiber.

    A new Monte Carlo model for the structure of chromatin is presented here. Based on our previous work on superhelical DNA and polynucleosomes, it reintegrates aspects of the "solenoid" and the "zig-zag" models. The DNA is modeled as a flexible elastic polymer chain consisting of segments connected by elastic bending, torsional, and stretching springs. The electrostatic interaction between the DNA segments is described by the Debye-Hückel approximation. Nucleosome core particles are represented by oblate ellipsoids; their interaction potential has been parameterized by comparison with data from liquid crystals of nucleosome solutions. DNA and chromatosomes are linked either at the surface of the chromatosome or through a rigid nucleosome stem. Equilibrium ensembles of 100-nucleosome chains at physiological ionic strength were generated by a Metropolis-Monte Carlo algorithm. For DNA linked at the nucleosome stem and a nucleosome repeat of 200 bp, the simulated fiber diameter of 32 nm and the mass density of 6.1 nucleosomes per 11 nm of fiber length are in excellent agreement with experimental values from the literature. The experimental value of the inclination of DNA and nucleosomes to the fiber axis could also be reproduced. Whereas the linker DNA connects chromatosomes on opposite sides of the fiber, the overall packing of the nucleosomes leads to a helical aspect of the structure. The persistence length of the simulated fibers is 265 nm. For more random fibers, where the tilt angles between two nucleosomes are chosen according to a Gaussian distribution along the fiber, the persistence length decreases to 30 nm with increasing width of the distribution, whereas the other observable parameters, such as the mass density, remain unchanged. Polynucleosomes with repeat lengths of 212 bp also form fibers with the expected experimental properties. Systems with larger repeat lengths form fibers, but the mass density is significantly lower than the measured value. The theoretical characteristics of a fiber with a repeat length of 192 bp, where DNA and nucleosomes are connected at the core particle, are in agreement with the experimental values. Systems without a stem and with a repeat length of 217 bp do not form fibers.
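    The basic machinery of such a model, elastic springs for bending, screened Debye-Hückel repulsion, and Metropolis acceptance, can be sketched compactly. The constants below are placeholders chosen for illustration, not the parameterization of the model described above.

        # Minimal bead-chain Metropolis-Monte Carlo sketch with bending
        # springs and Debye-Hueckel repulsion.  Constants are placeholders.
        import numpy as np

        KT = 1.0           # energies in units of k_B T
        KAPPA = 1.0        # inverse Debye length in 1/nm (~0.1 M salt)
        Q2 = 5.0           # effective charge product (placeholder)
        K_BEND = 50.0      # bending stiffness (placeholder)

        def debye_huckel(r):
            """Screened electrostatic repulsion between two charged beads."""
            return Q2 * np.exp(-KAPPA * r) / r

        def bending(chain):
            """Elastic bending energy from angles between successive bonds."""
            bonds = np.diff(chain, axis=0)
            u = bonds / np.linalg.norm(bonds, axis=1, keepdims=True)
            return K_BEND * np.sum(1.0 - np.sum(u[:-1] * u[1:], axis=1))

        def energy(chain):
            """Total energy; full O(N^2) recomputation, fine for a sketch."""
            e = bending(chain)
            for i in range(len(chain)):          # non-adjacent bead pairs
                for j in range(i + 2, len(chain)):
                    e += debye_huckel(np.linalg.norm(chain[i] - chain[j]))
            return e

        def metropolis_step(chain, rng, step=0.5):
            """Displace one inner bead; accept with the Metropolis criterion."""
            trial = chain.copy()
            i = rng.integers(1, len(chain) - 1)  # keep both ends fixed
            trial[i] += rng.normal(scale=step, size=3)
            dE = energy(trial) - energy(chain)
            if dE < 0 or rng.random() < np.exp(-dE / KT):
                return trial
            return chain

        rng = np.random.default_rng(0)
        chain = np.cumsum(np.tile([2.0, 0.0, 0.0], (20, 1)), axis=0)  # straight start
        for _ in range(200):
            chain = metropolis_step(chain, rng)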

    Modeling genomic data with type attributes, balancing stability and maintainability

    Background: Molecular biology (MB) is a dynamic research domain that benefits greatly from the use of modern software technology in preparing experiments, analyzing acquired data, and even performing "in-silico" analyses. As ever new findings change the face of this domain, software for MB has to be sufficiently flexible to accommodate these changes. At the same time, however, the efficient development of high-quality and interoperable software requires a stable model of the concepts of the subject domain and their relations. The result of these two contradictory requirements is increased complexity in the development of MB software. A common means to reduce complexity is to consider only a small part of the domain instead of the domain as a whole. As a result, small, specialized programs develop their own domain understanding. They often use one of the numerous data formats or implement proprietary data models. This makes it difficult to combine the results of different programs, which many users need in order to work with the software efficiently. The data conversions required to achieve interoperability involve more than just type conversion; usually they also require complex data mappings and lead to a loss of information. Results: To address these problems, we have developed a flexible computer model for the MB domain that supports both changeability and interoperability. This model describes the concepts of MB in a formal manner and provides a comprehensive view of them. In this model, we adapted the design pattern "Dynamic Object Model" by using metadata and association classes. A small, highly abstract class model, named the "operational model," defines the scope of the software system. An object model, named the "knowledge model," describes the concrete concepts of the MB domain. The structure of the knowledge model is itself described by a meta model. We showed the model to be stable, flexible, and useful by implementing a prototype of an MB software framework based on it. Conclusion: Stability and flexibility of the domain model are achieved by its separation into two parts, the operational model and the knowledge model, which are connected by the meta model of the knowledge model into the whole domain model. This approach makes it possible to comply with the requirements of interoperability and flexibility in MB.
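    The "Dynamic Object Model" separation can be illustrated in a few lines: the operational model is ordinary code, while domain concepts live in the knowledge model as metadata validated at runtime. The sketch below is a generic Python rendition of the pattern; the concept "Gene" and its attributes are hypothetical examples, not the paper's actual schema.

        # Generic "Dynamic Object Model" sketch: a tiny operational model in
        # code, with domain concepts ('Gene' here is hypothetical) as metadata.
        from dataclasses import dataclass, field

        @dataclass
        class AttributeType:            # meta level: describes one attribute
            name: str
            value_type: type

        @dataclass
        class EntityType:               # meta level: describes a domain concept
            name: str
            attributes: dict = field(default_factory=dict)

            def define(self, name, value_type):
                self.attributes[name] = AttributeType(name, value_type)

        @dataclass
        class Entity:                   # operational level: a concrete object
            etype: EntityType
            values: dict = field(default_factory=dict)

            def set(self, name, value):
                attr = self.etype.attributes[name]   # validate against metadata
                if not isinstance(value, attr.value_type):
                    raise TypeError(f"{name} expects {attr.value_type.__name__}")
                self.values[name] = value

        # Knowledge model: new MB concepts are added as data, not new classes.
        gene = EntityType("Gene")
        gene.define("symbol", str)
        gene.define("length_bp", int)

        tp53 = Entity(gene)
        tp53.set("symbol", "TP53")
        tp53.set("length_bp", 20000)    # illustrative value

    Because new concepts and attributes enter the system as instances of the meta level rather than as new classes, the compiled operational model stays stable while the domain knowledge remains freely extensible.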