
    Single-molecule real-time sequencing combined with optical mapping yields completely finished fungal genome

    Next-generation sequencing (NGS) technologies have increased the scalability, speed, and resolution of genomic sequencing and, thus, have revolutionized genomic studies. However, eukaryotic genome sequencing initiatives typically yield considerably fragmented genome assemblies. Here, we assessed various state-of-the-art sequencing and assembly strategies in order to produce a contiguous and complete eukaryotic genome assembly, focusing on the filamentous fungus Verticillium dahliae. Compared with Illumina-based assemblies of the V. dahliae genome, hybrid assemblies that also include PacBio-generated long reads establish superior contiguity. Intriguingly, provided that sufficient sequence depth is reached, assemblies solely based on PacBio reads outperform hybrid assemblies and even result in fully assembled chromosomes. Furthermore, the addition of optical map data allowed us to produce a gapless and complete V. dahliae genome assembly of the expected eight chromosomes from telomere to telomere. Consequently, we can now study genomic regions that were previously not assembled or poorly assembled, including regions that are populated by repetitive sequences, such as transposons, allowing us to fully appreciate an organism’s biological complexity. Our data show that a combination of PacBio-generated long reads and optical mapping can be used to generate complete and gapless assemblies of fungal genomes. IMPORTANCE Studying whole-genome sequences has become an important aspect of biological research. The advent of next-generation sequencing (NGS) technologies has nowadays brought genomic science within reach of most research laboratories, including those that study nonmodel organisms. However, most genome sequencing initiatives typically yield (highly) fragmented genome assemblies. Nevertheless, considerable relevant information related to genome structure and evolution is likely hidden in those nonassembled regions. Here, we investigated a diverse set of strategies to obtain gapless genome assemblies, using the genome of a typical ascomycete fungus as the template. Eventually, we were able to show that a combination of PacBio-generated long reads and optical mapping yields a gapless telomere-to-telomere genome assembly, allowing in-depth genome analyses to facilitate functional studies into an organism’s biology.
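The contiguity differences among Illumina-based, hybrid, and PacBio-only assemblies that the abstract describes are conventionally summarized by the N50 statistic. As a minimal sketch (the contig lengths below are invented for illustration, not taken from the study):

```python
def n50(contig_lengths):
    """N50: the length L such that contigs of length >= L together cover
    at least half of the total assembly size; a larger N50 means a more
    contiguous assembly."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0

# Hypothetical example: a fragmented short-read assembly versus a
# long-read assembly of the same 10-unit genome.
fragmented = [1, 1, 1, 2, 2, 3]   # many short contigs
long_read = [6, 4]                # few long contigs
print(n50(fragmented), n50(long_read))  # → 2 6
```

A gapless telomere-to-telomere assembly is the limiting case: the N50 equals the length of the largest chromosome-scale contig.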

    Fusion energy from the Moon for the twenty-first century

    It is shown in this paper that the D-He-3 fusion fuel cycle is not only credible from a physics standpoint, but that its breakeven and ignition characteristics could be developed on roughly the same time schedule as the DT cycle. It is also shown that the extremely low fraction of power in neutrons, the lack of significant radioactivity in the reactants, and the potential for very high conversion efficiencies can result in definite advantages for the D-He-3 cycle with respect to DT fusion and fission reactors in the twenty-first century. More specifically, the D-He-3 cycle can accomplish the following: (1) eliminate the need for deep geologic waste burial facilities, since the wastes can qualify for Class A, near-surface land burial; (2) allow 'inherently safe' reactors to be built that, under the worst conceivable accident, cannot cause a civilian fatality or result in a significant (greater than 100 mrem) exposure to a member of the public; (3) reduce the radiation damage levels to a point where no scheduled replacement of reactor structural components is required, i.e., full reactor lifetimes (approximately 30 FPY) can be credibly claimed; (4) increase the reliability and availability of fusion reactors compared to DT systems because of the greatly reduced radioactivity, the low neutron damage, and the elimination of T breeding; and (5) greatly reduce the capital costs of fusion power plants (compared to DT systems) by as much as 50 percent and present the potential for a significant reduction in the cost of electricity (COE). The concepts presented in this paper tie together two of the most ambitious high-technology endeavors of the twentieth century: the development of controlled thermonuclear fusion for civilian power applications and the utilization of outer space for the benefit of mankind on Earth.
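The low neutron fraction claimed for the D-He-3 cycle follows from its primary reaction, D + 3He → 4He + p, whose products are all charged particles. The energy release can be checked from standard atomic mass values (the numbers below are widely tabulated constants, not figures from the paper):

```python
U_TO_MEV = 931.494  # energy equivalent of one atomic mass unit, in MeV

# Atomic masses in u (standard tabulated values, rounded)
m_D, m_He3 = 2.014102, 3.016029
m_He4, m_p = 4.002602, 1.007825  # proton taken as the 1H atomic mass

# Q-value of D + 3He -> 4He + p: the mass defect times c^2.
q_dhe3 = (m_D + m_He3 - m_He4 - m_p) * U_TO_MEV
print(f"Q(D-3He) ≈ {q_dhe3:.2f} MeV")  # ≈ 18.35 MeV, all in charged products
```

Because the roughly 18.35 MeV is carried by charged particles rather than a 14.1 MeV neutron as in DT, direct energy conversion at high efficiency becomes plausible, which is the basis of advantages (1) through (5) above. (Side D-D reactions still produce some neutrons, hence "extremely low" rather than zero neutron power.)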

    Spike Timing Dependent Plasticity: A Consequence of More Fundamental Learning Rules

    Spike timing dependent plasticity (STDP) is a phenomenon in which the precise timing of spikes affects the sign and magnitude of changes in synaptic strength. STDP is often interpreted as the comprehensive learning rule for a synapse – the “first law” of synaptic plasticity. This interpretation is made explicit in theoretical models in which the total plasticity produced by complex spike patterns results from a superposition of the effects of all spike pairs. Although such models are appealing for their simplicity, they can fail dramatically. For example, the measured single-spike learning rule between hippocampal CA3 and CA1 pyramidal neurons does not predict the existence of long-term potentiation, one of the best-known forms of synaptic plasticity. Layers of complexity have been added to the basic STDP model to repair predictive failures, but they have been outstripped by experimental data. We propose an alternate first law: neural activity triggers changes in key biochemical intermediates, which act as a more direct trigger of plasticity mechanisms. One particularly successful model uses intracellular calcium as the intermediate and can account for many observed properties of bidirectional plasticity. In this formulation, STDP is not itself the basis for explaining other forms of plasticity, but is instead a consequence of changes in the biochemical intermediate, calcium. Eventually a mechanism-based framework for learning rules should include other messengers, discrete change at individual synapses, spread of plasticity among neighboring synapses, and priming of hidden processes that change a synapse's susceptibility to future change. Mechanism-based models provide a rich framework for the computational representation of synaptic plasticity.
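The pair-based superposition model that the abstract critiques is usually written as an exponential window over spike-time differences. A minimal sketch, with illustrative (hypothetical) amplitudes and time constants:

```python
import math

def stdp_pair(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP kernel; dt = t_post - t_pre in ms.
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0

# The superposition assumption: the total weight change from a complex
# spike pattern is the sum over all pre/post spike pairs -- exactly the
# assumption the abstract argues fails for real synapses.
def total_dw(pre_spikes, post_spikes):
    return sum(stdp_pair(t_post - t_pre)
               for t_pre in pre_spikes for t_post in post_spikes)

print(total_dw([0.0, 50.0], [10.0, 45.0]))
```

The calcium-based alternative the authors favor replaces this direct mapping from spike-time differences to weight change with a mapping from spikes to a biochemical intermediate, whose level then determines potentiation or depression.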

    Inferred changes in El Niño–Southern Oscillation variance over the past six centuries

    It is vital to understand how the El Niño–Southern Oscillation (ENSO) has responded to past changes in natural and anthropogenic forcings, in order to better understand and predict its response to future greenhouse warming. To date, however, the instrumental record is too brief to fully characterize natural ENSO variability, while large discrepancies exist amongst paleo-proxy reconstructions of ENSO. These paleo-proxy reconstructions have typically attempted to reconstruct ENSO's temporal evolution, rather than the variance of these temporal changes. Here a new approach is developed that synthesizes the variance changes from various proxy data sets to provide a unified and updated estimate of past ENSO variance. The method is tested using surrogate data from two coupled general circulation model (CGCM) simulations. It is shown that in the presence of dating uncertainties, synthesizing variance information provides a more robust estimate of ENSO variance than synthesizing the raw data and then identifying its running variance. We also examine whether good temporal correspondence between proxy data and instrumental ENSO records implies a good representation of ENSO variance. In the climate modeling framework we show that a significant improvement in reconstructing ENSO variance changes is found when combining information from diverse ENSO-teleconnected source regions, rather than by relying on a single well-correlated location. This suggests that ENSO variance estimates derived from a single site should be viewed with caution. Finally, synthesizing existing ENSO reconstructions to arrive at a better estimate of past ENSO variance changes, we find robust evidence that the ENSO variance for any 30-yr period during the interval 1590–1880 was considerably lower than that observed during 1979–2009.
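The key methodological claim, that under dating uncertainty it is more robust to compute each proxy's running variance first and then combine, rather than stack the raw series and take the variance of the stack, can be illustrated with a toy synthetic experiment. Everything below (signal, noise level, dating shifts, window) is invented for illustration and is not the paper's actual procedure or data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_proxies, window = 300, 5, 30

# Common "ENSO-like" signal plus independent proxy noise (purely synthetic).
signal = rng.standard_normal(n_years)
proxies = [signal + 0.5 * rng.standard_normal(n_years) for _ in range(n_proxies)]

# Dating uncertainty: shift each proxy record by a random offset of a few years.
shifted = [np.roll(p, rng.integers(-5, 6)) for p in proxies]

def running_var(x, w):
    """Variance in a sliding window of width w."""
    return np.array([x[i:i + w].var() for i in range(len(x) - w + 1)])

# Approach A (variance-first): running variance per proxy, then average
# the variance series across proxies. Insensitive to misalignment.
var_first = np.mean([running_var(p, window) for p in shifted], axis=0)

# Approach B (stack-first): average the raw, mis-dated series, then take
# the running variance. Misalignment partially cancels the common signal,
# biasing the variance estimate low.
stack_first = running_var(np.mean(shifted, axis=0), window)

print(var_first.mean(), stack_first.mean())
```

In this cartoon the variance-first estimate stays near the true signal-plus-noise variance, while the stack-first estimate is systematically deflated by the dating offsets, which is the qualitative effect the surrogate-data tests in the study are probing.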

    A Clinically Relevant Method of Analyzing Continuous Change in Robotic Upper Extremity Chronic Stroke Rehabilitation

    Background. Robots designed for rehabilitation of the upper extremity after stroke facilitate high rates of repetition during practice of movements and record precise kinematic data, providing a method to investigate motor recovery profiles over time. Objective. To determine how motor recovery profiles during robotic interventions provide insight into improving clinical gains. Methods. A convenience sample (n = 22), from a larger randomized control trial, was taken of chronic stroke participants completing 12 sessions of arm therapy. One group received 60 minutes of robotic therapy (Robot only) and the other group received 45 minutes on the robot plus 15 minutes of translation-to-task practice (Robot + TTT). Movement time was assessed using the robot without powered assistance. Analyses (ANOVA, random coefficient modeling [RCM] with a 2-term exponential function) were completed to investigate changes across the intervention, between sessions, and within a session. Results. Significant improvement (P < .05) in movement time across the intervention (pre vs post) was similar between the groups, but there were group differences for changes between and within sessions (P < .05). The 2-term exponential function revealed a fast and a slow component of learning that described performance across consecutive blocks. The RCM identified individuals who were above or below the marginal model. Conclusions. The expanded analyses indicated that changes across time can occur in different ways but achieve similar goals and may be influenced by individual factors such as initial movement time. These findings will guide decisions regarding treatment planning based on rates of motor relearning during upper extremity stroke robotic interventions.
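A 2-term exponential of the kind used in the random coefficient modeling separates a fast early component of learning from a slow later one. As a minimal sketch of fitting such a curve to practice data (the functional form is generic, and the "movement time" values, parameters, and block count below are all synthetic, not the trial's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_term_exp(block, a_fast, tau_fast, a_slow, tau_slow, floor):
    """Movement time across practice blocks as the sum of a fast and a
    slow exponentially decaying component plus an asymptotic floor."""
    return (a_fast * np.exp(-block / tau_fast)
            + a_slow * np.exp(-block / tau_slow) + floor)

# Synthetic "movement time" (seconds) over 36 practice blocks.
blocks = np.arange(36, dtype=float)
true_curve = two_term_exp(blocks, 2.0, 2.0, 1.0, 15.0, 1.5)
rng = np.random.default_rng(1)
observed = true_curve + 0.05 * rng.standard_normal(blocks.size)

params, _ = curve_fit(two_term_exp, blocks, observed,
                      p0=(1.0, 1.0, 1.0, 10.0, 1.0), maxfev=10000)
a_fast, tau_fast, a_slow, tau_slow, floor = params
print(f"fast tau ≈ {tau_fast:.1f} blocks, slow tau ≈ {tau_slow:.1f} blocks")
```

The fitted time constants quantify the two rates of relearning; comparing them between individuals (or between the Robot only and Robot + TTT groups) is the kind of analysis the abstract describes.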

    A family of cyclin homologs that control the G1 phase in yeast.


    Eccrine porocarcinoma of the head: An important differential diagnosis in the elderly patient

    Background: Eccrine porocarcinoma is a rare malignant tumor of the sweat gland, characterized by a broad spectrum of clinicopathologic presentations. Surprisingly, unlike its benign counterpart eccrine poroma, eccrine porocarcinoma is seldom found in areas with a high density of eccrine sweat glands, like the palms or soles. Instead, eccrine porocarcinoma frequently occurs on the lower extremities, trunk and abdomen, but also on the head, resembling various other skin tumors, as illustrated in the patients described herein. Observations: We report 5 cases of eccrine porocarcinoma of the head. All patients were initially diagnosed as having epidermal or melanocytic skin tumors. Only after histopathologic examination were they classified as eccrine porocarcinoma, showing features of epithelial tumors with abortive ductal differentiation. Characteristic clinical, histopathologic and immunohistochemical findings of eccrine porocarcinomas are illustrated. Conclusion: Eccrine porocarcinomas are potentially fatal adnexal malignancies, in which extensive metastatic dissemination may occur. Porocarcinomas are commonly overlooked, or misinterpreted as squamous or basal cell carcinomas as well as other common malignant and even benign skin tumors. Knowledge of the clinical pattern and histologic findings, therefore, is crucial for an early therapeutic intervention, which can reduce the risk of tumor recurrence and serious complications. Copyright (c) 2008 S. Karger AG, Basel

    Parameter estimation in spatially extended systems: The Karhunen-Loeve and Galerkin multiple shooting approach

    Parameter estimation for spatiotemporal dynamics in coupled map lattices and continuous time domain systems is demonstrated using a combination of multiple shooting, Karhunen-Loeve decomposition, and Galerkin projection methodologies. The resulting advantages in estimating parameters are studied and discussed for chaotic and turbulent dynamics using small amounts of data from subsystems, with availability of only scalar and noisy time series data, under effects of space-time parameter variations, and in the presence of multiple time-scales.
    Comment: 11 pages, 5 figures, 4 tables. Corresponding author: V. Ravi Kumar, e-mail address: [email protected]
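The Karhunen-Loeve decomposition underlying this approach extracts the dominant spatial modes of a spatiotemporal field, so that a Galerkin projection onto a few modes reduces the high-dimensional dynamics to a small set of amplitude equations. A minimal sketch of the decomposition via SVD, on a synthetic two-mode field (not the paper's system):

```python
import numpy as np

# Snapshot matrix: rows = time samples, columns = spatial sites.
# Synthetic field built from two known spatial modes plus small noise.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 2.0 * np.pi, 64)
t = np.linspace(0.0, 10.0, 200)
field = (np.outer(np.sin(t), np.sin(x))
         + 0.3 * np.outer(np.cos(2.0 * t), np.sin(2.0 * x))
         + 0.01 * rng.standard_normal((t.size, x.size)))

# Karhunen-Loeve decomposition via SVD of the mean-subtracted snapshots:
# rows of vt are the spatial (KL) modes, s**2 their energies.
snapshots = field - field.mean(axis=0)
u, s, vt = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2 / np.sum(s**2)
print("energy captured by first two modes:", energy[:2].sum())

# Galerkin-style reduction: project onto the leading modes, keeping only
# a low-dimensional amplitude time series, then reconstruct.
n_modes = 2
amplitudes = snapshots @ vt[:n_modes].T      # shape (time, n_modes)
reconstruction = amplitudes @ vt[:n_modes]   # low-rank approximation
```

In the parameter estimation setting, multiple shooting is then applied to the few retained amplitude equations rather than to the full lattice, which is what makes estimation from small, noisy subsystem data tractable.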

    Extensive Chaos in the Nikolaevskii Model

    We carry out a systematic study of a novel type of chaos at onset ("soft-mode turbulence") based on numerical integration of the simplest one-dimensional model. The chaos is characterized by a smooth interplay of different spatial scales, with defect generation being unimportant. The Lyapunov exponents are calculated for several system sizes for fixed values of the control parameter ε. The Lyapunov dimension and the Kolmogorov-Sinai entropy are calculated, and both are shown to exhibit extensive and microextensive scaling. The distribution functional is shown to satisfy Gaussian statistics at small wavenumbers and small frequencies.
    Comment: 4 pages (including 5 figures), LaTeX file. Submitted to Phys. Rev. Lett.
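The Lyapunov dimension referred to here is conventionally the Kaplan-Yorke estimate computed from the ordered exponent spectrum, and the Kolmogorov-Sinai entropy is bounded by the sum of the positive exponents (Pesin's formula). A minimal sketch with an illustrative, made-up spectrum (not values from the paper):

```python
def kaplan_yorke_dimension(exponents):
    """Kaplan-Yorke (Lyapunov) dimension from a Lyapunov spectrum:
    with exponents sorted in decreasing order, D = j + S_j / |lambda_{j+1}|,
    where j is the largest index for which the partial sum S_j >= 0."""
    lam = sorted(exponents, reverse=True)
    partial = 0.0
    for j, l in enumerate(lam):
        if partial + l < 0:
            return j + partial / abs(l)
        partial += l
    return float(len(lam))  # partial sums never go negative

def ks_entropy(exponents):
    """Pesin-type estimate: sum of the positive Lyapunov exponents."""
    return sum(l for l in exponents if l > 0)

# Illustrative spectrum with two positive exponents.
spectrum = [0.5, 0.2, 0.0, -0.4, -0.9]
print(kaplan_yorke_dimension(spectrum), ks_entropy(spectrum))
```

Extensive chaos then means that both quantities grow linearly with system size, which is exactly the scaling the abstract reports for the Nikolaevskii model.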