
    Limited Lifespan of Fragile Regions in Mammalian Evolution

    An important question in genome evolution is whether there exist fragile regions (rearrangement hotspots) where chromosomal rearrangements occur over and over again. Although nearly all recent studies supported the existence of fragile regions in mammalian genomes, the most comprehensive phylogenomic study of mammals (Ma et al. (2006) Genome Research 16, 1557-1565) raised some doubts about their existence. We demonstrate that fragile regions are subject to a "birth and death" process, implying that fragility has a limited evolutionary lifespan. This finding implies that fragile regions migrate to different locations in different mammals, explaining why only a few chromosomal breakpoints are shared between different lineages. The birth and death of fragile regions reinforces the hypothesis that rearrangements are promoted by matching segmental duplications and suggests putative locations of the currently active fragile regions in the human genome.
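
    A minimal toy simulation of the "birth and death" intuition, assuming hypothetical birth/death rates and genome bin counts (none of these numbers come from the paper): fragile regions appear and disappear independently in two lineages, so the breakpoints they accumulate rarely coincide.

```python
import random

random.seed(0)

GENOME_BINS = 1000   # hypothetical 1-Mb bins along a genome
BIRTH_RATE = 0.02    # assumed per-step probability that a new fragile region appears
DEATH_RATE = 0.05    # assumed per-step probability that an existing one loses fragility
STEPS = 500          # evolutionary time steps per lineage

def evolve_lineage():
    """Return the set of bins hit by rearrangement breakpoints in one lineage."""
    fragile = set()
    breakpoints = set()
    for _ in range(STEPS):
        # birth: a new fragile region appears at a random location
        if random.random() < BIRTH_RATE:
            fragile.add(random.randrange(GENOME_BINS))
        # death: each fragile region may lose its fragility
        fragile = {b for b in fragile if random.random() > DEATH_RATE}
        # rearrangements preferentially break inside currently fragile regions
        if fragile:
            breakpoints.add(random.choice(sorted(fragile)))
    return breakpoints

human, mouse = evolve_lineage(), evolve_lineage()
print(f"breakpoints: lineage A={len(human)}, lineage B={len(mouse)}, shared={len(human & mouse)}")
```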

    Why polymer chains in a melt are not random walks

    A cornerstone of modern polymer physics is the `Flory ideality hypothesis', which states that a chain in a polymer melt adopts `ideal' random-walk-like conformations. Here we revisit theoretically and numerically this pivotal assumption and demonstrate that there are noticeable deviations from ideality. The deviations come from the interplay of chain connectivity and the incompressibility of the melt, leading to an effective repulsion between chain segments of all sizes $s$. The amplitude of this repulsion increases with decreasing $s$, where chain segments become more and more swollen. We illustrate this swelling by an analysis of the form factor $F(q)$, i.e. the scattered intensity at wavevector $q$ resulting from intramolecular interferences of a chain. A `Kratky plot' of $q^2 F(q)$ vs. $q$ does not exhibit the plateau for intermediate wavevectors characteristic of ideal chains. One rather finds a conspicuous depression of the plateau, $\delta(F^{-1}(q)) = |q|^3/32\rho$, which increases with $q$ and only depends on the monomer density $\rho$. Comment: 4 pages, 4 figures, EPL, accepted January 200
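
    A small numerical sketch of the reported Kratky-plateau depression, assuming an ideal-chain (Debye) form factor as the reference and illustrative values for chain length, segment size and melt density; only the $|q|^3/32\rho$ correction to $F^{-1}(q)$ is taken from the abstract.

```python
import numpy as np

# assumed chain/melt parameters (illustrative only)
N = 1000                 # monomers per chain
b = 1.0                  # statistical segment length
rho = 0.85               # monomer number density of the melt
Rg2 = N * b**2 / 6.0     # ideal-chain radius of gyration squared

q = np.linspace(0.05, 2.0, 200)

# Debye form factor of an ideal (random-walk) chain
x = q**2 * Rg2
F_ideal = N * 2.0 * (np.exp(-x) - 1.0 + x) / x**2

# non-ideal melt chain: 1/F(q) picks up the |q|^3 / (32 rho) correction
F_melt = 1.0 / (1.0 / F_ideal + np.abs(q)**3 / (32.0 * rho))

# Kratky representation: the ideal chain plateaus, the melt chain is depressed
print("q^2 F(q), ideal:", (q**2 * F_ideal)[::50])
print("q^2 F(q), melt :", (q**2 * F_melt)[::50])
```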

    A new approach to upscaling fracture network models while preserving geostatistical and geomechanical characteristics

    A new approach to upscaling two-dimensional fracture network models is proposed for preserving geostatistical and geomechanical characteristics of a smaller-scale “source” fracture pattern. First, the scaling properties of an outcrop system are examined in terms of spatial organization, lengths, connectivity, and normal/shear displacements using fractal geometry and power law relations. The fracture pattern is observed to be nonfractal with the fractal dimension D ≈ 2, while its length distribution tends to follow a power law with the exponent 2 < a < 3. To introduce a realistic distribution of fracture aperture and shear displacement, a geomechanical model using the combined finite-discrete element method captures the response of a fractured rock sample with a domain size L = 2 m under in situ stresses. Next, a novel scheme accommodating discrete-time random walks in recursive self-referencing lattices is developed to nucleate and propagate fractures together with their stress- and scale-dependent attributes into larger domains of up to 54 m × 54 m. The advantages of this approach include preserving the nonplanarity of natural cracks, capturing the existence of long fractures, retaining the realism of variable apertures, and respecting the stress dependency of displacement-length correlations. Hydraulic behavior of multiscale growth realizations is modeled by single-phase flow simulation, where distinct permeability scaling trends are observed for different geomechanical scenarios. A transition zone is identified where flow structure shifts from extremely channeled to distributed as the network scale increases. The results of this paper have implications for upscaling network characteristics for reservoir simulation.
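
    One ingredient above, sketched in code: drawing fracture lengths from a truncated power law with exponent 2 < a < 3 by inverse-transform sampling. The exponent value and the length cut-offs used here are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_fracture_lengths(n, a=2.5, l_min=0.1, l_max=54.0):
    """Inverse-transform sampling of a truncated power law p(l) ~ l**(-a), l_min <= l <= l_max."""
    u = rng.random(n)
    lo, hi = l_min**(1.0 - a), l_max**(1.0 - a)
    return (lo + u * (hi - lo))**(1.0 / (1.0 - a))

lengths = sample_fracture_lengths(10_000)
print(f"mean length: {lengths.mean():.2f} m, max length: {lengths.max():.2f} m")
```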

    A decision-theoretic approach for segmental classification

    This paper is concerned with statistical methods for the segmental classification of linear sequence data, where the task is to segment and classify the data according to an underlying hidden discrete state sequence. Such analysis is commonplace in the empirical sciences, including genomics, finance and speech processing. In particular, we are interested in answering the following question: given data $y$ and a statistical model $\pi(x,y)$ of the hidden states $x$, what should we report as the prediction $\hat{x}$ under the posterior distribution $\pi(x|y)$? That is, how should you make a prediction of the underlying states? We demonstrate that traditional approaches, such as reporting the most probable state sequence or the most probable set of marginal predictions, can give undesirable classification artefacts and offer limited control over the properties of the prediction. We propose a decision-theoretic approach using a novel class of Markov loss functions and report $\hat{x}$ via the principle of minimum expected loss (maximum expected utility). We demonstrate that the sequence of minimum expected loss under the Markov loss function can be enumerated exactly using dynamic programming methods and that it offers flexibility and performance improvements over existing techniques. The result is generic and applicable to any probabilistic model on a sequence, such as hidden Markov models, change point or product partition models. Comment: Published at http://dx.doi.org/10.1214/13-AOAS657 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
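
    For context, a minimal sketch of the two baseline decoders the abstract argues against, joint MAP (Viterbi) and marginal MAP (posterior) decoding, for a toy hidden Markov model with made-up parameters; the paper's own minimum-expected-loss decoder under Markov loss functions is not reproduced here.

```python
import numpy as np

# toy 2-state HMM (all numbers are illustrative assumptions)
A = np.array([[0.95, 0.05],      # state transition matrix
              [0.10, 0.90]])
B = np.array([[0.7, 0.2, 0.1],   # emission probabilities over 3 symbols
              [0.1, 0.3, 0.6]])
pi = np.array([0.5, 0.5])
y = np.array([0, 0, 2, 2, 1, 2, 0, 0])   # observed symbol sequence

def viterbi(y, A, B, pi):
    """Most probable state *sequence* (joint MAP)."""
    T, K = len(y), len(pi)
    logd = np.log(pi) + np.log(B[:, y[0]])
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)          # scores[i, j] = best score ending in i, moving to j
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, y[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):                   # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]

def posterior_decode(y, A, B, pi):
    """Most probable state at each position (marginal MAP), via forward-backward."""
    T, K = len(y), len(pi)
    alpha = np.zeros((T, K)); beta = np.zeros((T, K))
    alpha[0] = pi * B[:, y[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, y[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, y[t + 1]] * beta[t + 1])
    return list((alpha * beta).argmax(axis=1))

print("Viterbi decode   :", viterbi(y, A, B, pi))
print("marginal decode  :", posterior_decode(y, A, B, pi))
```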

    Deformation behavior of nano/micro reinforced PMMA

    The effect of particle size on the magnitude of the modulus and on the post-yield response is highlighted; both regions show a pronounced particle-size dependence. The aim of this work is to correlate experimental data and theoretical considerations regarding the mechanisms of elastic and plastic deformation of amorphous polymers in general, and of glassy PMMA in particular, with the broader motivation of connecting the so far separate fields of continuum micromechanics and discrete nanomechanics. The deformation behavior of PMMA filled with spherical nano- and micro-particles was observed in the elastic and plastic regions, and the effect of particle size on the modulus and on the strain-hardening response was examined. The reinforcing effect in the nanocomposites is interpreted using the concept of chain immobilization: the nanoparticles have a significant effect on the molecular dynamics and on the kinetics of disentanglement. The contribution of this work is to show a pronounced particle-size dependence of the reinforcement mechanism, both below and above the glass transition temperature. While a large body of data on the modulus has been published and interpreted, the influence of particles on strain hardening has received little attention. During elastic deformation the primary structure of the material remains unchanged, whereas beyond the yield point it is irreversibly altered. It is shown that incorporation of nanoparticles increases the yield stress and the strain-hardening slope, and that this increase correlates well with the Guth-Gold equation. It is assumed that the particles act as additional physical entanglements (crosslinks), yielding a physically more densely entangled network. The same particle-size effect observed for the modulus in the elastic region is also observed during strain hardening.
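
    For reference, the Guth-Gold relation mentioned above predicts the stiffening of a matrix by rigid spherical fillers; the short sketch below evaluates it for a few filler volume fractions (the PMMA matrix modulus and the fractions are illustrative assumptions, and the abstract applies the same functional form to the strain-hardening slope rather than only to the modulus).

```python
# Guth-Gold reinforcement of a matrix by rigid spherical fillers:
#   E_composite = E_matrix * (1 + 2.5*phi + 14.1*phi**2)

def guth_gold(E_matrix, phi):
    """Composite modulus (same units as E_matrix) at filler volume fraction phi."""
    return E_matrix * (1.0 + 2.5 * phi + 14.1 * phi ** 2)

E_pmma = 3.0  # GPa, assumed matrix modulus for glassy PMMA
for phi in (0.01, 0.05, 0.10, 0.20):
    print(f"phi = {phi:4.2f} -> E = {guth_gold(E_pmma, phi):.2f} GPa")
```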

    Speech vocoding for laboratory phonology

    Using phonological speech vocoding, we propose a platform for exploring relations between phonology and speech processing, and in broader terms, for exploring relations between the abstract and physical structures of a speech signal. Our goal is to make a step towards bridging phonology and speech processing and to contribute to the program of Laboratory Phonology. We show three application examples for laboratory phonology: compositional phonological speech modelling, a comparison of phonological systems and an experimental phonological parametric text-to-speech (TTS) system. The featural representations of the following three phonological systems are considered in this work: (i) Government Phonology (GP), (ii) the Sound Pattern of English (SPE), and (iii) the extended SPE (eSPE). Comparing GP- and eSPE-based vocoded speech, we conclude that the latter achieves slightly better results than the former. However, GP - the most compact phonological speech representation - performs comparably to the systems with a higher number of phonological features. The parametric TTS based on phonological speech representation, and trained from an unlabelled audiobook in an unsupervised manner, achieves 85% of the intelligibility of state-of-the-art parametric speech synthesis. We envision that the presented approach paves the way for researchers in both fields to form meaningful hypotheses that are explicitly testable using the concepts developed and exemplified in this paper. On the one hand, laboratory phonologists might test the applied concepts of their theoretical models, and on the other hand, the speech processing community may utilize the concepts developed for the theoretical phonological models to improve current state-of-the-art applications.

    A new weighted NMF algorithm for missing data interpolation and its application to speech enhancement

    In this paper we present a novel weighted NMF (WNMF) algorithm for interpolating missing data. The proposed approach has a computational cost equivalent to that of standard NMF and, additionally, has the flexibility to control the degree of interpolation in the missing data regions. Existing WNMF methods do not offer this capability and, thereby, tend to overestimate the values in the masked regions. By constraining the estimates of the missing-data regions, the proposed approach allows for a better trade-off in the interpolation. We further demonstrate the applicability of WNMF and missing data estimation to the problem of speech enhancement. In this preliminary work, we consider the improvement obtainable by applying the proposed method to ideal binary mask-based gain functions. The instrumental quality metrics (PESQ and SNR) clearly indicate the added benefit of the missing data interpolation, compared to the output of the ideal binary mask. This preliminary work opens up novel possibilities not only in the field of speech enhancement but also, more generally, in the field of missing data interpolation using NMF.
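
    A minimal sketch of weighted NMF with a binary observation mask under a Euclidean cost: the multiplicative updates only see observed entries, and the low-rank product then interpolates the masked region. The additional constraint the paper introduces to control the degree of interpolation is not reproduced here; the matrix sizes, mask and rank are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_nmf(V, M, rank=8, n_iter=200, eps=1e-9):
    """Weighted NMF: minimise || M * (V - B @ H) ||_F^2 with multiplicative updates.

    V : nonnegative data matrix (e.g. a magnitude spectrogram)
    M : binary weight matrix, 1 = observed entry, 0 = missing entry
    """
    n, m = V.shape
    B = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        R = M * (B @ H)                                   # weighted reconstruction
        B *= ((M * V) @ H.T) / (R @ H.T + eps)            # update basis
        R = M * (B @ H)
        H *= (B.T @ (M * V)) / (B.T @ R + eps)            # update activations
    return B, H

# toy example: hide a block of entries and let the factorisation fill it in
V = rng.random((64, 100))
M = np.ones_like(V)
M[20:30, 40:60] = 0.0                 # "missing data" region
B, H = weighted_nmf(V, M)
V_hat = B @ H                         # interpolated reconstruction
print("mean estimate in the masked region:", V_hat[20:30, 40:60].mean())
```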

    Adam-Gibbs model in the density scaling regime and its implications for the configurational entropy scaling

    To solve a long-standing problem of condensed matter physics, that of determining a proper description of the thermodynamic evolution of the time scale of molecular dynamics near the glass transition, we extend the well-known Adam-Gibbs model to describe the temperature-volume dependence of structural relaxation times, $\tau_\alpha(T,V)$. We employ the thermodynamic scaling idea reflected in the density scaling power law, $\tau_\alpha = f(T^{-1}V^{-\gamma})$, recently acknowledged as a valid unifying concept in glass transition physics, to discriminate between physically relevant and irrelevant attempts at formulating temperature-volume representations of the Adam-Gibbs model. As a consequence, we determine a straightforward relation between the structural relaxation time $\tau_\alpha$ and the configurational entropy $S_c$, giving evidence that also $S_c(T,V) = g(T^{-1}V^{-\gamma})$, with the same exponent $\gamma$ that enables the scaling of $\tau_\alpha(T,V)$. This important finding has meaningful implications for the linkage between thermodynamics and molecular dynamics near the glass transition, because it implies that $\tau_\alpha$ can be scaled with $S_c$.
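
    In equation form, the combination described above reads as follows (the Adam-Gibbs expression is quoted in its standard form with generic constants $\tau_0$ and $A$; the functions $f$ and $g$ are left unspecified, as in the abstract):

```latex
% requires amsmath
\begin{align*}
  \tau_\alpha(T,V) &= \tau_0 \exp\!\left(\frac{A}{T\,S_c(T,V)}\right)
      && \text{(Adam--Gibbs, standard form)} \\
  \tau_\alpha(T,V) &= f\!\left(T^{-1}V^{-\gamma}\right)
      && \text{(density scaling power law)} \\
  \Longrightarrow\quad S_c(T,V) &= g\!\left(T^{-1}V^{-\gamma}\right)
      && \text{(implied entropy scaling, same exponent } \gamma\text{)}
\end{align*}
```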