2,792 research outputs found
Earthquake forecasting based on data assimilation: sequential Monte Carlo methods for renewal point processes
Data assimilation is routinely employed in meteorology, engineering and computer science to optimally combine noisy observations with prior model information, yielding better estimates of a state, and thus better forecasts, than are achieved by ignoring data uncertainties. Earthquake forecasting, too, suffers from measurement errors and partial model information and may thus gain significantly from data assimilation. We present perhaps the first fully implementable data assimilation method for earthquake forecasts generated by a point-process model of seismicity. We test the method on a synthetic and pedagogical example of a renewal process observed in noise, which is relevant to the seismic gap hypothesis, models of characteristic earthquakes and the recurrence statistics of large earthquakes inferred from paleoseismic data records. To address the non-Gaussian statistics of earthquakes, we use sequential Monte Carlo methods, a set of flexible simulation-based methods for recursively estimating arbitrary posterior distributions. We perform extensive numerical simulations to demonstrate the feasibility and benefits of forecasting earthquakes based on data assimilation.
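As a toy illustration of the sequential Monte Carlo idea described above, the sketch below runs a bootstrap (sampling-importance-resampling) particle filter on a simulated lognormal renewal process whose event times are observed with Gaussian noise. All parameter values and the lognormal/Gaussian choices are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not taken from the paper)
MU, SIGMA = np.log(100.0), 0.1   # lognormal inter-event times, median ~100, ~10% aperiodicity
OBS_STD = 20.0                   # Gaussian measurement error on each event time
N_PARTICLES = 2000

def simulate(n_events):
    """True event times of a lognormal renewal process plus noisy observations of them."""
    true_times = np.cumsum(rng.lognormal(MU, SIGMA, size=n_events))
    observed = true_times + rng.normal(0.0, OBS_STD, size=n_events)
    return true_times, observed

def particle_filter(observed):
    """Bootstrap particle filter estimating the true event times from the noisy ones."""
    particles = np.zeros(N_PARTICLES)   # each particle tracks the latest true event time
    estimates = []
    for y in observed:
        # propagate: draw the next inter-event interval for every particle
        particles = particles + rng.lognormal(MU, SIGMA, size=N_PARTICLES)
        # weight by the Gaussian observation likelihood (log-space for stability)
        logw = -0.5 * ((y - particles) / OBS_STD) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(np.sum(w * particles))          # posterior-mean estimate
        # resample to avoid weight degeneracy
        particles = particles[rng.choice(N_PARTICLES, size=N_PARTICLES, p=w)]
    return np.array(estimates)

true_times, observed = simulate(50)
filtered = particle_filter(observed)
print("RMSE of raw observations :", np.sqrt(np.mean((observed - true_times) ** 2)))
print("RMSE of filtered estimate:", np.sqrt(np.mean((filtered - true_times) ** 2)))
```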
Uniform random generation of large acyclic digraphs
Directed acyclic graphs are the basic representation of the structure underlying Bayesian networks, which represent multivariate probability distributions. In many practical applications, such as the reverse engineering of gene regulatory networks, not only the estimation of model parameters but also the reconstruction of the structure itself is of great interest. A uniform sample from the space of directed acyclic graphs is also required to assess different structure-learning algorithms in simulation studies and to evaluate the prevalence of certain structural features. Here we analyse how to sample acyclic digraphs uniformly at random through recursive enumeration, an approach previously thought too computationally involved. Based on complexity considerations, we discuss in particular how the enumeration directly provides an exact method, which avoids the convergence issues of the alternative Markov chain methods and is computationally much faster. The limiting behaviour of the distribution of acyclic digraphs then allows us to sample arbitrarily large graphs. Building on the ideas of recursive-enumeration-based sampling, we also introduce a novel hybrid Markov chain with much faster convergence than current alternatives, while still being easy to adapt to various restrictions. Finally, we discuss how to include such restrictions in the combinatorial enumeration and in the new hybrid Markov chain method for efficient uniform sampling of the corresponding graphs.
Cognitive function and oral health among ageing adults
Objectives: There is inconclusive evidence that cognitive function is associated with oral health in older adults. This study investigated the association between cognitive function and oral health among older adults in England.
Methods: This longitudinal cohort study included 4416 dentate participants aged 50 years or older in the English Longitudinal Study of Ageing during 2002-2014. Cognitive function was assessed at baseline in 2002/2003 using a battery of cognitive function tests. The number of teeth remaining and general oral health status were self-reported in 2014/2015. Ordinal logistic regression was applied to model the association between cognitive function at baseline and tooth loss or self-rated oral health.
Results: Cognitive function at baseline was negatively associated with the risk of tooth loss (OR per 1 standard deviation decrease in cognitive function score: 1.13, 95% CI: 1.05-1.21). When the cognitive function score was categorized into quintiles, there was a clear gradient association between cognitive function and tooth loss (P-trend = 0.003); people in the lowest quintile of cognitive function had a higher risk of tooth loss than those in the highest quintile (OR: 1.39, 95% CI: 1.12-1.74). An association of similar magnitude and direction was evident between cognitive function and self-rated oral health.
Conclusion: This longitudinal study of an English ageing population demonstrated that poorer cognitive function at baseline was associated with poorer self-rated oral health and a higher risk of tooth loss in later life. The gradient relationship suggests that an improvement in cognitive function could potentially improve oral health and reduce the risk of tooth loss in the ageing population.
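For readers unfamiliar with the modelling step, here is a minimal, purely illustrative sketch of an ordinal logistic regression of a tooth-loss category on a standardised cognitive score using statsmodels; the data are simulated, and the variable names and category cut-offs are assumptions rather than ELSA fields, so it shows only the mechanics and does not reproduce the study's estimates.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 4416
cognition_z = rng.normal(size=n)                      # standardised baseline cognitive score
latent = -0.12 * cognition_z + rng.logistic(size=n)   # toy effect: lower cognition -> more tooth loss
tooth_loss = pd.Series(pd.cut(latent, bins=[-np.inf, -1.0, 1.0, np.inf],
                              labels=["0-4 lost", "5-9 lost", "10+ lost"]))

model = OrderedModel(tooth_loss, pd.DataFrame({"cognition_z": cognition_z}), distr="logit")
res = model.fit(method="bfgs", disp=False)
# odds ratio per 1 SD *decrease* in cognitive function (sign flipped to match the abstract's framing)
print("OR per 1 SD lower cognition:", float(np.exp(-res.params["cognition_z"])))
```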
Attribute-aware Semantic Segmentation of Road Scenes for Understanding Pedestrian Orientations
Semantic segmentation is a task of wide interest to deep learning researchers working on scene understanding. However, recognizing details about objects' attributes can be more informative, and thus more helpful for scene understanding, in intelligent vehicle use cases. This paper introduces a method for simultaneous semantic segmentation and pedestrian attribute recognition. A modified dataset, built on top of the Cityscapes dataset, is created by adding classes corresponding to pedestrian orientation attributes. The proposed method extends the SegNet model and is trained using both the original and the attribute-enriched datasets. In an experiment, the proposed attribute-aware semantic segmentation approach slightly improves performance on the Cityscapes dataset while expanding its set of classes through training on the additional data.
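A minimal PyTorch-style sketch of the general idea: widen the output layer of a SegNet-like encoder-decoder so that pedestrian-orientation attributes become extra classes in the same per-pixel softmax. The tiny encoder/decoder, the four orientation labels, and the choice to replace the single person class with orientation sub-classes are assumptions for illustration; a real SegNet uses VGG-16 blocks with max-pooling indices for unpooling.

```python
import torch
import torch.nn as nn

CITYSCAPES_CLASSES = 19
ORIENTATION_CLASSES = 4   # assumed labels: front, back, left, right

class AttributeAwareSegNet(nn.Module):
    """Toy SegNet-style model whose output layer also covers pedestrian-orientation classes."""
    def __init__(self, base_classes=CITYSCAPES_CLASSES, attr_classes=ORIENTATION_CLASSES):
        super().__init__()
        # stand-in encoder/decoder; a real SegNet uses VGG-16 blocks and unpooling indices
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        # the single "person" class is replaced by orientation-specific sub-classes
        self.classifier = nn.Conv2d(64, base_classes - 1 + attr_classes, kernel_size=1)

    def forward(self, x):
        return self.classifier(self.decoder(self.encoder(x)))

logits = AttributeAwareSegNet()(torch.randn(1, 3, 128, 256))
print(logits.shape)   # torch.Size([1, 22, 128, 256]): 18 unchanged classes + 4 orientation classes
```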
Tailoring teleportation to the quantum alphabet
We introduce a refinement of the standard continuous-variable teleportation measurement and displacement strategies. This refinement makes use of prior knowledge about the target state and of the partial information carried by the classical channel when the entanglement is non-maximal, giving an improvement in the output quality of the protocol. The strategies we introduce could be used in current continuous-variable teleportation experiments.
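To make the gain-tuning idea concrete, here is a rough numerical sketch under standard Braunstein-Kimble conventions (vacuum quadrature variance 1/2): it computes the fidelity of teleporting coherent states, averaged over a Gaussian "alphabet" of mean photon number NBAR, as a function of the classical gain, and scans for the best gain at fixed, non-maximal squeezing. The alphabet, the parameter values and the averaging are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

R = 0.5      # two-mode squeezing parameter (non-maximal entanglement), assumed
NBAR = 4.0   # mean photon number of the Gaussian coherent-state alphabet, assumed

def avg_fidelity(g, r=R, nbar=NBAR):
    """Alphabet-averaged fidelity for teleporting coherent states with classical gain g."""
    # output quadrature variance: amplified input noise plus the EPR-channel noise
    v_out = 0.5 * g**2 + ((1 - g)**2 * np.exp(2 * r) + (1 + g)**2 * np.exp(-2 * r)) / 4
    # the displacement error (g - 1) * alpha is averaged over the Gaussian alphabet;
    # at g = 1 this expression reduces to the familiar 1 / (1 + exp(-2 r))
    return 1.0 / (0.5 + v_out + (g - 1)**2 * nbar)

gains = np.linspace(0.5, 1.2, 701)
best = gains[np.argmax(avg_fidelity(gains))]
print(f"unit-gain fidelity: {avg_fidelity(1.0):.3f}")
print(f"tuned gain ~ {best:.3f} gives fidelity {avg_fidelity(best):.3f}")
```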
The optical depth of the Universe to ultrahigh energy cosmic ray scattering in the magnetized large scale structure
This paper provides an analytical description of the transport of ultrahigh energy cosmic rays in an inhomogeneously magnetized intergalactic medium. The latter is modeled as a collection of magnetized scattering centers, such as radio cocoons, magnetized galactic winds, clusters or magnetized filaments of large scale structure, with negligible magnetic fields in between. Magnetic deflection is then no longer a continuous process; rather, it is dominated by discrete scattering events. We study the interaction between high energy cosmic rays and the scattering agents. We then compute the optical depth of the Universe to cosmic ray scattering and discuss the phenomenological consequences for various source scenarios. For typical parameters of the scattering centers, the optical depth is greater than unity at 5x10^{19} eV, but the total angular deflection is smaller than unity. One important consequence of this scenario is the possibility that the last scattering center encountered by a cosmic ray is mistaken for the source of that cosmic ray. In particular, we suggest that part of the correlation recently reported by the Pierre Auger Observatory may be affected by such an illusion: this experiment may be observing in part the last scattering surface of ultrahigh energy cosmic rays rather than their source population. Since the optical depth falls rapidly with increasing energy, one should probe the arrival directions of the highest energy events beyond 10^{20} eV on an event-by-event basis to circumvent this effect.
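As a back-of-the-envelope illustration of the optical-depth estimate, the sketch below evaluates tau = n * sigma * D for a few populations of scattering centers with a purely geometric cross-section. Every number (densities, radii, path length) is a placeholder chosen for illustration, not a value adopted in the paper, and the energy dependence of the scattering is ignored.

```python
import math

PATH_LENGTH_MPC = 1000.0   # assumed propagation distance, of order the GZK horizon

# population name: (number density [Mpc^-3], effective radius [Mpc]) -- placeholder values
CENTRES = {
    "galaxy clusters": (1e-5, 3.0),
    "magnetized galactic winds": (1e-2, 0.3),
    "radio cocoons": (3e-6, 2.0),
}

total = 0.0
for name, (density, radius) in CENTRES.items():
    tau = density * math.pi * radius**2 * PATH_LENGTH_MPC   # tau = n * (pi R^2) * D
    total += tau
    print(f"{name:27s} tau ~ {tau:5.2f}")
print(f"{'all centres':27s} tau ~ {total:5.2f}")
```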
DNA replication stress restricts ribosomal DNA copy number
Ribosomal RNAs (rRNAs) in budding yeast are encoded by ~100–200 repeats of a 9.1 kb sequence arranged in tandem on chromosome XII, the ribosomal DNA (rDNA) locus. The copy number of rDNA repeat units in eukaryotic cells is maintained far in excess of the requirement for ribosome biogenesis. Despite the importance of the repeats for both ribosomal and non-ribosomal functions, it is currently not known how "normal" copy number is determined or maintained. To identify essential genes involved in the maintenance of rDNA copy number, we developed a droplet digital PCR-based assay to measure rDNA copy number in yeast and used it to screen a yeast conditional temperature-sensitive mutant collection of essential genes. Our screen revealed that low rDNA copy number is associated with compromised DNA replication. Further, subculturing yeast under two separate conditions of DNA replication stress selected for a contraction of the rDNA array, independent of the replication fork blocking protein Fob1. Interestingly, cells with a contracted array grew better than their counterparts with normal copy number under conditions of DNA replication stress. Our data indicate that DNA replication stresses select for a smaller rDNA array. We speculate that this liberates scarce replication factors for use by the rest of the genome, which in turn helps cells complete DNA replication and continue to propagate. Interestingly, tumors from minichromosome maintenance 2 (MCM2)-deficient mice also show a loss of rDNA repeats. Our data suggest that a reduction in rDNA copy number may indicate a history of DNA replication stress, and that rDNA array size could serve as a diagnostic marker for replication stress. Taken together, these data begin to suggest the selective pressures that combine to yield a "normal" rDNA copy number.
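As a small illustration of how a droplet digital PCR readout translates into a copy-number estimate, the sketch below applies the standard Poisson correction for multiply occupied droplets and takes the ratio to a single-copy reference locus. The droplet counts and the assumption of equal dilution for the two targets are made up for illustration; this is not the assay's actual analysis code.

```python
import math

def ddpcr_lambda(positive: int, total: int) -> float:
    """Mean target molecules per droplet, Poisson-corrected for multiple occupancy."""
    return -math.log(1.0 - positive / total)

def rdna_copy_number(rdna_pos, rdna_total, ref_pos, ref_total, ref_copies=1):
    """rDNA copies per genome relative to a single-copy reference locus (equal dilutions assumed)."""
    return ref_copies * ddpcr_lambda(rdna_pos, rdna_total) / ddpcr_lambda(ref_pos, ref_total)

# hypothetical droplet counts: 14,000/20,000 positive for rDNA vs 120/20,000 for the reference
print(f"~{rdna_copy_number(14000, 20000, 120, 20000):.0f} rDNA copies per genome")
```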
- …