The motion, stability and breakup of a stretching liquid bridge with a receding contact line
The complex behavior of drop deposition on a hydrophobic surface is
considered by looking at a model problem in which the evolution of a
constant-volume liquid bridge is studied as the bridge is stretched. The bridge
is pinned with a fixed diameter at the upper contact point, but the contact
line at the lower attachment point is free to move on a smooth substrate.
Experiments indicate that initially, as the bridge is stretched, the lower
contact line slowly retreats inwards. However, at a critical radius, the bridge
becomes unstable, and the contact line accelerates dramatically, moving inwards
very quickly. The bridge subsequently pinches off, and a small droplet is left
on the substrate. A quasi-static analysis, using the Young-Laplace equation, is
used to accurately predict the shape of the bridge during the initial bridge
evolution, including the initial onset of the slow contact line retraction. A
stability analysis is used to predict the onset of pinch-off, and a
one-dimensional dynamical equation, coupled with a Tanner law for the dynamic
contact angle, is used to model the rapid pinch-off behavior. Excellent
agreement between numerical predictions and experiments is found throughout the
bridge evolution, and the importance of the dynamic contact line model is
demonstrated.

Comment: 37 pages, 12 figures
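The quasi-static analysis described above rests on the Young-Laplace equation. As a sketch, in a standard axisymmetric form (not necessarily the authors' exact nondimensionalization), the pressure jump across the interface balances surface tension times the sum of the principal curvatures of the bridge profile r(z):

```latex
% Young-Laplace balance for an axisymmetric bridge profile r(z):
% pressure jump = surface tension x (sum of principal curvatures)
\Delta p \;=\; \gamma \left( \frac{1}{r\,\sqrt{1 + r'^{2}}}
  \;-\; \frac{r''}{\left(1 + r'^{2}\right)^{3/2}} \right)
```

Solving this with a fixed pin radius at the upper contact point and a contact-angle condition at the substrate yields the family of quasi-static bridge shapes whose stability limit marks the onset of pinch-off.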
BAC transgene arrays as a model system for studying large-scale chromatin structure
The folding of interphase chromatin into large-scale chromatin structure and its spatial organization within the nucleus have been suggested to have important roles in gene regulation. In this study, we created engineered chromatin regions consisting of tandem repeats of BAC transgenes, which contain 150-200 kb of defined genomic regions, and used them as a model system to study the mechanisms and functional significance of large-scale chromatin organization.
The BAC transgene arrays recapitulated several important features of endogenous chromatin, including transcription level and intranuclear positioning. Using this system, we showed that tandem arrays of housekeeping gene loci form open large-scale chromatin structure independent of their genomic integration sites, including insertions within centromeric heterochromatin. This BAC-specific large-scale chromatin conformation provided a permissive environment for transcription, as evidenced by the copy-number-dependent and position-independent expression of embedded reporter mini-genes. This led to the development of a novel method for reliable transgene expression in mammalian cells, which should prove useful in a number of therapeutic and scientific applications.
We also demonstrated that BAC transgene arrays can be employed as an effective system for dissecting sequence determinants for intranuclear positioning of gene loci. We showed that in mouse ES and fibroblast cells a BAC carrying a 200 kb human genomic fragment containing the beta-globin locus autonomously targets to the nuclear periphery. Using BAC recombineering, we dissected this 200 kb region and identified two genomic regions sufficient to target the BAC transgenes to the nuclear periphery. This study represents a first step towards elucidating the molecular mechanism for the nuclear peripheral localization of genes in mammalian cells.
THE RELATIONSHIP BETWEEN COLLEGE STUDENTS' NETWORK SOCIAL SUPPORT, NETWORK SECURITY AND SUBJECTIVE WELL-BEING
Type-III two Higgs doublet model plus a pseudoscalar confronted with h→μτ, muon g-2 and dark matter
In this work, we introduce an extra singlet pseudoscalar into the Type-III
two Higgs doublet model (2HDM), which may address a series of problems in
modern particle cosmology. With the existence of a light pseudoscalar, the
h→μτ excess measured at CMS as well as the muon g-2
anomaly could be simultaneously explained within certain parameter spaces that
can also tolerate the data on flavor-violating processes
and Higgs decays gained at the LHC. Within the same
parameter spaces, the dark matter (DM) relic abundance is well accounted for.
Moreover, the recently observed Galactic Center gamma-ray excess (GCE) is
proposed to arise through DM pair annihilations, and in this work, the scenario
of the annihilation being mediated by the pseudoscalar is also addressed.

Comment: 14 pages, 8 figures, version to appear in NP
Analysis of Noisy Evolutionary Optimization When Sampling Fails
In noisy evolutionary optimization, sampling is a common strategy to deal
with noise. Under the sampling strategy, the fitness of a solution is evaluated
multiple times independently (the number of evaluations is called the sample
size), and its true fitness is then approximated by the average of these
evaluations. Previous studies on
sampling are mainly empirical. In this paper, we first investigate the effect
of sample size from a theoretical perspective. By analyzing the (1+1)-EA on the
noisy LeadingOnes problem, we show that as the sample size increases, the
running time can be reduced from exponential to polynomial, but then returns to
exponential. This suggests that a proper sample size is crucial in practice.
Then, we investigate what strategies can work when sampling with any fixed
sample size fails. Through two illustrative examples, we prove that using
parent or offspring populations can be better. Finally, we construct an
artificial noisy example to show that when neither sampling nor populations
are effective, adaptive sampling (i.e., sampling with an adaptive sample size)
can work. This, for the first time, provides theoretical support for the use
of adaptive sampling.
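As a rough illustration of the sampling strategy analyzed above, the following minimal Python sketch runs a (1+1)-EA on LeadingOnes under an assumed one-bit prior-noise model; the noise probability, sample size, and iteration cap here are illustrative choices, not the paper's exact setting:

```python
import random

def leading_ones(x):
    """True LeadingOnes fitness: length of the longest prefix of 1-bits."""
    n = 0
    for bit in x:
        if bit != 1:
            break
        n += 1
    return n

def noisy_eval(x, p=0.3):
    """One-bit prior noise (assumed model): with probability p, evaluate
    a copy of x with one uniformly chosen bit flipped."""
    if random.random() < p:
        y = list(x)
        i = random.randrange(len(y))
        y[i] = 1 - y[i]
        return leading_ones(y)
    return leading_ones(x)

def sampled_fitness(x, m):
    """Sampling strategy: approximate the true fitness by the average
    of m independent noisy evaluations (m is the sample size)."""
    return sum(noisy_eval(x) for _ in range(m)) / m

def one_plus_one_ea(n=10, m=5, max_iters=5000):
    """(1+1)-EA with sampling: flip each bit with probability 1/n and
    keep the offspring if its sampled fitness is no worse."""
    x = [random.randrange(2) for _ in range(n)]
    for _ in range(max_iters):
        if leading_ones(x) == n:  # demo-only stop using the true fitness
            break
        y = [1 - b if random.random() < 1.0 / n else b for b in x]
        if sampled_fitness(y, m) >= sampled_fitness(x, m):
            x = y
    return x

x = one_plus_one_ea()
```

Varying m in this sketch mirrors the trade-off the analysis formalizes: too few samples leave the selection step dominated by noise, while very large m makes each generation prohibitively expensive.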
Running Time Analysis of the (1+1)-EA for Robust Linear Optimization
Evolutionary algorithms (EAs) have found many successful real-world
applications, where the optimization problems are often subject to a wide range
of uncertainties. To understand the practical behaviors of EAs theoretically,
there are a series of efforts devoted to analyzing the running time of EAs for
optimization under uncertainties. Existing studies mainly focus on noisy and
dynamic optimization, while another common type of uncertain optimization,
i.e., robust optimization, has been rarely touched. In this paper, we analyze
the expected running time of the (1+1)-EA solving robust linear optimization
problems (i.e., linear problems under robust scenarios) with a cardinality
constraint. Two common robust scenarios, i.e., deletion-robust and
worst-case, are considered. In particular, we derive tight ranges of the
robust parameter or budget allowing the (1+1)-EA to find an optimal solution
in polynomial running time, which disclose the potential of EAs for robust
optimization.

Comment: 17 pages, 1 table
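For concreteness, one common way to formalize the deletion-robust scenario mentioned above (an assumed formulation for illustration, not necessarily the paper's exact definition) scores a selection by its linear weight sum after an adversary deletes the d most valuable chosen items:

```python
def deletion_robust_value(x, w, d):
    """Deletion-robust linear value (assumed formulation): an adversary
    removes the d largest-weight selected items, and the solution is
    scored by the weight sum of the surviving selection.
    x is a 0/1 selection vector, w the item weights."""
    chosen = sorted((wi for xi, wi in zip(x, w) if xi == 1), reverse=True)
    return sum(chosen[d:])
```

For example, with weights [5, 3, 2, 1], selection [1, 1, 1, 0], and d = 1, the adversary removes the weight-5 item and the robust value is 3 + 2 = 5.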
On the Robustness of Median Sampling in Noisy Evolutionary Optimization
In real-world optimization tasks, the objective (i.e., fitness) function
evaluation is often disturbed by noise due to a wide range of uncertainties.
Evolutionary algorithms (EAs) have been widely applied to tackle noisy
optimization, where reducing the negative effect of noise is a crucial issue.
One popular strategy to cope with noise is sampling, which evaluates the
fitness multiple times and uses the sample average to approximate the true
fitness. In this paper, we introduce median sampling as a noise handling
strategy into EAs, which uses the median of the multiple evaluations to
approximate the true fitness instead of the mean. We theoretically show that
median sampling can reduce the expected running time of EAs from exponential to
polynomial by considering the (1+1)-EA on OneMax under the commonly used
one-bit noise. We also compare mean sampling with median sampling by
considering two specific noise models, suggesting that when the 2-quantile
(i.e., the median) of the noisy fitness increases with the true fitness,
median sampling can be a
better choice. The results provide us with some guidance to employ median
sampling efficiently in practice.

Comment: 19 pages. arXiv admin note: text overlap with arXiv:1810.05045, arXiv:1711.0095
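To make the mean-versus-median comparison above concrete, here is a minimal Python sketch of the two estimators under an assumed one-bit noise model on OneMax; the noise probability and sample size are illustrative assumptions:

```python
import random
from statistics import mean, median

def one_max(x):
    """True OneMax fitness: number of 1-bits."""
    return sum(x)

def noisy_one_max(x, p=0.5):
    """One-bit noise (assumed model): with probability p, one uniformly
    chosen bit is flipped before the solution is evaluated."""
    if random.random() < p:
        y = list(x)
        i = random.randrange(len(y))
        y[i] = 1 - y[i]
        return one_max(y)
    return one_max(x)

def mean_sampling(x, m=15):
    """Mean sampling: average of m independent noisy evaluations."""
    return mean(noisy_one_max(x) for _ in range(m))

def median_sampling(x, m=15):
    """Median sampling: median of m independent noisy evaluations,
    which ignores the occasional heavily perturbed sample."""
    return median(noisy_one_max(x) for _ in range(m))
```

For the all-ones string under this noise model, every noisy value is either n or n-1, so with an odd sample size the median is always one of those two integers, while the mean drifts continuously between them; this is the robustness property the analysis exploits.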