Spatial clustering and common regulatory elements correlate with coordinated gene expression
Many cellular responses to surrounding cues require temporally concerted
transcriptional regulation of multiple genes. In prokaryotic cells, a
single-input-module motif with one transcription factor regulating multiple
target genes can generate coordinated gene expression. In eukaryotic cells,
transcriptional activity of a gene is affected by not only transcription
factors but also the epigenetic modifications and three-dimensional chromosome
structure of the gene. To examine how local gene environment and transcription
factor regulation are coupled, we performed a combined analysis of time-course
RNA-seq data of TGF-β treated MCF10A cells and related epigenomic and
Hi-C data. Using Dynamic Regulatory Events Miner (DREM), we clustered
differentially expressed genes based on gene expression profiles and associated
transcription factors. Genes in each class have similar temporal gene
expression patterns and share common transcription factors. Next, we defined a
set of linear and radial distribution functions, as used in statistical
physics, to measure the distributions of genes within a class both spatially
and linearly along the genomic sequence. Remarkably, genes within the same
class show a significantly higher tendency to be spatially close than those
belonging to different classes do, despite sometimes being separated by tens of
megabases (Mb) along the genomic sequence. Analyses extended to the process of
mouse nervous system development arrived at similar conclusions. Future studies
will be able to test whether this spatial organization of chromosomes
contributes to concerted gene expression.
Comment: 30 pages, 9 figures, accepted in PLoS Computational Biology
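The within-class versus between-class proximity comparison can be illustrated with a minimal sketch. The coordinates below are synthetic stand-ins for Hi-C-derived 3D gene positions, and the mean-pairwise-distance measure is a simplified proxy for the paper's distribution functions, not its exact definition:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3D coordinates for genes in two expression classes; in the
# study these would come from a Hi-C-derived chromosome structure.
class_a = rng.normal(loc=0.0, scale=1.0, size=(20, 3))
class_b = rng.normal(loc=3.0, scale=1.0, size=(20, 3))

def mean_pairwise_distance(points_x, points_y=None):
    """Mean Euclidean distance within one point set, or between two sets."""
    if points_y is None:
        diffs = points_x[:, None, :] - points_x[None, :, :]
        d = np.linalg.norm(diffs, axis=-1)
        # average over unique pairs, excluding self-distances on the diagonal
        return d[np.triu_indices(len(points_x), k=1)].mean()
    diffs = points_x[:, None, :] - points_y[None, :, :]
    return np.linalg.norm(diffs, axis=-1).mean()

within = mean_pairwise_distance(class_a)
between = mean_pairwise_distance(class_a, class_b)
# Same-class genes sit spatially closer than genes from different classes.
print(within < between)
```

In the paper this comparison is done per class against a null of unrelated genes; the sketch only shows the distance bookkeeping.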
A Numerical Approach to Solving an Inverse Heat Conduction Problem Using the Levenberg-Marquardt Algorithm
This chapter is intended to provide a numerical algorithm involving the combined use of the Levenberg-Marquardt algorithm and the Galerkin finite element method for estimating the diffusion coefficient in an inverse heat conduction problem (IHCP). In the present study, the functional form of the diffusion coefficient is unknown a priori. The unknown diffusion coefficient is approximated in polynomial form, and the present numerical algorithm is employed to find the solution. Numerical experiments are presented to show the efficiency of the proposed method.
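A minimal sketch of the estimation step, assuming a quadratic polynomial form for the diffusion coefficient. The synthetic "measurements" stand in for data that would come from the Galerkin finite element forward solve; the coefficients and noise level are illustrative, not from the chapter:

```python
import numpy as np
from scipy.optimize import least_squares

# Assumed true polynomial coefficients of the diffusion coefficient k(T).
true_coeffs = np.array([1.0, 0.5, -0.1])

def diffusion(T, c):
    """Polynomial approximation k(T) = c0 + c1*T + c2*T^2."""
    return c[0] + c[1] * T + c[2] * T**2

# Synthetic measurements replacing the FEM forward-problem output.
T_samples = np.linspace(0.0, 2.0, 25)
rng = np.random.default_rng(1)
measured = diffusion(T_samples, true_coeffs) + rng.normal(0.0, 0.01, T_samples.size)

def residuals(c):
    # Mismatch between model prediction and measurements, driven to zero.
    return diffusion(T_samples, c) - measured

# method="lm" selects the Levenberg-Marquardt algorithm in SciPy.
fit = least_squares(residuals, x0=np.zeros(3), method="lm")
print(np.round(fit.x, 2))
```

The real IHCP is harder because each residual evaluation requires solving the heat equation numerically; the sketch only shows how Levenberg-Marquardt closes the loop around the polynomial coefficients.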
Careful at Estimation and Bold at Exploration
Exploration strategies in continuous action spaces are often heuristic because
the action set is infinite, and such heuristics do not support general
conclusions. In prior work, it has been shown that policy-based exploration is
beneficial for continuous action space in deterministic policy reinforcement
learning(DPRL). However, policy-based exploration in DPRL has two prominent
issues: aimless exploration and policy divergence, and the policy gradient for
exploration is only sometimes helpful due to inaccurate estimation. Based on
the double-Q function framework, we introduce a novel exploration strategy to
mitigate these issues, separate from the policy gradient. We first propose the
greedy Q softmax update schema for Q value update. The expected Q value is
derived by weighted summing the conservative Q value over actions, and the
weight is the corresponding greedy Q value. Greedy Q takes the maximum value of
the two Q functions, and conservative Q takes the minimum value of the two
different Q functions. For practicality, this theoretical basis is then
extended to combine action exploration with the Q value update, under the
premise that we have a surrogate policy that behaves like the exploration
policy. In practice, we construct such an exploration policy with a
few sampled actions, and to meet the premise, we learn such a surrogate policy
by minimizing the KL divergence between the target policy and the exploration
policy constructed by the conservative Q. We evaluate our method on the Mujoco
benchmark and demonstrate superior performance compared to previous
state-of-the-art methods across various environments, particularly in the most
complex Humanoid environment.
Comment: 20 pages
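The greedy Q softmax update described above can be sketched in a few lines. The critic values are random placeholders and the temperature is an assumed hyperparameter; only the max/min split and the greedy-softmax weighting follow the abstract:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical Q estimates from two critics at a few sampled actions.
q1 = rng.normal(size=5)
q2 = rng.normal(size=5)

greedy_q = np.maximum(q1, q2)        # greedy Q: max of the two critics
conservative_q = np.minimum(q1, q2)  # conservative Q: min of the two critics

# Softmax weights over actions from the greedy Q; tau is assumed.
tau = 1.0
weights = np.exp(greedy_q / tau)
weights /= weights.sum()

# Expected Q: conservative values weighted by the greedy-softmax distribution.
expected_q = np.dot(weights, conservative_q)
print(expected_q)
```

Since the weights form a probability distribution, the expected Q is a convex combination of the conservative values, so it stays below the optimistic greedy estimate while still emphasizing high-value actions.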