Slug-based epithelial-mesenchymal transition gene signature is associated with prolonged time to recurrence in glioblastoma
Background
We previously identified a precise stage-associated gene expression signature of coordinately expressed genes, including the transcription factor Slug (SNAI2) and other epithelial-mesenchymal transition (EMT) markers, present in samples from publicly available gene expression datasets in multiple cancer types. The expression levels of the co-expressed genes vary in a continuous and coordinated manner across the samples, ranging from absence of expression to strong co-expression of all genes. These data suggest that tumor cells may pass through an EMT-like process of mesenchymal transition to varying degrees.

Findings
Here we show that this signature in glioblastoma multiforme (GBM) is associated with time to recurrence following initial treatment. By analyzing data from The Cancer Genome Atlas (TCGA), we found that GBM patients who responded to therapy and had a long time to recurrence had low levels of the signature in their tumor samples (P = 3×10^-7). We also found that the signature is strongly correlated in gliomas with the putative stem cell marker CD44, and is highly enriched among the genes differentially expressed in glioblastomas vs. lower-grade gliomas.

Conclusions 
Our results suggest that a long delay before tumor recurrence is associated with absence of the mesenchymal transition signature, raising the possibility that inhibiting this transition might improve the durability of therapy in glioma patients.
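The correlation analysis mentioned in the findings (signature vs. CD44 expression) can be illustrated with a plain Pearson correlation computation. The sketch below is a minimal, self-contained Python example; the per-sample signature scores and CD44 levels are invented stand-ins, not TCGA data, and it only shows the kind of statistic such an analysis would compute.

```python
import math
import random

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic stand-in for per-sample data: a composite EMT-signature score
# (e.g. mean log-expression of SNAI2 and co-expressed genes) and a CD44
# level that tracks it with noise, mimicking the reported association.
random.seed(0)
signature = [random.uniform(0.0, 10.0) for _ in range(200)]
cd44 = [s + random.gauss(0.0, 1.5) for s in signature]

r = pearson(signature, cd44)
print(f"r = {r:.2f}")
```

In a real analysis the same statistic would be computed on normalized expression values from the TCGA samples, typically alongside a permutation or t-based P-value.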
Current-Induced Step Bending Instability on Vicinal Surfaces
We model an apparent instability seen in recent experiments on current-induced
step bunching on Si(111) surfaces using a generalized 2D Burton-Cabrera-Frank (BCF) model,
where adatoms have a diffusion bias parallel to the step edges and there is an
attachment barrier at the step edge. We find a new linear instability with
novel step patterns. Monte Carlo simulations on a solid-on-solid model are used
to study the instability beyond the linear regime. Comment: 4 pages, 4 figures
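The solid-on-solid Monte Carlo approach mentioned above can be sketched in miniature. The following is a toy 1D solid-on-solid Metropolis simulation with a crude drift term standing in for the electromigration bias; it is far simpler than the 2D step model of the paper, and the parameters (J, T, BIAS, lattice size) are illustrative assumptions, but it shows the basic propose-and-accept structure of such simulations.

```python
import math
import random

random.seed(1)

L = 32         # columns in a 1D solid-on-solid interface (periodic)
J = 1.0        # energy cost per unit of height difference between neighbours
T = 0.8        # temperature, with k_B = 1
BIAS = 0.1     # crude electromigration-like drift favouring hops in +x
h = [0] * L    # column heights

def total_energy(h):
    """SOS energy: sum of |height difference| over nearest-neighbour bonds."""
    return J * sum(abs(h[i] - h[(i + 1) % L]) for i in range(L))

def sweep(h):
    """One Metropolis sweep of mass-conserving single-unit transfers."""
    for _ in range(L):
        i = random.randrange(L)
        direction = random.choice((-1, +1))
        j = (i + direction) % L
        trial = list(h)
        trial[i] -= 1              # move one height unit from column i ...
        trial[j] += 1              # ... to the neighbouring column j
        dE = total_energy(trial) - total_energy(h)
        dE -= BIAS * direction     # drift term tilts transfers toward +x
        if dE <= 0 or random.random() < math.exp(-dE / T):
            h[:] = trial

for _ in range(200):
    sweep(h)

# Squared interface width (height variance) of the final configuration.
width2 = sum(x * x for x in h) / L - (sum(h) / L) ** 2
print("interface width^2:", width2)
```

Note that the moves conserve total mass, so the mean height stays fixed while the interface roughens; the 2D step-flow model in the paper adds attachment barriers and a bias parallel to the step edges on top of this kind of dynamics.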
Dual Monte Carlo and Cluster Algorithms
We discuss the development of cluster algorithms from the viewpoint of
probability theory and not from the usual viewpoint of a particular model. By
using the perspective of probability theory, we detail the nature of a cluster
algorithm, make explicit the assumptions embodied in all clusters of which we
are aware, and define the construction of free cluster algorithms. We also
illustrate these procedures by rederiving the Swendsen-Wang algorithm,
presenting the details of the loop algorithm for a worldline simulation of a
quantum spin-1/2 model, and proposing a free cluster version of the
Swendsen-Wang replica method for the random Ising model. How the principle of
maximum entropy might be used to aid the construction of cluster algorithms is
also discussed. Comment: 25 pages, 4 figures, to appear in Phys.Rev.
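The Swendsen-Wang algorithm rederived in the paper proceeds by stochastically freezing aligned neighbours into clusters and then flipping each cluster independently. A minimal sketch for the 2D ferromagnetic Ising model follows; the lattice size, temperature, and sweep count are arbitrary illustrative choices, and the union-find bookkeeping is one of several equivalent ways to identify clusters.

```python
import math
import random

random.seed(2)

L = 8                      # linear lattice size (L x L, periodic boundaries)
N = L * L
J = 1.0                    # ferromagnetic coupling
T = 1.5                    # below T_c ~ 2.269, so clusters are large
p_add = 1.0 - math.exp(-2.0 * J / T)   # SW bond probability for aligned spins
spin = [random.choice((-1, 1)) for _ in range(N)]

def neighbours(i):
    """Right and down neighbours, so each undirected bond is visited once."""
    x, y = i % L, i // L
    yield ((x + 1) % L) + y * L
    yield x + ((y + 1) % L) * L

parent = list(range(N))    # union-find structure for cluster identification

def find(a):
    while parent[a] != a:
        parent[a] = parent[parent[a]]   # path halving
        a = parent[a]
    return a

def sw_sweep():
    parent[:] = range(N)
    # 1. Place bonds between aligned neighbours with probability p_add.
    for i in range(N):
        for j in neighbours(i):
            if spin[i] == spin[j] and random.random() < p_add:
                parent[find(i)] = find(j)
    # 2. Flip every resulting cluster independently with probability 1/2.
    flip = {}
    for i in range(N):
        r = find(i)
        if r not in flip:
            flip[r] = random.random() < 0.5
        if flip[r]:
            spin[i] = -spin[i]

for _ in range(100):
    sw_sweep()

m = abs(sum(spin)) / N
print("|magnetisation| per spin:", m)
```

Because whole clusters flip at once, the dynamics decorrelates in a few sweeps even near criticality, which is the point of cluster methods over single-spin Metropolis updates.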
On generalized cluster algorithms for frustrated spin models
Standard Monte Carlo cluster algorithms have proven to be very effective for
many different spin models, however they fail for frustrated spin systems.
Recently a generalized cluster algorithm was introduced that works extremely
well for the fully frustrated Ising model on a square lattice, by placing bonds
between sites based on information from plaquettes rather than links of the
lattice. Here we study some properties of this algorithm and some variants of
it. We introduce a practical methodology for constructing a generalized cluster
algorithm for a given spin model, and apply this method to some
other frustrated Ising models. We find that such algorithms work well for
simple fully frustrated Ising models in two dimensions, but appear to work
poorly or not at all for more complex models such as spin glasses. Comment: 34
pages in RevTeX. No figures included. A compressed postscript file for the
paper with figures can be obtained via anonymous ftp to minerva.npac.syr.edu
in users/paulc/papers/SCCS-527.ps.Z. Syracuse University NPAC technical
report SCCS-527.
Safe Zero-Shot Model-Based Learning and Control: A Wasserstein Distributionally Robust Approach
This paper explores distributionally robust zero-shot model-based learning
and control using Wasserstein ambiguity sets. Conventional model-based
reinforcement learning algorithms struggle to guarantee feasibility throughout
the online learning process. We address this open challenge with the following
approach. Using a stochastic model-predictive control (MPC) strategy, we
augment safety constraints with affine random variables corresponding to the
instantaneous empirical distributions of modeling error. We obtain these
distributions by evaluating model residuals in real time throughout the online
learning process. By optimizing over the worst-case modeling-error distribution
defined within a Wasserstein ambiguity set centered about our empirical
distributions, we can approach the nominal constraint boundary in a provably
safe way. We validate the performance of our approach using a case study of
lithium-ion battery fast charging, a relevant and safety-critical energy
systems control application. Our results demonstrate marked improvements in
safety compared to a basic learning model-predictive controller, with
constraints satisfied at every instance during online learning and control. Comment: In review for CDC2
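For a scalar modeling-error term, the worst-case expectation over a 1-Wasserstein ball centred at the empirical residual distribution has a simple closed form: the empirical mean shifted by the ball's radius (since the identity map is 1-Lipschitz, this follows from Kantorovich duality). The sketch below illustrates the resulting constraint tightening; the residual values, radius, and voltage limit are invented for illustration, and this is not the paper's full stochastic-MPC formulation.

```python
def dr_tightened_bound(residuals, radius):
    """Worst-case mean of a scalar modeling-error term over the 1-Wasserstein
    ball of the given radius centred at the empirical distribution of the
    residuals. For the identity (1-Lipschitz) loss this supremum equals the
    empirical mean shifted by the radius."""
    empirical_mean = sum(residuals) / len(residuals)
    return empirical_mean + radius

# Hypothetical residuals of a learned battery model (e.g. voltage-prediction
# errors collected online) and a safety constraint v(x) + error <= v_max.
residuals = [0.02, -0.01, 0.03, 0.00, 0.01]
eps = 0.05        # ambiguity-set radius (illustrative)
v_max = 4.2       # hypothetical terminal-voltage limit (V)
margin = dr_tightened_bound(residuals, eps)

# The MPC would then enforce v(x) <= v_max - margin instead of v(x) <= v_max,
# shrinking the tightening as more residual data sharpens the ambiguity set.
print("tightened limit:", v_max - margin)
```

The affine random variables in the paper's constraints generalize this scalar picture: each constraint is tightened by the worst-case shift the ambiguity set permits, which is what yields feasibility guarantees throughout learning.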
Synchronization and Redundancy: Implications for Robustness of Neural Learning and Decision Making
Learning and decision making in the brain are key processes critical to
survival, and yet are processes implemented by non-ideal biological building
blocks which can impose significant error. We explore quantitatively how the
brain might cope with this inherent source of error by taking advantage of two
ubiquitous mechanisms, redundancy and synchronization. In particular we
consider a neural process whose goal is to learn a decision function by
implementing a nonlinear gradient dynamics. The dynamics, however, are assumed
to be corrupted by perturbations modeling the error which might be incurred due
to limitations of the biology, intrinsic neuronal noise, and imperfect
measurements. We show that error, and the associated uncertainty surrounding a
learned solution, can be controlled in large part by trading off
synchronization strength among multiple redundant neural systems against the
noise amplitude. The impact of the coupling between such redundant systems is
quantified by the spectrum of the network Laplacian, and we discuss the role of
network topology in synchronization and in reducing the effect of noise. A
range of situations in which the mechanisms we model arise in brain science are
discussed, and we draw attention to experimental evidence suggesting that
cortical circuits capable of implementing the computations of interest here can
be found on several scales. Finally, simulations comparing theoretical bounds
to the relevant empirical quantities show that the theoretical estimates we
derive can be tight. Comment: Preprint, accepted for publication in Neural Computation
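The noise-reduction mechanism can be illustrated with a toy simulation: several noisy gradient systems minimising f(x) = x^2/2, diffusively coupled all-to-all, compared against a single uncoupled system. All parameters below are illustrative assumptions rather than values from the paper; the point is only that coupling pins each unit to the ensemble mean, whose effective noise amplitude shrinks with redundancy.

```python
import random

random.seed(3)

def simulate(n_systems, coupling, sigma, steps=20000, dt=0.01):
    """Euler-Maruyama simulation of n coupled noisy gradient systems on
    f(x) = x^2 / 2 with all-to-all diffusive coupling and independent noise.
    Returns the time-averaged squared error of one representative unit."""
    x = [1.0] * n_systems
    mean_sq = 0.0
    for t in range(steps):
        xbar = sum(x) / n_systems
        # -xi: gradient descent; coupling pulls each unit toward the mean.
        x = [xi + dt * (-xi + coupling * n_systems * (xbar - xi))
             + sigma * random.gauss(0.0, dt ** 0.5)
             for xi in x]
        if t >= steps // 2:            # time-average after a burn-in period
            mean_sq += x[0] ** 2
    return mean_sq / (steps - steps // 2)

single = simulate(1, 0.0, sigma=0.5)
redundant = simulate(10, 1.0, sigma=0.5)
print("mean-square error, single system: ", single)
print("mean-square error, 10 coupled:    ", redundant)
```

The coupled unit tracks the ensemble mean, whose noise variance scales like 1/n, so its steady-state error is substantially below that of the isolated system; the network-Laplacian spectrum discussed above governs how quickly and tightly this synchronization occurs for general topologies.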