Network Analysis of Breast Cancer Progression and Reversal Using a Tree-Evolving Network Algorithm
The HMT3522 progression series of human breast cells has been used to discover how tissue architecture, the microenvironment and signaling molecules affect breast cell growth and behavior. However, much remains to be elucidated about the malignant and phenotypic-reversion behaviors of the HMT3522-T4-2 cells of this series. We employed a "pan-cell-state" strategy and jointly analyzed microarray profiles obtained from different state-specific cell populations of this progression and reversion model of the breast cells using a tree-lineage multi-network inference algorithm, Treegl. We found that different breast cell states contain distinct gene networks. The network specific to non-malignant HMT3522-S1 cells is dominated by genes involved in normal processes, whereas the T4-2-specific network is enriched with cancer-related genes. The networks specific to various conditions of the reverted T4-2 cells are enriched with pathways suggestive of compensatory effects, consistent with clinical data showing patient resistance to anticancer drugs. We validated the findings using an external dataset, and showed that aberrant expression values of certain hubs in the identified networks are associated with poor clinical outcomes. Thus, analysis of the various reversion conditions (including non-reverted) of HMT3522 cells using Treegl can serve as a good model system to study drug effects on breast cancer. © 2014 Parikh et al.
A path-oriented encoding evolutionary algorithm for network coding resource minimization
Network coding is an emerging telecommunication technique in which any intermediate node is allowed to recombine incoming data if necessary. This technique helps to increase throughput, but very likely at the cost of a considerable amount of computational overhead due to the packet recombination performed (i.e., coding operations). Hence, it is of practical importance to reduce coding operations while retaining the benefits that network coding brings. In this paper, we propose a novel evolutionary algorithm (EA) to minimize the number of coding operations involved. Different from state-of-the-art EAs, which all use binary encodings for the problem, our EA is based on a path-oriented encoding. In this new encoding scheme, each chromosome is represented by a union of paths originating from the source and terminating at one of the receivers. Employing path-oriented encoding leads to a search space where all solutions are feasible, which fundamentally facilitates a more efficient search. Based on the new encoding, we develop three basic operators, namely initialization, crossover and mutation. In addition, we design a local search operator to improve solution quality and hence the performance of our EA. The simulation results demonstrate that our EA significantly outperforms state-of-the-art algorithms in terms of global exploration and computational time.
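The encoding idea in this abstract can be illustrated with a minimal sketch. The graph topology, node names, and operator details below are invented for illustration and are not the paper's algorithm: a chromosome is a union of source-to-receiver paths (so every individual is feasible by construction), and the fitness to minimize is the number of intermediate nodes that must recombine packets arriving over more than one incoming edge.

```python
import random

# Toy directed network: source "s", receivers "t1" and "t2".  The topology
# and node names are invented for illustration; they are not from the paper.
GRAPH = {
    "s": ["a", "b"],
    "a": ["c", "t1"],
    "b": ["c", "t2"],
    "c": ["t1", "t2"],
}
SOURCE, RECEIVERS = "s", ["t1", "t2"]

def random_path(src, dst, rng):
    """Depth-first random walk from src to dst, avoiding revisits."""
    stack = [(src, [src])]
    while stack:
        node, path = stack.pop()
        if node == dst:
            return path
        nbrs = list(GRAPH.get(node, []))
        rng.shuffle(nbrs)
        for n in nbrs:
            if n not in path:
                stack.append((n, path + [n]))
    raise ValueError("no path from %s to %s" % (src, dst))

def random_chromosome(rng):
    """A chromosome is a union of paths, one per receiver -- by construction
    every chromosome is a feasible solution."""
    return {t: random_path(SOURCE, t, rng) for t in RECEIVERS}

def coding_nodes(chrom):
    """Intermediate nodes reached over more than one distinct incoming edge
    must recombine packets, i.e. perform coding operations."""
    in_edges = {}
    for path in chrom.values():
        for u, v in zip(path, path[1:]):
            in_edges.setdefault(v, set()).add(u)
    return {v for v, preds in in_edges.items()
            if v != SOURCE and v not in RECEIVERS and len(preds) > 1}

def crossover(c1, c2, rng):
    """Exchange whole receiver paths between parents; offspring stay feasible."""
    return {t: (c1 if rng.random() < 0.5 else c2)[t] for t in RECEIVERS}

rng = random.Random(0)
pop = [random_chromosome(rng) for _ in range(20)]
pop += [crossover(rng.choice(pop), rng.choice(pop), rng) for _ in range(20)]
best = min(pop, key=lambda c: len(coding_nodes(c)))
```

Because crossover only swaps complete paths, no repair step is needed, which is the feasibility advantage the abstract attributes to path-oriented encoding over binary encodings.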
Anti-epileptic effect of Ganoderma lucidum polysaccharides by inhibition of intracellular calcium accumulation and stimulation of expression of CaMKII α in epileptic hippocampal neurons
Purpose: To investigate the mechanism of the anti-epileptic effect of Ganoderma lucidum polysaccharides (GLP), the changes in intracellular calcium and CaMKII α expression in a model of epileptic neurons were investigated.
Method: Primary hippocampal neurons were divided into: 1) Control group: neurons were cultured with Neurobasal medium for 3 hours; 2) Model group I: neurons were incubated with Mg2+-free medium for 3 hours; 3) Model group II: neurons were incubated with Mg2+-free medium for 3 hours, then cultured with normal medium for a further 3 hours; 4) GLP group I: neurons were incubated with Mg2+-free medium containing GLP (0.375 mg/ml) for 3 hours; 5) GLP group II: neurons were incubated with Mg2+-free medium for 3 hours, then cultured with normal culture medium containing GLP for a further 3 hours. CaMKII α protein expression was assessed by Western blot. Ca2+ turnover in neurons was assessed using Fluo-3/AM, which was added to the replacement medium, and Ca2+ turnover was observed under a laser scanning confocal microscope.
Results: CaMKII α expression in the model groups was lower than in the control group, whereas in the GLP groups it was higher than in the corresponding model groups. Ca2+ fluorescence intensity in GLP group I was significantly lower than in model group I after 30 seconds, while in GLP group II it was significantly reduced compared to model group II after 5 minutes.
Conclusion: GLP may inhibit calcium overload and promote CaMKII α expression to protect epileptic neurons.
Assessing architectural evolution: A case study
This is the post-print version of the article. The official published version can be accessed from the link below. Copyright © 2011 Springer. This paper proposes to use a historical perspective on generic laws, principles, and guidelines, like Lehman's software evolution laws and Martin's design principles, in order to achieve a multi-faceted process and structural assessment of a system's architectural evolution. We present a simple structural model with associated historical metrics and visualizations that could form part of an architect's dashboard. We perform such an assessment for the Eclipse SDK, as a case study of a large, complex, and long-lived system for which sustained effective architectural evolution is paramount. The twofold aim of checking generic principles on a well-known system is, on the one hand, to see whether there are certain lessons that could be learned for best practice of architectural evolution, and, on the other hand, to gain more insight into the applicability of such principles. We find that while the Eclipse SDK does follow several of the laws and principles, there are some deviations, and we discuss areas of architectural improvement and limitations of the assessment approach.
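One concrete metric of the kind such a dashboard could track is Martin's instability, I = Ce / (Ca + Ce), computed per module and per release. The sketch below is illustrative only: the dependency snapshots are invented, whereas a real assessment would extract them from each Eclipse SDK release.

```python
# Sketch of one "architect's dashboard" metric: Robert C. Martin's
# instability I = Ce / (Ca + Ce) per module, tracked across releases.
# Ce = efferent (outgoing) dependencies, Ca = afferent (incoming) ones.
# The dependency snapshots below are hypothetical.

def instability(deps):
    """deps: {module: set of modules it depends on}.  Returns {module: I}."""
    ca = {m: 0 for m in deps}                 # count incoming dependencies
    for m, outs in deps.items():
        for t in outs:
            if t in ca:
                ca[t] += 1
    return {m: (len(outs) / (ca[m] + len(outs)) if (ca[m] + len(outs)) else 0.0)
            for m, outs in deps.items()}

# Two hypothetical release snapshots of a three-module system.
releases = {
    "3.0": {"ui": {"core"}, "core": set(), "debug": {"core", "ui"}},
    "3.1": {"ui": {"core"}, "core": set(), "debug": {"core"}},
}
history = {r: instability(d) for r, d in releases.items()}
```

Plotting each module's I across releases shows whether heavily-depended-upon modules stay stable (low I) over time, which is the kind of historical trend the assessment in this paper examines.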
Mixing of Active and Sterile Neutrinos
We investigate the mixing of neutrinos in the νMSM (neutrino Minimal Standard Model), which is the Standard Model extended by three right-handed neutrinos. In particular, we study the elements of the mixing matrix between the three left-handed (active) neutrinos and the two sterile neutrinos that are responsible for the seesaw mechanism generating the suppressed masses of the active neutrinos as well as for the generation of the baryon asymmetry of the universe (BAU). It is shown that the mixing of the electron neutrino can be suppressed by many orders of magnitude compared with that of the muon and tau neutrinos when the Chooz angle is large in the normal hierarchy of active neutrino masses. We then discuss neutrinoless double beta decay in this framework, taking into account the contributions not only from the active neutrinos but also from all three sterile neutrinos. It is shown that the two seesaw sterile neutrinos give substantial, destructive contributions when their masses are smaller than a few hundred MeV, and as a result the mixing receives no stringent constraint from the current bounds on this decay. Finally, we discuss the impact of the obtained results on direct searches for the sterile neutrinos in meson decays when they are lighter than the pion mass. We show that an allowed region for sterile neutrinos with such small masses exists in the normal hierarchy case even if the current bound on their lifetimes from big bang nucleosynthesis is imposed. It is also pointed out that direct searches using the electron channels of pion and kaon decays might miss such sterile neutrinos, since the branching ratios can be extremely small due to the cancellation in the electron mixing, but a search using the muon channel of kaon decays can cover the whole allowed region by improving the measurement of the branching ratio by a factor of 5. Comment: 30 pages, 32 figures
Characteristics of transposable element exonization within human and mouse
Insertion of transposed elements within mammalian genes is thought to be an
important contributor to mammalian evolution and speciation. Insertion of
transposed elements into introns can lead to their activation as alternatively
spliced cassette exons, an event called exonization. Elucidation of the
evolutionary constraints that have shaped fixation of transposed elements
within human and mouse protein coding genes and subsequent exonization is
important for understanding how the exonization process has affected
transcriptome and proteome complexities. Here we show that exonization of
transposed elements is biased towards the beginning of the coding sequence in
both human and mouse genes. Analysis of single nucleotide polymorphisms (SNPs)
revealed that exonization of transposed elements can be population-specific,
implying that exonizations may enhance divergence and lead to speciation. SNP
density analysis revealed differences between Alu and other transposed
elements. Finally, we identified cases of primate-specific Alu elements that
depend on RNA editing for their exonization. These results shed light on TE
fixation and the exonization process within human and mouse genes. Comment: 11 pages, 4 figures
A Comprehensive Map of Mobile Element Insertion Polymorphisms in Humans
As a consequence of the accumulation of insertion events over evolutionary time, mobile elements now comprise nearly half of the human genome. The Alu, L1, and SVA mobile element families are still duplicating, generating variation between individual genomes. Mobile element insertions (MEI) have been identified as causes of genetic diseases, including hemophilia, neurofibromatosis, and various cancers. Here we present a comprehensive map of 7,380 MEI polymorphisms, detected with two methods from 1000 Genomes Project whole-genome sequencing data of 185 samples in three major populations. This catalog enables us to systematically study mutation rates, population segregation, genomic distribution, and functional properties of MEI polymorphisms, and to compare MEI to SNP variation from the same individuals. Population allele frequencies of MEI and SNPs are described, broadly, by the same neutral ancestral processes despite vastly different mutation mechanisms and rates, except in coding regions, where MEI are virtually absent, presumably due to strong negative selection. A direct comparison of MEI and SNP diversity levels suggests a differential mobile element insertion rate among populations.
PepDist: A New Framework for Protein-Peptide Binding Prediction based on Learning Peptide Distance Functions
BACKGROUND: Many different aspects of cellular signalling, trafficking and targeting mechanisms are mediated by interactions between proteins and peptides. Representative examples are MHC-peptide complexes in the immune system. Developing computational methods for protein-peptide binding prediction is therefore an important task with applications to vaccine and drug design. METHODS: Previous learning approaches address the binding prediction problem using traditional margin-based binary classifiers. In this paper we propose PepDist: a novel approach for predicting binding affinity based on learning peptide-peptide distance functions. Moreover, we suggest learning a single peptide-peptide distance function over an entire family of proteins (e.g. MHC class I). This distance function can be used to compute the affinity of a novel peptide to any of the proteins in the given family. In order to learn these peptide-peptide distance functions, we formalize the problem as a semi-supervised learning problem with partial information in the form of equivalence constraints. Specifically, we propose to use DistBoost [1,2], a semi-supervised distance learning algorithm. RESULTS: We compare our method to various state-of-the-art binding prediction algorithms on MHC class I and MHC class II datasets. In almost all cases, our method outperforms all of its competitors. One of the major advantages of our novel approach is that it can also learn an affinity function over proteins for which only small amounts of labeled peptides exist. In these cases, our method's performance gain, when compared to other computational methods, is even more pronounced. We have recently made available the PepDist webserver, which provides binding predictions of peptides to 35 different MHC class I alleles. The webserver is powered by a prediction engine trained using the framework presented in this paper.
CONCLUSION: The results obtained suggest that learning a single distance function over an entire family of proteins achieves higher prediction accuracy than learning a set of binary classifiers for each of the proteins separately. We also show the importance of obtaining information on experimentally determined non-binders. Learning with real non-binders generalizes better than learning with randomly generated peptides that are assumed to be non-binders. This suggests that information about non-binding peptides should also be published and made publicly available
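The distance-based prediction scheme described above can be sketched in a few lines. In this toy version, the learned DistBoost metric is replaced by a plain Hamming distance, and the allele names and peptide sequences are invented; the point is only the mechanism of scoring a novel peptide by its distance to an allele's known binders.

```python
# Toy illustration of the PepDist idea: score a novel peptide against a
# protein (allele) by its distance to that allele's known binders.  The
# learned peptide-peptide metric is replaced by a Hamming distance here,
# and all sequences/allele names are hypothetical.

KNOWN_BINDERS = {
    "alleleA": ["ALDKFYSWV", "ALDRFYTWV"],
    "alleleB": ["KQWDNRTLY", "KQWENRTMY"],
}

def hamming(p, q):
    """Stand-in for a learned peptide-peptide distance function."""
    assert len(p) == len(q)
    return sum(a != b for a, b in zip(p, q))

def affinity(peptide, allele):
    """Distance to the nearest known binder, mapped to a score in (0, 1]
    so that larger means stronger predicted binding."""
    d = min(hamming(peptide, b) for b in KNOWN_BINDERS[allele])
    return 1.0 / (1.0 + d)

query = "ALDKFYTWV"
scores = {a: affinity(query, a) for a in KNOWN_BINDERS}
best = max(scores, key=scores.get)
```

Because one distance function serves all alleles in the family, an allele with only a handful of known binders still gets usable predictions, which matches the advantage the authors report for proteins with few labeled peptides.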
Reduced Ordered Binary Decision Diagram with Implied Literals: A New Knowledge Compilation Approach
Knowledge compilation is an approach to tackling the computational intractability of general reasoning problems. According to this approach, knowledge bases are converted off-line into a target compilation language that is tractable for on-line querying. The reduced ordered binary decision diagram (ROBDD) is one of the most influential target languages. We generalize ROBDD by associating a set of implied literals with each node; the new language is called reduced ordered binary decision diagram with implied literals (ROBDD-L). We then discuss a family of subsets of ROBDD-L, called ROBDD-i, with precisely i implied literals (0 \leq i \leq \infty). In particular, ROBDD-0 is isomorphic to ROBDD, while ROBDD-\infty requires that each node be associated with as many implied literals as possible. We show that ROBDD-i is unique for a given variable order, and that ROBDD-\infty is the most succinct subset of ROBDD-L and can meet most of the querying requirements in the knowledge compilation map. Finally, we propose an ROBDD-i compilation algorithm for any i and an ROBDD-\infty compilation algorithm. Based on them, we implement an ROBDD-L package called BDDjLu and draw some conclusions from preliminary experimental results: ROBDD-\infty is clearly smaller than ROBDD for all benchmarks; ROBDD-\infty is smaller than d-DNNF for benchmarks whose compilation results are relatively small; and it appears better to transform ROBDD-\infty into FBDD and ROBDD than to compile the benchmarks directly. Comment: 18 pages, 13 figures
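The intuition behind implied literals can be shown with a minimal hash-consed BDD. In the sketch below (a toy, not the paper's BDDjLu package or its compilation algorithm), a decision node whose low or high child is the 0-terminal forces its variable to one value, so that variable can be recorded as an implied literal instead of kept as a decision node, shrinking the diagram.

```python
# Hash-consed ROBDD over variables 0..n-1, plus a pass that extracts
# "implied literals": a decision node with a 0-terminal child forces its
# variable to one value, so the literal can be attached to the node rather
# than kept as a decision -- the intuition behind ROBDD-L/ROBDD-\infty.
# This is a toy sketch, not the paper's BDDjLu package.

FALSE, TRUE = "F", "T"
UNIQUE = {}  # unique table: (var, lo, hi) -> node, giving canonicity

def node(var, lo, hi):
    if lo == hi:                       # reduction rule: drop redundant tests
        return lo
    return UNIQUE.setdefault((var, lo, hi), (var, lo, hi))

def build(f, n, i=0, env=()):
    """Shannon-expand a Python predicate f over n boolean variables."""
    if i == n:
        return TRUE if f(env) else FALSE
    return node(i,
                build(f, n, i + 1, env + (False,)),
                build(f, n, i + 1, env + (True,)))

def implied_literals(u):
    """Peel decision nodes with a 0-terminal child off the top of the
    diagram; each contributes an implied literal (var, value)."""
    lits = []
    while isinstance(u, tuple):
        var, lo, hi = u
        if lo == FALSE:
            lits.append((var, True)); u = hi
        elif hi == FALSE:
            lits.append((var, False)); u = lo
        else:
            break
    return lits, u

# f = x0 AND (x1 OR x2): the root's x0-decision is really the implied
# literal x0 = 1, leaving only the diagram for (x1 OR x2).
f = lambda e: e[0] and (e[1] or e[2])
lits, rest = implied_literals(build(f, 3))
```

Attaching literals this way at every node, as ROBDD-\infty does, is what lets it stay no larger, and often strictly smaller, than the plain ROBDD of the same function.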