10 research outputs found
An integrated analysis of miRNA and gene copy numbers in xenografts of Ewing's sarcoma
Abstract
Background: Xenografts have been shown to provide a suitable source of tumor tissue for molecular analysis in the absence of primary tumor material. We utilized Ewing's sarcoma (ES) xenograft series for integrated microarray analyses to identify novel biomarkers.
Method: Microarray technology (array comparative genomic hybridization (aCGH) and microRNA arrays) was used to screen and identify copy number changes and differentially expressed miRNAs of 34 and 14 passages, respectively. Incubated cells used for xenografting (passage 0) were considered to represent the primary tumor. Four important differentially expressed miRNAs (miR-31, miR-31*, miR-145, miR-106) were selected for further validation by real-time polymerase chain reaction (RT-PCR). Integrated analysis of the aCGH and miRNA data was performed on 14 xenograft passages by bioinformatic methods.
Results: The most frequent losses of DNA copy number were detected at 9p21.3 and 16q, and the most frequent gains at 8, 15, 17q21.32-qter and 1q21.1-qter. These alterations were consistently present in all tumor passages. aCGH profiles of the xenograft passages of each series resembled their corresponding primary tumors (passage 0). MiR-21, miR-31, miR-31*, miR-106b, miR-145, miR-150*, miR-371-5p, miR-557 and miR-598 showed recurrently altered expression. These miRNAs were predicted to regulate many ES-associated genes, such as genes of the IGF1 pathway, EWSR1, FLI1 and their fusion gene (EWS-FLI1). Twenty differentially expressed miRNAs were pinpointed in regions carrying altered copy numbers.
Conclusion: In the present study, ES xenografts were successfully applied for integrated microarray analyses. Our findings showed expression changes of miRNAs that were predicted to regulate many ES-associated genes, such as IGF1 pathway genes, FLI1, EWSR1, and the EWS-FLI1 fusion gene.
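To make the integration step described above more concrete (placing differentially expressed miRNAs into copy-number-altered regions), the following Python sketch simply overlaps miRNA loci with aCGH gain/loss calls. The coordinates, fold changes and data structures are hypothetical placeholders for illustration only; they are not the study's data or pipeline.

```python
# A minimal sketch, not the authors' pipeline: overlap differentially
# expressed miRNAs with copy-number-altered regions from aCGH.
# All coordinates and values below are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Region:
    chrom: str
    start: int
    end: int
    change: str  # "gain" or "loss"

# Hypothetical aCGH calls (placeholder coordinates)
cna_regions = [
    Region("chr9", 21_000_000, 22_500_000, "loss"),   # roughly 9p21.3
    Region("chr17", 48_000_000, 83_000_000, "gain"),  # roughly 17q21.32-qter
]

# Hypothetical miRNA loci and expression fold changes (placeholders)
mirnas = {
    "miR-31":  ("chr9", 21_512_000, -2.1),
    "miR-21":  ("chr17", 59_841_000, 1.8),
    "miR-145": ("chr5", 149_430_000, -1.6),
}

def overlaps(chrom, pos, region):
    """True if a miRNA locus falls inside a copy-number-altered region."""
    return chrom == region.chrom and region.start <= pos <= region.end

# Report miRNAs whose altered expression coincides with a CNA region
for name, (chrom, pos, fold_change) in mirnas.items():
    for region in cna_regions:
        if overlaps(chrom, pos, region):
            print(f"{name}: {fold_change:+.1f} fold, inside {region.change} at "
                  f"{region.chrom}:{region.start}-{region.end}")
```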
Context assisted information extraction
EThOS - Electronic Theses Online Service (United Kingdom)
Stochastic processes for canonical correlation analysis
We consider two stochastic process methods for performing canonical correlation analysis (CCA). The first uses a Gaussian Process formulation of regression in which the current projection of one data set serves as the target for the other, and the procedure is then repeated in the opposite direction. The second uses a Dirichlet process mixture of Gaussian models in which the Gaussian models are determined by probabilistic CCA [1]. The latter method is more computationally intensive but has the advantages of non-parametric approaches.
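To make the first (alternating Gaussian Process regression) idea concrete, here is a minimal Python sketch of one possible reading of it: each view's current one-dimensional projection serves as the GP regression target for the other view, and the roles are swapped until the projections stabilise. The toy data, RBF kernel, noise level and normalisation are assumptions for illustration, not the paper's implementation.

```python
# A minimal sketch of the alternating GP-regression idea, assuming an RBF
# kernel and unit-variance normalisation of each projection; synthetic data.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Two toy views sharing a latent signal (placeholder data)
n = 200
latent = rng.normal(size=n)
X = np.column_stack([latent + 0.3 * rng.normal(size=n), rng.normal(size=n)])
Y = np.column_stack([rng.normal(size=n), -latent + 0.3 * rng.normal(size=n)])

def standardize(v):
    """Centre and scale a projection to unit variance to avoid collapse."""
    return (v - v.mean()) / v.std()

# Start from an arbitrary projection of Y
proj_y = standardize(Y[:, 0])

for _ in range(5):
    # Regress Y's projection on X; the GP posterior mean is X's new projection
    gp_x = GaussianProcessRegressor(kernel=RBF(), alpha=1e-2).fit(X, proj_y)
    proj_x = standardize(gp_x.predict(X))
    # Swap roles: regress X's projection on Y
    gp_y = GaussianProcessRegressor(kernel=RBF(), alpha=1e-2).fit(Y, proj_x)
    proj_y = standardize(gp_y.predict(Y))

print("canonical correlation estimate:", np.corrcoef(proj_x, proj_y)[0, 1])
```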
Transfer learning using a nonparametric sparse topic model
In many domains data items are represented by vectors of counts: count data arises, for example, in bioinformatics or in the analysis of text documents represented as word count vectors. However, often the amount of data available from an interesting data source is too small to model the data source well. When several data sets are available from related sources, exploiting their similarities by transfer learning can improve the resulting models compared to modeling the sources independently. We introduce a Bayesian generative transfer learning model which represents similarity across document collections by sparse sharing of latent topics controlled by an Indian buffet process. Unlike a prominent previous model, hierarchical Dirichlet process (HDP) based multi-task learning, our model decouples topic sharing probability from topic strength, making sharing of low-strength topics easier. In experiments, our model outperforms the HDP approach both on synthetic data and in the first of two case studies on text collections, and achieves performance similar to the HDP approach in the second case study.
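The key structural point, decoupling whether a topic is shared by a collection from how strong that topic is, can be illustrated with a small sketch: a binary collection-by-topic sharing matrix drawn from an Indian buffet process, with per-topic strengths drawn separately. This illustrates only the decoupling; the IBP concentration parameter and the Gamma strength prior are assumptions, not the paper's model specification.

```python
# A minimal sketch of the sharing/strength decoupling, assuming an IBP with
# concentration alpha and an independent Gamma prior on topic strengths.

import numpy as np

rng = np.random.default_rng(0)

def sample_ibp(num_collections, alpha=2.0):
    """Sample a binary sharing matrix Z (collections x topics) from an IBP."""
    Z = np.zeros((num_collections, 0), dtype=int)
    for i in range(num_collections):
        if Z.shape[1] > 0:
            # Existing topics: collection i shares topic k with prob m_k / (i + 1)
            probs = Z[:i].sum(axis=0) / (i + 1)
            Z[i] = rng.random(Z.shape[1]) < probs
        # New topics introduced by this collection
        new = rng.poisson(alpha / (i + 1))
        if new > 0:
            block = np.zeros((num_collections, new), dtype=int)
            block[i] = 1
            Z = np.hstack([Z, block])
    return Z

Z = sample_ibp(num_collections=4, alpha=2.0)
# Strengths are drawn independently of Z, so a widely shared topic can be weak
strengths = rng.gamma(shape=1.0, scale=1.0, size=Z.shape[1])

print("sharing matrix Z (collections x topics):")
print(Z)
print("topic strengths (drawn independently of sharing):")
print(np.round(strengths, 2))
```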