10,209 research outputs found

    Label-free isolation of prostate circulating tumor cells using Vortex microfluidic technology.

    There has been increased interest in utilizing non-invasive "liquid biopsies" to identify biomarkers for cancer prognosis and monitoring, and to isolate genetic material that can predict response to targeted therapies. Circulating tumor cells (CTCs) have emerged as such a biomarker providing both genetic and phenotypic information about tumor evolution, potentially from both primary and metastatic sites. Currently available CTC isolation approaches, including immunoaffinity and size-based filtration, have focused on high capture efficiency but with lower purity and often long and manual sample preparation, which limits the use of captured CTCs for downstream analyses. Here, we describe the use of the microfluidic Vortex Chip for size-based isolation of CTCs from 22 patients with advanced prostate cancer and, from an enumeration study on 18 of these patients, find that we can capture CTCs with high purity (from 1.74 to 37.59%) and efficiency (from 1.88 to 93.75 CTCs/7.5 mL) in less than 1 h. Interestingly, more atypical large circulating cells were identified in five age-matched healthy donors (46-77 years old; 1.25-2.50 CTCs/7.5 mL) than in five healthy donors <30 years old (21-27 years old; 0.00 CTC/7.5 mL). Using a threshold calculated from the five age-matched healthy donors (3.37 CTCs/mL), we identified CTCs in 80% of the prostate cancer patients. We also found that a fraction of the cells collected (11.5%) did not express epithelial prostate markers (cytokeratin and/or prostate-specific antigen) and that some instead expressed markers of epithelial-mesenchymal transition, i.e., vimentin and N-cadherin. We also show that the purity and DNA yield of isolated cells are amenable to targeted amplification and next-generation sequencing, without whole genome amplification, identifying unique mutations in 10 of 15 samples and 0 of 4 healthy samples.
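    The abstract reports a simple decision rule: a sample is called CTC-positive when its CTC concentration exceeds the threshold derived from the age-matched healthy donors (3.37 CTCs/mL). A minimal sketch of that comparison follows; the example concentrations and the helper name are hypothetical, and the abstract does not state how the threshold itself was derived.

```python
# Minimal sketch (not the authors' pipeline): call a sample CTC-positive when its
# CTC concentration exceeds the healthy-donor-derived threshold quoted in the abstract.
THRESHOLD_CTC_PER_ML = 3.37   # threshold reported in the abstract (CTCs/mL)

def is_ctc_positive(ctc_per_ml: float) -> bool:
    """Return True if the CTC concentration exceeds the healthy-donor threshold."""
    return ctc_per_ml > THRESHOLD_CTC_PER_ML

# Hypothetical concentrations (CTCs/mL), purely for illustration.
for conc in (0.3, 2.5, 8.0):
    print(f"{conc:.2f} CTCs/mL -> positive: {is_ctc_positive(conc)}")
```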

    Fine-grained Search Space Classification for Hard Enumeration Variants of Subset Problems

    We propose a simple, powerful, and flexible machine learning framework for (i) reducing the search space of computationally difficult enumeration variants of subset problems and (ii) augmenting existing state-of-the-art solvers with informative cues arising from the input distribution. We instantiate our framework for the problem of listing all maximum cliques in a graph, a central problem in network analysis, data mining, and computational biology. We demonstrate the practicality of our approach on real-world networks with millions of vertices and edges by not only retaining all optimal solutions, but also aggressively pruning the input instance size, resulting in several-fold speedups of state-of-the-art algorithms. Finally, we explore the limits of scalability and robustness of our proposed framework, suggesting that supervised learning is viable for tackling NP-hard problems in practice.
    Comment: AAAI 201
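    The abstract does not spell out the features, labels, or model used, so the sketch below is only an illustration of the general recipe: train a vertex classifier on graphs where maximum cliques are known, prune vertices that the classifier deems unlikely to belong to any maximum clique, and run exact enumeration on the much smaller residual graph. The structural features, random-forest classifier, planted-clique training graphs, and pruning threshold are all assumptions, not the paper's framework.

```python
# Illustrative sketch (assumed features/classifier/threshold, not the paper's method):
# prune vertices that a learned classifier deems unlikely to lie in a maximum clique,
# then enumerate maximum cliques exactly on the smaller residual graph.
import itertools
import numpy as np
import networkx as nx
from sklearn.ensemble import RandomForestClassifier

def vertex_features(G):
    deg, core, clust = dict(G.degree()), nx.core_number(G), nx.clustering(G)
    return {v: [deg[v], core[v], clust[v]] for v in G}

def max_clique_labels(G):
    # Label 1 iff the vertex appears in at least one maximum clique (training graphs only).
    cliques = list(nx.find_cliques(G))
    omega = max(len(c) for c in cliques)
    hit = set().union(*(set(c) for c in cliques if len(c) == omega))
    return {v: int(v in hit) for v in G}

def sparse_graph_with_planted_clique(n, p, clique_nodes, seed):
    G = nx.erdos_renyi_graph(n, p, seed=seed)
    G.add_edges_from(itertools.combinations(clique_nodes, 2))
    return G

# Train on a graph whose maximum clique is known (the planted one).
G_train = sparse_graph_with_planted_clique(150, 0.03, range(6), seed=0)
f_tr, y_tr = vertex_features(G_train), max_clique_labels(G_train)
X_tr = np.array([f_tr[v] for v in G_train])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, np.array([y_tr[v] for v in G_train]))

# Prune a new instance conservatively, then enumerate maximum cliques on what remains.
G_test = sparse_graph_with_planted_clique(150, 0.03, range(10, 16), seed=1)
f_te = vertex_features(G_test)
proba = clf.predict_proba(np.array([f_te[v] for v in G_test]))[:, 1]
keep = [v for v, p in zip(G_test, proba) if p >= 0.05]
G_pruned = G_test.subgraph(keep)
cliques = list(nx.find_cliques(G_pruned)) if len(G_pruned) else []
omega = max((len(c) for c in cliques), default=0)
print("kept", len(keep), "of", len(G_test), "vertices; maximum cliques:",
      [sorted(c) for c in cliques if len(c) == omega])
```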

    A Comparative Review of Dimension Reduction Methods in Approximate Bayesian Computation

    Approximate Bayesian computation (ABC) methods make use of comparisons between simulated and observed summary statistics to overcome the problem of computationally intractable likelihood functions. As the practical implementation of ABC requires computations based on vectors of summary statistics rather than full data sets, a central question is how to derive low-dimensional summary statistics from the observed data with minimal loss of information. In this article we provide a comprehensive review and comparison of the performance of the principal methods of dimension reduction proposed in the ABC literature. The methods are split into three non-mutually exclusive classes consisting of best subset selection methods, projection techniques, and regularization. In addition, we introduce two new methods of dimension reduction. The first is a best subset selection method based on Akaike and Bayesian information criteria, and the second uses ridge regression as a regularization procedure. We illustrate the performance of these dimension reduction techniques through the analysis of three challenging models and data sets.
    Comment: Published at http://dx.doi.org/10.1214/12-STS406 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org)
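    As a concrete illustration of why dimension reduction matters here, the toy sketch below runs ABC rejection on a deliberately over-complete set of summary statistics after projecting them to one dimension with a regression of the parameter on the summaries, fitted with a ridge penalty. This is only in the spirit of the projection and regularization approaches the article discusses; the toy model, prior, summaries, and tolerance are assumptions, not the article's procedures.

```python
# Toy sketch (assumed model, prior, summaries, tolerance): ABC rejection after a
# ridge-regression projection of an over-complete summary vector down to one dimension.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def simulate(theta, n=50):
    # Toy model: n i.i.d. Normal(theta, 1) observations.
    return rng.normal(theta, 1.0, size=n)

def summaries(x):
    # Deliberately redundant 5-D summary vector; the projection reduces it to 1-D.
    return np.array([x.mean(), np.median(x), x.std(), x.min(), x.max()])

x_obs = simulate(2.0)                 # "observed" data with true theta = 2.0
s_obs = summaries(x_obs)

# Pilot stage: regress theta on the summaries to learn a linear projection s -> E[theta | s].
theta_pilot = rng.uniform(-5, 5, size=2000)
S_pilot = np.array([summaries(simulate(t)) for t in theta_pilot])
proj = Ridge(alpha=1.0).fit(S_pilot, theta_pilot)

# ABC rejection using distances in the projected (1-D) space.
theta_prop = rng.uniform(-5, 5, size=20000)
S_prop = np.array([summaries(simulate(t)) for t in theta_prop])
dist = np.abs(proj.predict(S_prop) - proj.predict(s_obs.reshape(1, -1)))
eps = np.quantile(dist, 0.01)         # keep the closest 1% of proposals
posterior = theta_prop[dist <= eps]
print("posterior mean %.2f, sd %.2f" % (posterior.mean(), posterior.std()))
```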

    Tensorizing Neural Networks

    Deep neural networks currently demonstrate state-of-the-art performance in several domains. At the same time, models of this class are very demanding in terms of computational resources. In particular, a large amount of memory is required by the commonly used fully-connected layers, making it hard to use the models on low-end devices and preventing further increases in model size. In this paper we convert the dense weight matrices of the fully-connected layers to the Tensor Train format, such that the number of parameters is reduced by a huge factor while the expressive power of the layer is preserved. In particular, for the Very Deep VGG networks we report a compression factor of up to 200,000 for the dense weight matrix of a fully-connected layer, leading to a compression factor of up to 7 for the whole network.
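    A TT-matrix represents a large dense weight matrix as a chain of small four-way cores, so the parameter count scales with the sum of the core sizes rather than the product of the matrix dimensions. The NumPy sketch below reconstructs a 1024 x 1024 matrix from randomly initialized cores and compares parameter counts; the mode sizes, TT-ranks, and contraction routine are illustrative assumptions rather than the paper's implementation, which applies the layer directly from the cores without ever forming the dense matrix.

```python
# Minimal NumPy sketch (assumed shapes/ranks): a dense weight matrix represented as a
# TT-matrix, i.e. a chain of cores G_k with shape (r_{k-1}, m_k, n_k, r_k).
import numpy as np

def tt_matrix_to_dense(cores):
    """Contract TT-matrix cores into the full dense matrix (for illustration only)."""
    result = cores[0][0]                 # drop the leading rank-1 index -> (m1, n1, r1)
    for core in cores[1:]:
        # result: (M, N, r), core: (r, m, n, s) -> (M*m, N*n, s)
        result = np.einsum('MNr,rmns->MmNns', result, core)
        Mp, m, Np, n, s = result.shape
        result = result.reshape(Mp * m, Np * n, s)
    return result[..., 0]                # trailing rank is 1

# A 1024 x 1024 layer with row/column modes (4,4,4,4,4) x (4,4,4,4,4) and TT-rank 8.
row_modes, col_modes, rank = [4] * 5, [4] * 5, 8
ranks = [1] + [rank] * 4 + [1]
rng = np.random.default_rng(0)
cores = [0.1 * rng.standard_normal((ranks[k], row_modes[k], col_modes[k], ranks[k + 1]))
         for k in range(5)]

W = tt_matrix_to_dense(cores)
dense_params = W.size
tt_params = sum(c.size for c in cores)
print(W.shape, "dense params:", dense_params, "TT params:", tt_params,
      "compression: %.0fx" % (dense_params / tt_params))
```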