84,922 research outputs found

    A novel molecular analysis approach in colorectal cancer suggests new treatment opportunities

    Full text link
    Colorectal cancer (CRC) is a molecularly and clinically heterogeneous disease. In 2015, the Colorectal Cancer Subtyping Consortium classified CRC into four consensus molecular subtypes (CMS), but these CMS have had little impact on clinical practice. The purpose of this study is to deepen the molecular characterization of CRC. A novel approach, based on probabilistic graphical models (PGM) and sparse k-means–consensus cluster layer analyses, was applied to functionally characterize CRC tumors. First, PGM was used to functionally characterize CRC, and then sparse k-means–consensus clustering was used to explore layers of biological information and establish classifications. To this end, gene expression and clinical data from 805 CRC samples across three databases were analyzed. Three different layers based on biological features were identified: adhesion, immune, and molecular. The adhesion layer divided patients into high- and low-adhesion groups, with prognostic value. The immune layer divided patients into immune-high and immune-low groups, according to the expression of immune-related genes. The molecular layer established four molecular groups related to stem cells, metabolism, the Wnt signaling pathway, and extracellular functions. Immune-high patients, with higher expression of immune-related genes and genes involved in the viral mimicry response, may benefit from immunotherapy and viral mimicry-related therapies. Additionally, several possible therapeutic targets have been identified in each molecular group. Therefore, this improved CRC classification could be useful in the search for new therapeutic targets and specific therapeutic strategies in CRC disease.
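
    A minimal sketch of the consensus-clustering idea described above (repeatedly clustering random gene subsets and grouping samples by how often they co-cluster) might look like the following. The synthetic expression matrix, the group count k=4, and all parameters are illustrative assumptions, not the study's actual data, PGM analysis, or settings.

        # Hedged sketch: consensus clustering of gene-expression samples.
        # Synthetic data; not the study's pipeline or parameters.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        n_samples, n_genes, k = 120, 500, 4          # illustrative sizes
        centers = rng.normal(0.0, 2.0, size=(k, n_genes))
        labels_true = rng.integers(0, k, size=n_samples)
        X = centers[labels_true] + rng.normal(0.0, 1.0, size=(n_samples, n_genes))

        # Cluster many random gene subsets and count how often each pair of
        # samples lands in the same cluster (co-association matrix).
        co = np.zeros((n_samples, n_samples))
        n_runs = 50
        for run in range(n_runs):
            genes = rng.choice(n_genes, size=n_genes // 5, replace=False)
            lab = KMeans(n_clusters=k, n_init=5, random_state=run).fit_predict(X[:, genes])
            co += (lab[:, None] == lab[None, :])
        co /= n_runs

        # Final grouping: cluster the rows of the consensus matrix.
        consensus_labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(co)
        print(np.bincount(consensus_labels))          # sizes of the recovered groups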

    High-Girth Matrices and Polarization

    Full text link
    The girth of a matrix is the least number of linearly dependent columns, in contrast to the rank, which is the largest number of linearly independent columns. This paper considers the construction of high-girth matrices, whose probabilistic girth is close to their rank. Random matrices can be used to show the existence of high-girth matrices with constant relative rank, but the construction is non-explicit. This paper uses a polar-like construction to obtain a deterministic and efficient construction of high-girth matrices for arbitrary fields and relative ranks. Applications to coding and sparse recovery are discussed.
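
    To make the definition concrete, a brute-force girth computation over the reals could look like the sketch below. The paper works over arbitrary fields, with a probabilistic notion of girth and an efficient polar-like construction; this exhaustive toy only illustrates the definition itself.

        # Hedged sketch: girth = size of the smallest linearly dependent column set.
        import numpy as np
        from itertools import combinations

        def matrix_girth(A, tol=1e-9):
            """Smallest number of columns that are linearly dependent; None if all independent."""
            n_cols = A.shape[1]
            for size in range(1, n_cols + 1):
                for cols in combinations(range(n_cols), size):
                    if np.linalg.matrix_rank(A[:, cols], tol=tol) < size:
                        return size
            return None

        # Example: the third column equals the sum of the first two, so the girth is 3,
        # while the rank of the whole matrix is also 3.
        A = np.array([[1.0, 0.0, 1.0, 0.0],
                      [0.0, 1.0, 1.0, 0.0],
                      [0.0, 0.0, 0.0, 1.0]])
        print(matrix_girth(A), np.linalg.matrix_rank(A))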

    Representation Learning: A Review and New Perspectives

    Full text link
    The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide, to varying degrees, the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning.
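
    As a toy illustration of "learning a representation", a minimal linear auto-encoder trained by gradient descent is sketched below. The review covers far richer models (deep, probabilistic, manifold-based); the data, dimensions, and learning rate here are made-up assumptions.

        # Hedged sketch: linear auto-encoder learning a low-dimensional code.
        import numpy as np

        rng = np.random.default_rng(0)
        n, d, h = 200, 20, 3                      # samples, input dim, code dim (assumed)
        Z = rng.normal(size=(n, h))               # hidden factors of variation
        X = Z @ rng.normal(size=(h, d))           # data lies near a 3-dimensional subspace
        X += 0.01 * rng.normal(size=X.shape)

        W_enc = rng.normal(scale=0.1, size=(d, h))   # encoder weights
        W_dec = rng.normal(scale=0.1, size=(h, d))   # decoder weights
        lr = 1e-3
        for step in range(2000):
            code = X @ W_enc                      # learned representation
            recon = code @ W_dec
            err = recon - X                       # reconstruction error
            # Gradients of the mean squared reconstruction loss.
            g_dec = code.T @ err / n
            g_enc = X.T @ (err @ W_dec.T) / n
            W_enc -= lr * g_enc
            W_dec -= lr * g_dec
        print(float(np.mean(err ** 2)))           # error should be lower than at the start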

    A note on Probably Certifiably Correct algorithms

    Get PDF
    Many optimization problems of interest are known to be intractable, and while there are often heuristics that work well on typical instances, it is usually not easy to determine a posteriori whether the optimal solution was found. In this short note, we discuss algorithms that not only solve the problem on typical instances but also provide a posteriori certificates of optimality: probably certifiably correct (PCC) algorithms. As an illustrative example, we present a fast PCC algorithm for minimum bisection under the stochastic block model and briefly discuss other examples.
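
    A sketch of the "fast heuristic plus a posteriori certificate" pattern for minimum bisection under a stochastic block model is given below. The paper's certificate comes from a convex relaxation; on this tiny instance the check is done by exhaustive enumeration instead, purely to illustrate the pattern, and the sizes and edge probabilities are illustrative assumptions.

        # Hedged sketch: heuristic bisection + toy optimality check on a small SBM graph.
        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(1)
        n, p_in, p_out = 12, 0.9, 0.1                 # two planted blocks of size n/2
        blocks = np.array([0] * (n // 2) + [1] * (n // 2))
        P = np.where(blocks[:, None] == blocks[None, :], p_in, p_out)
        A = (rng.random((n, n)) < P).astype(float)
        A = np.triu(A, 1)
        A = A + A.T                                   # symmetric simple graph

        def cut_size(A, side):
            side = np.asarray(side, dtype=bool)
            return A[side][:, ~side].sum()

        # Heuristic: split according to the second eigenvector of the Laplacian,
        # balanced by putting the n/2 smallest entries on one side.
        L = np.diag(A.sum(1)) - A
        vals, vecs = np.linalg.eigh(L)
        side = np.zeros(n, dtype=bool)
        side[np.argsort(vecs[:, 1])[: n // 2]] = True
        heuristic_cut = cut_size(A, side)

        # Toy "certificate": exhaustive check over all balanced bisections.
        best = min(cut_size(A, np.isin(np.arange(n), c))
                   for c in combinations(range(n), n // 2))
        print(heuristic_cut, best, heuristic_cut == best)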

    Probabilistic Spectral Sparsification In Sublinear Time

    Full text link
    In this paper, we introduce a variant of spectral sparsification, called probabilistic $(\varepsilon,\delta)$-spectral sparsification. Roughly speaking, it preserves the cut value of any cut $(S,S^{c})$ up to a $1\pm\varepsilon$ multiplicative error and a $\delta|S|$ additive error. We show how to produce a probabilistic $(\varepsilon,\delta)$-spectral sparsifier with $O(n\log n/\varepsilon^{2})$ edges in $\tilde{O}(n/\varepsilon^{2}\delta)$ time for unweighted undirected graphs. This gives the fastest known sublinear-time algorithms for several cut problems on unweighted undirected graphs, such as: an $\tilde{O}(n/\mathrm{OPT}+n^{3/2+t})$-time $O(\sqrt{\log n/t})$-approximation algorithm for the sparsest cut problem and the balanced separator problem, and an $n^{1+o(1)}/\varepsilon^{4}$-time approximate minimum $s$-$t$ cut algorithm with an $\varepsilon n$ additive error.
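
    To illustrate what the $(\varepsilon,\delta)$ cut-preservation guarantee means, the sketch below builds a sparsifier by naive uniform edge sampling (not the paper's sublinear-time construction) and compares cut values against a $\varepsilon\cdot\mathrm{cut}(S) + \delta|S|$ error budget. The graph, sampling rate, and error parameters are illustrative assumptions.

        # Hedged sketch: uniform edge sampling and cut-value comparison.
        import numpy as np

        rng = np.random.default_rng(0)
        n, p_edge, keep = 200, 0.1, 0.3           # illustrative sizes
        A = (rng.random((n, n)) < p_edge).astype(float)
        A = np.triu(A, 1)
        A = A + A.T                               # unweighted undirected graph

        # Keep each edge with probability `keep`, reweight kept edges by 1/keep.
        mask = np.triu(rng.random((n, n)) < keep, 1)
        H = np.where(mask, A / keep, 0.0)
        H = H + H.T

        def cut(W, S):
            S = np.asarray(S, dtype=bool)
            return W[S][:, ~S].sum()

        eps, delta = 0.5, 2.0                     # illustrative error parameters
        for _ in range(5):
            S = rng.random(n) < 0.3               # a random vertex set S
            c, c_sparse = cut(A, S), cut(H, S)
            budget = eps * c + delta * S.sum()
            print(f"|cut_H - cut_G| = {abs(c_sparse - c):7.1f}   budget = {budget:7.1f}")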

    Optimising Spatial and Tonal Data for PDE-based Inpainting

    Full text link
    Some recent methods for lossy signal and image compression store only a few selected pixels and fill in the missing structures by inpainting with a partial differential equation (PDE). Suitable operators include the Laplacian, the biharmonic operator, and edge-enhancing anisotropic diffusion (EED). The quality of such approaches depends substantially on the selection of the data that is kept. Optimising this data in the domain and codomain gives rise to challenging mathematical problems that are addressed in our work. In the 1D case, we prove results that provide insights into the difficulty of this problem, and we give evidence that a splitting into spatial and tonal (i.e. function value) optimisation hardly deteriorates the results. In the 2D setting, we present generic algorithms that achieve a high reconstruction quality even if the specified data is very sparse. To optimise the spatial data, we use a probabilistic sparsification, followed by a nonlocal pixel exchange that avoids getting trapped in bad local optima. After this spatial optimisation we perform a tonal optimisation that modifies the function values in order to reduce the global reconstruction error. For homogeneous diffusion inpainting, this comes down to a least squares problem for which we prove that it has a unique solution. We demonstrate that it can be found efficiently with a gradient descent approach that is accelerated with fast explicit diffusion (FED) cycles. Our framework allows us to specify the desired density of the inpainting mask a priori. Moreover, it is more generic than other data optimisation approaches for the sparse inpainting problem, since it can also be extended to nonlinear inpainting operators such as EED. This is exploited to achieve reconstructions with state-of-the-art quality. We also give an extensive literature survey on PDE-based image compression methods.
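
    To make the "store a few pixels, fill in the rest with a PDE" idea concrete, the sketch below performs 1D homogeneous diffusion inpainting by solving a linear system: kept samples are fixed, and the discrete Laplacian is required to vanish everywhere else. The mask is chosen uniformly at random rather than by the paper's probabilistic sparsification and nonlocal pixel exchange, and the signal, size, and density are illustrative assumptions.

        # Hedged sketch: 1D homogeneous diffusion inpainting from a sparse mask.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 100
        x = np.linspace(0.0, 1.0, n)
        signal = np.sin(2 * np.pi * x) + 0.3 * (x > 0.5)   # toy 1D "image"

        mask = rng.random(n) < 0.1                          # keep ~10% of the samples
        mask[0] = mask[-1] = True                           # pin the boundary

        # Linear system: u[i] = signal[i] on the mask, and the discrete Laplacian
        # u[i-1] - 2*u[i] + u[i+1] = 0 at every non-mask point (diffusion steady state).
        M = np.zeros((n, n))
        b = np.zeros(n)
        for i in range(n):
            if mask[i]:
                M[i, i] = 1.0
                b[i] = signal[i]
            else:
                M[i, i - 1], M[i, i], M[i, i + 1] = 1.0, -2.0, 1.0

        u = np.linalg.solve(M, b)
        print("kept samples:", int(mask.sum()),
              " reconstruction MSE:", float(np.mean((u - signal) ** 2)))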