40 research outputs found

    Determining and interpreting correlations in lipidomic networks found in glioblastoma cells

    Background: Intelligent, multitiered quantitative analysis of biological systems is rapidly evolving into a key technique for studying the molecular aspects of cancer. Advances in both measurement and bio-inspired computational techniques have driven the development of lipidomics technologies and offer an excellent opportunity to understand regulation at the molecular level in many diseases. Results: We present computational approaches to study the response of glioblastoma U87 cells to gene- and chemotherapy. To identify distinct biomarkers and differences in therapeutic outcomes, we develop a novel technique based on graph clustering. This technique facilitates the exploration and visualization of co-regulations in glioblastoma lipid profiling data. We investigate the changes in the correlation networks under different therapies and study the success of novel gene therapies targeting aggressive glioblastoma. Conclusions: The novel computational paradigm provides unique “fingerprints” by revealing the intricate interactions at the lipidome level in glioblastoma U87 cells with induced apoptosis (programmed cell death), and thus opens a new window onto biomedical frontiers.
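The correlation-network idea in this abstract can be sketched as follows: correlate lipid abundance profiles pairwise, keep strongly correlated pairs as edges, and read off clusters as connected components. The lipid names, toy profiles, and the 0.9 threshold below are illustrative assumptions, not details taken from the paper.

```python
from itertools import combinations
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(vx * vy)

def correlation_clusters(profiles, threshold=0.9):
    """Link species whose |correlation| >= threshold; return connected components."""
    adj = {name: set() for name in profiles}
    for a, b in combinations(profiles, 2):
        if abs(pearson(profiles[a], profiles[b])) >= threshold:
            adj[a].add(b)
            adj[b].add(a)
    seen, clusters = set(), []
    for start in adj:                      # graph search over the thresholded network
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters

# Toy profiles (hypothetical): PC and PE co-regulated, SM anti-correlated, Cer independent.
profiles = {
    "PC": [1.0, 2.0, 3.0, 4.0],
    "PE": [2.0, 4.0, 6.0, 8.0],
    "SM": [4.0, 3.0, 2.0, 1.0],
    "Cer": [1.0, 3.0, 2.0, 4.0],
}
clusters = correlation_clusters(profiles)
```

Using the absolute correlation as edge criterion keeps anti-correlated species in the same co-regulation cluster, which matters when a therapy suppresses one lipid class while elevating another.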

    Analysis of static and dynamic test-to-code traceability information

    Unit test development has some widely accepted guidelines. Two of them concern the test-code relationship: isolation (unit tests should examine only a single unit) and separation (they should be placed next to that unit). Developers do not always follow these guidelines, but adherence can be checked by investigating the relationship between tests and source code, which is described by test-to-code traceability links. However, these links cannot always be inferred unambiguously from the test and production code. We developed a method that computes traceability links for different aspects and reports Structural Unit Test Smells where the links for the different aspects do not match. The two aspects are the static structure of the code, which reflects the intentions of the developers and testers, and the dynamic coverage, which reveals the actual behavior of the code during test execution. In this study, we evaluated this method on real programs. We manually checked the reported Structural Unit Test Smells to determine whether they are real violations of the unit testing rules. Furthermore, the smells were analyzed to determine their root causes and possible ways of correction.
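The mismatch check described here can be sketched as a comparison of two link maps: a static one derived from naming and placement conventions, and a dynamic one derived from coverage. The link names, smell messages, and data below are invented for illustration and are not the tool from the paper.

```python
def structural_smells(static_links, dynamic_coverage):
    """Report mismatches between static and dynamic test-to-code links.

    static_links: test name -> unit it is linked to by naming/placement.
    dynamic_coverage: test name -> set of units it actually executes.
    """
    smells = []
    for test, unit in static_links.items():
        covered = dynamic_coverage.get(test, set())
        if unit not in covered:
            # The test never executes the unit it is statically tied to.
            smells.append((test, "static target never executed"))
        elif len(covered) > 1:
            # The test exercises more than its single intended unit (isolation violation).
            smells.append((test, "touches units beyond its static target"))
    return smells

# Hypothetical example: QueueTest is named after Queue but never runs it.
static_links = {"StackTest": "Stack", "QueueTest": "Queue"}
dynamic_coverage = {"StackTest": {"Stack"}, "QueueTest": {"Stack", "List"}}
smells = structural_smells(static_links, dynamic_coverage)
```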

    Information-based Event Coreference

    Event coreference is an important module in the event extraction task and has been shown to be difficult to solve. The goal is to link mentions of the same event so that their information can be aggregated. The task can further be split into two slightly different subtasks: Within-Doc Event Coreference and Cross-Doc Event Coreference. Most related publications solve event coreference in two steps: train or design a similarity metric for event mention pairs, then apply a clustering algorithm to the event mention space using that metric as a distance. In this work, we identify two major problems that have been neglected. First, coreference does not imply full event mention similarity, because event mentions tend to contain partial and even complementary information. Second, the order in which event mention pairs are compared can be important: comparing mentions with complete and trustworthy information before those with incomplete and unreliable information can reduce the error rate. We propose Core Similarity, a new argument-based similarity metric, to address the first problem, and two information-based clustering algorithms for the second: Informative-First Clustering (IFC) for the within-doc setting and Topic-Side Event Clustering (TSEC) for the cross-doc setting. These clustering algorithms are based on the notion of Event Information defined in this work. Finally, the EVCO system is delivered with all of these details implemented.
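The informative-first idea can be sketched as: process mentions in decreasing order of how much argument information they carry, and compare pairs only on the arguments both mentions fill. The information score, similarity definition, threshold, and mentions below are illustrative assumptions, not the thesis's actual Core Similarity or IFC definitions.

```python
def info(mention):
    """Crude information score: number of filled argument slots."""
    return sum(1 for v in mention.values() if v is not None)

def shared_arg_similarity(a, b):
    """Agreement ratio over arguments filled in both mentions."""
    shared = [k for k in a if k in b and a[k] is not None and b[k] is not None]
    if not shared:
        return 0.0
    return sum(1 for k in shared if a[k] == b[k]) / len(shared)

def informative_first_clustering(mentions, threshold=0.8):
    """Greedy clustering: most informative mentions seed clusters first."""
    clusters = []
    for m in sorted(mentions, key=info, reverse=True):
        target = None
        for c in clusters:
            if all(shared_arg_similarity(m, x) >= threshold for x in c):
                target = c
                break
        if target is None:
            target = []
            clusters.append(target)
        target.append(m)
    return clusters

# Hypothetical mentions: two complementary views of one attack, one unrelated event.
mentions = [
    {"trigger": "attack", "place": "Kabul", "time": "2019", "agent": None},
    {"trigger": "attack", "place": "Kabul", "time": None, "agent": "group A"},
    {"trigger": "election", "place": "Paris", "time": "2017", "agent": None},
]
clusters = informative_first_clustering(mentions)
```

Note how the first two mentions merge despite differing slots: comparing only shared arguments is what tolerates partial, complementary information.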

    Density-Constrained Graph Clustering


    Dynamic Graph Clustering Combining Modularity and Smoothness


    An Algorithmic Walk from Static to Dynamic Graph Clustering


    Engineering Graph Clustering Algorithms

    Networks in the sense of objects that are related to each other are ubiquitous. In many areas, groups of objects that are particularly densely connected, so-called clusters, are semantically interesting. In this thesis, we investigate two different approaches to partitioning the vertices of a network into clusters. The first quantifies the goodness of a clustering according to the sparsity of the cuts induced by the clusters, whereas the second is based on the recently proposed measure surprise.
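The first quality notion mentioned here, sparsity of the cuts induced by the clusters, can be sketched alongside its counterpart, intra-cluster density. The graph, clustering, and the particular sparsity formula (crossing edges over possible crossing pairs) are illustrative assumptions, not the thesis's exact definitions.

```python
def cut_sparsity(edges, cluster, nodes):
    """Inter-cluster edges divided by the number of possible crossing pairs."""
    outside = nodes - cluster
    if not cluster or not outside:
        return 0.0
    cut = sum(1 for u, v in edges if (u in cluster) != (v in cluster))
    return cut / (len(cluster) * len(outside))

def intra_density(edges, cluster):
    """Edges inside the cluster divided by the number of possible internal pairs."""
    k = len(cluster)
    if k < 2:
        return 1.0
    inside = sum(1 for u, v in edges if u in cluster and v in cluster)
    return inside / (k * (k - 1) / 2)

# Two triangles joined by a single bridge edge (3, 4): an ideal clustering.
edges = [(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (5, 6), (4, 6)]
nodes = {1, 2, 3, 4, 5, 6}
clusters = [{1, 2, 3}, {4, 5, 6}]
scores = [(intra_density(edges, c), cut_sparsity(edges, c, nodes)) for c in clusters]
```

A good cluster under this view scores high on the first number and low on the second: dense inside, sparsely cut from the rest.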

    Guarantees for Efficient and Adaptive Online Learning

    In this thesis, we study the problem of adaptive online learning in several different settings. We first study the problem of predicting graph labelings online, where the labelings are assumed to change over time. We develop the machinery of cluster specialists, which probabilistically exploit any cluster structure in the graph. We give a mistake-bounded algorithm that, surprisingly, requires only O(log n) time per trial for an n-vertex graph, an exponential improvement over existing methods. We then consider the model of non-stationary prediction with expert advice with long-term memory guarantees in the sense of Bousquet and Warmuth, in which we learn a small pool of experts. We consider relative entropy projection-based algorithms, giving a linear-time algorithm that improves on the best known regret bound. We show that such projection updates may be advantageous over previous "weight-sharing" approaches when weight updates come with implicit costs, such as in portfolio optimization. We give an algorithm to compute the relative entropy projection onto the simplex with non-uniform (lower) box constraints in linear time, which may be of independent interest. Finally, we extend the model of long-term memory by introducing a new model of adaptive long-term memory. Here the small pool is assumed to change over time, with the trial sequence partitioned into epochs and a small pool associated with each epoch. We give an efficient linear-time regret-bounded algorithm for this setting and present results in the setting of contextual bandits.
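The projection mentioned above can be sketched from first principles: by the KKT conditions, the relative entropy projection of w onto the simplex with lower box constraints l has the form v_i = max(l_i, c * w_i) for a normalizing constant c. The thesis gives a linear-time algorithm for finding c; the sketch below finds it by simple bisection instead, which is slower but easy to verify, and the input vectors are invented examples.

```python
def kl_project(w, l, iters=100):
    """Minimize KL(v || w) over {v : sum(v) = 1, v_i >= l_i}.

    Uses the structure v_i = max(l_i, c * w_i) and bisects on c.
    """
    assert sum(l) <= 1.0, "lower bounds must be feasible"
    # sum(max(l_i, c*w_i)) is nondecreasing in c; at hi it is >= hi*sum(w) = 1.
    lo, hi = 0.0, 1.0 / sum(w)
    for _ in range(iters):
        c = (lo + hi) / 2
        if sum(max(li, c * wi) for li, wi in zip(l, w)) > 1.0:
            hi = c
        else:
            lo = c
    c = (lo + hi) / 2
    return [max(li, c * wi) for li, wi in zip(l, w)]

# Hypothetical weights with a single active lower bound on the third coordinate.
v = kl_project([0.5, 0.3, 0.2], [0.0, 0.0, 0.3])
```

Coordinates whose bound is active are pinned to l_i, and the remaining mass is shared among the rest in proportion to w, which is exactly the exponential-family shape one expects from an entropy projection.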