
    Planar Induced Subgraphs of Sparse Graphs

    We show that every graph has an induced pseudoforest of at least n - m/4.5 vertices, an induced partial 2-tree of at least n - m/5 vertices, and an induced planar subgraph of at least n - m/5.2174 vertices. These results are constructive, implying linear-time algorithms to find the respective induced subgraphs. We also show that the size of the largest K_h-minor-free graph in a given graph can sometimes be at most n - m/6 + o(m). Comment: Accepted by Graph Drawing 2014; to appear in the Journal of Graph Algorithms and Applications.
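    For concreteness, the following Python sketch evaluates the sizes these bounds guarantee for a graph with n vertices and m edges; the helper is hypothetical and only restates the abstract's formulas.

        def guaranteed_induced_sizes(n: int, m: int) -> dict:
            """Lower bounds from the abstract on the number of vertices of the
            largest induced subgraph of each class, for any graph with n
            vertices and m edges (illustrative helper only)."""
            return {
                "pseudoforest": n - m / 4.5,
                "partial 2-tree": n - m / 5,
                "planar": n - m / 5.2174,
            }

        # Example: a graph with 100 vertices and 150 edges is guaranteed an
        # induced planar subgraph on at least 100 - 150/5.2174 ≈ 71.25,
        # i.e. at least 72 vertices.
        print(guaranteed_induced_sizes(100, 150))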

    TAF6δ orchestrates an apoptotic transcriptome profile and interacts functionally with p53

    Background: TFIID is a multiprotein complex that plays a pivotal role in the regulation of RNA polymerase II (Pol II) transcription owing to its core promoter recognition and co-activator functions. TAF6 is a core TFIID subunit whose splice variants include the major TAF6α isoform, which is ubiquitously expressed, and the inducible TAF6δ. In contrast to TAF6α, TAF6δ is a pro-apoptotic isoform with a 10 amino acid deletion in its histone fold domain that abolishes its interaction with TAF9. TAF6δ expression can dictate life versus death decisions of human cells.
    Results: Here we define the impact of endogenous TAF6δ expression on the global transcriptome landscape. TAF6δ was found to orchestrate a transcription profile that included statistically significant enrichment of genes of apoptotic function. Interestingly, gene expression patterns controlled by TAF6δ share similarities with, but are not equivalent to, those reported to change following TAF9 and/or TAF9b depletion. Finally, because TAF6δ regulates certain p53 target genes, we tested and demonstrated a physical and functional interaction between TAF6δ and p53.
    Conclusion: Together our data define a TAF6δ-driven apoptotic gene expression program and show crosstalk between the p53 and TAF6δ pathways.

    GRNsight: a web application and service for visualizing models of small- to medium-scale gene regulatory networks

    GRNsight is a web application and service for visualizing models of gene regulatory networks (GRNs). A gene regulatory network (GRN) consists of genes, transcription factors, and the regulatory connections between them, which govern the level of expression of mRNA and protein from genes. The original motivation came from our efforts to perform parameter estimation and forward simulation of the dynamics of a differential equations model of a small GRN with 21 nodes and 31 edges. We wanted a quick and easy way to visualize the weight parameters from the model, which represent the direction and magnitude of the influence of a transcription factor on its target gene, so we created GRNsight.
    GRNsight automatically lays out either an unweighted or weighted network graph based on an Excel spreadsheet containing an adjacency matrix where regulators are named in the columns and target genes in the rows, a Simple Interaction Format (SIF) text file, or a GraphML XML file. When a user uploads an input file specifying an unweighted network, GRNsight automatically lays out the graph using black lines and pointed arrowheads. For a weighted network, GRNsight uses pointed and blunt arrowheads, and colors the edges and adjusts their thicknesses based on the sign (positive for activation or negative for repression) and magnitude of the weight parameter.
    GRNsight is written in JavaScript, with diagrams facilitated by D3.js, a data visualization library. Node.js and the Express framework handle server-side functions. GRNsight's diagrams are based on D3.js's force graph layout algorithm, which was then extensively customized to support the specific needs of GRNs. Nodes are rectangular and support gene labels of up to 12 characters. The edges are arcs, which become straight lines when the nodes are close together. Self-regulatory edges are indicated by a loop. When a user mouses over an edge, the numerical value of the weight parameter is displayed. Visualizations can be modified by sliders that adjust the force graph layout parameters and through manual node dragging.
    GRNsight is best suited for visualizing networks of fewer than 35 nodes and 70 edges, although it accepts networks of up to 75 nodes or 150 edges. It has general applicability for displaying any small, unweighted or weighted network with directed edges for systems biology or other application domains. GRNsight serves as an example of following and teaching best practices for scientific computing and complying with FAIR principles, using an open and test-driven development model with rigorous documentation of requirements and issues on GitHub. An exhaustive unit testing framework using Mocha and the Chai assertion library consists of around 160 automated unit tests that examine nearly 530 test files to ensure that the program is running as expected. The GRNsight application (http://dondi.github.io/GRNsight/) and code (https://github.com/dondi/GRNsight) are available under the open source BSD license.
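    As a hedged illustration of the adjacency-matrix convention described above (regulators in the columns, target genes in the rows, signed weights for activation and repression), the Python sketch below writes a toy weighted network as a spreadsheet-style table. The gene names and weights are made up, and CSV stands in for GRNsight's Excel input; consult the GRNsight documentation for the exact accepted format.

        import csv

        regulators = ["CIN5", "GLN3", "ZAP1"]
        weights = {                       # target -> one weight per regulator
            "CIN5": [0.0, 0.75, 0.0],     # GLN3 activates CIN5
            "GLN3": [-0.4, 0.0, 0.0],     # CIN5 represses GLN3
            "ZAP1": [0.0, 0.2, 0.0],      # GLN3 weakly activates ZAP1
        }

        with open("network.csv", "w", newline="") as f:
            writer = csv.writer(f)
            # Corner label follows the regulators-in-columns, targets-in-rows
            # convention described in the abstract.
            writer.writerow(["cols regulators/rows targets"] + regulators)
            for target, row in weights.items():
                writer.writerow([target] + row)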

    Vocabulary Evolution on the Semantic Web: From Changes to Evolution of Vocabularies and its Impact on the Data

    The main objective of the Semantic Web is to provide data on the web with well-defined meaning. Vocabularies are used for modeling data on the web; they provide a shared understanding of a domain and consist of a collection of types and properties, the so-called terms. A vocabulary can import terms from other vocabularies, and data publishers use vocabulary terms for modeling data. Importing terms via vocabularies results in a Network of Linked vOcabularies (NeLO). Vocabularies are subject to change during their lifetime, and when they change, the published data become a problem if they are not updated accordingly. So far, there has been no study that analyzes vocabulary changes over time. Furthermore, it is unknown how data publishers react to such vocabulary changes. Ontology engineers and data publishers may not be aware of changes in vocabulary terms that have already happened, since such changes occur rather rarely.
    This work addresses the problem of vocabulary changes and their impact on other vocabularies and the published data. We analyzed the changes of vocabularies and their reuse. We selected the most dominant vocabularies based on their use by data publishers, and additionally analyzed the changes of 994 vocabularies from the Linked Open Vocabularies directory. Furthermore, we analyzed various vocabularies to better understand by whom and how they are used in the modeled data, and how these changes are adopted in the Linked Open Data cloud. We computed the state of the NeLO from the available versions of vocabularies over a period of 17 years. We analyzed static parameters of the NeLO such as its size, density, average degree, and the most important vocabularies at certain points in time. We further investigated how the NeLO changes over time, specifically measuring the impact of a change in one vocabulary on others, how the reuse of terms changes, and the importance of vocabulary changes.
    Our results show that the vocabularies are highly static, and that many of the changes occurred in annotation properties. Additionally, 16% of the existing terms are reused by other vocabularies, and some of the deprecated and deleted terms are still reused. Furthermore, most of the newly coined terms are adopted immediately. Our results also show that even if the change frequency of terms is rather low, changes can have a high impact on the data due to the large amount of data on the web. Moreover, due to the large number of vocabularies in the NeLO, and therefore the growing number of available terms, the percentage of imported terms relative to the available ones has decreased over time. Additionally, based on the average number of exports for the vocabularies in the NeLO, some vocabularies have become more popular over time.
    Overall, understanding the evolution of vocabulary terms is important for ontology engineers and data publishers to avoid wrong assumptions about the data published on the web. Furthermore, it may foster a better understanding of the impact of changes in vocabularies and how they are adopted, making it possible to learn from previous experience. Our results provide, for the first time, in-depth insights into the structure and evolution of the NeLO. Supported by proper tools exploiting the analysis in this thesis, these insights may help ontology engineers to identify data modeling shortcomings and assess the dependencies implied by reusing a specific vocabulary.
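    As a hedged illustration of the static NeLO parameters named above (size, density, average degree, and vocabulary importance), the following Python sketch computes them for a toy vocabulary import graph using networkx; the vocabularies, edges, and the choice of PageRank as the importance measure are assumptions for illustration, not the thesis's actual method.

        import networkx as nx

        # Vocabularies are nodes; an edge u -> v means vocabulary u imports
        # a term from vocabulary v (toy data, not the thesis's corpus).
        nelo = nx.DiGraph()
        nelo.add_edges_from([
            ("foaf", "dcterms"),
            ("schema", "dcterms"),
            ("dcterms", "skos"),
        ])

        size = nelo.number_of_nodes()                  # 4 vocabularies
        density = nx.density(nelo)                     # 3 / (4 * 3) = 0.25
        avg_degree = sum(d for _, d in nelo.degree()) / size
        importance = nx.pagerank(nelo)                 # one possible importance score
        print(size, density, avg_degree, importance)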

    Development of Methods for the Automatic Generation, Graphical Representation, and Interactive Analysis of Metabolic Networks

    The large number of metabolic reactions and of the components involved in them can only be explored with methods for the automated visualization and analysis of metabolic networks. With the CUBIC Pathway Editor "Cupe", there now exists a program that combines, in a unique way, the generation, visualization, and analysis of metabolic networks with the most current methods for automatic graph drawing. Thanks to an integrated data set of about 32,000 reactions and about 47,000 components, Cupe also offers the largest reaction database among all programs for metabolic analysis known to date. The user is supported in both the manual and the automatic generation of metabolic networks and can format all network elements completely freely. In particular, Cupe's ability to collapse subnetworks into supernodes or clusters and to manage a metabolite pool allows the graph of a reaction network to be simplified considerably, which has made it possible to markedly improve the readability of automatically drawn graphs. With the development of the "Cubic-Sparse-Matrix", Cupe has a novel and efficient data model. On the basis of this data model, the "set theory for metabolic networks" for the comparative analysis of reaction networks could be developed. Further analysis methods can be added to Cupe via its easy-to-use extension interface. The various modules of Cupe thus link numerous research areas and thereby form an interdisciplinary research and training platform whose further development is coordinated via the "Cupe Knowledge Portal" provided on the Internet.
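    A hedged sketch of the comparative idea behind the "set theory for metabolic networks": treating each reaction network as a set of reactions reduces comparative analysis to standard set operations. The Python below uses made-up reaction names and is not Cupe's implementation.

        # Each network is modeled as a set of reaction identifiers
        # (illustrative names, not Cupe's database entries).
        network_a = {"hexokinase", "phosphofructokinase", "aldolase", "pyruvate_kinase"}
        network_b = {"hexokinase", "phosphofructokinase", "aldolase", "pyruvate_decarboxylase"}

        shared = network_a & network_b     # reactions present in both networks
        only_a = network_a - network_b     # reactions unique to network A
        merged = network_a | network_b     # union of both networks
        print(sorted(shared), sorted(only_a), sorted(merged))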

    Every normal logic program has a 2-valued semantics: theory, extensions, applications, implementations

    Work presented in the scope of the Doctoral Programme in Informatics, as a partial requirement for obtaining the degree of Doctor in Informatics.
    After a very brief introduction to the general subject of Knowledge Representation and Reasoning with Logic Programs, we analyse the syntactic structure of a logic program and how it can influence the semantics. We outline the important properties of a 2-valued semantics for Normal Logic Programs (NLPs), proceed to define the new Minimal Hypotheses semantics with those properties, and explore how it can be used to benefit some knowledge representation and reasoning mechanisms. The main original contributions of this work, whose connections will be detailed in the sequel, are:
    • The Layering for generic graphs, which we then apply to NLPs, yielding the Rule Layering and the Atom Layering, a generalization of the stratification notion;
    • The Full shifting transformation of Disjunctive Logic Programs into (highly non-stratified) NLPs;
    • The Layer Support, a generalization of the classical notion of support;
    • The Brave Relevance and Brave Cautious Monotony properties of a 2-valued semantics;
    • The notions of Relevant Partial Knowledge Answer to a Query and Locally Consistent Relevant Partial Knowledge Answer to a Query;
    • The Layer-Decomposable Semantics family, the family of semantics that reflect the above-mentioned Layerings;
    • The Approved Models argumentation approach to semantics;
    • The Minimal Hypotheses 2-valued semantics for NLPs, a member of the Layer-Decomposable Semantics family rooted in a minimization of positive hypotheses assumption approach;
    • The definition and implementation of the Answer Completion mechanism in XSB Prolog, an essential component to ensure full compliance of XSB's WAM with the Well-Founded Semantics;
    • The definition of the Inspection Points mechanism for Abductive Logic Programs;
    • An implementation of the Inspection Points workings within the Abdual system [21].
    We recommend reading the chapters in this thesis in the sequence they appear. However, if the reader is not interested in all the subjects, or is more keen on some topics than others, we suggest the following alternative reading paths:
    • 1-2-3-4-5-6-7-8-9-12: definition of the Layer-Decomposable Semantics family and the Minimal Hypotheses semantics (1 and 2 are optional);
    • 3-6-7-8-10-11-12: all main contributions; assumes the reader is familiar with logic programming topics;
    • 3-4-5-10-11-12: focus on abductive reasoning and applications.
    Funding: FCT-MCTES (Fundação para a Ciência e Tecnologia do Ministério da Ciência, Tecnologia e Ensino Superior), grant no. SFRH/BD/28761/2006.
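    As a loose illustration of the classical stratification idea that the Rule and Atom Layerings generalize, the following Python sketch layers the atom dependency graph of a toy normal logic program by its strongly connected components; the program, the graph encoding, and the use of networkx are assumptions for illustration, not the thesis's definitions.

        import networkx as nx

        # Toy program:  a :- b.   b :- not c.   c :- d.   d :- c.
        # An edge u -> v records that atom v depends on atom u.
        dep = nx.DiGraph([("b", "a"), ("c", "b"), ("d", "c"), ("c", "d")])

        # Mutually dependent atoms ({c, d} here) collapse into one component;
        # enumerating the condensation in topological order assigns layers.
        condensed = nx.condensation(dep)
        for layer, comp in enumerate(nx.topological_sort(condensed)):
            print(f"layer {layer}: {sorted(condensed.nodes[comp]['members'])}")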

    Order-Related Problems Parameterized by Width

    In the main body of this thesis, we study two different order-theoretic problems. The first problem, called Completion of an Ordering, asks to extend a given finite partial order ρ to a complete linear order while respecting some weight constraints. The second problem is an order reconfiguration problem under width constraints.
    While the Completion of an Ordering problem is NP-complete, we show that it lies in FPT when parameterized by the interval width of the input partial order ρ. This ordering problem can be used to model several ordering problems stemming from diverse application areas, such as graph drawing, computational social choice, and computer memory management. Each application yields a special partial order ρ. We also relate the interval width of ρ to parameterizations for these problems that have been studied earlier in the context of these applications, sometimes improving on parameterized algorithms that had been developed for these parameterizations before. This approach also gives some practical sub-exponential time algorithms for ordering problems.
    In our second main result, we combine our parameterized approach with the paradigm of solution diversity. The idea of solution diversity is that instead of aiming at algorithms that output a single optimal solution, the goal is to investigate algorithms that output a small set of sufficiently good solutions that are sufficiently diverse from one another. In this way, the user has the opportunity to choose the solution that is most appropriate to the context at hand; it also displays the richness of the solution space. We show that the considered diversity version of the Completion of an Ordering problem is fixed-parameter tractable with respect to natural parameters that capture the notion of diversity and the notion of sufficiently good solutions. We apply this algorithm in the study of the Kemeny Rank Aggregation class of problems, a well-studied class of problems lying in the intersection of order theory and social choice theory.
    Up to this point, we have been looking at problems where the goal is to find an optimal solution or a diverse set of good solutions. In the last part, we shift our focus from finding solutions to studying the solution space of a problem. There we consider the following order reconfiguration problem: given a graph G together with linear orders τ and τ′ of the vertices of G, can one transform τ into τ′ by a sequence of swaps of adjacent elements in such a way that at each time step the resulting linear order has cutwidth (pathwidth) at most w? We show that this problem always has an affirmative answer when the input linear orders τ and τ′ have cutwidth (pathwidth) at most w/2. Using this result, we establish a connection between two apparently unrelated problems: the reachability problem for two-letter string rewriting systems and the graph isomorphism problem for graphs of bounded cutwidth. This opens an avenue for the study of the famous graph isomorphism problem using techniques from term rewriting theory.
    In addition to the main part of this work, we present results on two unrelated problems, namely the Steiner Tree problem and the Intersection Non-emptiness problem from automata theory.
    Doctoral dissertation.
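    As a hedged aside, the Python sketch below computes the cutwidth of a linear order, the width measure constrained in the reconfiguration problem above: the maximum, over the gaps between consecutive positions, of the number of edges crossing that gap. The helper and the use of networkx are illustrative assumptions, not from the thesis.

        import networkx as nx

        def cutwidth(G, order):
            """Maximum number of edges crossing a gap between consecutive
            positions of the linear order (illustrative helper)."""
            pos = {v: i for i, v in enumerate(order)}
            return max(
                sum(1 for u, v in G.edges()
                    if min(pos[u], pos[v]) < i <= max(pos[u], pos[v]))
                for i in range(1, len(order))
            )

        G = nx.cycle_graph(5)                 # a 5-cycle has cutwidth 2
        print(cutwidth(G, list(range(5))))    # prints 2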