Learnable Spectral Wavelets on Dynamic Graphs to Capture Global Interactions
Learning on evolving (dynamic) graphs has caught the attention of researchers,
as static methods exhibit limited performance in this setting. The existing
methods for dynamic graphs learn spatial features by local neighborhood
aggregation, which essentially captures only the low-pass signals and local
interactions. In this work, we go beyond current approaches to incorporate
global features for effectively learning representations of a dynamically
evolving graph. We propose to do so by capturing the spectrum of the dynamic
graph. Since static methods to learn the graph spectrum would not consider the
history of the evolution of the spectrum as the graph evolves with time, we
propose a novel approach to learn the graph wavelets to capture this evolving
spectra. Further, we propose a framework that integrates the dynamically
captured spectra in the form of these learnable wavelets into spatial features
for incorporating local and global interactions. Experiments on eight standard
datasets show that our method significantly outperforms related methods on
various tasks for dynamic graphs. Comment: Accepted for publication in AAAI 202
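The core operation the abstract describes, filtering a graph signal through the graph's spectrum, can be sketched with a fixed heat-kernel wavelet on a static graph. This is only an illustration of spectral filtering; the paper's contribution is to make the wavelets learnable and history-aware, which this sketch does not attempt.

```python
import numpy as np

def heat_wavelet_filter(adj, signal, scale=1.0):
    """Filter a node signal with a heat-kernel wavelet g(lam) = exp(-scale * lam)
    applied in the eigenbasis of the symmetric normalized graph Laplacian."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    # L = I - D^{-1/2} A D^{-1/2}
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    lam, basis = np.linalg.eigh(lap)      # graph spectrum and eigenvectors
    kernel = np.exp(-scale * lam)         # larger scale -> stronger smoothing
    # Project to the spectral domain, reweight, and reconstruct
    return basis @ (kernel * (basis.T @ signal))

# A 4-node path graph: an impulse on node 0 gets smoothed across the graph
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 0.0, 0.0, 0.0])
y = heat_wavelet_filter(A, x, scale=1.0)
```

At `scale=0` the kernel is the identity and the signal passes through unchanged; increasing the scale attenuates the high-frequency components, which is exactly the low-pass behavior the abstract says plain neighborhood aggregation is limited to.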
Evaluating the Impact of Knowledge Graph Context on Entity Disambiguation Models
Pretrained Transformer models have emerged as state-of-the-art approaches
that learn contextual information from text to improve the performance of
several NLP tasks. These models, albeit powerful, still require specialized
knowledge in specific scenarios. In this paper, we argue that context derived
from a knowledge graph (in our case: Wikidata) provides enough signals to
inform pretrained transformer models and improve their performance for named
entity disambiguation (NED) on Wikidata KG. We further hypothesize that our
proposed KG context can be standardized for Wikipedia, and we evaluate the
impact of KG context on a state-of-the-art NED model for the Wikipedia knowledge
base. Our empirical results validate that the proposed KG context can be
generalized (for Wikipedia), and providing KG context in transformer
architectures considerably outperforms the existing baselines, including the
vanilla transformer models. Comment: To appear in the proceedings of CIKM 202
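The way KG context typically reaches a pretrained transformer is by appending candidate-entity text to the encoder input. A minimal sketch, assuming hypothetical field names for the Wikidata-derived context (label, description, aliases); the actual model, special tokens, and context format are not specified here:

```python
def build_ned_input(sentence, mention, candidate):
    """Assemble a transformer input string that carries KG context for one
    candidate entity alongside the sentence and the mention. The [SEP]
    delimiter and the dict fields are illustrative assumptions."""
    context = f"{candidate['label']}: {candidate['description']}"
    if candidate.get("aliases"):
        context += "; also known as " + ", ".join(candidate["aliases"])
    return f"{sentence} [SEP] {mention} [SEP] {context}"

candidate = {
    "label": "Paris",
    "description": "capital and largest city of France",
    "aliases": ["City of Light"],
}
encoded = build_ned_input("She landed in Paris at dawn.", "Paris", candidate)
```

A disambiguation model would score one such input per candidate entity and pick the highest-scoring candidate; the KG description is what lets the encoder separate, say, the city from the mythological figure.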
RECON: Relation Extraction using Knowledge Graph Context in a Graph Neural Network
In this paper, we present a novel method named RECON that automatically
identifies relations in a sentence (sentential relation extraction) and aligns
them to a knowledge graph (KG). RECON uses a graph neural network to learn
representations of both the sentence and the facts stored in a KG, improving
the overall extraction quality. These facts, including entity attributes
(label, alias, description, instance-of) and factual triples, have not been
collectively used in state-of-the-art methods. We evaluate the effect of
various forms of representing the KG context on the performance of RECON. The
empirical evaluation on two standard relation extraction datasets shows that
RECON significantly outperforms all state-of-the-art methods on the NYT
Freebase and Wikidata datasets. RECON reports an F1 score of 87.23 (vs. the
82.29 baseline) on the Wikidata dataset, whereas on NYT Freebase the reported
values are 87.5 (P@10) and 74.1 (P@30), compared to the previous baseline
scores of 81.3 (P@10) and 63.1 (P@30). Comment: The Web Conference 2021
(WWW'21) full paper
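The fusion step the abstract describes, combining entity-attribute fact representations with a sentence representation via a graph neural network, can be sketched as one mean-aggregation message-passing step. The weight matrix below is random and purely illustrative; RECON's actual GNN is trained end to end and its architecture is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_kg_context(sentence_vec, fact_vecs, weight):
    """One GNN-style step: average the KG-fact representations (label, alias,
    description, instance-of) into a single message, concatenate it with the
    sentence vector, and project through a nonlinearity."""
    message = fact_vecs.mean(axis=0)
    return np.tanh(weight @ np.concatenate([sentence_vec, message]))

dim = 8
sentence_vec = rng.standard_normal(dim)
fact_vecs = rng.standard_normal((4, dim))     # four fact nodes for one entity
weight = rng.standard_normal((dim, 2 * dim))  # untrained, for shape only
fused = fuse_kg_context(sentence_vec, fact_vecs, weight)
```

The fused vector would then feed a relation classifier; the point of the sketch is that every attribute fact contributes to the entity's representation jointly, rather than being used one at a time.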
Randomized Clinical Trial of High-Dose Rifampicin With or Without Levofloxacin Versus Standard of Care for Pediatric Tuberculous Meningitis: The TBM-KIDS Trial
Background. Pediatric tuberculous meningitis (TBM) commonly causes death or disability. In adults, high-dose rifampicin may reduce mortality. The role of fluoroquinolones remains unclear. There have been no antimicrobial treatment trials for pediatric TBM.
Methods. TBM-KIDS was a phase 2 open-label randomized trial among children with TBM in India and Malawi. Participants received isoniazid and pyrazinamide plus: (i) high-dose rifampicin (30 mg/kg) and ethambutol (R30HZE, arm 1); (ii) high-dose rifampicin
and levofloxacin (R30HZL, arm 2); or (iii) standard-dose rifampicin and ethambutol (R15HZE, arm 3) for 8 weeks, followed by 10 months of standard treatment. Functional and neurocognitive outcomes were measured longitudinally using Modified Rankin Scale (MRS) and Mullen Scales of Early Learning (MSEL).
Results. Of 2487 children prescreened, 79 were screened and 37 enrolled. Median age was 72 months; 49%, 43%, and 8% had stage I, II, and III disease, respectively. Grade 3 or higher adverse events occurred in 58%, 55%, and 36% of children in arms 1, 2, and 3, with 1 death (arm 1) and 6 early treatment discontinuations (4 in arm 1, 1 each in arms 2 and 3). By week 8, all children recovered to MRS score of 0 or 1. Average MSEL scores were significantly better in arm 1 than arm 3 in fine motor, receptive language, and expressive language domains (P < .01).
Conclusions. In a pediatric TBM trial, functional outcomes were excellent overall. The trend toward higher frequency of adverse events but better neurocognitive outcomes in children receiving high-dose rifampicin requires confirmation in a larger trial.
Clinical Trials Registration. NCT02958709
KGPool: Dynamic Knowledge Graph Context Selection for Relation Extraction
We present a novel method for relation extraction (RE) from a single sentence, mapping the sentence and two given entities to a canonical fact in a knowledge graph (KG). In this sentential RE setting in particular, the context of a single sentence is often sparse. This paper introduces the KGPool method to address this sparsity, dynamically expanding the context with additional facts from the KG. It learns the representation of these facts (entity alias, entity descriptions, etc.) using neural methods, supplementing the sentential context. Unlike existing methods that statically use all expanded facts, KGPool conditions this expansion on the sentence. We study the efficacy of KGPool by evaluating it with different neural models and KGs (Wikidata and NYT Freebase). Our experimental evaluation on standard datasets shows that by feeding the KGPool representation into a Graph Neural Network, the overall method is significantly more accurate than state-of-the-art methods. © 2021 Association for Computational Linguistics
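The contrast the abstract draws, selecting context facts conditioned on the sentence instead of statically using all of them, can be sketched as a similarity-scored top-k selection. The dot-product scorer below is a stand-in assumption; KGPool's actual scoring is learned.

```python
import numpy as np

def pool_kg_facts(sentence_vec, fact_vecs, k=2):
    """Sentence-conditioned context selection: score each candidate KG fact
    against the sentence representation and keep only the top-k, rather than
    statically admitting every expanded fact."""
    scores = fact_vecs @ sentence_vec
    kept = np.argsort(scores)[::-1][:k]
    return fact_vecs[kept], sorted(kept.tolist())

sentence_vec = np.array([1.0, 0.0])
fact_vecs = np.array([[0.9, 0.1],    # strongly aligned with the sentence
                      [-0.5, 0.2],   # irrelevant to the sentence
                      [0.7, -0.3]])  # moderately aligned
selected, indices = pool_kg_facts(sentence_vec, fact_vecs, k=2)
```

Because the scores depend on the sentence vector, the same entity can contribute different facts to different sentences, which is the "dynamic" part of the method's name.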
Selective synthesis of p-hydroxybenzaldehyde by liquid-phase catalytic oxidation of p-cresol
Liquid-phase oxidation of p-cresol over an insoluble cobalt oxide (Co₃O₄) catalyst under elevated air pressure gave 95% selectivity to p-hydroxybenzaldehyde, an important flavoring intermediate. The selectivity to p-hydroxybenzaldehyde could be enhanced by manipulating the concentrations of p-cresol, sodium hydroxide, and catalyst, and the partial pressure of oxygen, such that the byproducts normally encountered in this oxidation process were eliminated or significantly minimized.
How Expressive are Transformers in Spectral Domain for Graphs?
Recent works proposing transformer-based models for graphs have demonstrated
the inadequacy of the vanilla Transformer for graph representation learning. To
understand this inadequacy, there is a need to investigate if spectral analysis
of the transformer will reveal insights into its expressive power. Similar
studies have already established that spectral analysis of graph neural
networks (GNNs) provides extra perspectives on their expressiveness. In this work, we
systematically study and establish the link between the spatial and spectral
domain in the realm of the transformer. We further provide a theoretical
analysis and prove that the spatial attention mechanism in the transformer
cannot effectively capture the desired frequency response, thus, inherently
limiting its expressiveness in spectral space. Therefore, we propose FeTA, a
framework that aims to perform attention over the entire graph spectrum (i.e.,
actual frequency components of the graphs) analogous to the attention in
spatial space. Empirical results suggest that FeTA provides a consistent
performance gain over the vanilla transformer across all tasks on standard
benchmarks and can easily be extended to GNN-based models with low-pass
characteristics (e.g., GAT). Comment: Accepted in Transactions on Machine
Learning Research
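What "attention over the graph spectrum" means mechanically can be sketched as follows: decompose a node signal in the Laplacian eigenbasis, reweight each frequency component with a softmax, and reconstruct. The magnitude-based scoring here is an assumed stand-in for FeTA's learned attention; the point is only that, unlike a fixed low-pass filter, the weighting can emphasize any frequency band.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def spectral_attention(adj, signal, temperature=1.0):
    """Attend over frequency components: project the signal into the
    normalized-Laplacian eigenbasis, softmax-weight each component by its
    magnitude (illustrative scoring, not FeTA's), and reconstruct."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    _, basis = np.linalg.eigh(lap)
    coeffs = basis.T @ signal                        # frequency components
    weights = softmax(np.abs(coeffs) / temperature)  # attention over spectrum
    return basis @ (weights * coeffs)

# Triangle graph with a mixed-frequency signal
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
x = np.array([1.0, -1.0, 0.0])
y = spectral_attention(A, x)
```

Spatial attention, by contrast, mixes node features by pairwise node scores and, per the abstract's argument, cannot realize an arbitrary frequency response like this per-component reweighting can.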
Role of a co-metal in bimetallic Ni-Pt catalyst for hydrogenation of m-dinitrobenzene to m-phenylenediamine
Bimetallic Ni-Pt catalysts supported on carbon were found to give very high turnover frequencies and almost complete selectivity to m-phenylenediamine in m-dinitrobenzene hydrogenation, compared to monometallic nickel catalysts. XRD and XPS characterization revealed that most of the nickel remains as Ni²⁺ in the monometallic catalyst, while the addition of platinum leads to stabilization of the Ni⁰ state in the bimetallic catalysts.