Neural Collective Entity Linking
Entity Linking aims to link entity mentions in texts to knowledge bases, and
neural models have achieved recent success in this task. However, most existing
methods rely on local contexts to resolve entities independently, an approach
that often fails due to the sparsity of local information. To address this
issue, we propose a novel neural model for collective entity linking, named
NCEL. NCEL applies a Graph Convolutional Network to integrate both local
contextual features and global coherence information for entity linking. To
improve computational efficiency, we approximately perform graph convolution
on a subgraph of adjacent entity mentions instead of those in the entire text.
We further introduce an attention scheme to improve the robustness of NCEL to
data noise and train the model on Wikipedia hyperlinks to avoid overfitting and
domain bias. In experiments, we evaluate NCEL on five publicly available
datasets to verify the linking performance as well as generalization ability.
We also conduct an extensive analysis of time complexity, the impact of key
modules, and qualitative results, which demonstrate the effectiveness and
efficiency of our proposed method.
Comment: 12 pages, 3 figures, COLING201
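The subgraph-based graph convolution the NCEL abstract describes can be illustrated with a minimal sketch. This is not the paper's actual architecture; the layer form, adjacency, feature dimensions, and all names here are illustrative assumptions, showing only how one convolution step mixes local candidate features over a small graph of adjacent entity mentions.

```python
import numpy as np

def gcn_layer(H, A, W):
    """One GCN step: symmetric-normalized neighbor aggregation,
    a linear map, then ReLU (illustrative, not NCEL's exact layer)."""
    A_hat = A + np.eye(A.shape[0])                    # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Hypothetical subgraph: 4 candidate entities from adjacent mentions,
# each with an 8-dim local-context feature vector.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 8))                       # local features
A = np.array([[0, 1, 1, 0],                           # subgraph adjacency
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = rng.standard_normal((8, 8))                       # learnable weights

H_out = gcn_layer(H, A, W)
print(H_out.shape)  # (4, 8)
```

Restricting the convolution to this small subgraph, rather than all mentions in the document, is what the abstract credits for the efficiency gain.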
KGAT: Knowledge Graph Attention Network for Recommendation
To provide more accurate, diverse, and explainable recommendations, it is
essential to go beyond modeling user-item interactions and take side
information into account. Traditional methods like factorization machines (FM)
cast this as a supervised learning problem, treating each interaction as an
independent instance with side information encoded. Because they overlook the
relations among instances or items (e.g., the director of a movie is also an
actor of another movie), these methods are insufficient to distill the
collaborative signal from the collective behaviors of users. In this work, we
investigate the utility of knowledge graph (KG), which breaks down the
independent interaction assumption by linking items with their attributes. We
argue that in such a hybrid structure of KG and user-item graph, high-order
relations --- which connect two items with one or multiple linked attributes
--- are an essential factor for successful recommendation. We propose a new
method named Knowledge Graph Attention Network (KGAT) which explicitly models
the high-order connectivities in KG in an end-to-end fashion. It recursively
propagates the embeddings from a node's neighbors (which can be users, items,
or attributes) to refine the node's embedding, and employs an attention
mechanism to discriminate the importance of the neighbors. Our KGAT is
conceptually advantageous over existing KG-based recommendation methods, which
either exploit high-order relations by extracting paths or implicitly model
them with regularization. Empirical results on three public benchmarks show
that KGAT significantly outperforms state-of-the-art methods like Neural FM and
RippleNet. Further studies verify the efficacy of embedding propagation for
high-order relation modeling and the interpretability benefits brought by the
attention mechanism.
Comment: KDD 2019 research trac
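The recursive, attention-weighted embedding propagation the KGAT abstract describes can be sketched in a few lines. This is a simplified assumption of the mechanism, not KGAT's published equations: the scoring function, the identity attention matrix, and the residual-style update are all illustrative choices.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def propagate(node, neighbors, W_att):
    """Refine a node embedding from its neighbors (users, items, or
    attributes), weighting each neighbor by a learned attention score.
    Illustrative sketch only; KGAT's actual formulation differs."""
    scores = np.array([n @ W_att @ node for n in neighbors])
    alpha = softmax(scores)                 # importance of each neighbor
    return node + sum(a * n for a, n in zip(alpha, neighbors))

rng = np.random.default_rng(1)
d = 6
item = rng.standard_normal(d)               # embedding of an item node
neigh = [rng.standard_normal(d) for _ in range(3)]  # its KG neighbors
W_att = np.eye(d)                           # placeholder attention weights
refined = propagate(item, neigh, W_att)
print(refined.shape)  # (6,)
```

Stacking such propagation steps is what lets high-order connectivities (item, attribute, another item) flow into the final embedding, and the attention weights are what the abstract credits for interpretability.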
Many-Task Computing and Blue Waters
This report discusses many-task computing (MTC) generically and in the
context of the proposed Blue Waters system, which is planned to be the largest
NSF-funded supercomputer when it begins production use in 2012. The aim of this
report is to inform the BW project about MTC, including understanding aspects
of MTC applications that can be used to characterize the domain and
understanding the implications of these aspects to middleware and policies.
Many MTC applications do not neatly fit the stereotypes of high-performance
computing (HPC) or high-throughput computing (HTC) applications. Like HTC
applications, by definition MTC applications are structured as graphs of
discrete tasks, with explicit input and output dependencies forming the graph
edges. However, MTC applications have significant features that distinguish
them from typical HTC applications. In particular, different engineering
constraints for hardware and software must be met in order to support these
applications. HTC applications have traditionally run on platforms such as
grids and clusters, through either workflow systems or parallel programming
systems. MTC applications, in contrast, will often demand a short time to
solution, may be communication intensive or data intensive, and may comprise
very short tasks. Therefore, hardware and software for MTC must be engineered
to support the additional communication and I/O and must minimize task dispatch
overheads. The hardware of large-scale HPC systems, with its high degree of
parallelism and support for intensive communication, is well suited for MTC
applications. However, HPC systems often lack a dynamic resource-provisioning
feature, are not ideal for task communication via the file system, and have an
I/O system that is not optimized for MTC-style applications. Hence, additional
software support is likely to be required to gain full benefit from the HPC
hardware.
Preparing to Deepen Action: A Funder Collaborative Finds its Way
The formation of the Jewish Teen Education and Engagement Funder Collaborative was the result of a process begun by the Jim Joseph Foundation in 2013. At that time, in an effort to spawn innovative, locally sustainable teen engagement programs, the Jim Joseph Foundation brought together an array of funders to explore various approaches. The first 24 months of this deliberate process, in which ten local and five national funders undertook to educate themselves, build relationships, and co-invest in community-based Jewish teen education and engagement initiatives, was thoughtfully documented in a case study issued in January 2015 by Informing Change, entitled Finding New Paths for Teen Engagement and Learning: A Funder Collaborative Leads the Way.

The first case study highlighted several important achievements of the collaborative in its early years:
* Strong leadership from the convening funder, which enabled old and new colleagues to engage in open discussions about possible collaborations;
* Early commitment of significant financial resources;
* Provision of operational and substantive support by an array of consultants;
* Development of mutual expectations and articulation of shared measures of success.

This case study by Rosov Consulting documents the next stage of the Funder Collaborative's development, roughly the 21-month period from January 2015 through October 2016, and reflects the Collaborative's commitment to share its process with others who may choose to embark on their own co-funding endeavor.
Draft Regional Recommendations for the Pacific Northwest on Water Quality Trading
In March 2013, water quality agency staff from Idaho, Oregon, and Washington, U.S. EPA Region 10, Willamette Partnership, and The Freshwater Trust convened a working group for the first of a series of four interagency workshops on water quality trading in the Pacific Northwest. Facilitated by Willamette Partnership through a USDA-NRCS Conservation Innovation Grant, those who assembled over the subsequent eight months discussed and evaluated water quality trading policies, practices, and programs across the country in an effort to better understand and draw from EPA's January 13, 2003, Water Quality Trading Policy and its 2007 Permit Writers' Toolkit, as well as existing state guidance and regulations on water quality trading. All documents presented at those conversations, along with meeting summaries, are posted on the Willamette Partnership's website.

The final product is intended to be a set of recommended practices for each state to consider as it develops water quality trading. The goals of this effort are to help ensure that water quality trading programs have the quality, credibility, and transparency necessary to be consistent with the Clean Water Act (CWA), its implementing regulations, and state and local water quality laws.
MLBiNet: A Cross-Sentence Collective Event Detection Network
We consider the problem of collectively detecting multiple events,
particularly in cross-sentence settings. The key to dealing with the problem is
to encode semantic information and model event inter-dependency at a
document-level. In this paper, we reformulate it as a Seq2Seq task and propose
a Multi-Layer Bidirectional Network (MLBiNet) to capture the document-level
association of events and semantic information simultaneously. Specifically, a
bidirectional decoder is first devised to model event inter-dependency within
a sentence when decoding the event tag vector sequence. Secondly, an
information aggregation module is employed to aggregate sentence-level semantic
and event tag information. Finally, we stack multiple bidirectional decoders
and feed cross-sentence information, forming a multi-layer bidirectional
tagging architecture to iteratively propagate information across sentences. We
show that our approach provides significant improvement in performance compared
to the current state-of-the-art results.
Comment: Accepted by ACL 202
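The bidirectional decoding idea in the MLBiNet abstract (a left-to-right and a right-to-left pass over a sentence, fused into one tag representation) can be sketched as follows. This is a toy recurrence under assumed names and dimensions, not the paper's actual decoder, which also aggregates sentence-level information and stacks multiple layers.

```python
import numpy as np

def bidirectional_decode(X, Wf, Wb):
    """Sketch of a bidirectional tag decoder: one forward and one
    backward recurrence over the token sequence, fused by averaging
    (illustrative only; MLBiNet's decoder is more elaborate)."""
    T, d = X.shape
    fwd = np.zeros((T, d))
    bwd = np.zeros((T, d))
    h = np.zeros(d)
    for t in range(T):                      # left-to-right decoding
        h = np.tanh(X[t] + Wf @ h)
        fwd[t] = h
    h = np.zeros(d)
    for t in reversed(range(T)):            # right-to-left decoding
        h = np.tanh(X[t] + Wb @ h)
        bwd[t] = h
    return (fwd + bwd) / 2.0                # fused tag representations

rng = np.random.default_rng(0)
T, d = 5, 4                                 # 5 tokens, 4-dim tag vectors
X = rng.standard_normal((T, d))             # encoded token features
Wf = 0.1 * rng.standard_normal((d, d))      # forward transition weights
Wb = 0.1 * rng.standard_normal((d, d))      # backward transition weights
tags = bidirectional_decode(X, Wf, Wb)
print(tags.shape)  # (5, 4)
```

Each tag vector thus conditions on events decoded on both sides of it; stacking such decoders and feeding in cross-sentence summaries, as the abstract describes, extends the dependency modeling to the document level.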