Deep Memory Networks for Attitude Identification
We consider the task of identifying attitudes towards a given set of entities
from text. Conventionally, this task is decomposed into two separate subtasks:
target detection that identifies whether each entity is mentioned in the text,
either explicitly or implicitly, and polarity classification that classifies
the exact sentiment towards an identified entity (the target) into positive,
negative, or neutral.
Instead, we show that attitude identification can be solved with an
end-to-end machine learning architecture, in which the two subtasks are
interleaved by a deep memory network. In this way, signals produced in target
detection provide clues for polarity classification and, conversely, the
predicted polarity provides feedback to the identification of targets.
Moreover, the treatments for the set of targets also influence each other --
the learned representations may share the same semantics for some targets but
vary for others. The proposed deep memory network, the AttNet, outperforms
methods that do not consider the interactions between the subtasks or those
among the targets, including conventional machine learning methods and the
state-of-the-art deep learning models.
Comment: Accepted to WSDM'1
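The interleaving of the two subtasks can be pictured as a multi-hop attention loop in which each hop's attention read refines the target representation while a running polarity signal feeds back into the next query. The sketch below is a toy numpy illustration of that idea only; the dimensions, the `W_pol` projection, and the feedback scheme are invented for illustration and are not the actual AttNet architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy memory: word embeddings of a sentence (n_words x d).
d, n_words = 8, 5
memory = rng.normal(size=(n_words, d))

# Query vector for one candidate target entity.
target_query = rng.normal(size=d)

# Hypothetical polarity projection (positive / negative / neutral).
W_pol = rng.normal(size=(3, d))

# Interleave the two subtasks over several "hops": the attention read
# (target detection signal) updates the target representation, and the
# polarity signal is fed back into the next hop's query.
polarity_feedback = np.zeros(d)
q = target_query
for hop in range(3):
    attn = softmax(memory @ (q + polarity_feedback))  # where is the target mentioned?
    read = attn @ memory                              # attended sentence summary
    q = q + read                                      # refined target representation
    polarity_feedback = W_pol.T @ softmax(W_pol @ q)  # polarity feedback to detection

target_score = float(q @ target_query)  # detection score for this target
polarity_probs = softmax(W_pol @ q)     # distribution over the three polarities
```

The point of the loop is only that neither subtask is computed in isolation: each hop's detection signal conditions the polarity estimate and vice versa.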
Towards Computing Inferences from English News Headlines
Newspapers are a popular form of written discourse, widely read thanks to the novelty of the information their news content provides. The headline is the most widely read part of any newspaper, owing to its larger font and, at times, colour print. In this paper, we propose and implement a method for computing inferences from English news headlines, excluding information from the context in which the headlines appear. The method attempts to generate the assumptions a reader forms upon reading a fresh headline. The generated inferences could be useful for assessing the impact of a news headline on readers, including children, since a reader's grasp of current social affairs depends greatly on how headlines are assimilated. Because inferences that are independent of context depend mainly on the syntax of the headline, this approach uses dependency trees to capture the syntactic structure of headlines and to compute inferences from them.
Comment: PACLING 2019 Long paper, 15 pages
Reactive plasma cleaning and restoration of transition metal dichalcogenide monolayers
The cleaning of two-dimensional (2D) materials is an essential step in the fabrication of future devices that leverage their unique physical, optical, and chemical properties. Among these emerging 2D materials are transition metal dichalcogenides (TMDs). So far, there is limited understanding of the cleaning of “monolayer” TMD materials. In this study, we report on the use of a downstream H2 plasma to clean the surface of monolayer WS2 grown by MOCVD. We demonstrate that high-temperature processing is essential, as it maximizes the removal rate of polymers and mitigates the damage caused to the WS2 in the form of sulfur vacancies. We show that a low-temperature in situ carbonyl sulfide (OCS) soak is an efficient way to resulfurize the material, as an alternative to high-temperature H2S annealing. The cleaning processes and mechanisms elucidated in this work are tested on back-gated field-effect transistors, confirming that the transport properties of WS2 devices can be maintained by the combination of H2 plasma cleaning and OCS restoration. The low-damage plasma cleaning based on H2 and OCS is highly reproducible, fast (completed in a few minutes), and uses a 300 mm industrial plasma etch system qualified for standard semiconductor pilot production. This process is therefore expected to enable the industrial scale-up of 2D-based devices, co-integrated with silicon technology.
Semantically linking molecular entities in literature through entity relationships
Background: Text mining tools have gained popularity for processing the vast number of research articles available in the biomedical literature. It is crucial that such tools extract information with a sufficient level of detail to be applicable in real-life scenarios. Studies of mining non-causal molecular relations contribute to this goal by formally identifying the relations between genes, promoters, complexes and various other molecular entities found in text. More importantly, these studies help to enhance the integration of text mining results with database facts.
Results: We describe, compare and evaluate two frameworks developed for the prediction of non-causal or 'entity' relations (REL) between gene symbols and domain terms. In the corresponding REL challenge of the BioNLP Shared Task of 2011, these systems ranked first (57.7% F-score) and second (41.6% F-score). In this paper, we investigate the performance discrepancy of 16 percentage points by benchmarking on a related and more extensive dataset, analysing the contribution of both the term detection and relation extraction modules. We further construct a hybrid system combining the two frameworks and experiment with intersection and union combinations, achieving high-precision and high-recall results, respectively. Finally, we highlight the very high performance (F-score > 90%) obtained for the specific subclass of embedded entity relations, which are essential for integrating text mining predictions with database facts.
Conclusions: The results of this study will enable us, in the near future, to annotate semantic relations between molecular entities across the entire scientific literature available through PubMed. The recent release of the EVEX dataset, containing biomolecular event predictions for millions of PubMed articles, is an exciting opportunity to overlay these entity relations with event predictions on a literature-wide scale.
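The intersection/union combination strategy can be sketched as plain set operations over the two systems' predicted relation pairs: intersecting favours precision, taking the union favours recall. The gene/term pairs below are invented toy data, not predictions from the actual systems.

```python
# Relation predictions from two hypothetical REL systems, as
# (gene_symbol, domain_term) pairs, plus a toy gold standard.
system_a = {("esr-1", "promoter"), ("p53", "domain"), ("abl1", "kinase")}
system_b = {("esr-1", "promoter"), ("myc", "enhancer"), ("abl1", "kinase")}
gold = {("esr-1", "promoter"), ("abl1", "kinase"),
        ("myc", "enhancer"), ("p53", "binding site")}

def prf(pred, gold):
    """Precision, recall and F-score over predicted relation pairs."""
    tp = len(pred & gold)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Intersection keeps only relations both systems agree on: high precision.
p_i, r_i, f_i = prf(system_a & system_b, gold)
# Union keeps anything either system proposes: high recall.
p_u, r_u, f_u = prf(system_a | system_b, gold)
```

On this toy data the intersection scores perfect precision at reduced recall, while the union trades some precision for better coverage, which is exactly the trade-off the hybrid experiments exploit.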
Learning perceptually grounded word meanings from unaligned parallel data
In order for robots to effectively understand natural language commands, they must be able to acquire meaning representations that can be mapped to perceptual features in the external world. Previous approaches to learning these grounded meaning representations require detailed annotations at training time. In this paper, we present an approach to grounded language acquisition that jointly learns a policy for following natural language commands such as “Pick up the tire pallet,” as well as a mapping between specific phrases in the language and aspects of the external world; for example, the mapping between the words “the tire pallet” and a specific object in the environment. Our approach assumes a parametric form for the policy the robot uses to choose actions in response to a natural language command, which factors according to the structure of the language. We use a gradient method to optimize the model parameters. Our evaluation demonstrates the effectiveness of the model on a corpus of commands given to a robotic forklift by untrained users.
Funding: U.S. Army Research Laboratory (Collaborative Technology Alliance Program, Cooperative Agreement W911NF-10-2-0016); United States Office of Naval Research (MURIs N00014-07-1-0749); United States Army Research Office (MURI N00014-11-1-0688); United States Defense Advanced Research Projects Agency (DARPA BOLT program, contract HR0011-11-2-0008)
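A factored softmax policy trained by gradient ascent can be sketched in a few lines. Everything below is a minimal toy: the phrases, actions, hand-written feature table and single shared weight are invented for illustration and do not reproduce the paper's model, only the general shape of a language-factored policy optimized by gradient.

```python
import numpy as np

# Toy factored policy: each feature couples a phrase in the command with
# an action (a stand-in for grounding a phrase to an aspect of the world),
# and a softmax over summed feature scores picks the action.
phrases = ["pick up", "the tire pallet"]
actions = ["lift_pallet", "drive_forward"]

def features(phrase, action):
    # Hypothetical indicator features: 1.0 when the phrase plausibly
    # grounds to the action, else 0.0.
    table = {("pick up", "lift_pallet"): 1.0,
             ("the tire pallet", "lift_pallet"): 1.0}
    return table.get((phrase, action), 0.0)

def action_probs(theta):
    # Scores factor over the phrases of the command.
    scores = np.array([sum(theta * features(p, a) for p in phrases)
                       for a in actions])
    e = np.exp(scores - scores.max())
    return e / e.sum()

theta = 0.0   # one shared weight, to keep the sketch minimal
lr = 0.5
for _ in range(20):
    # Gradient ascent on log P(correct action), here action 0:
    # d/dtheta log p(a0) = phi(a0) - E_p[phi]
    probs = action_probs(theta)
    phi = np.array([sum(features(p, a) for p in phrases) for a in actions])
    theta += lr * (phi[0] - probs @ phi)

probs = action_probs(theta)  # mass concentrates on the rewarded action
```

After a handful of updates the policy strongly prefers `lift_pallet` for this command, which is the behaviour gradient training of such a factored policy is meant to produce.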
Benchmarking natural-language parsers for biological applications using dependency graphs
BACKGROUND: Interest is growing in the application of syntactic parsers to natural language processing problems in biology, but assessing their performance is difficult because differences in linguistic convention can falsely appear to be errors. We present a method for evaluating their accuracy using an intermediate representation based on dependency graphs, in which the semantic relationships important in most information extraction tasks are closer to the surface. We also demonstrate how this method can easily be tailored to various application-driven criteria.
RESULTS: Using the GENIA corpus as a gold standard, we tested four open-source parsers that have been used in bioinformatics projects. We first present overall performance measures, and then test the two leading tools, the Charniak-Lease and Bikel parsers, on subtasks tailored to reflect the requirements of a system for extracting gene expression relationships. These two tools clearly outperform the other parsers in the evaluation, and achieve accuracy levels comparable to or exceeding those of native dependency parsers on similar tasks in previous biological evaluations.
CONCLUSION: Evaluating with dependency graphs allows parsers to be tested easily on criteria chosen according to the semantics of particular biological applications, drawing attention to important mistakes and absorbing many insignificant differences that would otherwise be reported as errors. Generating high-accuracy dependency graphs from the output of phrase-structure parsers also provides access to the more detailed syntax trees used in several natural-language processing techniques.
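Evaluating over dependency graphs reduces to comparing sets of edges, and the application-driven tailoring amounts to restricting which relation types count. The sketch below illustrates that idea on an invented toy parse; the sentence, edges and relation labels are hand-written, not GENIA data or real parser output.

```python
# Parser output and gold standard as dependency edges
# (head, relation, dependent) for the toy sentence:
# "IL-2 gene expression requires NF-kappa B activation"
gold = {
    ("requires", "nsubj", "expression"),
    ("expression", "nn", "gene"),
    ("gene", "nn", "IL-2"),
    ("requires", "dobj", "activation"),
    ("activation", "nn", "B"),
}
parser_out = {
    ("requires", "nsubj", "expression"),
    ("expression", "nn", "gene"),
    ("expression", "nn", "IL-2"),   # wrong attachment of the modifier
    ("requires", "dobj", "activation"),
    ("activation", "nn", "B"),
}

def edge_scores(pred, gold, relations=None):
    """Precision/recall over dependency edges, optionally restricted
    to the relation types a given application cares about."""
    if relations is not None:
        pred = {e for e in pred if e[1] in relations}
        gold = {e for e in gold if e[1] in relations}
    tp = len(pred & gold)
    return tp / len(pred), tp / len(gold)

overall = edge_scores(parser_out, gold)
# Application-driven criterion: only subject/object links matter for a
# relation-extraction system, so score those relations alone.
core_only = edge_scores(parser_out, gold, relations={"nsubj", "dobj"})
```

Restricting to the relations that matter for the application hides the insignificant modifier-attachment disagreement while still exposing any mistakes on the semantically important edges.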