Detecting and Explaining Crisis
Individuals on social media may reveal themselves to be in various states of
crisis (e.g. suicide, self-harm, abuse, or eating disorders). Detecting crisis
from social media text automatically and accurately can have profound
consequences. However, detecting a general state of crisis without explaining
why has limited applications. An explanation in this context is a coherent,
concise subset of the text that rationalizes the crisis detection. We explore
several methods to detect and explain crisis using a combination of neural and
non-neural techniques. We evaluate these techniques on a unique data set
obtained from Koko, an anonymous emotional support network available through
various messaging applications. We annotate a small subset of the samples
labeled with crisis with corresponding explanations. Our best technique
significantly outperforms the baseline for both detection and explanation.

Comment: Accepted at CLPsych, ACL workshop. 8 pages, 5 figures.
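The techniques themselves are not described in this abstract. As a purely illustrative sketch of the detect-then-explain idea, one non-neural baseline is a linear bag-of-words classifier whose highest-weighted terms present in a flagged message serve as a crude extractive explanation; everything below (the toy data and the top-k term selection) is an assumption for illustration, not the paper's method.

```python
# Hypothetical non-neural detect-then-explain baseline (NOT the paper's method):
# a TF-IDF + logistic-regression classifier flags crisis, and the tokens of the
# message with the largest positive learned weights act as a crude explanation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Invented toy data; the Koko dataset used in the paper is not public here.
texts = ["I want to hurt myself tonight",
         "had a great day at work",
         "I cannot stop thinking about ending it",
         "lunch with friends was fine"]
labels = [1, 0, 1, 0]  # 1 = crisis, 0 = no crisis

vec = TfidfVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

def detect_and_explain(text, top_k=3):
    x = vec.transform([text])
    if not clf.predict(x)[0]:
        return False, []
    # Rank the tokens occurring in this text by their learned crisis weight.
    weights = clf.coef_[0]
    present = x.nonzero()[1]
    ranked = sorted(present, key=lambda i: weights[i], reverse=True)[:top_k]
    vocab = vec.get_feature_names_out()
    return True, [vocab[i] for i in ranked]

print(detect_and_explain("I want to end it all"))
```

A real system would need far better coverage and calibration; the sketch only shows why an extractive explanation falls out of a linear model almost for free.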
Probabilistic Bag-Of-Hyperlinks Model for Entity Linking
Many fundamental problems in natural language processing rely on determining
what entities appear in a given text. Commonly referred to as entity linking,
this step is a core component of many NLP tasks such as text
understanding, automatic summarization, semantic search, and machine translation.
Name ambiguity, word polysemy, context dependencies and a heavy-tailed
distribution of entities contribute to the complexity of this problem.
Here we propose a probabilistic approach that uses an effective
graphical model to perform collective entity disambiguation. Input mentions
(i.e., linkable token spans) are disambiguated jointly across an entire
document by combining a document-level prior of entity co-occurrences with
local information captured from mentions and their surrounding context. The
model is based on simple sufficient statistics extracted from data, thus
relying on few parameters to be learned.
Our method requires neither extensive feature engineering nor an expensive
training procedure. We use loopy belief propagation to perform approximate
inference. The low complexity of our model makes this step sufficiently fast
for real-time usage. We demonstrate the accuracy of our approach on a wide
range of benchmark datasets, showing that it matches, and in many cases
outperforms, existing state-of-the-art methods.
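The paper's potentials are learned from co-occurrence statistics that the abstract does not give, so the following is only a generic sketch of the inference step it names: max-product loopy belief propagation over a fully connected pairwise model of mentions, with invented placeholder numbers standing in for the learned local (mention-context) and pairwise (entity co-occurrence) scores.

```python
# Hedged sketch of collective entity disambiguation via max-product loopy BP.
# phi and psi below are invented toy numbers, not the paper's learned statistics.
import numpy as np

def loopy_max_product(phi, psi, n_iters=20):
    """phi[i][x]  : local score of candidate x for mention i.
    psi[(i, j)] : matrix of pairwise scores, indexed [x_i, x_j].
    Returns the chosen candidate index for each mention."""
    n = len(phi)
    edges = [(i, j) for i in range(n) for j in range(n) if i != j]
    msgs = {e: np.ones(len(phi[e[1]])) for e in edges}
    for _ in range(n_iters):
        new = {}
        for (i, j) in edges:
            b = phi[i].copy()              # belief at i, excluding j's message
            for k in range(n):
                if k != i and k != j:
                    b *= msgs[(k, i)]
            m = (psi[(i, j)] * b[:, None]).max(axis=0)  # max over x_i
            new[(i, j)] = m / m.sum()      # normalize for numerical stability
        msgs = new
    beliefs = []
    for i in range(n):
        b = phi[i].copy()
        for k in range(n):
            if k != i:
                b *= msgs[(k, i)]
        beliefs.append(int(np.argmax(b)))
    return beliefs

# Toy document with two mentions, two candidate entities each.
phi = [np.array([0.7, 0.3]),   # "Paris":  Paris_(city)  vs Paris_Hilton
       np.array([0.6, 0.4])]   # "France": France        vs France_(band)
cooc = np.array([[0.9, 0.1],   # symmetric co-occurrence prior over candidates
                 [0.1, 0.8]])
psi = {(0, 1): cooc, (1, 0): cooc.T}
print(loopy_max_product(phi, psi))  # -> [0, 0]: the coherent city/country pair
```

On a two-node graph BP is exact; the point of the loopy variant is that the same cheap message updates extend to the dense mention graphs of real documents, which is what makes real-time inference plausible.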
Affective computing for smart operations: a survey and comparative analysis of the available tools, libraries and web services
In this paper, we survey the sentiment-analysis tools currently available on the market. Our aim is to optimize the human response in datacenter operations, using a combination of research tools that help decrease human error in the general operation of complex infrastructures. Adopting sentiment-analysis tools is a first step toward extending our capabilities for optimizing the human interface. Using different data collections from a variety of data sources, our research produced an interesting outcome: in our final testing, the three main commercial platforms (IBM Watson, Google Cloud, and Microsoft Azure), all based on artificial neural network and deep learning techniques, reach the same accuracy (89-90%) across the datasets tested. Stand-alone applications and APIs such as VADER or MeaningCloud reach a similar accuracy level on some of the datasets with a different approach, semantic networks such as ConceptNet, and their models can be pushed above 90% accuracy simply by adjusting some parameters of the semantic model. This paper points to future directions for optimizing datacenter operations management and decreasing human error in complex environments.
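Of the stand-alone tools named above, VADER is open source, so its API makes a convenient illustration of what these sentiment scores look like in practice. A minimal usage sketch follows; the example sentences are invented, and the +/-0.05 compound-score cutoffs are the convention commonly cited with the library, not thresholds taken from this survey.

```python
# Minimal VADER usage sketch (pip install vaderSentiment).
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

for text in ["The migration went smoothly, great job team!",
             "Node 7 is down again and nobody answered the page."]:
    scores = analyzer.polarity_scores(text)  # keys: neg, neu, pos, compound
    label = ("positive" if scores["compound"] >= 0.05
             else "negative" if scores["compound"] <= -0.05
             else "neutral")
    print(f"{label:8s} {scores['compound']:+.3f}  {text}")
```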