Building a Sentiment Corpus of Tweets in Brazilian Portuguese
The large amount of data available in social media, forums and websites
motivates research in several areas of Natural Language Processing, such as
sentiment analysis. The area's popularity, owing to its subjective and
semantic characteristics, motivates research on novel methods and approaches
for classification. Hence, there is a high demand for datasets covering
different domains and languages. This paper introduces TweetSentBR, a
sentiment corpus for Brazilian Portuguese with 15,000 manually annotated
sentences in the TV show domain. The sentences were labeled in three classes
(positive, neutral and
negative) by seven annotators, following literature guidelines for ensuring
reliability on the annotation. We also ran baseline experiments on polarity
classification using three machine learning methods, reaching 80.99%
F-measure and 82.06% accuracy on binary classification, and 59.85% F-measure
and 64.62% accuracy on three-point classification.
Comment: Accepted for publication at the 11th International Conference on Language
Resources and Evaluation (LREC 2018).
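As an illustration of the kind of polarity baseline the paper reports, here is a minimal Naive Bayes text classifier in pure Python; the toy Portuguese examples and the whitespace tokenization are invented for the sketch and are not taken from TweetSentBR:

```python
from collections import Counter, defaultdict
import math

def train_nb(docs):
    """docs: list of (tokens, label). Returns class log-priors and
    Laplace-smoothed per-class token log-likelihoods."""
    class_counts = Counter(label for _, label in docs)
    token_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        token_counts[label].update(tokens)
        vocab.update(tokens)
    priors = {c: math.log(n / len(docs)) for c, n in class_counts.items()}
    loglik = {}
    for c in class_counts:
        total = sum(token_counts[c].values()) + len(vocab)
        loglik[c] = {t: math.log((token_counts[c][t] + 1) / total) for t in vocab}
        loglik[c]["<unk>"] = math.log(1 / total)  # unseen-token fallback
    return priors, loglik

def predict(priors, loglik, tokens):
    """Pick the class with the highest posterior log-score."""
    scores = {c: priors[c] + sum(loglik[c].get(t, loglik[c]["<unk>"])
                                 for t in tokens)
              for c in priors}
    return max(scores, key=scores.get)

# Invented toy training data in the spirit of TV-show tweets.
train = [
    ("otimo programa adorei".split(), "positive"),
    ("muito bom adorei o show".split(), "positive"),
    ("pessimo programa odiei".split(), "negative"),
    ("que show ruim odiei".split(), "negative"),
]
priors, loglik = train_nb(train)
print(predict(priors, loglik, "adorei o programa".split()))  # -> positive
```

A real baseline would of course train on the annotated corpus and evaluate with F-measure and accuracy as in the paper.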
Cross-Lingual Induction and Transfer of Verb Classes Based on Word Vector Space Specialisation
Existing approaches to automatic VerbNet-style verb classification are
heavily dependent on feature engineering and therefore limited to languages
with mature NLP pipelines. In this work, we propose a novel cross-lingual
transfer method for inducing VerbNets for multiple languages. To the best of
our knowledge, this is the first study which demonstrates how the architectures
for learning word embeddings can be applied to this challenging
syntactic-semantic task. Our method uses cross-lingual translation pairs to tie
each of the six target languages into a bilingual vector space with English,
jointly specialising the representations to encode the relational information
from English VerbNet. A standard clustering algorithm is then run on top of the
VerbNet-specialised representations, using vector dimensions as features for
learning verb classes. Our results show that the proposed cross-lingual
transfer approach sets new state-of-the-art verb classification performance
across all six target languages explored in this work.
Comment: EMNLP 2017 (long paper).
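The final step — running a standard clustering algorithm over the specialised vectors to obtain verb classes — can be sketched with a tiny k-means in pure Python; the 2-D "specialised" verb vectors below are invented stand-ins for real embeddings:

```python
import random

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(vectors, k, iters=20, seed=0):
    """Plain k-means over dense vectors; returns a cluster label per vector."""
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    labels = [0] * len(vectors)
    for _ in range(iters):
        labels = [min(range(k), key=lambda i: sq_dist(v, centroids[i]))
                  for v in vectors]
        for i in range(k):
            members = [v for v, lab in zip(vectors, labels) if lab == i]
            if members:  # keep old centroid if a cluster empties out
                centroids[i] = tuple(sum(d) / len(members) for d in zip(*members))
    return labels

# Invented toy vectors: motion verbs vs cognition verbs.
verbs = {"run": (0.9, 0.1), "walk": (0.8, 0.2),
         "think": (0.1, 0.9), "believe": (0.2, 0.8)}
labels = kmeans(list(verbs.values()), k=2)
```

Here the vector dimensions serve directly as clustering features, as in the paper; run and walk end up in one class, think and believe in the other.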
Methods for improving entity linking and exploiting social media messages across crises
Entity Linking (EL) is the task of automatically identifying entity mentions in texts and resolving them to a corresponding entity in a reference knowledge base (KB). A large number of tools are available for different types of documents and domains; however, the entity linking literature has shown that a tool's quality varies across corpora and depends on specific characteristics of the corpus it is applied to. Moreover, the lack of precision on particularly ambiguous mentions often spoils the usefulness of automated disambiguation results in real-world applications.
In the first part of this thesis I explore an approximation of the difficulty of linking entity mentions and frame it as a supervised classification task. Classifying difficult-to-disambiguate entity mentions can facilitate identifying critical cases as part of a semi-automated system, while detecting latent corpus characteristics that affect entity linking performance. Moreover, despite the large number of entity linking tools proposed over the past years, some tools work better on short mentions while others perform better when there is more contextual information. To this end, I propose a solution that exploits the results of distinct entity linking tools on the same corpus, leveraging their individual strengths on a per-mention basis. The proposed solution proved effective and outperformed the individual entity linking systems in a series of experiments.
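A minimal sketch of this per-mention combination idea — routing each mention to the tool that historically performs best on mentions like it — could look as follows; the single `short` feature, the tool names and the toy history are invented for illustration:

```python
from collections import defaultdict

def learn_router(history):
    """history: (features, tool, was_correct) triples from held-out data.
    Learns each tool's accuracy per feature bucket, then routes new
    mentions to the tool with the best accuracy in their bucket."""
    stats = defaultdict(lambda: [0, 0])  # (bucket, tool) -> [correct, total]
    for feats, tool, correct in history:
        key = (feats["short"], tool)
        stats[key][0] += int(correct)
        stats[key][1] += 1

    def route(feats, tools):
        def acc(t):
            c, n = stats[(feats["short"], t)]
            return c / n if n else 0.0
        return max(tools, key=acc)

    return route

# Toy history: toolA wins on short mentions, toolB on longer ones.
history = [
    ({"short": True}, "toolA", True), ({"short": True}, "toolB", False),
    ({"short": False}, "toolA", False), ({"short": False}, "toolB", True),
]
route = learn_router(history)
```

A real system would use a richer feature set (ambiguity, context length, mention type) and a proper supervised classifier rather than a single bucketed accuracy table.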
An important component in the majority of entity linking tools is the probability that a mention links to a given entity in a reference knowledge base, and this probability is usually computed over a static snapshot of the KB. However, an entity's popularity is temporally sensitive and may change due to short-term events. These changes may then be reflected in the KB, and EL tools can produce different results for a given mention at different times. I investigated how this prior probability changes over time, as well as the overall disambiguation performance, using KB snapshots from different time periods.
The second part of this thesis is mainly concerned with short texts. Social media has become an integral part of modern society. Twitter, for instance, is one of the most popular social media platforms in the world, enabling people to share their opinions and post short messages about any subject on a daily basis. First, I present an approach to identifying informative messages during catastrophic events using deep learning techniques. By automatically detecting informative messages posted by users during major events, professionals involved in crisis management can better estimate damage from the relevant information posted on social media channels and act immediately. I also performed an analysis of Twitter messages posted during the Covid-19 pandemic: I collected 4 million tweets posted in Portuguese since the beginning of the pandemic and analyzed the debate around it, using topic modeling, sentiment analysis and hashtag recommendation techniques to provide insights into the online discussion of the Covid-19 pandemic.
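The mention-to-entity prior discussed above can be sketched as a simple ratio of anchor counts per KB snapshot; the two snapshot tables below are invented to illustrate how a short-term event can flip the most probable entity for a surface form:

```python
from collections import Counter

def link_prior(anchor_counts, mention):
    """P(entity | mention): the relative frequency with which the surface
    form links to each candidate entity in a given KB snapshot."""
    counts = anchor_counts.get(mention, Counter())
    total = sum(counts.values())
    return {e: c / total for e, c in counts.items()} if total else {}

# Invented anchor statistics for two hypothetical KB snapshots.
kb_old = {"corona": Counter({"Corona_(beer)": 90, "Coronavirus": 10})}
kb_new = {"corona": Counter({"Corona_(beer)": 20, "Coronavirus": 80})}

top_old = max(link_prior(kb_old, "corona").items(), key=lambda kv: kv[1])[0]
top_new = max(link_prior(kb_new, "corona").items(), key=lambda kv: kv[1])[0]
```

An EL tool relying only on this prior would resolve the same mention differently depending on which snapshot it was built from, which is exactly the temporal sensitivity the thesis investigates.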
Social Media Text Processing and Semantic Analysis for Smart Cities
With the rise of Social Media, people obtain and share information almost
instantly on a 24/7 basis. Many research areas have tried to gain valuable
insights from these large volumes of freely available user-generated content.
With the goal of extracting knowledge from social media streams that might be
useful in the context of intelligent transportation systems and smart cities,
we designed and developed a framework that provides functionalities for
parallel collection of geo-located tweets from multiple pre-defined bounding
boxes (cities or regions), including filtering of non-complying tweets, text
pre-processing for Portuguese and English, topic modeling, and
transportation-specific text classifiers, as well as aggregation and data
visualization.
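A minimal sketch of the filtering and pre-processing steps the framework provides; the bounding-box format and the cleanup rules here are simplified assumptions, not the framework's actual API:

```python
import re

def in_bbox(lat, lon, bbox):
    """Keep only tweets geo-located inside a pre-defined bounding box,
    given as (lat_min, lon_min, lat_max, lon_max)."""
    lat_min, lon_min, lat_max, lon_max = bbox
    return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

def preprocess(text):
    """Basic tweet cleanup: strip URLs, user mentions and hashtags,
    collapse whitespace, lowercase."""
    text = re.sub(r"https?://\S+", "", text)
    text = re.sub(r"[@#]\w+", "", text)
    return re.sub(r"\s+", " ", text).strip().lower()
```

In the actual framework these steps run in parallel per city/region, feeding the topic models and classifiers described below.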
We performed an exploratory data analysis of geo-located tweets in five
different cities: Rio de Janeiro, São Paulo, New York City, London and
Melbourne, comprising a total of more than 43 million tweets over a period of
three months. Furthermore, we performed a large-scale topic modelling
comparison between Rio de Janeiro and São Paulo. Interestingly, most of the
topics are shared between the two cities, which, despite being in the same
country, are considered very different in terms of population, economy and
lifestyle.
We take advantage of recent developments in word embeddings and train such
representations from the collections of geo-located tweets. We then use a
combination of bag-of-embeddings and traditional bag-of-words to train
travel-related classifiers in both Portuguese and English to separate
travel-related content from unrelated content. We created specific
gold-standard data to perform an empirical evaluation of the resulting
classifiers. The results are in line with research in other application areas,
showing the robustness of word embeddings for learning word similarities that
bag-of-words is unable to capture.
Supervised learning for the detection of negation and of its scope in French and Brazilian Portuguese biomedical corpora
Automatic detection of negated content is often a prerequisite in information extraction systems in various domains. It is especially important in the biomedical domain, where negation plays a central role. This work makes two main contributions. First, we address languages that have been poorly covered up to now: Brazilian Portuguese and French. We developed new corpora for these two languages, manually annotated with negation cues and their scopes. Second, we propose automatic methods based on supervised machine learning for detecting negation cues and their scopes. The methods prove robust in both languages (Brazilian Portuguese and French) and in cross-domain (general and biomedical language) settings. The approach is also validated on English data from the state of the art: it yields very good results and outperforms existing approaches. In addition, the application is accessible and usable online. We expect that these contributions (new annotated corpora, an application accessible online, and cross-domain robustness) will improve the reproducibility of the results and the robustness of NLP applications.
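The cue-and-scope annotation scheme can be illustrated with a deliberately simple tagger; the paper's methods are supervised machine learning, so this English lexicon-and-punctuation heuristic is only a stand-in to show the output format:

```python
def tag_negation(tokens, cues=frozenset({"not", "no", "never"})):
    """Toy tagger: label negation cues as CUE, tokens inside their scope
    (here: everything to the right of a cue up to the next punctuation
    mark) as I-SCOPE, and everything else as O."""
    tags, in_scope = [], False
    for tok in tokens:
        if tok.lower() in cues:
            tags.append("CUE")
            in_scope = True
        elif tok in {".", ",", ";"}:
            tags.append("O")
            in_scope = False
        else:
            tags.append("I-SCOPE" if in_scope else "O")
    return tags
```

A supervised system replaces both the fixed cue lexicon and the punctuation rule with models learned from the annotated corpora.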
A graph-based framework for data retrieved from criminal-related documents
The digitalization of companies' processes has enhanced the treatment and analysis of a growing volume of data from heterogeneous sources, with emerging challenges, namely those related to knowledge representation. The Criminal Police faces similar challenges, considering the amount of unstructured data from police reports that is manually analyzed by criminal investigators, consuming time and resources.
There is a need to automatically extract and represent the unstructured data existing in criminal-related documents and to reduce the manual analysis performed by criminal investigators. Computer science faces the challenge of applying emergent computational models as an alternative for extracting and representing the data, using new or existing methods.
A broad set of computational methods has been applied to the criminal domain, such as the identification and classification of named entities (NEs), for example narcotics, or the extraction of relations between entities relevant to a criminal investigation. However, these methods have mainly been applied to the English language; in Portugal, research in this domain using computational methods is scarce, making their application in criminal investigation unfeasible.
This thesis proposes an integrated solution for representing unstructured data retrieved from documents, using a set of computational methods. A Preprocessing of Criminal-Related Documents module, supported by Extraction, Transformation and Loading tasks, is followed by a Natural Language Processing pipeline for the Portuguese language that performs syntactic and semantic analysis of the textual data. Next, the 5W1H Information Extraction method combines Named-Entity Recognition, Semantic Role Labelling and Criminal Terms Extraction tasks. Finally, Graph Database Population and Enrichment allows the representation of the retrieved data in a Neo4j graph database.
Globally, the framework presents promising results, which were validated using prototypes developed for this purpose. In addition, the feasibility of extracting unstructured data, its syntactic and semantic interpretation, and its representation in the graph database have been demonstrated.
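The graph-population step can be sketched as turning 5W1H extractions into edges; the slot names and the PERFORMED relation below are illustrative assumptions, and the actual system writes the equivalent structure into Neo4j (e.g. via Cypher MERGE statements) rather than an in-memory set:

```python
def populate_graph(extractions):
    """Turn 5W1H extraction dicts into (subject, RELATION, object) edges,
    mirroring what a graph-database population step would create."""
    edges = set()
    for ex in extractions:
        who, what = ex.get("who"), ex.get("what")
        if who and what:
            edges.add((who, "PERFORMED", what))
        for slot in ("where", "when", "how", "why"):
            if what and ex.get(slot):
                edges.add((what, slot.upper(), ex[slot]))
    return edges

# Invented toy extraction from a hypothetical police report.
report = [{"who": "suspect", "what": "sale", "where": "Lisbon",
           "when": "2020-01-02"}]
edges = populate_graph(report)
```

Once in the graph, investigators can traverse from an event node to its actors, places and times instead of re-reading the source reports.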