Perspectives on Improving the Working Conditions of NTT Faculty
Perspectives on Improving the Working Conditions of NTT Faculty discusses issues in the working conditions of non-tenure-track (NTT) faculty
Rethinking environmental peace-building in the context of resource extraction in Colombia
In this thesis I seek to explain the links between the governance of resources and the peace process in Colombia. To meet this aim, I explore the role of civil society movements in struggles related to extractive projects in the country.
In the first section of the thesis, I explore how extractive operations tend to reinforce pre-existing dynamics of violence that inhibit the peace-building process, and explain that the government deliberately leaves issues related to the extractive sector out of the peace agreements. Secondly, I explain the role of civil society movements in contesting extractive projects and in advancing alternative paths for resource governance.
I argue that, in contrast to the official commitment to peace, the alternative agendas on resource governance advanced by civil society movements contribute to the construction of long-term peace in the country. The movements’ chief claims include principles of food sovereignty and popular participation. In the chosen case study, which I analyze in the second section of the thesis, I address precisely the significance of a mechanism for popular democracy called consulta popular, i.e. a local referendum on mining, in relation to the “La Colosa” gold mining project. In exploring the significance of the consulta popular I shed light on the factors that contribute, or fail to contribute, to its implementation and legitimization. I show that the organization around a consulta popular contains some internal frictions but, overall, unifies the civil-society movements engaged in the mobilizations against “La Colosa”. I also explain that state and industry actors oppose the application of consultas populares on mining in an authoritarian manner, through legislative changes and threats to individuals.
From my findings, I argue that the increased use of the democratic mechanism of the consulta popular in Colombia in recent years represents a local response countering the violence of extractive exploitation, and reflects civil society’s claims for enhanced social justice alongside the national peace process. While the first section of the thesis relies mostly on secondary data, the second section is the result of four months of fieldwork conducted in Colombia in 2015. Finally, a theoretical aim of this study is to further an encounter between the fields of resource governance and politics and peace and conflict studies. In particular, the study draws on these fields’ critical concepts, which give importance to issues of participation and to sub-national dynamics of governance, and places the concept of environmental peace-building at its center
Life Events and Treatment Outcomes Among Individuals with Substance use Disorders: A Narrative Review
Substance use disorders are characterized by a variable course, in which multiple treatment attempts and relapses are typical. Consistent with conceptualizations of substance use and relapse, life events have been implicated in contributing to poor substance use disorder treatment outcomes. However, inconsistencies in empirical findings in the literature on life events and substance use disorder outcomes have been previously observed. This review provides an updated critique of the literature since the previous review published in 1987 (O'Doherty & Davies, 1987), examining the relationship between life events and substance use disorder treatment outcomes among clinical samples of individuals. A review of 18 peer-reviewed articles suggested that data on the relationship between life events and outcomes continue to be inconclusive. Inconsistencies across studies in the operationalization of life events and substance use treatment outcomes, and a lack of theoretically driven designs, may be contributing to differences in findings. Recommendations for future research that will increase the clinical utility of the life events construct are provided
Interactive inference: a multi-agent model of cooperative joint actions
We advance a novel computational model of multi-agent, cooperative joint
actions that is grounded in the cognitive framework of active inference. The
model assumes that to solve a joint task, such as pressing together a red or
blue button, two (or more) agents engage in a process of interactive inference.
Each agent maintains probabilistic beliefs about the goal of the joint task
(e.g., should we press the red or blue button?) and updates them by observing
the other agent's movements, while in turn selecting movements that make its
own intentions legible and easy to infer by the other agent (i.e., sensorimotor
communication). Over time, the interactive inference aligns both the beliefs
and the behavioral strategies of the agents, hence ensuring the success of the
joint action. We exemplify the functioning of the model in two simulations. The
first simulation illustrates a "leaderless" joint action. It shows that when
two agents lack a strong preference about their joint task goal, they jointly
infer it by observing each other's movements. In turn, this helps the
interactive alignment of their beliefs and behavioral strategies. The second
simulation illustrates a "leader-follower" joint action. It shows that when one
agent ("leader") knows the true joint goal, it uses sensorimotor communication
to help the other agent ("follower") infer it, even if doing this requires
selecting a more costly individual plan. These simulations illustrate that
interactive inference supports successful multi-agent joint actions and
reproduces key cognitive and behavioral dynamics of "leaderless" and
"leader-follower" joint actions observed in human-human experiments. In sum,
interactive inference provides a cognitively inspired, formal framework to
realize cooperative joint actions and consensus in multi-agent systems.
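The belief-updating step the abstract describes — each agent maintaining probabilistic beliefs about the joint goal and refining them from the partner's observed movements — can be sketched as a plain Bayesian update. This is a toy illustration with assumed numbers, not the paper's full active-inference model:

```python
import numpy as np

def update_goal_belief(prior, likelihoods):
    """One Bayes update of the belief over the joint goal.

    prior: P(goal) for each goal; likelihoods: P(observation | goal).
    """
    posterior = prior * likelihoods
    return posterior / posterior.sum()

# Two possible joint goals: press the red (index 0) or blue (index 1) button.
belief = np.array([0.5, 0.5])  # no initial preference ("leaderless" case)

# Each observed partner movement toward the red button is more likely
# under the red-button goal (assumed likelihoods).
for _ in range(3):
    belief = update_goal_belief(belief, np.array([0.8, 0.2]))

# After a few observations the belief concentrates on the red-button goal.
```

In the paper's "leader-follower" case, the leader would additionally choose movements that make this likelihood ratio as informative as possible for the follower, even at some individual cost.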
What is the best spatial distribution to model base station density? A deep dive into two European mobile networks
This paper studies the base station (BS) spatial distributions across different scenarios in urban, rural, and coastal zones, based on real BS deployment data sets obtained from two European countries (Italy and Croatia). The paper considers several representative statistical distributions to characterize the probability density function of the BS spatial density, including the Poisson, generalized Pareto, Weibull, lognormal, and α-stable distributions. Based on a thorough comparison with the real data sets, our results clearly show that the α-stable distribution is the most accurate of the candidates in urban scenarios. This finding is confirmed across different sample area sizes, operators, and cellular technologies (GSM/UMTS/LTE). On the other hand, the lognormal and Weibull distributions tend to fit the real distributions better in rural and coastal scenarios. We believe that the results of this paper can be exploited to derive fruitful guidelines for BS deployment in cellular network design, providing various network performance metrics, such as coverage probability, transmission success probability, throughput, and delay
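The comparison the abstract describes — fitting several candidate distributions to density samples and ranking them by goodness of fit — can be sketched with SciPy. This is a minimal illustration on synthetic data (the paper uses real deployment data from Italy and Croatia, not reproduced here), and the α-stable candidate is omitted because fitting it is slow:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for per-area BS density samples (assumed parameters).
samples = rng.lognormal(mean=1.0, sigma=0.6, size=2000)

# A subset of the candidate models considered in the paper.
candidates = {
    "lognormal": stats.lognorm,
    "Weibull": stats.weibull_min,
    "gen. Pareto": stats.genpareto,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(samples)                       # maximum-likelihood fit
    ks_stat, _ = stats.kstest(samples, dist.cdf, args=params)
    results[name] = ks_stat                          # smaller = better fit

best = min(results, key=results.get)
```

Ranking by the Kolmogorov-Smirnov statistic is one common choice; the paper's actual fitting and comparison methodology may differ.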
A sampling strategy of the radiation operator in near-zone based on an asymptotic kernel
In this paper, we address the problem of discretizing the singular system of the radiation operator for the case of a magnetic strip current whose radiated field is observed in the near zone on a bounded line parallel to the source. This question has already been addressed in previous articles, with the limitation that the extension of the observation domain must not exceed the source size. In this article, we remove this limitation and provide a discrete model that well approximates the singular values of the radiation operator in the case where the observation domain is larger than the source
Soil quality of a degraded urban area
Human activities modify soil characteristics, leading to a significant reduction of soil fertility and quality.
The aim of this study was to evaluate the relationships between microbial activity or biomass and chemical characteristics (i.e. heavy metal and organic matter contents) of a degraded urban soil.
The study area is an urban park (about 10 ha, called Quarantena) near Lake Fusaro in the Campi Flegrei (Southern Italy); the park was established in 1953 to shelter animals arriving from all over the world and to carry out veterinary checks before their delivery to various European zoos. The park was abandoned in 1997, and nowadays a large amount of urban waste accumulates in it. Surface soils (0-10 cm) were sampled at three points: two covered by Holm Oak specimens (P1 and P2) and one covered by herbaceous species, particularly legumes (P3). P1 was located at the border of the park, next to a busy road; P2 at the centre of the Quarantena Park; P3 at a gap area near Lake Fusaro.
The results showed that the soil sampled at P1 had the highest Cr and Ni concentrations; the soil sampled at P3 had high levels of Cu and Pb, exceeding the threshold value of 100 μg g⁻¹ d.w. fixed by Italian law for urban soils, probably due to boat traffic, fishing and agricultural activities; the soil sampled at P2 had intermediate metal concentrations but the highest amount of organic matter (more than 20% d.w.). Despite the metal contamination, P1 and P3 showed higher soil microbial biomass and activity than P2. Therefore, at the latter site the organic matter accumulation could be due to scarce litter degradation.
In conclusion, although the study area was not large, a wide heterogeneity of soil quality (in terms of the investigated chemical and biological characteristics) was detected, depending on the local human impact
Decomposition of single and mixed litters of Quercus ilex L., Pistacia lentiscus L., Phillyrea angustifolia L., and Cistus spp. in a low-macchia area of the Castel Volturno Reserve (Southern Italy)
Most studies of decomposition concern leaf litter of single species; very few address mixed litters, which better represent the effects of plant community diversity on this process. In this study, the decomposition of Quercus ilex L., Pistacia lentiscus L., Phillyrea angustifolia and Cistus spp. was investigated using single-species litter bags and mixed-litter bags, for a total of 10 bag types. The proportions of the individual species in the mixtures were 33:33:33 and 50:25:25. The litter bags were incubated in the low macchia of the Castel Volturno Nature Reserve, in the same area where the litter had been collected. Decomposition and fungal colonization were determined after 96 days of incubation. Pure Cistus litter lost 25% of its initial mass in about 3 months; mixed with Phillyrea and with Pistacia it showed significantly lower values of decomposition and fungal colonization. In the same period, Phillyrea lost 23% of its initial mass. Quercus ilex and Pistacia litters, characterized by a higher initial lignin content, decomposed more slowly, losing 18% and 14% of their initial mass respectively. No mixture effects on decomposition were found for Phillyrea, Quercus ilex or Pistacia
A Hybrid Framework for Text Analysis
In Computational Linguistics there is an essential dichotomy between linguists and computer scientists. The first, with a strong knowledge of language structures, lack engineering skills. The second, conversely, expert in computing and mathematics, do not assign value to the basic mechanisms and structures of language. Moreover, this discrepancy has widened in recent decades with the growth of computational resources and the gradual computerization of the world; the use of Machine Learning technologies in solving Artificial Intelligence problems, which allows machines, for example, to learn starting from manually generated examples, has been used more and more often in Computational Linguistics in order to overcome the obstacle represented by language structures and their formal representation.
The dichotomy has resulted in the birth of two main approaches to Computational Linguistics, which respectively prefer:
rule-based methods, which try to imitate the way humans use and understand language, reproducing the syntactic structures on which the understanding process is based and building lexical resources such as electronic dictionaries, taxonomies or ontologies;
statistics-based methods which, conversely, treat language as a set of elements, quantifying words mathematically and trying to extract information without identifying syntactic structures or, in some algorithms, trying to give the machine the ability to learn these structures.
One of the main problems is the lack of communication between these two different approaches, due to the substantial differences characterizing them: on the one hand there is a strong focus on how language works and on its characteristics, with a tendency toward analytical and manual work; on the other hand, the engineering perspective finds in language an obstacle and sees in algorithms the fastest way to overcome it.
However, the lack of communication is not an outright incompatibility: following Harris, the best way to approach natural language could be to take the best of both.
At the moment, there is a large number of open-source tools that perform text analysis and Natural Language Processing. A great part of these tools are based on statistical models and consist of separate modules which can be combined to create a pipeline for processing text. Many of these resources are code packages with no GUI (Graphical User Interface) and are therefore impossible to use for users without programming skills. Furthermore, the vast majority of these open-source tools support only the English language and, when Italian is included, the performance of the tools decreases significantly. On the other hand, open-source tools for the Italian language are very few.
In this work we want to fill this gap by presenting a new hybrid framework for the analysis of Italian texts. It is not intended as a commercial tool; rather, it was built to help linguists and other scholars perform rapid text analysis and produce linguistic data. The framework, which performs both statistical and rule-based analysis, is called LG-Starship. The idea is to build a modular software package that includes, at the outset, the basic algorithms needed to perform different kinds of analysis. The modules perform the following tasks:
Preprocessing Module: loads a text, normalizes it and removes stop-words. As output, the module presents the list of tokens and letters which compose the text, with their occurrence counts, and the processed text.
Mr. Ling Module: performs POS tagging and lemmatization. The module also returns the table of lemmas with occurrence counts and the table quantifying grammatical tags.
Statistic Module: calculates Term Frequency and TF-IDF of tokens or lemmas, extracts bigram and trigram units and exports the results as tables.
Semantic Module: uses the Hyperspace Analogue to Language algorithm to calculate semantic similarity between words. The module returns word-by-word similarity matrices which can be exported and analyzed.
Syntactic Module: analyzes the syntactic structure of a selected sentence and tags the verbs and their arguments with semantic labels.
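The TF-IDF weighting computed by the Statistic Module can be sketched in a few lines. This is a minimal, generic sketch of the standard formula, not LG-Starship's actual implementation; the tiny Italian corpus is invented for illustration:

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF weights for a list of tokenized documents.

    tf = term count / document length; idf = log(N / document frequency).
    """
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({t: (c / total) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

docs = [["il", "cane", "corre"],
        ["il", "gatto", "dorme"],
        ["il", "cane", "dorme"]]
w = tf_idf(docs)
# "il" occurs in every document, so its idf (and weight) is 0;
# rarer words like "corre" receive higher weights.
```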
The objective of the Framework is to build an all-in-one platform for NLP which allows any kind of user to perform basic and advanced text analysis. To make the Framework accessible to users without specific computer science and programming skills, the modules have been provided with an intuitive GUI. The framework can be considered hybrid in a double sense: as explained above, it uses both statistical and rule-based methods, relying on standard statistical algorithms and techniques and, at the same time, on Lexicon-Grammar syntactic theory. In addition, it has been written in both the Java and Python programming languages. The LG-Starship Framework has a simple Graphical User Interface but will also be released as separate modules which may be included independently in any NLP pipeline.
There are many resources of this kind, but the large majority work for English. There are very few free resources for the Italian language, and this work tries to meet this need by proposing a tool which can be used both by linguists and other scientists interested in language and text analysis who have no knowledge of programming languages, and by computer scientists, who can use the free modules in their own code or in combination with different NLP algorithms.
The Framework starts from a text or corpus written directly by the user or loaded from an external resource. The LG-Starship Framework workflow is described in the flowchart shown in fig. 1. The pipeline shows that the Pre-Processing Module is applied to the original imported or generated text in order to produce a clean, normalized preprocessed text. This module includes a text-splitting function, a stop-word list and a tokenization method. The Statistic Module or the Mr. Ling Module can then be applied to the preprocessed text. The first, which includes basic statistical algorithms such as Term Frequency, TF-IDF and n-gram extraction, produces as output databases of lexical and numerical data which can be used to produce charts or to perform further external analysis. The second is divided into two main tasks: a POS tagger, based on the Averaged Perceptron Tagger [?] and trained on the Paisà Corpus [Lyding et al., 2014], performs the Part-Of-Speech tagging and produces an annotated text; a lemmatization method, which relies on a set of electronic dictionaries developed at the University of Salerno [Elia, 1995, Elia et al., 2010], takes the POS-tagged text as input and produces a new lemmatized version of the original text with information about syntactic and semantic properties.
This lemmatized text, which can also be processed with the Statistic Module, serves as input for two deeper levels of text analysis carried out by the Syntactic Module and the Semantic Module.
The first rests on Lexicon-Grammar theory [Gross, 1971, 1975] and uses a database of predicate structures under development at the Department of Political, Social and Communication Science. Its objective is to produce a dependency graph of the sentences that compose the text.
The Semantic Module uses the Hyperspace Analogue to Language distributional semantics algorithm [Lund and Burgess, 1996], trained on the Paisà Corpus, to produce a semantic network of the words of the text.
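The core of the HAL algorithm the Semantic Module relies on is a distance-weighted co-occurrence matrix whose rows and columns serve as word vectors. The sketch below is a toy English illustration of that idea (Lund & Burgess, 1996), not the Paisà-trained model used in LG-Starship:

```python
import numpy as np

def hal_matrix(tokens, vocab, window=2):
    """Build a simple HAL-style co-occurrence matrix.

    Each word gets credit for the words that follow it within `window`
    positions, with a weight that decreases with distance.
    """
    index = {w: i for i, w in enumerate(vocab)}
    m = np.zeros((len(vocab), len(vocab)))
    for i, w in enumerate(tokens):
        for d in range(1, window + 1):
            if i + d < len(tokens):
                m[index[w], index[tokens[i + d]]] += window - d + 1
    return m

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

tokens = "the dog runs the cat runs the dog sleeps".split()
vocab = sorted(set(tokens))
m = hal_matrix(tokens, vocab)

# Concatenating each word's row (following context) and column (preceding
# context) gives its HAL vector; similar contexts give high cosine similarity.
vecs = np.concatenate([m, m.T], axis=1)
sim = cosine(vecs[vocab.index("dog")], vecs[vocab.index("cat")])
```

Pairwise cosines over such vectors yield exactly the kind of word-by-word similarity matrix the module exports.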
This workflow has been used in two different experiments involving two user-generated corpora.
The first experiment is a statistical study of the language of rap music in Italy, through the analysis of a large corpus of rap song lyrics downloaded from online databases of user-generated lyrics.
The second experiment is a feature-based Sentiment Analysis project performed on user product reviews. For this project we integrated a large domain database of linguistic resources for Sentiment Analysis, developed in past years by the Department of Political, Social and Communication Science of the University of Salerno, which consists of polarized dictionaries of Verbs, Adjectives, Adverbs and Nouns.
These two experiments show how the linguistic framework can be applied to different levels of analysis and can produce both qualitative and quantitative data.
As for the results obtained, the Framework, which is only at a beta version, achieves reasonable results both in terms of processing time and in terms of precision. Nevertheless, the work is far from complete. More algorithms will be added to the Statistic Module and the Syntactic Module will be completed. The GUI will be improved and made more attractive and modern and, in addition, an open-source online version of the modules will be published. [edited by author]