4,736 research outputs found

    Better Safe Than Sorry: An Adversarial Approach to Improve Social Bot Detection

    The arms race between spambots and spambot detectors consists of several cycles (or generations): a new wave of spambots is created (and new spam is spread), new spambot filters are derived, and old spambots mutate (or evolve) into new species. Recently, with the diffusion of adversarial learning, a new practice is emerging: deliberately manipulating target samples in order to build stronger detection models. Here, we manipulate generations of Twitter social bots to obtain - and study - their possible future evolutions, with the aim of eventually deriving more effective detection techniques. In detail, we propose and experiment with a novel genetic algorithm for the synthesis of online accounts. The algorithm creates synthetic, evolved versions of current state-of-the-art social bots. Results demonstrate that the synthetic bots do evade current detection techniques. However, they provide all the elements needed to improve such techniques, making a proactive approach to the design of social bot detection systems possible.
    Comment: This is the pre-final version of a paper accepted @ 11th ACM Conference on Web Science, June 30-July 3, 2019, Boston, U
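    The abstract does not detail the genetic algorithm, so the following is only a minimal, generic sketch of the evolutionary loop it describes: candidate accounts (reduced here to invented numeric feature vectors) are selected, recombined, and mutated to minimise the score of a toy detector. The detector, the fitness function, and the operators are all assumptions for illustration, not the paper's method.

```python
import random

random.seed(0)

# Toy stand-in for a bot detector: flags accounts whose mean feature value
# is high. Both the detector and the features are invented for illustration.
def detector_score(features):
    return sum(features) / len(features)

def fitness(features):
    return -detector_score(features)  # lower detector score = better evasion

def mutate(features, rate=0.2, step=0.1):
    return [f + random.uniform(-step, step) if random.random() < rate else f
            for f in features]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop, generations=50, elite=10):
    for _ in range(generations):
        pop = sorted(pop, key=fitness, reverse=True)
        parents = pop[:elite]  # elitism: the best candidates always survive
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(len(pop) - elite)]
        pop = parents + children
    return max(pop, key=fitness)

population = [[random.random() for _ in range(5)] for _ in range(40)]
best = evolve(population)
```

    The evolved `best` individual is the synthetic "future bot" of this sketch: it scores lower on the toy detector than the initial population, which is the signal one would then use to harden the detector.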

    Team automata for security analysis

    We show that team automata (TA) are well suited for security analysis by reformulating the Generalized Non-Deducibility on Compositions (GNDC) schema in terms of TA. We then use this to show that integrity is guaranteed for a case study in which TA model an instance of the Efficient Multi-chained Stream Signature (EMSS) protocol.

    'Possunt, quia posse videntur': They can because they think they can. Development and Validation of the Work Self-Efficacy Scale: Evidence from two Studies

    Self-efficacy (SE) has been recognised as a pervasive mechanism of human agency influencing motivation, performance and well-being. In the organisational literature, it has mainly been assessed in relation to job tasks, leaving the emotional and interpersonal domains largely unexplored, despite their relevance. We aim to fill this gap by presenting a multidimensional work self-efficacy (W-SE) scale that assesses employees' perceived capability to manage tasks (task SE), negative emotions in stressful situations (negative emotional SE), and their conduct in social interactions, both in defending their own point of view (assertive SE) and in understanding others' states and needs (empathic SE). Results from two independent studies (Study 1, N=2192 employees; Study 2, N=700 employees), adopting both variable- and person-centred approaches, support the validity of the scale. Findings of factor analyses suggest a bi-factor model positing a global W-SE factor and four specific W-SEs, which are invariant across gender and career stages. Multiple regressions show that global W-SE is associated with all considered criteria; task SE is associated positively with in-role behaviours and negatively with counterproductive behaviours; negative emotional SE is negatively associated with negative emotions and health-related symptoms; empathic SE is positively associated with extra-role behaviour; and, unexpectedly, assertive SE is positively associated with counterproductive work behaviour. However, results from a Latent Profile Analysis show that the relationship between the SEs and the criteria is complex, and that the W-SE dimensions combine into different patterns, identifying four SE configurations associated with different levels of adjustment.

    Education and conflict recovery: the case of Timor Leste

    The Timor Leste secession conflict lasted for 25 years. Its last wave of violence in 1999, following the withdrawal of Indonesian troops, generated massive displacement and destruction, with widespread consequences for the economic and social development of the country. This paper analyzes the impact of the conflict on the level of, and access to, education of boys and girls in Timor Leste. The authors examine the short-term impact of the 1999 violence on school attendance and grade deficit rates in 2001, and the longer-term impact of the conflict on primary school completion among cohorts of children observed in 2007. They compare the educational impact of the 1999 wave of violence with the impact of other periods of high-intensity violence during the 25 years of Indonesian occupation. The short-term effects of the conflict are mixed. In the longer term, the analysis finds a strong negative impact of the conflict on primary school completion among boys of school age exposed to peaks of violence during the 25-year-long conflict. The effect is stronger for boys attending the last three grades of primary school. This result shows a substantial loss of human capital among young males in Timor Leste since the early 1970s, resulting from household investment trade-offs between education and economic survival.
    Keywords: Adolescent Health, Youth and Governance, Education For All, Primary Education, Post Conflict Reconstruction

    Sorting suffixes of a text via its Lyndon Factorization

    The process of sorting the suffixes of a text plays a fundamental role in text algorithms: sorted suffixes are used, for instance, in the construction of the Burrows-Wheeler transform and the suffix array, which are widely used in several fields of computer science. For this reason, several recent studies have been devoted to finding new strategies for such sorting. In this paper we introduce a new methodology in which an important role is played by the Lyndon factorization: the local suffixes inside the factors detected by this factorization keep their mutual order when extended to suffixes of the whole word. This property suggests a versatile technique that can easily be adapted to different implementation scenarios.
    Comment: Submitted to the Prague Stringology Conference 2013 (PSC 2013)
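    The Lyndon factorization that this approach builds on can be computed in linear time with Duval's algorithm. The sketch below is the standard algorithm, not the paper's suffix-sorting method itself:

```python
def duval(s):
    """Lyndon factorization of s (Duval, 1983): a sequence of
    lexicographically non-increasing Lyndon words that concatenate to s."""
    n, i = len(s), 0
    factors = []
    while i < n:
        j, k = i + 1, i
        # Extend the current run of repeated Lyndon-word candidates.
        while j < n and s[k] <= s[j]:
            k = i if s[k] < s[j] else k + 1
            j += 1
        # Emit the completed factors of length j - k.
        while i <= k:
            factors.append(s[i:i + j - k])
            i += j - k
    return factors

print(duval("banana"))  # -> ['b', 'an', 'an', 'a']
```

    Each emitted factor is a Lyndon word, and the factors appear in lexicographically non-increasing order; it is the suffixes local to these factors whose mutual order the paper's technique exploits.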

    From phonetics to phonology: The emergence of first words in Italian

    This study assesses the extent of phonetic continuity between babble and words in four Italian children followed longitudinally from 0;9 or 0;10 to 2;0 - two with relatively rapid and two with slower lexical growth. Prelinguistic phonetic characteristics, including both (a) consistent use of specific consonants and (b) age of onset and extent of consonant variegation in babble, are found to predict the rate of lexical advance and to relate to the form of the early words. In addition, each child's lexical profile is analyzed to test the hypothesis of non-linearity in phonological development. All of the children show the expected pattern of phonological advance: relatively accurate first word production is followed by lexical expansion, characterized by a decrease in accuracy and an increase in similarity between word forms. We interpret such a profile as reflecting the emergence of word templates, a first step in phonological organization.

    Lightweight LCP Construction for Very Large Collections of Strings

    The longest common prefix (LCP) array is a very useful data structure that, combined with the suffix array and the Burrows-Wheeler transform, allows the efficient computation of some combinatorial properties of a string, useful in several applications, especially in biological contexts. Nowadays, the input data for many problems are big collections of strings, for instance the data coming from "next-generation" DNA sequencing (NGS) technologies. In this paper we present the first lightweight algorithm (called extLCP) for the simultaneous computation of the longest common prefix array and the Burrows-Wheeler transform of a very large collection of strings of any length. The computation accesses disk data only via sequential scans, and the total disk space usage never exceeds twice the output size, excluding the disk space required for the input. Moreover, extLCP can also compute the suffix array of the strings of the collection without any further data structure. Finally, we test our algorithm on real data and compare our results with another tool capable of working in external memory on large collections of strings.
    Comment: This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/ The final version of this manuscript is in press in Journal of Discrete Algorithm
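    extLCP itself works in external memory and is not reproduced here. As a small in-memory illustration of the data structures involved, the following sketch builds a suffix array naively and derives the LCP array with Kasai et al.'s linear-time algorithm:

```python
def suffix_array(s):
    # Naive construction by sorting suffixes directly; for illustration only.
    return sorted(range(len(s)), key=lambda i: s[i:])

def lcp_array(s, sa):
    """Kasai et al.'s O(n) algorithm: lcp[r] is the length of the longest
    common prefix of the suffixes of rank r and r-1 (lcp[0] = 0)."""
    n = len(s)
    rank = [0] * n
    for r, i in enumerate(sa):
        rank[i] = r
    lcp = [0] * n
    h = 0
    for i in range(n):
        if rank[i] > 0:
            j = sa[rank[i] - 1]  # suffix just before suffix i in sorted order
            while i + h < n and j + h < n and s[i + h] == s[j + h]:
                h += 1
            lcp[rank[i]] = h
            if h > 0:
                h -= 1           # the next LCP can shrink by at most one
        else:
            h = 0
    return lcp

sa = suffix_array("banana")   # [5, 3, 1, 0, 4, 2]
lcp = lcp_array("banana", sa) # [0, 1, 3, 0, 0, 2]
```

    extLCP computes the same kind of output for huge string collections while touching the disk only through sequential scans, which is what "lightweight" refers to in the abstract.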

    In search of a meaningful set of macroregions for the "New Europe"

    This paper aims to show that the new enlargement of the EU's borders calls for a revision of the principal spatial paradigm used until now to analyze economic development processes: the centre/periphery paradigm. More specifically, we maintain that a new macrostructure must be identified for the EU economic space, onto which spatial development strategies can be projected. The regional delimitations adopted for the INTERREG programs do not seem appropriate for this purpose, because they were identified to promote transnational cooperation rather than to control the spatial components of European integration. Our analysis starts from the delimitation introduced in the Second Report on Economic and Social Cohesion of the European Commission, which distinguishes among central, peripheral and intermediate regions. We propose an alternative delimitation into five macroregions that disaggregates both the intermediate and the peripheral regions into two subregions each by introducing a North/South dimension. In so doing we adhere to the idea of K. Peschel (1981) that the distance variable reflects the influence of the past on the contemporary spatial pattern of production and trade more than it reflects transportation and communication costs; in other words, distance can be interpreted as a proxy for historical, cultural and linguistic affinity. We proceed by comparing the two delimitations in terms of a key variable that conditions regional inequalities: productivity per employed worker. We use the Cambridge Econometrics database for 127 regions of the 15 EU member states and 15 sectors in 1995 and 1999. By developing a variant of shift-and-share analysis, we are able to disaggregate the structural and differential components of productivity differences between and within macroregions. The results are quite different for the two delimitations considered. For the first (Second Report on Cohesion), the greatest part of the variability of regional productivity (75%) is absorbed by the differential components within macroregions. For the alternative delimitation into five macroregions that we propose, both the within- and the between-macroregion differential components turn out to be important, absorbing respectively 40% and 42% of the same variability. We conclude that our delimitation represents a more meaningful basis for analyzing European regional inequality problems. In the final part of the paper we try to identify some factors that could help explain the differential components of productivity in European regions and macroregions, such as human capital, infrastructure and urban development.
    References
    Peschel K. (1981), "On the impact of geographical distance on the interregional patterns of production and trade", Environment and Planning A, 13, 198
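    The paper's shift-and-share variant is not spelled out in the abstract; as a minimal sketch of the standard decomposition it builds on, a region's productivity gap against the EU aggregate can be split into a structural (sector-mix) component and a differential (within-sector) component. All figures below are invented:

```python
# Sectoral employment shares and productivity per worker for one region
# versus the EU aggregate. Numbers are purely illustrative.
sectors  = ["agriculture", "industry", "services"]
share_r  = [0.30, 0.30, 0.40]   # region's sectoral employment shares
share_eu = [0.10, 0.30, 0.60]   # EU sectoral employment shares
prod_r   = [20.0, 50.0, 60.0]   # region's productivity per worker, by sector
prod_eu  = [25.0, 55.0, 70.0]   # EU productivity per worker, by sector

total_r  = sum(sh * p for sh, p in zip(share_r, prod_r))
total_eu = sum(sh * p for sh, p in zip(share_eu, prod_eu))
gap = total_r - total_eu        # total productivity gap

# Structural component: the part of the gap due to the region's sector mix,
# evaluated at EU sectoral productivities.
structural = sum((sr - se) * pe for sr, se, pe in zip(share_r, share_eu, prod_eu))

# Differential component: the part due to within-sector productivity
# differences, weighted by the region's own shares.
differential = sum(sr * (pr - pe) for sr, pr, pe in zip(share_r, prod_r, prod_eu))
```

    The identity gap = structural + differential holds exactly, which is what makes the decomposition useful for attributing regional inequality to sector mix versus within-sector performance, between and within macroregions.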