
    Mapping bilateral information interests using the activity of Wikipedia editors

    We live in a global village where electronic communication has eliminated the geographical barriers of information exchange. The road is now open to worldwide convergence of information interests, shared values, and understanding. Nevertheless, interests still vary between countries around the world. This raises important questions about what today's world map of information interests actually looks like and what factors create the barriers to information exchange between countries. To quantitatively construct a world map of information interests, we devise a scalable statistical model that identifies countries with similar information interests and measures the countries' bilateral similarities. From the similarities we connect countries in a global network and find that countries can be mapped into 18 clusters with similar information interests. Through regression we find that language and religion best explain the strength of the bilateral ties and the formation of clusters. Our findings provide a quantitative basis for further studies to better understand the complex interplay between shared interests and conflict on a global scale. The methodology can also be extended to track changes over time and capture important trends in global information exchange. Comment: 11 pages, 3 figures; in Palgrave Communications 1 (2015)
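The clustering step described above can be illustrated with a toy sketch: compute pairwise similarity between countries' interest profiles, link sufficiently similar pairs, and read clusters off as connected components. The edit data, the cosine measure, and the threshold below are hypothetical stand-ins; the paper's actual method is a scalable statistical model, not this simplification.

```python
from itertools import combinations

# Hypothetical per-country editing activity over article topics (toy data).
edits = {
    "SE": {"football": 10, "vikings": 30, "eurovision": 20},
    "NO": {"football": 12, "vikings": 28, "eurovision": 15},
    "JP": {"anime": 40, "baseball": 25, "football": 5},
}

def cosine(a, b):
    # Cosine similarity between two sparse interest profiles.
    topics = set(a) | set(b)
    dot = sum(a.get(t, 0) * b.get(t, 0) for t in topics)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb)

# Connect countries whose interest profiles are sufficiently similar.
THRESHOLD = 0.8
adj = {c: set() for c in edits}
for u, v in combinations(edits, 2):
    if cosine(edits[u], edits[v]) >= THRESHOLD:
        adj[u].add(v)
        adj[v].add(u)

def components(adj):
    # Clusters = connected components of the similarity network.
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

clusters = components(adj)
```

With this toy data, Sweden and Norway share an interest profile and form one cluster, while Japan stands alone.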

    The role of bot squads in the political propaganda on Twitter

    Social media are nowadays the privileged channel for spreading and checking news. Unexpectedly for most users, automated accounts, also known as social bots, contribute more and more to this process of news spreading. Using Twitter as a benchmark, we consider the traffic exchanged, over one month of observation, on a specific topic, namely the migration flow from Northern Africa to Italy. We isolate the significant tweet traffic by implementing an entropy-based null model that discounts the activity of users and the virality of tweets. Results show that social bots play a central role in the exchange of significant content. Not only do the strongest hubs have more bots among their followers than expected, but a group of hubs that can be assigned to the same political tendency share a common set of bots as followers. The retweeting activity of these automated accounts amplifies the presence of the hubs' messages on the platform. Comment: Under Submission
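The null-model idea above, checking whether hubs attract more bot followers than chance would explain, can be sketched with a simple permutation test. This is not the paper's entropy-based null model, which discounts user activity and tweet virality; it is a hypothetical toy baseline in which bot labels are reassigned uniformly at random across all accounts.

```python
import random

random.seed(0)

# Toy data: one hub's followers and a hypothetical set of known bots.
followers_of_hub = ["u%d" % i for i in range(100)]
bots = set("u%d" % i for i in range(30))
observed = sum(1 for f in followers_of_hub if f in bots)

# Null model: reassign the same number of bot labels at random over the
# whole account population, and ask how often the hub's bot-follower
# count is at least as large as the observed one.
population = ["u%d" % i for i in range(1000)]
n_bots_total = 50
trials, extreme = 2000, 0
for _ in range(trials):
    null_bots = set(random.sample(population, n_bots_total))
    count = sum(1 for f in followers_of_hub if f in null_bots)
    if count >= observed:
        extreme += 1
p_value = extreme / trials
```

Under this baseline the hub would be expected to have about five bot followers; an observed count of thirty is therefore highly significant, mirroring the kind of conclusion the paper draws with its more principled null model.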

    Proceedings of SAT Competition 2021 : Solver and Benchmark Descriptions

    Non peer reviewed

    Exploiting structure to cope with NP-hard graph problems: Polynomial and exponential time exact algorithms

    An ideal algorithm for solving a particular problem always finds an optimal solution, finds such a solution for every possible instance, and finds it in polynomial time. When dealing with NP-hard problems, algorithms can only be expected to possess at most two of these three desirable properties. All algorithms presented in this thesis are exact algorithms, which means that they always find an optimal solution. Demanding the solution to be optimal means that other concessions have to be made when designing an exact algorithm for an NP-hard problem: we either have to impose restrictions on the instances of the problem in order to achieve a polynomial time complexity, or we have to abandon the requirement that the worst-case running time be polynomial. In some cases, when the problem under consideration remains NP-hard on restricted input, we are even forced to do both. Most of the problems studied in this thesis deal with partitioning the vertex set of a given graph. In the remaining problems, the task is to find certain types of paths and cycles in graphs. The problems all have in common that they are NP-hard on general graphs. We present several polynomial time algorithms for solving restrictions of these problems to specific graph classes, in particular graphs without long induced paths, chordal graphs, and claw-free graphs. For problems that remain NP-hard even on restricted input, we present exact exponential time algorithms. In the design of each of our algorithms, structural graph properties have been heavily exploited. Apart from using existing structural results, we prove new structural properties of certain types of graphs in order to obtain our algorithmic results.
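The trade-off described above, exactness at the price of exponential worst-case time, can be illustrated with the simplest possible exact algorithm: brute-force maximum independent set, which enumerates vertex subsets in O(2^n · m) time. The thesis's algorithms exploit graph structure (forbidden induced paths, chordality, claw-freeness) to do much better; this toy sketch is only the baseline such algorithms improve on.

```python
from itertools import combinations

def max_independent_set(vertices, edges):
    # Exact but exponential: try subsets from largest to smallest and
    # return the first one that contains no edge of the graph.
    for r in range(len(vertices), 0, -1):
        for subset in combinations(vertices, r):
            s = set(subset)
            if all(not (u in s and v in s) for u, v in edges):
                return s
    return set()

# Example: the 5-cycle, whose maximum independent set has size 2.
V = [0, 1, 2, 3, 4]
E = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
mis = max_independent_set(V, E)
```

The algorithm is exact on every instance; the concession, exactly as the abstract explains, is that its running time grows exponentially with the number of vertices.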

    Handling fairness issues in time-relaxed tournaments with availability constraints

    Sports timetables determine who will play against whom, where, and on which time slot. In contrast to time-constrained sports timetables, time-relaxed timetables utilize (many) more time slots than there are games per team. This offers time-relaxed timetables additional flexibility to take into account venue availability constraints, stating that a team can only play at home when its venue is available, and player availability constraints, stating that a team can only play when its players are available. Despite their flexibility, time-relaxed timetables have the drawback that the rest period between teams' consecutive games can vary considerably, and the difference in the number of games played at any point in the season can become large. In addition, it can be important to alternate home and away games. In this paper, we first establish the computational complexity of time-relaxed timetabling with availability constraints. Naturally, when one also incorporates fairness objectives on top of availability, the problem becomes even more challenging. We present two heuristics that can handle these fairness objectives. First, we propose an adaptive large neighborhood method that repeatedly destroys and repairs a timetable. Second, we propose a memetic algorithm that uses local search to schedule or reschedule all home games of a team. For numerous artificial and real-life instances, these heuristics generate high-quality timetables using considerably fewer computational resources than integer programming models solved with a state-of-the-art solver.
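The destroy-and-repair loop of the first heuristic can be sketched generically. The toy objective below (evening out the rest periods between one team's consecutive games) and the two operators are hypothetical simplifications for illustration, not the paper's actual adaptive large neighborhood operators for time-relaxed timetabling.

```python
import random

random.seed(1)

n_games, n_slots = 8, 20  # one team's games spread over a relaxed season

def cost(slots):
    # Fairness proxy: spread between the longest and shortest rest period.
    s = sorted(slots)
    rests = [b - a for a, b in zip(s, s[1:])]
    return max(rests) - min(rests)

def destroy(slots, k=3):
    # Remove k randomly chosen games from the timetable.
    keep = slots[:]
    for _ in range(k):
        keep.pop(random.randrange(len(keep)))
    return keep

def repair(partial):
    # Greedily reinsert games into the free slots that minimise cost.
    free = [t for t in range(n_slots) if t not in partial]
    while len(partial) < n_games:
        choice = min(free, key=lambda t: cost(partial + [t]))
        partial.append(choice)
        free.remove(choice)
    return partial

current = random.sample(range(n_slots), n_games)
best = current[:]
for _ in range(200):
    candidate = repair(destroy(current))
    if cost(candidate) <= cost(current):
        current = candidate
    if cost(current) < cost(best):
        best = current[:]
```

A full ALNS additionally maintains several destroy/repair operators and adapts their selection probabilities based on past success; this skeleton shows only the core destroy-accept loop.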

    A Measure of Segregation Based on Social Interactions

    We develop an index of segregation based on two premises: (1) a measure of segregation should disaggregate to the level of individuals, and (2) an individual is more segregated the more segregated are the agents with whom she interacts. We present an index that satisfies (1) and (2) and that is based on agents' social interactions: the extent to which blacks interact with blacks, whites with whites, etc. We use the index to measure school and residential segregation. Using detailed data on friendship networks, we calculate levels of within-school racial segregation in a sample of U.S. schools. We also calculate residential segregation across major U.S. cities, using block-level data from the 2000 U.S. Census.
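Premises (1) and (2) can be illustrated with a two-stage toy computation: each individual's base segregation is her own-group share of interactions (premise 1), and a refined score then weights each own-group tie by how segregated that contact is (premise 2). The network and the weighting below are hypothetical; the authors' actual index resolves this recursion fully rather than stopping after one refinement step.

```python
# Toy friendship network and a two-group labelling (both hypothetical).
contacts = {
    "a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "x"],
    "x": ["c", "y"], "y": ["x"],
}
group = {"a": 0, "b": 0, "c": 0, "x": 1, "y": 1}

def own_group_share(i):
    # Premise (1): segregation is defined per individual,
    # as the share of her interactions with own-group members.
    nbrs = contacts[i]
    return sum(group[j] == group[i] for j in nbrs) / len(nbrs)

base = {i: own_group_share(i) for i in contacts}

# Premise (2): an individual is more segregated when the own-group
# contacts she interacts with are themselves more segregated.
refined = {
    i: sum(base[j] for j in contacts[i] if group[j] == group[i]) / len(contacts[i])
    for i in contacts
}
```

In this toy network, individual "a" interacts only with own-group members who are themselves highly segregated, so her refined score exceeds that of "x", whose own-group tie is to a less segregated contact.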