
    Mining social network data for personalisation and privacy concerns: A case study of Facebook’s Beacon

    The popular success of online social networking sites (SNS) such as Facebook makes them a hugely tempting data-mining resource for businesses engaged in personalised marketing. The use of personal information, willingly shared between online friends' networks, intuitively appears to be a natural extension of current advertising strategies such as word-of-mouth and viral marketing. However, the use of SNS data for personalised marketing has provoked outrage amongst SNS users and sharply highlighted the issue of privacy concern. This paper inverts the traditional approach to personalisation by conceptualising the limits of data mining in social networks, using privacy concern as the guide. A qualitative dataset of 95 blogs containing 568 comments was collected during the failed launch of Beacon, a third-party marketing initiative by Facebook. Thematic analysis resulted in a taxonomy of privacy concerns which offers online businesses a concrete means to better understand the SNS business landscape, especially with regard to the limits of the use and acceptance of personalised marketing in social networks.

    Why forums? An empirical analysis into the facilitating factors of carding forums

    Over the last decade, the nature of cybercrime has transformed from naive vandalism to profit-driven enterprise, leading to the emergence of a global underground economy. A noticeable trend in this economy is the repeated use of forums to operate online stolen data markets. Using interaction data from three prominent carding forums, Shadowcrew, Cardersmarket and Darkmarket, this study sets out to understand why forums are repeatedly chosen to operate online stolen data markets despite numerous successful infiltrations by law enforcement in the past. Drawing on theories from criminology, social psychology, economics and network science, this study identifies four fundamental socio-economic mechanisms offered by carding forums: (1) formal control and coordination; (2) social networking; (3) identity uncertainty mitigation; and (4) quality uncertainty mitigation. Together, they give rise to a sophisticated underground market regulatory system that facilitates underground trading over the Internet and thus drives the expansion of the underground economy.
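The study applies network science to interaction data from the three forums. As a minimal sketch of that kind of analysis (the reply records below are invented for illustration, not drawn from the actual forum dumps), one can build an interaction graph from who-replied-to-whom pairs and rank members by degree centrality:

```python
from collections import defaultdict

# Hypothetical reply records: (poster, replied_to) pairs from a thread dump.
replies = [
    ("vendor_a", "buyer_1"),
    ("buyer_1", "vendor_a"),
    ("buyer_2", "vendor_a"),
    ("mod_x", "vendor_a"),
    ("buyer_2", "mod_x"),
]

def degree_centrality(edges):
    """Count distinct interaction partners per member (undirected degree)."""
    neighbours = defaultdict(set)
    for src, dst in edges:
        neighbours[src].add(dst)
        neighbours[dst].add(src)
    return {member: len(n) for member, n in neighbours.items()}

centrality = degree_centrality(replies)
# vendor_a interacts with buyer_1, buyer_2 and mod_x, so its degree is 3.
print(sorted(centrality.items(), key=lambda kv: -kv[1]))
```

High-degree members in such a graph are candidates for the coordination and social-networking roles the study describes (moderators, established vendors).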

    Towards a non-hierarchical campaign? Testing for interactivity as a tool of election campaigning in France, the US, Germany and the UK.

    Interest in the Internet and its role within political communication and election campaigning now has an established body of theoretical and empirical history, with mixed predictions and findings. The bulk of the empirical research has been conducted in single countries, and where there has been comparative research it has tended to use a range of methodologies conducted by different authors. Largely, empirical studies have agreed with the 'politics as usual' thesis: that political communication online is of a similar, if not identical, style to offline communication: top-down, information-heavy, and designed to persuade rather than consult with voters. The mass take-up of web 2.0 tools and platforms challenges this approach, however. Internet users now have opportunities to interact with a range of individuals and organisations, and it is argued that such tools reduce societal hierarchies and allow symmetrical relationships to build. Theoretically, democratic politics is a fertile environment for exploring the opportunities opened up by web 2.0, in particular the notion of interactivity between the campaign (candidate, party and staff) and its audiences (activists, members, supporters and potential voters). Conceptually, web 2.0 encourages co-production of content. This research focuses on the extent to which interactivity is encouraged through the use of web 2.0 tools and platforms across a four-year period covering four discrete national elections, determining take-up and the link to national context, as well as assessing lesson-learning between nations. Using the Gibson and Ward coding scheme, adapted to include web 2.0, we operationalise the models of interactivity proposed by McMillan (2002) and Ferber, Foltz and Pugliese (2007).
    This methodology allows us to assess whether election campaigns show evidence of adopting co-created campaigns based around conversations with visitors to their websites or online presences, or whether websites remain packaged to persuade, offering interactivity with site features (hyperlinks, web feeds, search engines) only. Indications are that the French election was largely politics as usual; the Obama campaign, however, took a clear step towards a more co-produced and interactive model. There may well be a clear Obama effect within the German and UK contests, or parties may adopt the look, if not the practice, of the US election. This paper will assess the extent to which an interactive model of campaigning is emerging, as well as detailing a methodology which can capture and rate the levels and types of interactivity used across the Internet. Whilst specific political, cultural and systemic factors will shape the use of web technologies in each election, we suggest that an era of web 2.0 is gradually replacing that of web 1.0. Within this era there is some evidence that campaigners learn from previous elections how best to utilise the technology.
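The abstract distinguishes sites that remain "packaged to persuade" (one-way site features) from those adopting co-production (two-way features). A toy sketch of such a coding scheme, where the feature lists and threshold are illustrative assumptions rather than the actual Gibson and Ward instrument:

```python
# Illustrative coding scheme (assumed, not the real instrument): classify a
# campaign site by counting one-way "packaged" features versus two-way
# co-production features observed on it.
ONE_WAY = {"hyperlinks", "web_feeds", "search"}        # interactivity with site features
TWO_WAY = {"comments", "user_posts", "wall", "forum"}  # visitor co-production

def interactivity_profile(features):
    """Score an observed feature set and label the dominant campaign style."""
    one = len(features & ONE_WAY)
    two = len(features & TWO_WAY)
    return {"one_way": one, "two_way": two,
            "style": "co-produced" if two > one else "packaged"}

# Example: a site with links, feeds, search plus a comment section.
site = {"hyperlinks", "web_feeds", "search", "comments", "user_posts"}
print(interactivity_profile(site))
```

Applied across elections, such scores would let one compare, say, a "packaged" French campaign site against a more "co-produced" Obama-era one, as the abstract suggests.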

    Reliable online social network data collection

    Large quantities of information are shared through online social networks, making them attractive sources of data for social network research. When studying the usage of online social networks, however, these data may not properly describe users' behaviours. For instance, the data collected often include only content shared by the users, or only content accessible to the researchers, obscuring a large amount of data that would help in understanding users' behaviours and privacy concerns. Moreover, the data collection methods employed in experiments may also affect data reliability, as when participants self-report inaccurate information or are observed while using a simulated application. Understanding the effects of these collection methods on data reliability is paramount for the study of social networks; for understanding user behaviour; for designing socially-aware applications and services; and for mining data collected from such social networks and applications. This chapter reviews previous research on social network data collection and user behaviour in these networks. We highlight shortcomings in the methods used in these studies, and introduce our own methodology and user study based on the Experience Sampling Method; we claim our methodology leads to the collection of more reliable data by capturing both the data that are shared and those that are not. We conclude with suggestions for collecting and mining data from online social networks.
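The Experience Sampling Method prompts participants at semi-random moments during the day rather than relying on retrospective self-reports. A minimal scheduler sketch, where the waking-hours window, prompt count and minimum gap are assumed parameters rather than the chapter's actual protocol:

```python
import random

def daily_prompts(n=5, start_min=9 * 60, end_min=21 * 60, min_gap=45, rng=None):
    """Draw n prompt times (minutes since midnight) in [start_min, end_min),
    resampling until consecutive prompts are at least min_gap apart so they
    do not cluster."""
    rng = rng or random.Random()
    while True:
        times = sorted(rng.randrange(start_min, end_min) for _ in range(n))
        if all(b - a >= min_gap for a, b in zip(times, times[1:])):
            return times

schedule = daily_prompts(rng=random.Random(42))
print([f"{t // 60:02d}:{t % 60:02d}" for t in schedule])
```

Each prompt would then ask the participant to report their current activity in the moment, which is what makes ESM data less prone to recall error than end-of-day surveys.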

    On the anonymity risk of time-varying user profiles.

    Websites and applications use personalisation services to profile their users, collect their patterns and activities, and eventually use these data to provide tailored suggestions. User preferences and social interactions are therefore aggregated and analysed. Every time a user publishes a new post or creates a link with another entity, either another user or some online resource, new information is added to the user profile. Exposing private data not only reveals information about single users' preferences, increasing their privacy risk, but can expose more about their network than the individual actors intended. This mechanism is self-evident in social networks, where users receive suggestions based on their friends' activities. We propose an information-theoretic approach to measure the differential update of the anonymity risk of time-varying user profiles. This expresses how privacy is affected when new content is posted, and how much third-party services learn about users when a new activity is shared. We use actual Facebook data to show how our model can be applied to a real-world scenario.
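The abstract does not reproduce the paper's formulas, but the general idea of an information-theoretic profile-risk measure can be illustrated with an assumed formulation: represent a profile as an empirical distribution over topic categories, score it by its KL divergence from a population baseline (profiles far from the average user are more identifying), and report the differential update when a new post arrives.

```python
import math
from collections import Counter

# Assumed population baseline over topic categories (illustrative numbers);
# every post topic below must be one of these keys.
POPULATION = {"sports": 0.4, "music": 0.3, "politics": 0.2, "travel": 0.1}

def kl_from_population(posts):
    """KL divergence (bits) of the empirical topic distribution of a user's
    posts from the population baseline: higher means more distinctive."""
    counts = Counter(posts)
    total = sum(counts.values())
    return sum((c / total) * math.log2((c / total) / POPULATION[topic])
               for topic, c in counts.items())

posts = ["sports", "music", "sports"]
before = kl_from_population(posts)
after = kl_from_population(posts + ["politics"])
print(f"risk before: {before:.3f} bits, after: {after:.3f} bits, "
      f"differential update: {after - before:+.3f}")
```

Note the update can be negative: posting about a popular topic moves the profile toward the crowd and makes it less identifying, which is exactly the kind of per-post effect a differential measure captures.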

    Supporting Online Social Networks
