    Spam on the Internet: can it be eradicated or is it here to stay?

    A discussion of the rise in unsolicited bulk e-mail, its effect on tertiary education, and some of the methods being used or developed to combat it. Includes an examination of block listing, protocol change, economic and computational solutions, e-mail aliasing, sender-warranted e-mail, collaborative filtering, rule-based and statistical solutions, and legislation.
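The rule-based and statistical solutions surveyed above typically mean Bayesian content filters. As a minimal sketch (the toy training corpus and word-level Laplace smoothing are assumptions for illustration, not details from the article), a naive Bayes classifier scores a message against spam and ham word frequencies:

```python
import math
from collections import Counter

# Toy labelled corpora; real filters train on thousands of messages.
spam = ["buy cheap meds now", "win money now", "cheap loans win big"]
ham = ["meeting agenda attached", "lunch tomorrow", "project status update"]

def word_counts(msgs):
    c = Counter()
    for m in msgs:
        c.update(m.split())
    return c

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(msg, counts, prior):
    total = sum(counts.values())
    score = math.log(prior)
    for w in msg.split():
        # Laplace smoothing so unseen words don't zero out the score.
        score += math.log((counts[w] + 1) / (total + len(vocab)))
    return score

def classify(msg):
    p_spam = len(spam) / (len(spam) + len(ham))
    s = log_likelihood(msg, spam_counts, p_spam)
    h = log_likelihood(msg, ham_counts, 1 - p_spam)
    return "spam" if s > h else "ham"

print(classify("win cheap money"))          # spam
print(classify("project meeting tomorrow")) # ham
```

Production filters of this family combine such a statistical score with the rule-based tests and collaborative signals the article also discusses.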

    Spam

    With the advent of the electronic mail system in the 1970s, a new opportunity for direct marketing using unsolicited electronic mail became apparent. In 1978, Gary Thuerk compiled a list of those on the Arpanet and then sent out a huge mailing publicising Digital Equipment Corporation (DEC, now Compaq) systems. The reaction from the Defense Communications Agency (DCA), which ran Arpanet, was very negative, and it was this negative reaction that ensured it was a long time before unsolicited e-mail was used again (Templeton, 2003). As long as the U.S. government controlled a major part of the backbone, most forms of commercial activity were forbidden (Hayes, 2003). However, in 1993 the Internet Network Information Center was privatised, and with no central government controls, spam, as it is now called, came into wider use.

    The term spam was taken from a sketch by the UK comedy group Monty Python, in their series Monty Python's Flying Circus, which featured an ironic song sung in praise of spam (the luncheon meat): "spam, spam, spam, lovely spam". The word came to mean mail that was unsolicited; conversely, the term ham came to mean e-mail that was wanted. Brad Templeton, a UseNet pioneer and chair of the Electronic Frontier Foundation, has traced the first usage of the term spam back to MUDs (Multi User Dungeons, real-time multi-person shared environments) and the MUD community, which introduced the term to the early chat rooms (Internet Relay Chats). The first major spam on UseNet (the world's largest online conferencing system) was sent in January 1994 and was a religious posting: "Global alert for all: Jesus is coming soon." The term was more broadly popularised in April 1994, when two lawyers, Canter and Siegel from Arizona, posted a message advertising their information and legal services for immigrants applying for the U.S. Green Card scheme. The message was posted to every newsgroup on UseNet, and after this incident, the term spam became synonymous with junk or unsolicited e-mail. Spam spread quickly among the UseNet groups, which were easy targets for spammers simply because members' e-mail addresses were widely available (Templeton, 2003).

    Making the Most of Tweet-Inherent Features for Social Spam Detection on Twitter

    Social spam produces a great amount of noise on social media services such as Twitter, which reduces the signal-to-noise ratio that both end users and data mining applications observe. Existing techniques on social spam detection have focused primarily on the identification of spam accounts by using extensive historical and network-based data. In this paper we focus on the detection of spam tweets, which minimises the amount of data that needs to be gathered by relying only on tweet-inherent features. This enables the application of the spam detection system to a large set of tweets in a timely fashion, potentially applicable in a real-time or near real-time setting. Using two large hand-labelled datasets of tweets containing spam, we study the suitability of five classification algorithms and four different feature sets for the social spam detection task. Our results show that, by using the limited set of features readily available in a tweet, we can achieve encouraging results which are competitive when compared against existing spammer detection systems that make use of additional, costly user features. Our study is the first that attempts to generalise conclusions on the optimal classifiers and sets of features for social spam detection over different datasets.
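The "tweet-inherent features" the abstract relies on can be computed from the tweet text alone, with no account history or social graph. The particular features below (URL, hashtag, mention and digit counts, capitalisation ratio) are illustrative assumptions rather than the paper's published feature set:

```python
import re

def tweet_features(text):
    """Extract lightweight features from the tweet text alone.

    These feature names are illustrative, not the paper's exact set;
    the point is that nothing here needs the account's history."""
    words = text.split()
    return {
        "n_urls": len(re.findall(r"https?://\S+", text)),
        "n_hashtags": text.count("#"),
        "n_mentions": text.count("@"),
        "n_words": len(words),
        "n_digits": sum(ch.isdigit() for ch in text),
        "caps_ratio": sum(w.isupper() for w in words) / max(len(words), 1),
    }

f = tweet_features("WIN FREE iPhone!!! Click http://spam.example #free #win @you")
print(f["n_urls"], f["n_hashtags"], f["n_mentions"])  # 1 2 1
```

A feature dictionary of this shape can then be fed to any of the classification algorithms the paper compares; the cost saving comes from never having to fetch additional user data per tweet.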

    The paradigm-shift of social spambots: Evidence, theories, and tools for the arms race

    Recent studies in social media spam and automation argue anecdotally for the rise of a new generation of spambots, so-called social spambots. Here, for the first time, we extensively study this novel phenomenon on Twitter and provide quantitative evidence that a paradigm shift exists in spambot design. First, we measure Twitter's current capabilities of detecting the new social spambots. Later, we assess the human performance in discriminating between genuine accounts, social spambots, and traditional spambots. Then, we benchmark several state-of-the-art techniques proposed by the academic literature. Results show that neither Twitter, nor humans, nor cutting-edge applications are currently capable of accurately detecting the new social spambots. Our results call for new approaches capable of turning the tide in the fight against this rising phenomenon. We conclude by reviewing the latest literature on spambot detection and highlight an emerging common research trend based on the analysis of collective behaviors. Insights derived from both our extensive experimental campaign and survey shed light on the most promising directions of research and lay the foundations for the arms race against the novel social spambots. Finally, to foster research on this novel phenomenon, we make all the datasets used in this study publicly available to the scientific community.
    Comment: To appear in Proc. 26th WWW, 2017, Companion Volume (Web Science Track, Perth, Australia, 3-7 April 2017)

    Escalating The War On SPAM Through Practical POW Exchange

    Proof-of-work (POW) schemes have been proposed in the past. One prominent system is HASHCASH (Back, 2002), which uses cryptographic puzzles. However, work by Laurie and Clayton (2004) has shown that for a uniform proof-of-work scheme on email to have an impact on SPAM, it would also be onerous enough to impact senders of "legitimate" email. I suggest that a non-uniform proof-of-work scheme on email may be a solution to this problem, and describe a framework that has the potential to limit SPAM without unduly penalising legitimate senders, constructed using only current SPAM filter technology and a small change to SMTP (the Simple Mail Transfer Protocol). Specifically, I argue that it is possible to make sending SPAM 1,000 times more expensive than sending "legitimate" email (so-called HAM). Also, unlike the system proposed by Debin Liu and Jean Camp (2006), it does not require the complications of maintaining a reputation system.
    Comment: To be presented at the IEEE Conference On Networking, Adelaide, Australia, November 19-21, 200
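A hashcash-style stamp of the kind Back (2002) proposed can be sketched as follows. The `mint`/`verify` names, the SHA-1 choice, and the difficulty parameters are illustrative assumptions; real Hashcash stamps carry a dated, structured header:

```python
import hashlib
from itertools import count

def mint(resource, bits=16):
    """Find a counter so that sha1(resource:counter) starts with
    `bits` leading zero bits. Cost grows as 2**bits on average."""
    for c in count():
        stamp = f"{resource}:{c}"
        digest = hashlib.sha1(stamp.encode()).digest()
        if int.from_bytes(digest, "big") >> (160 - bits) == 0:
            return stamp

def verify(stamp, bits=16):
    """Checking a stamp costs a single hash, however hard it was to mint."""
    digest = hashlib.sha1(stamp.encode()).digest()
    return int.from_bytes(digest, "big") >> (160 - bits) == 0

stamp = mint("alice@example.com", bits=12)
print(verify(stamp, bits=12))  # True
```

The non-uniform scheme follows naturally: a SPAM filter scores the message first and sets `bits` accordingly, so suspicious senders face a harder puzzle. Requiring roughly 10 extra leading zero bits multiplies expected minting work by 2**10 = 1,024, on the order of the 1,000x cost ratio the abstract targets.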