
    Comparing Social Media Content Regulation in the US and the EU: How the US Can Move Forward With Section 230 to Bolster Social Media Users’ Freedom of Expression

    This Article compares 47 U.S.C. § 230 (“Section 230”), the United States law that shields social media companies from being treated in civil claims as the publishers of their users’ posts and that governs the companies’ ability to remove user posts, with the European Union’s (“EU”) equivalent governing law, the E-Commerce Directive. The E-Commerce Directive serves as an example of a governmental regulation that better prevents viewpoint discrimination, but at the cost of a lower standard of user expression. A lower standard of user expression means diminished rights in exercising free speech, as exemplified by the EU outlawing broader categories of speech than the US (Section III covers this point in detail). This Article then demonstrates how the US may decrease the discretionary power of platforms’ content removal abilities, thereby minimizing viewpoint discrimination against lawful user-posted content, while preserving private governance of social media business practices. Section II provides background on social media users, platform content regulation, and content removal practices. It continues with a discussion of the enormous amount of content social media platforms are responsible for monitoring and governing. Additionally, the relationships among social media companies, governments, and users are explained in connection with social media content moderation. Lastly, Section II summarizes the First Amendment’s boundaries on protection of speech and clarifies that it protects US citizens’ freedom of expression only against government actors, leaving private platforms’ content removal practices currently out of the First Amendment’s reach. Section III lays out the social media content regulation laws governing both the US and the EU.
Historically, the US’s Section 230 has been referred to as the “26 words that created the internet” due to its thorough protection of private online platforms from third-party (“intermediary”) liability arising from civil suits such as defamation, and its general allowance for platforms to leave up or take down content voluntarily. By contrast, the EU’s E-Commerce Directive offers platforms safe harbor from legal liability subject to two main requirements: the platform must (1) not have “actual knowledge of illegal activity,” and (2) “act expeditiously to remove” illegal activity once actual knowledge is obtained. Section III concludes by illustrating the EU’s approach to social media content regulation and reviewing its implications for viewpoint discrimination in social platform content moderation. Section IV discusses the deficiencies of Section 230 in its approach to platform content moderation. The analysis continues with the three main problems arising from Section 230’s current application, which allows social media companies: (1) overbroad discretionary authority, (2) the ability to operate with limited transparency, and (3) the ability to discriminate based on viewpoint. Additionally, the Article explores the implementation and significance of the ground-breaking independent Facebook Oversight Board in providing an appellate process for wrongful censorship of posts. Section V proposes two solutions to the three previously listed issues of Section 230(c) addressed in this Article. The first solution is statutory revision of Section 230(c)(2), with two proposed revisions: (1) revision of the statute to grant immunity to social media platforms only if the platforms remove content that is illegal or otherwise unprotected by the First Amendment, and (2) introduction of a “bad faith” clause that removes platform immunity if the plaintiff can prove their lawful post was removed as a result of viewpoint discrimination.
The second solution proposes federal statutes mandating that large social media platforms create their own independent oversight boards. Modeled primarily on Facebook’s Oversight Board, these Social Media Oversight Boards would provide independent review of platform censorship practices through a board review process.

    Machine learning approach for detection of nonTor traffic

    Intrusion detection has attracted considerable interest from researchers and industry. After many years of research, the community still faces the problem of building reliable and efficient intrusion detection systems (IDS) capable of handling large quantities of data with changing patterns in real-time situations. The Tor network is popular for providing privacy and security to end users by anonymizing the identity of internet users who connect through a series of tunnels and nodes. This work identifies two problems: (1) classification of Tor and nonTor traffic in the UNB-CIC Tor Network Traffic dataset, to expose activities within Tor traffic that undermine user protection, and (2) classification of the Tor traffic flow in the network. This paper proposes a hybrid classifier: an Artificial Neural Network combined with a Correlation-based Feature Selection algorithm for dimensionality reduction and improved classification performance. The reliability and efficiency of the proposed hybrid classifier are compared with Support Vector Machine and naïve Bayes classifiers in detecting nonTor traffic in the UNB-CIC Tor Network Traffic dataset. Experimental results show the hybrid classifier, ANN-CFS, proved to be a better classifier for detecting nonTor traffic and classifying the Tor traffic flow in the UNB-CIC Tor Network Traffic dataset.
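    The ANN-CFS pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it substitutes synthetic data for the UNB-CIC flow features, uses a simplified correlation filter (ranking features by absolute correlation with the label) in place of full CFS, and uses scikit-learn's `MLPClassifier` as the neural network; all parameter values are illustrative assumptions.

    ```python
    # Sketch of a correlation-filter + neural-network pipeline (ANN-CFS style).
    # Synthetic data stands in for UNB-CIC Tor/nonTor flow features.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    # Synthetic stand-in for traffic flow features (e.g. packet sizes, timings)
    X, y = make_classification(n_samples=1000, n_features=20,
                               n_informative=5, random_state=0)

    def select_by_correlation(X, y, k=8):
        """Simplified filter: keep the k features most correlated with the label."""
        corrs = np.abs([np.corrcoef(X[:, i], y)[0, 1] for i in range(X.shape[1])])
        return np.argsort(corrs)[::-1][:k]

    idx = select_by_correlation(X, y)  # dimensionality reduction step
    X_train, X_test, y_train, y_test = train_test_split(
        X[:, idx], y, random_state=0)

    # Small feed-forward ANN on the reduced feature set
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"features kept: {len(idx)}, test accuracy: {acc:.3f}")
    ```

    In the paper's setting, the same structure applies: CFS prunes redundant flow features first, and the ANN is then trained only on the retained subset.
    
    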