60 research outputs found

    Cyberspace: A Venue for Terrorism

    This paper discusses how cyberspace has become a venue for terrorist groups to recruit and to spread propaganda and terrorism. It also explores how low-cost Internet infrastructure and social media sites (such as Facebook, Twitter, and YouTube) have contributed to their networking and operations, owing to the convenience these platforms offer in terms of availability, accessibility, message redundancy, ease of use, and the inability to censor content. Concepts such as cyber-weapons, cyber-attacks, cyber-war, and cyber-terrorism are presented and explored to assess how terrorist groups are exploiting cyberspace.

    Detection of EEG Signal Post-Stroke Using FFT and Convolutional Neural Network

    Stroke is a condition that occurs when the blood supply to the brain is disrupted or reduced, whether by a blockage (ischemic stroke) or by the rupture of a blood vessel (hemorrhagic stroke), and it can cause lasting disability, so patients need to undergo rehabilitation. Recovery is commonly monitored with the National Institutes of Health Stroke Scale (NIHSS), but that assessment can be subjective. The electroencephalogram (EEG) is an instrument that measures electrical activity in the brain, including abnormalities caused by stroke. This study investigates EEG signal detection in post-stroke patients using the Fast Fourier Transform (FFT) and a one-dimensional Convolutional Neural Network (1D CNN). With the Adam optimizer, FFT feature extraction increased accuracy from 60% to 80.3%; with the AdaDelta optimizer, accuracy was 20% without FFT and rose to 79.9% with FFT extraction. Adam's stability thus benefits from its adaptive, per-parameter hyper-parameter updates, while FFT concentrates the spectral information fed to the 1D CNN, thereby increasing accuracy. The results showed that using the FFT in identification could increase accuracy by 45-80% compared with identification using the 1D CNN alone, and that the relative weight correction model using Adaptive Moment Estimation (Adam) provided higher accuracy than the adaptive learning rate method (AdaDelta).
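    The abstract's pipeline (FFT magnitude features extracted from an EEG epoch, then passed through a 1D convolution) can be sketched minimally in NumPy. This is an illustrative toy, not the paper's implementation: the function names (`fft_features`, `conv1d_relu`), the 128 Hz sampling rate, the synthetic 10 Hz alpha-band epoch, and the smoothing kernel are all assumptions for demonstration.

    ```python
    import numpy as np

    def fft_features(epoch, fs=128.0):
        """Magnitude spectrum of a single-channel EEG epoch (the FFT feature step)."""
        spectrum = np.abs(np.fft.rfft(epoch))
        freqs = np.fft.rfftfreq(epoch.size, d=1.0 / fs)
        return freqs, spectrum

    def conv1d_relu(x, kernel):
        """One 'valid' 1-D convolution layer followed by ReLU, as in a 1D CNN."""
        out = np.convolve(x, kernel[::-1], mode="valid")  # flip kernel -> cross-correlation
        return np.maximum(out, 0.0)

    # Synthetic 2-second epoch at 128 Hz: a 10 Hz alpha rhythm plus noise.
    fs = 128.0
    rng = np.random.default_rng(0)
    t = np.arange(256) / fs
    epoch = np.sin(2 * np.pi * 10.0 * t) + 0.1 * rng.standard_normal(t.size)

    freqs, spectrum = fft_features(epoch, fs)
    features = conv1d_relu(spectrum, np.array([0.25, 0.5, 0.25]))  # toy kernel
    peak_hz = freqs[np.argmax(spectrum)]  # dominant frequency recovered by the FFT
    ```

    The point of the sketch is the design choice the abstract reports: the FFT turns a raw time series into a compact spectral representation in which stroke-related abnormalities are easier for the convolutional layers to pick out.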

    An investigation into the efficacy of URL content filtering systems

    Content filters are used to restrict minors from accessing online content deemed inappropriate. While much research and evaluation has been done on the efficiency of content filters, there is little in the way of empirical research on their efficacy. That minors access inappropriate material, and that content filtering systems can prevent this, is largely assumed with little or no evidence. This thesis investigates whether a content filter implemented with the stated aim of restricting specific Internet content from high school students achieved the goal of stopping students from accessing the identified material. The case is a high school in Western Australia, where the logs of a proxy content filter covering all Internet traffic requested by students were examined to determine the efficacy of the content filter. Using text extraction and pattern matching techniques to look for evidence of access to restricted content, the results demonstrate that the belief that content filtering systems reliably prevent access to restricted content is misplaced: in this study there is direct evidence of circumvention of the content filter. This is a single case study in one school and, as such, the results are not generalisable to all schools or even to the subsequent systems that replaced the content filter examined here, but it does raise questions about the ability of these content filter systems to restrict content from high school students. Further studies across multiple schools, covering more complex circumvention methods, would be required to identify whether circumvention of content filters is a widespread issue.
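    The "text extraction and pattern matching" step the thesis describes can be sketched as a simple scan over proxy log lines. The log format, the keyword pattern, and the function name `flag_requests` are hypothetical; the thesis does not publish its actual patterns or log schema.

    ```python
    import re

    # Hypothetical indicator pattern: URLs suggesting filter circumvention.
    RESTRICTED = re.compile(r"proxy|vpn|unblock", re.IGNORECASE)

    def flag_requests(log_lines):
        """Return (timestamp, url) pairs whose requested URL matches a restricted pattern.

        Assumed line format: "<timestamp> <client-ip> <url> <status>".
        """
        hits = []
        for line in log_lines:
            parts = line.split()
            if len(parts) < 4:
                continue  # skip malformed lines rather than fail
            ts, _ip, url, _status = parts[:4]
            if RESTRICTED.search(url):
                hits.append((ts, url))
        return hits

    logs = [
        "2014-03-01T09:15:02 10.0.0.7 http://example.com/news 200",
        "2014-03-01T09:16:40 10.0.0.7 http://free-proxy.example/start 302",
    ]
    hits = flag_requests(logs)  # only the second line matches
    ```

    In the study's setting, a non-empty result set over logs that the filter supposedly sanitised is exactly the direct evidence of circumvention the abstract refers to.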

    How Google Perceives Customer Privacy, Cyber, E-Commerce, Political and Regulatory Compliance Risks

    By now, almost every business has an Internet presence. What are the major risks perceived by those engaged in the universe of Internet businesses? What potential risks, if they become reality, may cause substantial increases in operating costs or threaten the very survival of the enterprise? This Article discusses the relevant annual report disclosures from Alphabet, Inc. (parent of Google), along with other Google documents, as a potentially powerful teaching device. Most of the descriptive language to follow is excerpted directly from Alphabet's (Google's) regulatory filings. My additions weave these disclosure materials into a logical presentation and provide supplemental sources (usually in my footnotes) for those who desire a deeper look at any particular aspect. I have sought to present a roadmap showing Google's struggle to optimize its business performance while navigating a complicated maze of regulatory compliance concerns and issues involving governmental jurisdictions throughout the world. International cybercrime and risk issues follow, with an examination of anti-money laundering, counterterrorist, and other laws on potentially illegal activity. The value proposition offered here is disarmingly simple: at no out-of-pocket cost, the reader can invest probably just a few hours to read and reflect upon Alphabet, Inc.'s (Google's) multimillion-dollar research, investment, and documentation of perceived Internet, e-commerce, cyber, IT, and electronic payment system risks. Hopefully, this will prove of value to those interested in the rapidly changing dynamics of electronic payment systems, those engaged in Internet site operations, and those engaged in fighting cybercrime activities.

    Internet censorship in the European Union

    This is a thesis on Internet censorship in the European Union (EU), specifically regarding the technical implementation of blocking methodologies and filtering infrastructure in various EU countries. The analysis examines the use of this infrastructure for information controls and the blocking of access to websites and other network services available on the Internet. The thesis follows a three-part structure. Firstly, it examines cases of Internet censorship in various EU countries, specifically Greece, Cyprus, and Spain. Subsequently, it presents a new testing methodology for determining censorship of applications available in mobile stores. Additionally, it analyzes all 27 EU countries using historical network measurements collected by Open Observatory of Network Interference (OONI) volunteers from around the world, publicly available blocklists used by EU member states, and reports issued by network regulators in each country.
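    OONI-style network measurements of the kind this thesis analyzes typically compare what a resolver inside the tested network returns against a trusted control. The classifier below is a toy sketch of that comparison; the function name `dns_consistency`, the label strings, and the inputs are illustrative assumptions, and OONI's real heuristics are considerably more involved.

    ```python
    def dns_consistency(tested_answers, control_answers, known_blockpage_ips=frozenset()):
        """Classify one DNS measurement by comparing vantage-point answers
        against a trusted control resolver's answers."""
        tested, control = set(tested_answers), set(control_answers)
        if tested & known_blockpage_ips:
            return "blockpage"              # redirected to a known block-page IP
        if not tested:
            return "nxdomain_or_failure"    # resolution failed or domain denied
        if tested & control:
            return "consistent"             # at least one answer agrees with control
        return "inconsistent"               # possible DNS-based interference

    # Example: the tested resolver returns an address the control never saw.
    verdict = dns_consistency({"9.9.9.9"}, {"1.2.3.4", "1.2.3.5"})
    ```

    Note the deliberate asymmetry: agreement on any single address is treated as consistent, because CDNs legitimately return different subsets of addresses per query, while a fully disjoint answer set is only flagged as *possible* interference.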

    Monitoring Internet censorship: the case of UBICA

    As a consequence of the recent debate about restrictions on access to content on the Internet, a strong motivation has arisen for censorship monitoring: an independent, publicly available, and global watch on Internet censorship activities is a necessary goal to pursue in order to guard citizens' right of access to information. Several techniques for enforcing censorship on the Internet are known in the literature, differing in their transparency towards the user, their selectivity in blocking specific resources or whole groups of services, and their collateral effects outside the administrative borders of their intended application. Monitoring censorship is further complicated by the dynamic nature of multiple aspects of this phenomenon, the number and diversity of resources targeted by censorship, and its global scale. This thesis analyses the literature on Internet censorship and the available solutions for censorship detection, characterising both censorship enforcement techniques and censorship detection techniques and tools. The available platforms and tools for censorship detection fall short of providing a comprehensive monitoring platform able to manage a diverse set of measurement vantage points and a reporting interface continuously updated with the results of automated censorship analysis. The candidate proposes the design of such a platform, UBICA, along with a prototype implementation whose effectiveness has been experimentally validated in global monitoring campaigns. The results of the validation are discussed, confirming the effectiveness of the proposed design and suggesting future enhancements and research.

    Mercury and the Case for Plural Planetary Traditions in Early Imperial China

    A paper on the tension between the astral sciences tianwen 天文 'heavenly patterns' and li 曆 'sequencing' as concerns their respective treatments of planetary behaviour, a tension that foregrounds others between epistemologies, genres, and authorial cultures.

    Tra parole d’odio e odio per le parole. Metamorfosi della censura

    In the modern world, the greatest threat to freedom of expression came from state censorship. Although this freedom, thanks to Enlightenment and liberal thought, is by now firmly embedded in every constitution of the Western world and in every declaration of rights, its future appears far from rosy. On the one hand, state censorship has not entirely disappeared even in constitutional democracies; indeed, it has now been joined by the pervasive censorship of digital platforms, which often operate as proxies for public powers. On the other hand, various factors are today leading to the dismantling of the privilege traditionally accorded to freedom of expression and to the legitimisation of ever more pervasive restrictions upon it. Among these, an important role is played by the philosophical reflections that in recent decades have produced a genuine metamorphosis of the traditional concept of 'censorship', enormously widening it and considerably weakening its negative charge. One must also consider the increasingly pressing demands, backed even by criminal sanctions, for the protection of collective identities and the most vulnerable social groups from so-called hate speech. Among the various charges levelled at such speech is that it wounds the equality, and the very freedom of expression, of the groups it targets. But freedom of speech is too precious an instrument, both of individual expression and of control over power, to be limited except with great care and only in cases of proven injury to the rights of individuals. The old liberal arguments in its favour, though out of fashion today, remain intact.