
    Link-based similarity search to fight web spam

    www.ilab.sztaki.hu/websearch We investigate the usability of similarity search in fighting Web spam, based on the assumption that an unknown spam page is more similar to certain known spam pages than to honest pages. To be successful, search engine spam never appears in isolation: we observe link farms and alliances created for the sole purpose of manipulating search engine rankings. Their artificial nature and strong internal connectedness, however, have given rise to successful algorithms for identifying search engine spam. One example is trust and distrust propagation, an idea originating in recommender systems and P2P networks, which yields spam classifiers by spreading information along hyperlinks from whitelists and blacklists. While most previous results use PageRank variants for propagation, we form classifiers by investigating the similarity top lists of an unknown page under various measures such as co-citation, Companion, nearest neighbors in low-dimensional projections, and SimRank. We test our method on two data sets previously used to measure spam filtering algorithms.
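    As a concrete illustration of one of the similarity measures named above, the following is a minimal SimRank sketch on a toy hyperlink graph. The page identifiers and the link-farm structure are hypothetical, and the paper's actual experiments run over large Web crawls with several measures, not this simplified routine.

```python
# Minimal SimRank sketch on a toy hyperlink graph (hypothetical page ids).
# Pages linked to by similar pages receive similar scores, which is the
# intuition used above to relate an unknown page to known spam pages.

from collections import defaultdict
from itertools import product

def simrank(graph, c=0.8, iterations=10):
    """graph: dict mapping page -> list of pages it links to."""
    nodes = set(graph) | {v for targets in graph.values() for v in targets}
    in_links = defaultdict(list)
    for u, targets in graph.items():
        for v in targets:
            in_links[v].append(u)

    sim = {(a, b): 1.0 if a == b else 0.0 for a, b in product(nodes, nodes)}
    for _ in range(iterations):
        new_sim = {}
        for a, b in product(nodes, nodes):
            if a == b:
                new_sim[(a, b)] = 1.0
                continue
            ins_a, ins_b = in_links[a], in_links[b]
            if not ins_a or not ins_b:
                new_sim[(a, b)] = 0.0
                continue
            total = sum(sim[(i, j)] for i in ins_a for j in ins_b)
            new_sim[(a, b)] = c * total / (len(ins_a) * len(ins_b))
        sim = new_sim
    return sim

# Toy link farm: spam pages s1..s3 link to each other; honest pages h1, h2 do not.
toy_graph = {
    "s1": ["s2", "s3"], "s2": ["s1", "s3"], "s3": ["s1", "s2"],
    "h1": ["h2"], "h2": [],
}
scores = simrank(toy_graph)
print(scores[("s1", "s2")], scores[("s1", "h1")])  # spam pair scores high, spam/honest pair scores 0
```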

    Designing Proof of Human-work Puzzles for Cryptocurrency and Beyond

    We introduce the novel notion of a Proof of Human-work (PoH) and present the first distributed consensus protocol built from hard Artificial Intelligence problems. As the name suggests, a PoH is a proof that a human invested a moderate amount of effort to solve some challenge. A PoH puzzle should be moderately hard for a human to solve. However, a PoH puzzle must be hard for a computer to solve, including the computer that generated the puzzle, without sufficient assistance from a human. By contrast, CAPTCHAs are only difficult for other computers to solve, not for the computer that generated the puzzle. We also require that a PoH be publicly verifiable by a computer without any human assistance and without ever interacting with the agent who generated the proof of human-work. We show how to construct PoH puzzles from indistinguishability obfuscation and from CAPTCHAs. We motivate our ideas with two applications: HumanCoin and passwords. We use PoH puzzles to construct HumanCoin, the first cryptocurrency system with human miners. Second, we use proofs of human work to develop a password authentication scheme that provably protects users against offline attacks.
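    For illustration only, the sketch below mocks up the PoH interface shape (puzzle generation, human solving, public verification) with a plain hash check. It does not achieve the paper's security goals: the real construction needs indistinguishability obfuscation and CAPTCHAs so that even the generating computer cannot solve its own puzzle. The function names and the hash-based verification are assumptions made for this toy.

```python
# Toy mock-up of a Proof-of-Human-work interface (generate / solve / verify) only.
# In this sketch the generator trivially knows the answer, which a real PoH
# construction must prevent; the point is just to make the workflow concrete.

import hashlib
import os

def gen_puzzle():
    """Return a puzzle instance and a public verification tag."""
    answer = os.urandom(4).hex()          # stands in for a CAPTCHA solution
    puzzle = f"transcribe the code shown to the human: {answer}"
    tag = hashlib.sha256(answer.encode()).hexdigest()
    return puzzle, tag

def verify(tag, claimed_answer):
    """Anyone can verify with a computer, without contacting the solver."""
    return hashlib.sha256(claimed_answer.encode()).hexdigest() == tag

puzzle, tag = gen_puzzle()
human_answer = puzzle.split(": ")[1]       # a human would read this off a CAPTCHA
print(verify(tag, human_answer))           # True
print(verify(tag, "wrong guess"))          # False
```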

    Autoscopy Jr.: Intrusion Detection for Embedded Control Systems

    Securing embedded control systems within the power grid presents a unique challenge: on top of the resource restrictions inherent to these devices, SCADA systems must also accommodate strict, non-negotiable timing requirements, and their massive scale greatly amplifies costs such as power consumption. These constraints make the conventional approach to host intrusion detection, namely employing virtualization in some manner, too costly or impractical for embedded control systems within critical infrastructure. Instead, we take an in-kernel approach to system protection, building upon the Autoscopy system developed by Ashwin Ramaswamy, which places probes on indirectly-called functions and uses them to monitor its host system for behavior characteristic of control-flow-altering malware such as rootkits. In this thesis, we attempt to show that such an approach is a viable way to protect embedded control systems. We first identify several issues with the original prototype, and present a new version of the program (dubbed Autoscopy Jr.) that uses trusted location lists to verify that control is coming from a known, trusted location inside our kernel. Although we encountered additional performance overhead when testing our new design, we developed a kernel profiler that allowed us to identify the probes responsible for this overhead and discard them, leaving us with a final probe list that generated less than 5% overhead on every one of our benchmark tests. Finally, we attempted to run Autoscopy Jr. on two specialized kernels (one with an optimized probing framework, and another with a hardening patch installed), finding that the former did not produce enough performance benefits to preclude using our profiler, and that the latter required a different method of scanning for indirect functions for Autoscopy Jr. to operate. We argue that Autoscopy Jr. is a feasible intrusion detection system for embedded control systems, as it adapts easily to a variety of system architectures and allows us to intelligently balance security and performance on these critical devices.
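    The following is a conceptual, user-space sketch (not kernel code) of the trusted-location-list idea described above: a probe on an indirectly-called function compares the caller's address against a whitelist of known call sites and flags anything else. The function names and addresses are hypothetical.

```python
# Conceptual illustration of a trusted-location-list check: when a probed,
# indirectly-called function fires, the address control arrived from is
# compared against known, trusted call sites; unknown origins are flagged
# as characteristic of control-flow-altering malware.

TRUSTED_CALL_SITES = {
    "tcp_v4_rcv": {0xC0123456, 0xC01234A0},   # hypothetical legitimate callers
    "vfs_read":   {0xC0200010},
}

def probe_hit(probed_function, caller_address):
    """Return True if control came from a known, trusted location."""
    trusted = TRUSTED_CALL_SITES.get(probed_function, set())
    if caller_address in trusted:
        return True
    print(f"ALERT: {probed_function} called from untrusted 0x{caller_address:X}")
    return False

probe_hit("tcp_v4_rcv", 0xC0123456)   # trusted call site
probe_hit("tcp_v4_rcv", 0xDEADBEEF)   # untrusted origin: raise an alert
```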

    Unauthorized Access

    Going beyond current books on privacy and security, this book proposes specific solutions to public policy issues pertaining to online privacy and security. Requiring no technical or legal expertise, it provides a practical framework to address ethical and legal issues. The authors explore the well-established connection between social norms, privacy, security, and technological structure. They also discuss how rapid technological developments have created novel situations that lack relevant norms, and present ways to develop these norms for protecting informational privacy and ensuring sufficient information security.

    Cyber Law and Espionage Law as Communicating Vessels

    Professor Lubin's contribution is Cyber Law and Espionage Law as Communicating Vessels, pp. 203-225. Existing legal literature would have us assume that espionage operations and “below-the-threshold” cyber operations are doctrinally distinct. Whereas one is subject to the scant, amorphous, and under-developed legal framework of espionage law, the other is subject to an emerging, ever-evolving body of legal rules known cumulatively as cyber law. This dichotomy, however, is erroneous and misleading. In practice, espionage and cyber law function as communicating vessels, and so are better conceived as two elements of a complex system, Information Warfare (IW). This paper therefore first draws attention to the similarities between the practices – the fact that the actors, technologies, and targets are interchangeable, as are the knee-jerk legal reactions of the international community. In light of the convergence between peacetime Low-Intensity Cyber Operations (LICOs) and peacetime Espionage Operations (EOs), the two should be subjected to a single regulatory framework, one which recognizes the role intelligence plays in our public world order and which adopts a contextual and consequential method of inquiry. The paper proceeds in the following order: Part 2 provides a descriptive account of the unique symbiotic relationship between espionage and cyber law, and further explains the reasons for this dynamic. Part 3 places the discussion surrounding this relationship within the broader discourse on IW, making the claim that the convergence between EOs and LICOs, as described in Part 2, could further be explained by an even larger convergence across all the various elements of the informational environment. Parts 2 and 3 then serve as the backdrop for Part 4, which details the attempt of the drafters of the Tallinn Manual 2.0 to compartmentalize espionage law and cyber law, and the deficits of their approach. The paper concludes by proposing an alternative holistic understanding of espionage law, grounded in general principles of law, which is more practically transferable to the cyber realm.

    Digital Forensics Investigation Frameworks for Cloud Computing and Internet of Things

    Rapid growth in Cloud computing and the Internet of Things (IoT) introduces new vulnerabilities that can be exploited to mount cyber-attacks. Digital forensics investigation is commonly used to find the culprit and help expose the vulnerabilities. Traditional digital forensics tools and methods are unsuitable for use in these technologies; therefore, new digital forensics investigation frameworks and methodologies are required. This research develops frameworks and methods for digital forensics investigations in cloud and IoT platforms.

    Design and implementation of robust systems for secure malware detection

    Malicious software (malware) has increased significantly in both number and effectiveness over the past years. Until 2006, such software was mostly used to disrupt network infrastructures or to show off coders' skills. Nowadays, malware constitutes a very important source of economic profit and is very difficult to detect. Thousands of novel variants are released every day, and modern obfuscation techniques are used to ensure that signature-based anti-malware systems cannot detect such threats. This tendency has also reached mobile devices, with Android being the most targeted platform. To counteract this phenomenon, the scientific community has developed many approaches that attempt to increase the resilience of anti-malware systems. Most of these approaches rely on machine learning and have become very popular in commercial applications as well. However, attackers are now knowledgeable about these systems and have started preparing their countermeasures. This has led to an arms race between attackers and developers: novel systems are progressively built to tackle attacks that get more and more sophisticated. For this reason, developers increasingly need to anticipate the attackers' moves, which means that defense systems should be built proactively, i.e., by introducing security design principles into their development. The main goal of this work is to show that such a proactive approach can be employed in a number of case studies. To do so, I adopted a global methodology that can be divided into two steps: first, understanding the vulnerabilities of current state-of-the-art systems (this anticipates the attacker's moves); then, developing novel systems that are robust to these attacks, or suggesting research guidelines with which current systems can be improved. This work presents two main case studies, concerning the detection of PDF and Android malware. The idea is to show that a proactive approach can be applied in both the x86 and the mobile world. The contributions provided in these two case studies are manifold. With respect to PDF files, I first develop novel attacks that can empirically and optimally evade current state-of-the-art detectors. Then, I propose possible solutions with which the robustness of such detectors against known and novel attacks can be increased. With respect to the Android case study, I first show how current signature-based tools and academically developed systems are weak against empirical obfuscation attacks, which can be easily employed without particular knowledge of the targeted systems. Then, I examine a possible strategy to build a machine learning detector that is robust against both empirical obfuscation and optimal attacks. Finally, I show how proactive approaches can also be employed to develop systems that are not aimed at detecting malware, such as mobile fingerprinting systems. In particular, I propose a methodology to build a powerful mobile fingerprinting system and examine possible attacks with which users might be able to evade it, thus preserving their privacy.
To provide the aforementioned contributions, I co-developed (with the cooperation of the researchers at PRALab and Ruhr-Universität Bochum) various systems: a library to perform optimal attacks against machine learning systems (AdversariaLib), a framework for automatically obfuscating Android applications, a system for the robust detection of JavaScript malware inside PDF files (LuxOR), a robust machine learning system for the detection of Android malware, and a system to fingerprint mobile devices. I also contributed to the development of Android PRAGuard, a dataset containing a large number of empirical obfuscation attacks against the Android platform. Finally, I entirely developed Slayer NEO, an evolution of a previous system for the detection of PDF malware. The results attained with the aforementioned tools show that it is possible to proactively build systems that predict possible evasion attacks. This suggests that a proactive approach is crucial to building systems that provide concrete security against general and evasion attacks.
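As a toy illustration of the kind of evasion studied above, the sketch below shows a feature-removal attack against a hypothetical linear malware scorer. The features, weights, and threshold are invented for the example and are not taken from the thesis; the point is that a detector whose decision rests on a few easily hidden features is fragile against obfuscation.

```python
# Toy feature-removal evasion against a hypothetical linear malware scorer.
# Obfuscation (e.g. hiding an API call behind reflection or encryption) makes
# a suspicious feature invisible to static extraction, dropping the score
# below the detection threshold.

WEIGHTS = {
    "sends_sms_to_premium_number": 2.5,
    "loads_code_dynamically": 1.5,
    "requests_device_admin": 1.0,
    "uses_https": -0.5,
}
THRESHOLD = 2.5

def score(features):
    """Sum of weights for the features the extractor actually observes."""
    return sum(WEIGHTS.get(f, 0.0) for f in features)

original = {"sends_sms_to_premium_number", "loads_code_dynamically", "uses_https"}
# Obfuscation hides the dynamic-loading behaviour from the feature extractor.
obfuscated = original - {"loads_code_dynamically"}

print(score(original) >= THRESHOLD)    # True: detected
print(score(obfuscated) >= THRESHOLD)  # False: evades the detector
```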

    Contextual Authority Tagging: Expertise Location via Social Labeling

    This study investigates the possibility of a group of people making explicit their tacit knowledge about one another's areas of expertise. Through a design consisting of a modified Delphi study, group members are asked to label both their own and each other's areas of expertise over the course of five rounds. Statistical analysis and qualitative evaluation of 10 participating organizations suggest they were successful and that, with simple keywords, group members can convey the salient areas of expertise of their colleagues to a degree that is deemed "similar" and of high quality by both third parties and those being evaluated. More work needs to be done to make this information directly actionable, but the foundational aspects have been identified. In a world with a democratization of voices from all around and increasing demands on our time and attention, this study suggests that simple, aggregated third-party expertise evaluations can augment our ongoing struggle for quality information source selection. These evaluations can serve as loose credentials when more expensive or heavyweight reputation cues may not be viable.
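    As a minimal sketch of how aggregated third-party labels could be turned into an expertise profile, the snippet below simply tallies peer-assigned keywords. The names and labels are hypothetical, and the study's actual Delphi procedure involves multiple structured rounds rather than a single tally.

```python
# Minimal sketch: aggregate peer-assigned expertise keywords into a profile.
# Each rater contributes a list of keywords for a colleague; the profile is
# the most frequently assigned labels (hypothetical names and labels).

from collections import Counter

peer_labels = {
    "alice": [["network security", "python"], ["python", "cryptography"]],
    "bob":   [["databases"], ["databases", "sql", "etl"]],
}

def expertise_profile(person, top_n=3):
    """Return the top_n most commonly assigned keywords for a person."""
    counts = Counter(label for rater in peer_labels[person] for label in rater)
    return counts.most_common(top_n)

print(expertise_profile("alice"))  # e.g. [('python', 2), ('network security', 1), ('cryptography', 1)]
```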