
    Combating False Negatives in Adversarial Imitation Learning

    In adversarial imitation learning, a discriminator is trained to differentiate agent episodes from expert demonstrations representing the desired behavior. However, as the trained policy learns to be more successful, the negative examples (the ones produced by the agent) become increasingly similar to expert ones. Even when the task is successfully accomplished in some of the agent's trajectories, the discriminator is trained to output low values for them. We hypothesize that this inconsistent training signal for the discriminator can impede its learning and consequently lead to worse overall performance of the agent. We show experimental evidence for this hypothesis, namely that these 'false negatives' (i.e. successful agent episodes) significantly hinder adversarial imitation learning, which is the first contribution of this paper. Then, we propose a method to alleviate the impact of false negatives and test it on the BabyAI environment. This method consistently improves sample efficiency over the baselines by at least an order of magnitude. Comment: This is an extended version of the student abstract published at the 34th AAAI Conference on Artificial Intelligence.
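
    A minimal sketch of one way such a false-negative mitigation could look, assuming a GAIL-style binary discriminator trained with PyTorch: successful agent episodes are relabeled as positive examples rather than negatives. The function signature, the per-sample success mask, and the relabeling scheme are illustrative assumptions, not the authors' published method.

        import torch
        import torch.nn.functional as F

        def discriminator_loss(disc, expert_obs, agent_obs, agent_success):
            """GAIL-style discriminator update that relabels successful
            agent episodes as positives (illustrative sketch only).

            disc: module mapping observations to a logit (expert = 1).
            agent_success: float tensor shaped like the agent logits,
            1.0 where the agent episode accomplished the task, else 0.0.
            """
            expert_logits = disc(expert_obs)
            agent_logits = disc(agent_obs)

            # Standard GAIL targets are expert -> 1, agent -> 0; here,
            # successful agent trajectories also get target 1, removing
            # the inconsistent 'false negative' training signal.
            loss_expert = F.binary_cross_entropy_with_logits(
                expert_logits, torch.ones_like(expert_logits))
            loss_agent = F.binary_cross_entropy_with_logits(
                agent_logits, agent_success)
            return loss_expert + loss_agent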

    Addressing the new generation of spam (Spam 2.0) through Web usage models

    New Internet collaborative media introduce new ways of communicating that are not immune to abuse. A fake eye-catching profile on social networking websites, a promotional review, a response to a thread in online forums with unsolicited content, or a manipulated Wiki page are examples of the new generation of spam on the web, referred to as Web 2.0 Spam or Spam 2.0. Spam 2.0 is defined as the propagation of unsolicited, anonymous, mass content to infiltrate legitimate Web 2.0 applications. The current literature does not address Spam 2.0 in depth, and the outcomes of efforts to date are inadequate. The aim of this research is to formalise a definition for Spam 2.0 and provide Spam 2.0 filtering solutions. Early detection, extendibility, robustness and adaptability are key factors in the design of the proposed method. This dissertation provides a comprehensive survey of the state-of-the-art web spam and Spam 2.0 filtering methods to highlight the unresolved issues and open problems, while at the same time effectively capturing the knowledge in the domain of spam filtering. This dissertation proposes three solutions in the area of Spam 2.0 filtering: (1) characterising and profiling Spam 2.0, (2) an Early-Detection based Spam 2.0 Filtering (EDSF) approach, and (3) an On-the-Fly Spam 2.0 Filtering (OFSF) approach. All the proposed solutions are tested against real-world datasets and their performance is compared with that of existing Spam 2.0 filtering methods. This work has coined the term ‘Spam 2.0’, provided insight into the nature of Spam 2.0, and proposed filtering mechanisms to address this new and rapidly evolving problem.
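
    To illustrate early-detection-style filtering on web usage data, in the spirit of the EDSF approach described above, the sketch below classifies a visitor session from behavioral features before any content is published. The feature set, the sample values, and the classifier choice are hypothetical assumptions, not the dissertation's actual model.

        # Illustrative sketch: flag a bot-like session from web-usage
        # features before its content ever goes live. Feature names are
        # hypothetical placeholders, not the EDSF feature set.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Each row: [requests_per_minute, avg_time_per_page_s,
        #            form_fill_time_s, pages_visited_before_post]
        sessions = np.array([
            [2.0, 45.0, 60.0, 6],   # human-like browsing
            [40.0, 1.5, 0.8, 1],    # bot-like: fast, straight to the form
            [3.5, 30.0, 42.0, 4],
            [55.0, 0.9, 0.5, 1],
        ])
        labels = np.array([0, 1, 0, 1])  # 0 = legitimate, 1 = Spam 2.0 bot

        clf = LogisticRegression().fit(sessions, labels)
        new_session = np.array([[48.0, 1.2, 0.7, 1]])
        print(clf.predict(new_session))  # -> [1]: block before posting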

    Development and Validation of a Proof-of-Concept Prototype for Analytics-based Malicious Cybersecurity Insider Threat in a Real-Time Identification System

    Insider threat has continued to be one of the most difficult cybersecurity threat vectors detectable by contemporary technologies. Most organizations apply standard technology-based practices to detect unusual network activity. While there have been significant advances in intrusion detection systems (IDS) as well as security incident and event management (SIEM) solutions, these technologies fail to take into consideration the human aspects of personality and emotion in computer use and network activity, since insider threats are human-initiated. External influencers impact how an end-user interacts with both colleagues and organizational resources. Taking into consideration external influencers, such as personality and changes in organizational policies and structure, along with unusual technical activity analysis, would be an improvement over contemporary detection tools used for identifying at-risk employees. This would allow upper management or other organizational units to intervene before a malicious cybersecurity insider threat event occurs, or to mitigate it quickly once initiated. The main goal of this research study was to design, develop, and validate a proof-of-concept prototype for a malicious cybersecurity insider threat alerting system that will assist in the rapid detection and prediction of human-centric precursors to malicious cybersecurity insider threat activity. Disgruntled employees or end-users wishing to cause harm to the organization may do so by abusing the trust given to them in their access to available network and organizational resources. Reports on malicious insider threat actions indicated that insider threat attacks make up roughly 23% of all cybercrime incidents, resulting in $2.9 trillion in employee fraud losses globally. The damage and negative impact that insider threats cause were reported to be higher than those of outsider or other types of cybercrime incidents. Consequently, this study utilized weighted indicators to measure and correlate simulated user activity to possible precursors to malicious cybersecurity insider threat attacks. This study consisted of a mixed-methods approach utilizing an expert panel, developmental research, and quantitative data analysis using the developed tool on a simulated data set. To assure the validity and reliability of the indicators, a panel of subject matter experts (SMEs) reviewed the indicators and indicator categorizations collected from prior literature, following the Delphi technique. The SMEs' responses were incorporated into the development of a proof-of-concept prototype. Once the proof-of-concept prototype was completed and fully tested, an empirical simulation research study was conducted utilizing simulated user activity within a 16-month time frame. The results of the empirical simulation study were analyzed and presented. Recommendations resulting from the study are also provided.
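
    A hedged sketch of how weighted indicators might be combined into an alert score for such a system. The indicator names, weights, and threshold below are illustrative placeholders, not the SME-validated set developed in the study.

        # Illustrative weighted-indicator scoring for insider-threat
        # alerting. Weights sum to 1.0; indicator values are normalized
        # observations in [0, 1]. All names and numbers are hypothetical.
        INDICATOR_WEIGHTS = {
            "after_hours_access": 0.20,
            "bulk_file_downloads": 0.30,
            "policy_violation_flags": 0.15,
            "negative_sentiment_msgs": 0.20,   # human/behavioral precursor
            "privilege_escalation_attempts": 0.15,
        }
        ALERT_THRESHOLD = 0.6

        def threat_score(observations: dict) -> float:
            """Weighted sum of clamped indicator values in [0, 1]."""
            return sum(INDICATOR_WEIGHTS[k] * min(max(v, 0.0), 1.0)
                       for k, v in observations.items()
                       if k in INDICATOR_WEIGHTS)

        user_activity = {
            "after_hours_access": 1.0,
            "bulk_file_downloads": 0.9,
            "negative_sentiment_msgs": 0.8,
        }
        score = threat_score(user_activity)  # 0.63 for these values
        if score >= ALERT_THRESHOLD:
            print(f"ALERT: score {score:.2f}; escalate for human review")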

    Plausible Cause : Explanatory Standards in the Age of Powerful Machines

    Much scholarship in law and political science has long understood the U.S. Supreme Court to be the apex court in the federal judicial system, and so to relate hierarchically to lower federal courts. On that top-down view, exemplified by the work of Alexander Bickel and many subsequent scholars, the Court is the principal, and lower federal courts are its faithful agents. Other scholarship takes a bottom-up approach, viewing lower federal courts as faithless agents or analyzing the percolation of issues in those courts before the Court decides. This Article identifies circumstances in which the relationship between the Court and other federal courts is best viewed as neither top-down nor bottom-up, but side-by-side. When the Court intervenes in fierce political conflicts, it may proceed in stages, interacting with other federal courts in a way that is aimed at enhancing its public legitimacy. First, the Court renders a decision that is interpreted as encouraging, but not requiring, other federal courts to expand the scope of its initial ruling. Then, most federal courts do expand the scope of the ruling, relying upon the Court's initial decision as authority for doing so. Finally, the Court responds by invoking those district and circuit court decisions as authority for its own more definitive resolution. That dialectical process, which this Article calls reciprocal legitimation, was present along the path from Brown v. Board of Education to the unreasoned per curiams, from Baker v. Carr to Reynolds v. Sims, and from United States v. Windsor to Obergefell v. Hodges, as partially captured by Appendix A to the Court's opinion in Obergefell and the opinion's several references to it. This Article identifies the phenomenon of reciprocal legitimation, explains that it may initially be intentional or unintentional, and examines its implications for theories of constitutional change and scholarship in federal courts and judicial politics. Although the Article's primary contribution is descriptive and analytical, it also normatively assesses reciprocal legitimation given the sacrifice of judicial candor that may accompany it. A Coda examines the likelihood and desirability of reciprocal legitimation in response to President Donald Trump's derision of the federal courts as political and so illegitimate.

    Plausible Cause : Explanatory Standards in the Age of Powerful Machines

    The Fourth Amendment's probable cause requirement is not about numbers or statistics. It is about requiring the police to account for their decisions. For a theory of wrongdoing to satisfy probable cause, and warrant a search or seizure, it must be plausible. The police must be able to explain why the observed facts invite an inference of wrongdoing, and judges must have an opportunity to scrutinize that explanation. Until recently, the explanatory aspect of Fourth Amendment suspicion, plausible cause, has been uncontroversial, and central to the Supreme Court's jurisprudence, for a simple reason: explanations have served, in practice, as a guarantor of statistical likelihood. In other words, forcing police to articulate theories of wrongdoing is the means by which courts have traditionally ensured that (roughly) the right persons, houses, papers, and effects are targeted for intrusion. Going forward, however, technological change promises to disrupt the harmony between explanatory standards and statistical accuracy. Powerful machines enable a previously impossible combination: accurate predictions unaccompanied by explanations. As that change takes hold, we will need to think carefully about why explanation-giving matters. When judges assess the sufficiency of explanations offered by police (and other officials), what are they doing? If the answer comes back to error reduction (if the point of judicial oversight is simply to maximize the overall number of accurate decisions), machines could theoretically do the job as well as, if not better than, humans. But if the answer involves normative goals beyond error reduction, automated tools, no matter their power, will remain, at best, partial substitutes for judicial scrutiny. This Article defends the latter view. I argue that statistical accuracy, though important, is not the crux of explanation-giving. Rather, explanatory standards, like probable cause, hold officials accountable to a plurality of sometimes-conflicting constitutional and rule-of-law values that, in our legal system, bound the scope of legitimate authority. Error reduction is one such value. But there are many others, and sometimes the values work at cross purposes. When judges assess explanations, they navigate a space of value pluralism: they identify which values are at stake in a given decisional environment and ask, where necessary, if those values have been properly balanced. Unexplained decisions render this process impossible and, in so doing, hobble the judicial role. Ultimately, that role has less to do with analytic power than practiced wisdom. A common argument against replacing judges, and other human experts, with intelligent machines is that machines are not (yet) intelligent enough to take up the mantle. In the age of powerful algorithms, however, this turns out to be a weak, and temporally limited, claim. The better argument, I suggest in closing, is that judging is not solely, or even primarily, about intelligence. It is about prudence.

    Privacy-preserving systems around security, trust and identity

    Data has proved to be the most valuable asset in a modern world of rapidly advancing technologies. Companies try to maximise their profits by extracting valuable insights from collected data about people's trends and behaviour, data which can often be considered personal and sensitive. Additionally, sophisticated adversaries often target organisations aiming to exfiltrate sensitive data, to sell it to third parties or to demand ransom. Hence, the privacy assurance of the individual data producers, who rely on simply trusting that the services they use took all the necessary countermeasures to protect them, is a matter of great importance. Distributed ledger technology and its variants can securely store data and preserve its privacy with novel characteristics. Additionally, the concept of self-sovereign identity, which gives control back to the data subjects, is an expected future step once these approaches mature further. Last but not least, big data analysis typically occurs through machine learning techniques. However, the security of these techniques is often questioned, since adversaries aim to exploit them for their benefit. The aspects of security, privacy and trust are highlighted throughout this thesis, which investigates several emerging technologies that aim to protect and analyse sensitive data, comparing them with existing systems, tools and approaches in terms of security guarantees and performance efficiency. The contributions of this thesis are: i) the presentation of a novel distributed ledger infrastructure tailored to the domain name system; ii) the adaptation of this infrastructure to a critical healthcare use case; iii) the development of a novel self-sovereign identity healthcare scenario, in which a data scientist analyses sensitive data stored on the premises of three hospitals through a privacy-preserving machine learning approach; and iv) a thorough investigation of adversarial attacks that aim to exploit machine learning intrusion detection systems by “tricking” them into misclassifying carefully crafted inputs, such as malware identified as benign. A significant finding is that the security and privacy of data are often neglected since they do not directly impact people's lives. It is common for the protection and confidentiality of systems, even those of a critical nature, to be an afterthought, considered only after malicious intents occur. Further, emerging sets of technologies, tools and approaches built on fundamental security and privacy principles, such as distributed ledger technology, should be favoured by existing systems that can adopt them without significant changes and compromises. Additionally, it has been shown that the decentralisation of machine learning algorithms through self-sovereign identity technologies that provide novel end-to-end encrypted channels is possible without sacrificing the valuable utility of the original machine learning algorithms. However, alongside these technological advancements, adversaries are becoming more sophisticated and are trying to exploit the aforementioned machine learning approaches, and other similar ones, for their benefit through various tools and approaches. Adversarial attacks pose a real threat to any machine learning algorithm and artificial intelligence technique, and their detection is challenging and often problematic. Hence, any security professional operating in this domain should consider the impact of these attacks and the protection countermeasures needed to combat or minimise them.
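
    As a concrete illustration of the adversarial attacks discussed above, the sketch below applies an FGSM-style perturbation to evade an ML-based intrusion detection classifier. It assumes a differentiable PyTorch model over continuous feature vectors, which is itself an assumption: real malware features are often discrete, so this is illustrative only and not a method from the thesis.

        import torch

        def fgsm_evasion(model, features, true_label, eps=0.05):
            """FGSM-style evasion sketch: nudge a malicious sample's
            feature vector so the detector misclassifies it as benign.

            model: differentiable classifier mapping a (1, d) batch of
            features to class logits. eps bounds the perturbation size.
            """
            x = features.clone().detach().requires_grad_(True)
            logits = model(x.unsqueeze(0))
            loss = torch.nn.functional.cross_entropy(
                logits, torch.tensor([true_label]))
            loss.backward()
            # Step *up* the loss gradient so the sample moves away from
            # its true (malicious) class toward a benign prediction.
            return (x + eps * x.grad.sign()).detach()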

    Perceptions of the conviction rate of reported adult female rape in Verulam, Durban.

    Master of Social Science in Criminology. University of KwaZulu-Natal, Durban, 2015. As rape is ranked as one of the most prevalent crimes in South Africa, its causes and consequences have become the subject of a large body of research. However, statistics and research on what happens after a rape report are rare. An estimated 7 percent of sexual offence cases that were reported to the police in 2012/13 resulted in a conviction, suggesting that there are major problems in the system that are restricting victims from obtaining justice. With a paucity of research having been done on the process from rape report to the conviction stage, research on the outcome of rape reports deserves attention. Therefore, regardless of its relatively small scale, this research was an attempt to fill this void. Based on a qualitative research methodology, the study focused on establishing which factors hinder the achievement of a high conviction rate of rape perpetrators in Verulam, Durban. The research focused on reported adult female rape in a four-year period from 2009 to 2013. Fifteen criminal justice personnel participated in this study by answering a semi-structured, open-ended questionnaire. A 5 percent conviction rate within the study period was identified, illustrating that rape victims are the most marginalised victims in society. Content data analysis indicated that the requirements for a conviction largely consist of extra-legal factors such as corroborative evidence, having qualified personnel, consistency, and court attendance. Unsurprisingly, 'rape myths' and stereotyped notions are very active in the system; for example, if a woman sustains physical injuries or reports the violation soon after it occurred, she is deemed to be a ‘genuine’ rape victim. The fact that accused persons abscond from court proceedings whilst on bail was one of the obstacles highlighted by the respondents. Avoiding this obstacle can be achieved through an assessment of the South African legal system. Another major obstacle identified was the lack of expertise within the criminal justice system (CJS). However, there is potential to improve the conviction rate, and a promising suggestion is specialisation in rape cases, specifically in terms of rape investigation, prosecution, and the magistrates adjudicating such matters. For a positive change in conviction rates, and to ensure that victims attain justice, the actual implementation of the recommendations put forward in this study is necessary. However, further research is crucial, particularly research that can offer explanations for victims failing to attend court proceedings and withdrawing their cases.

    A governance framework for algorithmic accountability and transparency

    Algorithmic systems are increasingly being used as part of decision-making processes in both the public and private sectors, with potentially significant consequences for individuals, organisations and societies as a whole. Algorithmic systems in this context refer to the combination of algorithms, data and the interface process that together determine the outcomes that affect end users. Many types of decisions can be made faster and more efficiently using algorithms. A significant factor in the adoption of algorithmic systems for decision-making is their capacity to process large amounts of varied data sets (i.e. big data), which can be paired with machine learning methods in order to infer statistical models directly from the data. The same properties of scale, complexity and autonomous model inference, however, are linked to increasing concerns that many of these systems are opaque to the people affected by their use and lack clear explanations for the decisions they make. This lack of transparency risks undermining meaningful scrutiny and accountability, which is a significant concern when these systems are applied as part of decision-making processes that can have a considerable impact on people's human rights (e.g. critical safety decisions in autonomous vehicles; allocation of health and social service resources, etc.). This study develops policy options for the governance of algorithmic transparency and accountability, based on an analysis of the social, technical and regulatory challenges posed by algorithmic systems. Based on a review and analysis of existing proposals for the governance of algorithmic systems, four policy options are proposed, each of which addresses a different aspect of algorithmic transparency and accountability: 1. awareness raising: education, watchdogs and whistleblowers; 2. accountability in public-sector use of algorithmic decision-making; 3. regulatory oversight and legal liability; and 4. global coordination for algorithmic governance.

    (Dis)Obedience in Digital Societies: Perspectives on the Power of Algorithms and Data

    Algorithms are not to be regarded as a technical structure but as a social phenomenon: they embed themselves, currently still very subtly, into our political and social system. Algorithms shape human behavior on various levels: they influence not only the aesthetic reception of the world but also the well-being and social interaction of their users. They act and intervene in a political and social context. As algorithms influence individual behavior in these social and political situations, their power should be the subject of critical discourse, or even lead to active disobedience and to the need for appropriate tools and methods which can be used to break algorithmic power.