
    Manipulation of the Bitcoin market: an agent-based study

    Fraudulent actions of a trader or a group of traders can cause substantial disturbance to the market, both directly, by influencing the price of an asset, and indirectly, by misinforming other market participants. Such behavior can be a source of systemic risk and of growing distrust among market participants, consequences that call for viable countermeasures. Building on the foundations provided by the extant literature, this study designs an agent-based market model capable of reproducing the behavior of the Bitcoin market during an alleged Bitcoin price manipulation that occurred between 2017 and early 2018. The model includes the mechanisms of a limit order book market and several agents associated with different trading strategies, among them a fraudulent agent, initialized from empirical data, who performs market manipulation. The model is validated with respect to the Bitcoin price, the amount of Bitcoin obtained by the fraudulent agent, and the traded volume. Simulation results provide a satisfactory fit to historical data. Several price dips and volume anomalies are explained by the actions of the fraudulent trader, complementing the known body of evidence extracted from blockchain activity. The model suggests that the presence of the fraudulent agent was essential to the Bitcoin price development in the given period; without this agent, it would have been very unlikely for the price to reach the heights it did in late 2017. The insights gained from the model, especially the connection between liquidity and manipulation efficiency, open a discussion on how to prevent such illicit behavior.
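    The core idea of such a study can be illustrated by a deliberately simplified sketch (not the paper's limit order book model): the same noise-trader randomness is replayed with and without a manipulative agent whose dip-buying has price impact, and the two counterfactual price paths are compared. All names and parameters here are illustrative assumptions.

    ```python
    import random

    def simulate(steps=500, manipulator=True, seed=1):
        """Toy counterfactual experiment: noise traders perturb the price
        each step; an optional manipulative agent buys on dips below the
        reference level, and its purchases push the price up (price impact).
        Returns (final price, manipulator's accumulated holdings)."""
        random.seed(seed)
        price, holdings = 100.0, 0.0
        for _ in range(steps):
            price += random.gauss(0, 1)        # noise-trader impact
            if manipulator and price < 100:    # agent buys the dip
                qty = (100 - price) * 0.1
                holdings += qty
                price += 0.05 * qty            # impact of the agent's buys
            price = max(price, 1.0)            # price floor
        return price, holdings

    # Same random seed => same noise path; only the agent's presence differs.
    with_agent, holdings = simulate(manipulator=True)
    without_agent, _ = simulate(manipulator=False)
    ```

    Because both runs consume the identical noise sequence, any gap between the two final prices is attributable to the agent alone, mirroring the paper's counterfactual argument that the price would not have reached its late-2017 heights without the manipulator.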

    On the impact of non-IID data on the performance and fairness of differentially private federated learning

    Federated Learning enables distributed data holders to train a shared machine learning model on their collective data. It provides some measure of privacy by not requiring the data to be pooled and centralized, but it has still been shown to be vulnerable to adversarial attacks. Differential Privacy provides rigorous guarantees and sufficient protection against adversarial attacks and has been widely employed in recent years to perform privacy-preserving machine learning. One common trait of many recent methods for federated learning and federated differentially private learning is the assumption of IID data, which in real-world scenarios most certainly does not hold. In this work, we empirically investigate the effect of non-IID data at the node level on federated, differentially private, deep learning. We show that non-IID data has a negative impact on both the performance and the fairness of the trained model and discuss the trade-off between privacy, utility, and fairness. Our results highlight the limits of common federated learning algorithms in a differentially private setting in providing robust, reliable results across underrepresented groups.
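    The mechanics behind differentially private federated learning can be sketched in a few lines: each client computes a local update, the update is norm-clipped to bound its sensitivity, and calibrated Gaussian noise is added to the aggregate. This is a minimal illustration for a linear model with squared loss, not the paper's training setup; all names and hyperparameters are assumptions.

    ```python
    import numpy as np

    def dp_fedavg_round(global_w, client_data, clip=1.0, sigma=0.8,
                        lr=0.1, rng=None):
        """One round of federated averaging with a Gaussian-mechanism
        style DP step: clip each client's update, average, add noise.
        client_data is a list of (X, y) pairs; non-IID settings simply
        give each client a skewed (X, y) distribution."""
        if rng is None:
            rng = np.random.default_rng(0)
        updates = []
        for X, y in client_data:
            grad = 2 * X.T @ (X @ global_w - y) / len(y)   # local gradient
            upd = -lr * grad
            norm = np.linalg.norm(upd)
            upd = upd * min(1.0, clip / (norm + 1e-12))    # bound sensitivity
            updates.append(upd)
        avg = np.mean(updates, axis=0)
        noise = rng.normal(0, sigma * clip / len(client_data),
                           size=avg.shape)                  # DP noise
        return global_w + avg + noise
    ```

    The non-IID effect studied in the paper arises when the per-client `(X, y)` distributions diverge: clipped, noised averages of conflicting local updates can systematically disadvantage the clients (groups) whose data is underrepresented.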

    A Compression and Simulation-Based Approach to Fraud Discovery

    With the uptake of digital services in the public and private sectors, the formalization of laws is attracting increasing attention. Yet non-compliant, fraudulent behaviours (money laundering, tax evasion, etc.) - practical realizations of violations of law - remain very difficult to formalize, as one does not know the exact formal rules that define such violations. The present work introduces a methodological framework that aims to discover non-compliance through compressed representations of behaviour, considering a fraudulent agent that explores via simulation the space of possible non-compliant behaviours in a given social domain. The framework is founded on a combination of utility maximization and active learning. We illustrate its application on a simple social domain. The results are promising and narrow the gap on fundamental questions in AI and Law, although this comes at the cost of developing complex models of the simulation environment and sophisticated reasoning models of the fraudulent agent.
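    One generic way to ground "compressed representations of behaviour" - illustrative only, not necessarily the paper's construction - is a compression distance: a behaviour trace that a corpus of compliant behaviour already explains compresses cheaply against that corpus, while an atypical (candidate non-compliant) trace adds many bytes.

    ```python
    import zlib

    def compression_distance(corpus: bytes, trace: bytes) -> int:
        """Extra compressed bytes that `trace` adds on top of `corpus`:
        an MDL-flavoured atypicality score. Low = the corpus's regularities
        already cover the trace; high = the trace contains novel structure
        worth flagging for inspection."""
        return len(zlib.compress(corpus + trace)) - len(zlib.compress(corpus))

    compliant_corpus = b"declare income; pay tax; file report; " * 50
    typical = b"declare income; pay tax; file report; "
    atypical = b"route funds via shell company offshore; "
    ```

    In a discovery loop of the kind the abstract describes, a simulated agent proposing high-utility behaviours with high scores of this sort would be a natural candidate for a human query in the active-learning step.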

    Do agents dream of abiding by the rules? Learning norms via behavioral exploration and sparse human supervision

    In recent years, several normative systems have been presented in the literature. Relying on formal methods, these systems support the encoding of legal rules into machine-readable formats, enabling one to check, for example, whether a certain workflow satisfies these rules or whether agents abide by them. However, not all rules can be easily expressed (see, for instance, the unclear boundary between tax planning and tax avoidance). The paper introduces a framework for norm identification and norm induction that automates the formalization of norms about non-compliant behavior by exploring the behavioral space via simulation and integrating input from humans via active learning. The proposed problem formulation also builds a bridge between AI & law and more general branches of AI concerned with the adaptation of artificial agents to human directives.
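    The "sparse human supervision" loop the abstract describes has a simple generic shape: explore a pool of behaviours, spend a small query budget asking the human oracle to label only the most uncertain ones, and induce the norm from the resulting labels. The sketch below shows that shape; `oracle` and `uncertain` are assumed interfaces, not the paper's API.

    ```python
    def induce_norm_labels(behaviors, oracle, uncertain, budget=5):
        """Active-learning skeleton: repeatedly pick the behaviour the
        current model is least sure about, ask the human oracle whether
        it is compliant, and record the answer. A norm classifier would
        then be fit on the returned labeled set."""
        labeled, pool = {}, list(behaviors)
        for _ in range(budget):
            if not pool:
                break
            b = max(pool, key=uncertain)   # most informative query
            labeled[b] = oracle(b)         # one unit of human supervision
            pool.remove(b)
        return labeled
    ```

    The design point is that labels are the scarce resource: the simulator can generate behaviours cheaply, so the query-selection heuristic (`uncertain`) is what keeps the human's workload sparse.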