
    Coarse and fine identification of collusive clique in financial market

    Collusive transactions refer to activity in which traders use carefully designed trades to illegally manipulate the market. By inflating specific trading volumes, they create a false impression that a market is more active than it actually is. The traders involved in collusive transactions are termed a collusive clique. Collusive cliques and their activities can cause substantial damage to a market's integrity and have attracted considerable attention from regulators around the world in recent years. Much of the current research focuses on detection based on a number of assumptions about how a normal market behaves; there is, clearly, a lack of effective decision-support tools with which to identify potential collusive cliques in a real-life setting. The study in this paper examined the structures formed by traders across all transactions and proposed two approaches to detect potential collusive cliques and their activities. The first approach targets the overall collusive trend of the traders, which is particularly useful when regulators seek a general overview of how traders gather together for their transactions. The second approach accurately detects parcel-passing-style collusive transactions on the market by analysing the relations between traders and the transacted volumes. Together, the two approaches provide complete coverage of collusive transaction identification, fulfilling the different types of regulatory requirements (e.g. MiFID II), and demonstrate a novel application of well-known computational algorithms to a real and complex financial problem. Both approaches are evaluated using real financial data drawn from the NYSE and the CME Group. Experimental results suggest that the approaches successfully identified all primary collusive clique scenarios in all selected datasets, demonstrating the effectiveness and stability of the approach.
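The abstract does not give the paper's algorithms, but the core idea of grouping traders by their mutual transaction relations can be sketched. The heuristic below is my illustration, not the published method: it flags trader pairs with reciprocal trading (each side both buys from and sells to the other) and merges overlapping pairs into candidate groups with union-find.

```python
from collections import defaultdict

def collusive_groups(trades):
    """trades: iterable of (buyer, seller, volume) tuples.
    Flag trader pairs with reciprocal trading and merge overlapping
    pairs into candidate collusive groups via union-find."""
    volume = defaultdict(float)              # (seller, buyer) -> total volume
    for buyer, seller, vol in trades:
        volume[(seller, buyer)] += vol

    parent = {}                              # union-find over traders
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x

    for (s, b) in volume:
        if (b, s) in volume:                 # reciprocal trading detected
            parent[find(s)] = find(b)

    groups = defaultdict(set)
    for t in parent:
        groups[find(t)].add(t)
    return [g for g in groups.values() if len(g) >= 2]
```

A real detector would weight edges by matched volume and timing rather than mere reciprocity, but the union-find aggregation step is the same shape.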

    Detecting wash trade in financial market using digraphs and dynamic programming

    Wash trading refers to the illegal activity of traders who use carefully designed limit orders to artificially increase trading volumes, creating a false impression of an active market. As one of the primary forms of market abuse, wash trading can be extremely damaging to the proper functioning and integrity of capital markets. Existing work focuses on collusive clique detection based on certain assumptions about trading behaviour; effective approaches for analysing and detecting wash trades in a real-life market have yet to be developed. This paper analyses and conceptualises the basic structures of the trading collusion in a wash trade using a directed graph of traders. A novel method is then proposed to detect the potential wash trade activities in a financial instrument by first recognising suspiciously matched orders and then identifying the collusions among the traders who submit such orders. Both steps are formulated as a simplified form of the Knapsack problem, which can be solved by dynamic programming. The proposed approach is evaluated on seven stock datasets from NASDAQ and the London Stock Exchange. Experimental results show that it effectively detects all primary wash trade scenarios across the selected datasets.
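The order-matching step described above reduces to subset-sum, the simplified Knapsack form that dynamic programming handles. A minimal sketch, assuming integer share volumes (the function name and interface are illustrative, not taken from the paper): find a subset of resting order volumes that exactly matches an incoming order's volume.

```python
def matching_subset(volumes, target):
    """Find indices of a subset of order volumes summing exactly to
    `target` (subset-sum, O(n * target) dynamic programming); return
    None if no such subset exists. Integer volumes assumed."""
    reachable = {0: None}            # sum -> (order index, previous sum)
    for i, v in enumerate(volumes):
        for s in list(reachable):    # snapshot: each order used at most once
            t = s + v
            if t <= target and t not in reachable:
                reachable[t] = (i, s)
    if target not in reachable:
        return None
    subset, s = [], target
    while reachable[s] is not None:  # walk back through the DP table
        i, s = reachable[s]
        subset.append(i)
    return sorted(subset)
```

Suspiciously matched orders would then feed the second, collusion-identification step over the trader digraph.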

    Predictive Analytics For Controlling Tax Evasion

    Tax evasion is an illegal practice in which a person or business entity intentionally avoids paying their true tax liability. Any business entity is required by law to file tax return statements on a periodic schedule, and failing to file a return is one of the most rudimentary forms of tax evasion; dealers committing tax evasion in this way are called return defaulters. We constructed a logistic regression model that predicts with high accuracy whether a business entity is a potential return defaulter for the upcoming tax-filing period. To do so, we analysed the effect of the volume of sales/purchase transactions among business entities (dealers) and the mean absolute deviation (MAD) of a first-digit Benford's analysis of each entity's sales transactions. We developed and deployed this model for the Commercial Taxes Department, Government of Telangana, India. A second, much more sophisticated evasion technique is circular trading: a fraudulent trading scheme in which tax evaders collude and conduct heavy illegitimate trade among themselves to hide suspicious sales transactions from the tax enforcement authorities. We developed an algorithm to detect such groups of colluding dealers by formulating the problem as finding clusters in a weighted directed graph. The novelty of our approach is that we used Benford's analysis to define the edge weights and defined a measure similar to the F1 score to compare two clusterings. Running the proposed algorithm on the commercial tax dataset uncovered several groups of colluding dealers.
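The first-digit MAD statistic mentioned above can be sketched as follows. This shows only the Benford feature, not the deployed logistic regression model, and assumes amounts are plain integers or decimals (no scientific notation):

```python
import math
from collections import Counter

# Expected first-digit frequencies under Benford's law: P(d) = log10(1 + 1/d)
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def benford_mad(amounts):
    """Mean absolute deviation between the observed first-digit
    distribution of transaction amounts and Benford's law; larger
    values suggest manipulated figures."""
    digits = [int(str(abs(a)).lstrip("0.")[0]) for a in amounts if a]
    counts = Counter(digits)
    n = len(digits)
    return sum(abs(counts[d] / n - BENFORD[d]) for d in range(1, 10)) / 9
```

In the predictive model, a dealer's MAD value would enter as one feature alongside transaction-volume features.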

    A Graph Theoretical Approach for Identifying Fraudulent Transactions in Circular Trading

    Circular trading is an infamous technique used by tax evaders to prevent tax enforcement officers from detecting suspicious transactions. Dealers using this technique bury suspicious transactions beneath several illegitimate sales transactions arranged in a circular manner. In this paper, we address this problem by developing an algorithm that detects circular trading and removes the illegitimate cycles to uncover the suspicious transactions. We formulate the problem as finding, and then deleting, a specific type of cycle in a directed edge-labelled multigraph. Running the algorithm on the commercial tax dataset provided by the Government of Telangana, India, uncovered several suspicious transactions.
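A simplified illustration of the cycle-removal idea, on an aggregated directed graph rather than the paper's edge-labelled multigraph (my sketch, not the published algorithm): repeatedly cancel the minimum traded amount around any directed cycle; the residual edges are the transactions that circular flows cannot explain away.

```python
def cancel_cycles(edges):
    """edges: dict (src, dst) -> traded amount (parallel edges aggregated).
    Repeatedly find a directed cycle and subtract the minimum amount
    along it until the graph is acyclic; return the residual edges."""
    amt = dict(edges)

    def find_cycle():
        adj = {}
        for (u, v) in amt:
            adj.setdefault(u, []).append(v)
        color, parent = {}, {}
        for start in list(adj):
            if start in color:
                continue
            color[start] = "grey"
            stack = [(start, iter(adj.get(start, [])))]
            while stack:
                u, it = stack[-1]
                for v in it:
                    if color.get(v) == "grey":   # back edge: cycle found
                        cycle, w = [], u
                        while w != v:            # climb the DFS stack to v
                            cycle.append(w)
                            w = parent[w]
                        cycle.append(v)
                        cycle.reverse()          # path order v -> ... -> u
                        return cycle
                    if v not in color:
                        color[v] = "grey"
                        parent[v] = u
                        stack.append((v, iter(adj.get(v, []))))
                        break
                else:
                    color[u] = "black"
                    stack.pop()
        return None

    while True:
        cyc = find_cycle()
        if cyc is None:
            return amt
        pairs = [(cyc[i], cyc[(i + 1) % len(cyc)]) for i in range(len(cyc))]
        m = min(amt[p] for p in pairs)           # cancel the bottleneck amount
        for p in pairs:
            amt[p] -= m
            if amt[p] == 0:
                del amt[p]
```

The paper works at the level of individual labelled transactions rather than aggregated amounts, but the detect-and-delete loop has this shape.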

    RegTech and Predictive Lawmaking: Closing the RegLag Between Prospective Regulated Activity and Regulation

    Regulation chronically suffers significant delay, starting at the detectable initiation of a “regulable activity” and culminating in an effective regulatory response. Regulator reaction is impeded by various obstacles: (i) confusion in the optimal level, form, and choice of regulatory agency, (ii) political resistance to creating new regulatory agencies, (iii) lack of statutory authorization to address particular novel problems, (iv) jurisdictional competition among regulators, (v) Congressional disinclination to regulate given political conditions, and (vi) a lack of expertise, both substantive and procedural, to deploy successful counter-measures. Delay is rooted in several stubborn institutions, including libertarian ideals permeating both the U.S. legal system and the polity, constitutional constraints on the exercise of governmental powers, chronic resource constraints including underfunding, and agency technical incapacities. Therefore, regulatory prospecting to identify regulable activity often lags the suspicion of future regulable activity or its first discernible appearance. This Article develops the regulatory lag theory (RegLag), argues that regulatory technologies (RegTech), including those from the blockchain technology space, can help narrow the RegLag gap, and proposes programs to improve regulatory agency foresight so that agencies can more aggressively adapt to changing regulable activities, such as by using promising anticipatory approaches.

    High Quality P2P Service Provisioning via Decentralized Trust Management

    Trust management is essential to fostering cooperation and high-quality service provisioning in several peer-to-peer (P2P) applications. Among those applications are customer-to-customer (C2C) trading sites and markets of services implemented on top of centralized infrastructures, P2P systems, or online social networks. In these application contexts, existing work does not adequately address the heterogeneity of the problem settings encountered in practice: the different approaches participants employ to evaluate the trustworthiness of their partners, the diversity of contextual factors that influence service provisioning quality, and the variety of possible behavioural patterns of the participants. This thesis presents the design and usage of appropriate computational trust models to enforce cooperation and ensure high-quality P2P service provisioning in the presence of this heterogeneity. First, I propose a graphical probabilistic framework with which peers model and evaluate the trustworthiness of others in a highly heterogeneous setting. The framework targets several important issues in the trust research literature: the multi-dimensionality of trust, the reliability of different rating sources, and the personalized modeling and computation of trust in a participant based on the quality of the services it provides. Next, I present an analysis of the effective usage of computational trust models in environments where participants exhibit various behaviours, e.g., honest, rational, and malicious. I provide theoretical results showing the conditions under which cooperation emerges when using trust learning models with a given detection accuracy, and how cooperation can be sustained while reducing the cost and accuracy of those models. As another contribution, I design and implement a general prototyping and simulation framework for reputation-based trust systems. The developed simulator can be used for many purposes, such as discovering new trust-related phenomena or evaluating the performance of a trust learning algorithm in complex settings. Two potential applications of computational trust models are then discussed: (1) the selection and ranking of (Web) services based on quality ratings from reputable users, and (2) the use of a trust model to choose reliable delegates in a key-recovery scenario in a distributed online social network. Finally, I identify a number of open issues in building next-generation, open reputation-based trust management systems and propose several future research directions building on the work in this thesis.
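A widely used building block in reputation-based trust systems of the kind simulated above is the beta reputation estimate; this is a generic illustration, not the specific probabilistic model proposed in the thesis.

```python
def beta_trust(positive, negative, prior=1.0):
    """Expected trustworthiness of a partner after `positive` good and
    `negative` bad interactions: the mean of a Beta posterior with a
    symmetric prior. With no evidence the estimate is a neutral 0.5."""
    return (positive + prior) / (positive + negative + 2 * prior)
```

Richer models extend this by weighting ratings by the rater's own reliability and by the context of each interaction, which is where the heterogeneity issues discussed in the thesis arise.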

    Graph-Theoretical Tools for the Analysis of Complex Networks

    We are currently experiencing explosive growth in data collection technology that threatens to dwarf the commensurate gains in computational power predicted by Moore's Law. At the same time, researchers across numerous domain sciences are finding success using network models to represent their data. Graph algorithms are then applied to study the topological structure and tease out latent relationships between variables. Unfortunately, the problems of interest, such as finding dense subgraphs, are often the most difficult to solve from a computational point of view. Together, these issues motivate the need for novel algorithmic techniques in the study of graphs derived from large, complex data sources. This dissertation describes the development and application of graph-theoretic tools for the study of complex networks. Algorithms are presented that leverage efficient, exact solutions to difficult combinatorial problems for epigenetic biomarker detection and for disease subtyping based on gene expression signatures; extensive testing on publicly available data supports the efficacy of these approaches. To address efficient algorithm design, the two core tenets of fixed-parameter tractability, branching and kernelization, are studied in the context of a parallel implementation of vertex cover. Results of testing on a wide variety of graphs derived from both real and synthetic data show that the relative success of kernelization versus branching depends largely on the degree distribution of the graph. Throughout, an emphasis is placed upon the practicality of the resulting implementations, to advance the limits of effective computation.
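The two tenets studied above can be seen together in the textbook fixed-parameter algorithm for vertex cover; this is a sequential sketch, not the dissertation's parallel implementation. Kernelization shrinks the instance with safe reduction rules (a vertex of degree greater than k must be in any size-k cover; an irreducible instance with more than k² edges has no such cover), and branching tries both endpoints of a remaining edge.

```python
def vertex_cover(edges, k):
    """Decide whether the graph given as an edge list has a vertex
    cover of size <= k, via kernelization plus 2-way branching."""
    edges = {frozenset(e) for e in edges if len(set(e)) == 2}
    if k < 0:
        return False
    if not edges:
        return True
    # Kernelization: a vertex of degree > k must be in the cover
    deg = {}
    for e in edges:
        for v in e:
            deg[v] = deg.get(v, 0) + 1
    for v, d in deg.items():
        if d > k:
            return vertex_cover({e for e in edges if v not in e}, k - 1)
    # Kernel size bound: max degree <= k, so a size-k cover covers <= k^2 edges
    if len(edges) > k * k:
        return False
    # Branching: some endpoint of any remaining edge is in the cover
    u, w = next(iter(edges))
    return (vertex_cover({e for e in edges if u not in e}, k - 1)
            or vertex_cover({e for e in edges if w not in e}, k - 1))
```

On a high-degree (e.g. scale-free) graph the kernelization rule fires constantly and does most of the work, while on near-regular graphs the algorithm falls through to branching, which is consistent with the degree-distribution dependence reported above.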

    Off and Online Journalism and Corruption

    This book provides a new theoretical framework of determinants that interact across five hierarchical levels to restrain or produce corruption. The theory suggests a multilevel analysis that tests hypotheses about the relations between journalism and corruption within each level and across levels in international comparative research designs. Corruption, as the abuse of power for private gain, is built into the journalistic, economic, political, and cultural structures of any society and is affected by that society's interaction within the international system. The important questions of how differences in corruption across countries can be explained, what makes corruption more or less prevalent in a particular society, and how press freedom and social media contribute to the fight against corruption remain unanswered. This book represents a significant contribution towards answering these critical questions. It discusses a variety of journalism-corruption experiences that provide a wealth of results and analyses. The cases it examines extend from Cuba to Algeria, India, Saudi Arabia, Sub-Saharan Africa, the Gulf Cooperation Council countries, the Arab world, and Japan. The book's primary contribution is both theoretical and empirical; its details as well as its general theoretical frameworks make it useful for scholars, academics, undergraduate and graduate students, journalists, and policy makers.