
    Of degens and defrauders: Using open-source investigative tools to investigate decentralized finance frauds and money laundering

    Fraud across the decentralized finance (DeFi) ecosystem is growing, with victims losing billions to DeFi scams every year. However, there is a disconnect between the reported value of these scams and associated legal prosecutions. We use open-source investigative tools to (1) investigate potential frauds involving Ethereum tokens using on-chain data and token smart contract analysis, and (2) investigate the ways proceeds from these scams were subsequently laundered. The analysis enabled us to (1) uncover transaction-based evidence of several rug pull and pump-and-dump schemes, and (2) identify their perpetrators' money laundering tactics and cash-out methods. The rug pulls were less sophisticated than anticipated, the money laundering techniques were also rudimentary, and many funds ended up at centralized exchanges. This study demonstrates how open-source investigative tools can extract transaction-based evidence that could be used in a court of law to prosecute DeFi frauds.

    Detecting DeFi Securities Violations from Token Smart Contract Code

    Decentralized Finance (DeFi) is a system of financial products and services built and delivered through smart contracts on various blockchains. In the past year, DeFi has gained popularity and market capitalization. However, it has also been connected to crime, in particular various types of securities violations. The lack of Know Your Customer requirements in DeFi poses challenges to governments trying to mitigate potential offending in this space. This study aims to uncover whether this problem is suited to a machine learning approach, namely, whether we can identify DeFi projects potentially engaging in securities violations based on their tokens' smart contract code. We adapt prior work on detecting specific types of securities violations across Ethereum, building a random forest classifier based on features extracted from DeFi projects' tokens' smart contract code. The final classifier achieves a 98.6% F1-score. From further feature-level analysis, we find that a single feature makes this a highly detectable problem. The high reliance on a single feature means that, at this stage, a complex machine learning model may not be necessary or desirable for this problem; however, this may change as DeFi securities violations become more sophisticated. Another contribution of our study is a new dataset comprising (a) a verified ground-truth dataset of tokens involved in securities violations and (b) a set of legitimate tokens from a reputable DeFi aggregator. This paper further discusses the potential use of a model like ours by prosecutors in enforcement efforts and connects it to the wider legal context.
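    The reported dominance of a single smart-contract feature suggests that even a one-feature threshold rule can separate the classes. The sketch below illustrates this idea with invented toy feature values, labels, and threshold (none of these come from the paper's dataset):

```python
# Hypothetical illustration: when one feature dominates, a single-feature
# threshold rule rivals a full ensemble. All data below is invented.

def threshold_classify(feature_value, threshold):
    """Flag a token as a potential securities violation (1) or legitimate (0)."""
    return 1 if feature_value >= threshold else 0

def f1(y_true, y_pred):
    """Compute the F1-score from parallel lists of true and predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Invented toy data: the single feature separates the classes cleanly.
features = [0.9, 0.8, 0.85, 0.1, 0.2, 0.15, 0.05, 0.95]
labels = [1, 1, 1, 0, 0, 0, 0, 1]
preds = [threshold_classify(x, 0.5) for x in features]
print(f1(labels, preds))  # 1.0 on this toy data
```

    On data like this a full random forest adds little over the single threshold, which matches the abstract's point that a complex model may not yet be necessary.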

    Large Language Models in Cryptocurrency Securities Cases: Can ChatGPT Replace Lawyers?

    Large Language Models (LLMs) could enhance access to the legal system. However, empirical research on their effectiveness in conducting legal tasks is scant. We study securities cases involving cryptocurrencies as one of numerous contexts where AI could support the legal process, examining LLMs' legal reasoning and drafting capabilities. We ask (a) whether an LLM can accurately determine which laws are potentially being violated from a fact pattern, and (b) whether there is a difference in juror decision-making based on complaints written by a lawyer compared to an LLM. First, we feed fact patterns from real-life cases to GPT-3.5 and evaluate its ability to determine correct potential violations from the scenario and exclude spurious violations. Second, we have mock jurors assess complaints written by the LLM and by lawyers. GPT-3.5's legal reasoning skills proved weak, though we expect improvement in future models, particularly given that the violations it suggested tended to be correct (it merely missed additional, correct violations). GPT-3.5 performed better at legal drafting, and jurors' decisions were not statistically significantly associated with the author of the document upon which they based their decisions. Because LLMs cannot yet satisfactorily conduct legal reasoning tasks, they would be unable to replace lawyers at this stage. However, their drafting skills (though, perhaps, still inferior to lawyers') could provide access to justice for more individuals by reducing the cost of legal services. Our research is the first to systematically study LLMs' legal drafting and reasoning capabilities in litigation, as well as in securities law and cryptocurrency-related misconduct.

    Production networks in the cultural and creative sector: case studies from the publishing industry (CICERONE report D2.8)

    The CICERONE project investigates cultural and creative industries through case study research, with a focus on production networks. This report, part of WP2, examines the publishing industry within this framework. It aims to understand the industry's hidden aspects, address statistical issues in its measurement, and explore the industry's transformation and integration of cultural and economic values. The report provides an overview of the production network, explores statistical challenges, and presents qualitative analyses of two case studies. It concludes by highlighting the potential of the Global Production Network (GPN) approach for analyzing, researching, policymaking, and intervening in the European publishing network.

    Parallel integrated frame synchronizer chip

    A parallel integrated frame synchronizer implements a sequential pipeline process wherein serial data, in the form of telemetry data or weather satellite data, enters the synchronizer by means of a front-end subsystem and passes to a parallel correlator subsystem or a weather satellite data processing subsystem. When in CCSDS mode, data from the parallel correlator subsystem passes through a window subsystem, then to a data alignment subsystem, and then to a bit transition density (BTD)/cyclical redundancy check (CRC) decoding subsystem. Data from the BTD/CRC decoding subsystem or from the weather satellite data processing subsystem is then fed to an output subsystem, where it is output from a data output port.
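    The correlator stage of such a pipeline can be sketched in software. The serial Python version below assumes the standard 32-bit CCSDS attached sync marker (0x1ACFFC1D) and a configurable bit-error tolerance; the hardware described above evaluates candidate offsets in parallel, which this sketch does not attempt to model:

```python
# Minimal software sketch of a frame-sync correlator: slide the 32-bit CCSDS
# attached sync marker across a bitstream and accept offsets whose Hamming
# distance to the marker is within a bit-error tolerance.

CCSDS_ASM = 0x1ACFFC1D  # standard CCSDS attached sync marker
ASM_BITS = 32

def find_sync(bits, tolerance=0):
    """Return bit offsets in `bits` (a '0'/'1' string) where the sync
    marker matches within `tolerance` bit errors."""
    hits = []
    for i in range(len(bits) - ASM_BITS + 1):
        window = int(bits[i:i + ASM_BITS], 2)
        errors = bin(window ^ CCSDS_ASM).count("1")  # Hamming distance
        if errors <= tolerance:
            hits.append(i)
    return hits

# Toy stream: 5 leading bits, the marker, then a few payload bits.
stream = "10110" + format(CCSDS_ASM, "032b") + "1100"
print(find_sync(stream))  # [5]
```

    Allowing a nonzero tolerance lets the synchronizer lock onto frames even when the channel has flipped a few marker bits, at the cost of a higher false-lock probability.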

    On the dimension of subspaces with bounded Schmidt rank

    We consider the question of how large a subspace of a given bipartite quantum system can be when the subspace contains only highly entangled states. This is motivated in part by results of Hayden et al., which show that in large d × d-dimensional systems there exist random subspaces of dimension almost d^2, all of whose states have entropy of entanglement at least log d - O(1). It is also related to results due to Parthasarathy on the dimension of completely entangled subspaces, which have connections with the construction of unextendible product bases. Here we take as entanglement measure the Schmidt rank, and determine, for every pair of local dimensions dA and dB, and every r, the largest dimension of a subspace consisting only of entangled states of Schmidt rank r or larger. This exact answer is a significant improvement on the best bounds that can be obtained using random subspace techniques. We also determine the converse: the largest dimension of a subspace with an upper bound on the Schmidt rank. Finally, we discuss the question of subspaces containing only states with Schmidt rank exactly r.
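    For reference, the exact answer mentioned above admits a simple closed form; the expression below is a from-memory summary of the paper's result and should be checked against the paper itself:

```latex
% Largest dimension of a subspace of C^{d_A} \otimes C^{d_B} all of whose
% nonzero states have Schmidt rank at least r:
\[
  \dim_{\max} = (d_A - r + 1)(d_B - r + 1)
\]
```

    Note the sanity checks: for r = 1 (no entanglement requirement) this gives the full dimension dA dB, and for r = min(dA, dB) it reduces to the smaller of the two local dimensions' complementary count.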

    A second generation 50 Mbps VLSI level zero processing system prototype

    Level Zero Processing (LZP) generally refers to telemetry data processing functions performed at ground facilities to remove all communication artifacts from instrument data. These functions typically include frame synchronization, error detection and correction, packet reassembly and sorting, playback reversal, merging, time-ordering, overlap deletion, and production of annotated data sets. The Data Systems Technologies Division (DSTD) at Goddard Space Flight Center (GSFC) has been developing high-performance Very Large Scale Integration Level Zero Processing Systems (VLSI LZPS) since 1989. The first VLSI LZPS prototype demonstrated 20 megabits per second (Mbps) capability in 1992. With a new generation of high-density Application-Specific Integrated Circuits (ASICs) and a Mass Storage System (MSS) based on the High-Performance Parallel Interface (HiPPI), a second prototype has been built that achieves full 50 Mbps performance. This paper describes the second-generation LZPS prototype based upon VLSI technologies.

    VLSI technology for smaller, cheaper, faster return link systems

    Very Large Scale Integration (VLSI) Application-Specific Integrated Circuit (ASIC) technology has enabled substantially smaller, cheaper, and more capable telemetry data systems. However, the rapid growth in available ASIC fabrication densities has far outpaced the application of this technology to telemetry systems. Available densities have grown by well over an order of magnitude since NASA's Goddard Space Flight Center (GSFC) first began developing ASICs for ground telemetry systems in 1985. To take advantage of these higher integration levels, a new generation of ASICs for return link telemetry processing is under development. These new submicron devices are designed to further reduce the cost and size of NASA return link processing systems while improving performance. This paper describes these highly integrated processing components.

    In-situ high-pressure powder X-ray diffraction study of α-zirconium phosphate

    The high-pressure structural chemistry of α-zirconium phosphate, α-Zr(HPO4)2·H2O, was studied using in-situ high-pressure diffraction and synchrotron radiation. The layered phosphate was studied under both hydrostatic and non-hydrostatic conditions, and Rietveld refinement was carried out on the resulting diffraction patterns. It was found that under hydrostatic conditions no uptake of additional water molecules from the pressure-transmitting medium occurred, contrary to what had previously been observed with some zeolite materials and a layered titanium phosphate. Under hydrostatic conditions the sample remained crystalline up to 10 GPa, but under non-hydrostatic conditions the sample amorphized between 7.3 and 9.5 GPa. The calculated bulk modulus, K0 = 15.2 GPa, showed the material to be very compressible, with the weak Zr–O–P linkages in the structure accounting for this compressibility.
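    For context, the quoted K0 is the zero-pressure isothermal bulk modulus, conventionally obtained by fitting the measured pressure-volume data to an equation of state (the particular equation of state used is not stated in this abstract); its defining relation is:

```latex
% Zero-pressure isothermal bulk modulus from P-V data:
\[
  K_0 = -V_0 \left.\frac{\partial P}{\partial V}\right|_{P=0}
\]
```

    A small K0 (15.2 GPa here) means a large fractional volume change per unit of applied pressure, i.e. a highly compressible solid.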

    Timing of Colonization of Caries-Producing Bacteria: An Approach Based on Studying Monozygotic Twin Pairs

    Findings are presented from a prospective cohort study of the timing of primary tooth emergence and the timing of oral colonization by Streptococcus mutans (S. mutans) in Australian twins. The paper focuses on differences in colonization timing in genetically identical monozygotic (MZ) twins. Timing of tooth emergence was based on parental report. Colonization timing of S. mutans was established by plating samples of plaque and saliva on selective media at 3-monthly intervals and assessing colony morphology. In 25% of individuals, colonization occurred prior to emergence of the first tooth. A significant proportion of MZ pairs (21%) was discordant for colonization occurring before or after first tooth emergence, suggesting a role of environmental or epigenetic factors in the timing of tooth emergence, colonization by S. mutans, or both. These findings and further application of the MZ co-twin model should assist in the development of strategies to prevent or delay infection with S. mutans in children.