
    Innovation of a modern model for estimating the volume of money laundering

    Money laundering, or the laundering of dirty money, refers to a process that attempts to make money obtained through illegal or illegitimate activities appear to come from a legal source. Inflation and recession are among the unavoidable consequences of money laundering, which eventually leads to the spread of poverty in society. Consequently, recent years have witnessed a strong tendency to assess this phenomenon and to apply different methods for estimating its quantity and volume alongside other similar variables in the socio-economic context. The aim of this article is to explain ways of modeling economic relations and to introduce a new method for estimating the quantity of dirty money based on mathematical methods, without any particular presumption. In contrast, other methods, by their nature, rest on a variety of premises, which creates many problems, including the likelihood of remarkable errors. The present study combines the Bhattacharya method with arithmetic methods based on Tikhonov's regularization strategy and the inverse problem in order to introduce a new equation for estimating the quantity of dirty money.
    Keywords: Corruption, Poverty, Money Laundering, Tikhonov Regularization Strategy, Inverse Problem
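    To make the Tikhonov regularization strategy named in this abstract concrete: an ill-posed inverse problem Ax = b is stabilized by minimizing ||Ax - b||² + λ||x||², which has the closed-form solution x = (AᵀA + λI)⁻¹Aᵀb. The sketch below demonstrates this generic technique on synthetic data; the operator A, the observations, and the value of λ are illustrative assumptions, not the paper's actual model of dirty-money volume.

    import numpy as np

    # Ill-conditioned forward operator A (an illustrative stand-in for the
    # economic relations the paper models; not the authors' actual system).
    rng = np.random.default_rng(0)
    A = np.vander(np.linspace(0, 1, 20), 5, increasing=True)
    x_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
    b = A @ x_true + 0.01 * rng.standard_normal(20)  # noisy observations

    lam = 1e-3  # regularization strength (assumed value)

    # Tikhonov-regularized solution: x = (A^T A + lam*I)^{-1} A^T b
    n = A.shape[1]
    x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

    # Plain least squares for comparison; it amplifies noise when A is
    # ill-conditioned, which is what the regularization suppresses.
    x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

    print("condition number of A:", np.linalg.cond(A))
    print("regularized estimate:  ", np.round(x_reg, 3))
    print("unregularized estimate:", np.round(x_ls, 3))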

    IPA: Inference Pipeline Adaptation to Achieve High Accuracy and Cost-Efficiency

    Efficiently optimizing multi-model inference pipelines for fast, accurate, and cost-effective inference is a crucial challenge in ML production systems, given their tight end-to-end latency requirements. To simplify the exploration of the vast and intricate trade-off space between accuracy and cost in inference pipelines, providers frequently opt to consider only one of the two. The challenge, however, lies in reconciling the accuracy and cost trade-offs. To address this challenge and efficiently manage model variants in inference pipelines, we present IPA, an online deep-learning Inference Pipeline Adaptation system that efficiently leverages model variants for each deep learning task. Model variants are different versions of pre-trained models for the same deep learning task that vary in resource requirements, latency, and accuracy. IPA dynamically configures batch size, replication, and model variants to optimize accuracy, minimize costs, and meet user-defined latency SLAs using Integer Programming. It supports multi-objective settings for achieving different trade-offs between the accuracy and cost objectives while remaining adaptable to varying workloads and dynamic traffic patterns. Extensive experiments on a Kubernetes implementation with five real-world inference pipelines demonstrate that IPA improves normalized accuracy by up to 35% with a minimal cost increase of less than 5%.
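    Since the abstract says IPA selects variants with Integer Programming, the sketch below shows the general shape of such a formulation: pick exactly one variant per pipeline stage to maximize summed accuracy subject to an end-to-end latency SLA and a cost budget. The stage names, profiled numbers, and use of the PuLP solver are illustrative assumptions, not IPA's actual model.

    import pulp

    # Illustrative variants per pipeline stage: (accuracy, latency_ms, cost).
    # These numbers are assumptions, not figures from the IPA paper.
    stages = {
        "detector":   {"small": (0.70, 20, 1), "base": (0.80, 45, 2), "large": (0.88, 90, 4)},
        "classifier": {"small": (0.75, 15, 1), "base": (0.85, 35, 2), "large": (0.92, 70, 4)},
    }
    LATENCY_SLA_MS = 110
    COST_BUDGET = 5

    prob = pulp.LpProblem("variant_selection", pulp.LpMaximize)
    x = {(s, v): pulp.LpVariable(f"x_{s}_{v}", cat="Binary")
         for s, variants in stages.items() for v in variants}

    # Objective: maximize summed accuracy across stages.
    prob += pulp.lpSum(stages[s][v][0] * x[s, v] for (s, v) in x)

    # Exactly one variant per stage.
    for s, variants in stages.items():
        prob += pulp.lpSum(x[s, v] for v in variants) == 1

    # End-to-end latency SLA and cost budget over the whole pipeline.
    prob += pulp.lpSum(stages[s][v][1] * x[s, v] for (s, v) in x) <= LATENCY_SLA_MS
    prob += pulp.lpSum(stages[s][v][2] * x[s, v] for (s, v) in x) <= COST_BUDGET

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    for (s, v), var in x.items():
        if var.value() > 0.5:
            print(s, "->", v)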

    Reconciling High Accuracy, Cost-Efficiency, and Low Latency of Inference Serving Systems

    The use of machine learning (ML) inference for various applications is growing drastically. ML inference services engage with users directly, requiring fast and accurate responses. Moreover, these services face dynamic workloads of requests, imposing changes in their required computing resources. Failing to right-size computing resources results in either violations of latency service level objectives (SLOs) or wasted computing resources. Adapting to dynamic workloads while considering all the pillars of accuracy, latency, and resource cost is challenging. In response to these challenges, we propose InfAdapter, which proactively selects a set of ML model variants with their resource allocations to meet the latency SLO while maximizing an objective function composed of accuracy and cost. InfAdapter decreases SLO violations and cost by up to 65% and 33%, respectively, compared to a popular industry autoscaler (Kubernetes Vertical Pod Autoscaler).
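    A hedged sketch of the kind of selection this abstract describes, not InfAdapter's actual algorithm: enumerate profiled (variant, resource allocation) pairs, keep those whose latency meets the SLO, and pick the one maximizing a weighted accuracy-minus-cost objective. The profile table, variant names, and weight are illustrative assumptions.

    # Hypothetical (variant, CPU allocation) profiles: latency falls as CPUs rise.
    # All numbers are illustrative assumptions, not InfAdapter's measurements.
    profiles = [
        # (variant, cpus, p99_latency_ms, accuracy, cost_per_hour)
        ("resnet18",  2,  40, 0.70, 0.10),
        ("resnet50",  2,  95, 0.76, 0.10),
        ("resnet50",  4,  55, 0.76, 0.20),
        ("resnet152", 4, 130, 0.78, 0.20),
        ("resnet152", 8,  70, 0.78, 0.40),
    ]
    LATENCY_SLO_MS = 80
    ALPHA = 0.5  # assumed weight trading accuracy against cost

    def objective(accuracy, cost):
        # Weighted combination: reward accuracy, penalize resource cost.
        return accuracy - ALPHA * cost

    # Keep only configurations that satisfy the latency SLO, then maximize.
    feasible = [p for p in profiles if p[2] <= LATENCY_SLO_MS]
    best = max(feasible, key=lambda p: objective(p[3], p[4]))
    print("selected:", best[0], "with", best[1], "CPUs")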