Towards Understanding and Characterizing the Arbitrage Bot Scam In the Wild
This paper presents the first comprehensive analysis of an emerging
cryptocurrency scam named "arbitrage bot" disseminated on online social
networks. The scam revolves around Decentralized Exchanges (DEX) arbitrage and
aims to lure victims into executing a so-called "bot contract" to steal funds
from them.
To collect instances of the scam at large scale, we developed a fully
automated scam detection system named CryptoScamHunter, which continuously
collects YouTube videos and automatically detects scams. In addition,
CryptoScamHunter downloads the source code of the bot contract from the
provided links and extracts the associated scam cryptocurrency address. By
deploying CryptoScamHunter from Jun. 2022 to Jun. 2023, we detected 10,442
arbitrage bot scam videos published by thousands of YouTube accounts. Our
analysis reveals that different strategies have been used to spread the scam,
including crafting popular accounts, registering spam accounts, and using
obfuscation tricks to hide the real scam address in the bot contracts.
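The abstract does not describe how CryptoScamHunter extracts addresses; purely as a hedged illustration (hypothetical helper names, not the paper's implementation), a naive scan of downloaded Solidity source for hard-coded 20-byte addresses might look like the sketch below. The obfuscation tricks mentioned above (e.g., split or encoded constants) would evade such a simple pattern match and require de-obfuscation first.

```python
import re

# Hypothetical, simplified address extractor: scan Solidity source for
# hard-coded 20-byte hex addresses. Obfuscated scam addresses (split
# strings, encoded constants) will NOT be caught by this pattern alone.
ADDRESS_RE = re.compile(r"0x[0-9a-fA-F]{40}")

def extract_candidate_addresses(solidity_source: str) -> set[str]:
    return {match.lower() for match in ADDRESS_RE.findall(solidity_source)}

# Toy usage
sample = "address payable target = payable(0x1111111111111111111111111111111111111111);"
print(extract_candidate_addresses(sample))
```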
Moreover, from the scam videos, we collected over 800 malicious bot
contracts with source code and extracted 354 scam addresses. By further
expanding the scam addresses with a similar-contract matching technique, we
obtained a total of 1,697 scam addresses. By tracing the transactions of all
scam addresses on the Ethereum mainnet and Binance Smart Chain, we find that
over 25,000 victims have fallen prey to this scam, resulting in a financial
loss of up to 15 million USD.
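The tracing methodology itself is not spelled out in the abstract; as a hedged sketch only (hypothetical RPC endpoint and function name, web3.py v6 API assumed, and ignoring token transfers and internal transactions that real loss accounting would need), direct transfers to known scam addresses could be tallied like this:

```python
from web3 import Web3

def tally_direct_transfers(rpc_url: str, scam_addresses: set[str],
                           start_block: int, end_block: int) -> float:
    """Sum plain ETH/BNB sent directly to scam addresses (in ether).

    Hedged sketch: ERC-20 transfers, internal calls, and victim
    de-duplication are deliberately ignored here.
    """
    w3 = Web3(Web3.HTTPProvider(rpc_url))
    scam = {addr.lower() for addr in scam_addresses}
    total_wei = 0
    for number in range(start_block, end_block + 1):
        block = w3.eth.get_block(number, full_transactions=True)
        for tx in block.transactions:
            if tx["to"] is not None and tx["to"].lower() in scam:
                total_wei += tx["value"]
    return float(Web3.from_wei(total_wei, "ether"))
```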
Overall, our work sheds light on the dissemination tactics and censorship
evasion strategies adopted in the arbitrage bot scam, as well as on the scale
and impact of such a scam on online social networks and blockchain platforms,
emphasizing the urgent need for effective detection and prevention mechanisms
against such fraudulent activity.
Comment: Accepted by ACM SIGMETRICS 202
Utilization-Based Scheduling of Flexible Mixed-Criticality Real-Time Tasks
Mixed-criticality models are an emerging paradigm for the design of real-time
systems because of their significantly improved resource efficiency. However,
formal mixed-criticality models have traditionally been characterized by two
impractical assumptions: once \textit{any} high-criticality task overruns,
\textit{all} low-criticality tasks are suspended and \textit{all other}
high-criticality tasks are assumed to exhibit high-criticality behaviors at the
same time. In this paper, we propose a more realistic mixed-criticality model,
called the flexible mixed-criticality (FMC) model, in which these two issues
are addressed in a combined manner. In this new model, only the overrun task
itself is assumed to exhibit high-criticality behavior, while other
high-criticality tasks remain in the same mode as before. The guaranteed
service levels of low-criticality tasks are gracefully degraded with the
overruns of high-criticality tasks. We derive a utilization-based technique to
analyze the schedulability of this new mixed-criticality model under EDF-VD
scheduling. At runtime, the proposed test condition serves as an important
criterion for dynamic service-level tuning, by means of which the maximum
available execution budget for low-criticality tasks can be determined
directly with minimal overhead while guaranteeing mixed-criticality
schedulability.
Experiments demonstrate the effectiveness of the FMC scheme compared with
state-of-the-art techniques.
Comment: This paper has been submitted to IEEE Transactions on Computers (TC) on Sept-09th-201
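For context, the classical EDF-VD utilization test that this style of analysis builds on (Baruah et al.) can be sketched as follows; the FMC paper derives its own, more general condition, which is not reproduced here. Let $u_{LO}^{LO}$ be the total LO-level utilization of the low-criticality tasks, and $u_{HI}^{LO}$, $u_{HI}^{HI}$ the total LO- and HI-level utilizations of the high-criticality tasks. With a virtual-deadline scaling factor $x$, the task set is schedulable under EDF-VD if

```latex
x \ge \frac{u_{HI}^{LO}}{1 - u_{LO}^{LO}}
\qquad \text{and} \qquad
x \, u_{LO}^{LO} + u_{HI}^{HI} \le 1 .
```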
Scale dependency of anisotropic thermal conductivity of heterogeneous geomaterials
The precise determination of subsurface thermal properties is critical for ground-source heating systems. Geomaterials are inherently heterogeneous, and their thermal conductivity measured in laboratory and field tests often exhibits anisotropic behaviour. However, accurately measuring the thermal response of geomaterials is challenging because the anisotropy varies with the scale of observation. Hence, a numerical method is developed in this work and illustrated using a typical anisotropic geomaterial structure with a porosity of 0.5 as an example. The differences between laboratory measurements and field-test data are discussed to explore the scale effect on anisotropic thermal properties. A series of simulation tests is conducted on specimens of varying dimensions using the finite element method. The results indicate that the thermal properties are substantially sensitive to the observation scale, with the variation decreasing as the sample dimensions increase. A comparison of in situ data with laboratory results shows that the field-scale average thermal conductivity and the corresponding anisotropy ratio are lower than those obtained at small scales, indicating that heterogeneity and anisotropy must be carefully accounted for when selecting thermal properties. In addition, four upscaling schemes based on the averaging method are discussed. This study sheds light on the gap between laboratory results and the field's inherent properties and provides guidelines for upscaling small-scale results to field-scale applications.
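The four averaging-based upscaling schemes are not named in the abstract; as a hedged illustration only (the estimates below are standard ones from the upscaling literature and may differ from the paper's four), cell-wise conductivities from a heterogeneous sample could be averaged as follows:

```python
import numpy as np

def upscaled_conductivities(k_cells) -> dict[str, float]:
    """Standard averaging-based estimates of effective thermal
    conductivity (W/(m*K)) from local cell values.

    Hypothetical helper for illustration: the arithmetic and harmonic
    means bound the effective conductivity of a layered medium (heat
    flow parallel vs. normal to the layers); the geometric mean and the
    square root of the arithmetic-harmonic product are common
    intermediate estimates.
    """
    k = np.asarray(k_cells, dtype=float).ravel()
    arithmetic = float(k.mean())
    harmonic = float(1.0 / np.mean(1.0 / k))
    geometric = float(np.exp(np.mean(np.log(k))))
    combined = float(np.sqrt(arithmetic * harmonic))
    return {"arithmetic": arithmetic, "harmonic": harmonic,
            "geometric": geometric, "combined": combined}

# Toy usage: a sample with two constituents of equal volume fraction
print(upscaled_conductivities([2.5, 2.5, 0.6, 0.6]))
```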
Learning Competitive and Discriminative Reconstructions for Anomaly Detection
Most existing methods for anomaly detection use only positive data to learn
the data distribution, so they usually need a pre-defined threshold at the
detection stage to determine whether a test instance is an outlier.
Unfortunately, a good threshold is vital for performance, and it is really
hard to find an optimal one. In this paper, we take the discriminative
information implied in unlabeled data into consideration and propose a new
method for anomaly detection that can learn the labels of unlabeled data
directly. Our proposed method has an end-to-end architecture with one encoder
and two decoders that are trained to model the inliers' and outliers' data
distributions in a competitive way. This architecture works in a
discriminative manner without suffering from overfitting, and the training
algorithm of our model is adapted from SGD, so it is efficient and scalable
even for large-scale datasets. Empirical studies on 7 datasets, including
KDD99, MNIST, Caltech-256, and ImageNet, show that our model outperforms the
state-of-the-art methods.
Comment: 8 pages
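The abstract only outlines the architecture; purely as a hedged sketch (layer sizes, the mean-squared reconstruction loss, and the winner-takes-the-sample assignment rule below are assumptions, not the paper's exact design), a shared encoder feeding two competing decoders could be set up as follows:

```python
import torch
import torch.nn as nn

class CompetitiveAE(nn.Module):
    """One shared encoder with an inlier decoder and an outlier decoder.

    Hedged sketch: each unlabeled sample is assigned to whichever decoder
    reconstructs it better, so the two decoders specialize competitively.
    """
    def __init__(self, dim: int = 784, latent: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent))
        self.dec_inlier = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                                        nn.Linear(128, dim))
        self.dec_outlier = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                                         nn.Linear(128, dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.dec_inlier(z), self.dec_outlier(z)

def train_step(model, optimizer, x):
    rec_in, rec_out = model(x)
    err_in = ((rec_in - x) ** 2).mean(dim=1)    # per-sample errors
    err_out = ((rec_out - x) ** 2).mean(dim=1)
    win_in = (err_in <= err_out).float()        # competitive assignment
    loss = (win_in * err_in + (1.0 - win_in) * err_out).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = CompetitiveAE()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
batch = torch.rand(64, 784)                     # toy unlabeled batch
print(train_step(model, optimizer, batch))
```

Under this sketch, a test sample would be flagged as an outlier simply when the outlier decoder reconstructs it better than the inlier decoder.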