
    Seeing Is Not Always Believing: Invisible Collision Attack and Defence on Pre-Trained Models

    Large-scale pre-trained models (PTMs) such as BERT and GPT have achieved great success in diverse fields. The typical paradigm is to pre-train a large deep learning model on large-scale data sets and then fine-tune it on small task-specific data sets for downstream tasks. Although PTMs have progressed rapidly and seen wide real-world application, they also pose significant attack risks. Existing backdoor attacks and data poisoning methods often rest on the assumption that the attacker has invaded the victims' computers or can access the target data, which is difficult to satisfy in real-world scenarios. In this paper, we propose a novel framework for an invisible attack on PTMs based on an enhanced MD5 collision. The key idea is to generate two equal-size models with the same MD5 checksum by leveraging an MD5 chosen-prefix collision. The two "same" models are then deployed on public websites to induce victims to download the poisoned one. Unlike conventional attacks on deep learning models, this attack is flexible, covert, and model-independent. Additionally, we propose a simple defensive strategy for recognizing the MD5 chosen-prefix collision and provide a theoretical justification of its feasibility. We extensively validate the effectiveness and stealthiness of the proposed attack and defence on different models and data sets.
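    As a concrete illustration of the defence's core idea, the following Python sketch verifies a downloaded model file against both its published MD5 digest and a collision-resistant SHA-256 digest. The helper names and the dual-digest check are illustrative assumptions, not the authors' implementation; two chosen-prefix-colliding files share an MD5 checksum but will differ under SHA-256.

        import hashlib

        def file_digest(path: str, algorithm: str) -> str:
            """Stream a file through the named hash and return its hex digest."""
            h = hashlib.new(algorithm)
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(8192), b""):
                    h.update(chunk)
            return h.hexdigest()

        def verify_model(path: str, expected_md5: str, expected_sha256: str) -> bool:
            # MD5 alone cannot rule out a chosen-prefix collision: the benign
            # and poisoned models are crafted to share it. A second,
            # collision-resistant digest tells the two "same" models apart.
            return (file_digest(path, "md5") == expected_md5
                    and file_digest(path, "sha256") == expected_sha256)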

    Optimal Strategy Imitation Learning from Differential Games

    The ability of a vehicle to navigate safely through any environment relies on its driver having an accurate sense of the future positions and goals of other vehicles on the road. A driver does not navigate around where an agent is, but around where it is going to be. To avoid collisions, autonomous vehicles should be able to derive appropriate controls from estimates of the future behavior of other vehicles, pedestrians, and other intentionally moving agents, in a manner similar to or better than human drivers. Differential game theory provides one approach to generating such a control strategy by modeling two players with opposing goals. Environments faced by autonomous vehicles, such as merging onto a freeway, are complex, but they can be modeled and solved as differential games using discrete approximations; these games yield an optimal control policy for both players and can model adversarial rather than average driving scenarios, making autonomous vehicles safer on the road in more situations. Discrete approximations of solutions to complex games that are computationally tractable and provably asymptotically optimal have been developed, but they may not produce usable results in an online setting. To obtain an efficient, continuous control policy, we use deep imitation learning to model the discrete approximation of a differential game solution. We successfully learn the policies generated for two games of different complexity, a fence-escape game and a merging game, and show that the imitated policy generates control inputs faster than the policy produced by the differential game solver.
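    The imitation step described above amounts to behavioral cloning: fitting a network to (state, control) pairs produced offline by the discrete game solver. A minimal PyTorch sketch follows; the architecture, dimensions, and randomly generated placeholder data are assumptions for illustration, not the paper's actual setup.

        import torch
        import torch.nn as nn

        class ImitationPolicy(nn.Module):
            """Small MLP mapping a game state to a continuous control input."""
            def __init__(self, state_dim: int, control_dim: int, hidden: int = 64):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(state_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, control_dim),
                )

            def forward(self, state: torch.Tensor) -> torch.Tensor:
                return self.net(state)

        def clone_policy(policy, states, controls, epochs=100, lr=1e-3):
            """Regress the solver's controls from states (behavioral cloning)."""
            opt = torch.optim.Adam(policy.parameters(), lr=lr)
            loss_fn = nn.MSELoss()
            for _ in range(epochs):
                opt.zero_grad()
                loss_fn(policy(states), controls).backward()
                opt.step()
            return policy

        # Placeholder tensors standing in for solver-generated demonstrations.
        states, controls = torch.randn(1024, 4), torch.randn(1024, 2)
        policy = clone_policy(ImitationPolicy(4, 2), states, controls)

    Once trained, a forward pass through such a network is a fixed sequence of small matrix multiplications, which is why the imitated policy can produce control inputs much faster than re-solving the discrete game online.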

    ANALYSIS OF COLLISIONS IN URBAN TRAFFIC USING DEEP LEARNING TECHNIQUES

    Road accidents are increasing at a concerning rate in Andhra Pradesh. In 2021, the state experienced a 20 percent upsurge in road accidents, and its unfortunate ranking of eighth in fatalities, with 8,946 lives lost in 22,311 traffic accidents, underscores the urgency of the problem. The significant financial impact on victims and their families stresses the necessity of effective actions to reduce road accidents. This study proposes a framework that collects accident data from the regions of Patamata, Penamaluru, Mylavaram, Krishnalanka, Ibrahimpatnam, and Gandhinagar in Vijayawada (India) from 2019 to 2021. The dataset comprises over 12,000 accident records. Deep learning techniques are applied to classify the severity of road accidents as Fatal, Grievous, or Severe Injuries. The classification procedure leverages advanced neural network models, including the Multilayer Perceptron, Long Short-Term Memory, Recurrent Neural Network, and Gated Recurrent Unit. These models are trained on the collected data to accurately predict the severity of road accidents. The study aims to make important contributions by suggesting proactive measures and policies to reduce the severity and frequency of road accidents in Andhra Pradesh.
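    To make the classification setup concrete, here is a minimal scikit-learn sketch of one of the four models mentioned, a Multilayer Perceptron, trained to predict the three severity classes. The feature encoding and the synthetic stand-in data are assumptions; the real pipeline would use engineered features from the Vijayawada accident records.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.preprocessing import StandardScaler

        # Synthetic stand-in: 12,000 rows of encoded accident attributes with
        # labels 0 = Fatal, 1 = Grievous, 2 = Severe Injuries.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(12000, 10))
        y = rng.integers(0, 3, size=12000)

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.2, stratify=y, random_state=0)

        scaler = StandardScaler().fit(X_train)
        clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300,
                            random_state=0)
        clf.fit(scaler.transform(X_train), y_train)
        print("test accuracy:", clf.score(scaler.transform(X_test), y_test))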

    Information Leakage Attacks and Countermeasures

    The scientific community has been consistently working on the pervasive problem of information leakage, uncovering numerous attack vectors and proposing various countermeasures. Despite these efforts, leakage incidents remain prevalent as the complexity of systems and protocols increases and sophisticated modeling methods become more accessible to adversaries. This work studies how information leakages manifest in, and impact, interconnected systems and their users. We first focus on online communications and investigate leakages in the Transport Layer Security (TLS) protocol. Using modern machine learning models, we show that an eavesdropping adversary can efficiently exploit meta-information (e.g., packet size) not protected by TLS encryption to launch fingerprinting attacks at an unprecedented scale, even under non-optimal conditions. We then turn our attention to ultrasonic communications and discuss their security shortcomings and how adversaries could exploit them to compromise users of anonymity networks (even though such networks aim to offer a greater level of privacy than TLS). Following up on these, we delve into physical-layer leakages that concern a wide array of (networked) systems, such as servers, embedded nodes, Tor relays, and hardware cryptocurrency wallets. We revisit location-based side-channel attacks and develop an exploitation neural network. Our model demonstrates the capabilities of a modern adversary and also provides auditors with an inexpensive tool for detecting such leakages early in the development cycle. Subsequently, we investigate techniques that further minimize the impact of leakages found in production components. Our proposed system design distributes both the custody of secrets and the execution of cryptographic operations across several components, making the exploitation of leaks difficult.
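    The TLS fingerprinting result rests on a simple premise: encrypted traffic still leaks record sizes and directions, and a classifier over those alone can identify the destination. The sketch below illustrates that pipeline with synthetic traces and a random forest; the trace format, dimensions, and model choice are illustrative assumptions rather than the authors' actual methodology.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        # Each trace is a fixed-length vector of signed TLS record sizes
        # (sign encodes direction); the label is the visited site. Real
        # traces would come from captured traffic; these are stand-ins.
        rng = np.random.default_rng(1)
        n_sites, traces_per_site, trace_len = 20, 50, 100
        X = rng.integers(-1500, 1500, size=(n_sites * traces_per_site, trace_len))
        y = np.repeat(np.arange(n_sites), traces_per_site)

        clf = RandomForestClassifier(n_estimators=200, random_state=1)
        print("cv accuracy:", cross_val_score(clf, X, y, cv=5).mean())

    Note that the eavesdropper never touches plaintext: everything the model sees is metadata that TLS leaves exposed.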