    Model Extraction and Adversarial Attacks on Neural Networks Using Side-Channel Information

    Artificial neural networks (ANNs) have gained significant popularity in the last decade for solving narrow AI problems in domains such as healthcare, transportation, and defense. As ANNs become more ubiquitous, it is imperative to understand their associated safety, security, and privacy vulnerabilities. Recently, it has been shown that ANNs are susceptible to a number of adversarial evasion attacks: inputs that cause the ANN to make high-confidence misclassifications despite being almost indistinguishable from the data used to train and test the network. This thesis explores the degree to which finding such examples can be aided by side-channel information, specifically the power consumption of hardware implementations of ANNs. A black-box threat scenario is assumed, in which an attacker has access to the ANN hardware’s inputs, outputs, and topology, but the trained model parameters are unknown. The ANN parameters are extracted by training a surrogate model on a dataset derived from querying the black-box (oracle) model. The effect of the surrogate’s training set size on the accuracy of the extracted parameters was examined. The distance between the surrogate and oracle parameters increased with larger training set sizes, while the angle between the two parameter vectors held approximately constant at 90 degrees. However, the transferability of attacks from the surrogate to the oracle improved linearly with training set size at lower attack strengths. Next, a novel method was developed to incorporate power-consumption side-channel information from the oracle model into the surrogate training, based on a Siamese neural network structure and a simplified power model. Comparison between surrogate models trained with and without power consumption data indicated that incorporating the side-channel information increases the fidelity of the model extraction by up to 30%. However, no improvement in the transferability of adversarial examples was found, indicating that the models behave dissimilarly despite being closer in weight space.
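
    The extraction-and-comparison loop the abstract describes lends itself to a compact sketch: query the black-box oracle, fit a surrogate on the responses, then measure the Euclidean distance and angle between the flattened parameter vectors. The toy topology, query budget, and training settings below are illustrative assumptions, not the thesis implementation.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the black-box hardware ANN (the oracle).
# Per the threat model, the attacker knows the topology but not the weights.
oracle = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
for p in oracle.parameters():
    p.requires_grad_(False)

# Surrogate with the same topology, randomly initialised.
surrogate = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

# Build a training set by querying the oracle on random inputs.
n_queries = 2048  # the training-set size whose effect is studied
x = torch.randn(n_queries, 8)
with torch.no_grad():
    y = oracle(x)  # oracle responses used as soft labels

opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(x), y)
    loss.backward()
    opt.step()

# Compare the flattened parameter vectors: Euclidean distance and angle.
w_o = torch.cat([p.flatten() for p in oracle.parameters()])
w_s = torch.cat([p.flatten() for p in surrogate.parameters()])
dist = torch.norm(w_o - w_s)
cos = nn.functional.cosine_similarity(w_o, w_s, dim=0)
angle = torch.rad2deg(torch.acos(cos.clamp(-1.0, 1.0)))
print(f"L2 distance: {dist.item():.3f}, angle: {angle.item():.1f} deg")
```

    In this setup the distance and angle printed at the end are exactly the two extraction-fidelity metrics the thesis tracks as the query budget grows; the Siamese power-model extension would add a second loss term tying the surrogate's simulated power consumption to measured oracle traces.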

    Mercury: An Automated Remote Side-channel Attack to Nvidia Deep Learning Accelerator

    DNN accelerators have been widely deployed in many scenarios to speed up the inference process and reduce energy consumption. One major concern about the use of these accelerators is the confidentiality of the deployed models: model inference execution on an accelerator can leak side-channel information that enables an adversary to precisely recover the model details. Such model extraction attacks not only compromise the intellectual property of DNN models but also facilitate further adversarial attacks. Although previous works have demonstrated a number of side-channel techniques for extracting models from DNN accelerators, they are not practical for two reasons: (1) they target only simplified accelerator implementations, which have limited practicality in the real world, and (2) they require heavy human analysis and domain knowledge. To overcome these limitations, this paper presents Mercury, the first automated remote side-channel attack against an off-the-shelf Nvidia DNN accelerator. The key insight of Mercury is to model the side-channel extraction process as a sequence-to-sequence problem. The adversary leverages a time-to-digital converter (TDC) to remotely collect the power trace of the target model's inference, then uses a learning model to automatically recover the architecture details of the victim model from the power trace without any prior knowledge. The adversary can further use the attention mechanism to localize the leakage points that contribute most to the attack. Evaluation results indicate that Mercury keeps the error rate of model extraction below 1%.
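
    Mercury's central formulation, as the abstract states it, is sequence-to-sequence translation from a power trace to a sequence of layer tokens, with attention weights pointing at the leakage points. A rough sketch of such a model might look like the following; the layer vocabulary, network sizes, and trace length are placeholders assumed for illustration, not details from the paper.

```python
import torch
import torch.nn as nn

# Illustrative layer vocabulary: the "output language" of the seq2seq model.
LAYER_VOCAB = ["<sos>", "<eos>", "conv", "pool", "fc", "relu"]

class TraceToArch(nn.Module):
    """Map a power trace (sequence of TDC samples) to architecture tokens."""
    def __init__(self, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.embed = nn.Embedding(len(LAYER_VOCAB), hidden)
        self.decoder = nn.LSTM(input_size=hidden, hidden_size=hidden, batch_first=True)
        # Attention over encoder states: its weights indicate which trace
        # segments (leakage points) drive each predicted layer token.
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.out = nn.Linear(hidden, len(LAYER_VOCAB))

    def forward(self, trace, tokens):
        enc, state = self.encoder(trace)              # (B, T, H)
        dec, _ = self.decoder(self.embed(tokens), state)
        ctx, weights = self.attn(dec, enc, enc)       # weights localize leakage
        return self.out(ctx + dec), weights

model = TraceToArch()
trace = torch.randn(2, 500, 1)                        # 2 traces, 500 TDC samples
tokens = torch.randint(0, len(LAYER_VOCAB), (2, 6))   # teacher-forced targets
logits, attn_weights = model(trace, tokens)
print(logits.shape, attn_weights.shape)               # (2, 6, 6), (2, 6, 500)
```

    Trained with cross-entropy on (trace, architecture) pairs, a model of this shape needs no hand analysis at attack time, which is the automation the paper claims; inspecting `attn_weights` corresponds to the leakage-localization step.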

    The Evolution of Culture-Climate Interplay in Temporary Multi-Organisations: The Case of Construction Alliancing Projects

    Organisational culture has been a long-standing debate in management research. However, in the field of construction project management it remains relatively under-explored, mainly because of the different organisational context of Temporary Multi-Organisations (TMOs). This research re-explores the notion of organisational culture in construction projects. Drawing on Darwin’s theory of evolution, it goes back to the very beginning, illustrating the exact meaning and dynamics of organisational culture in a construction TMO’s ecosystem. This research views an organisation and the formation of its culture(s) as part of an evolutionary process; thus, a critical realist view of causation is used as the foundation of the research design and methodology. Case study materials are provided from three Alliancing TMOs belonging to two major infrastructure clients in the UK. A designer culture model and institutional theory are drawn upon to complement the basis of analysis for evolution. A qualitative research method is employed through semi-structured interviews and pre- and post-interview meetings; other supporting documentation is also consulted. Three propositions and a postulate are generated and examined against the empirical data. Findings suggest that (i) the TMO’s culture evolves through a set of recursive stages across the project lifecycle, (ii) the culture of the TMO undergoes several lifecycles during one lifespan of the project, and (iii) there is some evidence that culture at the TMO level is learned, rationalised, and routinised at the corporate level. The postulate shows that it is plausible to predict the trajectory of a TMO’s culture across the project lifecycle given a set of organisational features. In practice, findings suggest that hard artifacts alone cannot sustain an established culture throughout the project lifecycle; awareness is needed to press the “refresh” button at times to maintain the desired culture and manage the evolution path.