    Open Set Logo Detection and Retrieval

    Current logo retrieval research focuses on closed-set scenarios. We argue that the logo domain is too large for this strategy and requires an open-set approach. To foster research in this direction, a large-scale logo dataset, called Logos in the Wild, is collected and released to the public. A typical open-set logo retrieval application is, for example, assessing the effectiveness of advertisement in sports event broadcasts. Given a query sample in the shape of a logo image, the task is to find all further occurrences of this logo in a set of images or videos. Common logo retrieval approaches are currently unsuitable for this task because of their closed-world assumption. This work therefore proposes an open-set logo retrieval method that allows searching for previously unseen logos from a single query sample. A two-stage concept with separate logo detection and comparison is proposed, where both modules are based on task-specific CNNs. When trained on the Logos in the Wild data, significant performance improvements are observed, especially compared with state-of-the-art closed-set approaches. Comment: accepted at VISAPP 2018.
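    The two-stage concept lends itself to a simple retrieval loop: a class-agnostic detector proposes candidate logo regions, and a comparison network embeds each candidate so it can be matched against the single query sample. Below is a minimal sketch of that idea, assuming PyTorch; `detector` and `embedder` are hypothetical stand-ins for the two task-specific CNNs, not the authors' released models.

        import torch
        import torch.nn.functional as F

        def retrieve_logo(query_img, gallery_imgs, detector, embedder, threshold=0.7):
            """Find all occurrences of the query logo in a set of images."""
            # Stage-2 network maps image crops to L2-normalised embedding vectors.
            query_emb = F.normalize(embedder(query_img.unsqueeze(0)), dim=1)

            hits = []
            for img_idx, img in enumerate(gallery_imgs):
                # Stage 1: class-agnostic detection proposes candidate logo boxes
                # and returns the corresponding crops as an (N, C, H, W) batch.
                boxes, crops = detector(img)
                if len(boxes) == 0:
                    continue
                # Stage 2: compare every candidate against the single query sample.
                cand_embs = F.normalize(embedder(crops), dim=1)
                sims = (cand_embs @ query_emb.T).squeeze(1)  # cosine similarities
                for box, sim in zip(boxes, sims.tolist()):
                    if sim >= threshold:
                        hits.append((img_idx, box, sim))
            return hits

    Because matching happens in embedding space rather than against a fixed class list, new logos can be queried without retraining either network, which is what makes the approach open set.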

    Adversarial machine learning for cyber security

    This master's thesis aims to leverage the state of the art and the tools developed in Adversarial Machine Learning (AML) and related research branches to strengthen Machine Learning (ML) models used in cyber security. First, it collects, organizes, and summarizes the most recent and promising state-of-the-art techniques in AML, a research branch that is still in an unstable state, with a great diversity of proposals that are difficult to compare and that evolve rapidly, only to be quickly replaced by attacks or defenses with greater potential. This summary matters because the AML literature is still far from producing defensive techniques that effectively protect an ML model from all possible attacks, so analyzing them in detail and with clear criteria is a prerequisite for applying them in practice. It is also useful for identifying biases in the state of the art concerning how attack and defense effectiveness is measured, which can be addressed by proposing methodologies and metrics to mitigate them.

    Furthermore, it is inappropriate to analyze AML in isolation: the robustness of an ML model to adversarial attacks is closely related to its generalization on in-distribution cases, its robustness to out-of-distribution cases, and the possibility of overinterpretation, in which the model exploits spurious (but statistically valid) patterns that give a false sense of high performance. The thesis therefore proposes a methodology that first evaluates a model's exposure to these issues and then improves the model in a progressive order of priorities at each stage, so as to guarantee satisfactory overall robustness.

    Based on this methodology, two case studies are explored in depth to evaluate their robustness to adversarial attacks, to attack them in order to gain insight into their strengths and weaknesses, and finally to propose improvements. Depending on the type of problem and its assumptions, the work performs exploratory analysis, applies AML attacks and details their implications, proposes improvements and implements defenses such as Adversarial Training, and finally proposes a methodology for correctly evaluating the effectiveness of a defense while avoiding the biases of the state of the art. For each case study, efficient adversarial attacks can be created and the strengths of each model analyzed; in the second case study, the adversarial robustness of a convolutional neural network classifier is increased using Adversarial Training. This has further positive effects on the model, such as a better representation of the data, easier implementation of techniques that detect adversarial cases through anomaly analysis, and insights about its performance that help reinforce the model from other viewpoints.
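    Adversarial Training, the defense highlighted above, augments the training objective with adversarial examples generated on the fly. The following is a minimal sketch of the technique, assuming PyTorch; the FGSM attack (Goodfellow et al., 2015) stands in for whichever attacks the thesis actually evaluates, and `model`, `loader`, and `epsilon` are hypothetical placeholders.

        import torch
        import torch.nn.functional as F

        def fgsm(model, x, y, epsilon):
            # Fast Gradient Sign Method: one signed-gradient step on the input,
            # clipped back to the valid pixel range [0, 1].
            x_adv = x.clone().detach().requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad, = torch.autograd.grad(loss, x_adv)
            return (x_adv + epsilon * grad.sign()).clamp(0.0, 1.0).detach()

        def adversarial_training_epoch(model, loader, optimizer, epsilon=8 / 255):
            model.train()
            for x, y in loader:
                # Generate adversarial counterparts of the current batch.
                x_adv = fgsm(model, x, y, epsilon)
                optimizer.zero_grad()
                # Mixed objective: fit clean and adversarial examples jointly,
                # trading some clean accuracy for adversarial robustness.
                loss = 0.5 * (F.cross_entropy(model(x), y)
                              + F.cross_entropy(model(x_adv), y))
                loss.backward()
                optimizer.step()

    A side effect noted in the abstract is that the robust model learns a better representation of the data, which in turn makes it easier to flag adversarial cases through anomaly analysis on that representation.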