
    Open Set Classification of GAN-based Image Manipulations via a ViT-based Hybrid Architecture

    Classification of AI-manipulated content is receiving great attention as a means of distinguishing different types of manipulation. Most methods developed so far fail in the open-set scenario, that is, when the algorithm used for the manipulation is not represented in the training set. In this paper, we focus on the classification of synthetic face generation and manipulation in open-set scenarios and propose a classification method with a rejection option. The proposed method combines Vision Transformers (ViT) with a hybrid approach for simultaneous classification and localization. Feature-map correlation is exploited by the ViT module, while a localization branch is employed as an attention mechanism, forcing the model to learn per-class discriminative features associated with the forgery when the manipulation is performed locally in the image. Rejection is performed by considering several strategies and analyzing the model's output layers. The effectiveness of the proposed method is assessed on the tasks of facial attribute editing classification and GAN attribution.
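The abstract does not specify which rejection strategies are analyzed; a common baseline for open-set classification with a rejection option is to threshold the maximum softmax probability of the output layer. The sketch below illustrates that idea only; the threshold value and function names are illustrative assumptions, not the paper's method.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify_with_rejection(logits, threshold=0.9):
    """Return the predicted class index, or -1 (reject) when the
    model is not confident enough -- i.e., the sample may come from
    a manipulation algorithm unseen during training."""
    probs = softmax(logits)
    top = max(probs)
    if top < threshold:
        return -1  # open-set rejection
    return probs.index(top)
```

A confident prediction such as `classify_with_rejection([10.0, 0.0, 0.0])` returns class 0, while a near-uniform output such as `[1.0, 1.0, 1.0]` is rejected.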

    An Adaptive Integration Architecture for Software Reuse

    The problem of building large, reliable software systems in a controlled, cost-effective way, the so-called software crisis problem, is one of computer science's great challenges. From the very outset of computing as a science, software reuse has been touted as a means to overcome the software crisis. Over three decades later, the software community is still grappling with the problem of building large, reliable software systems in a controlled, cost-effective way; the software crisis problem is alive and well. Today, many computer scientists still regard software reuse as a very powerful vehicle for improving the practice of software engineering. The advantage of amortizing software development cost through reuse continues to be a major objective in the art of building software, even though the tools, methods, languages, and overall understanding of software engineering have changed significantly over the years. Our work is primarily focused on the development of an Adaptive Application Integration Architecture Framework. Without good integration tools and techniques, reuse is difficult and will probably not happen to any significant degree. In the development of the adaptive integration architecture framework, the primary enabling concept is object-oriented design supported by the Unified Modeling Language. The concepts of software architecture, design patterns, and abstract data views are used in a structured and disciplined manner to establish a generic framework. This framework is applied to solve the Enterprise Application Integration (EAI) problem in the telecommunications operations support system (OSS) enterprise marketplace. The proposed adaptive application integration architecture framework facilitates application reusability and flexible business process re-engineering. The architecture addresses the need for modern businesses to continuously redefine themselves to address changing market conditions in an increasingly competitive environment.
We have developed a number of Enterprise Application Integration design patterns to enable the implementation of an EAI framework in a definite and repeatable manner. The design patterns allow for the integration of commercial off-the-shelf applications into a unified enterprise framework, facilitating true application portfolio interoperability. The notion of treating application services as infrastructure services and using business processes to combine them arbitrarily provides a natural way of thinking about adaptable and reusable software systems. We present a mathematical formalism for the specification of design patterns. This specification constitutes an extension of the basic concepts from many-sorted algebra. In particular, the notion of signature is extended to that of a vector consisting of a set of linearly independent signatures. The approach can be used to reason about various properties, including the effort for component reuse, and to facilitate complex large-scale software development by providing the developer with design alternatives and support for automatic program verification.
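The idea of treating applications as interchangeable infrastructure services composed by business processes can be sketched with an Adapter-style pattern. This is a minimal illustration of the concept, not the dissertation's framework; the service and adapter names are hypothetical.

```python
from abc import ABC, abstractmethod

class ApplicationService(ABC):
    """Common interface behind which off-the-shelf applications are
    wrapped, so they become interchangeable infrastructure services."""
    @abstractmethod
    def execute(self, request: dict) -> dict: ...

class ProvisioningAdapter(ApplicationService):
    def execute(self, request):
        return {**request, "provisioned": True}

class BillingAdapter(ApplicationService):
    def execute(self, request):
        return {**request, "billed": True}

class BusinessProcess:
    """Combines services in an arbitrary order, so the process can be
    re-engineered without touching the wrapped applications."""
    def __init__(self, steps):
        self.steps = steps

    def run(self, request):
        for step in self.steps:
            request = step.execute(request)
        return request

# Reordering or swapping steps changes the process, not the adapters.
order_flow = BusinessProcess([ProvisioningAdapter(), BillingAdapter()])
result = order_flow.run({"customer": "acme"})
```

Because each application sits behind the same interface, reuse amounts to recombining existing adapters into new processes.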

    Data ingestion and assimilation in ionospheric models

    Current understanding of ionospheric behaviour has been obtained through different observations, modelling and theoretical studies. Knowledge of the ionospheric electron density distribution and its fluctuations, high-quality data sets, and reliable data ingestion and assimilation techniques are essential for models predicting ionospheric characteristics for radio wave propagation and for other applications, such as satellite tracking and navigation, in order to mitigate ionospheric effects on radio wave propagation. The effect of the ionosphere on Global Navigation Satellite System (GNSS) accuracy is one of the main factors limiting the reliability of GNSS applications.
    In accord with the objectives of the European COST 296 project (Mitigation of Ionospheric Effects on Radio Systems, MIERS), an international collaboration has achieved new results in collecting and processing high-quality ionospheric data, in adapting ionospheric models to enable data ingestion and assimilation, and in validating and improving real-time or near-real-time ionospheric electron density reconstruction techniques.

    TCS: a version control system


    Object Recognition and Parsing with Weak Supervision

    Object recognition is a fundamental problem in computer vision and has attracted a lot of research attention, while object parsing is equally important for many computer vision tasks but has been less studied. With the recent development of deep neural networks, computer vision research has been dominated by deep learning approaches, which require large amounts of training data for a specific task in a specific domain. The cost of collecting rare samples and making "hard" labels is prohibitively high and has limited the development of many important vision studies, including object parsing. This dissertation focuses on object recognition and parsing with weak supervision, which tackles the problem when only a limited amount of data or labels is available for training deep neural networks in the target domain. The goal is to design more advanced computer vision models with enhanced data efficiency during training and increased robustness to out-of-distribution samples at test time. To achieve this goal, I will introduce several strategies, including unsupervised learning of compositional components in deep neural networks, zero/few-shot learning by preserving useful knowledge acquired in pre-training, weakly supervised learning combined with spatial-temporal information in video data, and learning from 3D computer graphics models and synthetic data. Furthermore, I will discuss new findings in our cognitive science projects and explain how part-based representations benefit the development of visual analogical reasoning models. I believe this series of works alleviates the data-hungry problem of deep neural networks and improves computer vision models to behave closer to human intelligence.

    Advanced Code-reuse Attacks: A Novel Framework for JOP

    Return-oriented programming (ROP) is the predominant code-reuse attack, in which short gadgets, borrowed chunks of code ending in a RET instruction, are discovered in binaries. A chain of ROP gadgets placed on the stack can permit control flow to be subverted, allowing for arbitrary computation. Jump-oriented programming (JOP) is a class of code-reuse attack in which indirect jumps and indirect calls, rather than RET instructions, are utilized to subvert the control flow. JOP is important because it can allow important mitigations and protections against ROP to be bypassed, and some protections against JOP are imperfect. This dissertation presents a design science study that proposes and creates the Jump-oriented Programming Reversing Open Cyber Knowledge Expert Tool, the JOP ROCKET. This is a novel framework for jump-oriented programming that can help facilitate binary analysis for exploit development and code-reuse attacks. Manually developing JOP exploits is a time-consuming and tedious process, often fraught with complications, and an exhaustive review of the literature shows there is a need for a mature, sophisticated tool to automate this process and allow users to easily enumerate JOP gadgets for Windows x86 binaries. The JOP ROCKET fulfills this unmet need for a fully featured tool to facilitate JOP gadget discovery. The JOP ROCKET discovers dispatcher gadgets as well as functional gadgets, and it classifies gadgets according to the registers used, registers affected, and operations performed. This allows researchers to be very granular and specific about which gadgets they discover. Additionally, a variety of options are available to modify how gadgets are discovered, expanding or narrowing the quantity of gadgets found. This design science research presents original, significant contributions in the form of an instantiation and five new or heavily reworked and enhanced methods.
Some of these methods pertain directly to JOP, while others could be adapted and utilized in other reverse engineering projects. The JOP ROCKET allows researchers to enumerate JOP gadgets for software easily, allowing a JOP exploit to be constructed more efficiently, whereas before the task would have been a time-consuming process requiring expert knowledge and the use of multiple tools.
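The core of JOP gadget discovery is locating register-indirect branches in a binary: in x86, `jmp reg` encodes as FF E0 through FF E7 and `call reg` as FF D0 through FF D7. The sketch below is a simplified illustration of that first scanning step, not the JOP ROCKET's implementation; a real tool would also walk backward from each hit to delimit and disassemble the gadget body and then classify it.

```python
# x86 32-bit general-purpose registers in ModR/M encoding order.
REGS = ["eax", "ecx", "edx", "ebx", "esp", "ebp", "esi", "edi"]

def find_indirect_branches(code: bytes, base: int = 0):
    """Scan a raw byte buffer for the register-indirect jmp/call
    opcodes that terminate JOP gadgets. Returns (offset, mnemonic)
    pairs; `base` lets callers report virtual addresses."""
    hits = []
    for i in range(len(code) - 1):
        if code[i] != 0xFF:
            continue
        modrm = code[i + 1]
        if 0xE0 <= modrm <= 0xE7:          # FF /4, mod=11: jmp reg
            hits.append((base + i, "jmp " + REGS[modrm - 0xE0]))
        elif 0xD0 <= modrm <= 0xD7:        # FF /2, mod=11: call reg
            hits.append((base + i, "call " + REGS[modrm - 0xD0]))
    return hits

# A NOP followed by `jmp eax` and `call ebx`:
hits = find_indirect_branches(b"\x90\xff\xe0\xff\xd3")
```

Byte scanning like this finds unintended instructions as well, which is exactly what makes gadget enumeration fruitful, and why classification by registers used and affected is needed afterward to make the results usable.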