79 research outputs found

    High Per Parameter: A Large-Scale Study of Hyperparameter Tuning for Machine Learning Algorithms

    Hyperparameters in machine learning (ML) have received a fair amount of attention, and hyperparameter tuning has come to be regarded as an important step in the ML pipeline. But just how useful is said tuning? While smaller-scale experiments have been conducted previously, herein we carry out a large-scale investigation, specifically one involving 26 ML algorithms, 250 datasets (regression and both binary and multinomial classification), 6 score metrics, and 28,857,600 algorithm runs. Analyzing the results, we conclude that for many ML algorithms we should not expect considerable gains from hyperparameter tuning on average; however, there may be some datasets for which default hyperparameters perform poorly, and this is truer for some algorithms than for others. By defining a single hp_score value, which combines an algorithm's accumulated statistics, we are able to rank the 26 ML algorithms from those expected to gain the most from hyperparameter tuning to those expected to gain the least. We believe such a study may serve ML practitioners at large.
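
    The paper's hp_score combines an algorithm's accumulated tuning statistics into a single value used for ranking; its exact definition is not reproduced in the abstract. The sketch below is a hypothetical stand-in that ranks algorithms by their mean tuned-minus-default improvement across datasets, simply to make the ranking idea concrete.

```python
# Hypothetical sketch: ranking algorithms by expected gain from hyperparameter
# tuning. The aggregation below (mean tuned-minus-default improvement across
# datasets) is an illustrative stand-in for the paper's hp_score, whose exact
# definition is not given in the abstract.
from statistics import mean

# results[algorithm][dataset] = (default_score, best_tuned_score); toy numbers.
results = {
    "random_forest": {"ds1": (0.90, 0.91), "ds2": (0.72, 0.73)},
    "svm":           {"ds1": (0.80, 0.88), "ds2": (0.60, 0.71)},
    "knn":           {"ds1": (0.85, 0.86), "ds2": (0.70, 0.74)},
}

def hp_gain(per_dataset):
    """Mean improvement of tuned over default scores across datasets."""
    return mean(tuned - default for default, tuned in per_dataset.values())

ranking = sorted(results, key=lambda alg: hp_gain(results[alg]), reverse=True)
for alg in ranking:
    print(f"{alg}: mean tuning gain = {hp_gain(results[alg]):.3f}")
```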

    I See Dead People: Gray-Box Adversarial Attack on Image-To-Text Models

    Modern image-to-text systems typically adopt the encoder-decoder framework, which comprises two main components: an image encoder, responsible for extracting image features, and a transformer-based decoder, used for generating captions. Taking inspiration from the analysis of neural networks' robustness against adversarial perturbations, we propose a novel gray-box algorithm for creating adversarial examples in image-to-text models. Unlike image classification tasks, which have a finite set of class labels, finding visually similar adversarial examples in an image-to-text task poses greater challenges because the captioning system allows for a virtually infinite space of possible captions. In this paper, we present a gray-box adversarial attack on image-to-text models, both untargeted and targeted. We formulate the process of discovering adversarial perturbations as an optimization problem that uses only the image-encoder component, meaning the proposed attack is language-model agnostic. Through experiments conducted on the ViT-GPT2 model, the most-used image-to-text model on Hugging Face, and the Flickr30k dataset, we demonstrate that our proposed attack successfully generates visually similar adversarial examples, both with untargeted and targeted captions. Notably, our attack operates in a gray-box manner, requiring no knowledge about the decoder module. We also show that our attacks fool the popular open-source platform Hugging Face.
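
    Since the abstract states that the attack uses only the image encoder, the sketch below illustrates that idea: optimize a small perturbation so the encoder embedding of the perturbed image approaches a target embedding, with a penalty keeping the perturbation small. The toy encoder, loss weights, and optimizer settings are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch of an encoder-only (gray-box) adversarial perturbation,
# assuming a PyTorch image encoder; the toy encoder, loss weights, and
# optimizer settings are illustrative, not the paper's exact formulation.
import torch
import torch.nn as nn

encoder = nn.Sequential(              # stand-in for a real image encoder
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 16),
)
encoder.eval()

x = torch.rand(1, 3, 224, 224)        # clean image in [0, 1]
with torch.no_grad():
    target_emb = encoder(torch.rand(1, 3, 224, 224))  # embedding of a target image

delta = torch.zeros_like(x, requires_grad=True)       # adversarial perturbation
opt = torch.optim.Adam([delta], lr=1e-2)

for _ in range(200):
    adv = (x + delta).clamp(0, 1)
    emb = encoder(adv)
    # Targeted objective: match the target embedding while keeping delta small.
    loss = torch.norm(emb - target_emb) + 10.0 * delta.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

adv_image = (x + delta).detach().clamp(0, 1)  # image to feed to any captioning decoder
```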

    A Simple Cellular Automaton that Solves the Density and Ordering Problems

    Cellular automata (CA) are discrete, dynamical systems that perform computations in a distributed fashion on a spatially extended grid. The dynamical behavior of a CA may give rise to emergent computation, referring to the appearance of global information processing capabilities that are not explicitly represented in the system's elementary components nor in their local interconnections. As such, CAs offer an austere yet versatile model for studying natural phenomena, as well as a powerful paradigm for attaining fine-grained, massively parallel computation. An example of such emergent computation is to use a CA to determine the global density of bits in an initial state configuration. This problem, known as density classification, has been studied quite intensively over the past few years. In this short communication we describe two previous versions of the problem along with their CA solutions, and then go on to show that there exists yet a third version, which admits a simple solution.
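
    To make the setting concrete, the sketch below simulates a one-dimensional, two-state CA with periodic boundary conditions and runs the well-known two-rule density-classification scheme (traffic rule 184 followed by local majority rule 232). The rules and generous step budgets are illustrative of how density classification is typically demonstrated; they are not necessarily the specific construction introduced in this communication.

```python
# A minimal sketch of a one-dimensional, two-state CA with periodic boundary
# conditions. The two-phase schedule (traffic rule 184, then majority rule 232)
# follows the well-known two-rule density-classification scheme; the rule numbers
# and generous step budgets are illustrative, not this paper's construction.
import random

def step(config, rule):
    """One synchronous update of an elementary CA given its Wolfram rule number."""
    n = len(config)
    table = [(rule >> i) & 1 for i in range(8)]
    return [table[(config[(i - 1) % n] << 2) | (config[i] << 1) | config[(i + 1) % n]]
            for i in range(n)]

n = 149                                            # odd ring size avoids ties
config = [random.randint(0, 1) for _ in range(n)]
true_majority = int(sum(config) > n // 2)

for _ in range(n):                                 # phase 1: traffic rule
    config = step(config, 184)
for _ in range(n):                                 # phase 2: local majority rule
    config = step(config, 232)

print("true majority:", true_majority,
      "| CA verdict:", config[0],
      "| converged to a uniform state:", len(set(config)) == 1)
```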

    Open Sesame! Universal Black Box Jailbreaking of Large Language Models

    Large language models (LLMs), designed to provide helpful and safe responses, often rely on alignment techniques to align with user intent and social guidelines. Unfortunately, this alignment can be exploited by malicious actors seeking to manipulate an LLM's outputs for unintended purposes. In this paper we introduce a novel approach that employs a genetic algorithm (GA) to manipulate LLMs when model architecture and parameters are inaccessible. The GA attack works by optimizing a universal adversarial prompt that -- when combined with a user's query -- disrupts the attacked model's alignment, resulting in unintended and potentially harmful outputs. Our novel approach systematically reveals a model's limitations and vulnerabilities by uncovering instances where its responses deviate from expected behavior. Through extensive experiments we demonstrate the efficacy of our technique, thus contributing to the ongoing discussion on responsible AI development by providing a diagnostic tool for evaluating and enhancing alignment of LLMs with human intent. To our knowledge, this is the first automated universal black-box jailbreak attack.
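
    The sketch below shows the general shape of such a black-box genetic-algorithm loop: a population of candidate suffixes undergoes selection, crossover, and mutation against a fitness signal. The fitness function here is a dummy placeholder (the real attack scores the attacked model's responses to query-plus-suffix prompts), and all parameters and names are illustrative assumptions rather than the paper's settings.

```python
# Minimal sketch of a black-box GA loop in the spirit described above, assuming
# an external scoring function; `attack_fitness` is a stand-in for scoring the
# attacked model's responses, and all parameters are illustrative.
import random
import string

VOCAB = string.ascii_lowercase + " "    # toy token alphabet
SUFFIX_LEN, POP_SIZE, GENERATIONS = 20, 30, 50

def attack_fitness(suffix: str) -> float:
    """Placeholder: in the real attack this would query the black-box LLM with
    (user query + suffix) and score how far the response drifts from aligned
    behavior. Here it returns a dummy score so the loop runs end to end."""
    return sum(ord(c) for c in suffix) % 100 / 100.0

def mutate(suffix: str, rate: float = 0.1) -> str:
    return "".join(random.choice(VOCAB) if random.random() < rate else c
                   for c in suffix)

def crossover(a: str, b: str) -> str:
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = ["".join(random.choices(VOCAB, k=SUFFIX_LEN)) for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    scored = sorted(population, key=attack_fitness, reverse=True)
    parents = scored[:POP_SIZE // 2]                      # truncation selection
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=attack_fitness)
print("best universal suffix candidate:", best)
```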

    Foiling Explanations in Deep Neural Networks

    Deep neural networks (DNNs) have greatly impacted numerous fields over the past decade. Yet despite exhibiting superb performance over many problems, their black-box nature still poses a significant challenge with respect to explainability. Indeed, explainable artificial intelligence (XAI) is crucial in several fields, wherein the answer alone -- sans a reasoning of how said answer was derived -- is of little value. This paper uncovers a troubling property of explanation methods for image-based DNNs: by making small visual changes to the input image -- hardly influencing the network's output -- we demonstrate how explanations may be arbitrarily manipulated through the use of evolution strategies. Our novel algorithm, AttaXAI, a model-agnostic, adversarial attack on XAI algorithms, only requires access to the output logits of a classifier and to the explanation map; these weak assumptions render our approach highly useful where real-world models and data are concerned. We compare our method's performance on two benchmark datasets -- CIFAR100 and ImageNet -- using four different pretrained deep-learning models: VGG16-CIFAR100, VGG16-ImageNet, MobileNet-CIFAR100, and Inception-v3-ImageNet. We find that the XAI methods can be manipulated without the use of gradients or other model internals. Our novel algorithm successfully manipulates an image in a manner imperceptible to the human eye, such that the XAI method outputs a specific explanation map. To our knowledge, this is the first such method in a black-box setting, and we believe it has significant value where explainability is desired, required, or legally mandatory.
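
    The abstract describes a gradient-free attack that needs only the classifier's output logits and the explanation map. The sketch below captures that query-only setting with a simple (1+lambda) evolution strategy: evolve a perturbation whose (stand-in) explanation map approaches a target map while the (stand-in) logits stay close to the original. The model, explanation function, and ES settings are illustrative assumptions, not AttaXAI itself.

```python
# Minimal sketch of a gradient-free evolution strategy in the spirit described
# above: evolve a small image perturbation so a (stand-in) explanation map moves
# toward a target map while the (stand-in) logits stay close to the original.
# The model, explanation function, and ES settings are all illustrative.
import numpy as np

rng = np.random.default_rng(0)
H = W = 32

def logits(img):                       # stand-in for a classifier's output logits
    return np.array([img.mean(), img.std()])

def explanation(img):                  # stand-in explanation map (e.g. a saliency map)
    return np.abs(img - img.mean())

x = rng.random((H, W))
target_map = np.zeros((H, W)); target_map[8:16, 8:16] = 1.0   # desired explanation
base_logits = logits(x)

def fitness(delta):
    adv = np.clip(x + delta, 0.0, 1.0)
    map_loss = np.mean((explanation(adv) - target_map) ** 2)
    logit_drift = np.mean((logits(adv) - base_logits) ** 2)
    return map_loss + 10.0 * logit_drift        # lower is better

# Simple (1+lambda) evolution strategy with Gaussian mutations.
parent = np.zeros((H, W)); sigma, LAMBDA = 0.05, 16
for _ in range(200):
    offspring = [parent + sigma * rng.standard_normal((H, W)) for _ in range(LAMBDA)]
    best = min(offspring, key=fitness)
    if fitness(best) < fitness(parent):
        parent = best

print("final fitness:", fitness(parent))
```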

    EC-KitY: Evolutionary Computation Tool Kit in Python with Seamless Machine Learning Integration

    EC-KitY is a comprehensive Python library for doing evolutionary computation (EC), licensed under the BSD 3-Clause License, and compatible with scikit-learn. Designed with modern software engineering and machine learning integration in mind, EC-KitY can support all popular EC paradigms, including genetic algorithms, genetic programming, coevolution, evolutionary multi-objective optimization, and more. This paper provides an overview of the package, including the ease of setting up an EC experiment, the architecture, the main features, and a comparison with other libraries. (6 pages, 1 figure, 1 table; published in Elsevier's SoftwareX.)
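
    As a rough illustration of what scikit-learn compatibility buys in this context, the sketch below wraps a toy evolutionary search (it merely evolves a constant predictor) in a standard fit/predict estimator so it can slot into ordinary scikit-learn workflows. The class name and internals are invented for illustration and do not use EC-KitY's actual API.

```python
# Illustrative sketch of scikit-learn compatibility for an EC tool: an estimator
# exposing fit/predict so it fits standard sklearn workflows. The class below is
# a toy (it evolves a constant predictor) and does NOT use EC-KitY's actual API.
import numpy as np
from sklearn.base import BaseEstimator, RegressorMixin

class ToyEvolutionaryRegressor(BaseEstimator, RegressorMixin):
    def __init__(self, generations=50, pop_size=20, sigma=0.5, random_state=0):
        self.generations = generations
        self.pop_size = pop_size
        self.sigma = sigma
        self.random_state = random_state

    def fit(self, X, y):
        rng = np.random.default_rng(self.random_state)
        pop = rng.normal(np.mean(y), self.sigma, self.pop_size)   # candidate constants
        for _ in range(self.generations):
            fitness = [np.mean((y - c) ** 2) for c in pop]        # lower is better
            best = pop[int(np.argmin(fitness))]
            pop = best + self.sigma * rng.standard_normal(self.pop_size)
            pop[0] = best                                          # elitism
        self.best_ = best
        return self

    def predict(self, X):
        return np.full(len(X), self.best_)

# Usage: behaves like any other sklearn estimator.
X = np.zeros((100, 3))
y = np.random.default_rng(1).normal(2.0, 1.0, 100)
print(ToyEvolutionaryRegressor().fit(X, y).predict(X)[:3])
```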

    Machine Nature: The Coming Age of Bio-Inspired Computing

    A popular-science book on artificial life and bio-inspired computing; at most a first, very basic overview. From page 185: Nature has come up not only with ingenious solutions to specific problems - for example, structural designs such as eyes or wings - but indeed has found (and founded) entirely new processes to aid in the emergence of complex organisms. Two of the most important ones are ontogeny (the development of a multicellular organism from a single mother cell) and learning.