
    The longer term value of creativity judgements in computational creativity

    During research to develop the Standardised Procedure for Evaluating Creative Systems (SPECS) methodology for evaluating the creativity of ‘creative’ systems, an evaluation case study was carried out in 2011. The case study investigated how we can make a ‘snapshot’ decision, in a short space of time, on the creativity of systems in various domains. The systems to be evaluated were presented at the International Computational Creativity Conference in 2011. Evaluation was performed by people whose domain expertise ranged from expert to novice, depending on the system. The SPECS methodology was used for evaluation and was compared to two other creativity evaluation methods (Ritchie’s criteria and Colton’s Creative Tripod) and to results from surveying people’s opinions on the creativity of the systems under investigation. Here, we revisit those results, considering them in the context of what these systems have contributed to computational creativity development. Five years on, we now have data on how influential these systems were within computational creativity, and to what extent the work in these systems has influenced further developments in computational creativity research. This paper investigates whether the evaluations of creativity of these systems have been helpful in predicting which systems will be more influential in computational creativity (as measured by paper citations and further development within later computational systems). While a direct correlation between evaluative results and longer-term impact is not discovered (and is perhaps too simplistic an aim, given the factors at play in determining research impact), some interesting alignments are noted between the 2011 results and the impact of papers five years on.

    Collage Diffusion

    We seek to give users precise control over diffusion-based image generation by modeling complex scenes as sequences of layers, which define the desired spatial arrangement and visual attributes of objects in the scene. Collage Diffusion harmonizes the input layers to make objects fit together -- the key challenge involves minimizing changes in the positions and key visual attributes of the input layers while allowing other attributes to change in the harmonization process. We ensure that objects are generated in the correct locations by modifying text-image cross-attention with the layers' alpha masks. We preserve key visual attributes of input layers by learning specialized text representations per layer and by extending ControlNet to operate on layers. Layer input allows users to control the extent of image harmonization on a per-object basis, and users can even iteratively edit individual objects in generated images while keeping other objects fixed. By leveraging the rich information present in layer input, Collage Diffusion generates globally harmonized images that maintain desired object characteristics better than prior approaches.
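    The idea of steering cross-attention with per-layer alpha masks can be illustrated with a minimal numpy sketch. The additive-bias formulation, function name, and shapes below are assumptions for illustration, not the paper's exact mechanism: each text token associated with a layer has its attention logits pushed toward pixels that the layer's alpha mask covers.

```python
import numpy as np

def masked_cross_attention(scores, token_to_layer, alpha_masks, bias=8.0):
    """Bias text-image cross-attention so each text token attends inside
    its layer's alpha mask. A simplified sketch: the additive-bias form
    and all names here are illustrative assumptions.

    scores:         (num_pixels, num_tokens) raw attention logits
    token_to_layer: maps a token index to the layer it describes
    alpha_masks:    (num_layers, num_pixels) mask values in [0, 1]
    """
    biased = scores.copy()
    for tok, layer in token_to_layer.items():
        # push the token's attention toward pixels its layer covers,
        # and away from pixels outside the mask
        biased[:, tok] += bias * (alpha_masks[layer] - 0.5)
    # softmax over tokens for each pixel
    e = np.exp(biased - biased.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```

    With two tokens and two non-overlapping masks, each pixel's attention concentrates on the token whose layer covers it, which is the locational guarantee the abstract describes.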

    CLIP-CLOP: CLIP-Guided Collage and Photomontage

    The unabated mystique of large-scale neural networks, such as the CLIP dual image-and-text encoder, has popularized automatically generated art. Increasingly sophisticated generators have enhanced the artworks' realism and visual appearance, and creative prompt engineering has enabled stylistic expression. Guided by an artist-in-the-loop ideal, we design a gradient-based generator to produce collages. It requires the human artist to curate libraries of image patches and to describe (with prompts) the whole image composition, with the option to manually adjust the patches' positions during generation, thereby allowing humans to reclaim some control of the process and achieve greater creative freedom. We explore the aesthetic potential of high-resolution collages, and provide an open-source Google Colab as an artistic tool. Comment: 5 pages, 7 figures, published at the International Conference on Computational Creativity (ICCC) 2022 as Short Paper: Dem
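    The gradient-based placement loop can be sketched generically. In CLIP-CLOP the objective is the CLIP similarity between the rendered collage and the text prompt; the stand-in below accepts any scalar score function of the patch positions and estimates gradients by finite differences, since a full differentiable renderer and CLIP encoder are out of scope here. All names and the optimizer settings are assumptions.

```python
import numpy as np

def optimize_positions(positions, score_fn, steps=100, lr=0.1, eps=1e-3):
    """Gradient-ascent placement of collage patches (toy sketch).
    score_fn stands in for the CLIP image-text similarity of the
    rendered collage; gradients come from central finite differences."""
    pos = positions.astype(float).copy()
    for _ in range(steps):
        grad = np.zeros_like(pos)
        for i in np.ndindex(pos.shape):
            # perturb one coordinate at a time
            p = pos.copy(); p[i] += eps
            m = pos.copy(); m[i] -= eps
            grad[i] = (score_fn(p) - score_fn(m)) / (2 * eps)
        pos += lr * grad  # ascend the score
    return pos
```

    An artist-in-the-loop step would simply overwrite entries of `pos` between iterations, which is how manual adjustment during generation fits into the same loop.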

    Towards 6G: Key technological directions

    Sixth-generation mobile networks (6G) are expected to reach extreme communication capabilities to realize emerging applications demanded by the future society. This paper focuses on six technological directions towards 6G, namely, intent-based networking, THz communication, artificial intelligence, distributed ledger technology/blockchain, smart devices and gadget-free communication, and quantum communication. These technologies will enable 6G to be more capable of catering to the demands of future network services and applications. Each of these technologies is discussed highlighting recent developments, applicability in 6G, and deployment challenges. It is envisaged that this work will facilitate 6G related research and developments, especially along the six technological directions discussed in the paper

    Drag and Drop Image CAPTCHA

    Get PDF
    The massive and automated access to Web resources through robots has made it essential for Web service providers to determine whether a user is a human or a robot. A Human Interaction Proof (HIP) such as the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) offers a way to make such a distinction. CAPTCHA is essentially a modern implementation of the Turing test, which carries out its job through a particular text-based, image-based, or audio-based challenge-response system. In this paper, we present a new image-based CAPTCHA technique. The proposed technique offers all of the benefits of image-based CAPTCHAs, grants improved security control over the usual text-based techniques, and at the same time improves the user-friendliness of the Web page. Further, the paper briefly reviews various other existing CAPTCHA techniques.
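    The server-side check behind a drag-and-drop challenge can be reduced to a hit test: did the user drop the piece inside the expected target region? The function below is an illustrative sketch under that assumption; the paper's exact challenge format and parameters are not specified here.

```python
def verify_drop(drop_x, drop_y, target_box, tolerance=0):
    """Hit test for a drag-and-drop CAPTCHA response (illustrative sketch).

    target_box: (x0, y0, x1, y1) bounding box of the correct drop region,
    tolerance:  slack in pixels to forgive slightly imprecise drops.
    """
    x0, y0, x1, y1 = target_box
    return (x0 - tolerance <= drop_x <= x1 + tolerance and
            y0 - tolerance <= drop_y <= y1 + tolerance)
```

    A real deployment would also randomize the target region and piece per challenge so that a robot cannot replay recorded coordinates.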

    Smart Augmentation - Learning an Optimal Data Augmentation Strategy

    A recurring problem faced when training neural networks is that there is typically not enough data to maximize the generalization capability of deep neural networks (DNNs). There are many techniques to address this, including data augmentation, dropout, and transfer learning. In this paper, we introduce an additional method, which we call Smart Augmentation, and we show how to use it to increase the accuracy and reduce overfitting of a target network. Smart Augmentation works by creating a network that learns how to generate augmented data during the training process of a target network in a way that reduces that network's loss. This allows us to learn augmentations that minimize the error of that network. Smart Augmentation has shown the potential to increase accuracy by demonstrably significant measures on all datasets tested. In addition, it has shown potential to achieve similar or improved performance levels with significantly smaller network sizes in a number of tested cases.
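    The core loop, an augmenter whose parameters are updated to reduce the target network's loss on the augmented sample, can be shown with a deliberately tiny stand-in. Here the "augmenter" is a single blend weight over two same-class samples and the gradient is a finite difference; real Smart Augmentation uses a full network for each role, so everything below is an illustrative assumption.

```python
def smart_augment_step(x1, x2, w, target_loss, lr=0.05, eps=1e-4):
    """One update of a toy Smart-Augmentation-style loop (sketch).

    The augmenter blends two same-class samples: aug = w*x1 + (1-w)*x2.
    Its parameter w is moved downhill on the target network's loss
    evaluated on the augmented sample, estimated by finite differences.
    """
    def loss_of(wv):
        aug = wv * x1 + (1 - wv) * x2  # augmenter output
        return target_loss(aug)        # target network's loss on it
    g = (loss_of(w + eps) - loss_of(w - eps)) / (2 * eps)
    return w - lr * g
```

    Iterating this step drives the blend weight toward whatever mixture most reduces the target loss, which is the sense in which the augmentation strategy is "learned" rather than hand-designed.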

    AI in space: Past, present, and possible futures

    While artificial intelligence (AI) has become increasingly present in recent space applications, new missions being planned will require even more incorporation of AI techniques. In this paper, we survey some of the progress made to date in implementing such programs, discuss some current directions and issues, and speculate about the future of AI in space scenarios. We also provide examples of how thinkers from the realm of science fiction have envisioned AI's role in various aspects of space exploration.