
    Interactive Technologies for the Public Sphere: Toward a Theory of Critical Creative Technology

    Digital media cultural practices continue to address the social, cultural and aesthetic contexts of the global information economy, perhaps better called an ecology, by inventing new methods and genres that encourage interactive engagement, collaboration, exploration and learning. The theoretical framework for critical creative technology evolved from the confluence of the arts, human-computer interaction, and critical theories of technology. Molding this nascent theoretical framework from these seemingly disparate disciplines was a reflexive process in which the disciplines' mutual influences spiraled into both theory and practice, as illustrated through the Constructed Narratives project.
    Research that evolves from an arts perspective encourages experimental processes of making as a method for defining research principles. The traditional reductionist approach to research requires that all confounding variables be eliminated or silenced using statistical methods. However, that noise in the data, those confounding variables, provides the rich context, media, and processes by which creative practices thrive. As research in the arts gains recognition for its contributions of new knowledge, the traditional reductive practice in search of general principles will be respectfully joined by methodologies for defining living principles that celebrate and build on the confounding variables, the data noise. The movement to develop research methodologies from the noisy edges of human interaction has been explored in the research and practices of ludic design and ambiguity (Gaver, 2003); the affective gap (Sengers et al., 2005b; 2006); embodied interaction (Dourish, 2001); the felt life (McCarthy & Wright, 2004); and reflective HCI (Dourish et al., 2004).
    The theory of critical creative technology examines the relationships between critical theories of technology, society and aesthetics, information technologies, and contemporary practices in interaction design and creative digital media. It is aligned with theories and practices in social navigation (Dourish, 1999) and community-based interactive systems (Stathis, 1999) in the development of smart appliances and network systems that support people in engaging in social activities, promoting communication and enhancing the potential for learning in a community-based environment. The theory amends these community-based and collaborative design theories by emphasizing methods to facilitate face-to-face dialogical interaction when the exchange of ideas, observations, dreams, concerns, and celebrations may be silenced by societal norms about how to engage others in public spaces.
    The Constructed Narratives project is an experiment in the design of a critical creative technology that emphasizes the collaborative construction of new knowledge about one's lived world through computer-supported collaborative play (CSCP). To construct is to creatively invent one's world by engaging in creative decision-making, problem solving and acts of negotiation. The metaphor of construction is used to demonstrate how a simple artefact, a building block, can provide an interactive platform to support discourse between collaborating participants. The technical goal for this project was the development of a software and hardware platform for the design of critical creative technology applications that can process a dynamic flow of logistical and profile data from multiple users, to be used in applications that facilitate dialogue between people in a real-time playful interactive experience.

    Adapting a Hyper-heuristic to Respond to Scalability Issues in Combinatorial Optimisation

    The development of a heuristic to solve an optimisation problem in a new domain, or a specific variation of an existing problem domain, is often beyond the means of many smaller businesses. This is largely because the task normally needs to be assigned to a human expert, and such experts tend to be scarce and expensive. One aim of hyper-heuristic research is to automate all or part of the heuristic development process and thereby bring the generation of new heuristics within the means of more organisations. A second aim is to ensure that the process by which a domain-specific heuristic is developed is itself independent of the problem domain. This enables a hyper-heuristic to exist and operate above the combinatorial optimisation problem "domain barrier" and generalise across different problem domains. A common issue with heuristic development is that a heuristic is often designed or evolved using small problem instances and then assumed to perform well on larger instances. The goal of this thesis is to extend current hyper-heuristic research towards answering the question: how can a hyper-heuristic efficiently and effectively adapt the selection, generation and manipulation of domain-specific heuristics when moving from small and/or narrow-domain problems to larger and/or wider-domain problems? In other words, how can different hyper-heuristics respond to scalability issues? Each hyper-heuristic has its own strengths and weaknesses. In the context of hyper-heuristic research, this thesis contributes towards understanding scalability issues by firstly developing a compact and effective heuristic that can be applied to other problem instances of differing sizes in a compatible problem domain. We construct a hyper-heuristic for the Capacitated Vehicle Routing Problem domain to establish whether a heuristic for a specific problem domain can be developed which is compact and easy to interpret. The results show that generation of a simple but effective heuristic is possible. Secondly, we develop two different types of hyper-heuristic and compare their performance across different combinatorial optimisation problem domains. We construct and compare simplified versions of two existing hyper-heuristics (adaptive and grammar-based), and analyse how each handles the trade-off between computation speed and solution quality. The performance of the two hyper-heuristics is tested on seven different problem domains compatible with the HyFlex (Hyper-heuristic Flexible) framework. The results indicate that the adaptive hyper-heuristic is able to deliver solutions of a pre-defined quality in a shorter computational time than the grammar-based hyper-heuristic. Thirdly, we investigate how the adaptive hyper-heuristic developed in the second stage of this thesis can respond to problem instances of the same size but containing different features and complexity. We investigate how, with minimal knowledge about the problem domain and the features of the instance being worked on, a hyper-heuristic can modify its processes to respond to problem instances containing different features and problem domains of different complexity. In this stage we allow the adaptive hyper-heuristic to select alternative vectors for the selection of problem-domain operators, and alternative acceptance criteria used to determine whether solutions should be retained or discarded. We identify a consistent difference between the best-performing pairings of selection vector and acceptance criteria and those pairings which perform poorly. This thesis shows that hyper-heuristics can respond to scalability issues, although not all do so with equal ease. The flexibility of an adaptive hyper-heuristic enables it to perform faster than the more rigid grammar-based hyper-heuristic, but at the expense of losing a reusable heuristic.
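The pairing of a selection vector with an acceptance criterion described above can be illustrated with a minimal sketch. The operators, weights and acceptance rule below are our own illustrative assumptions, not the thesis's implementation:

```python
import random

def small_step(x):
    # low-level operator: local tweak
    return x + random.uniform(-0.5, 0.5)

def large_step(x):
    # low-level operator: larger perturbation to escape local optima
    return x + random.uniform(-5.0, 5.0)

OPERATORS = [small_step, large_step]

def accept_improving(new_cost, old_cost):
    # acceptance criterion: keep only improving moves
    return new_cost < old_cost

def hyper_heuristic(initial, cost, selection_vector=(0.7, 0.3), iterations=5000):
    current, current_cost = initial, cost(initial)
    for _ in range(iterations):
        # the selection vector weights the choice of low-level operator
        operator = random.choices(OPERATORS, weights=selection_vector)[0]
        candidate = operator(current)
        candidate_cost = cost(candidate)
        if accept_improving(candidate_cost, current_cost):
            current, current_cost = candidate, candidate_cost
    return current, current_cost

# minimise |x - 3| starting from 50
print(hyper_heuristic(50.0, lambda x: abs(x - 3.0)))
```

An adaptive hyper-heuristic in this spirit would additionally tune the selection vector, or swap the acceptance criterion, in response to the instance being solved.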

    Towards Clone Detection in UML Domain Models


    A teachable semi-automatic web information extraction system based on evolved regular expression patterns

    This thesis explores Web Information Extraction (WIE) and how it has been used in decision making and to support businesses in their daily operations. The research focuses on a WIE system based on Genetic Programming (GP) with an extensible model to enhance the automatic extractor. The system uses a human as a teacher to identify and extract relevant information from semi-structured HTML web pages. Regular expressions, chosen as the pattern-matching tool, are automatically generated from the training data to provide an improved grammar and lexicon. This particularly benefits the GP system, which may need to extend its lexicon in the presence of new tokens in the web pages. These tokens allow the GP method to produce new extraction patterns for new requirements.
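A minimal sketch of the kind of fitness signal such a system could use when evolving extraction patterns: score a candidate regular expression against teacher-labelled snippets. The training examples and scoring rule are illustrative assumptions, not the thesis's actual data or fitness function:

```python
import re

# hypothetical teacher-labelled examples: (HTML snippet, expected extraction)
training = [
    ("<td>Price: $19.99</td>", "$19.99"),
    ("<td>Price: $5.00</td>", "$5.00"),
    ("<td>Name: Widget</td>", None),  # nothing to extract here
]

def fitness(pattern):
    try:
        regex = re.compile(pattern)
    except re.error:
        return 0  # malformed individuals get the worst fitness
    score = 0
    for html, expected in training:
        match = regex.search(html)
        extracted = match.group(0) if match else None
        if extracted == expected:
            score += 1
    return score

print(fitness(r"\$\d+\.\d{2}"))  # 3: correct on all examples
print(fitness(r"\$\d+"))         # 1: partial matches miss the cents
```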

    Network intrusion detection using genetic programming.

    Masters Degree. University of KwaZulu-Natal, Pietermaritzburg. Network intrusion detection is a real-world problem that involves detecting intrusions on a computer network. Detecting whether a network connection is intrusive or non-intrusive is essentially a binary classification problem. However, intrusive connections can be categorised into a number of network attack classes, and the task of associating an intrusion with a particular attack class is multiclass classification. A number of artificial intelligence techniques have been used for network intrusion detection, including Evolutionary Algorithms (EAs). This thesis investigates the application of evolutionary algorithms, namely Genetic Programming (GP), Grammatical Evolution (GE) and Multi-Expression Programming (MEP), in the network intrusion detection domain. Grammatical evolution and multi-expression programming are considered variants of GP. In this thesis, a comparison of the effectiveness of classifiers evolved by the three EAs within the network intrusion detection domain is performed on the publicly available KDD99 dataset. Furthermore, the effectiveness of a number of fitness functions is evaluated. The findings indicate that binary classifiers evolved using standard genetic programming outperformed classifiers evolved using grammatical evolution and multi-expression programming. For evolving multiclass classifiers, the different fitness functions produced classifiers with different characteristics, with some classifiers achieving higher detection rates for specific network intrusion attacks than for others. Classifiers evolved using multi-expression programming and genetic programming achieved higher detection rates than classifiers evolved using grammatical evolution.
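To make the role of a fitness function concrete, here is a minimal sketch of one plausible fitness measure for evolved binary intrusion detectors: reward detections, penalise false alarms. The names, data and weighting are illustrative assumptions, not taken from the thesis:

```python
def fitness(classifier, connections):
    """connections: list of (features, is_intrusion) pairs."""
    tp = fp = tn = fn = 0
    for features, is_intrusion in connections:
        flagged = classifier(features)
        if flagged and is_intrusion:
            tp += 1          # true positive: intrusion detected
        elif flagged and not is_intrusion:
            fp += 1          # false alarm
        elif not flagged and not is_intrusion:
            tn += 1          # normal traffic passed through
        else:
            fn += 1          # missed intrusion
    detection_rate = tp / (tp + fn) if tp + fn else 0.0
    false_alarm_rate = fp / (fp + tn) if fp + tn else 0.0
    return detection_rate - false_alarm_rate  # maximise

# a trivial threshold rule on a single feature, purely for illustration
data = [([0.9], True), ([0.2], False), ([0.8], True), ([0.1], False)]
rule = lambda f: f[0] > 0.5
print(fitness(rule, data))  # 1.0: perfect detection, no false alarms
```

Different fitness functions (e.g. weighting detection rate per attack class) would steer evolution toward classifiers with different characteristics, which is the effect the thesis reports.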

    A study of genetic programming and grammatical evolution for automatic object-oriented programming.

    Master of Science in Computer Science. University of KwaZulu-Natal, Pietermaritzburg, 2016. Manual programming is time consuming and challenging for complex problems. For efficiency, human programmers adopt the object-oriented approach to programming; yet manual programming remains a tedious task. Recently, interest in automatic software production has grown rapidly due to global software demands and technological advancements. This study forms part of a larger initiative on automatic programming to aid manual programming in order to meet these demands. In artificial intelligence, Genetic Programming (GP) is an evolutionary algorithm which searches a program space for a solution program. A program generated by GP is executed to yield a solution to the problem at hand. Grammatical Evolution (GE) is a variation of genetic programming. GE adopts a genotype-phenotype distinction and maps from a genotypic space to a phenotypic (program) space to produce a program. Whereas previous work on object-oriented programming and GP has involved taking an analogy from object-oriented programming to improve the scalability of genetic programming, this dissertation aims at evaluating GP and a variation thereof, namely GE, for automatic object-oriented programming. The first objective is to implement and test the abilities of GP to automatically generate code for object-oriented programming problems. The second objective is to implement and test the abilities of GE to do the same. The third objective is to compare the performance of GP and GE for automatic object-oriented programming. Object-Oriented Genetic Programming (OOGP), a variation of OOGP, namely Greedy OOGP (GOOGP), and GE approaches to automatic object-oriented programming were implemented. The approaches were tested on three object-oriented programming problems, each involving two classes: one containing the driver program and the other the Abstract Data Type (ADT) class. The results show that both GP and GE can be used for automatic object-oriented programming. However, the ability of each approach to automatically generate code was found to decrease as problem complexity increases. The performance of the approaches was compared and statistically tested to determine the effectiveness of each approach. The results show that GE performs better than GOOGP and OOGP.
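The genotype-phenotype mapping that distinguishes GE can be shown in a minimal sketch: each integer codon selects a production rule via modulo, expanding the leftmost non-terminal until the derivation terminates. The toy grammar below is our own, not the dissertation's:

```python
# toy grammar: each non-terminal maps to its list of production rules
GRAMMAR = {
    "<expr>": [["<expr>", "+", "<expr>"], ["<var>"]],
    "<var>":  [["x"], ["y"]],
}

def map_genotype(codons, start="<expr>", max_wraps=2):
    derivation = [start]
    i, wraps = 0, 0
    while any(sym in GRAMMAR for sym in derivation):
        if i == len(codons):
            # wrap around the genotype; give up after too many wraps
            i, wraps = 0, wraps + 1
            if wraps > max_wraps:
                return None  # invalid individual: mapping did not terminate
        # expand the leftmost non-terminal
        pos = next(j for j, s in enumerate(derivation) if s in GRAMMAR)
        rules = GRAMMAR[derivation[pos]]
        choice = rules[codons[i] % len(rules)]  # codon picks the rule
        derivation[pos:pos + 1] = choice
        i += 1
    return "".join(derivation)

print(map_genotype([0, 1, 0, 1, 1]))  # "x+y"
```

Because variation operators act on the integer genotype while fitness is measured on the mapped program, GE can reuse standard genetic operators regardless of the target language, object-oriented or otherwise.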

    Motif discovery in sequential data

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering, 2006. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (v. 2, leaves [435]-467). In this thesis, I discuss the application and development of methods for the automated discovery of motifs in sequential data. These data include DNA sequences, protein sequences, and real-valued sequential data such as protein structures and time series of arbitrary dimension. As more genomes are sequenced and annotated, the need for automated, computational methods for analyzing biological data is increasing rapidly. In broad terms, the goal of this thesis is to treat sequential data sets as unknown languages and to develop tools for interpreting and understanding these languages.
    The first chapter of this thesis is an introduction to the fundamentals of motif discovery, which establishes a common mode of thought and vocabulary for the subsequent chapters. One of the central themes of this work is the use of grammatical models, which are more commonly associated with the field of computational linguistics. In the second chapter, I use grammatical models to design novel antimicrobial peptides (AmPs). AmPs are small proteins used by the innate immune system to combat bacterial infection in multicellular eukaryotes. There is mounting evidence that these peptides are less susceptible to bacterial resistance than traditional antibiotics and may form the basis for a novel class of therapeutics. In this thesis, I describe the rational design of novel AmPs that show limited homology to naturally occurring proteins but have strong bacteriostatic activity against several species of bacteria, including Staphylococcus aureus and Bacillus anthracis. These peptides were designed using a linguistic model of natural AmPs, by treating the amino acid sequences of natural AmPs as a formal language and building a set of regular grammars to describe this language. This set of grammars was used to create novel, unnatural AmP sequences that conform to the formal syntax of natural antimicrobial peptides but populate a previously unexplored region of protein sequence space.
    The third chapter describes a novel, GEneric MOtif DIscovery Algorithm (Gemoda) for sequential data. Gemoda can be applied to any dataset with a sequential character, including both categorical and real-valued data. As I show, Gemoda deterministically discovers motifs that are maximal in composition and length. As well, the algorithm allows any choice of similarity metric for finding motifs. These motifs are representation-agnostic: they can be represented using regular expressions, position weight matrices, or any other model for sequential data. I demonstrate a number of applications of the algorithm, including the discovery of motifs in amino acid and DNA sequences, and the discovery of conserved protein sub-structures.
    The final chapter is devoted to a series of smaller projects, employing tools and methods indirectly related to motif discovery in sequential data. I describe the construction of a software tool, Biogrep, that is designed to match large pattern sets against large biosequence databases in a parallel fashion. This makes Biogrep well-suited to annotating sets of sequences using biologically significant patterns. In addition, I show that the BLOSUM series of amino acid substitution matrices, which are commonly used in motif discovery and sequence alignment problems, have changed drastically over time. The fidelity of amino acid sequence alignment and motif discovery tools depends strongly on the target frequencies implied by these underlying matrices. Thus, these results suggest that further optimization of these matrices is possible. The final chapter also contains two projects wherein I apply statistical motif discovery tools instead of grammatical tools. In the first of these, I develop three different physicochemical representations for a set of roughly 700 HIV-1 protease substrates and use these representations for sequence classification and annotation. In the second, I develop a simple statistical method for parsing out the phenotypic contribution of a single mutation from libraries of functional diversity that contain a multitude of mutations and varied phenotypes. I show that this new method successfully elucidates the effects of single nucleotide polymorphisms on the strength of a promoter placed upstream of a reporter gene.
    The central theme, present throughout this work, is the development and application of novel approaches to finding motifs in sequential data. The work on the design of AmPs is very applied and relies heavily on existing literature. In contrast, the work on Gemoda is the greatest contribution of this thesis and contains many new ideas. by Kyle L. Jensen. Ph.D.
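One of the motif representations named above, the position weight matrix, is simple enough to sketch: build log-odds scores per position from aligned motif instances and score new sequences against them. The toy DNA instances and pseudocount are illustrative assumptions:

```python
import math

instances = ["TATAAT", "TATGAT", "TACAAT", "TATAAT"]  # toy aligned motif instances
ALPHABET = "ACGT"
PSEUDOCOUNT = 0.5

length = len(instances[0])
pwm = []
for pos in range(length):
    column = [seq[pos] for seq in instances]
    # smoothed per-position base frequencies
    freqs = {b: (column.count(b) + PSEUDOCOUNT) /
                (len(instances) + 4 * PSEUDOCOUNT) for b in ALPHABET}
    # log-odds against a uniform background of 0.25
    pwm.append({b: math.log2(freqs[b] / 0.25) for b in ALPHABET})

def score(seq):
    return sum(pwm[i][base] for i, base in enumerate(seq))

print(round(score("TATAAT"), 2))  # high score: matches the consensus
print(round(score("GGGCGG"), 2))  # low score: unrelated sequence
```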

    Developing Efficient and Effective Intrusion Detection System using Evolutionary Computation

    The internet and computer networks have become an essential tool in distributed computing organisations, especially because they enable collaboration between components of heterogeneous systems. The efficiency and flexibility of online services have attracted many applications, but as these services have grown in popularity, so have the numbers of attacks on them. Thus, security teams must deal with numerous threats in a threat landscape that is continuously evolving. Traditional security solutions are by no means enough to create a secure environment, so intrusion detection systems (IDSs), which monitor system activity and detect intrusions, are usually utilised to complement other defence techniques. However, threats are becoming more sophisticated, with attackers using new attack methods or modifying existing ones. Furthermore, building an effective and efficient IDS is a challenging research problem due to environment resource restrictions and the constant evolution of the threat landscape. To mitigate these problems, we propose to use machine learning techniques to assist with the IDS building effort. In this thesis, Evolutionary Computation (EC) algorithms are empirically investigated for synthesising intrusion detection programs. EC can construct programs that raise intrusion alerts automatically. One novel proposed approach, based on Cartesian Genetic Programming, has proved particularly effective. We also used an ensemble-learning paradigm, in which EC algorithms were used as a meta-learning method to produce detectors. The latter is more fully worked out than the former and has proved a significant success. An efficient IDS should always take into account the resource restrictions of the deployed systems; memory usage and processing speed are critical requirements. We apply a multi-objective approach to find trade-offs between intrusion detection capability and the resource consumption of programs, and to optimise these objectives simultaneously. High complexity and the large size of detectors are identified as general issues with current approaches. The multi-objective approach is used to evolve Pareto fronts of detectors that aim to maintain the simplicity of the generated patterns. We also investigate the potential application of these algorithms to the detection of unknown attacks.
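The Pareto-front idea described above can be sketched minimally: keep every detector not dominated by another on the two objectives, detection rate (maximise) and program size (minimise). The detector tuples below are invented for illustration:

```python
def dominates(a, b):
    """a, b are (detection_rate, size); higher rate and smaller size win."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def pareto_front(detectors):
    # a detector survives if no other detector dominates it
    return [d for d in detectors
            if not any(dominates(other, d) for other in detectors)]

detectors = [(0.95, 120), (0.90, 40), (0.95, 80), (0.60, 10), (0.85, 60)]
print(pareto_front(detectors))  # [(0.90, 40), (0.95, 80), (0.60, 10)]
```

The front exposes the trade-off directly: an operator can pick the smallest detector meeting a required detection rate, rather than accepting a single compromise solution.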

    Using Genetic Programming with Prior Formula Knowledge to Solve Symbolic Regression Problem

    A researcher can quickly infer mathematical expressions of functions by using professional knowledge (called prior knowledge), but the results found this way may be biased and restricted to the researcher's field due to the limits of that knowledge. In contrast, Genetic Programming (GP) can discover well-fitted mathematical expressions from a huge search space by running evolutionary algorithms, and its results can be generalized to accommodate different fields of knowledge. However, because GP has to search that huge space, it finds results rather slowly. Therefore, in this paper, a framework connecting Prior Formula Knowledge and GP (PFK-GP) is proposed to reduce the GP search space. The PFK is built on a Deep Belief Network (DBN) which can identify candidate formulas that are consistent with the features of the experimental data. By using these candidate formulas as the seed of a randomly generated population, PFK-GP finds the right formulas quickly by exploring the search space of data features. We have compared PFK-GP with Pareto GP on regression over eight benchmark problems. The experimental results confirm that PFK-GP can reduce the search space and obtain a significant improvement in the quality of symbolic regression (SR).
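The seeding step, using prior-knowledge candidate formulas to initialise part of the GP population, can be sketched minimally. The candidate formulas, seed fraction and random expression generator below are illustrative assumptions, not the paper's actual components:

```python
import random

CANDIDATE_FORMULAS = ["x**2 + x", "sin(x)", "exp(x) - 1"]  # hypothetical prior knowledge

def random_expression(depth=2):
    # tiny random expression generator standing in for ordinary GP initialisation
    if depth == 0:
        return random.choice(["x", str(random.randint(1, 5))])
    op = random.choice(["+", "-", "*"])
    return f"({random_expression(depth - 1)} {op} {random_expression(depth - 1)})"

def initial_population(size, seed_fraction=0.3):
    # part of the population starts from prior-knowledge candidates,
    # the rest from random expressions, preserving diversity
    n_seeds = int(size * seed_fraction)
    seeds = [random.choice(CANDIDATE_FORMULAS) for _ in range(n_seeds)]
    randoms = [random_expression() for _ in range(size - n_seeds)]
    return seeds + randoms

print(initial_population(10))
```

Starting the search near plausible formulas is what lets the evolved population converge faster than a purely random initialisation, at the cost of some bias toward the candidate set.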