3 research outputs found

    The germinal centre artificial immune system

    This thesis deals with the development and evaluation of the germinal centre artificial immune system (GC-AIS), a novel artificial immune system based on recent advances in the understanding of the germinal centre reaction of the immune system. The key research questions addressed in the thesis are: Can an artificial immune system (AIS) be designed, by taking inspiration from recent developments in immunology, to tackle multi-objective optimisation problems? How can we incorporate desirable features of the immune system, such as diversity, parallelism and memory, into the proposed AIS? How does the proposed AIS compare with other state-of-the-art techniques in the field of multi-objective optimisation? How can we incorporate the learning component of the immune system into the algorithm and investigate the usefulness of memory in dynamic scenarios?

    The main contributions of the thesis are:
    • Understanding the behaviour and performance of the proposed GC-AIS on multi-objective optimisation problems and explaining its benefits and drawbacks by comparing it with simple baseline and state-of-the-art algorithms.
    • Improving the performance of GC-AIS by incorporating a popular technique from multi-objective optimisation and, by overcoming its weaknesses, evaluating the capability of the improved variant to compete with state-of-the-art algorithms.
    • Answering key questions on the usefulness of incorporating memory in GC-AIS in a dynamic scenario.
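    The abstract does not describe the algorithm itself, but the core bookkeeping in any Pareto-based immune algorithm of this kind is maintaining a population of mutually non-dominated solutions. The sketch below is only a minimal illustration of that step under assumed placeholder objectives and a random search; it is not the thesis's GC-AIS, which additionally models germinal-centre mechanisms such as clonal expansion, mutation and memory.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_population(population, candidate, objectives):
    """Add a candidate solution and discard everything it dominates.

    population: list of (solution, objective_vector) pairs that are
    mutually non-dominated; returns the updated population.
    """
    cand_obj = objectives(candidate)
    # Discard the candidate if it is dominated by an existing solution.
    if any(dominates(obj, cand_obj) for _, obj in population):
        return population
    # Otherwise keep only the solutions the candidate does not dominate.
    kept = [(s, obj) for s, obj in population if not dominates(cand_obj, obj)]
    kept.append((candidate, cand_obj))
    return kept

# Toy run: two conflicting objectives over a single real-valued variable.
objectives = lambda x: (x ** 2, (x - 2) ** 2)
population = []
for _ in range(1000):
    population = update_population(population, random.uniform(-1.0, 3.0), objectives)
print(len(population), "mutually non-dominated solutions kept")
```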

    Evolutionary computation for software testing

    A variety of products have undergone a transformation from purely mechanical designs to designs dominated by software and electronic components. Watches are a striking example: several decades ago they were purely mechanical, whereas modern smart watches are almost entirely electronic devices that rely heavily on software and offer far more features than simply telling the time. This change has had a crucial impact on how software is developed. A first attempt to control the rising complexity was the move to agile development practices such as extreme programming or Scrum. The rise in complexity affects not only the development process but also quality assurance and software testing: the more features a product contains, the more tests are needed to ensure quality standards. Furthermore, agile development practices work iteratively, which leads to repetitive testing and places additional strain on the testing team.

    In this thesis we aim to ease this burden by examining a series of subproblems that arise. A key source of complexity is the number of test cases. We reduce the number of test cases before they are executed manually or implemented as automated tests: by examining the test specification and the requirements coverage of the individual tests, we identify redundant tests. For this we rely on a novel metaheuristic called GCAIS, which we improve iteratively; a sketch of the underlying coverage formulation is given below.
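    The redundancy-removal step just described can be read as a set-cover-style problem: keep a subset of tests whose combined requirements coverage equals that of the full suite. The thesis tackles it with the GCAIS metaheuristic; the snippet below only illustrates the formulation with the classic greedy heuristic, and the test and requirement identifiers are invented.

```python
def reduce_test_suite(coverage):
    """Greedy requirements-coverage reduction (illustrative, not GCAIS).

    coverage maps each test id to the set of requirement ids it covers.
    Returns a subset of tests covering every coverable requirement.
    """
    uncovered = set().union(*coverage.values())
    selected = []
    while uncovered:
        # Pick the test covering the most still-uncovered requirements.
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        if not coverage[best] & uncovered:
            break  # remaining tests add no new coverage
        selected.append(best)
        uncovered -= coverage[best]
    return selected

# Hypothetical example: T3 makes T1 redundant, T2 is still needed.
coverage = {
    "T1": {"R1", "R2"},
    "T2": {"R2", "R3"},
    "T3": {"R1", "R2", "R4"},
}
print(reduce_test_suite(coverage))  # ['T3', 'T2']
```

    A population-based metaheuristic such as GCAIS explores many candidate subsets rather than committing to the single greedy choice, which becomes relevant when several criteria have to be balanced.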
    The remaining complexity must still be controlled. Testing is often time-critical, and an appropriate subset of the available tests must be chosen to get a quick insight into the status of the device under test. We examine this challenge in two testing scenarios. The first is semi-automated testing, where engineers execute a set of automated tests locally and closely observe the behaviour of the system under test; here we extend GCAIS to compute test suites that satisfy different criteria when given sufficient search time. The second is fully automated testing in a continuous integration (CI) setting. CI relies on frequent build cycles that include a testing stage in which speed is paramount, so crucial tests must be selected there as well. Because the software, and possibly the test cases themselves, change between builds, a test suite cannot be computed ahead of time and must instead be determined as part of each CI run. We therefore switch to a computationally lightweight learning classifier system (LCS) to prioritise and select test cases (a sketch of the selection step follows this abstract). We integrate a series of our own innovations, such as continuous priorities, experience replay and transfer learning, into an LCS known as XCSF. This enables us to outperform a state-of-the-art artificial neural network of the kind used by companies such as Netflix. We further investigate how an LCS can be made faster using parallelism, developing generic approaches that run on any multicore computing device. This matters for our CI use case, where the build server's architecture is unknown, but the methods are independent of the concrete LCS and are not tied to our testing problem.

    Many of the challenges arising in the CI use case, for example the need to adapt to an ever-changing environment, have already been tackled by Organic Computing (OC). We therefore rely on OC design principles to create a system architecture that wraps the developed LCS and integrates it into existing CI processes. The final system is robust and highly autonomous; a side effect of this autonomy is a high degree of automation, which suits CI well. We also report on the usability and delivery of the full system to our industrial partner: test engineers can integrate it with a few lines of code and need no knowledge of LCS or OC to use it. A further implication is that OC's ideas and design principles can be employed outside the field of embedded systems, demonstrating their wider generality.

    The process of testing and correcting the errors found is still only partially automated. We take a first step towards automating the entire process, drawing an analogy to OC's concept of self-healing. As a first proof of concept we consider touch interfaces, where the software can be manipulated automatically to fulfil the specified behaviour, so that only a minimal amount of manual work remains.
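    The CI selection step described above ultimately ranks tests by a learned priority and fills a tight time budget with the highest-ranked ones. In the thesis those priorities come from the extended XCSF; the sketch below shows only the budget-filling part, with invented priorities and durations standing in for the learner's output.

```python
def select_for_budget(tests, budget_seconds):
    """Pick tests in descending priority until the time budget is spent.

    tests: list of (name, predicted_priority, expected_duration_seconds).
    The priorities would normally be produced by a learner such as an
    LCS; here they are simply given.
    """
    chosen, remaining = [], budget_seconds
    for name, _prio, duration in sorted(tests, key=lambda t: t[1], reverse=True):
        if duration <= remaining:
            chosen.append(name)
            remaining -= duration
    return chosen

# Hypothetical build with 90 seconds of testing time available.
tests = [("smoke", 0.9, 30), ("login", 0.8, 50), ("export", 0.4, 40), ("ui_theme", 0.1, 20)]
print(select_for_budget(tests, 90))  # ['smoke', 'login']
```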

    Advanced Immunoinformatics Approaches for Precision Medicine

    Genomic sequencing and other '-omic' technologies are slowly changing biomedical practice. As a result, patients can now be treated based on their molecular profile. The variability of the immune system in particular, especially that of the human leukocyte antigen (HLA) gene cluster, makes such a paradigm indispensable when treating illnesses such as cancer, autoimmune diseases, or infectious diseases. Determining the HLA genotype with traditional means, however, can be costly and time-consuming, as these methods do not make use of sequencing data that is often already available. We therefore proposed an algorithmic approach that can use these data sources to infer the HLA genotype. HLA genotype inference can be cast as a set covering problem under special biological constraints and solved efficiently via integer linear programming. Our proposed approach outperformed previously published methods and remains one of the most accurate methods to date.

    We then introduced two applications in which an HLA-based stratification is vital for the efficacy of a treatment and the reduction of its adverse effects. In the first example, we dealt with the optimal design of string-of-beads vaccines (SOB). We developed a mathematical model that maximizes the efficacy of such vaccines while minimizing their side effects for a given HLA distribution; comparisons of our optimally designed SOB with experimentally tested designs yielded promising results. In the second example, we considered the problem of anti-drug antibody (ADA) formation against biotherapeutics caused by HLA-presented peptides. We combined a new statistical model for mutation-effect prediction with a quantitative measure of immunogenicity to formulate an optimization problem that finds alterations reducing the risk of ADA formation. To solve this bi-objective problem efficiently, we developed a distributed solver that is up to 25 times faster than state-of-the-art solvers. We used our approach to design the C2 domain of factor VIII, which is linked to ADA formation in hemophilia A. Our experimental evaluations of the proposed designs are encouraging and demonstrate the prospects of our approach.

    Bioinformatics is an integral part of modern biomedical research, yet translating advanced methods into clinical use is often complicated. To ease this translation, we developed a programming library for computational immunology and used it to implement a Galaxy-based web server for vaccine design and a KNIME extension for desktop PCs. These platforms allow researchers to develop their own immunoinformatics workflows using the platforms' graphical programming capabilities.
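    The genotyping formulation above treats allele selection as a set-cover problem solved by integer linear programming. The snippet below sketches a plain set-cover ILP using the PuLP modelling library as one possible solver interface; the actual thesis model uses a different objective and adds biological side constraints (for instance, a bounded number of alleles per HLA locus), and the read and allele identifiers here are invented.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

# Invented data: which sequencing reads each candidate allele would explain.
explains = {
    "A*01:01": {"r1", "r2", "r3"},
    "A*02:01": {"r3", "r4"},
    "A*03:01": {"r1", "r4", "r5"},
}
reads = set().union(*explains.values())

prob = LpProblem("hla_set_cover", LpMinimize)
pick = {a: LpVariable(f"pick_{i}", cat="Binary") for i, a in enumerate(explains)}

# Objective: select as few alleles as possible. This is the generic
# set-cover objective, a simplification of the thesis's model.
prob += lpSum(pick.values())

# Every read must be explained by at least one selected allele.
for r in reads:
    prob += lpSum(pick[a] for a in explains if r in explains[a]) >= 1

prob.solve()
print([a for a in explains if value(pick[a]) == 1])
```

    At realistic scale the model ranges over thousands of known HLA alleles and far more reads, which is why an efficient ILP formulation and solver matter.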