
    Aeronautical engineering: A continuing bibliography with indexes (supplement 278)

    This bibliography lists 414 reports, articles, and other documents introduced into the NASA scientific and technical information system in April 1992

    Multi-Objective, Multiphasic and Multi-Step Self-Optimising Continuous Flow Systems

    Continuous flow chemistry is currently a vibrant area of research, offering many advantages over traditional batch chemistry. These include: enhanced heat and mass transfer, access to a wider range of reaction conditions, safer use of hazardous reagents, telescoping of multi-step reactions and readily accessible photochemistry. As such, there has been an increase in the adoption of continuous flow processes for the synthesis of active pharmaceutical ingredients (APIs) in recent years. Advances in the automation of laboratory equipment have transformed the way in which routine experimentation is performed, with the digitisation of research and development (R&D) greatly reducing waste in terms of human and material resources. Self-optimising systems combine algorithms, automated control and process analytics for the feedback optimisation of continuous flow reactions. This provides efficient exploration of multi-dimensional experimental space and accelerates the identification of optimum conditions. This technology therefore aligns directly with the drive towards more sustainable process development in the pharmaceutical industry. Yet the uptake of these systems by industrial R&D departments remains relatively low, suggesting that the capabilities of the current technology are still limited. The work in this thesis aims to improve existing self-optimisation technologies, to further bridge the gap between academic and industrial research. This includes introducing multi-objective optimisation algorithms and applying them to the synthesis of APIs, developing a new multiphasic CSTR cascade reactor with photochemical capabilities, and including downstream work-up operations in the optimisation of multi-step processes.
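    To make the feedback-optimisation idea concrete, the sketch below shows a minimal self-optimisation loop that proposes reaction conditions, runs an experiment and keeps a Pareto archive of the trade-off between two competing objectives. The condition names, the synthetic response in run_experiment and the naive random proposal step are illustrative assumptions; a real system would drive the flow rig and inline analytics and use a dedicated multi-objective optimiser.

```python
import random

def run_experiment(conditions):
    # Stand-in for dispatching conditions to the flow reactor and reading the
    # objectives back from inline analytics; returns a synthetic response here.
    yield_pct = 100 - abs(conditions["temperature_C"] - 80) \
                    - 5 * abs(conditions["residence_time_min"] - 4)
    cost = conditions["equivalents"] * conditions["residence_time_min"]
    return (-yield_pct, cost)  # both objectives are minimised

def dominates(a, b):
    # True if objective vector a Pareto-dominates b
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def self_optimise(bounds, n_experiments=30):
    archive = []  # non-dominated (conditions, objectives) pairs
    for _ in range(n_experiments):
        conditions = {k: random.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
        objectives = run_experiment(conditions)
        if not any(dominates(o, objectives) for _, o in archive):
            archive = [(c, o) for c, o in archive if not dominates(objectives, o)]
            archive.append((conditions, objectives))
    return archive

front = self_optimise({"residence_time_min": (0.5, 10.0),
                       "temperature_C": (25.0, 120.0),
                       "equivalents": (1.0, 3.0)})
```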

    Evolutionary Algorithms in Engineering Design Optimization

    Evolutionary algorithms (EAs) are population-based global optimizers which, due to their characteristics, have allowed us to solve many real-world optimization problems in a straightforward way over the last three decades, particularly in engineering fields. Their main advantages are the following: they impose no requirements on the objective/fitness evaluation function (continuity, differentiability, convexity, etc.), and they are not limited by the presence of discrete and/or mixed variables or by the need for uncertainty quantification in the search. Moreover, they can deal with more than one objective function simultaneously through the use of evolutionary multi-objective optimization algorithms. This set of advantages, together with the continuously increasing computing capability of modern computers, has broadened their application in research and industry. From the application point of view, this Special Issue welcomes all engineering fields, such as aerospace and aeronautical, biomedical, civil, chemical and materials science, electronic and telecommunications, energy and electrical, manufacturing, logistics and transportation, mechanical, naval architecture, reliability, robotics, structural, etc. Within the EA field, the integration of innovative and improved aspects into the algorithms for solving real-world engineering design problems in the abovementioned application fields is welcomed and encouraged, for example: parallel EAs, surrogate modelling, hybridization with other optimization techniques, multi-objective and many-objective optimization, etc.
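    As a concrete illustration of the derivative-free, population-based search described above, the sketch below implements a minimal (mu + lambda) evolution strategy on a simple test function; the sphere function and the parameter choices are placeholders for any black-box engineering objective.

```python
import random

def sphere(x):
    # Placeholder black-box objective: no gradients or convexity assumptions used
    return sum(xi * xi for xi in x)

def evolve(fitness, dim=10, mu=20, lam=40, sigma=0.3, generations=200):
    population = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            parent = random.choice(population)
            # Gaussian mutation is the only variation operator in this sketch
            offspring.append([xi + random.gauss(0, sigma) for xi in parent])
        # (mu + lambda) survivor selection keeps the best individuals overall
        population = sorted(population + offspring, key=fitness)[:mu]
    return min(population, key=fitness)

best = evolve(sphere)
print("best objective value:", sphere(best))
```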

    Experimental and Theoretical Analysis of Pressure Coupled Infusion Gyration for Fibre Production

    In this work, we uncover the science of the combined application of external pressure, controlled infusion of polymer solution and gyration in the field of nanofibre preparation. This novel application takes the gyration-based method into a new arena by enabling the mass production of exceedingly fine (a few nanometres upwards) nanofibres in a single step. Polyethylene oxide (PEO) was used as a model polymer in the experimental study, which demonstrates the use of this novel method to fabricate polymeric nanofibres and nanofibrous mats under different combinations of operating parameters, including working pressure, rotational speed, infusion rate and collection distance. The morphologies of the nanofibres were characterised using scanning electron microscopy, and the anisotropy of fibre alignment was studied using two-dimensional fast Fourier transform analysis. A correlation between the product morphology and the processing parameters is established. Response surface models of the experimental process were developed using least-squares fitting. A systematic description of pressure coupled infusion gyration (PCIG) spinning was developed to help us obtain a clear understanding of the fibre formation process in this novel application. The input data used were the conventional means of the fibre diameter measurements obtained from our experimental work. In this part, both linear and nonlinear fitting forms were applied, and the quality of the fitted models was mainly evaluated using the adjusted R2 and the Akaike Information Criterion (AIC). The correlations and effects of individual parameters and their interactions were explicitly studied. The modelling results indicated that polymer concentration has the most significant impact on fibre diameters. A self-defined objective function was studied with the best-fitted model to optimise the experimental process for achieving the desired nanofibre diameters and narrow standard deviations. The experimental parameters were optimised by several algorithms, and the most favoured sets of parameters, recommended by the nonlinear interior point methods, were further validated through a set of additional experiments. The validation results indicated that pressure coupled infusion gyration offers a facile way of forming nanofibres and nanofibre assemblies, and that the developed model has good predictive power for identifying experimental parameters likely to yield the desired PEO nanofibres.
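    To make the model-selection step concrete, the sketch below fits a response-surface model by ordinary least squares and scores it with the adjusted R2 and a Gaussian-likelihood AIC, the two criteria named above. The design-matrix layout and the scoring helper are illustrative assumptions rather than the thesis's actual models.

```python
import numpy as np

def fit_and_score(X, y):
    """Fit y ~ X by least squares and report adjusted R^2 and AIC.

    X: (n, p) matrix of process-parameter terms (e.g. concentration, pressure,
       rotational speed, their squares and interactions); y: mean fibre diameters.
    """
    A = np.column_stack([np.ones(len(y)), X])      # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    residuals = y - A @ coef
    n, k = len(y), A.shape[1]
    rss = float(residuals @ residuals)
    tss = float(((y - y.mean()) ** 2).sum())
    r2 = 1.0 - rss / tss
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - k)  # penalises extra model terms
    aic = n * np.log(rss / n) + 2 * k              # Gaussian-likelihood AIC
    return coef, adj_r2, aic
```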

    A Framework for Hyper-Heuristic Optimisation of Conceptual Aircraft Structural Designs

    Conceptual aircraft structural design concerns the generation of an airframe that will provide sufficient strength under the loads encountered during the operation of the aircraft. In providing such strength, the airframe greatly contributes to the mass of the vehicle, where an excessively heavy design can penalise the performance and cost of the aircraft. Structural mass optimisation aims to minimise the airframe weight whilst maintaining adequate resistance to load. The traditional approach to such optimisation applies a single optimisation technique within a static process, which prevents adaptation of the optimisation process to react to changes in the problem. Hyper-heuristic optimisation is an evolving field of research wherein the optimisation process is evaluated and modified in an attempt to improve its performance, and thus the quality of solutions generated. Owing to the relative infancy of the field, hyper-heuristics have not previously been applied to the problem of aircraft structural design optimisation. It is the thesis of this research that hyper-heuristics can be employed within a framework to improve the quality of airframe designs generated without incurring additional computational cost. A framework has been developed to perform hyper-heuristic structural optimisation of a conceptual aircraft design. Four aspects of hyper-heuristics are included within the framework to promote improved process performance and subsequent solution quality. These aspects select multiple optimisation techniques to apply to the problem, analyse the solution space neighbouring good designs and adapt the process based on its performance. The framework has been evaluated through its implementation as a purpose-built computational tool called AStrO. The results of this evaluation have shown that significantly lighter airframe designs can be generated using hyper-heuristics than are obtainable by traditional optimisation approaches. Moreover, this is possible without penalising airframe strength or necessarily increasing computational costs. Furthermore, improvements are possible over the existing aircraft designs currently in production and operation.
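    The sketch below illustrates the core idea of a selection hyper-heuristic of the kind the framework embodies: low-level optimisation techniques are chosen according to the improvement they have recently delivered, so the process adapts as the search progresses. The credit scheme, the roulette-wheel selection and the function names are illustrative assumptions, not the mechanism implemented in AStrO.

```python
import random

def hyper_heuristic(initial_design, evaluate, heuristics, iterations=1000):
    # evaluate(design) returns the airframe mass of a feasible design;
    # heuristics is a list of low-level move operators (functions design -> design).
    scores = {h.__name__: 1.0 for h in heuristics}      # optimistic initial credit
    best, best_mass = initial_design, evaluate(initial_design)
    for _ in range(iterations):
        # roulette-wheel selection of a low-level heuristic by accumulated credit
        total = sum(scores.values())
        h = random.choices(heuristics,
                           weights=[scores[x.__name__] / total for x in heuristics])[0]
        candidate = h(best)
        mass = evaluate(candidate)
        if mass < best_mass:                            # lighter design found
            scores[h.__name__] += best_mass - mass      # reward the heuristic
            best, best_mass = candidate, mass
        else:
            scores[h.__name__] *= 0.99                  # slowly decay unhelpful credit
    return best, best_mass
```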

    Convective heat transfer control in turbulent boundary layers

    The sustainable development of our society raises concerns in several fields of engineering, including energy management, production and the impact of our technology, with thermal management being a common issue to be addressed. The investigation reported in this manuscript focuses on understanding, controlling and optimizing the physical processes involving convective heat transfer in turbulent wall-bounded flows. The content is divided into two main blocks, namely the investigation of classic open-loop active-control techniques to control heat transfer, and the technological development of machine-learning strategies to enhance the performance of flow control in the field of convective heat transfer. The first block focuses on actuator technology, applying dielectric-barrier discharge (DBD) plasma actuators and a pulsed slot jet in crossflow (JICF), respectively, to control the convective heat transfer in a turbulent boundary layer (TBL) over a flat plate. In the former, an array of DBD plasma actuators is employed to induce pairs of counter-rotating, streamwise-aligned vortices embedded in the TBL to reduce heat transfer downstream of the actuation. The whole three-dimensional mean flow field downstream of the plasma actuator is reconstructed from stereoscopic particle image velocimetry (PIV). Infrared thermography (IR) measurements coupled with a heated thin foil provide ensemble-averaged convective heat transfer distributions downstream of the actuators. The combination of the flow field and heat transfer measurements provides a complete picture of the fluid-dynamic interaction of the plasma-induced flow with local turbulent transport effects. The plasma-induced streamwise vortices are stationary and confined in the spanwise direction due to the action of the plasma discharge. The opposing plasma discharge causes a mass- and momentum-flux deficit within the boundary layer, leading to a low-velocity region that grows in the streamwise direction and which is characterised by an increase in displacement and momentum thicknesses. This low-velocity ribbon travels downstream, promoting streak-like patterns of reduction in the convective heat transfer distribution. Near the wall, the plasma-induced jets divert the main flow due to the DBD-actuator momentum injection and the suction on the surrounding fluid by the emerging jets. The stationarity of the plasma-induced vortices makes them persistent far downstream, reducing the convective heat transfer. Conversely, the target of the second paper in this first block is to enhance convective heat transfer rather than reduce it. A fully modulated, pulsed, slot JICF is used to perturb the TBL. The slot-jet actuator, flush-mounted and aligned in the spanwise direction, is controlled based on two design parameters, namely the duty cycle (DC) and the pulsation frequency (f). Heat transfer and flow-field measurements are performed to characterise the control performance using IR thermography and planar PIV, respectively. A parametric study on f and DC is carried out to assess their effect on the heat transfer distribution. The vorticity fields are reconstructed from the Proper Orthogonal Decomposition (POD) modes, retrieving phase information. The flow topology is considerably altered by the jet pulsation, even compared to the case of a steady jet. The results show that both the jet penetration in the streamwise direction and the overall Nusselt number increase with increasing DC.
However, the frequency at which the Nusselt number is maximised is independent of the duty cycle. A wall-attached jet rises from the slot accompanied by a pair of counter-rotating vortices that promote flow entrainment and mixing. Finally, a simplified model is proposed which decouples the effects of f and DC on the overall heat transfer enhancement, showing good agreement with experimental data. The cost of actuation is also quantified in terms of the amount of fluid injected during the actuation, leading to the conclusion that the lowest duty cycle is the most efficient for heat transfer enhancement among the tested set. The second block of the thesis splits into a comparative assessment of machine learning (ML) methods for active feedback flow control and an application of linear genetic algorithms to an experimental convective heat transfer enhancement problem. First, the comparative study is carried out numerically on a well-established benchmark problem, the drag reduction of a two-dimensional Kármán vortex street past a circular cylinder at a low Reynolds number (Re = 100). The flow is manipulated with two blowing/suction actuators on the upper and lower sides of the cylinder. The feedback employs several velocity sensors. Two probe configurations are evaluated: 5 and 11 velocity probes located at different points around the cylinder and in the wake. The control laws are optimized with Deep Reinforcement Learning (DRL) and Linear Genetic Programming Control (LGPC). Both methods successfully stabilize the vortex street and effectively reduce drag while using small mass flow rates for the actuation. DRL features higher robustness with respect to variable initial conditions and noise contamination of the sensor data; on the other hand, LGPC can identify compact and interpretable control laws, which use only a subset of sensors, thus allowing the system complexity to be reduced with reasonably good results. The experience and knowledge gained with machine-learning methods motivated the last study enclosed in this thesis, which utilises linear genetic algorithm control (LGAC) to identify the best actuation parameters in an experimental application. The actuator is a set of six slot jets in crossflow aligned with the freestream. An open-loop optimal periodic forcing is defined by the carrier frequency (f), the duty cycle (DC) and the phase between actuators (ϕ) as control parameters. The control laws are optimised with respect to the unperturbed TBL and the steady-jet actuation. The cost function includes the wall convective heat transfer and the cost of the actuation, thus leading to a multi-objective optimisation problem. Surprisingly, the LGAC algorithm converges to the same frequency and duty cycle for all the actuators. This frequency is equivalent to the optimal frequency reported in the second study of the first block of this thesis. The performance of the controller is characterised by IR thermography and PIV measurements. The action of the jets considerably alters the flow topology compared to the steady-jet actuation, yielding a slightly asymmetric flow field. The phase difference between the multiple jets has been shown to be very relevant and the main driver of the flow asymmetry. A POD analysis identifies the shedding phenomena characterising the steady-jet actuation, while the optimised controller exhibits an elongated large-scale structure just downstream of the actuator.
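As an illustration of the kind of multi-objective cost function described above, the sketch below rewards the convective heat-transfer gain over the unperturbed boundary layer and penalises the fluid spent on pulsed actuation. The scaling against the steady-jet reference, the linear weighting and the parameter names are assumptions for illustration, not the exact definition used in the thesis.

```python
def actuation_cost(duty_cycle, n_jets=6, mass_flow_per_jet=1.0):
    # Pulsed jets inject fluid only during the "on" fraction of each cycle
    return n_jets * duty_cycle * mass_flow_per_jet

def control_cost(nu_actuated, nu_unperturbed, nu_steady_jet, duty_cycle, weight=0.5):
    # Heat-transfer gain normalised so that 1.0 matches the steady-jet benchmark
    heat_gain = (nu_actuated - nu_unperturbed) / (nu_steady_jet - nu_unperturbed)
    penalty = actuation_cost(duty_cycle)
    return -heat_gain + weight * penalty  # lower is better for the optimiser

# A pulsed case matching the steady jet's Nusselt number while injecting fluid
# only 30% of the time scores better (lower) than the steady jet itself.
print(control_cost(nu_actuated=1.30, nu_unperturbed=1.00, nu_steady_jet=1.30,
                   duty_cycle=0.3))
print(control_cost(nu_actuated=1.30, nu_unperturbed=1.00, nu_steady_jet=1.30,
                   duty_cycle=1.0))
```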
The investigation carried out in this thesis sheds some light on the application of different flow control strategies to the field of convective heat transfer. From the utilisation of plasma actuators and a single jet in crossflow to the development of sophisticated control logic, the results point to the exceptional potential of machine learning control in unravelling unexplored controllers within the actuation space. Ultimately, this work demonstrates the viability of employing sophisticated measurement techniques together with advanced algorithms in an experimental investigation, paving the way towards more complex applications involving feedback information.
The work enclosed in this thesis has been partially supported by the Universidad Carlos III de Madrid through a PIPF scholarship awarded on a competitive basis, and by the following research projects: ARTURO (Active contRol of Turbulence for sUstainable aiRcraft propulsiOn), ref. PID2019-109717RB-I00/AEI/10.13039/501100011033, funded by the Spanish State Research Agency (SRA); the 2020 Leonardo Grant for Researchers and Cultural Creators AEROMATIC (Active flow control of aerodynamic flows with machine learning), funded by the BBVA Foundation with grant number IN[20]_ING_ING_0163; and the GloWing Starting Grant, funded by the European Research Council (ERC) under grant agreement ERC-2018.StG-803082.

    An Investigation of Factors Influencing Algorithm Selection for High Dimensional Continuous Optimisation Problems

    The problem of algorithm selection is of great importance to the optimisation community, with a number of publications present in the body of knowledge. This importance stems from the consequences of the No-Free-Lunch Theorem, which states that there cannot exist a single algorithm capable of solving all possible problems. However, despite this importance, the algorithm selection problem has as yet failed to gain widespread attention. In particular, little to no work in this area has been carried out with a focus on large-scale optimisation, a field quickly gaining momentum in line with the advances and influence of big data processing. As such, it is not yet clear what factors, if any, influence the selection of algorithms for very high-dimensional problems (> 1000 dimensions) - and it is entirely possible that algorithms that may not work well in lower dimensions may in fact work well in much higher-dimensional spaces, and vice versa. This work therefore aims to begin addressing this knowledge gap by investigating some of these influencing factors for some common metaheuristic variants. To this end, typical parameters native to several metaheuristic algorithms are first tuned using the state-of-the-art automatic parameter tuner SMAC. Tuning produces separate parameter configurations of each metaheuristic for each of a set of continuous benchmark functions; specifically, for every algorithm-function pairing, configurations are found for each dimensionality of the function on a geometrically increasing scale (from 2 to 1500 dimensions). This tuning is therefore highly computationally expensive, necessitating the use of SMAC. Using these sets of parameter configurations, a vast amount of performance data relating to the large-scale optimisation of our benchmark suite by each metaheuristic was subsequently generated. From the generated data and its analysis, several behaviours presented by the metaheuristics as applied to large-scale optimisation have been identified and discussed. Further, this thesis provides a concise review of the relevant literature for other researchers looking to progress in this area, in addition to the large volume of data produced on the large-scale optimisation of our benchmark suite by the applied set of common metaheuristics. All work presented in this thesis was funded by EPSRC grant EP/J017515/1 through the DAASE project.
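    The sketch below illustrates the tuning protocol described above: for every metaheuristic and benchmark function, a separate configuration is sought at each dimensionality on a geometrically increasing scale. Plain random search stands in for SMAC here, and the function signatures and parameter spaces are illustrative assumptions rather than the actual experimental setup.

```python
import random

def geometric_dimensions(start=2, stop=1500, factor=2):
    # Dimensionalities on a geometrically increasing scale, e.g. 2, 4, 8, ...
    d = start
    while d <= stop:
        yield d
        d = int(round(d * factor))

def tune(metaheuristic, benchmark, dim, param_space, budget=50):
    # metaheuristic(benchmark, dim, **cfg) is assumed to return the best
    # objective value found; SMAC would replace this naive random search.
    best_cfg, best_val = None, float("inf")
    for _ in range(budget):
        cfg = {name: random.uniform(lo, hi) for name, (lo, hi) in param_space.items()}
        val = metaheuristic(benchmark, dim, **cfg)
        if val < best_val:
            best_cfg, best_val = cfg, val
    return best_cfg

# configurations = {(alg, fn, d): tune(alg, fn, d, spaces[alg])
#                   for alg in algorithms
#                   for fn in benchmark_functions
#                   for d in geometric_dimensions()}
```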

    A new optimisation procedure for uncertainty reduction by intelligent wells during field development planning

    The uncertainty in the produced oil volume can be minimised by substituting intelligent wells (IWs) for conventional wells. A previous study showed that IWs reduce the impact of geological uncertainty on the production forecast (Birchenko, Demyanov et al. 2008). This investigation has now been extended to the “dynamic” parameters (fluid contacts, relative permeabilities, aquifer strength and zonal skin). The efficiency of the IWs in reducing the total production uncertainty due to the reservoir’s dynamic parameters was found to be comparable to that reported for the static parameters. However, this latter study identified that the result was strongly dependent on the strategy employed to optimise the field’s performance. Experience has shown that challenges arise when using commercial software for the optimisation of a typical, modern field with multiple reservoirs and a complex surface production network. Inclusion of the optimisation algorithm dramatically increases the calculation time, in addition to showing stability and convergence problems. This thesis describes the development of a novel, reactive control strategy for ICVs that is both robust and computationally fast. The developed method identifies the critical water cut threshold at which a well will operate optimally when on/off valves are used. This method is not affected by the convergence problems which have led to many of the difficulties associated with previous efforts to solve our non-linear optimisation problem. Run times similar to the (non-optimised) base case are now potentially possible and, equally importantly, the optimal value calculated is similar to the result from the commercial optimisation software referred to above. The approach is particularly valuable when analysing the impact of uncertainty in the reservoir’s dynamic and static parameters, the method being convergent and independent of the point used to initiate the optimisation process. “Tuning” the algorithm’s optimisation parameters in the middle of the calculation is no longer required, thus ensuring that the results from the many realisations are comparable.
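    As a sketch of what such a reactive on/off strategy can look like, the snippet below shuts a zone's valve once its water cut exceeds a critical threshold; the simple break-even expression for that threshold and the parameter names are illustrative assumptions, not the thesis's actual criterion.

```python
def critical_water_cut(oil_price, water_handling_cost, opex_per_barrel=0.0):
    # Break-even water cut: beyond this, a barrel of produced liquid from the
    # zone costs more to handle than the oil in it is worth (illustrative only).
    return (oil_price - opex_per_barrel) / (oil_price + water_handling_cost)

def reactive_control(zone_water_cuts, wc_crit):
    # zone_water_cuts: {zone name: current water cut}; returns on/off settings
    return {zone: ("open" if wc < wc_crit else "shut")
            for zone, wc in zone_water_cuts.items()}

wc_crit = critical_water_cut(oil_price=70.0, water_handling_cost=8.0)
print(reactive_control({"zone_1": 0.42, "zone_2": 0.91}, wc_crit))
```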

    The Use of Automated Search in Deriving Software Testing Strategies

    Testing a software artefact using every one of its possible inputs would normally cost too much, and take too long, compared to the benefits of detecting faults in the software. Instead, a testing strategy is used to select a small subset of the inputs with which to test the software. The criterion used to select this subset affects the likelihood that faults in the software will be detected. For some testing strategies, the criterion may result in subsets that are very efficient at detecting faults, but implementing the strategy -- deriving a 'concrete strategy' specific to the software artefact -- is so difficult that it is not cost-effective to use that strategy in practice. In this thesis, we propose the use of metaheuristic search to derive concrete testing strategies in a cost-effective manner. We demonstrate a search-based algorithm that derives concrete strategies for 'statistical testing', a testing strategy that has good fault-detecting ability in theory, but which is costly to implement in practice. The cost-effectiveness of the search-based approach is enhanced by the rigorous empirical determination of an efficient algorithm configuration and associated parameter settings, and by the exploitation of low-cost commodity GPU cards to reduce the time taken by the algorithm. The use of a flexible grammar-based representation for the test inputs ensures the applicability of the algorithm to a wide range of software.
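    To make the grammar-based representation concrete, the sketch below samples test inputs from a tiny probabilistic grammar; a metaheuristic would then search over the per-rule weights so that the sampled inputs exercise the software's structural elements with the probabilities statistical testing requires. The toy expression grammar and the uniform default weights are illustrative assumptions.

```python
import random

GRAMMAR = {
    "<expr>": [["<expr>", "+", "<term>"], ["<term>"]],
    "<term>": [["<digit>"], ["(", "<expr>", ")"]],
    "<digit>": [["0"], ["1"], ["2"], ["3"]],
}

def sample(symbol="<expr>", weights=None, depth=0, max_depth=8):
    # Expand a non-terminal into a concrete test-input string
    if symbol not in GRAMMAR:
        return symbol  # terminal symbol
    rules = GRAMMAR[symbol]
    if depth >= max_depth:
        rule = min(rules, key=len)  # force termination via the shortest rule
    else:
        rule = random.choices(rules, weights=weights.get(symbol) if weights else None)[0]
    return "".join(sample(s, weights, depth + 1, max_depth) for s in rule)

print(sample())                                   # uniform rule choice
print(sample(weights={"<expr>": [0.8, 0.2]}))     # bias toward longer expressions
```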