63 research outputs found

    Automated design of robust discriminant analysis classifier for foot pressure lesions using kinematic data

    In recent years, the use of motion tracking systems for the acquisition of functional biomechanical gait data has received increasing interest due to the richness and accuracy of the measured kinematic information. However, costs frequently restrict the number of subjects employed, and this makes the dimensionality of the collected data far higher than the number of available samples. This paper applies discriminant analysis algorithms to the classification of patients with different types of foot lesions, in order to establish an association between foot motion and lesion formation. With primary attention to small sample size situations, we compare different types of Bayesian classifiers and evaluate their performance with various dimensionality reduction techniques for feature extraction, as well as search methods for the selection of raw kinematic variables. Finally, we propose a novel integrated method which fine-tunes the classifier parameters and selects the most relevant kinematic variables simultaneously. Performance comparisons are made using robust resampling techniques such as the .632+ bootstrap and k-fold cross-validation. Results from experiments with subjects suffering from pathological plantar hyperkeratosis show that the proposed method can lead to ~96% correct classification rates with less than 10% of the original features.
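To illustrate the resampling used for these performance comparisons, here is a minimal k-fold cross-validation sketch; the classifier factory and data are placeholders, not the paper's discriminant analysis pipeline.

```python
import random

def k_fold_cv(X, y, train_fn, k=5, seed=0):
    """Estimate accuracy by k-fold cross-validation.

    train_fn(X_train, y_train) must return a predict(x) callable.
    This mirrors the resampling used to compare classifiers when
    samples are scarce relative to dimensionality.
    """
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]   # k roughly equal held-out folds
    accs = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        predict = train_fn([X[i] for i in train], [y[i] for i in train])
        hits = sum(predict(X[i]) == y[i] for i in fold)
        accs.append(hits / len(fold))
    return sum(accs) / len(folds)           # mean held-out accuracy
```

The .632+ bootstrap mentioned in the abstract follows the same pattern, but resamples the training set with replacement and reweights the in-sample and out-of-sample error estimates.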

    Learning to Control Differential Evolution Operators

    Evolutionary algorithms are widely used for optimisation by researchers in academia and industry. These algorithms have parameters, which have been shown to strongly determine the performance of an algorithm. For many decades, researchers have focused on determining optimal parameter values for an algorithm. Each parameter configuration has a performance value attached to it that is used to determine a good configuration for an algorithm. Parameter values depend on the problem at hand and are known to be set in two ways: by means of offline and online selection. Offline tuning assumes that the performance value of a configuration remains the same during all generations in a run, whereas online tuning assumes that the performance value varies from one generation to another. This thesis presents various adaptive approaches, each learning from a range of feedback received from the evolutionary algorithm. The contributions demonstrate the benefits of utilising online and offline learning together at different levels for a particular task. Offline selection has been utilised to tune the hyper-parameters of the proposed adaptive methods, which control the parameters of the evolutionary algorithm on the fly. All the contributions have been presented to control the mutation strategies of differential evolution. The first contribution demonstrates an adaptive method that is formulated as a Markov reward process. It aims to maximise the cumulative future reward. The next chapter unifies various adaptive methods from the literature, which can be utilised to replicate existing methods and test new ones. The hyper-parameters of the methods in the first two chapters are tuned by an offline configurator, irace. The last chapter proposes four methods utilising a deep reinforcement learning model.
To test the applicability of the adaptive approaches presented in the thesis, all methods are compared to various adaptive methods from the literature, variants of differential evolution and other state-of-the-art algorithms on various single-objective noiseless problems from the BBOB benchmark set.
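The idea of controlling mutation strategies online can be sketched as follows: a toy differential evolution loop that picks between two standard strategies by probability matching on observed fitness improvements. This is a simplified stand-in for the thesis's adaptive controllers, not a reproduction of them; the constants (F=0.5, CR=0.9) are illustrative defaults.

```python
import random

def de_adaptive(f, dim=5, pop=20, gens=100, seed=1):
    """Toy DE minimiser choosing between 'rand/1' and 'best/1' online."""
    rng = random.Random(seed)
    P = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    fit = [f(x) for x in P]
    reward = {"rand/1": 1.0, "best/1": 1.0}  # optimistic initial credit
    for _ in range(gens):
        best = P[min(range(pop), key=lambda k: fit[k])]
        for i in range(pop):
            # probability matching: pick a strategy in proportion to its credit
            total = reward["rand/1"] + reward["best/1"]
            op = "rand/1" if rng.random() < reward["rand/1"] / total else "best/1"
            a, b, c = rng.sample(range(pop), 3)
            base = P[a] if op == "rand/1" else best
            trial = [base[j] + 0.5 * (P[b][j] - P[c][j]) if rng.random() < 0.9
                     else P[i][j] for j in range(dim)]
            ft = f(trial)
            if ft < fit[i]:
                reward[op] += fit[i] - ft  # credit = fitness improvement
                P[i], fit[i] = trial, ft
    return min(fit)
```

The thesis's reinforcement-learning formulations replace this simple credit rule with learned value estimates, but the control loop (observe feedback, update credit, select operator) has the same shape.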

    A review of population-based metaheuristics for large-scale black-box global optimization: Part A

    Scalability of optimization algorithms is a major challenge in coping with the ever-growing size of optimization problems in a wide range of application areas, from high-dimensional machine learning to complex large-scale engineering problems. The field of large-scale global optimization is concerned with improving the scalability of global optimization algorithms, particularly population-based metaheuristics. Such metaheuristics have been successfully applied to continuous, discrete, or combinatorial problems ranging from several thousand dimensions to billions of decision variables. In this two-part survey, we review recent studies in the field of large-scale black-box global optimization to help researchers and practitioners gain a bird’s-eye view of the field and learn about its major trends and state-of-the-art algorithms. Part A of the series covers two major algorithmic approaches to large-scale global optimization: problem decomposition and memetic algorithms. Part B covers a range of other algorithmic approaches to large-scale global optimization, describes a wide range of problem areas, and finally touches upon the pitfalls and challenges of current research and identifies several potential areas for future research.
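Problem decomposition, one of the two approaches covered in Part A, is often realised in cooperative coevolution by splitting the decision variables into groups that are optimised separately. A minimal sketch of the simplest such scheme, random grouping (the grouping strategy here is generic, not any particular surveyed algorithm):

```python
import random

def random_grouping(n_vars, group_size, seed=0):
    """Partition decision-variable indices into random groups: the
    decomposition step that turns one large problem into several
    smaller subproblems for cooperative coevolution."""
    idx = list(range(n_vars))
    random.Random(seed).shuffle(idx)
    return [idx[i:i + group_size] for i in range(0, n_vars, group_size)]
```

Each group is then optimised in turn while the variables outside the group are held fixed at their best-known values.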

    Spatially optimised sustainable urban development

    PhD thesis. Tackling urbanisation and climate change requires more sustainable and resilient cities, which in turn will require planners to develop a portfolio of measures to manage climate risks such as flooding, meet energy and greenhouse gas reduction targets, and prioritise development on brownfield sites to preserve greenspace. However, the policies, strategies and measures put in place to meet such objectives can frequently conflict with each other or deliver unintended consequences, hampering long-term sustainability. For example, the densification of cities in order to reduce transport energy use can increase urban heat island effects and surface water flooding from extreme rainfall events. In order to make coherent decisions in the presence of such complex multi-dimensional spatial conflicts, urban planners require sophisticated planning tools to identify and manage potential trade-offs between the spatial strategies necessary to deliver sustainability. To achieve this aim, this research has developed a multi-objective spatial optimisation framework for the spatial planning of new residential development within cities. The implemented framework develops spatial strategies for required new residential development that minimise conflicts between multiple sustainability objectives arising from planning policy and climate-change-related hazards. Five key sustainability objectives have been investigated, namely: (i) minimising risk from heat waves, (ii) minimising the risk from flood events, (iii) minimising travel costs in order to reduce transport emissions, (iv) minimising urban sprawl and (v) preventing development on existing greenspace. A review identified two optimisation algorithms as suitable for this task. Simulated Annealing (SA) is a traditional optimisation algorithm that uses a probabilistic approach to seek out a global optimum by iteratively assessing a wide range of spatial configurations against the objectives under consideration.
Gradual ‘cooling’, or reducing the probability of jumping to a different region of the objective space, helps the SA to converge on globally optimal spatial patterns. Genetic Algorithms (GAs) evolve successive generations of solutions, by both recombining attributes and randomly mutating previous generations of solutions, to search for and converge towards superior spatial strategies. The framework works towards, and outputs, a series of Pareto-optimal spatial plans, each of which outperforms every other plan in at least one objective. This approach offers a range of best trade-off plans for planners to choose from. Both SA and GA were evaluated for an initial case study in Middlesbrough, in the North East of England, and were able to identify strategies which significantly improve upon the local authority's development plan. For example, the GA approach is able to identify a spatial strategy that reduces the travel-to-work distance between new development and the central business district by 77.5% whilst nullifying the flood risk to the new development. A comparison of the two optimisation approaches for the Middlesbrough case study revealed that the GA is the more effective approach. The GA is better able to escape local optima and on average outperforms the SA by 56% in the Pareto fronts discovered, whilst discovering double the number of multi-objective Pareto-optimal spatial plans. On the basis of the initial Middlesbrough case study, the GA approach was applied to the significantly larger, and more computationally complex, problem of optimising spatial development plans for London in the UK, a total area of 1,572 km². The framework identified optimal strategies in fewer than 400 generations. The analysis showed, for example, that strategies providing the lowest heat risk (compared to the feasible spatial plans found) can be achieved whilst also using 85% brownfield land to locate new development.
The framework was further extended to investigate the impact of different development and density regulations. This enabled the identification of optimised strategies, albeit at lower building density, that completely prevent any increase in urban sprawl whilst also improving the heat risk objective by 60% against a business-as-usual development strategy. Conversely, restricting development to brownfield land reduces the ability of the spatial plan to optimise future heat risk by 55.6% against the business-as-usual development strategy. The results of both case studies demonstrate the potential of spatial optimisation to provide planners with optimal spatial plans in the presence of conflicting sustainability objectives. The resulting diagnostic information provides an analytical appreciation of the sensitivity between conflicts and therefore the overall robustness of a plan to uncertainty. With the inclusion of further objectives, and of qualitative information unsuitable for this type of analysis, spatial optimisation can constitute a powerful decision support tool to help planners identify spatial development strategies that satisfy multiple sustainability objectives and provide an evidence base for better decision making.
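The notion of a Pareto-optimal set of plans that runs through this abstract can be made concrete with a few lines of Python. In this sketch each candidate plan is just a tuple of objective values to be minimised (e.g. heat risk, flood risk, travel cost, sprawl, greenspace loss); the framework's actual plans are spatial configurations scored against those objectives.

```python
def pareto_front(plans):
    """Return the non-dominated subset of candidate plans, where each
    plan is a tuple of objective values to be minimised."""
    def dominates(a, b):
        # a dominates b: no worse in every objective, strictly better in one
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in plans
            if not any(dominates(q, p) for q in plans if q != p)]
```

An SA or GA run produces many candidate plans; filtering them through a dominance check like this yields the trade-off set offered to planners.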

    Towards a more efficient use of computational budget in large-scale black-box optimization

    Evolutionary algorithms are general-purpose optimizers that have been shown to be effective in solving a variety of challenging optimization problems. In contrast to mathematical programming models, evolutionary algorithms do not require derivative information and remain effective when the algebraic formula of the given problem is unavailable. Nevertheless, the rapid advances in science and technology have witnessed the emergence of more complex optimization problems than ever, which pose significant challenges to traditional optimization methods. The dimensionality of the search space of an optimization problem, when the available computational budget is limited, is one of the main contributors to its difficulty and complexity. This so-called curse of dimensionality can significantly affect the efficiency and effectiveness of optimization methods, including evolutionary algorithms. This research aims to study two topics related to a more efficient use of the computational budget in evolutionary algorithms when solving large-scale black-box optimization problems. More specifically, we study the role of population initializers in saving computational resources, and computational budget allocation in cooperative coevolutionary algorithms. Consequently, this dissertation consists of two major parts, each of which relates to one of these research directions. In the first part, we review several population initialization techniques that have been used in evolutionary algorithms. Then, we categorize them from different perspectives. The contribution of each category to improving evolutionary algorithms in solving large-scale problems is measured. We also study the mutual effect of population size and initialization technique on the performance of evolutionary techniques when dealing with large-scale problems.
Finally, treating uniformity of the initial population as a key contributor to saving a significant part of the computational budget, we investigate whether achieving a high level of uniformity in high-dimensional spaces is feasible given practical restrictions on computational resources. In the second part of the thesis, we study large-scale imbalanced problems. In many real-world applications, a large problem may consist of subproblems with different degrees of difficulty and importance. In addition, the solution to each subproblem may contribute differently to the overall objective value of the final solution. When the computational budget is restricted, which is the case in many practical problems, investing the same portion of resources in optimizing each of these imbalanced subproblems is not the most efficient strategy. Therefore, we examine several ways to learn the contribution of each subproblem, and then dynamically allocate the limited computational resources to solving each of them according to its contribution to the overall objective value of the final solution. To demonstrate the effectiveness of the proposed framework, we design a new set of 40 large-scale imbalanced problems and study the performance of some possible instances of the framework.
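The budget-allocation idea described above can be sketched very simply: once each subproblem's contribution to the overall objective has been estimated, evaluations are split in proportion to it rather than uniformly. The contribution estimates here are given as inputs; in the dissertation's framework they are learned online.

```python
def allocate_budget(contributions, total_evals):
    """Split a limited evaluation budget across imbalanced subproblems
    in proportion to each one's estimated contribution to the overall
    objective, rather than uniformly."""
    s = sum(contributions)
    shares = [int(total_evals * c / s) for c in contributions]
    # give the integer-rounding remainder to the largest contributor
    shares[contributions.index(max(contributions))] += total_evals - sum(shares)
    return shares
```

With contributions [1, 3, 6] and 1000 evaluations, the dominant subproblem receives 600 evaluations instead of the uniform 333, which is the efficiency gain the second part of the thesis targets.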

    Grid-enabled adaptive surrogate modeling for computer-aided engineering


    Statistical Learning and Stochastic Process for Robust Predictive Control of Vehicle Suspension Systems

    Predictive controllers play an important role in today's industry because of their ability to compute optimal control signals for nonlinear systems in real time. Due to their mathematical properties, such controllers are well suited to control problems with constraints. These controllers can also be equipped with different types of optimization and learning modules. The main goal of this thesis is to explore the potential of predictive controllers for a challenging automotive problem known as active vehicle suspension control. In this context, both the modeling and optimization modules are explored using statistical methodologies ranging from statistical learning to random process control. Among the variants of predictive controllers, the learning-based model predictive controller (LBMPC) is becoming increasingly interesting to researchers in the control community due to its structural flexibility and optimal performance. The current investigation contributes to the improvement of LBMPC by adopting different statistical learning strategies and forecasting methods to improve the efficiency and robustness of the learning performed in LBMPC. In addition, advanced probabilistic tools such as reinforcement learning, absorbing-state stochastic processes, graphical modelling, and bootstrapping are used to quantify different sources of uncertainty that can affect the performance of the LBMPC when it is used for vehicle suspension control. Moreover, a comparative study is conducted using gradient-based as well as deterministic and stochastic direct search optimization algorithms for calculating the optimal control commands. By combining well-established control and statistical theories, a novel variant of LBMPC is developed which not only affords stability and robustness, but also surpasses a wide range of conventional controllers for the vehicle suspension control problem.
The findings of the current investigation should be of interest to researchers in the automotive industry (in particular those interested in automotive control), as several open issues regarding the potential of statistical tools for improving the performance of controllers for the vehicle suspension problem are addressed.
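The receding-horizon loop at the heart of any model predictive controller, combined with the stochastic direct search mentioned above, can be sketched in a few lines. The toy double-integrator model and quadratic stage cost in the usage example are placeholders, not the thesis's suspension dynamics or LBMPC formulation.

```python
import random

def mpc_step(state, model, cost, horizon=10, samples=200, u_max=1.0, seed=0):
    """One receding-horizon step: sample candidate control sequences
    (stochastic direct search), roll the model forward over the
    horizon, and return the first input of the cheapest sequence."""
    rng = random.Random(seed)
    best_u, best_cost = 0.0, float("inf")
    for _ in range(samples):
        seq = [rng.uniform(-u_max, u_max) for _ in range(horizon)]
        x, total = state, 0.0
        for u in seq:
            x = model(x, u)
            total += cost(x, u)
        if total < best_cost:
            # receding horizon: only the first input is ever applied
            best_u, best_cost = seq[0], total
    return best_u
```

A learning-based MPC replaces the fixed `model` with one updated from data at each step; gradient-based or deterministic direct search optimizers slot in where the random sampling is.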

    Multiobjective Design Optimization using Nash Games

    In the area of pure numerical simulation of multidisciplinary coupled systems, the computational cost of evaluating a configuration may be very high. A fortiori, in multidisciplinary optimization, one is led to evaluate a number of different configurations to iterate on the design parameters. This observation motivates the search for the most innovative and computationally efficient approaches in all sectors of the computational chain: at the level of the solvers (using a hierarchy of physical models), the meshes and geometrical parameterizations for shape or shape deformation, the implementation (on a sequential or parallel architecture; grid computing), and the optimizers (deterministic, semi-stochastic, or hybrid; synchronous or asynchronous). In the present approach, we concentrate on situations typically involving a small number of disciplines assumed to be strongly antagonistic, and a relatively moderate number of related objective functions. However, our objective functions are functionals, that is, PDE-constrained, and thus costly to evaluate. The aerodynamic and structural optimization of an aircraft configuration is a prototype of such a context, when these disciplines have been reduced to a few major objectives. This is the case when, implicitly, many subsystems are taken into account by local optimizations. Our developments are focused on the question of approximating the Pareto set in cases of strongly conflicting disciplines. For this purpose, a general computational technique is proposed, guided by a form of sensitivity analysis, with the additional objective of being more economical than standard evolutionary approaches.
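The Nash-game idea in the title, in which each antagonistic discipline controls its own subset of design variables, can be illustrated with an alternating best-response iteration on a toy two-player game. The quadratic objectives below are illustrative stand-ins, not the paper's PDE-constrained functionals.

```python
def nash_best_response(iters=50):
    """Alternating best responses for a two-player game in which each
    'discipline' controls one design variable.

    Player 1 minimises (x - 1)^2 + x*y over x;
    player 2 minimises (y + 1)^2 - x*y over y.
    The fixed point of the alternation is the Nash equilibrium."""
    x, y = 0.0, 0.0
    for _ in range(iters):
        x = 1.0 - y / 2.0   # argmin_x: set d/dx [(x-1)^2 + x*y] = 0
        y = x / 2.0 - 1.0   # argmin_y: set d/dy [(y+1)^2 - x*y] = 0
    return x, y
```

In the aerodynamic-structural setting each "best response" is itself a costly PDE-constrained optimization, which is why the paper seeks sensitivity-guided techniques cheaper than evolutionary search.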

    Aesthetic choices: Defining the range of aesthetic views in interactive digital media including games and 3D virtual environments (3D VEs)

    Defining aesthetic choices for interactive digital media such as games is a challenging task. Objective and subjective factors such as colour, symmetry, order and complexity, and statistical features, among others, play an important role in defining the aesthetic properties of interactive digital artifacts. Computational approaches developed in this regard also consider objective factors, such as statistical image features, for the assessment of aesthetic qualities. However, aesthetics for interactive digital media, such as games, requires more nuanced consideration than simple objective and subjective factors when choosing a range of aesthetic features. The study found that there is no single optimum position or viewpoint with a corresponding relationship to the aesthetic considerations that influence interactive digital media. Instead, the incorporation of aesthetic features demonstrates the need to consider each component within interactive digital media as part of a range of possible features, and therefore within a range of possible camera positions. A framework, named PCAWF, emphasises that the combination of features and factors demonstrates the need to define a range of aesthetic viewpoints. This is important for improved user experience. From the framework, it was found that factors including the storyline, user state, gameplay, and application type are critical to defining the reasons associated with making aesthetic choices. The selection of a range of aesthetic features and characteristics is influenced by four main factors and their associated sub-factors. This study informs the future of interactive digital media interaction by providing clarity and reasoning behind the aesthetic decisions integrated into automatically generated vision, through a framework for choosing a range of aesthetic viewpoints in the 3D virtual environment of a game.
The study identifies critical juxtapositions between photographic and cinema-based media aesthetics by incorporating qualitative rationales from experts within the interactive digital media field. This research will change the way Artificial Intelligence (AI)-generated interactive digital media chooses visual outputs in terms of camera positions, field of view, orientation, contextual considerations, and user experiences. It will have an impact across automated systems, helping to ensure that human values, rich variations, and extensive complexity are integrated into the AI-dominated development and design of future interactive digital media production.
