5 research outputs found

    Uncertainty quantification and management in multidisciplinary design optimisation.

    We analyse the uncertainty present at the structural-sizing stage of aircraft design due to interactions between aeroelastic loading and incomplete structural definition. In particular, we look at critical load case identification: the process of identifying, from sparse, expensive-to-obtain data, the flight conditions at which the maximum loading occurs. To address this challenge, we investigate the construction of robust emulators: probabilistic models of computer code outputs which explicitly and reliably model their predictive uncertainty. Using Gaussian process regression, we show how such models can be derived from simple and intuitive considerations about the interactions between parameter inference and data, and, via state-of-the-art statistical software, we develop a generally applicable and easy-to-use method for constructing them. The effectiveness of these models is demonstrated on a range of synthetic and engineering test functions. We then use them to approach two facets of critical load case identification: sample-efficient searching for the critical cases via Bayesian optimisation, and probabilistic assessment of possible locations for the critical cases from a given sample. The latter facilitates quantitative downselection of candidate load cases by ruling out regions of the search space with a low probability of containing the critical cases, potentially saving a designer many hours of simulation time. Finally, we show how the presence of design variability in the loads analysis implies a stochastic process, and we attempt to construct a model for this by parametrising its marginal distributions.
    PhD in Aerospace
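    The loop this abstract describes (fit a probabilistic emulator to sparse loads data, then use its predictive uncertainty both to pick the next case to simulate and to rule out unpromising regions) can be illustrated in a few lines. The sketch below is a generic stand-in, not the thesis's method: it uses scikit-learn's GaussianProcessRegressor rather than the bespoke robust emulators described above, and the one-dimensional loads function, the expected-improvement acquisition, and the 1e-3 retention threshold are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Hypothetical 1-D "loads" function standing in for an expensive
# aeroelastic analysis; the flight-condition parameter x is scaled to [0, 1].
def loads(x):
    return np.sin(3.0 * x) + 0.5 * np.cos(7.0 * x)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(6, 1))          # sparse initial sample
y = loads(X).ravel()

# Probabilistic emulator: GP regression gives a predictive mean and
# standard deviation at unsampled flight conditions.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X, y)

Xc = np.linspace(0.0, 1.0, 201).reshape(-1, 1)  # candidate conditions
mu, sd = gp.predict(Xc, return_std=True)

# Expected improvement over the current maximum loading: a standard
# acquisition function for sample-efficient (Bayesian-optimisation) search.
best = y.max()
z = (mu - best) / np.maximum(sd, 1e-12)
ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)
x_next = Xc[np.argmax(ei)]                      # next case to simulate

# Probabilistic downselection: rule out candidates whose probability of
# exceeding the current maximum loading is negligible (threshold assumed).
p_exceed = 1.0 - norm.cdf((best - mu) / np.maximum(sd, 1e-12))
kept = Xc[p_exceed > 1e-3]
print(f"next sample at x = {x_next[0]:.3f}; {len(kept)} candidates retained")
```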

    (Global) Optimization: Historical notes and recent developments

    Recent developments in (Global) Optimization are surveyed in this paper. We have collected and commented on a large number of recent references which, in our opinion, well represent the vivacity, depth, and breadth of scope of current computational approaches and theoretical results about nonconvex optimization problems. Before presenting the recent developments, which are subdivided into two parts devoted to heuristic and exact approaches respectively, we briefly sketch the origin of the discipline and observe what, from the initial attempts, has survived, what was not considered at all, as well as a few approaches which have recently been rediscovered, mostly in connection with machine learning.

    Active learning in recommender systems: an unbiased and beyond-accuracy perspective

    The items that a Recommender System (RS) suggests to its users are typically ones that it thinks the users will like and want to consume. An RS that is good at its job is of interest not only to its users but also to service providers, who can thereby secure long-term customers and increase revenue. Building better recommender systems is therefore an important challenge. One way to build a better RS is to improve the quality of the data on which the RS model is trained. An RS can use Active Learning (AL) to proactively acquire such data, with the goal of improving its model.

    The idea of AL for RS is to query the users explicitly, asking them to rate items that they have not rated yet. The items that a user is asked to rate are known as the query items. Query items are different from recommendations: for example, the former may be items that the AL strategy predicts the user has already consumed, whereas the latter are ones that the RS predicts the user will like. In AL, query items are selected 'intelligently' by an Active Learning strategy, and different strategies take different approaches to identifying them.

    As with the evaluation of RSs, preliminary evaluation of AL strategies must be done offline. An offline evaluation helps to narrow down the number of promising strategies that need to be evaluated in subsequent costly user trials and online experiments. Where the literature describes the offline evaluation of AL, the evaluation is typically quite narrow and incomplete: mostly, the focus is on cold-start users; the impact of newly-acquired ratings on recommendation quality is usually measured only for the users who supplied those ratings; and impact is measured in terms of prediction accuracy or recommendation relevance. Furthermore, the traditional AL evaluation does not take into account the bias problem. As recent RS literature has brought to light, this problem affects the offline evaluation of RSs and arises when a biased dataset is used to perform the evaluation. We argue that it affects the offline evaluation of AL strategies too.

    The main focus of this dissertation is the design and evaluation of AL strategies for RSs. We first design novel methods (designated WTD and WTD_H) that 'intervene' on a biased dataset to generate a new dataset with unbiased-like properties. Compared to the most similar approach proposed in the literature, we give empirical evidence, using two publicly-available datasets, that WTD and WTD_H are more effective at debiasing the evaluation of different recommender system models.

    We then propose a new framework for the offline evaluation of AL for RS, which we believe gives a more authentic picture of the performance of the AL strategies under evaluation. In particular, our framework uses WTD or WTD_H to mitigate the bias, but it also assesses the impact of AL in a more comprehensive way than the traditional evaluation used in the literature. It is more comprehensive in at least two ways. First, it segments users in more ways than is conventional and analyses the impact of AL on the different segments. Second, in the same way that RS evaluation has moved from a narrow focus on prediction accuracy and recommendation relevance to a wider consideration of so-called 'beyond-accuracy' criteria (such as diversity, serendipity, and novelty), our framework extends the evaluation of AL strategies to cover 'beyond-accuracy' criteria as well. Experimental results on two datasets show the effectiveness of our new framework.

    Finally, we propose some new AL strategies of our own. Instead of focusing exclusively on prediction accuracy and recommendation relevance, these new strategies are designed to also enhance 'beyond-accuracy' criteria. We evaluate them using our more comprehensive evaluation framework.
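    The distinction the abstract draws between query items and recommendations is easy to make concrete. The sketch below implements two generic AL query-selection strategies known from the literature on a toy ratings matrix: an uncertainty-style strategy that queries the unrated items whose known ratings disagree most, and a popularity baseline that queries the most-rated items. The WTD and WTD_H debiasing methods are not specified in the abstract and are not reproduced here; the matrix, function names, and parameters are all hypothetical.

```python
import numpy as np

# Hypothetical toy ratings matrix (users x items); np.nan marks unrated
# cells, which are the candidates an AL strategy may select as query items.
R = np.array([
    [5.0,    np.nan, 3.0,    np.nan, 1.0],
    [4.0,    2.0,    np.nan, np.nan, np.nan],
    [np.nan, 1.0,    4.0,    5.0,    np.nan],
    [2.0,    np.nan, np.nan, 4.0,    3.0],
])

def variance_strategy(R, user, k=2):
    """Uncertainty-style AL: ask `user` to rate the k unrated items whose
    observed ratings disagree most across other users (highest variance)."""
    unrated = np.where(np.isnan(R[user]))[0]
    # Variance of each item's known ratings; items rated at most once get 0.
    item_var = np.array([
        np.nanvar(R[:, j]) if np.sum(~np.isnan(R[:, j])) > 1 else 0.0
        for j in unrated
    ])
    return unrated[np.argsort(item_var)[::-1][:k]]

def popularity_strategy(R, user, k=2):
    """Baseline AL: query the most-rated (most likely consumed) items."""
    unrated = np.where(np.isnan(R[user]))[0]
    counts = np.sum(~np.isnan(R[:, unrated]), axis=0)
    return unrated[np.argsort(counts)[::-1][:k]]

print("variance query items  :", variance_strategy(R, user=1))
print("popularity query items:", popularity_strategy(R, user=1))
```

    Note that neither strategy consults the user's predicted preferences: both may deliberately query items the user would never be recommended, which is exactly the query-item/recommendation distinction made above.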