
    Generalized breakup and coalescence models for population balance modelling of liquid-liquid flows

    The population balance framework is a useful tool for describing the size distribution of droplets in a liquid-liquid dispersion. Breakup and coalescence models provide closures for the mathematical formulation of the population balance equation (PBE) and are crucial for accurate predictions of the mean droplet size in the flow. A number of closures for both breakup and coalescence can be identified in the literature, and most of them need an estimation of model parameters that can differ even by several orders of magnitude on a case-by-case basis. In this paper we review the fundamental assumptions and derivation of breakup and coalescence kernels. Subsequently, we rigorously apply two-stage optimization over several independent sets of experiments in order to identify model parameters. Two-stage identification allows us to establish new parametric dependencies valid for experiments that vary over large ranges of important non-dimensional groups. This approach can be adopted for optimization of parameters in breakup and coalescence models over multiple cases, and we propose a correlation based on non-dimensional numbers that is applicable to a number of different flows over a wide range of Reynolds numbers.
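    The kind of parameter identification described above can be sketched in a few lines. The Coulaloglou-Tavlarides-style breakup frequency below is a standard illustrative kernel, and the grid search over (c1, c2) is a simple stand-in for the paper's two-stage optimisation; the physical constants, grids and synthetic data are all assumed for the example.

```python
import math

def breakup_rate(d, eps, c1, c2, sigma=0.03, rho_d=900.0):
    """Coulaloglou-Tavlarides-style breakup frequency for a droplet of
    diameter d [m] at turbulent dissipation rate eps [m^2/s^3]; sigma and
    rho_d (interfacial tension, dispersed-phase density) are assumed values."""
    return (c1 * eps ** (1.0 / 3.0) / d ** (2.0 / 3.0)
            * math.exp(-c2 * sigma / (rho_d * eps ** (2.0 / 3.0) * d ** (5.0 / 3.0))))

def fit_parameters(datasets, c1_grid, c2_grid):
    """Joint fit over several experimental sets: pick the (c1, c2) pair
    minimising the squared error summed over all sets -- a grid-search
    stand-in for the paper's two-stage optimisation."""
    best = None
    for c1 in c1_grid:
        for c2 in c2_grid:
            err = sum((breakup_rate(d, eps, c1, c2) - g) ** 2
                      for data in datasets for d, eps, g in data)
            if best is None or err < best[0]:
                best = (err, c1, c2)
    return best[1], best[2]

# Synthetic "measurements" generated from assumed true parameters.
true_c1, true_c2 = 0.5, 0.1
sets = [[(d, eps, breakup_rate(d, eps, true_c1, true_c2))
         for d in (2e-4, 5e-4)] for eps in (0.1, 1.0)]
c1_hat, c2_hat = fit_parameters(sets, [0.1, 0.5, 1.0], [0.05, 0.1, 0.2])
```

    With the true parameters included in the grid, the fit recovers them exactly; in practice a continuous optimiser would replace the grid search.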

    A multi-scale approach for the optimum design of sandwich plates with honeycomb core. Part II: the optimisation strategy

    This work deals with the problem of the optimum design of a sandwich panel. The design strategy that we propose is a numerical optimisation procedure that does not make any simplifying assumption to obtain a true global optimum configuration of the system. To handle the design of the sandwich structure at both meso and macro scales, we use a two-level optimisation strategy: at the first level we determine the optimal geometry of the unit cell of the core together with the material and geometric parameters of the laminated skins, while at the second level we determine the optimal skins lay-up, given the geometrical and material parameters issued from the first level. The two-level strategy relies both on the use of the polar formalism for the description of the anisotropic behaviour of the laminates and on the use of a genetic algorithm as the optimisation tool to perform the solution search. To prove its effectiveness, we apply our strategy to the least-weight design of a sandwich plate, satisfying several constraints: on the first buckling load, on the positive-definiteness of the stiffness tensor of the core, on the ratio between skins and core thickness, and on the admissible moduli for the laminated skins.
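    A least-weight search driven by a genetic algorithm with a penalised buckling constraint can be sketched as follows. The densities, the stiffness proxy and the load level are invented for illustration and bear no relation to the paper's actual honeycomb model; the GA itself is a deliberately minimal real-coded variant.

```python
import random

def ga_minimise(fitness, bounds, pop=30, gens=60, seed=0):
    """Minimal real-coded genetic algorithm: elitism, parent selection from
    the best ten, blend crossover, gaussian mutation clipped to bounds."""
    rng = random.Random(seed)
    popn = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(popn, key=fitness)
        children = scored[:2]                      # keep the two elites
        while len(children) < pop:
            a, b = rng.sample(scored[:10], 2)
            child = [x + rng.uniform(-0.5, 1.5) * (y - x)
                     for x, y in zip(a, b)]        # blend crossover
            child = [min(max(v + rng.gauss(0.0, 0.02 * (hi - lo)), lo), hi)
                     for v, (lo, hi) in zip(child, bounds)]
            children.append(child)
        popn = children
    return min(popn, key=fitness)

def fitness(x):
    """Toy least-weight objective over skin/core thicknesses [m], with a
    penalised (invented) buckling-load constraint."""
    ts, tc = x
    mass = 2 * 1600.0 * ts + 50.0 * tc            # areal mass, assumed densities
    buckling = 7.0e10 * ts * (ts + tc) ** 2       # crude bending-stiffness proxy
    return mass + 1.0e4 * max(0.0, 1.0e4 - buckling)

bounds = [(1e-4, 5e-3), (5e-3, 5e-2)]             # skin, core thickness ranges
best = ga_minimise(fitness, bounds)
```

    The steep penalty drives the search onto the feasible side of the buckling constraint, after which mass alone discriminates between candidates.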

    Characterisation of the transmissivity field of a fractured and karstic aquifer, Southern France

    Geological and hydrological data collected at the Terrieu experimental site north of Montpellier, in a confined carbonate aquifer, indicate that both fracture clusters and a major bedding plane form the main flow paths of this highly heterogeneous karst aquifer. However, characterising the geometry and spatial location of the main flow channels and estimating their flow properties remain difficult. These challenges can be addressed by solving an inverse problem using the available hydraulic head data recorded during a set of interference pumping tests. We first constructed a 2D equivalent porous medium model to represent the test site domain and then employed a regular zoning parameterisation, on which the inverse modelling was performed. Because we aim to resolve the fine-scale characteristics of the transmissivity field, the problem undertaken is essentially a large-scale inverse problem, i.e. the dimension of the unknown parameter space is high. In order to deal with the high computational demands of such a large-scale inverse problem, a gradient-based, non-linear algorithm (SNOPT) was used to estimate the transmissivity field on the experimental site scale through the inversion of steady-state hydraulic head measurements recorded at 22 boreholes during 8 sequential cross-hole pumping tests. We used the data from outcrops, borehole fracture measurements and interpretations of inter-well connectivities from interference test responses as initial models to trigger the inversion. Constraints on hydraulic conductivities, based on analytical interpretations of pumping tests, were also added to the inversion models.
    In addition, the efficiency of the adopted inverse algorithm enables us to increase dramatically the number of unknown parameters to investigate the influence of elementary discretisation on the reconstruction of the transmissivity fields in both synthetic and field studies. By following the above approach, transmissivity fields that produce hydrodynamic behaviours similar to the real head measurements were obtained. The inverted transmissivity fields show complex spatial heterogeneities, with highly conductive channels embedded in a low-transmissivity matrix region. The spatial trend of the main flow channels is in good agreement with that of the main fracture sets mapped on outcrops in the vicinity of the Terrieu site, suggesting that the hydraulic anisotropy is consistent with the structural anisotropy. These results from the inverse modelling enable the main flow paths to be located and their hydrodynamic properties to be estimated.
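    The structure of such an inversion can be illustrated on a drastically reduced model: a 1-D chain of unit-length segments in series, inverted for log-transmissivity by plain gradient descent with finite-difference gradients. Everything here (the 1-D geometry, the boundary heads, the optimiser) is a toy stand-in for the 2-D model and SNOPT solver used in the study.

```python
import math

def forward_heads(logT, h_left=10.0, h_right=0.0):
    """Steady-state heads at interior nodes of unit-length segments in
    series: a 1-D stand-in for the 2-D equivalent porous medium model."""
    R = [math.exp(-x) for x in logT]          # hydraulic resistance 1/T
    q = (h_left - h_right) / sum(R)           # uniform flux through the series
    heads, h = [], h_left
    for r in R[:-1]:
        h -= q * r
        heads.append(h)
    return heads

def misfit(logT, obs):
    return sum((m - o) ** 2 for m, o in zip(forward_heads(logT), obs))

def invert(obs, n, iters=2000, lr=0.05, eps=1e-6):
    """Gradient descent on log-transmissivity with finite-difference
    gradients -- a crude stand-in for the gradient-based SNOPT inversion."""
    logT = [0.0] * n
    for _ in range(iters):
        base = misfit(logT, obs)
        grad = []
        for i in range(n):
            pert = list(logT)
            pert[i] += eps
            grad.append((misfit(pert, obs) - base) / eps)
        logT = [x - lr * g for x, g in zip(logT, grad)]
    return logT

# Synthetic truth: a low-transmissivity segment between two conductive ones.
obs = forward_heads([math.log(1.0), math.log(0.2), math.log(1.0)])
est = invert(obs, 3)
```

    As in the field study, the heads constrain only transmissivity contrasts (a uniform shift of all log-T values leaves the heads unchanged), which is one reason prior information is needed to anchor the inversion.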

    Visualising Basins of Attraction for the Cross-Entropy and the Squared Error Neural Network Loss Functions

    Quantification of the stationary points and the associated basins of attraction of neural network loss surfaces is an important step towards a better understanding of neural network loss surfaces at large. This work proposes a novel method to visualise basins of attraction together with the associated stationary points via gradient-based random sampling. The proposed technique is used to perform an empirical study of the loss surfaces generated by two different error metrics: quadratic loss and entropic loss. The empirical observations confirm the theoretical hypothesis regarding the nature of neural network attraction basins. Entropic loss is shown to exhibit stronger gradients and fewer stationary points than quadratic loss, indicating that entropic loss has a more searchable landscape. Quadratic loss is shown to be more resilient to overfitting than entropic loss. Both losses are shown to exhibit local minima, but the number of local minima is shown to decrease with an increase in dimensionality. Thus, the proposed visualisation technique successfully captures the local minima properties exhibited by the neural network loss surfaces, and can be used for the purpose of fitness landscape analysis of neural networks. Comment: Preprint submitted to the Neural Networks journal.
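    The core idea of gradient-based random sampling can be shown on a one-dimensional surrogate: descend from uniform random starts and count how often each stationary point is reached. The double-well function below stands in for a neural network loss surface; it is an assumption for illustration, not the losses studied in the paper.

```python
import random

def grad_descent(df, x, lr=0.05, steps=500):
    """Plain gradient descent from a given start."""
    for _ in range(steps):
        x -= lr * df(x)
    return x

def sample_basins(df, lo, hi, n=200, seed=1):
    """Gradient-based random sampling: descend from uniform random starts
    and count how often each stationary point is reached -- the counts
    estimate the relative sizes of the basins of attraction."""
    rng = random.Random(seed)
    basins = {}
    for _ in range(n):
        x_star = round(grad_descent(df, rng.uniform(lo, hi)), 3)
        basins[x_star] = basins.get(x_star, 0) + 1
    return basins

# Toy double-well "loss surface" f(x) = (x^2 - 1)^2 with minima at x = +/- 1.
df = lambda x: 4.0 * x * (x * x - 1.0)
basins = sample_basins(df, -2.0, 2.0)
```

    On a symmetric double well the two basins are hit roughly equally often; on a real loss surface the hit counts reveal which minima dominate the landscape.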

    Insight into High-quality Aerodynamic Design Spaces through Multi-objective Optimization

    An approach to support the computational aerodynamic design process is presented and demonstrated through the application of a novel multi-objective variant of the Tabu Search optimization algorithm for continuous problems to the aerodynamic design optimization of turbomachinery blades. The aim is to improve the performance of a specific stage and ultimately of the whole engine. The integrated system developed for this purpose is described. This combines the optimizer with an existing geometry parameterization scheme and a well-established CFD package. The system’s performance is illustrated through case studies – one two-dimensional, one three-dimensional – in which flow characteristics important to the overall performance of turbomachinery blades are optimized. By showing the designer the trade-off surfaces between the competing objectives, this approach provides considerable insight into the design space under consideration and presents the designer with a range of different Pareto-optimal designs for further consideration. Special emphasis is given to the dimensionality in objective function space of the optimization problem, which seeks designs that perform well for a range of flow performance metrics. The resulting compressor blades achieve their high performance by exploiting complicated physical mechanisms successfully identified through the design process. The system can readily be run on parallel computers, substantially reducing wall-clock run times – a significant benefit when tackling computationally demanding design problems. Overall optimal performance is offered by compromise designs on the Pareto trade-off surface revealed through a true multi-objective design optimization test case. Bearing in mind the continuing rapid advances in computing power and the benefits discussed, this approach brings the adoption of such techniques in real-world engineering design practice a step closer.
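    The trade-off surface at the heart of this approach is just the non-dominated subset of evaluated designs. A minimal Pareto filter is sketched below; the (loss, weight) objective pairs for the candidate blades are hypothetical values for illustration.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of candidate designs: the trade-off surface
    presented to the designer."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical (aerodynamic loss, blade weight) pairs for five candidates.
candidates = [(1, 5), (2, 4), (3, 3), (2, 6), (4, 4)]
front = pareto_front(candidates)
```

    Designs dominated in both objectives, such as (2, 6) and (4, 4) here, are discarded; the remaining designs are the compromise candidates from which the designer chooses.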

    Performance Modelling and Optimisation of Multi-hop Networks

    A major challenge in the design of large-scale networks is to predict and optimise the total time and energy consumption required to deliver a packet from a source node to a destination node. Examples of such complex networks include wireless ad hoc and sensor networks which need to deal with the effects of node mobility, routing inaccuracies, higher packet loss rates, limited or time-varying effective bandwidth, energy constraints, and the computational limitations of the nodes. They also include more reliable communication environments, such as wired networks, that are susceptible to random failures, security threats and malicious behaviours which compromise their quality of service (QoS) guarantees. In such networks, packets traverse a number of hops that cannot be determined in advance and encounter non-homogeneous network conditions that have been largely ignored in the literature. This thesis examines analytical properties of packet travel in large networks and investigates the implications of some packet coding techniques on both QoS and resource utilisation. Specifically, we use a mixed jump and diffusion model to represent packet traversal through large networks. The model accounts for network non-homogeneity regarding routing and the loss rate that a packet experiences as it passes successive segments of a source to destination route. A mixed analytical-numerical method is developed to compute the average packet travel time and the energy it consumes. The model is able to capture the effects of increased loss rate in areas remote from the source and destination, variable rate of advancement towards destination over the route, as well as of defending against malicious packets within a certain distance from the destination. We then consider sending multiple coded packets that follow independent paths to the destination node so as to mitigate the effects of losses and routing inaccuracies. 
We study a homogeneous medium and obtain the time-dependent properties of the packet’s travel process, allowing us to compare the merits and limitations of coding, both in terms of delivery times and energy efficiency. Finally, we propose models that can assist in the analysis and optimisation of the performance of inter-flow network coding (NC). We analyse two queueing models for a router that carries out NC, in addition to its standard packet routing function. The approach is extended to the study of multiple hops, which leads to an optimisation problem that characterises the optimal time that packets should be held back in a router, waiting for coding opportunities to arise, so that the total packet end-to-end delay is minimised.
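    The final optimisation problem, choosing how long a router should hold a packet while waiting for a coding opportunity, can be illustrated with a one-dimensional search. The cost model below (linear holding cost plus an exponentially decaying penalty for transmitting uncoded, with an assumed partner arrival rate) is invented for the example and is not the thesis's queueing model; the golden-section search is a generic unimodal minimiser.

```python
import math

def golden_min(f, lo, hi, tol=1e-6):
    """Golden-section search for the minimiser of a unimodal function."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = hi - phi * (hi - lo), lo + phi * (hi - lo)
    while hi - lo > tol:
        if f(c) < f(d):
            hi, d = d, c
            c = hi - phi * (hi - lo)
        else:
            lo, c = c, d
            d = lo + phi * (hi - lo)
    return (lo + hi) / 2.0

# Assumed cost model: holding a packet for t units costs t, while the expected
# penalty for transmitting uncoded decays as coding partners arrive at rate lam.
lam, c_tx = 0.5, 10.0
cost = lambda t: t + c_tx * math.exp(-lam * t)
t_star = golden_min(cost, 0.0, 20.0)   # optimal holding time
```

    For this cost the optimum has the closed form t* = ln(c_tx * lam) / lam, which the numerical search recovers; the same search structure applies to holding-time costs that lack a closed form.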

    Theoretical Analysis of Bayesian Optimisation with Unknown Gaussian Process Hyper-Parameters

    Bayesian optimisation has gained great popularity as a tool for optimising the parameters of machine learning algorithms and models. Somewhat ironically, setting up the hyper-parameters of Bayesian optimisation methods is notoriously hard. While reasonable practical solutions have been advanced, they can often fail to find the best optima. Surprisingly, there is little theoretical analysis of this crucial problem in the literature. To address this, we derive a cumulative regret bound for Bayesian optimisation with Gaussian processes and unknown kernel hyper-parameters in the stochastic setting. The bound, which applies to the expected improvement acquisition function and sub-Gaussian observation noise, provides us with guidelines on how to design hyper-parameter estimation methods. A simple simulation demonstrates the importance of following these guidelines. Comment: 16 pages, 1 figure.
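    The expected improvement acquisition function to which the bound applies has a standard closed form given the GP posterior mean and standard deviation at a point. A minimal sketch (for minimisation, with an optional exploration margin xi) is:

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, best, xi=0.0):
    """EI (minimisation) at a point where the GP posterior has mean mu and
    standard deviation sigma; best is the incumbent observed minimum."""
    if sigma <= 0.0:
        return max(best - mu - xi, 0.0)
    z = (best - mu - xi) / sigma
    return (best - mu - xi) * norm_cdf(z) + sigma * norm_pdf(z)
```

    EI grows with posterior uncertainty and with the predicted improvement over the incumbent, which is why mis-estimated kernel hyper-parameters (which distort both mu and sigma) degrade the acquisition decisions the paper's regret bound accounts for.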