
    Efficient Detectors for MIMO-OFDM Systems under Spatial Correlation Antenna Arrays

    This work analyzes the performance of implementable detectors for the multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) technique under specific and realistic operating conditions, including antenna correlation and array configuration. A time-domain channel model is used to evaluate system performance under realistic communication channel and system scenarios, including different channel correlations, modulation orders, and antenna array configurations. A range of MIMO-OFDM detectors is analyzed with the aim of achieving high performance and high capacity at manageable computational complexity. Numerical Monte Carlo simulations (MCS) demonstrate the channel selectivity effect, while the impact of the number of antennas, the adoption of linear versus heuristic-based detection schemes, and the spatial correlation effect under linear and planar antenna arrays are analyzed in the MIMO-OFDM context.
    Comment: 26 pages, 16 figures, and 5 tables
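    As a concrete illustration of the detector family analyzed above, the sketch below builds a Kronecker-correlated flat-fading MIMO channel and applies a linear MMSE detector to QPSK symbols; the exponential correlation model, antenna counts, and SNR are illustrative assumptions rather than the paper's exact setup.

```python
# A minimal sketch, not the paper's code: linear MMSE detection over a
# Kronecker-correlated MIMO channel (assumed exponential correlation).
import numpy as np

rng = np.random.default_rng(0)
nt, nr, snr_db = 4, 4, 15       # assumed antenna counts and SNR
rho = 0.7                       # assumed adjacent-antenna correlation

def exp_corr(n, rho):
    """Exponential correlation matrix, a common model for uniform arrays."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

# Kronecker model: H = Rr^(1/2) Hw Rt^(1/2), with Hw i.i.d. Rayleigh
Lr = np.linalg.cholesky(exp_corr(nr, rho))
Lt = np.linalg.cholesky(exp_corr(nt, rho))
Hw = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
H = Lr @ Hw @ Lt.T

# Transmit unit-energy QPSK symbols and add white Gaussian noise
s = (rng.choice([-1.0, 1.0], nt) + 1j * rng.choice([-1.0, 1.0], nt)) / np.sqrt(2)
n0 = 10 ** (-snr_db / 10)
noise = np.sqrt(n0 / 2) * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))
y = H @ s + noise

# Linear MMSE detector: s_hat = (H^H H + n0 I)^{-1} H^H y
s_hat = np.linalg.solve(H.conj().T @ H + n0 * np.eye(nt), H.conj().T @ y)
s_dec = (np.sign(s_hat.real) + 1j * np.sign(s_hat.imag)) / np.sqrt(2)  # hard decisions
print("symbol errors:", np.count_nonzero(s_dec != s))
```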

    State-of-the-art in aerodynamic shape optimisation methods

    Aerodynamic optimisation has become an indispensable component of aerodynamic design over the past 60 years, with applications to aircraft, cars, trains, bridges, wind turbines, internal pipe flows, and cavities, among others, and is thus relevant to many facets of technology. With advances in computational power, automated design optimisation procedures have become more capable; however, there is ambiguity and bias throughout the literature regarding the relative performance of optimisation architectures and the algorithms they employ. This paper provides a balanced critical review of the dominant optimisation approaches that have been integrated with aerodynamic theory for the purpose of shape optimisation. A total of 229 papers, published in more than 120 journals and conference proceedings, have been classified into six optimisation algorithm approaches. The material cited includes some of the most well-established authors and publications in the field of aerodynamic optimisation. This paper aims to eliminate bias toward particular algorithms by analysing the limitations, drawbacks, and benefits of the most widely used optimisation approaches. The review provides comprehensive but straightforward insight for non-specialists and a reference detailing the current state of the field for specialist practitioners

    Geometric margin domain description with instance-specific margins

    Support vector domain description (SVDD) is a useful tool in data mining, used for analysing the within-class distribution of multi-class data and for ascertaining membership of a class with a known training distribution. An important property of the method is its inner-product-based formulation, which makes it applicable to reproducing kernel Hilbert spaces via the “kernel trick”. This practice relies on full knowledge of feature values in the training set, requiring incomplete data to be pre-processed via imputation, which sometimes introduces unnecessary or incorrect data into the classifier. Based on an existing study of support vector machine (SVM) classification with structurally missing data, we present a method for domain description of incomplete data without imputation, and generalise it to certain types of kernel space. We review statistical techniques for dealing with missing data, and explore the properties and limitations of the SVM procedure. We present two methods to achieve this aim: the first provides an input-space solution, and the second uses a given imputation of a dataset to calculate an improved solution. We apply our methods first to synthetic and commonly used datasets, then to non-destructive assay (NDA) data provided by a third party. We compare our classification machines to a standard SVDD boundary, and highlight where performance improves upon the use of imputation
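    For orientation, the sketch below solves the standard SVDD dual on synthetic, complete data using an off-the-shelf optimiser; the instance-specific margins and missing-data handling contributed by the thesis are not reproduced here, and the kernel and hyperparameters are assumed.

```python
# Minimal standard-SVDD sketch (assumed RBF kernel and hyperparameters).
# Dual: max  sum_i a_i K_ii - a^T K a   s.t.  sum_i a_i = 1, 0 <= a_i <= C
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.standard_normal((60, 2))   # toy, fully observed training data
C, gamma = 0.1, 0.5                # assumed penalty and kernel width

def rbf(A, B):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf(X, X)
n = len(X)
res = minimize(lambda a: -(a @ np.diag(K) - a @ K @ a),
               np.full(n, 1.0 / n), method="SLSQP",
               bounds=[(0.0, C)] * n,
               constraints={"type": "eq", "fun": lambda a: a.sum() - 1.0})
a = res.x

# Squared radius from an unbounded support vector (0 < a_k < C)
cand = np.where((a > 1e-6) & (a < C - 1e-6))[0]
k = cand[0] if cand.size else int(a.argmax())
R2 = K[k, k] - 2 * a @ K[:, k] + a @ K @ a

def inside(x_new):
    """True if x_new lies within the learned domain description."""
    kx = rbf(np.atleast_2d(x_new), X)[0]
    d2 = 1.0 - 2 * a @ kx + a @ K @ a   # K(x, x) = 1 for the RBF kernel
    return d2 <= R2

print(inside(np.zeros(2)), inside(np.array([5.0, 5.0])))
```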

    Investigations on corrosion monitor reliability, calibration, and coverage

    Thickness loss due to internal corrosion and erosion is a critical issue in ferromagnetic steel structures that can cause catastrophic failures. Ultrasonic thickness gauges are widely used for the detection of wall thickness. Recently, permanently installed ultrasonic sensors have become popular for the inspection of areas suspected to undergo wall thickness loss. However, these are limited by high cost and the requirement for coupling agents. To address these problems, a novel, cost-effective smart corrosion monitor based on the magnetic eddy current technique is developed in this research. The performance and reliability of the monitor in tracking internal wall thickness loss are tested successfully through accelerated and real-life aging corrosion tests. Due to the handling and safety issues associated with the powerful magnets in magnetic techniques, a particle swarm-based optimisation method is proposed and validated through two test cases. The results indicate that the area of the magnetic excitation circuit could be reduced by 38% without compromising sensitivity. The reliability of the corrosion monitor is improved by utilising an active redundancy approach to identify and isolate faults in sensors. A real-life aging test is conducted for eight months in an ambient environment through an accelerated corrosion setup. The results obtained from the two corrosion monitors confirm that the proposed corrosion monitor is reliable for tracking thickness loss. The corrosion monitor is found to be stable against environmental variations. A new in-situ calibration method based on a zero-crossing frequency feature is introduced to evaluate the in-situ relative permeability. The thickness of the test specimen could be estimated with an accuracy of ±0.6 mm. The series of studies conducted in the project reveals that the magnetic corrosion monitor has the capability to detect and quantify uniform wall thickness loss reliably
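    The particle swarm optimisation used for magnet-circuit sizing can be sketched generically as below; the "area" objective and the surrogate sensitivity constraint are fictitious stand-ins, since the thesis's electromagnetic model is not given in the abstract.

```python
# Generic PSO sketch: minimise a stand-in 'circuit area' subject to a
# penalised 'sensitivity' floor. Both functions are assumptions, not the
# thesis's magnetic model.
import numpy as np

rng = np.random.default_rng(2)

def cost(x):
    area = x[:, 0] * x[:, 1]
    sensitivity = np.sqrt(x[:, 0]) + 0.5 * x[:, 1]          # fictitious
    return area + 1e3 * np.maximum(0.0, 2.0 - sensitivity) ** 2

n, dim, iters = 30, 2, 200
lo, hi = 0.1, 10.0
x = rng.uniform(lo, hi, (n, dim))
v = np.zeros((n, dim))
pbest, pcost = x.copy(), cost(x)
gbest = pbest[pcost.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5   # inertia and acceleration coefficients
for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    c = cost(x)
    improved = c < pcost
    pbest[improved], pcost[improved] = x[improved], c[improved]
    gbest = pbest[pcost.argmin()].copy()

print("best design:", gbest, "cost:", pcost.min())
```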

    Simulation, optimization and instrumentation of agricultural biogas plants

    During the last two decades, the production of renewable energy by anaerobic digestion (AD) in biogas plants has become increasingly popular due to its applicability to a great variety of organic material, from energy crops and animal waste to the organic fraction of Municipal Solid Waste (MSW), and to the relative simplicity of AD plant designs. Thus, a whole new biogas market has emerged in Europe, strongly supported by European and national funding and remuneration schemes. Nevertheless, stable and efficient operation and control of biogas plants can be challenging, due to the high complexity of the biochemical AD process, varying substrate quality, and a lack of reliable online instrumentation. In addition, governmental support for biogas plants will decrease in the long run and the substrate market will become highly competitive. The principal aim of the research presented in this thesis is to achieve a substantial improvement in the operation of biogas plants. First, a methodology for substrate inflow optimization of full-scale biogas plants is developed, based on commonly measured process variables and using dynamic simulation models as well as computational intelligence (CI) methods. This methodology, which is applicable to a broad range of biogas plants, is followed by an evaluation of existing online instrumentation for biogas plants and the development of a novel UV/vis spectroscopic online measurement system for volatile fatty acids (VFA). This new measurement system, which uses powerful machine learning techniques, provides a substantial improvement in online process monitoring for biogas plants. The methodologies developed and results achieved in the areas of simulation and optimization were validated at a full-scale agricultural biogas plant, showing that global optimization of the substrate inflow based on dynamic simulation models is able to improve the yearly profit of a biogas plant by up to 70%. Furthermore, the validation of the newly developed online measurement of VFA concentration at an industrial biogas plant showed that a measurement accuracy of 88% is possible using UV/vis spectroscopic probes
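    As a hedged illustration of spectra-to-VFA regression, the sketch below fits partial least squares (a common chemometric baseline, not necessarily the model used in the thesis) to synthetic UV/vis absorbance data.

```python
# Illustrative only: PLS regression from synthetic 'UV/vis spectra' to a
# synthetic VFA concentration. Model choice and data are assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_samples, n_wavelengths = 200, 256
spectra = rng.random((n_samples, n_wavelengths))
# Fake ground truth: VFA driven by one absorbance band plus noise
vfa = spectra[:, 40:60].sum(axis=1) + 0.1 * rng.standard_normal(n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(spectra, vfa, random_state=0)
model = PLSRegression(n_components=8).fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```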

    Adaptive algorithms for history matching and uncertainty quantification

    Numerical reservoir simulation models are the basis for many decisions regarding the prediction, optimisation, and improvement of production performance of oil and gas reservoirs. Because of uncertainty in model parameters, history matching is required to calibrate models to the dynamic behaviour of the reservoir. Finally, a set of history-matched models is used for reservoir performance prediction and for economic and risk assessment of different development scenarios. Various algorithms are employed to search and sample parameter space in history matching and uncertainty quantification problems. The choice of algorithm and its implementation, configured through a number of control parameters, have a significant impact on the effectiveness and efficiency of the search and thus on the quality of results and the speed of the process. This thesis is concerned with the investigation, development, and implementation of improved and adaptive algorithms for reservoir history matching and uncertainty quantification problems. A set of evolutionary algorithms is considered and applied to history matching. The shared characteristic of the applied algorithms is adaptation by balancing exploration and exploitation of the search space, which can lead to improved convergence and diversity. This includes the use of estimation of distribution algorithms, which implicitly adapt their search mechanism to the characteristics of the problem. Hybridising them with genetic algorithms, multi-objective sorting algorithms, and real-coded, multi-model and multivariate Gaussian-based models can help these algorithms adapt even further and improve their performance. Finally, diversity measures are used to develop an explicitly adaptive algorithm that controls its performance based on the structure of the problem. Uncertainty quantification in a Bayesian framework can be carried out by resampling the search space using Markov chain Monte Carlo sampling algorithms. Common criticisms of these samplers are their low efficiency and their need for control-parameter tuning. A Metropolis-Hastings sampling algorithm with an adaptive multivariate Gaussian proposal distribution and a K-nearest neighbour approximation has been developed and applied
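    The adaptive Metropolis-Hastings sampler named above can be sketched compactly: the multivariate Gaussian proposal covariance is re-estimated from the chain history, in the spirit of Haario et al.'s adaptive Metropolis. The toy two-dimensional target stands in for a reservoir misfit surface, and the K-nearest-neighbour approximation is omitted.

```python
# Adaptive Metropolis sketch: Gaussian proposal covariance adapted from
# the chain history. The target below is a toy, not a reservoir model.
import numpy as np

rng = np.random.default_rng(4)

def log_target(x):
    # Banana-shaped toy posterior; in history matching this would be the
    # (negative) misfit between simulated and observed production data.
    return -0.5 * (x[0] ** 2 / 10.0 + (x[1] + 0.1 * x[0] ** 2) ** 2)

dim, n_steps, adapt_start = 2, 20000, 500
sd = 2.38 ** 2 / dim           # classic adaptive-Metropolis scaling
eps = 1e-6                     # jitter keeping the proposal non-degenerate

chain = np.zeros((n_steps, dim))
x = np.zeros(dim)
lp = log_target(x)
cov = np.eye(dim)
accepted = 0
for t in range(n_steps):
    if t > adapt_start:
        cov = sd * np.cov(chain[:t].T) + eps * np.eye(dim)
    prop = rng.multivariate_normal(x, cov)
    lp_prop = log_target(prop)
    if np.log(rng.random()) < lp_prop - lp:    # Metropolis accept/reject
        x, lp = prop, lp_prop
        accepted += 1
    chain[t] = x

print("acceptance rate:", accepted / n_steps)
```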

    Spatio-temporal prediction of wind fields

    Short-term wind and wind power forecasts are required for the reliable and economic operation of power systems with significant wind power penetration. This thesis presents new statistical techniques for producing forecasts at multiple locations using spatio-temporal information. Forecast horizons of up to 6 hours are considered, for which statistical methods generally outperform physical models. Several methods for producing hourly wind speed and direction forecasts from 1 to 6 hours ahead are presented, in addition to a method for producing five-minute-ahead probabilistic wind power forecasts. The former have applications in areas such as energy trading and defining reserve requirements, and the latter in power system balancing and wind farm control. Spatio-temporal information is captured by vector autoregressive (VAR) models that incorporate wind direction by modelling the wind time series using complex numbers. In a further development, the VAR coefficients are replaced with coefficient functions in order to capture the dependence of the predictor on external variables, such as the time of year or wind direction. The complex-valued approach is found to produce accurate speed predictions, and the conditional predictors offer improved performance with little additional computational cost.
    Two non-linear algorithms have been developed for wind forecasting. In the first, the predictor is derived from an ensemble of particle swarm optimised candidate solutions. This approach is low cost and requires very little training data but fails to capitalise on spatial information. The second approach uses kernelised forms of popular linear algorithms, which are shown to produce more accurate forecasts than their linear equivalents for multi-step-ahead prediction. Finally, very-short-term wind power forecasting is considered. Five-minute-ahead parametric probabilistic forecasts are produced by modelling the predictive distribution as logit-normal and forecasting its parameters using a sparse-VAR (sVAR) approach. Development of the sVAR is motivated by the desire to produce forecasts on a large spatial scale, i.e. hundreds of locations, which is critical during periods of high instantaneous wind penetration
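    The complex-valued VAR idea can be illustrated in a few lines: wind at each site is encoded as speed times exp(i times direction), and a first-order model is fitted by ordinary least squares. The data, model order, and number of sites below are assumptions for illustration only.

```python
# Sketch of a complex-valued VAR(1) for wind: z_t = A z_{t-1} + e_t,
# where z encodes speed and direction jointly. Synthetic data, assumed order.
import numpy as np

rng = np.random.default_rng(5)
n_sites, T = 5, 1000
speed = 8.0 + rng.standard_normal((T, n_sites))             # m/s, synthetic
direction = 0.5 + 0.1 * rng.standard_normal((T, n_sites))   # radians
z = speed * np.exp(1j * direction)                          # complex encoding

# Least-squares fit of the coefficient matrix A (complex-valued)
Z0, Z1 = z[:-1], z[1:]
A = np.linalg.lstsq(Z0, Z1, rcond=None)[0].T

# One-step-ahead forecast for all sites at once
z_next = A @ z[-1]
speed_hat, dir_hat = np.abs(z_next), np.angle(z_next)
print("forecast speeds:", np.round(speed_hat, 2))
```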

    Parametric array calibration

    The subject of this thesis is the development of parametric methods for the calibration of array shape errors. Two physical scenarios are considered: online calibration (self-calibration) using far-field sources and offline calibration using near-field sources. Maximum likelihood (ML) estimators are employed to estimate the errors. However, the well-known computational complexity of optimizing the objective functions of ML estimators demands effective and efficient optimization algorithms. A novel space-alternating generalized expectation-maximization (SAGE)-based algorithm is developed to optimize the objective function of the conditional maximum likelihood (CML) estimator for far-field online calibration. Through data augmentation, joint direction of arrival (DOA) estimation and array calibration can be carried out by a computationally simple search procedure. Numerical experiments show that the proposed method outperforms the existing method for closely located signal sources and is robust to large shape errors. In addition, the accuracy of the proposed procedure attains the Cramér-Rao bound (CRB). A global optimization algorithm, particle swarm optimization (PSO), is employed to optimize the objective function of the unconditional maximum likelihood (UML) estimator for the far-field online calibration and the near-field offline calibration. A new technique, decaying diagonal loading (DDL), is proposed to enhance the performance of PSO at high signal-to-noise ratio (SNR) by dynamically lowering the effective SNR, based on the counter-intuitive observation that the global optimum of the UML objective function is more prominent at lower SNR. Numerical simulations demonstrate that the UML estimator optimized by PSO with DDL is optimally accurate, robust to large shape errors, and free of the initialization problem. In addition, the DDL technique is applicable to a wide range of array processing problems where the UML estimator is employed, and it can be coupled with different global optimization algorithms
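    A sketch of the decaying diagonal loading idea follows: a loading term lambda*I added to the sample covariance is shrunk as the particle swarm iterates, so the early search sees a smoother, low-SNR-like objective. The one-source covariance-fitting criterion below is a simplified stand-in for the UML objective, and all parameters are assumed.

```python
# DDL sketch: PSO over a single DOA with a diagonally loaded sample
# covariance whose loading decays each iteration. Simplified stand-in
# objective; not the thesis's UML estimator.
import numpy as np

rng = np.random.default_rng(6)
m, snaps, true_doa = 8, 200, 0.4   # sensors, snapshots, true DOA (rad)

def steer(theta):
    return np.exp(1j * np.pi * np.arange(m) * np.sin(theta))

s = (rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal((m, snaps)) + 1j * rng.standard_normal((m, snaps)))
X = np.outer(steer(true_doa), s) + noise
R = X @ X.conj().T / snaps          # sample covariance (high SNR)

def misfit(theta, lam):
    """Rank-one covariance fit with diagonal loading lam * I."""
    a = steer(theta)
    Rl = R + lam * np.eye(m)
    p = np.real(a.conj() @ Rl @ a) / m ** 2   # crude power estimate
    return np.linalg.norm(Rl - p * np.outer(a, a.conj()), "fro")

# PSO over theta with geometrically decaying loading
n, iters, lam, decay = 20, 60, 1.0, 0.9
x = rng.uniform(-np.pi / 2, np.pi / 2, n)
v = np.zeros(n)
pb = x.copy()
pc = np.array([misfit(t, lam) for t in pb])
g = pb[pc.argmin()]
for _ in range(iters):
    lam *= decay                                   # the "decaying" in DDL
    pc = np.array([misfit(t, lam) for t in pb])    # re-score under new lam
    r1, r2 = rng.random(n), rng.random(n)
    v = 0.7 * v + 1.5 * r1 * (pb - x) + 1.5 * r2 * (g - x)
    x = np.clip(x + v, -np.pi / 2, np.pi / 2)
    c = np.array([misfit(t, lam) for t in x])
    better = c < pc
    pb[better], pc[better] = x[better], c[better]
    g = pb[pc.argmin()]

print("estimated DOA:", g, "true:", true_doa)
```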