56 research outputs found

    Performance trade-offs in sequential matrix diagonalisation search strategies

    Recently, a selection of sequential matrix diagonalisation (SMD) algorithms has been introduced which approximate the polynomial eigenvalue decomposition of parahermitian matrices. These variants differ only in the search methods used to bring energy onto the zero-lag. Here we analyse the search methods in terms of their computational complexities for different sizes of parahermitian matrices, which are verified through simulated execution times. Another important factor for these search methods is their ability to transfer energy. Simulations show that the more computationally complex search methods transfer a greater proportion of the off-diagonal energy onto the zero-lag over a selected range of parahermitian matrix sizes. Despite their higher cost per iteration, experiments indicate that the more complex search algorithms still converge faster in real time.
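
    As a rough illustration of the trade-off studied here, the sketch below contrasts a cheap maximum-element search with a costlier column-norm search over a parahermitian matrix stored as a NumPy array R[tau, m, n]. This is a simplified sketch under assumed conventions; the array layout, function names and treatment of diagonal energy are illustrative and not taken from the paper.

        import numpy as np

        def max_element_search(R):
            # Cheap search: single largest off-diagonal element
            # across all lags of R[tau, m, n].
            S = np.abs(R)
            for t in range(S.shape[0]):
                np.fill_diagonal(S[t], 0.0)  # ignore diagonal energy
            return np.unravel_index(np.argmax(S), S.shape)

        def max_column_norm_search(R):
            # Costlier search: pick the column with the greatest
            # off-diagonal energy summed over all lags, then find the
            # dominant lag and row within that column.
            S = np.abs(R) ** 2
            for t in range(S.shape[0]):
                np.fill_diagonal(S[t], 0.0)
            n = int(np.argmax(S.sum(axis=(0, 1))))
            tau, m = np.unravel_index(np.argmax(S[:, :, n]), S.shape[:2])
            return tau, m, n

    The column-norm search does more work per iteration but can move more off-diagonal energy per sweep, which is the trade-off the simulated execution times quantify.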

    Mathematical tools for processing broadband multi-sensor signals

    Spatial information in broadband array signals is embedded in the relative delays with which sources illuminate different sensors. Therefore, second order statistics, on which cost functions such as the mean square error rest, must include such delays. Typically a space-time covariance matrix therefore arises, which can be represented as a Laurent polynomial matrix. The optimisation of a cost function then requires extending the utility of the eigenvalue decomposition from narrowband covariance matrices to the broadband case of operating on a space-time covariance matrix. This overview paper summarises efforts in performing such factorisations, and demonstrates via the exemplar application of a broadband beamformer how well-known narrowband solutions can thus be extended to the broadband case using polynomial matrices and their factorisations.
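
    To make the space-time covariance matrix concrete, here is a minimal NumPy sketch (function name and interface are illustrative assumptions) that estimates the coefficient R[tau] = E{x[n] x^H[n - tau]} from an M x N block of sensor data:

        import numpy as np

        def space_time_covariance(x, max_lag):
            # x: M x N array, M sensors by N samples.
            # Returns R of shape (2*max_lag + 1, M, M), with lag
            # -max_lag at index 0 and lag zero at the centre index.
            M, N = x.shape
            R = np.zeros((2 * max_lag + 1, M, M), dtype=complex)
            for i, tau in enumerate(range(-max_lag, max_lag + 1)):
                if tau >= 0:
                    R[i] = x[:, tau:] @ x[:, :N - tau].conj().T / (N - tau)
                else:
                    R[i] = x[:, :N + tau] @ x[:, -tau:].conj().T / (N + tau)
            return R

    The coefficients satisfy R[-tau] = R[tau]^H, the parahermitian symmetry that the factorisations summarised in the paper exploit.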

    Extending narrowband descriptions and optimal solutions to broadband sensor arrays

    This overview paper motivates the description of broadband sensor array problems by polynomial matrices, directly extending notation that is familiar from the characterisation of narrowband problems. To admit optimal solutions, the approach relies on extending the utility of the eigen- and singular value decompositions by finding decompositions of such polynomial matrices. Particularly important is the factorisation of parahermitian polynomial matrices, including the space-time covariance matrices that model the second order statistics of broadband sensor array data. The paper summarises recent findings on the existence and uniqueness of the eigenvalue decomposition of such parahermitian polynomial matrices, demonstrates some algorithms that implement such factorisations, and highlights key applications where such techniques can provide advantages over state-of-the-art solutions.
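
    The parahermitian property R(z) = R~(z) at the heart of these factorisations reduces to a simple symmetry of the coefficient matrices, which the following small check (illustrative, not from the paper) makes explicit:

        import numpy as np

        def is_parahermitian(R, tol=1e-10):
            # R holds coefficient matrices for lags -max_lag..max_lag,
            # zero lag at the centre index; the parahermitian property
            # R(z) = R~(z) means R[-tau] = R[tau]^H.
            L = R.shape[0]
            return all(np.allclose(R[L - 1 - i], R[i].conj().T, atol=tol)
                       for i in range(L))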

    Polynomial matrix eigenvalue decomposition techniques for multichannel signal processing

    Polynomial eigenvalue decomposition (PEVD) is an extension of the eigenvalue decomposition (EVD) to para-Hermitian polynomial matrices, and it has been shown to be a powerful tool for broadband extensions of narrowband signal processing problems. In the context of broadband sensor arrays, the PEVD allows the para-Hermitian matrix that results from the calculation of a space-time covariance matrix of the convolutively mixed signals to be diagonalised. Once the matrix is diagonalised, not only can the correlation between different sensor signals be removed, but the signal and noise subspaces can also be identified. This process is referred to as broadband subspace decomposition, and it plays a very important role in many areas that require signal separation techniques for multichannel convolutive mixtures, such as speech recognition, radar clutter suppression and underwater acoustics. The multiple shift second order sequential best rotation (MS-SBR2) algorithm, built on the well-established SBR2 algorithm, is proposed to compute the PEVD of para-Hermitian matrices. By annihilating multiple off-diagonal elements per iteration, the MS-SBR2 algorithm shows a potential advantage over its predecessor (SBR2) in terms of computational speed. Furthermore, the MS-SBR2 algorithm permits us to minimise the order growth of polynomial matrices by shifting rows (or columns) in the same direction across iterations, which can potentially reduce the computational load of the algorithm. The effectiveness of the proposed MS-SBR2 algorithm is demonstrated on various para-Hermitian matrix examples, including randomly generated matrices of different sizes and matrices generated from source models with different dynamic ranges and relations between the sources' power spectral densities. A worked example is presented to demonstrate how the MS-SBR2 algorithm can be used to strongly decorrelate a set of convolutively mixed signals. Furthermore, the performance metrics and computational complexity of MS-SBR2 are analysed and compared to other existing PEVD algorithms by means of numerical examples. Finally, two potential applications of the MS-SBR2 algorithm, including multichannel spectral factorisation and the decoupling of broadband multiple-input multiple-output (MIMO) systems, are demonstrated in this dissertation.
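
    For orientation, the sketch below renders one iteration of the classic single-shift SBR2 step on which MS-SBR2 builds: search for the dominant off-diagonal element, delay it onto the zero lag, then annihilate it with a Jacobi rotation. It is a toy version under stated assumptions; in particular, a circular lag shift stands in for the true delay step, which would grow the polynomial order rather than wrap, and none of this is the dissertation's code.

        import numpy as np

        def sbr2_iteration(R):
            # R[tau, m, n]: para-Hermitian polynomial matrix with the
            # zero lag at the centre index T; modified in place.
            L, M, _ = R.shape
            T = L // 2
            # Search: dominant off-diagonal element over all lags.
            S = np.abs(R)
            for t in range(L):
                np.fill_diagonal(S[t], 0.0)
            tau, m, n = np.unravel_index(np.argmax(S), S.shape)
            # Delay channel n so the dominant element lands on lag zero
            # (circular shift as a simplification of the true delay).
            shift = tau - T
            R[:, n, :] = np.roll(R[:, n, :], shift, axis=0)
            R[:, :, n] = np.roll(R[:, :, n], -shift, axis=0)
            # Jacobi rotation annihilating R[T, m, n] on the zero lag.
            a, d, b = R[T, m, m].real, R[T, n, n].real, R[T, m, n]
            theta = 0.5 * np.arctan2(2.0 * np.abs(b), a - d)
            c, s = np.cos(theta), np.sin(theta) * np.exp(1j * np.angle(b))
            Q = np.eye(M, dtype=complex)
            Q[m, m], Q[m, n], Q[n, m], Q[n, n] = c, s, -np.conj(s), c
            for t in range(L):
                R[t] = Q @ R[t] @ Q.conj().T
            return R

    MS-SBR2 differs by locating and shifting several such elements per iteration rather than one, which is where its speed advantage originates.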

    The University Defence Research Collaboration In Signal Processing

    This chapter describes the development of algorithms for the automatic detection of anomalies from multi-dimensional, undersampled and incomplete datasets. The challenge in this work is to identify and classify behaviours as normal or abnormal, safe or threatening, from an irregular and often heterogeneous sensor network. Many defence and civilian applications can be modelled as complex networks of interconnected nodes with unknown or uncertain spatio-temporal relations. The behaviour of such heterogeneous networks can exhibit dynamic properties, reflecting evolution both in network structure (new nodes appearing and existing nodes disappearing) and in inter-node relations. The UDRC work has addressed not only the detection of anomalies, but also the identification of their nature and their statistical characteristics. Normal patterns and changes in behaviour have been incorporated to provide an acceptable balance between true positive rate, false positive rate, performance and computational cost. Data quality measures have been used to ensure the models of normality are not corrupted by unreliable and ambiguous data. Exploiting the context of each node's activity in complex networks offers an even more efficient anomaly detection mechanism. This has allowed the development of efficient approaches which not only detect anomalies but also go on to classify their behaviour.

    The University Defence Research Collaboration In Signal Processing: 2013-2018

    Signal processing is an enabling technology crucial to all areas of defence and security. It is called for whenever humans and autonomous systems are required to interpret data (i.e. the signal) output from sensors. This leads to the production of the intelligence on which military outcomes depend. Signal processing should be timely, accurate and suited to the decisions to be made. When performed well it is critical, battle-winning and probably the most important weapon you have never heard of. With the plethora of sensors and data sources emerging in the future network-enabled battlespace, sensing is becoming ubiquitous. This makes signal processing more complicated but also brings great opportunities. The second phase of the University Defence Research Collaboration in Signal Processing was set up to meet these complex problems head-on while taking advantage of the opportunities. Its unique structure combines two multi-disciplinary academic consortia, in which many researchers can approach different aspects of a problem, with baked-in industrial collaboration enabling early commercial exploitation. This phase of the UDRC will have been running for five years by the time it completes in March 2018, with remarkable results. This book aims to present those accomplishments and advances in a style accessible to stakeholders, collaborators and exploiters.

    Adaptive heterogeneous parallelism for semi-empirical lattice dynamics in computational materials science.

    With the variability in performance of the multitude of parallel environments available today, the conceptual overhead created by the need to anticipate runtime information to make design-time decisions has become overwhelming. Performance-critical applications and libraries carry implicit assumptions based on incidental metrics that are not portable to emerging computational platforms or even to alternative contemporary architectures. Furthermore, the significance of runtime concerns such as makespan, energy efficiency and fault tolerance depends on the situational context. This thesis presents a case study in the application of both Mattson's prescriptive pattern-oriented approach and the more principled structured parallelism formalism to the computational simulation of inelastic neutron scattering spectra on hybrid CPU/GPU platforms. The original ad hoc implementation as well as new pattern-based and structured implementations are evaluated for relative performance and scalability. Two new structural abstractions are introduced to facilitate adaptation by lazy optimisation and runtime feedback. A deferred-choice abstraction represents a unified space of alternative structural program variants, allowing static adaptation through model-specific exhaustive calibration with regard to the extrafunctional concerns of runtime, average instantaneous power and total energy usage. Instrumented queues serve as a mechanism for structural composition and provide a representation of extrafunctional state that allows realisation of a market-based decentralised coordination heuristic for competitive resource allocation and of the Lyapunov drift algorithm for cooperative scheduling.
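
    One way to picture the deferred-choice abstraction is the toy Python sketch below (hypothetical names throughout): alternative structural variants of the same computation are held open, and a variant is committed to only after an exhaustive calibration pass. The thesis calibrates against runtime, power and energy; this sketch times runtime only.

        import time

        class DeferredChoice:
            # Holds alternative implementations of one computation and
            # defers the choice among them until after calibration.
            def __init__(self, variants):
                self.variants = variants  # dict: name -> callable
                self.chosen = None

            def calibrate(self, sample_input):
                # Exhaustively time every variant on a sample input
                # and commit to the fastest.
                timings = {}
                for name, fn in self.variants.items():
                    start = time.perf_counter()
                    fn(sample_input)
                    timings[name] = time.perf_counter() - start
                self.chosen = min(timings, key=timings.get)
                return timings

            def __call__(self, x):
                if self.chosen is None:
                    raise RuntimeError("call calibrate() before use")
                return self.variants[self.chosen](x)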

    Novel sampling techniques for reservoir history matching optimisation and uncertainty quantification in flow prediction

    Modern reservoir management has an increasing focus on accurately predicting the likely range of field recoveries. A variety of assisted history matching techniques has been developed across the research community concerned with this topic. These techniques are based on obtaining multiple models that closely reproduce the historical flow behaviour of a reservoir. The resulting set of history-matched models is then used to quantify uncertainty in predicting the future performance of the reservoir and to provide economic evaluations for different field development strategies. The key step in this workflow is to employ algorithms that sample the parameter space in an efficient but appropriate manner. The choice of algorithm affects both how fast a model is obtained and how well the model fits the production data. The sampling techniques developed to date include, among others, gradient-based methods, evolutionary algorithms and the ensemble Kalman filter (EnKF).

    This thesis has investigated and further developed the following sampling and inference techniques: Particle Swarm Optimisation (PSO), Hamiltonian Monte Carlo and Population Markov Chain Monte Carlo. The inspected techniques are capable of navigating the parameter space and producing history-matched models that can be used to quantify the uncertainty in the forecasts in a faster and more reliable way. The analysis of these techniques, compared with the Neighbourhood Algorithm (NA), has shown how the different techniques affect the predicted recovery from petroleum systems and the benefits of the developed methods over the NA.

    The history matching problem is multi-objective in nature, with the production data possibly consisting of multiple types, coming from different wells and collected at different times. Multiple objectives can be constructed from these data and explicitly optimised in a multi-objective scheme. The thesis has extended PSO to handle multi-objective history matching problems in which a number of possibly conflicting objectives must be satisfied simultaneously. The benefits and efficiency of the innovative multi-objective particle swarm optimisation scheme (MOPSO) are demonstrated for synthetic reservoirs. It is shown that the MOPSO procedure can provide a substantial improvement in finding a diverse set of good-fitting models with fewer of the very costly forward simulation runs than the standard single-objective case, depending on how the objectives are constructed.

    The thesis has also shown how to tackle a large number of unknown parameters through the coupling of high-performance global optimisation algorithms, such as PSO, with model reduction techniques such as kernel principal component analysis (PCA) for parameterising spatially correlated random fields. The results of the PSO-PCA coupling applied to a recent SPE benchmark history matching problem demonstrate that the approach is applicable to practical problems. A comparison of PSO with the EnKF data assimilation method has been carried out and has concluded that both methods obtain comparable results on the example case. This reinforces the need to use a range of assisted history matching algorithms for greater confidence in predictions.
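
    To fix ideas, here is a minimal, generic particle swarm optimisation sketch of the kind used for history matching; misfit would wrap a costly reservoir simulation run and compare its output against production data. Names, defaults and the box-bounded parameterisation are illustrative assumptions, not the thesis code.

        import numpy as np

        def pso_minimise(misfit, bounds, n_particles=20, n_iter=50,
                         w=0.7, c1=1.4, c2=1.4, seed=0):
            # bounds: array of shape (dim, 2), [low, high] per parameter.
            rng = np.random.default_rng(seed)
            lo, hi = bounds[:, 0], bounds[:, 1]
            dim = len(lo)
            x = rng.uniform(lo, hi, (n_particles, dim))  # positions
            v = np.zeros_like(x)                         # velocities
            pbest = x.copy()
            pbest_f = np.array([misfit(p) for p in x])
            g = pbest[np.argmin(pbest_f)].copy()         # global best
            for _ in range(n_iter):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([misfit(p) for p in x])
                improved = f < pbest_f
                pbest[improved], pbest_f[improved] = x[improved], f[improved]
                g = pbest[np.argmin(pbest_f)].copy()
            return g, pbest_f.min()

    The multi-objective variant developed in the thesis (MOPSO) would replace the single misfit with a vector of objectives and maintain an archive of non-dominated particles in place of the single global best.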