
    Kernel-Based Framework for Multitemporal and Multisource Remote Sensing Data Classification and Change Detection

    The multitemporal classification of remote sensing images is a challenging problem, in which the efficient combination of different sources of information (e.g., temporal, contextual, or multisensor) can improve the results. In this paper, we present a general framework based on kernel methods for the integration of heterogeneous sources of information. Using the theoretical principles in this framework, three main contributions are presented. First, a novel family of kernel-based methods for multitemporal classification of remote sensing images is presented. The second contribution is the development of nonlinear kernel classifiers for the well-known difference and ratioing change detection methods by formulating them in an adequate high-dimensional feature space. Finally, the presented methodology allows the integration of contextual information and multisensor images with different levels of nonlinear sophistication. The binary support vector (SV) classifier and the one-class SV domain description classifier are evaluated by using both linear and nonlinear kernel functions. Good performance on synthetic and real multitemporal classification scenarios illustrates the generalization of the framework and the capabilities of the proposed algorithms.
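
    A minimal sketch of one common instantiation of the kernel-combination idea described in this abstract, assuming scikit-learn, RBF base kernels and synthetic two-date data; the weighting scheme, parameter values and variable names are illustrative and not the authors' exact formulation.

    # Composite (weighted-sum) kernel over two acquisition dates, fed to a
    # binary SVM through a precomputed Gram matrix. Synthetic data; illustrative only.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics.pairwise import rbf_kernel

    rng = np.random.default_rng(0)
    X_t1 = rng.normal(size=(200, 6))                            # spectral features, date 1
    X_t2 = X_t1 + rng.normal(scale=0.3, size=(200, 6))          # spectral features, date 2
    y = (np.linalg.norm(X_t2 - X_t1, axis=1) > 0.9).astype(int) # change / no-change labels

    mu = 0.5  # relative weight given to each date
    K_train = mu * rbf_kernel(X_t1, gamma=0.5) + (1 - mu) * rbf_kernel(X_t2, gamma=0.5)

    clf = SVC(kernel="precomputed", C=10.0)
    clf.fit(K_train, y)
    print("training accuracy:", clf.score(K_train, y))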

    Quantum Nescimus: Improving the characterization of quantum systems from limited information

    We are currently approaching the point where quantum systems with 15 or more qubits will be controllable with high levels of coherence over long timescales. One of the fundamental problems that has been identified is that, as the number of qubits increases to these levels, there is currently no clear way to efficiently use the information that can be obtained from such a system to make diagnostic inferences and to enable improvements in the underlying quantum gates. Even with systems of only a few qubits, the exponential scaling in resources required by techniques such as quantum tomography or gate-set tomography will render these techniques impractical. Randomized benchmarking (RB) is a technique that will scale in a practical way with these increased system sizes. Although RB provides only a partial characterization of the quantum system, recent advances in the protocol and in the interpretation of the results of such experiments confirm that the information obtained is helpful in improving the control and verification of such processes. This thesis examines and extends the techniques of RB, including practical analysis of systems affected by low-frequency noise, extending techniques to allow the anisotropy of noise to be isolated, and showing how additional gates required for universal computation can be added to the protocol and thus benchmarked. Finally, it begins to explore the use of machine learning to aid in the ability to characterize, verify and validate noise in such systems, demonstrating by way of example how machine learning can be used to explore the edge between quantum non-locality and realism.
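
    A hedged sketch of the standard RB analysis step referred to above, assuming numpy and scipy: average survival probability versus Clifford sequence length is fit to an exponential decay p(m) = A p^m + B, and the decay constant is converted to an average error per gate. The data below are synthetic and all parameter values are illustrative.

    # Fit synthetic randomized-benchmarking survival data to p(m) = A * p**m + B
    # and convert the decay rate to an average error per Clifford.
    import numpy as np
    from scipy.optimize import curve_fit

    def rb_decay(m, A, p, B):
        return A * p**m + B

    lengths = np.array([2, 4, 8, 16, 32, 64, 128, 256])
    rng = np.random.default_rng(1)
    survival = rb_decay(lengths, 0.5, 0.995, 0.5) + rng.normal(scale=0.005, size=lengths.size)

    (A_fit, p_fit, B_fit), _ = curve_fit(rb_decay, lengths, survival, p0=[0.5, 0.99, 0.5])
    d = 2  # single-qubit Hilbert-space dimension
    avg_error = (d - 1) / d * (1 - p_fit)  # standard RB error-per-Clifford estimate
    print(f"fitted p = {p_fit:.4f}, average error per Clifford = {avg_error:.2e}")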

    Optimal Test Inputs for Helicopter System Identification

    The test input applied to a helicopter, or any other system, for the purpose of system identification can have a substantial effect on the parameter estimates obtained. It is therefore important that an appropriate input is chosen. Inputs must take account of the requirements and restrictions of the application. For example, in the rotorcraft case studied a linearised model is being identified, and it is therefore essential that the input produces a linear response. A straightforward method has been developed for the design of multi-step inputs. This method is based in the frequency domain and involves tailoring the auto-spectra of the inputs to give long, linear test records and parameter estimates with reasonably low variances. In flight trials using the Lynx helicopter at RAE (Bedford), the double-doublet input designed with this method has been found to be a significant improvement over more traditional inputs. Using the data from the flight trials of the double-doublet, both equation-error and output-error identification have been carried out. Several discrepancies were found between the theoretical and identified models; more work is required to clarify these. Numerical difficulties were encountered during the output-error identification, and these were attributed to ill-conditioning resulting from the use of an unstable system. The design of optimal inputs has also been investigated. In particular, constraints have been developed which are suitable for ensuring that the optimal inputs produce linear responses and are robust. Conventional energy constraints were found to be of little use for these purposes. Algorithms have been developed for the design of optimal inputs with a variety of constraints, and simulation studies have been made to gain an understanding of the effect of these constraints on the form of the inputs. With the constraints obtained from this work, an optimal input has been designed for use with the Lynx helicopter. This input is robust, and yet is predicted to give significantly improved parameter estimates. Unfortunately, at the time of writing, flight trials of this input could not be performed.
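
    A minimal sketch, assuming numpy, of a double-doublet multi-step input and its auto-spectrum, the quantity that the frequency-domain design method tailors; the pulse timing and amplitude below are illustrative and not the values used in the Lynx flight trials.

    # Build a double-doublet control input and compute its one-sided auto-spectrum.
    import numpy as np

    dt = 0.01     # sample interval [s]
    pulse = 1.0   # pulse width [s]
    amp = 1.0     # input amplitude (e.g. stick deflection, arbitrary units)

    # one doublet = positive pulse then negative pulse; a double doublet repeats it
    doublet = np.concatenate([ amp * np.ones(int(pulse / dt)),
                              -amp * np.ones(int(pulse / dt))])
    u = np.concatenate([doublet, doublet, np.zeros(int(5.0 / dt))])  # trailing quiet period
    t = np.arange(u.size) * dt

    U = np.fft.rfft(u) * dt
    freqs = np.fft.rfftfreq(u.size, d=dt)
    auto_spectrum = np.abs(U) ** 2

    peak = freqs[np.argmax(auto_spectrum[1:]) + 1]  # skip the DC bin
    print(f"spectral peak near {peak:.2f} Hz")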

    Empirical estimation of low-frequency nonlinear hydrodynamic loads on moored structures

    Low-frequency (LF) motions of floating structures are commonly modeled as the response of an oscillator to a second-order wave excitation. We present here an empirical method that reliably estimates the oscillator's parameters and the quadratic transfer function (QTF) used in such models. The method is based on an active stationkeeping system that enables accurate control of the external boundary conditions applied to the floating structure in a wave basin. The resulting system can be successively tuned to different frequency ranges of interest. Then, by deconvolution and optimization, LF damping and added-mass loads, as well as a response-independent wave excitation load, can be evaluated. From the wave elevation and the estimated load time series, the difference-frequency QTF is finally estimated by a cross-bispectral analysis, including a new treatment of statistical noise. The paper describes the proposed method in detail and illustrates it with the study of a ship-shaped floating unit in a sea state of relevance for the fatigue design of mooring systems (steep waves, low return period).
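
    A hedged illustration of the response model described above, assuming scipy: a single-degree-of-freedom low-frequency surge oscillator driven by a difference-frequency load built from a toy QTF value. All numerical values (mass, damping, stiffness, wave components, QTF) are invented for illustration and are not taken from the paper.

    # One-DOF surge oscillator driven by a second-order (difference-frequency) wave load.
    import numpy as np
    from scipy.integrate import solve_ivp

    a = np.array([1.0, 1.2])     # wave component amplitudes [m]
    w = np.array([0.55, 0.60])   # wave component angular frequencies [rad/s]
    qtf = 5.0e3                  # toy difference-frequency QTF magnitude [N/m^2]

    def second_order_load(t):
        # difference-frequency term a1*a2*|QTF|*cos((w1 - w2)*t)
        return a[0] * a[1] * qtf * np.cos((w[0] - w[1]) * t)

    M, B, K = 1.0e7, 2.0e5, 5.0e4   # mass + added mass, LF damping, mooring stiffness

    def rhs(t, y):
        x, v = y
        return [v, (second_order_load(t) - B * v - K * x) / M]

    sol = solve_ivp(rhs, (0.0, 3600.0), [0.0, 0.0], max_step=1.0)
    print("max LF surge offset [m]:", np.abs(sol.y[0]).max())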

    Non-Gaussian, Non-stationary and Nonlinear Signal Processing Methods - with Applications to Speech Processing and Channel Estimation


    Seafloor depth estimation by means of interferometric synthetic aperture sonar

    The topic of this thesis is relative depth estimation using interferometric sidelooking sonar. We give a thorough description of the geometry of interferometric sonar and of time delay estimation techniques. We present a novel solution for the depth estimate using sidelooking sonar, and review the cross-correlation function, the cross-uncertainty function and the phase-differencing technique. We find an elegant solution to co-registration and unwrapping by interpolating the sonar data in ground-range. Two depth estimation techniques are developed: cross-correlation based sidescan bathymetry and synthetic aperture sonar (SAS) interferometry. We define flank length as a measure of the horizontal resolution in bathymetric maps and find that both sidescan bathymetry and SAS interferometry achieve theoretical resolutions. The vertical precision of our two methods is close to the performance predicted from the measured coherence. We study absolute phase-difference estimation using bandwidth and find a very simple split-bandwidth approach which outperforms a standard 2D phase unwrapper on complicated objects. We also examine advanced filtering of depth maps. Finally, we present pipeline surveying as an example application of interferometric SAS.
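
    A small sketch of the time-delay estimation step that underlies the cross-correlation based bathymetry mentioned above, assuming numpy: cross-correlate the signals from two vertically separated receivers and take the lag of the correlation peak. The signals, sample rate and delay are synthetic and illustrative.

    # Estimate the inter-receiver time delay by locating the cross-correlation peak.
    import numpy as np

    fs = 100e3            # sample rate [Hz]
    true_delay = 37e-6    # true inter-receiver delay [s]
    t = np.arange(0, 5e-3, 1 / fs)
    rng = np.random.default_rng(2)
    echo = np.sin(2 * np.pi * 20e3 * t) * np.exp(-((t - 2.5e-3) / 0.5e-3) ** 2)

    s1 = echo + 0.05 * rng.normal(size=t.size)
    s2 = np.interp(t - true_delay, t, echo) + 0.05 * rng.normal(size=t.size)

    xcorr = np.correlate(s2, s1, mode="full")
    lags = np.arange(-t.size + 1, t.size) / fs
    estimated_delay = lags[np.argmax(xcorr)]
    print(f"estimated delay: {estimated_delay * 1e6:.1f} microseconds")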

    Turbulence and mixing by internal waves in the Celtic Sea determined from ocean glider microstructure measurements

    We present a new series of data from a 9-day deployment of an ocean microstructure glider (OMG) in the Celtic Sea during the summer of 2012. The OMG has been specially adapted to measure shear microstructure and coincident density structure, from which we derive the dissipation rate of turbulent kinetic energy (ε) and diapycnal diffusion rates (K). The methods employed to provide trustworthy turbulent parameters are described, and data from 766 profiles of ε, temperature, salinity and density structure are presented. Surface and bottom boundary layers are intuitively controlled by wind and tidal forcing. Interior dynamics is dominated by a highly variable internal wave field with peak vertical displacements in excess of 50 m, equivalent to over a third of the water depth. Following a relatively quiescent period, internal wave energy, represented by the available potential energy (APE), increases dramatically close to the spring tide flow. Rather than follow the assumed spring-neap cycle, however, APE is divided into two distinct peak periods lasting only one or two days. Pycnocline ε also increases close to the spring tide period and, similar to APE, is distinguishable as two distinct energetic periods; however, the timing of these periods is not consistent with that of APE. Pycnocline mixing associated with the observed ε is shown to be responsible for the majority of the observed reduction in bottom boundary layer density, suggesting that diapycnal exchange is a key mechanism in controlling or limiting exchange between the continental shelf and the deep ocean. Results confirm pycnocline turbulence to be highly variable and difficult to predict; however, a log-normal distribution does suggest that natural variability could be reproduced if the mean state can be accurately simulated.
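
    The diapycnal diffusivity K referred to above is conventionally obtained from the measured dissipation rate via the Osborn relation K = Γ ε / N². The following is a minimal sketch of that step, assuming numpy, a mixing efficiency Γ = 0.2 and a synthetic density profile rather than the glider data themselves.

    # Convert dissipation rate (eps) to diapycnal diffusivity K = Gamma * eps / N^2.
    import numpy as np

    g, rho0 = 9.81, 1025.0             # gravity [m/s^2], reference density [kg/m^3]
    z = np.linspace(-10, -100, 10)     # depth [m], negative downward
    rho = 1025.0 + 0.02 * (-z)         # synthetic density profile [kg/m^3]
    eps = np.full(z.size, 1e-8)        # synthetic dissipation rate [W/kg]

    # buoyancy frequency squared: N^2 = -(g/rho0) * d(rho)/dz
    N2 = -(g / rho0) * np.gradient(rho, z)

    Gamma = 0.2                        # assumed mixing efficiency
    K = Gamma * eps / N2               # diapycnal diffusivity [m^2/s]
    print(f"median K = {np.median(K):.2e} m^2/s")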

    Laser Ionisation Spectroscopy of Alkalis: Applications to Resonance Ionisation Mass Spectrometry

    Resonance ionisation spectroscopy (RIS) at Glasgow University began as a result of the need to calibrate large gas-filled multiwire proportional counters (MWPCs) currently being built at CERN, specifically the ALEPH time projection chamber. From this work the direction shifted towards the development of laser ionisation as an analytical tool, with the design of two resonance ionisation time of flight mass spectrometers. The two instruments have slightly different remits. One is particularly suited to surface analysis, the other to trace element detection. The work outlined in this thesis was intended to help in the design of these time of flight mass spectrometers, by highlighting difficulties likely to be encountered in the resonant ionisation and detection of small numbers of atoms. In order to prove the potential of resonance ionisation, and also to gain experience in the experimental techniques applicable to resonance ionisation mass spectrometry, initial experiments were carried out on elemental caesium and rubidium in a simple proportional counter. Chapter 1 outlines the basic theory behind the resonance ionisation technique, and shows its wide applicability to elemental ionisation and detection. A brief historical outline of previous experimental and theoretical work on resonance ionisation traces the development of RIS as an analytical tool, leading to the design and construction of a resonance ionisation time of flight mass spectrometer at Glasgow. Chapter 2 is a brief description of some of the theoretical aspects of resonance ionisation. A simple population rate equation model is used to derive expressions for the ion yields for a two level atom as a function of atomic and laser parameters. A semi-classical model of the atom-radiation interaction is given, leading to the model of Rabi oscillations between electronic states in an intense laser field. Transitions involving more than one photon are qualitatively described. The laser systems used for resonance ionisation are described in chapter 3, along with the ion detectors used. Descriptions of the proportional counter, and quadrupole and time of flight mass spectrometers are given. Chapter 3 concludes with a discussion of the reasoning behind the decision to use caesium and rubidium for the initial experiments with these detectors. Chapter 4 begins with a brief survey of previous work on the resonance ionisation of alkali metals. The electronic structure of atomic and molecular caesium and rubidium is summarised, and energy level diagrams for these systems are presented. Experimental work conducted at Glasgow to investigate the background ionisation in proportional counters is reported in chapter 5. These results were deemed important in that they suggested that, at wavelengths below 300 nm, the ionisation of organic impurities in proportional counters, or any ionisation spectrometer, could swamp the resonant ionisation signal of interest, particularly at trace concentration levels. The ionisation of these impurities might therefore be a limiting factor to the sensitivity of resonance ionisation at these UV wavelengths. Two impurities were identified in the proportional counter, phenol and toluene. The origin of phenol was traced to plastic piping used to introduce the buffer gas to the proportional counter. The origin of toluene was not determined. Chapter 6 reports on the resonance ionisation spectroscopy of caesium and rubidium. 
    Early work concentrated on using a specially designed proportional counter, which was both robust and free from contaminants. One and two photon transitions were investigated. The collisional enhancement of the ionisation of photoexcited Rydberg levels was investigated using a simple model of the process. The proportional counter was also incorporated into a quadrupole mass spectrometer for an early attempt at the resonant ionisation mass spectroscopy of atomic and molecular rubidium. With the completion of the construction of the time of flight mass spectrometer, the experimental work switched to this instrument. Preliminary results are presented in chapter 7. To date, due to technical difficulties with the ion gun, these have mainly been obtained with a fairly simple technique of pulsed laser ablation/ionisation of a sample, ions being formed in the ablation process itself and by the nonresonant ionisation of ablated neutrals. Not surprisingly, the selectivity of this process is limited, although resonant transitions can be distinguished. A brief calculation of the projected sensitivity of the instrument, when operating in its normal mode of pulsed ion bombardment with resonant ionisation, is also presented, and ways in which the sensitivity may be increased are explored. The conclusion draws together the results from the work with the proportional counter/quadrupole mass spectrometer and suggests future experiments, both spectroscopic and analytical, which could be carried out in this instrument with the addition of a low temperature oven to atomise samples. Experiments could be done to investigate the collisional ionisation of highly excited states, search for autoionisation states in multielectron atoms, investigate the potential of field ionisation as a substitute for photoionisation, and also determine the validity of population rate equations to describe resonance ionisation. Experiments to determine the sensitivity of the time of flight mass spectrometer will shortly be conducted. This instrument promises to revolutionise the detection of trace elements, particularly in surface analysis.
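
    A minimal sketch of the simple population rate-equation picture mentioned in the description of chapter 2, assuming scipy: a two-level atom pumped at a stimulated rate, decaying spontaneously, and photoionised out of the excited state during a short laser pulse. All rates and the pulse length are invented for illustration.

    # Two-level population rate equations with photoionisation loss from the excited state.
    import numpy as np
    from scipy.integrate import solve_ivp

    W = 5e7        # stimulated excitation/de-excitation rate [1/s]
    Gamma = 3e7    # spontaneous decay rate of the excited state [1/s]
    R = 2e7        # photoionisation rate out of the excited state [1/s]
    pulse = 10e-9  # laser pulse length [s]

    def rates(t, n):
        n_g, n_e, n_ion = n
        dn_g = -W * n_g + (W + Gamma) * n_e      # ground-state population
        dn_e = W * n_g - (W + Gamma + R) * n_e   # excited-state population
        dn_ion = R * n_e                         # accumulated ion fraction
        return [dn_g, dn_e, dn_ion]

    sol = solve_ivp(rates, (0.0, pulse), [1.0, 0.0, 0.0], max_step=1e-11)
    print(f"ion yield after {pulse * 1e9:.0f} ns pulse: {sol.y[2, -1]:.3f}")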