40 research outputs found

    Heterogeneous data fusion for brain psychology applications

    No full text
    This thesis aims to apply Empirical Mode Decomposition (EMD), Multiscale Entropy (MSE), and collaborative adaptive filters to the monitoring of different brain consciousness states. Both block-based and online approaches are investigated, and a possible extension to the monitoring and identification of electromyograph (EMG) states is provided. Firstly, EMD is employed as a multiscale time-frequency data-driven tool to decompose a signal into a number of band-limited oscillatory components; its data-driven nature makes EMD an ideal candidate for the analysis of nonlinear and nonstationary data. This methodology is further extended to process multichannel real-world data by making use of recent theoretical advances in complex and multivariate EMD. It is shown that this can be used to measure higher-order features in multichannel recordings that robustly indicate quasi-brain-death (QBD). In the next stage, analysis is performed in an information-theoretic setting on multiple scales in time, using MSE. This offers insight into the complexity of real-world recordings. The results of the MSE analysis and the corresponding statistical analysis show a clear difference in MSE between patients in different brain consciousness states. Finally, an online method for the assessment of the underlying signal nature is studied. This method is based on a collaborative adaptive filtering approach and is shown to approximately quantify the degree of signal nonlinearity, sparsity, and non-circularity relative to the constituent subfilters. To further illustrate the usefulness of the proposed data-driven multiscale signal processing methodology, the final case study considers a human-robot interface based on multichannel EMG analysis. A preliminary analysis shows that the same methodology as that applied to the analysis of brain cognitive states gives robust and accurate results.
The analysis, simulations, and scope of applications presented suggest the great potential of the proposed multiscale data processing framework for feature extraction in multichannel data analysis. Directions for future work include further development of real-time feature-map approaches and their use across brain-computer and brain-machine interface applications.
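The MSE step described above (coarse-grain the signal at successive scales, then compute sample entropy at each scale) can be sketched in a few lines. The parameter choices below (m = 2, tolerance r = 0.15 times the standard deviation of the original series) are common defaults in the MSE literature, not values taken from the thesis:

```python
import numpy as np

def sample_entropy(x, m, r):
    """SampEn(m, r): negative log of the conditional probability that
    template vectors close for m samples (Chebyshev distance <= r)
    remain close for m + 1 samples."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def match_count(k):
        # All length-k templates; keep n - m of them so both counts
        # are taken over the same number of templates.
        templates = np.lib.stride_tricks.sliding_window_view(x, k)[: n - m]
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.count_nonzero(dist <= r))
        return count

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 else np.inf

def multiscale_entropy(x, max_scale, m=2, r_factor=0.15):
    """Coarse-grain the series at each scale (non-overlapping averages),
    then compute SampEn per scale. The tolerance r is fixed from the
    original series, as is conventional for MSE."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    curve = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        coarse = x[: n * tau].reshape(n, tau).mean(axis=1)
        curve.append(sample_entropy(coarse, m, r))
    return curve
```

Plotting the resulting entropy-versus-scale curve for each patient group is what yields the between-state differences the abstract refers to.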

    A Hierarchical Filtering-Based Monitoring Architecture for Large-scale Distributed Systems

    Get PDF
    On-line monitoring is essential for observing and improving the reliability and performance of large-scale distributed (LSD) systems. In an LSD environment, large numbers of events are generated by system components during their execution and interaction with external objects (e.g. users or processes). These events must be monitored to accurately determine the run-time behavior of an LSD system and to obtain status information that is required for debugging and steering applications. However, the manner in which events are generated in an LSD system is complex and presents a number of challenges for an on-line monitoring system. Correlated events are generated concurrently and can occur at multiple locations distributed throughout the environment. This makes monitoring an intricate task and complicates the management decision process. Furthermore, the large number of entities and the geographical distribution inherent in LSD systems increase the difficulty of addressing traditional issues such as performance bottlenecks, scalability, and application perturbation. This dissertation proposes a scalable, high-performance, dynamic, flexible and non-intrusive monitoring architecture for LSD systems. The resulting architecture detects and classifies interesting primitive and composite events and performs either a corrective or a steering action. When appropriate, information is disseminated to management applications, such as reactive control and debugging tools. The monitoring architecture employs a novel hierarchical event filtering approach that distributes the monitoring load and limits event propagation. This significantly improves scalability and performance while minimizing the monitoring intrusiveness. 
The architecture provides dynamic monitoring capabilities through: subscription policies that enable application developers to add, delete and modify monitoring demands on the fly; an adaptable configuration that accommodates environmental changes; and a programmable environment that facilitates the development of self-directed monitoring tasks. Increased flexibility is achieved through a declarative and comprehensive monitoring language, a simple code instrumentation process, and automated monitoring administration. These elements substantially relieve the burden imposed by using on-line distributed monitoring systems. In addition, the monitoring system provides techniques to manage the trade-offs between various monitoring objectives. The proposed solution offers improvements over related work by presenting a comprehensive architecture that considers the requirements and implied objectives for monitoring large-scale distributed systems. This architecture is referred to as the HiFi monitoring system. To demonstrate its effectiveness at debugging and steering LSD systems, the HiFi monitoring system has been implemented at Old Dominion University for monitoring the Interactive Remote Instruction (IRI) system. The results from this case study validate that the HiFi system achieves the objectives outlined in this thesis.

    Fuzzy Logic Control for Multiresolutive Adaptive PN Acquisition Scheme in Time-Varying Multipath Ionospheric Channel

    Get PDF
    Communication with remote places is a challenge often solved using satellites. However, when trying to reach Antarctic stations, this solution suffers from a poor visibility range and high operational costs. In such scenarios, skywave ionospheric communication systems represent a good alternative to satellite communications. The Research Group in Electromagnetism and Communications (GRECO) is designing an HF system for long-haul digital communication between the Antarctic Spanish Base on Livingston Island (62.6S, 60.4W) and Observatori de l'Ebre in Spain (40.8N, 0.5E) (Vilella et al., 2008). The main interest of Observatori de l'Ebre is the transmission of the data collected from the sensors located at the base, including a geomagnetic sensor, a vertical incidence ionosonde, an oblique incidence ionosonde and a GNSS receiver. The geomagnetic sensor, the vertical incidence ionosonde and the GNSS receiver are commercial solutions from third parties. The oblique incidence ionosonde, used to sound the ionospheric channel between Antarctica and Spain, was developed by the GRECO in the framework of this project. During the last Antarctic campaign, exhaustive measurements of the HF channel characteristics were performed, which allowed us to determine parameters such as availability, SNR, delay and Doppler spread, etc. In addition to the scientific interest of this sounding, a further objective of the project is the establishment of a backup link for data transmission from the remote sensors in Antarctica. In this scenario, ionospheric communications appear to be an interesting complementary alternative to geostationary satellite communications, since the latter are expensive and not always available from high latitudes. 
Research in the field of fuzzy logic applied to the estimation of the above-mentioned channel was first reported in (Alsina et al., 2005a) for serial-search acquisition systems in AWGN channels, and afterwards applied to the same channel but within the multiresolutive structure (Alsina et al., 2009a; Morán et al., 2001) in (Alsina et al., 2007b; 2009b), achieving good results. In this chapter, the application of fuzzy logic control trained for Rayleigh fading channels (Proakis, 1995) with Direct-Sequence Spread-Spectrum (DS-SS) is presented, specifically suited for the Antarctica-Spain ionospheric channel. Stability and reliability of the reception are key design factors. It is important to note that the fuzzy control design presented in this chapter not only improves the performance of the multiresolutive structure presented in (Morán et al., 2001), but also introduces a new option for the control design of many LMS adaptive structures used for PN code acquisition in the literature. (El-Tarhuni & Sheikh, 1996) presented an LMS-based system for DS-SS acquisition in Rayleigh channels; years later, (Han et al., 2006) improved the performance of that acquisition system. LMS filters are also used for acquisition in other types of channels, even in oceanic transmissions (Stojanovic & Freitag, 2003). Although the fuzzy control system presented in this chapter is compared to the stability control used in (Morán et al., 2001), it can also be used to improve the stability and robustness of all the previous designs. Despite this generality, the design of every control system should be carried out according to the requirements of the acquisition system and the specific channel characteristics.
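To make the fuzzy-control idea concrete, the sketch below maps a normalized correlator output to an LMS step size through a three-rule Mamdani-style controller with triangular memberships and weighted-average defuzzification. The rule base, membership shapes, and step-size values are illustrative assumptions; the chapter's actual controller is trained for the specific channel and structure:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_step_size(corr, mu_min=0.01, mu_mid=0.05, mu_max=0.2):
    """Map a normalized correlator output in [0, 1] to an LMS step size.

    Illustrative rule base:
      corr LOW    -> step LARGE  (far from code alignment: adapt fast)
      corr MEDIUM -> step MEDIUM
      corr HIGH   -> step SMALL  (near lock: adapt cautiously, for stability)
    """
    w_low  = tri(corr, -0.4, 0.0, 0.5)
    w_med  = tri(corr,  0.1, 0.5, 0.9)
    w_high = tri(corr,  0.5, 1.0, 1.4)
    weights = np.array([w_low, w_med, w_high])
    outputs = np.array([mu_max, mu_mid, mu_min])  # consequent singletons
    if weights.sum() == 0.0:
        return mu_mid
    # Weighted-average (centroid-of-singletons) defuzzification.
    return float(weights @ outputs / weights.sum())
```

Smoothly interpolating the step size in this way is what lets a fuzzy controller trade acquisition speed against stability without the hard switching of a threshold-based control.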

    Low Power Adaptive Equaliser Architectures for Wireless LMMSE Receivers

    Get PDF
    Power consumption requires critical consideration during system design for portable wireless communication devices, as it has a direct influence on the battery weight and volume required for operation. Wideband Code Division Multiple Access (W-CDMA) techniques are favoured for use in future-generation mobile communication systems. This thesis investigates novel low-power techniques for use in system blocks within a W-CDMA adaptive linear minimum mean squared error (LMMSE) receiver architecture. Two low-power techniques are presented for reducing power dissipation in the LMS adaptive filter, the main power-consuming block within this receiver: the decorrelating transform, a differential coefficient technique, and the variable-length update algorithm, a dynamic tap-length optimisation technique. The decorrelating transform is based on the principle of reducing the wordlength of filter coefficients by using the computed difference between adjacent coefficients in the calculation of the filter output. Reducing the wordlength of the coefficients presented to the multipliers reduces switching activity within each multiplier and thus the power consumed. In the case of the LMS adaptive filter, with coefficients being continuously updated, the decorrelating transform is applied to these calculated coefficients with minimal hardware or computational overhead. The correlation between filter coefficients is exploited to achieve a wordlength reduction from 16 bits down to 10 bits in the FIR filter block. The variable-length update algorithm is based on the principle of optimising the number of operational filter taps in the LMS adaptive filter according to operating conditions. The number of taps in operation can be increased or decreased dynamically according to the mean squared error at the output of the filter. 
This algorithm exploits the fact that when the SNR in the channel is low, the minimum mean squared error of a short equaliser is almost the same as that of a longer equaliser. Therefore, minimising the length of the equaliser does not degrade MSE performance, and there is no disadvantage in having fewer taps in operation. With fewer taps in operation, switching is reduced not only in the arithmetic blocks but also in the memory blocks required by the LMS algorithm and the FIR filter process, reducing the power consumed by both of these computation-intensive functional blocks. Power results are obtained for equaliser lengths from 73 down to 16 taps and for operation with varying input SNR. This thesis then proposes that the variable-length LMS adaptive filter be applied in the adaptive LMMSE receiver to create a low-power implementation. Power consumption in the receiver is reduced by dynamic optimisation of the LMS receiver coefficient calculation. A considerable power saving is achieved when moving from a fixed-length LMS implementation to the variable-length design. All design architectures are coded in the Verilog hardware description language at register transfer level (RTL). Once the functional specification of the design is verified, synthesis is carried out using either Synopsys DesignCompiler or Cadence BuildGates to create a gate-level netlist. Power consumption results are determined at the gate level and estimated using the Synopsys DesignPower tool.

    Machine Learning and Its Application to Reacting Flows

    Get PDF
    This open access book introduces and explains machine learning (ML) algorithms and techniques developed for statistical inference on complex processes or systems, and their application to simulations of chemically reacting turbulent flows. These two fields, ML and turbulent combustion, each have a large body of work and knowledge of their own, and this book brings them together and explains the complexities and challenges involved in applying ML techniques to simulate and study reacting flows. This matters for the world's total primary energy supply (TPES): more than 90% of this supply comes through combustion technologies, and combustion has non-negligible effects on the environment. Although alternative technologies based on renewable energies are emerging, their share of the TPES is currently less than 5%, and a complete paradigm shift would be needed to replace combustion sources. Whether this is practical or not is entirely a different question, and the answer depends on the respondent. However, a pragmatic analysis suggests that the combustion share of TPES is likely to remain above 70% even by 2070. Hence, it is prudent to take advantage of ML techniques to improve combustion science and technology so that efficient and "greener" combustion systems, friendlier to the environment, can be designed. The book covers the current state of the art in these two topics and outlines the challenges involved, along with the merits and drawbacks of using ML for turbulent combustion simulations, including avenues that can be explored to overcome the challenges. The required mathematical equations and background are discussed, with ample references for readers who wish to find further detail. This book is unique in its coverage of topics, ranging from big-data analysis and machine learning algorithms to their application to combustion science and system design for energy generation.

    Digital Filters

    Get PDF
    Advances in technology mean that a great number of system signals can be measured easily and at low cost. The main problem is that usually only a fraction of the signal is useful for a given purpose, for example maintenance, DVD recorders, computers, electric/electronic circuits, econometrics, or optimisation. Digital filters are among the most versatile, practical and effective methods for extracting the necessary information from a signal. They can be dynamic, adjusting automatically or manually to external and internal conditions. This book presents the most advanced digital filters, including different case studies and the most relevant literature.
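As a minimal example of the information-extraction role described above, the sketch below applies a simple FIR filter (a 9-tap moving average, a basic low-pass) to recover a slow component buried in measurement noise. The signal, tap count, and noise level are arbitrary illustration choices, not examples from the book:

```python
import numpy as np

def fir_filter(x, h):
    """Direct-form FIR filter: y[n] = sum_k h[k] * x[n-k].
    'same' mode keeps the output aligned with the input, which is
    appropriate here because the moving-average taps are symmetric."""
    return np.convolve(x, h, mode="same")

# A clean low-frequency component corrupted by broadband noise.
rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 500)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * rng.standard_normal(len(t))

h = np.ones(9) / 9.0          # 9-tap moving-average low-pass
smoothed = fir_filter(noisy, h)
```

Averaging 9 samples cuts the noise power by roughly a factor of 9 while barely attenuating the 5 Hz component, so the filtered trace sits much closer to the clean signal.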

    Improved integrity algorithms for integrated GPS/INS systems in the presence of slowly growing errors

    No full text
    GPS is the most widely used satellite navigation system. By design, there is no provision for real-time integrity information within the Standard Positioning Service (SPS). However, in safety-critical sectors like aviation, stringent integrity performance requirements must be met. This can be achieved using special augmentation systems, at the user sensor level through Receiver Autonomous Integrity Monitoring (RAIM), or both. RAIM, which is considered the most cost-effective method, relies on data consistency and therefore requires redundant measurements for its operation. An external aid to provide this redundancy can take the form of an Inertial Navigation System (INS). This should enable continued performance even during RAIM holes (when no redundant satellite measurements are available). However, the integrated system faces the risk of failures generated at different levels of the system, in the operational environment and at the user sensor (receiver) level. This thesis addresses integrated GPS/INS architectures, the corresponding failure modes, and the sensor-level integrity algorithms used to protect users from such failure modes. An exhaustive literature review is conducted to identify the various failure modes. These are then grouped into classes based on their characteristics, and a mathematical (failure) model is specified for each class. For the analysis of failures, a simulation of a typical aircraft trajectory is developed, including the capability to generate raw measurements from GPS and the INS. The simulated GPS and INS measurements for the aircraft are used to evaluate the performance of current integrity algorithms. Their performance is assessed for the most difficult case of failures, slowly growing errors (SGEs), and shown to be inadequate (i.e. a considerable period of time is required for detection). 
This is addressed by developing a new algorithm based on the detection of the growth rate of a typical test statistic (assuming a single failure at a time). Results show that the new algorithm detects slowly growing ramp-type errors faster than the current methods, with a forty percent improvement in the time taken to detect the worst-case SGE. The algorithm is then extended to detect multiple SGEs, for which a new tightly coupled method referred to as the 'piggyback architecture' is proposed. This method provides the novel capability of detecting all failures, including those affecting the INS. The proposed algorithms are validated with real GPS and INS data. In this way, the integrity performance of the integrated system is enhanced against the worst-case failures, with a detection time that benefits the achievement of stringent time-to-alert requirements. A practical implementation would then comprise the use of the rate detector algorithm alongside the current methods.
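The rate-detection principle can be illustrated on synthetic data: a slowly growing ramp pushes a test statistic upward so gradually that a conventional snapshot threshold fires late, while monitoring the statistic's fitted slope over a sliding window alarms much sooner. The window length, thresholds, and noise model below are illustrative assumptions, not the thesis's tuned values, and the "statistic" stands in for a RAIM-style consistency test:

```python
import numpy as np

def detect_by_threshold(stat, threshold):
    """Conventional snapshot test: alarm at the first epoch where the
    statistic itself exceeds a fixed threshold."""
    idx = int(np.argmax(stat > threshold))
    return idx if stat[idx] > threshold else None

def detect_by_rate(stat, window=20, slope_threshold=0.02):
    """Rate detector: alarm when the least-squares slope of the test
    statistic over a sliding window exceeds a growth-rate threshold."""
    t = np.arange(window)
    for k in range(window, len(stat)):
        slope = np.polyfit(t, stat[k - window : k], 1)[0]
        if slope > slope_threshold:
            return k
    return None

# Synthetic consistency statistic: low-level noise, then a slow ramp
# (the SGE) starting at epoch 100 with slope 0.05 per epoch.
rng = np.random.default_rng(2)
n, ramp_start = 400, 100
stat = np.abs(rng.standard_normal(n)) * 0.1
stat[ramp_start:] += 0.05 * np.arange(n - ramp_start)

t_thresh = detect_by_threshold(stat, threshold=3.0)
t_rate = detect_by_rate(stat)
```

On this data the snapshot test must wait until the ramp has accumulated to the threshold, whereas the rate detector fires as soon as a window of samples exhibits the ramp's slope, shortly after onset.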

    Design and implementation of resilient attitude estimation algorithms for aerospace applications

    Get PDF
    Satellite attitude estimation is a critical component of satellite attitude determination and control systems, relying on highly accurate sensors such as IMUs, star trackers, and sun sensors. However, the complex space environment can cause sensor performance degradation or even failure. To address this issue, fault detection, isolation and recovery (FDIR) systems are necessary. This thesis presents a novel approach to satellite attitude estimation that utilizes an Inertial Navigation System (INS) to achieve high accuracy with a low computational load. The algorithm is based on a two-layer Kalman filter, which incorporates the quaternion estimator (QUEST) algorithm, the FQA, linear interpolation (LERP) algorithms, and a Kalman filter (KF). Moreover, the thesis proposes an FDIR system for the INS that can detect and isolate faults and recover the system safely. This system includes two-layer fault detection with isolation and two-layer recovery, utilizing an Adaptive Unscented Kalman Filter (AUKF), the QUEST algorithm, residual generators, Radial Basis Function (RBF) neural networks, and an adaptive complementary filter (ACF). The two fault detection layers aim to isolate and identify faults while decreasing the rate of false alarms. An FPGA-based FDIR system is also designed and implemented to reduce latency while maintaining normal resource consumption. Finally, a Fault-Tolerant Federated Kalman Filter (FTFKF) is proposed to fuse the outputs of the INS and the CNS to achieve high-precision, robust attitude estimation. The findings of this study provide a solid foundation for the development of FDIR systems for various applications such as robotics, autonomous vehicles, and unmanned aerial vehicles, particularly satellite attitude estimation. The proposed INS-based approach with the FDIR system has demonstrated high accuracy, fault tolerance, and low computational load, making it a promising solution for satellite attitude estimation in harsh space environments.
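To illustrate the complementary-filter building block mentioned above in its simplest form, the sketch below fuses an integrated rate-gyro signal (accurate short-term, but drifting with bias) with a noisy absolute angle reference (e.g. an accelerometer tilt or star-tracker fix) for a single axis. The fixed gain, scalar state, and signal model are didactic simplifications; the thesis's ACF adapts its gain and operates on full quaternion attitude:

```python
import numpy as np

def complementary_filter(gyro_rate, ref_angle, dt=0.01, alpha=0.98):
    """Scalar complementary filter:
        theta[k] = alpha * (theta[k-1] + w[k]*dt) + (1 - alpha) * ref[k]
    High-passes the gyro path (suppressing bias drift) and low-passes
    the noisy absolute reference."""
    theta = float(ref_angle[0])
    out = []
    for w, a in zip(gyro_rate, ref_angle):
        theta = alpha * (theta + w * dt) + (1.0 - alpha) * a
        out.append(theta)
    return np.array(out)

# Synthetic single-axis motion: true angle, biased gyro, noisy reference.
dt = 0.01
t = np.arange(0.0, 10.0, dt)
true = 0.3 * np.sin(0.5 * t)
rate = 0.15 * np.cos(0.5 * t) + 0.05      # true rate plus constant gyro bias
rng = np.random.default_rng(3)
ref = true + 0.02 * rng.standard_normal(len(t))

est = complementary_filter(rate, ref, dt=dt)
drift = np.cumsum(rate) * dt              # naive integration drifts with the bias
```

Pure integration accumulates the 0.05 rad/s bias into a large error, while the complementary filter bounds the bias-induced error near alpha/(1-alpha) times bias*dt, which is the property the recovery layer exploits when a primary estimator is declared faulty.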

    GPU Accelerated protocol analysis for large and long-term traffic traces

    Get PDF
    This thesis describes the design and implementation of GPF+, a complete general packet classification system developed using Nvidia CUDA for Compute Capability 3.5+ GPUs. This system was developed with the aim of accelerating the analysis of arbitrary network protocols within network traffic traces using inexpensive, massively parallel commodity hardware. GPF+ and its supporting components are specifically intended to support the processing of large, long-term network packet traces, such as those produced by network telescopes, which are currently difficult and time-consuming to analyse. The GPF+ classifier is based on prior research in the field, which produced a prototype classifier called GPF, targeted at Compute Capability 1.3 GPUs. GPF+ greatly extends the GPF model, improving runtime flexibility and scalability whilst maintaining high execution efficiency. GPF+ incorporates a compact, lightweight register-based state machine that supports massively parallel, multi-match filter predicate evaluation, as well as efficient arbitrary field extraction. GPF+ tracks packet composition during execution and adjusts processing at runtime to avoid redundant memory transactions and unnecessary computation through warp voting. GPF+ additionally incorporates a 128-bit in-thread cache, accelerated through register shuffling, to speed up access to packet data in slow GPU global memory. GPF+ uses a high-level domain-specific language (DSL) to simplify protocol and filter creation, whilst better facilitating protocol reuse. The system is supported by a pipeline of multi-threaded high-performance host components, which communicate asynchronously through 0MQ messaging middleware to buffer, index, and dispatch packet data on the host system. The system was evaluated using high-end Kepler (Nvidia GTX Titan) and entry-level Maxwell (Nvidia GTX 750) GPUs. The results of this evaluation showed high system performance, limited only by device-side IO (600 MB/s) in all tests. 
GPF+ maintained high occupancy and device utilisation in all tests, without significant serialisation, and showed improved scaling to more complex filter sets. The results were used to visualise captures of up to 160 GB in seconds, and to extract and pre-filter captures small enough to be easily analysed in applications such as Wireshark.
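The core classification task (extract protocol fields once from the raw packet bytes, then evaluate every filter predicate so a packet may match several filters at once) can be sketched sequentially as below. The field offsets are standard Ethernet/IPv4 layout, but the tables and function names are illustrative; GPF+ itself compiles a DSL into a register-based state machine executed in parallel on the GPU:

```python
import struct

# (byte offset, struct format) for a few Ethernet/IPv4 fields.
FIELDS = {
    "eth_type": (12, ">H"),   # EtherType in the Ethernet header
    "ip_proto": (23, ">B"),   # protocol field of the IPv4 header
    "ip_src":   (26, ">I"),   # IPv4 source address
}

# Multi-match filter set: every filter is evaluated for every packet,
# so a single packet can satisfy several filters simultaneously.
FILTERS = {
    "ipv4": [("eth_type", 0x0800)],
    "tcp":  [("eth_type", 0x0800), ("ip_proto", 6)],
    "udp":  [("eth_type", 0x0800), ("ip_proto", 17)],
}

def extract(packet, field):
    """Pull one field out of the raw packet bytes."""
    offset, fmt = FIELDS[field]
    return struct.unpack_from(fmt, packet, offset)[0]

def classify(packet):
    """Return the set of all filters the packet matches (multi-match)."""
    values = {f: extract(packet, f) for f in FIELDS}   # one extraction pass
    return {name for name, preds in FILTERS.items()
            if all(values[f] == v for f, v in preds)}

# Usage: a minimal synthetic Ethernet + IPv4 TCP packet.
pkt = bytearray(64)
struct.pack_into(">H", pkt, 12, 0x0800)       # EtherType = IPv4
pkt[23] = 6                                    # IP protocol = TCP
struct.pack_into(">I", pkt, 26, 0x0A000001)    # source 10.0.0.1
```

Performing the field extraction once and sharing the values across all predicates is the sequential analogue of what GPF+ does per thread, where warp voting additionally skips predicate groups no packet in the warp can match.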