
    Decision-Making with Heterogeneous Sensors - A Copula Based Approach

    Statistical decision making has wide-ranging applications, from communications and signal processing to econometrics and finance. In contrast to the classical one-source, one-receiver paradigm, several applications have been identified in the recent past that require acquiring data from multiple sources or sensors. Information from the multiple sensors is transmitted to a remotely located receiver, known as the fusion center, which makes a global decision. Past work has largely focused on the fusion of information from homogeneous sensors. This dissertation extends the formulation to the case where the local sensors may possess disparate sensing modalities. Both the theoretical and practical aspects of multimodal signal processing are considered. The first and foremost challenge is to 'adequately' model the joint statistics of such heterogeneous sensors. We propose the use of copula theory for this purpose. Copula models are general descriptors of dependence. They provide a way to characterize the nonlinear functional relationships between the multiple modalities, which are otherwise difficult to formalize. The important problem of selecting the 'best' copula function from a given set of valid copula densities is addressed, especially in the context of binary hypothesis testing problems. Both the training-testing paradigm, where a training set is assumed to be available for learning the copula models prior to system deployment, and a generalized likelihood ratio test (GLRT) based fusion rule for the online selection and estimation of copula parameters are considered. The developed theory is corroborated with extensive computer simulations as well as results on real-world data. Sensor observations (or features extracted thereof) are most often quantized before their transmission to the fusion center for bandwidth and power conservation. A detection scheme is proposed for this problem assuming uniform scalar quantizers at each sensor.
    The designed rule is applicable to both binary and multibit local sensor decisions. An alternative suboptimal but computationally efficient fusion rule is also designed, which involves injecting a deliberate disturbance into the local sensor decisions before fusion. The rule is based on Widrow's statistical theory of quantization. The addition of controlled noise helps to 'linearize' the highly nonlinear quantization process, thus resulting in computational savings. It is shown that although the introduction of external noise does cause a reduction in the received signal-to-noise ratio, the proposed approach can be highly accurate when the input signals have bandlimited characteristic functions and the number of quantization levels is large. The problem of quantifying neural synchrony using copula functions is also investigated. It has been widely accepted that multiple simultaneously recorded electroencephalographic signals exhibit nonlinear and non-Gaussian statistics. While existing and popular measures such as the correlation coefficient, correntropy coefficient, coherence and mutual information are limited to being bivariate and hence applicable only to pairs of channels, measures such as Granger causality, even though multivariate, fail to account for any nonlinear inter-channel dependence. The application of copula theory helps alleviate both these limitations. The problem of distinguishing patients with mild cognitive impairment from age-matched control subjects is also considered. Results show that the copula-derived synchrony measures, when used in conjunction with other synchrony measures, improve the detection of Alzheimer's disease onset.
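    As a minimal illustration of the copula idea described above (not code from the dissertation), the sketch below models the dependence between two sensors with a bivariate Gaussian copula and computes a log-likelihood ratio of a "dependent" versus an "independent" hypothesis; the Gaussian copula family, standard-normal marginals, and correlation value are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_logpdf(u, v, rho):
    """Log-density of the bivariate Gaussian copula at points (u, v) in (0,1)^2."""
    z1, z2 = norm.ppf(u), norm.ppf(v)
    r2 = rho * rho
    return (-0.5 * np.log(1.0 - r2)
            - (r2 * (z1 ** 2 + z2 ** 2) - 2.0 * rho * z1 * z2) / (2.0 * (1.0 - r2)))

def copula_llr(x1, x2, rho, marg1, marg2):
    """Log-likelihood ratio of 'dependent' (Gaussian copula with parameter rho)
    versus 'independent' for paired sensor observations with known marginal
    CDFs. Under independence the copula density is 1, so the LLR reduces to
    the summed copula log-density."""
    u, v = marg1.cdf(x1), marg2.cdf(x2)
    return float(np.sum(gaussian_copula_logpdf(u, v, rho)))

# Toy check: correlated sensor pairs should favour the dependent hypothesis,
# independent pairs should not.
rng = np.random.default_rng(0)
dep = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], size=2000)
ind = rng.standard_normal((2000, 2))
llr_dep = copula_llr(dep[:, 0], dep[:, 1], 0.8, norm, norm)
llr_ind = copula_llr(ind[:, 0], ind[:, 1], 0.8, norm, norm)
print(llr_dep > 0.0, llr_ind < 0.0)
```

    In the dissertation's setting the copula function itself is selected from a set of candidates (via a training set or a GLRT); the fixed Gaussian copula here merely stands in for that step.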

    Design of a Simulator for Neonatal Multichannel EEG: Application to Time-Frequency Approaches for Automatic Artifact Removal and Seizure Detection

    The electroencephalogram (EEG) is used to noninvasively monitor brain activity; it is the most utilized tool for detecting abnormalities such as seizures. In recent studies, the detection of neonatal EEG seizures has been automated to assist neurophysiologists, as manual detection is time-consuming and subjective; however, it still lacks the robustness required for clinical implementation. Moreover, as the EEG is intended to record cerebral activities, extra-cerebral activities external to the brain are also recorded; these are called “artifacts” and can seriously degrade the accuracy of seizure detection. Seizures are one of the most common neurologic problems managed by hospitals, occurring in 0.1%-0.5% of live births. Neonates with seizures are at higher risk of mortality and are reported to be 55-70 times more likely to have severe cerebral palsy. Therefore, early and accurate detection of neonatal seizures is important to prevent long-term neurological damage. Several attempts at modelling the neonatal EEG and its artifacts have been made, but most did not consider the multichannel case; furthermore, these models were used to test artifact or seizure detection separately, but not together. This study aims to design synthetic models that generate clean or corrupted multichannel EEG to test the accuracy of available artifact and seizure detection algorithms in a controlled environment. In this thesis, a synthetic neonatal EEG model is constructed using single-channel EEG simulators, a head model, 21 electrodes, and propagation equations to produce clean multichannel EEG. Furthermore, a neonatal EEG artifact model is designed using synthetic signals to corrupt EEG waveforms. An automated EEG artifact detection and removal system is then designed in both the time and time-frequency domains; artifact detection is optimised and removal performance is evaluated.
    Finally, an automated seizure detection technique is developed, utilising fused and extended multichannel features along with a cross-validated SVM classifier. Results show that the synthetic EEG model mimics real neonatal EEG with an average correlation of 0.62, and that corrupted EEG can degrade the average seizure detection accuracy from 100% to 70.9%. They also show that using artifact detection and removal enhances the average accuracy to 89.6%, and that utilising the extended features enhances it to 97.4% and strengthens its robustness.
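    The seizure-detection stage described above can be sketched, under heavy simplification, as a cross-validated SVM on per-epoch features. The feature vectors below are synthetic Gaussian stand-ins, not the fused multichannel features of the thesis, and all sizes and parameters are illustrative.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Hypothetical per-epoch feature vectors (stand-ins for band powers, line
# length, etc.): 100 non-seizure and 100 seizure epochs with shifted means.
X = np.vstack([rng.normal(0.0, 1.0, size=(100, 8)),
               rng.normal(1.5, 1.0, size=(100, 8))])
y = np.array([0] * 100 + [1] * 100)

# Standardise features, then classify with an RBF-kernel SVM, scoring by
# 5-fold cross-validation as in the thesis's evaluation protocol.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(round(float(scores.mean()), 3))
```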

    Recent Advances in Signal Processing

    Signal processing is a critical task in the majority of new technological inventions and presents challenges in a variety of applications across both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy. These constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five areas depending on the application at hand, ordered to address image processing, speech processing, communication systems, time-series analysis, and educational packages respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.

    Enhancing brain-computer interfacing through advanced independent component analysis techniques

    A brain-computer interface (BCI) is a direct communication system between a brain and an external device in which messages or commands sent by an individual do not pass through the brain's normal output pathways but are detected through brain signals. Some severe motor impairments, such as amyotrophic lateral sclerosis, head trauma, spinal injuries and other diseases, may cause patients to lose muscle control and become unable to communicate with the outside environment. No effective cure or treatment has yet been found for these diseases; therefore, using a BCI system to rebuild the communication pathway becomes a possible alternative solution. Among the different types of BCIs, the electroencephalogram (EEG) based BCI is becoming a popular system due to EEG's fine temporal resolution, ease of use, portability and low set-up cost. However, EEG's susceptibility to noise is a major issue in developing a robust BCI. Signal processing techniques such as coherent averaging, filtering, FFT and AR modelling are used to reduce the noise and extract components of interest. However, these methods process the data in the observed mixture domain, which mixes components of interest and noise. This limitation means that the extracted EEG signals may still contain noise residue or, conversely, that the removed noise may contain part of the EEG signals. Independent Component Analysis (ICA), a Blind Source Separation (BSS) technique, is able to extract relevant information within noisy signals and separate the underlying sources into independent components (ICs). The most common assumption of the ICA method is that the source signals are unknown and statistically independent; through this assumption, ICA is able to recover the source signals.
    Since ICA concepts appeared in the fields of neural networks and signal processing in the 1980s, many ICA applications in telecommunications, biomedical data analysis, feature extraction, speech separation, time-series analysis and data mining have been reported in the literature. In this thesis, several ICA techniques are proposed to address two major issues for BCI applications: reducing the recording time needed, in order to speed up the signal processing, and reducing the number of recording channels whilst improving, or at least maintaining, the final classification performance. These improvements will make BCI a more practical prospect for everyday use. This thesis first defines BCI and the diverse BCI models based on different control patterns. After the general idea of ICA is introduced, along with some modifications to ICA, several new ICA approaches are proposed. The practical work in this thesis starts with preliminary analyses of the Southampton BCI pilot datasets, using basic and then advanced signal processing techniques. The proposed ICA techniques are then presented using a multi-channel event-related potential (ERP) based BCI. Next, the ICA algorithm is applied to a multi-channel spontaneous activity based BCI. The final ICA approach examines the possibility of using ICA based on just one or a few channel recordings in an ERP based BCI. The novel ICA approaches for BCI systems presented in this thesis show that ICA is able to accurately and repeatedly extract the relevant information buried within noisy signals, and that the signal quality is enhanced so that even a simple classifier can achieve good classification accuracy. In the ERP based BCI application, after multichannel ICA, data using just eight averages/epochs can achieve 83.9% classification accuracy, whilst data obtained by coherent averaging can reach only 32.3% accuracy.
    In the spontaneous activity based BCI, the multi-channel ICA algorithm can effectively extract discriminatory information from two types of single-trial EEG data; the classification accuracy is improved by about 25%, on average, compared to the performance on the unpreprocessed data. The single-channel ICA technique on the ERP based BCI produces much better results than those obtained using a lowpass filter, while an appropriate number of averages improves the signal-to-noise ratio of P300 activities, which helps to achieve better classification. These advantages will lead to a reliable and practical BCI for use outside of the clinical laboratory.
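    A minimal sketch of the blind source separation idea underlying these results (not the thesis's own algorithms): FastICA recovering two toy sources from two mixed "channels", up to the usual sign and permutation ambiguity. The sources, mixing matrix, and signal lengths are illustrative.

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0.0, 8.0, 2000)

# Two toy sources: a sinusoid and a square wave (standing in for a neural
# rhythm and an artifact component).
S = np.c_[np.sin(2 * np.pi * 1.0 * t),
          np.sign(np.sin(2 * np.pi * 3.0 * t))]

# Mix into two observed "channels", as scalp electrodes mix underlying sources.
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])
X = S @ A.T

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)

# Up to sign and permutation, each estimated component should correlate
# strongly with one of the true sources.
corr = np.abs(np.corrcoef(S.T, S_est.T)[:2, 2:])
match = corr.max(axis=1)
print(np.round(match, 3))
```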

    Causality and synchronisation in complex systems with applications to neuroscience

    This thesis presents an investigation of synchronisation and causality, motivated by problems in computational neuroscience. The thesis addresses both theoretical and practical signal processing issues regarding the estimation of interdependence from a set of multivariate data generated by a complex underlying dynamical system. This topic is driven by a series of problems in neuroscience, which represent the principal background motive behind the material in this work. The underlying system is the human brain and the generative process of the data is based on modern electromagnetic neuroimaging methods. In this thesis, the underlying functional brain mechanisms are described using the recent mathematical formalism of dynamical systems in complex networks. This is justified principally on the grounds of the complex hierarchical and multiscale nature of the brain, and it offers new methods of analysis to model its emergent phenomena. A fundamental approach to studying neural activity is to investigate the connectivity pattern developed by the brain's complex network. Three types of connectivity are important to study: 1) anatomical connectivity, referring to the physical links forming the topology of the brain network; 2) effective connectivity, concerned with the way the neural elements communicate with each other using the brain's anatomical structure, through phenomena of synchronisation and information transfer; 3) functional connectivity, an epistemic concept which alludes to the interdependence between data measured from the brain network. The main contribution of this thesis is to present, apply and discuss novel algorithms for functional connectivity, designed to extract different specific aspects of the interaction between the underlying generators of the data. Firstly, a univariate statistic is developed to allow for indirect assessment of synchronisation in the local network from a single time series.
    This approach is useful in inferring the coupling in a local cortical area as observed by a single measurement electrode. Secondly, different existing methods of phase synchronisation are considered from the perspective of experimental data analysis and the inference of coupling from observed data. These methods are designed to address the estimation of medium- to long-range connectivity, and their differences are particularly relevant in the context of volume conduction, which is known to produce spurious detections of connectivity. Finally, an asymmetric temporal metric is introduced in order to detect the direction of the coupling between different regions of the brain. The method developed in this thesis is based on a machine learning extension of the well-known concept of Granger causality. The thesis discussion is developed alongside examples of synthetic and real experimental data. The synthetic data are simulations of complex dynamical systems intended to mimic the behaviour of simple cortical neural assemblies; they are helpful for testing the techniques developed in this thesis. The real datasets are provided to illustrate the problem of brain connectivity in the case of important neurological disorders such as epilepsy and Parkinson's disease. The methods of functional connectivity in this thesis are applied to intracranial EEG recordings in order to extract features which characterize the underlying spatiotemporal dynamics before, during and after an epileptic seizure, and to predict seizure location and onset prior to conventional electrographic signs. The methodology is also applied to an MEG dataset containing healthy, Parkinson's and dementia subjects with the aim of distinguishing pathological from physiological patterns of connectivity.
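    As one concrete example of a phase-synchronisation measure of the kind discussed above (an illustrative standard measure, not the thesis's own estimator), the phase-locking value (PLV) can be computed from Hilbert-transform instantaneous phases; the signals and frequencies below are toy choices.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV = |mean(exp(i*(phase_x - phase_y)))|, using Hilbert-transform
    instantaneous phases; ~1 for a constant phase relation, near 0 for none."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 5000)
base = np.sin(2 * np.pi * 6.0 * t)           # a 6 Hz rhythm
locked = np.sin(2 * np.pi * 6.0 * t + 0.7)   # same rhythm, constant phase lag
noise = rng.standard_normal(t.size)          # unrelated broadband signal

plv_locked = phase_locking_value(base, locked)
plv_noise = phase_locking_value(base, noise)
print(round(plv_locked, 3), round(plv_noise, 3))
```

    Like the bivariate measures criticised in the first abstract, the PLV is pairwise; the thesis's contribution concerns estimators that go beyond this simple setting.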

    Mobile Robots

    The objective of this book is to cover advances in mobile robotics and related technologies applied to the design and development of multi-robot systems. The design of the control system is a complex issue, requiring the application of information technologies to link the robots into a single network. The human-robot interface becomes a demanding task, especially when we try to use sophisticated methods for brain signal processing. Generated electrophysiological signals can be used to command different devices, such as cars, wheelchairs or even video games. A number of developments in navigation and path planning, including parallel programming, can be observed. Cooperative path planning, formation control of multi-robot agents, and communication and distance measurement between agents are shown. Training mobile robot operators is also a very difficult task, because of several factors related to the execution of different tasks. The presented improvement relates to environment model generation based on autonomous mobile robot observations.

    Health monitoring of Gas turbine engines: Framework design and strategies


    Single channel signal separation using pseudo-stereo model and time-frequency masking

    In many practical applications, only one sensor is available to record a mixture of a number of signals. Single-channel blind signal separation (SCBSS) is the research topic that addresses the problem of recovering the original signals from the observed mixture with as little prior knowledge of the signals as possible. Given a single mixture, a new pseudo-stereo mixing model is developed. A “pseudo-stereo” mixture is formulated by weighting and time-shifting the original single-channel mixture. This creates an artificial resemblance to a stereo signal recorded at a single location, which results in the same time delay but different attenuation of the source signals. The pseudo-stereo mixing model relaxes the ill-conditioned underdetermined nature of monaural source separation and exploits the relationship between the signals in the readily observed mixture and the pseudo-stereo mixture. This research proposes three novel algorithms based on the pseudo-stereo mixing model and the binary time-frequency (TF) mask. Firstly, the proposed SCBSS algorithm estimates the signals' weighting coefficients from a ratio derived from the pseudo-stereo mixing model and then constructs a binary maximum-likelihood TF mask for separating the observed mixture. Secondly, a mixture in a noisy background environment is considered; a mixture enhancement algorithm is developed and the proposed SCBSS algorithm is reformulated using an adaptive coefficient estimator, which computes the signal characteristics for each time frame. This property is desirable for both speech and audio signals, as they are aptly characterized as non-stationary AR processes. Finally, a multiple-time-delay (MTD) pseudo-stereo mixture is developed. The MTD mixture enhances the flexibility as well as the separability over the originally proposed pseudo-stereo mixing model.
    The separation algorithm for the MTD mixture has also been derived, and a comparative analysis between the MTD mixture and the pseudo-stereo mixture is presented. All algorithms have been demonstrated on synthesized and real audio signals. The performance of source separation has been assessed by measuring the distortion between the original source and the estimated one according to the signal-to-distortion ratio (SDR). Results show that all proposed SCBSS algorithms yield significantly better separation performance, with an average SDR improvement ranging from 2.4 dB to 5 dB per source, and that they are computationally faster than the benchmarked algorithms.
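    Two ingredients of the approach, the pseudo-stereo construction and binary TF masking, can be sketched as follows. The weight, delay, toy tone sources, and the use of an ideal (oracle) mask in place of the thesis's maximum-likelihood mask estimate are all illustrative simplifications.

```python
import numpy as np
from scipy.signal import stft, istft

fs = 8000
t = np.arange(fs) / fs
s1 = np.sin(2 * np.pi * 440.0 * t)     # source 1: low tone
s2 = np.sin(2 * np.pi * 2000.0 * t)    # source 2: high tone
mix = s1 + s2                          # the single observed mixture

# Pseudo-stereo companion channel: weight and time-shift the same mixture
# (alpha and delay are illustrative values, not from the thesis).
alpha, delay = 0.6, 3
mix2 = alpha * np.roll(mix, delay)

# Binary TF masking, shown here with an ideal (oracle) mask: keep each TF
# cell of the mixture's STFT where source 1 dominates.
f, tt, M = stft(mix, fs=fs, nperseg=512)
_, _, S1 = stft(s1, fs=fs, nperseg=512)
_, _, S2 = stft(s2, fs=fs, nperseg=512)
mask = (np.abs(S1) > np.abs(S2)).astype(float)
_, s1_hat = istft(mask * M, fs=fs, nperseg=512)

# Normalised reconstruction error of source 1 from the masked mixture.
err = float(np.mean((s1_hat[:s1.size] - s1) ** 2) / np.mean(s1 ** 2))
print(err < 0.05)
```

    The actual algorithms estimate the mask from the observed and pseudo-stereo mixtures rather than from the (unknown) sources; the oracle mask above only demonstrates why a binary TF mask suffices when sources are TF-disjoint.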