
    Localization of brain signal sources using blind source separation

    Reliable localization of brain signal sources by using convenient, easy, and hazardless data acquisition techniques can potentially play a key role in the understanding, analysis, and tracking of brain activities for determination of physiological, pathological, and functional abnormalities. The sources can be due to normal brain activities, mental disorders, stimulation of the brain, or movement-related tasks. The focus of this thesis is therefore the development of novel source localization techniques based upon EEG measurements. Independent component analysis is used for blind source separation (BSS) of the EEG sources to yield three different approaches for source localization. In the first method the sources are localized over the scalp pattern using BSS in various subbands, and by investigating the number of components that are likely to be the true sources. In the second method, the sources are separated and their corresponding topographical information is used within a least-squares algorithm to localize the sources within the brain region. The locations of the known sources, such as some normal brain rhythms, are also utilized to help in determining the unknown sources. The final approach is an effective BSS algorithm partially constrained by information related to the known sources. In addition, some investigations have been undertaken to incorporate the non-homogeneity of the head layers, in terms of the changes in electrical and magnetic characteristics and also with respect to the noise level, within the processing methods. Experimental studies with real and synthetic data sets are undertaken using MATLAB, and the efficacy of each method is discussed.
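    As a rough illustration of the second approach (ICA-based separation followed by a least-squares fit of each component's scalp topography), the sketch below assumes a precomputed lead-field matrix and a multichannel EEG array; the function and variable names are illustrative, not taken from the thesis.

    import numpy as np
    from sklearn.decomposition import FastICA

    def localize_components(eeg, leadfield, n_components=5):
        """eeg: (n_channels, n_samples) recording; leadfield: (n_channels, n_locations)."""
        ica = FastICA(n_components=n_components, random_state=0)
        ica.fit(eeg.T)                               # blind separation of the EEG sources
        topographies = ica.mixing_                   # scalp pattern of each component
        locations = []
        for topo in topographies.T:
            # Least-squares amplitude of this topography at each candidate location,
            # keeping the location with the smallest residual.
            residuals = [np.linalg.norm(topo - lf * (lf @ topo) / (lf @ lf))
                         for lf in leadfield.T]
            locations.append(int(np.argmin(residuals)))
        return locations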

    Multimodal Integration: fMRI, MRI, EEG, MEG

    This chapter provides a comprehensive survey of the motivations, assumptions, and pitfalls associated with combining signals such as fMRI with EEG or MEG. Our initial focus in the chapter concerns mathematical approaches for solving the localization problem in EEG and MEG. Next we document the most recent and promising ways in which these signals can be combined with fMRI. Specifically, we look at correlative analysis, decomposition techniques, equivalent dipole fitting, distributed source modeling, beamforming, and Bayesian methods. Due to difficulties in assessing the ground truth of a combined signal in any realistic experiment, a difficulty further confounded by the lack of accurate biophysical models of the BOLD signal, we are cautious about being optimistic regarding multimodal integration. Nonetheless, as we highlight and explore the technical and methodological difficulties of fusing heterogeneous signals, it seems likely that correct fusion of multimodal data will allow previously inaccessible spatiotemporal structures to be visualized and formalized, and thus eventually become a useful tool in brain imaging research.
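    A minimal sketch of the correlative-analysis idea mentioned above: an EEG band-power time course is convolved with a canonical double-gamma HRF and correlated voxel-wise with the BOLD signal. The array names and HRF parameters are illustrative assumptions, not taken from the chapter.

    import numpy as np
    from scipy.stats import gamma

    def canonical_hrf(tr, duration=32.0):
        # Double-gamma haemodynamic response (peak around 5 s, undershoot around 15 s)
        t = np.arange(0.0, duration, tr)
        return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)

    def eeg_bold_correlation(eeg_power, bold, tr):
        """eeg_power: (n_volumes,) EEG band-power regressor resampled at the fMRI TR;
           bold: (n_voxels, n_volumes) BOLD time series. Returns Pearson r per voxel."""
        reg = np.convolve(eeg_power, canonical_hrf(tr))[: len(eeg_power)]
        reg = (reg - reg.mean()) / reg.std()
        z = (bold - bold.mean(axis=1, keepdims=True)) / bold.std(axis=1, keepdims=True)
        return z @ reg / len(reg)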

    Adaptive techniques for the detection and localization of event related potentials from EEGs using reference signals

    In this thesis we show the methods we developed for the detection and localisation of P300 signals from the electroencephalogram. We utilised signal processing theory in order to enhance the current methodology. The work done can be applied both to EEG averages and to single-trial EEG data. We developed a variety of methods dealing with the extraction of the P300 and its subcomponents using independent component analysis and least squares. Moreover, we developed novel localisation methods that localise the desired P300 subcomponent from EEG data. Throughout the thesis the main idea was the use of reference signals, which describe the prior information we have about the sources of interest. The main objective of this thesis is to utilise adaptive techniques, namely blind source separation (BSS), least squares (LS), and spatial filtering, in order to extract the P300 subcomponents from the electroencephalogram (EEG) with greater accuracy than the traditional methods. The first topic of research is the development of constrained BSS and blind signal extraction (BSE) algorithms, to enhance the estimation of the conventional BSS and BSE algorithms. In these methods we use reference signals as prior information, obtained from real EEG data, to aid BSS and BSE in the extraction of the P300 subcomponents. Although this method exhibits very good behaviour on averaged EEG data, its performance degrades when applied to single-trial data, i.e. the response of the brain after one single stimulus. The second topic deals with single-trial EEG data and is based on least squares. Again, we use reference signals to describe the prior knowledge of the P300 subcomponents. In contrast to the first method, the reference signals are Gaussian spike templates with variable latency and width. The target of this algorithm is to measure the properties of the extracted P300 subcomponents and obtain features that can be used in the classification of schizophrenic patients and healthy subjects. Finally, the idea of spatial filtering combined with the use of a reference signal for localisation is introduced for the first time. The designed algorithm localises our desired source from within a mixture of sources where the propagation model of the sources is available. It performs well in the presence of noise and correlated sources. The research presented in this thesis paves the way for introducing adaptive techniques based on reference signals into ERP estimation. The results have been very promising and represent a significant step towards establishing a foundation for future research.
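    A minimal sketch of the second idea above (least-squares fitting of Gaussian spike templates with variable latency and width to a single-trial epoch); the parameter grids and names are illustrative assumptions, not the thesis' actual settings.

    import numpy as np

    def fit_p300(trial, fs, latencies=np.arange(0.25, 0.60, 0.01),
                 widths=np.arange(0.02, 0.10, 0.01)):
        """trial: (n_samples,) single-trial epoch; fs: sampling rate in Hz."""
        t = np.arange(len(trial)) / fs
        best_resid, best_fit = np.inf, None
        for mu in latencies:                              # candidate latencies (s)
            for sigma in widths:                          # candidate widths (s)
                g = np.exp(-0.5 * ((t - mu) / sigma) ** 2)   # Gaussian reference signal
                amp = (g @ trial) / (g @ g)                  # least-squares amplitude
                resid = np.linalg.norm(trial - amp * g)
                if resid < best_resid:
                    best_resid = resid
                    best_fit = {"latency_s": mu, "width_s": sigma, "amplitude": amp}
        return best_fit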

    Advanced algorithms for audio and image processing

    The objective of the thesis is the development of a set of innovative algorithms around the topic of beamforming in the fields of acoustic imaging, audio, and image processing, aimed at significantly improving the performance of devices that exploit these computational approaches. The context is therefore the improvement of devices (ultrasound machines and video/audio devices) already on the market, or the development of new ones which, through the proposed studies, can be introduced into new markets with the launch of innovative high-tech start-ups. This is the motivation and the leitmotiv behind the doctoral work carried out. In the first part of the work an innovative image reconstruction algorithm in the field of ultrasound biomedical imaging is presented, connected to the development of equipment that exploits the computing power currently offered at low cost by GPUs (Moore's law). The proposed target is a new image reconstruction pipeline for a software-based device, abandoning the traditional hardware-based architecture and processing the reconstruction algorithms in the frequency domain. An innovative beamforming algorithm based on seismic migration is presented, in which a transformation of the RF data is carried out and the reconstruction algorithm can apply a masking of the k-space of the data, speeding up the reconstruction process and reducing the computational burden. The analysis and development of the algorithms at the core of the thesis have been approached from a feasibility standpoint, in an off-line context on the MATLAB platform, processing both synthetic simulated data and real RF data; the subsequent development of these algorithms within future ultrasound biomedical equipment will exploit a high-performance computing framework capable of processing customized kernel pipelines (henceforth called 'filters') on CPU/GPU. The filters implemented address Plane Wave Imaging (PWI), an alternative method of acquiring the ultrasound image compared to the state-of-the-art traditional B-mode, which currently exploits sequential insonification of the sample under examination through focused beams transmitted by the probe channels. The PWI mode is interesting and opens up new scenarios compared to the usual signal acquisition and processing techniques, with the aim of making signal processing in general, and image reconstruction in particular, faster and more flexible; the substantially increased frame rate opens up and improves clinical applications. The innovative idea is to introduce into an offline seismic reconstruction algorithm for ultrasound imaging a further filter, named the masking matrix. The masking matrices can be computed offline from the system parameters, since they do not depend on the acquired data. Moreover, they can be pre-multiplied with the propagation matrices, without affecting the overall computational load. Subsequently in the thesis, the topic of beamforming in audio processing on super-directive linear arrays of microphones is addressed.
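    A rough sketch of the masking-matrix idea just described, under assumed names and an assumed evanescent-wave cutoff criterion: a data-independent binary mask, computed offline from the system parameters, zeroes the k-space regions that cannot carry propagating plane-wave energy before the (not shown) migration step.

    import numpy as np

    def kspace_mask(n_t, n_x, fs, pitch, c=1540.0):
        """Binary mask over the 2-D spectrum of an (n_t, n_x) plane-wave RF frame."""
        f = np.fft.fftfreq(n_t, d=1.0 / fs)      # temporal frequencies (Hz)
        kx = np.fft.fftfreq(n_x, d=pitch)        # lateral spatial frequencies (1/m)
        F, KX = np.meshgrid(f, kx, indexing="ij")
        # Keep only propagating components: |kx| <= |f| / c (evanescent part masked out)
        return (np.abs(KX) <= np.abs(F) / c).astype(float)

    def masked_spectrum(rf_frame, fs, pitch):
        """rf_frame: (n_t, n_x) RF data from one plane-wave transmission."""
        return np.fft.fft2(rf_frame) * kspace_mask(*rf_frame.shape, fs, pitch)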
The aim is to make an in-depth analysis of two main families of data-independent approaches and algorithms present in the literature by comparing their performance and the trade-off between directivity and frequency invariance, which is not yet established in the state of the art. The goal is to identify the best algorithm and, from an implementation perspective, to experimentally verify its performance, correlating it with the sensor characteristics and error statistics. Frequency-invariant beam patterns are often required by systems using an array of sensors to process broadband signals. In some experimental conditions, the array spatial aperture is shorter than the involved wavelengths. In these conditions, superdirective beamforming is essential for an efficient system. I present a comparison between two methods that deal with a data-independent beamformer based on a filter-and-sum structure. Both methods (the first numerical, the second analytic) formulate a convex minimization problem, in which the variables to be optimized are the filter coefficients or frequency responses. In the described simulations, I have chosen a geometry and a set of parameters that allow a fair comparison between the performances of the two design methods analyzed. In particular, I addressed a small linear array for audio capture with different purposes (hearing aids, audio surveillance systems, video-conference systems, multimedia devices, etc.). The research activity carried out has been used for the launch of a high-tech device through an innovative start-up in the field of glasses/audio devices (https://acoesis.com/en/). The proposed algorithm has been shown to achieve higher performance than the state of the art of similar algorithms, and additionally makes it possible to relate directivity (or, better, generalized directivity) to the statistics of the sensors' phase and gain errors, which is extremely important for superdirective arrays in real, industrial implementations. The method selected through the comparison is therefore innovative because it quantitatively links the physical construction characteristics of the array to measurable, experimentally verifiable quantities, making the real implementation process controllable. The third topic faced is the reconstruction of the Room Impulse Response (RIR) using blind audio processing methods. Given an unknown audio source, the estimation of time-differences-of-arrival (TDOAs) can be efficiently and robustly solved using blind channel identification and exploiting the cross-correlation identity (CCI). Prior blind works have improved the estimate of TDOAs by means of different algorithmic solutions and optimization strategies, while always sticking to the case of N = 2 microphones. But what if we can obtain a direct improvement in performance by just increasing N? In the fourth chapter I investigate this direction, showing that, despite its arguable simplicity, this approach is capable of sharply improving upon state-of-the-art blind channel identification methods based on CCI, without modifying the computational pipeline. Inspired by our results, we seek to motivate the community and practitioners by paving the way (with two concrete, yet preliminary, examples) towards joint approaches in which advances in the optimization are combined with an increased number of microphones, in order to achieve further improvements.
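    For context, a standard regularized superdirective design for a small linear array is sketched below; this is the classic diffuse-field formulation, not the thesis' specific numerical or analytic method, and all names are illustrative. The regularization parameter trades directivity against robustness to the sensors' gain and phase errors.

    import numpy as np

    def superdirective_weights(freq, positions, look_angle_deg=0.0, c=343.0, eps=1e-3):
        """Per-frequency filter-and-sum weights for a linear array.
           positions: (M,) sensor coordinates along the array axis (m); freq in Hz."""
        k = 2 * np.pi * freq / c
        d = np.exp(-1j * k * positions * np.cos(np.deg2rad(look_angle_deg)))  # steering vector
        dist = np.abs(positions[:, None] - positions[None, :])
        gamma = np.sinc(2 * freq * dist / c)     # diffuse-field coherence matrix
        w = np.linalg.solve(gamma + eps * np.eye(len(positions)), d)
        return w / (np.conj(d) @ w)              # distortionless response in the look direction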
Sound source localisation applications can be tackled by inferring the time-differences-of-arrival (TDOAs) between a sound-emitting source and a set of microphones. Among the referred applications, one can surely list room-aware sound reproduction, room geometry estimation, and speech enhancement. Although a broad spectrum of prior works estimates TDOAs from a known audio source, even when the signal emitted from the acoustic source is unknown, TDOAs can be inferred by comparing the signals received at two (or more) spatially separated microphones, using the notion of the cross-correlation identity (CCI). This is the key theoretical tool, not only to make the ordering of microphones irrelevant during the acquisition stage, but also to cast the problem as blind channel identification, robustly and reliably inferring TDOAs from an unknown audio source. However, when dealing with natural environments, such “mutual agreement” between microphones can be corrupted by a variety of audio ambiguities such as ambient noise. Furthermore, each observed signal may contain multiple distorted or delayed replicas of the emitting source due to reflections or generic boundary effects related to the (closed) environment. Thus, robustly estimating TDOAs is surely a challenging problem, and CCI-based approaches cast it as single-input/multi-output blind channel identification. Such methods promote robustness in the estimate from the methodological standpoint, using energy-based regularization, sparsity, or positivity constraints, while also pre-conditioning the solution space. Last but not least, acoustic imaging is an imaging modality that exploits the propagation of acoustic waves in a medium to recover the spatial distribution and intensity of sound sources in a given region. Well-known and widespread acoustic imaging applications are, for example, sonar and ultrasound. There are active and passive imaging devices: in the context of this thesis I consider a passive imaging system called Dual Cam, which does not emit any sound but acquires it from the environment. In an acoustic image each pixel corresponds to the sound intensity of a source whose position is described by a particular pair of angles and, when the beamformer can work in the near field (as in our case), by the distance on which the system is focused. In the last part of this work I propose the use of a new modality characterized by a richer information content, namely acoustic images, for the sake of audio-visual scene understanding. Each pixel in such images is characterized by a spectral signature, associated with a specific direction in space and obtained by processing the audio signals coming from an array of microphones. By coupling such an array with a video camera, we obtain spatio-temporal alignment of acoustic images and video frames. This constitutes a powerful source of self-supervision, which can be exploited in the learning pipeline we are proposing, without resorting to expensive data annotations. However, since 2D planar arrays are cumbersome and not as widespread as ordinary microphones, we propose that the richer information content of acoustic images can be distilled, through a self-supervised learning scheme, into more powerful audio and visual feature representations. The learnt feature representations can then be employed for downstream tasks such as classification and cross-modal retrieval, without the need for a microphone array.
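    To make the TDOA notion concrete, the sketch below uses a standard GCC-PHAT estimator between two microphone signals; this is deliberately simpler than the CCI-based blind channel identification discussed above and is not the thesis' method.

    import numpy as np

    def gcc_phat_tdoa(x, y, fs, max_tau=None):
        """Estimate the delay of y relative to x (in seconds)."""
        n = len(x) + len(y)
        X, Y = np.fft.rfft(x, n=n), np.fft.rfft(y, n=n)
        cross = X * np.conj(Y)
        cross /= np.abs(cross) + 1e-12           # PHAT weighting (spectral whitening)
        cc = np.fft.irfft(cross, n=n)
        max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
        cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
        return (np.argmax(np.abs(cc)) - max_shift) / fs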
To prove that, we introduce a novel multimodal dataset consisting of RGB videos, raw audio signals, and acoustic images, aligned in space and synchronized in time. Experimental results demonstrate the validity of our hypothesis and the effectiveness of the proposed pipeline, also when tested on tasks and datasets different from those used for training. Chapter 6 closes the thesis, presenting the development of a new Dual Cam proof of concept (POC) as the basis for a spin-off, with the assumption of applying for a hi-tech start-up innovation project (such as an SME Instrument H2020 grant of 50 k€), following the idea of technology transfer. An in-depth analysis of the reference market, technologies and commercial competitors, business model, and freedom to operate (FTO) of the intellectual property is then conducted. Finally, following the latest technological trends (https://www.flir.eu/products/si124/), a new version of the device (a planar audio array) with reduced dimensions and improved technical characteristics is simulated; it is simpler and easier to use than the current one, opening up interesting new development possibilities, not only technical and scientific but also in terms of business impact.

    Methodological consensus on clinical proton MRS of the brain: Review and recommendations

    © 2019 International Society for Magnetic Resonance in Medicine. Proton MRS (1H MRS) provides noninvasive, quantitative metabolite profiles of tissue and has been shown to aid the clinical management of several brain diseases. Although most modern clinical MR scanners support MRS capabilities, routine use is largely restricted to specialized centers with good access to MR research support. Widespread adoption has been slow for several reasons, and technical challenges toward obtaining reliable good-quality results have been identified as a contributing factor. Considerable progress has been made by the research community to address many of these challenges, and in this paper a consensus is presented on deficiencies in widely available MRS methodology and on validated improvements that are currently in routine use at several clinical research institutions. In particular, the localization error of the PRESS localization sequence was found to be unacceptably high at 3 T, and use of the semi-adiabatic localization by adiabatic selective refocusing (sLASER) sequence is a recommended solution. Incorporation of simulated metabolite basis sets into analysis routines is recommended for reliably capturing the full spectral detail available from short-TE acquisitions. In addition, the importance of achieving a highly homogeneous static magnetic field (B0) in the acquisition region is emphasized, and the limitations of current methods and hardware are discussed. Most recommendations require only software improvements, greatly enhancing the capabilities of clinical MRS on existing hardware. Implementation of these recommendations should strengthen current clinical applications and advance progress toward developing and validating new MRS biomarkers for clinical use.
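    A minimal sketch of the basis-set recommendation, under assumed array names: the measured spectrum is modelled as a non-negative linear combination of simulated metabolite basis spectra (real analysis tools additionally model baseline, lineshape, and frequency/phase errors, omitted here).

    import numpy as np
    from scipy.optimize import nnls

    def fit_metabolites(spectrum, basis):
        """spectrum: (n_points,) real part of the measured spectrum;
           basis: (n_points, n_metabolites) simulated basis spectra.
           Returns non-negative metabolite amplitude estimates."""
        amplitudes, _residual_norm = nnls(basis, spectrum)
        return amplitudes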

    Epilepsy

    With the vision of including authors from different parts of the world and different educational backgrounds, and of offering open access to their published work, InTech proudly presents the latest edited book in epilepsy research, Epilepsy: Histological, electroencephalographic, and psychological aspects. Here are twelve interesting and inspiring chapters dealing with the basic molecular and cellular mechanisms underlying epileptic seizures, electroencephalographic findings, and the neuropsychological, psychological, and psychiatric aspects of epileptic, as well as non-epileptic, seizures.

    Doctor of Philosophy

    Human retinitis pigmentosa (RP) typically involves decades of progressive vision loss before some patients become blind, and prospective therapies target patients who have been blind for substantial time, even decades. Evaluations of molecular and cellular therapies have primarily employed short-lived mouse models lacking the scope of remodeling common in human RP. The Rho TgP347L transgenic rabbit offers a unique opportunity to evaluate the primary degeneration event and the subsequent progressive remodeling that ensues over a timespan that recapitulates the human disease phenotype. Retinas from the TgP347L rabbit model of human dominant RP and wild-type littermates were harvested over an 8-year span and processed for transmission electron microscope connectomics, immunocytochemistry for a range of macromolecules, and computational molecular phenotyping for small molecules, including transport tracing with D-Asp. Early time points in the TgP347L rabbit recapitulate the established sequence of photoreceptor loss, retinal remodeling, and reprogramming, and also reveal progressive disruptions in Müller cell metabolism, where, rather than a homogeneous glial population, chaotic metabolic signatures emerge. By 4 years, virtually all remnants of photoreceptors are gone and the neural retina manifests severe cell loss and near-complete loss of glutamine synthetase, though glial glutamate transport persists. By 6 years, there is a global >90% neuronal loss. In some regions the retina is devoid of identifiable cells and is replaced by unknown debris-like assemblies. Though the 6-year retina does have locations with recognizable neurons, all cell types are drastically reduced in number and some have altered metabolic phenotypes. These results are never seen in wild-type littermates, including rabbits that are 8 years old. Electron microscopic analysis using wide-field connectomics imaging of the 6-year TgP347L sample demonstrates some structurally normal synapses, indicating that survivor neurons in these regions are not quiescent despite the lack of sensory input for a substantial period of time. These results indicate that, although photoreceptor degeneration is the trigger, retinal remodeling ultimately gives way to neurodegeneration, a separate, unrelenting disease process independent of the initial insult, closely resembling slow progressive CNS neurodegenerations. Indeed, both metabolic disruption and debris-related degeneration predict the existence of a persistent neuropathy, and increases in α-synuclein levels support a proteinopathy component. Remodeling and neurodegeneration progress until the retina is devoid of recognizable cells. There is no stable state into which the retina settles, and no cell type is spared. This has profound implications for current therapeutics: there will likely be critical windows for implementation but, ultimately, suspension of neurodegenerative remodeling will be required for long-term success.