
    Analysis of Dynamic Brain Imaging Data

    Modern imaging techniques for probing brain function, including functional Magnetic Resonance Imaging, intrinsic and extrinsic contrast optical imaging, and magnetoencephalography, generate large data sets with complex content. In this paper we develop appropriate techniques of analysis and visualization of such imaging data, in order to separate the signal from the noise, as well as to characterize the signal. The techniques developed fall into the general category of multivariate time series analysis, and in particular we extensively use the multitaper framework of spectral analysis. We develop specific protocols for the analysis of fMRI, optical imaging and MEG data, and illustrate the techniques by applications to real data sets generated by these imaging modalities. In general, the analysis protocols involve two distinct stages: `noise' characterization and suppression, and `signal' characterization and visualization. An important general conclusion of our study is the utility of a frequency-based representation, with short, moving analysis windows to account for non-stationarity in the data. Of particular note are (a) the development of a decomposition technique (`space-frequency singular value decomposition') that is shown to be a useful means of characterizing the image data, and (b) the development of an algorithm, based on multitaper methods, for the removal of approximately periodic physiological artifacts arising from cardiac and respiratory sources.
    Comment: 40 pages; 26 figures with subparts, including 3 figures as .gif files. Originally submitted to the neuro-sys archive, which was never publicly announced (was 9804003).
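The multitaper idea at the core of these protocols can be sketched in a few lines: average the periodograms of several orthogonal DPSS (Slepian) tapered copies of the signal to obtain a low-variance spectral estimate. A minimal sketch using SciPy's DPSS windows; the time-bandwidth product `nw`, taper count `k`, and the test signal are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, fs, nw=4.0, k=7):
    """Average the periodograms of k DPSS-tapered copies of x (multitaper estimate)."""
    n = len(x)
    tapers = dpss(n, nw, Kmax=k)                     # k orthogonal Slepian tapers
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    psd = spectra.mean(axis=0) / fs                  # average across tapers
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, psd

fs = 200.0
t = np.arange(1024) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 11.0 * t) + 0.5 * rng.standard_normal(t.size)
freqs, psd = multitaper_psd(x, fs)
print(freqs[np.argmax(psd)])   # peak near the 11 Hz component
```

Averaging over tapers trades a small amount of spectral resolution (the bandwidth 2W set by `nw`) for a large reduction in estimator variance, which is what makes the short moving-window analysis workable.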

    Denoising using local projective subspace methods

    In this paper we present denoising algorithms for enhancing noisy signals based on Local ICA (LICA), Delayed AMUSE (dAMUSE) and Kernel PCA (KPCA). The LICA algorithm relies on applying ICA locally to clusters of signals embedded in a high-dimensional feature space of delayed coordinates. The components resembling the signals can be detected by various criteria, such as estimators of kurtosis or the variance of autocorrelations, depending on the statistical nature of the signal. The proposed algorithm can be applied favorably to the problem of denoising multi-dimensional data. Another projective subspace denoising method using delayed coordinates has been proposed recently with the algorithm dAMUSE. It combines the solution of blind source separation problems with denoising efforts in an elegant way and proves to be very efficient and fast. Finally, KPCA represents a non-linear projective subspace method that is also well suited for denoising. Besides illustrative applications to toy examples and images, we provide an application of all the algorithms considered to the analysis of protein NMR spectra.
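The step these projective subspace methods share can be sketched with plain PCA standing in for the LICA/dAMUSE machinery: embed the signal in delayed coordinates, project onto a low-dimensional signal subspace, and undo the embedding by anti-diagonal averaging. The window length and rank below are illustrative assumptions:

```python
import numpy as np

def delay_embed(x, m):
    """Trajectory matrix: each row is a length-m window of delayed coordinates."""
    return np.lib.stride_tricks.sliding_window_view(x, m)

def subspace_denoise(x, m=20, rank=2):
    X = delay_embed(x, m).astype(float)
    mu = X.mean(axis=0)
    # PCA via SVD; keep the leading `rank` directions as the signal subspace
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    Xd = (U[:, :rank] * s[:rank]) @ Vt[:rank] + mu
    # undo the embedding by averaging the estimates on each anti-diagonal
    out = np.zeros(len(x)); cnt = np.zeros(len(x))
    for j in range(m):
        out[j:j + Xd.shape[0]] += Xd[:, j]
        cnt[j:j + Xd.shape[0]] += 1
    return out / cnt

rng = np.random.default_rng(1)
clean = np.sin(2 * np.pi * np.arange(1000) / 50)
noisy = clean + 0.5 * rng.standard_normal(1000)
denoised = subspace_denoise(noisy)
```

A sinusoid occupies an (at most) two-dimensional subspace of the delayed coordinates, so projecting onto two principal directions discards most of the broadband noise while keeping the signal.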

    Identification of audio evoked response potentials in ambulatory EEG data

    Electroencephalography (EEG) is commonly used for observing brain function over a period of time. It employs a set of non-invasive electrodes on the scalp to measure the electrical activity of the brain. EEG is mainly used by researchers and clinicians to study the brain's responses to a specific stimulus - the event-related potentials (ERPs). Different types of undesirable signals, known as artefacts, contaminate the EEG signal. EEG and ERP signals are very small (on the order of microvolts); they are often obscured by artefacts with much larger amplitudes, on the order of millivolts. This greatly increases the difficulty of interpreting EEG and ERP signals. Typically, ERPs are observed by averaging EEG measurements made over many repetitions of the stimulus. The average may require many tens of repetitions before the ERP signal can be observed with any confidence, which greatly limits the study and use of ERPs. This project explores more sophisticated methods of ERP estimation from measured EEGs. An Optimal Weighted Mean (OWM) method is developed that forms a weighted average to maximise the signal-to-noise ratio in the mean. This is developed further into a Bayesian Optimal Combining (BOC) method, where the information in repetitions of ERP measures is combined to provide a sequence of ERP estimates with monotonically decreasing uncertainty. A Principal Component Analysis (PCA) is performed to identify the basis of signals that explains the greatest amount of ERP variation. Projecting measured EEG signals onto this basis greatly reduces the noise in measured ERPs. The PCA filtering can be followed by OWM or BOC. Finally, cross-channel information can be used: the ERP signal is measured on many electrodes simultaneously, and an improved estimate can be formed by combining electrode measurements.
    A MAP estimate, phrased in terms of Kalman filtering, is developed using all electrode measurements. The methods developed in this project have been evaluated using both synthetic and measured EEG data. A synthetic, multi-channel ERP simulator has been developed specifically for this project. Numerical experiments on synthetic ERP data showed that Bayesian Optimal Combining of trial data, filtered using a combination of PCA projection and Kalman filtering, yielded the best estimates of the underlying ERP signal. This method has been applied to subsets of real Ambulatory Electroencephalography (AEEG) data, recorded while participants performed a range of activities in different environments. From this analysis, the number of trials that need to be collected to observe the P300 amplitude and delay has been calculated for a range of scenarios.
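The OWM idea - weight each trial by its noise precision instead of averaging uniformly - can be sketched as follows. The per-trial noise variances are assumed known here, whereas in practice they would have to be estimated; the toy ERP template is also an illustrative stand-in:

```python
import numpy as np

def optimal_weighted_mean(trials, noise_var):
    """Inverse-variance weighted average across trials (rows of `trials`)."""
    w = 1.0 / np.asarray(noise_var, dtype=float)
    w /= w.sum()                       # normalized precision weights
    return w @ trials                  # (n_trials,) @ (n_trials, n_samples)

rng = np.random.default_rng(0)
erp = np.sin(np.linspace(0, 3 * np.pi, 400))        # toy ERP template
noise_var = np.array([0.1, 0.1, 2.0, 2.0])          # unequal trial noise
trials = erp + rng.standard_normal((4, 400)) * np.sqrt(noise_var)[:, None]

owm = optimal_weighted_mean(trials, noise_var)
plain = trials.mean(axis=0)
# down-weighting the noisy trials should give a lower-error estimate
print(np.mean((owm - erp) ** 2), np.mean((plain - erp) ** 2))
```

Inverse-variance weighting is the minimum-variance unbiased combination of independent measurements, which is why it maximises the SNR of the mean when the trial noise levels differ.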

    Denoising with patch-based principal component analysis

    One important task in image processing is noise reduction, which requires recovering image information by removing noise without losing local structures. In recent decades, patch-based denoising techniques have proved to perform better than pixel-based ones, since a spatial neighbourhood captures the high correlations between nearby pixels and improves the results of similarity measurements. This bachelor thesis deals with denoising strategies based on patch-based principal component analysis. The main focus lies on learning a new basis on which the representation of an image has the best denoising effect. The first approach performs principal component analysis on a global scale, obtaining a basis that reflects the major variance of an image. The second approach learns bases over patches in a local window, so that more image details can be preserved; in addition, local pixel grouping is introduced to find similar patches within a local window. Because the principal component analysis transform needs sufficient samples, the third approach searches for more similar patches across the whole image, using a vantage-point tree for space partitioning. The implementation part discusses parameter selection and time complexity. The denoising performance of the different approaches is evaluated in terms of both PSNR value and visual quality.
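The first, global-basis approach can be sketched directly: gather all overlapping patches, keep the leading principal components of the patch ensemble, and paste the projected patches back with overlap averaging. The patch size and the number of retained components below are illustrative assumptions:

```python
import numpy as np

def patch_pca_denoise(img, patch=8, keep=8):
    """Global patch-PCA: project every patch onto the top `keep` components."""
    H, W = img.shape
    P = np.lib.stride_tricks.sliding_window_view(img, (patch, patch))
    nH, nW = P.shape[:2]
    P = P.reshape(-1, patch * patch).astype(float)   # one row per overlapping patch
    mu = P.mean(axis=0)
    U, s, Vt = np.linalg.svd(P - mu, full_matrices=False)
    D = (U[:, :keep] * s[:keep]) @ Vt[:keep] + mu    # low-rank patch estimates
    # paste patches back, averaging the overlapping estimates
    out = np.zeros((H, W)); cnt = np.zeros((H, W))
    D = D.reshape(nH, nW, patch, patch)
    for i in range(patch):
        for j in range(patch):
            out[i:i + nH, j:j + nW] += D[:, :, i, j]
            cnt[i:i + nH, j:j + nW] += 1
    return out / cnt

rng = np.random.default_rng(0)
clean = np.outer(np.sin(np.linspace(0, 3, 64)), np.cos(np.linspace(0, 3, 64)))
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
denoised = patch_pca_denoise(noisy)
```

On smooth content the patch ensemble concentrates in a few components, so truncating the basis removes most of the noise; the locally-learned variants in the thesis preserve more detail on textured images.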

    Decoding the Encoding of Functional Brain Networks: an fMRI Classification Comparison of Non-negative Matrix Factorization (NMF), Independent Component Analysis (ICA), and Sparse Coding Algorithms

    Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet mathematical constraints such as sparse coding and positivity both provide alternative, biologically plausible frameworks for generating brain networks. Non-negative Matrix Factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms (L1-regularized learning and K-SVD) would impose local specialization and discourage multitasking, so that the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity for encoding task-related brain networks are compared; the brain networks obtained under the different constraints are used as basis functions to encode the observed functional activity at a given time point. These encodings are decoded using machine learning to compare both the algorithms and their assumptions, using the time-series weights to predict whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. For classifying cognitive activity, the sparse coding algorithm of L1-regularized learning consistently outperformed 4 variations of ICA across different numbers of networks and noise levels (p < 0.001). The NMF algorithms, which suppressed negative BOLD signal, had the poorest accuracy. Within each algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy (p < 0.001). The success of sparse coding algorithms may suggest that algorithms which enforce sparsity, discourage multitasking, and promote local specialization capture the underlying source processes better than those, such as ICA, which allow inexhaustible local processes.
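The sparse-encoding step can be illustrated with scikit-learn's Lasso standing in for L1-regularized learning: given fixed spatial network maps, the activity in one fMRI volume is encoded as a sparse weight vector over those maps. The random network maps and the `alpha` setting here are illustrative stand-ins, not the paper's data or parameters:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_vox, n_net = 500, 10
networks = rng.standard_normal((n_vox, n_net))       # columns = spatial network maps
true_w = np.zeros(n_net)
true_w[[1, 4]] = [2.0, -1.5]                         # only two networks active
volume = networks @ true_w + 0.1 * rng.standard_normal(n_vox)

# L1-regularized encoding: sparse weights over the network basis
enc = Lasso(alpha=0.05, fit_intercept=False).fit(networks, volume)
active = np.flatnonzero(np.abs(enc.coef_) > 1e-3)
print(active)   # expected to recover networks 1 and 4
```

A classifier trained on such weight vectors across time points is then the "decoding" stage that compares the competing encoding assumptions.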

    Mitigating wind induced noise in outdoor microphone signals using a singular spectral subspace method

    Wind-induced noise is one of the major concerns in outdoor acoustic signal acquisition. It affects many field-measurement and audio-recording scenarios. Filtering such noise is known to be difficult due to its broadband and time-varying nature. In this paper, a new method to mitigate wind-induced noise in microphone signals is developed. Instead of applying filtering techniques, wind-induced noise is statistically separated from the wanted signals in a singular spectral subspace. The paper is presented in the context of handling microphone signals acquired outdoors for acoustic sensing and environmental noise monitoring or soundscape sampling. The method includes two complementary stages, namely decomposition and reconstruction. The first stage decomposes the mixed signals into eigen-subspaces, then selects and groups the principal components according to their contributions to wind noise and wanted signals in the singular spectrum domain. The second stage reconstructs the signals in the time domain, resulting in the separation of wind noise and wanted signals. Results show that microphone wind noise is separable in the singular spectrum domain, as evidenced by the weighted correlation. The new method might be generalized to other outdoor sound acquisition applications.
    Keywords: microphone; wind noise; matrix decomposition and reconstruction; separability; weighted correlation; acoustic sensing; acoustic signals; environmental noise; monitoring
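The two-stage decompose/reconstruct scheme is in essence singular spectrum analysis. A toy sketch with a slow, high-energy "wind-like" drift and a wanted tone; the window length and the two-component grouping are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def ssa_reconstruct(x, m, ks):
    """Reconstruct the SSA components with indices `ks` via anti-diagonal averaging."""
    X = np.lib.stride_tricks.sliding_window_view(x, m).astype(float)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xk = (U[:, ks] * s[ks]) @ Vt[ks]       # grouped low-rank trajectory matrix
    out = np.zeros(len(x)); cnt = np.zeros(len(x))
    for j in range(m):                     # average estimates on each anti-diagonal
        out[j:j + Xk.shape[0]] += Xk[:, j]
        cnt[j:j + Xk.shape[0]] += 1
    return out / cnt

t = np.arange(2000)
wind = 2.0 * np.sin(2 * np.pi * t / 400)   # slow, high-energy wind-like drift
tone = 0.5 * np.sin(2 * np.pi * t / 20)    # wanted signal
mixed = wind + tone

# stage 1: decompose and group -- the two leading components carry the drift
wind_est = ssa_reconstruct(mixed, m=40, ks=[0, 1])
# stage 2: reconstruct the wanted signal as the remainder
tone_est = mixed - wind_est
```

Grouping by singular-value magnitude works here because the drift dominates the energy; the paper's weighted correlation plays the grouping role when the components are less obviously separated.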

    Removing Spurious Concepts from Neural Network Representations via Joint Subspace Estimation

    Out-of-distribution generalization in neural networks is often hampered by spurious correlations. A common strategy is to mitigate this by removing spurious concepts from the neural network representation of the data. Existing concept-removal methods tend to be overzealous, inadvertently eliminating features associated with the main task of the model and thereby harming model performance. We propose an iterative algorithm that separates spurious from main-task concepts by jointly identifying two low-dimensional orthogonal subspaces in the neural network representation. We evaluate the algorithm on benchmark datasets for computer vision (Waterbirds, CelebA) and natural language processing (MultiNLI), and show that it outperforms existing concept-removal methods.
    Comment: Preprint. Under review. 33 pages.
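The basic operation such methods build on - projecting representations onto the orthogonal complement of a concept subspace - is easy to sketch. This is only the single-subspace projection, not the joint estimation of spurious and main-task subspaces that the abstract proposes, and the concept directions here are random stand-ins:

```python
import numpy as np

def remove_subspace(reps, B):
    """Project representations onto the orthogonal complement of span(B)."""
    Q, _ = np.linalg.qr(B)                 # orthonormal basis for the concept subspace
    return reps - (reps @ Q) @ Q.T

rng = np.random.default_rng(0)
reps = rng.standard_normal((100, 16))      # 100 representations of dimension 16
B = rng.standard_normal((16, 2))           # two spurious concept directions
cleaned = remove_subspace(reps, B)
print(np.allclose(cleaned @ B, 0.0))       # True: no component along the concepts left
```

The paper's point is that choosing span(B) carelessly also deletes main-task features; the joint estimation constrains the two subspaces to be orthogonal so that removal of one spares the other.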

    A subpixel target detection algorithm for hyperspectral imagery

    The goal of this research is to develop a new algorithm for the detection of subpixel-scale target materials in hyperspectral imagery. Signal decision theory is typically used to decide whether a target signal is embedded in random noise, which implies that the detection problem can be mathematically formalized as a statistical hypothesis test. In particular, since any target signature provided by airborne/spaceborne sensors is embedded in structured noise, such as background or clutter signatures, as well as broadband unstructured noise, the problem becomes more complicated, especially when the noise structure is unknown. The approach is based on the statistical hypothesis method known as the Generalized Likelihood Ratio Test (GLRT). The use of the GLRT requires estimating the unknown parameters, and assumes prior knowledge of two subspaces describing target variation and background variation respectively. Therefore, this research consists of two parts: the implementation of the GLRT, and the characterization of the two subspaces through new approaches. Results obtained from computer simulation, HYDICE imagery and AVIRIS imagery show that this approach is feasible.
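With the two subspaces given, a standard matched-subspace GLRT statistic compares the residual energy with and without the target basis in the model. A minimal sketch with random stand-in bases (the real subspaces would come from the characterization step described above):

```python
import numpy as np

def proj_perp(A):
    """Projector onto the orthogonal complement of the column space of A."""
    Q, _ = np.linalg.qr(A)
    return np.eye(A.shape[0]) - Q @ Q.T

def glrt_statistic(y, S, B):
    """Extra energy explained by adding the target basis S to the background B."""
    Pb = proj_perp(B)                       # remove background/clutter subspace
    Psb = proj_perp(np.hstack([B, S]))      # remove background + target subspace
    return (y @ Pb @ y - y @ Psb @ y) / (y @ Psb @ y)

rng = np.random.default_rng(0)
bands = 60
S = rng.standard_normal((bands, 1))         # target signature subspace (assumed known)
B = rng.standard_normal((bands, 4))         # background/clutter subspace
y_bg = B @ rng.standard_normal(4) + 0.1 * rng.standard_normal(bands)
y_tg = y_bg + 0.8 * S[:, 0]                 # pixel with a subpixel target present

print(glrt_statistic(y_tg, S, B), glrt_statistic(y_bg, S, B))
```

Pixels containing the target score far higher than background-only pixels, and thresholding this ratio gives the detector; estimating B and S from the imagery is where the two research parts above come in.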