
    Pre-processing of Speech Signals for Robust Parameter Estimation


    Cortical Dynamics of Language

    The human capability for fluent speech profoundly shapes interpersonal communication and, by extension, self-expression. Language is lost in millions of people each year to trauma, stroke, neurodegeneration, and neoplasms, with devastating impact on social interaction and quality of life. The following investigations were designed to elucidate the neurobiological foundations of speech production, building towards a universal cognitive model of language in the brain. Understanding the dynamical mechanisms supporting cortical network behavior will significantly advance our understanding of how both focal and disconnection injuries yield neurological deficits, informing the development of therapeutic approaches.

    Statistical modeling of the long-range dependent structure of barrier island framework geology and surface geomorphology

    Shorelines exhibit long-range dependence (LRD) and have been shown in some environments to be described in the wavenumber domain by a power law characteristic of scale independence. Recent evidence suggests, however, that the geomorphology of barrier islands can exhibit scale dependence as a result of systematic variations in the underlying framework geology. The LRD of framework geology, which influences island geomorphology and its response to storms and sea-level rise, has not previously been examined. Electromagnetic induction (EMI) surveys conducted along Padre Island National Seashore (PAIS), Texas, USA, reveal that the EMI apparent conductivity (σa) signal, and by inference the framework geology, exhibits LRD at scales of 10¹ to 10² km. Our study demonstrates the utility of describing EMI σa and LiDAR spatial series by a fractional auto-regressive integrated moving average (ARFIMA) process that explicitly models LRD. This method offers a robust and compact way to quantify geological variations along a barrier-island shoreline using three parameters (p, d, q). We discuss how ARFIMA(0, d, 0) models that use the single parameter d provide a quantitative measure for distinguishing free and forced barrier-island evolutionary behavior across different scales. Statistical analyses at regional, intermediate, and local scales suggest that the geologic framework within an area of paleo-channels exerts first-order control on dune height. The exchange of sediment among nearshore, beach, and dune in areas outside this region is scale independent, implying that barrier islands like PAIS exhibit a combination of free and forced behaviors that affect the island's response to sea-level rise.
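Not part of the thesis, but the ARFIMA(0, d, 0) mechanics relied on above can be sketched in a few lines of numpy (all function names and parameter values are illustrative): the fractional difference operator (1 − B)^d expands into binomial weights, which lets one both simulate a long-range-dependent series and whiten it by differencing with the true d.

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n binomial-expansion weights of (1 - B)^d."""
    w = np.zeros(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    """Apply the fractional difference operator (1 - B)^d to a series."""
    w = frac_diff_weights(d, len(x))
    return np.array([w[: t + 1] @ x[t::-1] for t in range(len(x))])

def simulate_arfima_0d0(d, n, rng):
    """ARFIMA(0, d, 0): fractionally integrate Gaussian white noise."""
    eps = rng.standard_normal(n)
    return frac_diff(eps, -d)   # (1 - B)^{-d} applied to eps

rng = np.random.default_rng(0)
x = simulate_arfima_0d0(0.3, 500, rng)   # long-range-dependent series
resid = frac_diff(x, 0.3)                # differencing with the true d
# resid recovers the driving white noise exactly (the truncated operators
# are inverses), which is why d acts as a compact whitening parameter.
```

In practice d is unknown and is estimated, e.g. by maximum likelihood or by minimising the residual autocorrelation over candidate d values.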

    On-line severity assessment of bearing damage via defect sensitive resonance identification and matched filtering

    A microcomputer-based on-line bearing condition monitoring system was developed. Employing synchronised segmentation and parametric spectral comparisons, the system enabled on-line identification of defect-sensitive resonances for an investigated bearing system at an early stage of damage. A matched filter was designed to track the energy contributed by these resonances throughout the remainder of the bearing's life. The magnitude of this energy was found to be well correlated with the development of localised bearing defects. On a programmed PC-AT, identification of the defect-sensitive resonances takes 38 s, and the matched filter reports each new assessment of bearing condition in 7 s. Peer reviewed.
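The matched-filtering step can be illustrated with a toy sketch (not the thesis's system; the resonance template, sampling rate and amplitudes are made-up assumptions): the vibration record is correlated with a template of the defect-sensitive resonance, and the peak correlation energy separates damaged from healthy conditions.

```python
import numpy as np

def matched_filter_energy(signal, template):
    """Matched filter = correlation with the time-reversed template;
    return the peak output energy."""
    out = np.convolve(signal, template[::-1], mode="valid")
    return np.max(out ** 2)

# Illustrative resonance template: a decaying sinusoid at an assumed
# defect-sensitive resonance frequency (all values hypothetical).
fs = 10_000.0
t = np.arange(0, 0.01, 1 / fs)
template = np.exp(-300 * t) * np.sin(2 * np.pi * 2000 * t)

rng = np.random.default_rng(0)
noise = 0.1 * rng.standard_normal(4000)
healthy = noise.copy()
damaged = noise.copy()
damaged[1000:1000 + template.size] += template  # impact excites the resonance

e_h = matched_filter_energy(healthy, template)  # low energy
e_d = matched_filter_energy(damaged, template)  # high energy
```

Tracking e_d over successive records gives the kind of damage-severity trend the abstract describes.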

    Compressed Sensing And Joint Acquisition Techniques In MRI

    The relatively long scan times in Magnetic Resonance Imaging (MRI) limit some clinical applications and the amount of information that can be collected in a reasonable period of time. In practice, 3D imaging requires longer acquisitions, which can lead to reduced image quality due to motion artifacts, patient discomfort, increased costs to the healthcare system, and loss of profit to the imaging center. Efforts to reduce scan time have largely relied on limited k-space data acquisition and special reconstruction techniques. Among these approaches are data extrapolation methods such as constrained reconstruction, data interpolation methods such as parallel imaging, and, more recently, Compressed Sensing (CS). To recover image components from far fewer measurements, CS exploits the compressible nature of MR images by imposing randomness in k-space undersampling schemes. In this work, we explore some intuitive examples of CS reconstruction leading to a primitive algorithm for CS MR imaging. We then demonstrate the application of this algorithm to MR angiography (MRA) with the goal of reducing scan time. Our reconstructions were comparable to the fully sampled MRA images, providing up to three times faster image acquisition via CS. In recovering the vessels, CS showed slight shrinkage of both the width and the amplitude of the vessels in the 20% undersampling scheme; the spatial location of the vessels, however, remained intact. Another direction we pursue is the introduction of joint acquisition for accelerated multi-data-point MR imaging, such as multi-echo or dynamic imaging. Keyhole imaging and view sharing are two techniques for accelerating dynamic acquisitions in which some k-space data is shared between neighboring acquisitions.
In this work, we combine the concept of CS random sampling with keyhole imaging and view-sharing techniques to improve the performance of each method by itself and reduce scan time. Finally, we demonstrate the application of this new method to multi-echo spin echo (MSE) T2 mapping and compare the results with conventional methods. Our proposed technique can potentially provide up to 2.7 times faster image acquisition. The percentage-difference error maps computed between T2 maps generated from joint-acquisition images and from fully sampled images have a 5th–95th percentile of less than 5% error. This technique can potentially be applied to other dynamic imaging acquisitions such as multi-flip-angle T1 mapping or time-resolved contrast-enhanced MRA.
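The CS reconstruction idea can be sketched with a toy experiment (a generic illustration, not the thesis's algorithm or data; the phantom, sampling fraction and threshold are assumed values): randomly undersample k-space, then alternate an image-domain sparsity shrinkage with a k-space data-consistency step.

```python
import numpy as np

def soft(x, lam):
    """Complex soft-thresholding of the pixel magnitudes."""
    mag = np.abs(x)
    return np.where(mag > lam, (1 - lam / np.maximum(mag, 1e-12)) * x, 0)

def cs_recon(kspace, mask, lam=0.05, n_iter=50):
    """Iterative soft-thresholding: shrink in the image domain, then
    re-impose the measured k-space samples."""
    img = np.fft.ifft2(kspace)
    for _ in range(n_iter):
        img = soft(img, lam)          # promote image-domain sparsity
        k = np.fft.fft2(img)
        k[mask] = kspace[mask]        # data consistency on measured samples
        img = np.fft.ifft2(k)
    return img

rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img.flat[rng.choice(64 * 64, size=20, replace=False)] = 1.0  # sparse phantom
mask = rng.random((64, 64)) < 0.3      # keep ~30% of k-space at random
kspace = np.fft.fft2(img) * mask

zero_filled = np.abs(np.fft.ifft2(kspace))
recon = np.abs(cs_recon(kspace, mask))
err_zf = np.linalg.norm(zero_filled - img)   # aliased zero-filled error
err_cs = np.linalg.norm(recon - img)         # much smaller CS error
```

Real MR images are sparse in a transform domain (e.g. wavelets) rather than pixel-wise, so the shrinkage would normally be applied to transform coefficients; angiograms are closest to this pixel-sparse toy case.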

    Signal processing techniques for the enhancement of marine seismic data

    This thesis presents several signal processing techniques for the enhancement of marine seismic data. Marine seismic exploration provides an image of the Earth's subsurface from reflected seismic waves. Because the recorded signals are contaminated by various sources of noise, minimizing their effects with new attenuation techniques is necessary. A statistical analysis of background noise is conducted using Thomson's multitaper spectral estimator and Parzen's amplitude density estimator. The results provide a statistical characterization of the noise, which we use in the derivation of signal enhancement algorithms. First, we focus on single-azimuth stacking methodologies and propose novel stacking schemes using either enhanced weighted sums or a Kalman filter. The enhanced methods are shown to yield superior results through their ability to produce cleaner and better-defined reflected events, as well as a larger number of reflections in deep waters. A comparison of the proposed stacking methods with existing ones is also discussed. We then address the problem of random noise attenuation and present an innovative application of sparse code shrinkage and independent component analysis. Sparse code shrinkage is a valuable method when a noise-free realization of the data can be generated to provide data-driven shrinkages. Several distribution models are investigated; the normal inverse Gaussian density yields the best results, and other acceptable choices of density are discussed as well. Finally, we consider the attenuation of flow-generated nonstationary coherent noise and seismic interference noise. We propose a multiple-input adaptive noise canceller that utilizes a normalized least mean squares algorithm with a variable normalized step size derived as a function of instantaneous frequency.
This filter attenuates the coherent noise successfully when used either by itself or in combination with a time-frequency median filter, depending on the noise spectrum and its repartition along the data. Its application to seismic interference attenuation is also discussed.
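The adaptive-canceller idea can be sketched with a single-reference normalised LMS (NLMS) filter — a generic textbook version, not the thesis's multiple-input, variable-step design; the coupling path, tap count and step size are assumed values:

```python
import numpy as np

def nlms_cancel(primary, reference, n_taps=16, mu=0.5, eps=1e-8):
    """NLMS adaptive noise canceller: estimate the coherent noise in
    `primary` from `reference` and output the cleaned residual."""
    w = np.zeros(n_taps)
    out = primary.copy()
    for n in range(n_taps - 1, len(primary)):
        x = reference[n - n_taps + 1: n + 1][::-1]   # tap-delay line
        y = w @ x                                    # noise estimate
        e = primary[n] - y                           # cleaned sample
        w += (mu / (eps + x @ x)) * e * x            # normalised update
        out[n] = e
    return out

rng = np.random.default_rng(0)
n = 5000
signal = np.zeros(n)
signal[::500] = 1.0                        # sparse "reflections"
reference = rng.standard_normal(n)         # noise-only reference channel
h = np.array([0.8, -0.4, 0.2])             # unknown coupling path (assumed)
coherent = np.convolve(reference, h)[:n]   # coherent noise in the data
primary = signal + coherent

cleaned = nlms_cancel(primary, reference)  # reflections survive, noise drops
```

The thesis's variant additionally makes the normalised step size a function of instantaneous frequency, which this fixed-step sketch does not model.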

    Overcomplete Mathematical Models with Applications

    Chen, Donoho and Saunders (1998) deal with the problem of sparse representation of vectors (signals) using special overcomplete (redundant) systems of vectors spanning the signal space. Typically such systems (also called frames) are obtained either by refining an existing basis or by merging several such bases (refined or not) of various kinds (so-called packets). In contrast to vectors, which belong to a finite-dimensional space, the problem of sparse representation may be formulated within the more general framework of an (even infinite-dimensional) separable Hilbert space (Veselý, 2002b; Christensen, 2003). Such a functional approach allows us to obtain a more precise representation of objects from such a space which, unlike vectors, are not discrete by nature. In this thesis, I attack the problem of sparse representation in overcomplete time series models using expansions in the Hilbert space of random variables with finite variance. A numerical study demonstrates the benefits and limits of this approach when applied to generalized linear models and to overcomplete VARMA models of multivariate stationary time series, respectively. Having carried out and analyzed a large number of numerical simulations as well as real-data models, we can conclude that the sparse method reliably identifies nearly zero parameters, allowing us to reduce the originally badly conditioned overparametrized model. It thus significantly reduces the number of estimated parameters. Consequently there is no need to fix the model orders in advance, a common preliminary step in standard techniques. For short time series paths (100 samples or fewer), the sparse parameter estimates provide more precise predictions than those based on standard maximum likelihood estimators from MATLAB's System Identification Toolbox (IDENT). For longer paths (500 or more), both techniques yield nearly equally accurate predictions. On the other hand, solving such problems requires more sophisticated methods and therefore more computation time, which nevertheless remains acceptable.
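The sparse-representation problem described here is commonly posed as an ℓ1-regularised least-squares (basis pursuit denoising) problem. A minimal iterative soft-thresholding (ISTA) sketch — a generic illustration, not the thesis's estimator; the dictionary and parameter values are made up — might look like:

```python
import numpy as np

def ista(A, b, lam=0.01, n_iter=500):
    """ISTA for min 0.5*||Ax - b||^2 + lam*||x||_1: gradient step on the
    quadratic term followed by soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # shrinkage
    return x

# Overcomplete dictionary: the identity basis merged with a random basis
rng = np.random.default_rng(0)
n = 50
A = np.hstack([np.eye(n), rng.standard_normal((n, n)) / np.sqrt(n)])
x_true = np.zeros(2 * n)
x_true[[3, 40, 70]] = [2.0, -1.5, 1.0]     # sparse ground truth
b = A @ x_true
x_hat = ista(A, b)                          # sparse representation of b
```

The ℓ1 penalty drives most coefficients exactly to zero, which is the mechanism behind "identifying nearly zero parameters" in the overparametrized model.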

    Enhancing brain-computer interfacing through advanced independent component analysis techniques

    A brain-computer interface (BCI) is a direct communication system between a brain and an external device in which messages or commands sent by an individual do not pass through the brain's normal output pathways but are detected through brain signals. Severe motor impairments caused by conditions such as amyotrophic lateral sclerosis, head trauma and spinal injuries may cause patients to lose muscle control and become unable to communicate with the outside environment. Currently no effective cure or treatment has been found for these conditions, so using a BCI system to rebuild the communication pathway is a possible alternative. Among the different types of BCI, electroencephalogram (EEG) based BCIs are becoming popular due to EEG's fine temporal resolution, ease of use, portability and low set-up cost. However, EEG's susceptibility to noise is a major obstacle to developing a robust BCI. Signal processing techniques such as coherent averaging, filtering, the FFT and AR modelling are used to reduce the noise and extract components of interest. However, these methods process the data in the observed mixture domain, which mixes the components of interest with noise. This limitation means that the extracted EEG signals may still contain noise residue or, conversely, that the removed noise may still contain part of the EEG signal. Independent Component Analysis (ICA), a Blind Source Separation (BSS) technique, is able to extract relevant information from noisy signals and separate the underlying sources into independent components (ICs). The most common assumption of ICA methods is that the source signals are unknown and statistically independent; under this assumption, ICA is able to recover the source signals.
Since ICA concepts appeared in the fields of neural networks and signal processing in the 1980s, many ICA applications in telecommunications, biomedical data analysis, feature extraction, speech separation, time-series analysis and data mining have been reported in the literature. In this thesis several ICA techniques are proposed to address two major issues for BCI applications: reducing the recording time needed, in order to speed up the signal processing, and reducing the number of recording channels whilst improving, or at least maintaining, the final classification performance. These will make BCI a more practical prospect for everyday use. The thesis first defines BCI and the diverse BCI models based on different control patterns. After the general idea of ICA is introduced, along with some modifications to ICA, several new ICA approaches are proposed. The practical work starts with preliminary analyses of the Southampton BCI pilot datasets using basic and then advanced signal processing techniques. The proposed ICA techniques are then presented using a multi-channel event-related potential (ERP) based BCI. Next, the ICA algorithm is applied to a multi-channel spontaneous-activity based BCI. The final ICA approach examines the possibility of using ICA based on just one or a few channel recordings in an ERP based BCI. The novel ICA approaches for BCI systems presented in this thesis show that ICA is able to accurately and repeatedly extract the relevant information buried within noisy signals, enhancing the signal quality so that even a simple classifier can achieve good classification accuracy. In the ERP based BCI application, after multichannel ICA the data using just eight averages/epochs achieve 83.9% classification accuracy, whilst the data processed by coherent averaging reach only 32.3%.
In the spontaneous-activity based BCI, the multi-channel ICA algorithm can effectively extract discriminatory information from two types of single-trial EEG data, improving classification accuracy by about 25% on average compared to the performance on the unpreprocessed data. The single-channel ICA technique on the ERP based BCI produces much better results than a lowpass filter, and an appropriate number of averages improves the signal-to-noise ratio of P300 activities, which helps to achieve better classification. These advantages will lead to a reliable and practical BCI for use outside the clinical laboratory.
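The core separation step underlying such BCIs can be sketched with a minimal symmetric FastICA in plain numpy — a generic textbook algorithm, not one of the thesis's proposed variants; the sources and mixing matrix are synthetic: whiten the mixtures, then run a fixed-point iteration with a tanh nonlinearity and symmetric decorrelation.

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Symmetric FastICA with tanh nonlinearity.
    X: (n_sources, n_samples) observed mixtures; returns estimated sources."""
    n, m = X.shape
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(X @ X.T / m)        # covariance eigendecomposition
    Z = (E @ np.diag(d ** -0.5) @ E.T) @ X    # whitened data
    W = np.random.default_rng(seed).standard_normal((n, n))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)                    # nonlinearity g
        W = G @ Z.T / m - np.diag((1 - G ** 2).mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W)
        W = U @ Vt                            # symmetric decorrelation
    return W @ Z

# Two synthetic sources mixed by a matrix unknown to the algorithm
t = np.linspace(0, 8, 2000)
S = np.vstack([np.sign(np.sin(3 * t)), np.sin(5 * t)])
A = np.array([[1.0, 0.6], [0.5, 1.0]])
X = A @ S
S_hat = fastica(X)   # recovered up to order, sign and scale
```

In the EEG setting the rows of X would be channel recordings, and the ICs corresponding to artifacts or noise would be discarded before reconstructing the cleaned signal.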

    Attenuation and velocity structure in the area of Pozzuoli-Solfatara (Campi Flegrei, Italy) for the estimate of local site response

    In the present work I infer the 1D shear-wave velocity model of the volcanic area of Pozzuoli-Solfatara using the dispersion properties of Rayleigh waves generated by artificial explosions and of microtremor. The group-velocity dispersion curves are retrieved by applying the Multiple Filter Technique (MFT) to single-station recordings of air-gun sea shots. The seismic signals are filtered in different frequency bands and the dispersion curves are obtained by evaluating the arrival times of the envelope maxima of the filtered signals. Fundamental and higher modes are carefully recognized and separated using a Phase Matched Filter (PMF). The obtained dispersion curves indicate Rayleigh-wave fundamental-mode group velocities ranging from about 0.8 to 0.6 km/s over the 1-12 Hz frequency band. I also propose a new approach, based on autoregressive analysis, to recover group-velocity dispersion. I first present a numerical example on a synthetic test signal and then apply the technique to the data recorded at Solfatara, in order to compare the results with those inferred from the MFT analysis. Moreover, I analyse ambient noise data recorded at a dense array using Aki's spatial autocorrelation technique (SAC) and an extended version of this method (ESAC). The obtained phase velocities range from 1.5 km/s to 0.3 km/s over the 1-10 Hz frequency band. The group-velocity dispersion curves are then inverted to infer a shallow shear-wave velocity model, down to a depth of about 250 m, for the area of Pozzuoli-Solfatara. The shear-wave velocities thus obtained are compatible with those derived both from cross-hole and down-hole measurements in neighbouring wells and from laboratory experiments. These data are finally interpreted in the light of the geological setting of the area. I also perform an attenuation study on array recordings of the signals generated by the shots.
The attenuation curve was retrieved by analysing the decay of the Rayleigh-wave amplitude spectra with distance in different frequency bands, and was then inverted to infer the shallow Q⁻¹ model. Using the obtained velocity and attenuation models, I calculate the theoretical ground response to a vertically incident SH wave, obtaining two main amplification peaks centered at frequencies of 2.1 and 5.4 Hz. This transfer function was compared with those obtained experimentally by applying Nakamura's technique to microtremor data, artificial explosions and local earthquakes. Agreement among the transfer functions is observed only for the amplification peak at 5.4 Hz. Finally, as a complementary contribution that might be used for the assessment of seismic risk in the investigated area, I evaluate the peak ground acceleration (PGA) for the whole Campi Flegrei caldera and locally for the Pozzuoli-Solfatara area by performing stochastic simulations of ground motion, partially constrained by the previously described results. Two different methods, random vibration theory (RVT) and ground motion generated from a Gaussian distribution (GMG), are used, providing PGA values of 0.04 g and 0.097 g for Campi Flegrei and Pozzuoli-Solfatara, respectively.
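The MFT's core computation — narrow Gaussian band-pass filtering, envelope extraction via the analytic signal, and conversion of the envelope-maximum arrival time into a group velocity — can be sketched on a synthetic dispersed arrival (all values illustrative, not the thesis's data):

```python
import numpy as np

def mft_group_velocity(trace, fs, distance, freqs, alpha=50.0):
    """Multiple Filter Technique sketch: Gaussian-filter the trace around
    each centre frequency, envelope it via the analytic signal, and turn the
    envelope-maximum arrival time into a group velocity (distance / time)."""
    n = len(trace)
    f = np.fft.rfftfreq(n, 1 / fs)
    spec = np.fft.rfft(trace)
    t = np.arange(n) / fs
    vels = []
    for fc in freqs:
        H = np.exp(-alpha * ((f - fc) / fc) ** 2)   # narrow Gaussian band-pass
        filt = np.fft.irfft(spec * H, n)
        F = np.fft.fft(filt)                        # analytic signal (Hilbert)
        F[n // 2 + 1:] = 0
        F[1:n // 2] *= 2
        env = np.abs(np.fft.ifft(F))
        vels.append(distance / t[np.argmax(env)])
    return np.array(vels)

fs = 100.0
t = np.arange(0, 10, 1 / fs)
distance = 2.0                                      # source-receiver offset, km
# Synthetic dispersed wave: 2 Hz energy arrives at 2.5 s, 8 Hz at 4.0 s
trace = (np.exp(-((t - 2.5) / 0.3) ** 2) * np.sin(2 * np.pi * 2 * t)
         + np.exp(-((t - 4.0) / 0.3) ** 2) * np.sin(2 * np.pi * 8 * t))
v = mft_group_velocity(trace, fs, distance, freqs=[2.0, 8.0])
```

Repeating this over a dense grid of centre frequencies traces out the group-velocity dispersion curve that is then inverted for the shear-wave velocity model.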

    Subsurface and Transcutaneous Raman Spectroscopy, Imaging, and Tomography.

    Light scattering prevents the direct chemical monitoring of tissue and turbid materials, making it difficult to obtain accurate chemical information. We have developed novel fiber-optic Raman probes for biomedical applications that are capable of recovering Raman spectra through several millimeters of overlying turbid material such as skin, muscle, and adipose tissue. This is accomplished by spatially separating the illuminated region from the collection fields of view. In light-scattering systems, this spatial separation emphasizes signal originating from below the surface of the scattering material. Engineering polymers and animal models have been used to investigate the depths at which accurate Raman spectrum recovery is achievable and to demonstrate the preservation of spatial information. Using these novel fiber-optic probe configurations we have recovered accurate Raman spectra of bone tissue through 5 mm of overlying tissue; we have validated our measurements in vivo and demonstrated Raman tomography for the first time. Ph.D. thesis in Chemistry, University of Michigan, Horace H. Rackham School of Graduate Studies.