108 research outputs found

    Coxeter Groups and Wavelet Sets

    Full text link
    A traditional wavelet is a special case of a vector in a separable Hilbert space that generates a basis under the action of a system of unitary operators defined in terms of translation and dilation operations. A Coxeter/fractal-surface wavelet is obtained by defining fractal surfaces on foldable figures, which tessellate the embedding space by reflections in their bounding hyperplanes instead of by translations along a lattice. Although the two theories look different at the outset, there are connections and commonalities, which are exhibited in this semi-expository paper. In particular, there is a natural notion of a dilation-reflection wavelet set. We prove that dilation-reflection wavelet sets exist for arbitrary expansive matrix dilations, paralleling the traditional dilation-translation wavelet theory. Certain measurable sets can serve simultaneously as dilation-translation wavelet sets and dilation-reflection wavelet sets, although the orthonormal structures generated in the two theories are considerably different.
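In the classical dilation-translation theory that this paper parallels, the definition of a wavelet set and its best-known example can be stated compactly; the following is standard textbook background, not the paper's dilation-reflection construction:

```latex
% A measurable set E \subset \mathbb{R} is a (dyadic) wavelet set when
% \hat{\psi} = \chi_E defines an orthonormal wavelet, which holds exactly
% when the translates and the dyadic dilates of E each tile the line:
\[
  \hat{\psi} = \chi_E, \qquad
  \bigsqcup_{k \in \mathbb{Z}} (E + 2\pi k) = \mathbb{R}, \qquad
  \bigsqcup_{j \in \mathbb{Z}} 2^{j} E = \mathbb{R}
  \quad \text{(disjoint up to null sets).}
\]
% The Shannon set is the standard example:
\[
  E_{\mathrm{Shannon}} = [-2\pi, -\pi) \cup [\pi, 2\pi).
\]
```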

    Novel approaches for quantitative electrogram analysis for rotor identification: Implications for ablation in patients with atrial fibrillation

    Get PDF
    University of Minnesota Ph.D. dissertation. May 2017. Major: Biomedical Engineering. Advisor: Elena Tolkacheva. 1 computer file (PDF); xxviii, 349 pages + 4 audio/video files. Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia that causes stroke, affecting more than 2.3 million people in the US. Catheter ablation with pulmonary vein isolation (PVI) to terminate AF is successful for paroxysmal AF but has limited success in patients with persistent AF, as current mapping methods cannot identify AF active substrates outside of the PVI region. Recent evidence on the mechanistic understanding of AF pathophysiology suggests that ectopic activity, localized re-entrant circuits with fibrillatory propagation, and multiple re-entrant circuits may all be involved in human AF. Accordingly, the hypothesis that rotors are an underlying AF mechanism is compatible with both the presence of focal discharges and multiple wavelets. Rotors are stable electrical sources with a characteristic spiral-wave appearance: a pivot point surrounded by a peripheral region. Targeted ablation at rotor pivot points in several animal studies has demonstrated efficacy in terminating AF. The objective of this dissertation was to develop robust spatiotemporal mapping techniques that can fully capture the intrinsic dynamics of the non-stationary intracardiac electrogram time series to accurately identify the rotor pivot zones that may cause and maintain AF. In this thesis, four time-domain approaches, namely multiscale entropy (MSE), recurrence period density entropy (RPDE), kurtosis, and intrinsic mode function (IMF) complexity index, and one frequency-domain approach, namely multiscale frequency (MSF), were proposed and developed for the accurate identification of rotor pivot points. 
The novel approaches were validated using optical mapping data with induced ventricular arrhythmia in ex vivo isolated rabbit hearts with single, double, and meandering rotors (including numerically simulated data). The results demonstrated the efficacy of the novel approaches in accurately identifying rotor pivot points. The chaotic nature of the rotor pivot point resulted in higher complexity, as measured by MSE, RPDE, kurtosis, IMF complexity index, and MSF, compared to the stable rotor periphery, which enabled its accurate identification. Additionally, the feasibility of using a conventional catheter mapping system to generate patient-specific 3D maps for intraprocedural guidance of catheter ablation using these novel approaches was demonstrated with 1055 intracardiac electrograms obtained from both atria in a persistent AF patient. Notably, the 3D maps provided clinically significant information on rotor pivot point identification and the presence of rotors themselves. Validation of these novel approaches in large datasets of paroxysmal and persistent AF patients is required to evaluate their clinical utility in identifying rotors as potential targets for AF ablation.
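To illustrate the kind of complexity measure involved, here is a minimal multiscale (sample) entropy sketch in Python. It is simplified relative to the dissertation's implementation, and the parameter values (m = 2, r = 0.2·SD) are common defaults rather than the author's choices:

```python
import numpy as np

def coarse_grain(x, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy: -log of the conditional probability that template
    vectors matching for m points (Chebyshev distance <= r) also match
    for m + 1 points.  Higher values indicate a more irregular signal."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def match_pairs(mm):
        # Embed the series as overlapping vectors of length mm.
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        # Count distinct pairs (i != j) within tolerance r.
        return (np.sum(d <= r) - len(emb)) / 2

    b, a = match_pairs(m), match_pairs(m + 1)
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

def multiscale_entropy(x, scales=(1, 2, 3), m=2):
    """Sample entropy of the coarse-grained series at each scale."""
    return [sample_entropy(coarse_grain(x, s), m) for s in scales]
```

A chaotic pivot-point electrogram would score higher than a regular periphery signal, which is the contrast the mapping approach exploits.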

    Novel therapies for hypertension and associated cardiovascular risk

    Get PDF
    University of Minnesota Ph.D. dissertation. August 2018. Major: Biomedical Engineering. Advisor: Alena Talkachova. 1 computer file (PDF); xvii, 134 pages. This thesis comprises two parts. The first part investigates a novel therapy, vagus nerve stimulation, for hypertension and hypertension-induced heart disease. Hypertension affects over 1 billion people worldwide, and its clinical management is challenging. Left uncontrolled, high blood pressure can significantly increase the risk of cardiovascular events. The majority of hypertensive patients are treated with anti-hypertensive drugs to control blood pressure, but many limitations exist, including resistant hypertension, inability to tolerate therapy, and non-compliance with the medication regimen. For these patients, an alternative approach is needed to control blood pressure. Recently, the imbalance in the autonomic nervous system that is evident in hypertension has become the target of novel device-based therapies such as vagus nerve stimulation. The main goal of this research is to evaluate the efficacy of vagus nerve stimulation in treating hypertension and hypertension-induced heart disease. This thesis investigates the impact of vagus nerve stimulation on disease progression, survival, and cardiovascular remodeling in Dahl salt-sensitive hypertensive rats. Overall, the results of this work provide evidence for the beneficial therapeutic effects of vagus nerve stimulation in hypertension and motivate future studies to optimize therapy parameters and further elucidate therapeutic mechanisms. The second part of this thesis focuses on atrial fibrillation and the evaluation of new mapping techniques for improving rotor localization for ablation procedures. Currently, success rates for ablation procedures for non-paroxysmal atrial fibrillation are low, and patients require repeat procedures or lifelong pharmacological therapy to reduce the risk of stroke. 
Improved signal processing techniques for mapping electrical activity in the atria can further our understanding of the generation and maintenance of atrial fibrillation and ultimately improve the success rates of ablation procedures in terminating the arrhythmia. The main goal of this work was to validate new signal processing techniques – multiscale frequency, kurtosis, Shannon entropy, and multiscale entropy – for identifying regions of abnormal electrical activity. The results of this work demonstrate the improved accuracy of these novel techniques in mapping rotors in cardiac arrhythmias and motivate further studies evaluating more complex arrhythmias and human intracardiac electrograms.

    Single-molecule force spectroscopy studies of integrin-mediated cell signaling

    Get PDF
    Integrins constitute an important class of cell adhesion receptors, as they bidirectionally transduce information between the cytoplasm of biological cells and the surrounding extracellular matrix. By means of atomic force microscopy, spectroscopic measurements of the specific interactions of integrins with their corresponding ligands were performed. In principle, these experiments allow deep insight into cellular signal transduction, but despite sophisticated vibration-isolation systems, the acquired data exhibit very low signal-to-noise ratios that impair accurate analysis. This drawback was overcome by a novel post-processing algorithm, which significantly reduces the noise and thus improves the signal-to-noise ratio, revealing previously invisible signal features. Another important task when evaluating such experiments is the identification of step-like transitions corresponding to the unbinding of receptor-ligand bonds. To this end, a technique has been developed that can be adjusted to detect very low or narrow steps even when they are smooth and hidden by noise. By applying the noise reduction algorithm to force spectroscopy data obtained with living T lymphocytes, the onset force required for the extraction of a membrane tether could be observed for the first time. Using the step detection method, strong evidence of sub-10-pN steps was found. Moreover, it was shown that the chemokine SDF-1α leads to a strengthening of individual bonds between VLA-4, one type of integrin primarily involved in the early stages of chemokine-induced lymphocyte adhesion, and its ligand VCAM-1. The adhesion strengthening is accompanied by a stiffening of the integrins’ environment. It is independent of an intracellular binding site of VLA-4 to talin, the major intracellular factor involved in integrin affinity modulation. 
Further, the functional role of the integrin trans-membrane domains in receptor-ligand interactions was explored by analyzing the effects of two mutations of the integrin αvβ3 on cellular adhesion: a chimera encompassing the strongly dimerizing trans-membrane domain of glycophorin A, and a point mutation known to induce trans-membrane domain dissociation. The results show that both constructs provoke strong cell adhesion and correspond well to a three-state model of integrin activation, in which a resting state is activated by intracellular ligands to an intermediate state without trans-membrane domain separation. The dimerizing chimera mimics the intermediate state, which strengthens cellular adhesion.
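The step-detection idea can be sketched with a generic sliding-window statistic on a noisy force trace; this is a simplified, generic detector with illustrative window size and threshold, not the adjustable method developed in the thesis:

```python
import numpy as np

def step_score(y, w):
    """Two-sample t-like statistic between the w points before and after
    each index; a large |score| marks a candidate step edge."""
    n = len(y)
    score = np.zeros(n)
    for i in range(w, n - w):
        left, right = y[i - w:i], y[i:i + w]
        pooled = np.sqrt((left.var() + right.var()) / w) + 1e-12
        score[i] = (right.mean() - left.mean()) / pooled
    return score

def detect_steps(y, w=25, thresh=4.0):
    """Indices where |score| peaks locally above the threshold."""
    s = np.abs(step_score(y, w))
    return [i for i in range(1, len(y) - 1)
            if s[i] > thresh and s[i] >= s[i - 1] and s[i] >= s[i + 1]]
```

Because the statistic normalizes the mean jump by the local noise level, small steps buried in noise can still score highly when the window is long enough.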

    Cenozoic Tectono-Stratigraphic Evolution And Petroleum System Of The Eastern Cordillera Foothills And Adjacent Basins (Colombia)

    Get PDF
    New gravity, magnetic, well, and 3D seismic data provide a better understanding of the regional structures and tectonic movements affecting the petroleum system and sedimentary history of an important oil-producing region in the Foothills of the Eastern Cordillera of Colombia. To the south, the Garzón fault was previously interpreted as a right-lateral strike-slip fault. New seismic, well, and gravity data demonstrate that this fault is also a low-angle (12–17 degrees) Andean fault that thrusts Precambrian basement 10 to 17 km northwestward over Miocene rocks of the Upper Magdalena Valley (UMV) in a prospective footwall anticline. The new geophysical data, together with previous field mapping, were used to produce the first gravity and magnetic maps and a retrodeformable structural cross section of the northern Garzón Massif. The new model distinguishes for the first time distinct episodes of “thin-skinned” and “thick-skinned” deformation in the Garzón Massif. To the north, in the Foothills of the Eastern Cordillera, spectral decomposition analysis of a 3D seismic volume and sequence stratigraphy of the Cenozoic sedimentary succession have yielded new chronostratigraphic conceptual models and facies predictions. Earlier exploration in the Foothills focused on structural traps and led to a number of significant discoveries, including the Cusiana and Cupiagua oil fields. However, more challenging stratigraphic play types have emerged from spectral decomposition (STFF, CWT, MP) analysis, enhancing our understanding of the fluvial reservoir architecture. This technique has also helped improve our understanding of how regional structures and tectonic movements affected local sedimentary history and produced regional geomorphologic features. Use of this technology can open new opportunities for hydrocarbon exploration in the eastern foothills of the northern Andes and adjacent basins.
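In its simplest short-time Fourier form, spectral decomposition extracts, along each trace, the amplitude at a chosen frequency; a minimal sketch follows. The CWT or matching-pursuit variants mentioned in the abstract would slot in where the STFT is computed, and all parameters here are illustrative:

```python
import numpy as np
from scipy.signal import stft

def frequency_slice(trace, fs, f0, nperseg=256):
    """Amplitude envelope over time of the STFT bin nearest to f0.
    Tuning f0 to a geologic feature's dominant frequency highlights it
    (e.g. thin fluvial channels in a seismic volume)."""
    f, t, Z = stft(trace, fs=fs, nperseg=nperseg)
    k = int(np.argmin(np.abs(f - f0)))
    return t, np.abs(Z[k])
```

Applying this per trace across a 3D volume and mapping the amplitudes yields the frequency-slice images used for stratigraphic interpretation.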

    Condition assessment of bridge structures using statistical analysis of wavelets

    Get PDF
    Remote monitoring of structures has emerged as an important concern for engineers to maintain safety and reliability of civil infrastructure during its service life. 
Structural Health Monitoring (SHM) techniques are becoming increasingly popular for diagnosing the "state" of structures affected by aging, material degradation, or defects introduced during construction. The limitations of visual inspection and non-destructive techniques, which are commonly used to detect extreme defects on only the accessible portions of structures, led to the development of new technologies that assess the "global state" of a monitored structure at once. Global monitoring techniques have been used extensively for the recognition of damage in large civil infrastructure, such as bridges, based on modal analysis of the structural dynamic response. However, because of the complicated features of real-life structures under varying environmental conditions and statistical uncertainties in modal parameters, current diagnosis techniques have not yielded a robust and straightforward methodology to detect damage increments before they reach a critical stage. Statistical pattern recognition techniques are therefore incorporated into vibration-based damage detection methods to provide a better estimate of the probability of detecting damage in field applications, which is usually challenging given the high noise-to-signal ratio. Nevertheless, this part of SHM is still in its initial stage of development and, hence, further attempts are required to achieve a reliable damage detection methodology. A statistics-based damage detection strategy was proposed to detect and localize low levels of incremental damage in an experimental beam in which both the level of damage and the restraint conditions are adjustable (e.g. fixed-fixed and pinned-pinned). First, experiments were performed in controlled laboratory conditions to detect small levels of induced damage (e.g. a crack corresponding to 4% of the height of an equivalent rectangular section) simulating early-stage damage scenarios in real cases. 
Various levels of damage were simulated at two distinct locations along the beam. For each state of incremental damage, repeated measurements (~50 to 100) were performed to account for uncertainty and variability in the first vibration mode of the structure due to experimental errors and noise. A modal-based wavelet analysis technique was applied to detect abnormal changes in the mode shapes caused by damage. Noise reduction as well as aggregate characteristics were obtained by applying Principal Component Analysis (PCA) to the set of wavelet coefficients computed at regularly spaced nodes along the mode shape. By discarding the components that contribute least to the overall variance, the PCA scores corresponding to the first few principal components were found to be highly correlated with low levels of incremental damage. Classical hypothesis testing was performed on changes in the location parameters of the scores to conclude objectively and statistically, at a given significance level, on the presence of damage. When statistically significant damage was detected, a novel likelihood-based algorithm was used to determine the most likely location of the damage along the structure. Second, building on the likelihood approach, a series of tests was carried out in a climate-controlled room to investigate the relative contributions of damage and temperature effects on the dynamic properties of the beam and to estimate a correction factor for adjusting the extracted scores. It was found that temperature had a reversible effect on the distribution of the scores and that this effect was larger when the damage level was higher. The results obtained for the adjusted scores indicated that correcting for the reversible effects of temperature can improve the probability of detection and minimize false alarms. 
The experimental results indicate that the combined contribution of the algorithms used in this study was very effective in detecting small levels of incremental damage at multiple locations along the beam while minimizing the undesired effects of noise and temperature on the results. The results of this research demonstrate that the proposed approach is a promising tool for the SHM of actual structures; however, a significant amount of validation work remains before it can be implemented on real structures. Key-words: Damage Detection and Localization, Beam, Mode Shape, Wavelet, Principal Component Analysis, Likelihood Ratio, Temperature.
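The PCA-plus-hypothesis-testing pipeline can be sketched as follows. This is a generic, illustrative version: the feature rows stand in for wavelet coefficients sampled along the mode shape, and the thesis's likelihood-ratio localization step is not reproduced:

```python
import numpy as np
from scipy import stats

def pca_scores(X, n_components=1):
    """Project the rows of X onto the first principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def damage_test(baseline, monitored, alpha=0.05):
    """Two-sample t-test on first-PC scores of repeated measurements;
    rejecting H0 flags a statistically significant shift (damage)."""
    X = np.vstack([baseline, monitored])
    s = pca_scores(X, 1)[:, 0]
    b, m = s[:len(baseline)], s[len(baseline):]
    _, p = stats.ttest_ind(b, m, equal_var=False)
    return p < alpha, p
```

Because the damage-induced shift dominates the variance of the pooled data, it is captured by the leading component, and the test on the scores detects it at a chosen significance level.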

    Statistical properties and time-frequency analysis of temperature, salinity and turbidity measured by the MAREL Carnot station in the coastal waters of Boulogne-sur-Mer (France)

    Get PDF
    In marine sciences, many fields display high variability over a large range of spatial and temporal scales, from seconds to thousands of years. The longer time series now being recorded in this field, with ever-increasing sampling frequencies, are often nonlinear, nonstationary, multiscale, and noisy. Their analysis faces new challenges and thus requires adequate and specific methods. The objective of this paper is to bring to the environmental marine domain time series analysis methods already applied in econometrics, signal processing, health science, and other fields, to assess their advantages and disadvantages, and to compare classical techniques with more recent ones. Temperature, turbidity, and salinity are important quantities for ecosystem studies. The authors consider the fluctuations of sea level, salinity, turbidity, and temperature recorded by the MAREL Carnot system of Boulogne-sur-Mer (France), a moored buoy equipped with physico-chemical measuring devices working in continuous and autonomous conditions. In order to perform adequate statistical and spectral analyses, it is necessary to know the nature of the time series under consideration. For this purpose, the stationarity of the series and the occurrence of unit roots are addressed with Augmented Dickey–Fuller tests. For example, harmonic analysis is not relevant for temperature, turbidity, and salinity because of their nonstationarity; it applies only to the nearly stationary sea level datasets. To identify the dominant frequencies associated with the dynamics, the large amount of data provided by the sensors enables Fourier spectral analysis. The resulting power spectra show complex variability and reveal the influence of environmental factors such as tides. However, classical spectral analysis, namely the Blackman–Tukey method, requires not only linear and stationary data but also evenly spaced data. 
Interpolating the time series introduces numerous artifacts into the data. The Lomb–Scargle algorithm is adapted to unevenly spaced data and is used as an alternative; the limits of the method are also set out. It was found that beyond 50% missing measurements, few significant frequencies are detected, several seasonalities are no longer visible, and an entire range of high frequencies progressively disappears. Furthermore, two time-frequency decomposition methods, namely wavelets and the Hilbert–Huang Transform (HHT), are applied to the analysis of the entire dataset. Using the Continuous Wavelet Transform (CWT), some properties of the time series are determined. Then, the inertial wave and several low-frequency tidal waves are identified by applying Empirical Mode Decomposition (EMD). Finally, EMD-based Time Dependent Intrinsic Correlation (TDIC) analysis is applied to assess the correlation between two nonstationary time series.
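As a sketch of the unevenly-spaced analysis, SciPy's `lombscargle` (which works on angular frequencies) recovers a periodicity from irregular, gappy samples without interpolation; the sampling scheme and frequency grid below are illustrative, not the paper's:

```python
import numpy as np
from scipy.signal import lombscargle

def dominant_angular_freq(t, y, freqs):
    """Lomb-Scargle periodogram on irregularly sampled data; returns the
    angular frequency with the highest spectral power."""
    power = lombscargle(t, y - y.mean(), freqs)
    return freqs[np.argmax(power)]
```

Unlike the Blackman–Tukey estimate, this fits sinusoids at each trial frequency directly to the available samples, so gaps degrade the spectrum gracefully rather than requiring artifact-prone interpolation.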

    EEG-TCNet: An Accurate Temporal Convolutional Network for Embedded Motor-Imagery Brain-Machine Interfaces

    Full text link
    In recent years, deep learning (DL) has contributed significantly to the improvement of motor-imagery brain-machine interfaces (MI-BMIs) based on electroencephalography (EEG). While achieving high classification accuracy, DL models have also grown in size, requiring a vast amount of memory and computational resources. This poses a major challenge to an embedded BMI solution that guarantees user privacy, reduced latency, and low power consumption by processing the data locally. In this paper, we propose EEG-TCNet, a novel temporal convolutional network (TCN) that achieves outstanding accuracy while requiring few trainable parameters. Its low memory footprint and low computational complexity for inference make it suitable for embedded classification on resource-limited devices at the edge. Experimental results on the BCI Competition IV-2a dataset show that EEG-TCNet achieves 77.35% classification accuracy in 4-class MI. By finding the optimal network hyperparameters per subject, we further improve the accuracy to 83.84%. Finally, we demonstrate the versatility of EEG-TCNet on the Mother of All BCI Benchmarks (MOABB), a large-scale test benchmark containing 12 different EEG datasets with MI experiments. The results indicate that EEG-TCNet successfully generalizes beyond a single dataset, outperforming the current state-of-the-art (SoA) on MOABB by a meta-effect of 0.25. Comment: 8 pages, 6 figures, 5 tables.
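The TCN's key primitive is the causal dilated convolution, whose stacked dilations give a large receptive field with few parameters. A dependency-free numpy sketch of that primitive follows; it is a generic illustration of the mechanism, not EEG-TCNet's actual architecture or parameter values:

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """y[t] = sum_j w[j] * x[t - j*dilation], zero-padded on the left so
    the output at time t never depends on future samples (causality)."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])

def receptive_field(kernel_size, dilations):
    """Receptive field of a stack of causal dilated conv layers: each
    layer with dilation d adds (kernel_size - 1) * d past samples."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)
```

Doubling the dilation per layer (1, 2, 4, 8, …) makes the receptive field grow exponentially with depth while the parameter count grows only linearly, which is what keeps TCN-style models small enough for embedded inference.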