1,517 research outputs found

    High Resolution Multi-parametric Diagnostics and Therapy of Atrial Fibrillation: Chasing Arrhythmia Vulnerabilities in the Spatial Domain

    After a century of research, atrial fibrillation (AF) remains a challenging disease to study and exceptionally resilient to treatment. AF is becoming a massive burden on the health care system, with a growing population of susceptible elderly patients and expensive, unreliable treatment options. Pharmacological therapies continue to be disappointingly ineffective, or are hampered by side effects owing to the ubiquitous nature of ion channel targets throughout the body. Ablative therapy for atrial tachyarrhythmias is growing in acceptance. However, ablation procedures can be complex, leading to varying levels of recurrence, and carry a number of serious risks. The high recurrence rate may stem from the difficulty of accurately predicting where to draw the ablation lines so that they target the pathophysiology that initiates and maintains the arrhythmia, or from an inability to distinguish sub-populations of patients who would respond well to such treatments. Electrical cardioversion options exist, but there is no practical implanted deployment of this strategy. Under the current bioelectric therapy paradigm there is a trade-off between efficacy and the pain and risk of myocardial damage, all of which are positively correlated with shock strength. In contrast to ventricular fibrillation, pain is a significant concern for electrical defibrillation of AF because the patient is conscious while experiencing the arrhythmia. Limiting the risk of myocardial injury is key for both forms of fibrillation. In this project we aim to address the limitations of current electrotherapy by diverging from traditional single-shock protocols. We seek to further clarify the dynamics of arrhythmia drivers in space and to target therapy in both the temporal and spatial domains, ultimately culminating in the design of physiologically guided applied-energy protocols. To further characterize the organization of AF, we used transillumination optical mapping to evaluate the presence of three-dimensional electrical substrate variations within the transmural wall during acutely induced episodes of AF. The results of this study suggest that transmural propagation may play a role in AF maintenance mechanisms, with a demonstrated range of discordance between epicardial and endocardial dynamic propagation patterns. After confirming the presence of epi-endo dyssynchrony in multiple animal models, we further investigated the anatomical structure to look for regional trends in transmural fiber orientation that could help explain the spectrum of observed patterns. In parallel, we designed and optimized a multi-stage, multi-path defibrillation paradigm that can be tailored to an individual's AF frequency content in the spatial and temporal domains. These studies continue to drive down the defibrillation threshold of electrotherapies in pursuit of a pain-free AF defibrillation solution. Finally, we designed and characterized a novel platform of stretchable electronics that provides instrumented membranes across the epicardial surface, or implanted within the transmural wall, to deliver physiological feedback during electrotherapy beyond just the electrical state of the tissue. By combining a spatial analysis of the arrhythmia drivers, the energy delivered and the resulting damage, we hope to enhance the biophysical understanding of AF electrical cardioversion and design an ideal targeted energy delivery protocol to improve upon all limitations of current electrotherapy.
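
As a hypothetical illustration of the frequency-content tailoring mentioned above (a sketch, not the authors' protocol; the function name and the 3-15 Hz band limits are assumptions), the dominant atrial activation rate can be estimated from an electrogram's power spectrum:

```python
import numpy as np
from scipy.signal import welch

def dominant_frequency(egm, fs, band=(3.0, 15.0)):
    # Estimate the dominant activation frequency (Hz) of an atrial
    # electrogram from its Welch power spectrum, restricted to an
    # assumed physiologically plausible AF band.
    f, pxx = welch(egm, fs=fs, nperseg=min(len(egm), 4 * int(fs)))
    mask = (f >= band[0]) & (f <= band[1])
    return f[mask][np.argmax(pxx[mask])]
```

Mapping such an estimate across electrode sites would give the kind of spatial frequency map a stage- and path-specific protocol could be tuned against.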

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al., 2003, Vision Research 43, 149–164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) that Gestalt grouping is not used as a strategy in these tasks; and (ii) it lends further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.
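
As an illustrative sketch of the spoke-shift manipulation (the baseline eccentricity below is a hypothetical parameter, not given in the abstract, and the function name is ours):

```python
import numpy as np

def spoke_positions(n=8, eccentricity=4.0, shift=1.0, rng=None):
    # Centres of n rectangles on imaginary spokes radiating from
    # fixation; the second presentation shifts each centre along its
    # spoke by +/- `shift` degrees of visual angle. `eccentricity`
    # is an assumed baseline distance from the fixation point.
    rng = np.random.default_rng() if rng is None else rng
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    r = eccentricity + rng.choice([-shift, shift], size=n)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])
```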

    Simulation Studies of Digital Filters for the Phase-II Upgrade of the Liquid-Argon Calorimeters of the ATLAS Detector at the High-Luminosity LHC

    The Large Hadron Collider and the ATLAS detector are undergoing a comprehensive upgrade split into multiple phases. This effort also affects the liquid-argon calorimeters, whose main readout electronics will be replaced completely during the final phase. The electronics consist of an analog and a digital portion: the former amplifies the signal pulses and shapes them to facilitate sampling; the latter executes an energy reconstruction algorithm. Both portions must be improved during the upgrade so that the detector can accurately reconstruct interesting collision events and efficiently suppress uninteresting ones. In this thesis, simulation studies are presented that optimize both the analog and the digital readout of the liquid-argon calorimeters. The simulation is verified using calibration data recorded during Run 2 of the ATLAS detector. The influence of several parameters of the analog shaping stage on the energy resolution is analyzed, and the utility of an increased signal sampling rate of 80 MHz is investigated. Furthermore, a number of linear and non-linear energy reconstruction algorithms are reviewed, and the performance of a selection of them is compared. It is demonstrated that increasing the order of the Optimal Filter, the algorithm currently in use, improves the energy resolution by 2 to 3 % in all detector regions. The Wiener filter with forward correction, a non-linear algorithm, improves it by up to 10 % in some regions but degrades it in others. A link between this behavior and the probability of falsely detected calorimeter hits is shown, and possible solutions are discussed.
    Contents: 1 Introduction; 2 An Overview of High-Energy Particle Physics; 3 LHC, ATLAS, and the Liquid-Argon Calorimeters; 4 Upgrades to the ATLAS Liquid-Argon Calorimeters; 5 Noise Suppression With Digital Filters; 6 Simulation of the ATLAS Liquid-Argon Calorimeter Readout Electronics; 7 Results of the Readout Electronics Simulation Studies; 8 Conclusions and Outlook; Appendices
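
The Optimal Filter mentioned above is a standard linear technique for sampled calorimeter pulses. The following is a minimal sketch of the amplitude-only variant (omitting the timing constraint the thesis treats separately, and not tied to the AREUS implementation; all names are illustrative): the weights minimise the noise variance subject to an unbiasedness constraint.

```python
import numpy as np

def optimal_filter_coeffs(g, C):
    # Amplitude-only Optimal Filter: weights a minimising the noise
    # variance a^T C a subject to the unbiasedness constraint a.g = 1,
    # where g holds the normalised pulse-shape samples and C is the
    # noise autocovariance matrix of the digitised samples.
    w = np.linalg.solve(C, g)
    return w / (g @ w)

def reconstruct_amplitude(samples, a):
    # The energy estimate is a weighted sum of pedestal-subtracted samples.
    return a @ samples
```

Increasing the filter order, as studied in the thesis, corresponds to using more samples per pulse, i.e. a longer g and a larger C.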

    Compressive Sensing with Side Information: Analysis, Measurements Design and Applications

    Compressive sensing is a breakthrough technology in that it enables the acquisition and reconstruction of certain signals with a number of measurements much lower than that dictated by the Shannon-Nyquist paradigm. It has also been recognised in the last few years that compressive sensing systems can be improved by leveraging additional knowledge – so-called side information – that may be available about the signal of interest. The goal of this thesis is to investigate how to improve the acquisition and reconstruction process in compressive sensing systems in the presence of side information. In particular, by assuming that both the signal of interest and the side information obey a joint Gaussian mixture model (GMM), the thesis focuses on the analysis and design of linear measurements for two different scenarios: i) where one wishes to design a linear projection matrix to capture the signal of interest; and ii) where one wishes to design a linear projection matrix to capture the side information. In both cases, we derive sufficient and (occasionally) necessary conditions on the number of measurements needed for reliable reconstruction in the low-noise regime, and we derive linear measurement designs that are close to optimal. Numerical results are presented with synthetic data from both Gaussian and GMM distributions, and with real-world imaging data, confirming that the analysis is well aligned with practice. We also show that our measurement design scheme can lead to significant improvements in an application example: the reconstruction of high-resolution RGB images from grayscale images, using low-resolution, compressive, hyperspectral measurements as side information.
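
As a rough illustration of the estimation problem described above (a single-Gaussian special case rather than the full GMM treatment of the thesis; all function names are ours), conditioning on side information sharpens the Gaussian prior, and reconstruction from the linear measurements is then a closed-form MMSE estimate:

```python
import numpy as np

def condition_on_side_info(mu_x, mu_z, Sxx, Sxz, Szz, z):
    # For jointly Gaussian (x, z), observing the side information z
    # yields a Gaussian prior on x with updated mean and covariance.
    K = Sxz @ np.linalg.inv(Szz)
    return mu_x + K @ (z - mu_z), Sxx - K @ Sxz.T

def mmse_reconstruct(y, A, mu, Sigma, noise_var):
    # Linear MMSE estimate of x from compressive measurements
    # y = A x + w,  w ~ N(0, noise_var * I), under the prior N(mu, Sigma).
    G = Sigma @ A.T @ np.linalg.inv(A @ Sigma @ A.T + noise_var * np.eye(len(y)))
    return mu + G @ (y - A @ mu)
```

Under a GMM, one would run the second step per mixture component and combine the component estimates weighted by their posterior probabilities.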

    Compressed sensing in FET based terahertz imaging


    Face recognition by means of advanced contributions in machine learning

    Face recognition (FR) has been extensively studied, due to both its fundamental scientific challenges and its current and potential applications wherever human identification is needed. Among their most important benefits, FR systems are non-intrusive, require low-cost equipment and need no user agreement during acquisition. Nevertheless, despite the progress made in recent years and the different solutions proposed, FR performance is not yet satisfactory under more demanding conditions (different viewpoints, occlusions, illumination changes, harsh lighting, etc.). In particular, the effect of such non-controlled lighting conditions on face images leads to one of the strongest distortions in facial appearance. This dissertation addresses the problem of FR under less constrained illumination. To approach the problem, a new multi-session, multi-spectral face database was acquired in the visible, near-infrared (NIR) and thermal-infrared (TIR) spectra, under different lighting conditions. A theoretical analysis using information theory was first carried out to demonstrate the complementarity between the different spectral bands. The optimal exploitation of the information provided by the set of multispectral images was subsequently addressed using multimodal matching-score fusion techniques that efficiently synthesize complementary, meaningful information from the different spectra. Owing to peculiarities of thermal images, a dedicated face segmentation algorithm was required and developed. In the final proposed system, the Discrete Cosine Transform was used as a dimensionality reduction tool and a fractional distance was used for matching, so that the cost in processing time and memory was significantly reduced. Prior to this classification task, a selection of the relevant frequency bands is proposed in order to optimize the overall system, based on identifying and maximizing independence relations by means of discriminability criteria. The system was extensively evaluated on the multispectral face database acquired specifically for this purpose. In this regard, a new visualization procedure is suggested for combining different bands, establishing valid comparisons and giving statistical information about the significance of the results. This experimental framework facilitated improving robustness against illumination mismatch between training and testing. Additionally, the focusing problem in the thermal spectrum was also addressed, first for the general case of thermal images (or thermograms) and then for the specific case of facial thermograms, from both theoretical and practical points of view. To analyze the quality of facial thermograms degraded by blurring, an appropriate algorithm was successfully developed. Experimental results strongly support the proposed multispectral facial image fusion, which achieves very high performance under several conditions. These results represent a new advance in providing robust matching across changes in illumination, and may further inspire highly accurate FR approaches in practical scenarios.
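
The following is a hedged sketch of the matching pipeline the abstract describes (DCT for dimensionality reduction plus a fractional distance); k and p are illustrative values rather than the thesis's tuned choices, and the proposed frequency-band selection step is omitted:

```python
import numpy as np
from scipy.fft import dctn

def dct_features(face, k=16):
    # 2-D DCT of a grayscale face image; keeping the k x k block of
    # low-frequency coefficients gives a compact feature vector.
    return dctn(face, norm="ortho")[:k, :k].ravel()

def fractional_distance(a, b, p=0.5):
    # Minkowski-style distance with 0 < p < 1; fractional exponents
    # can be more discriminative than L2 in high-dimensional spaces.
    return float(np.sum(np.abs(a - b) ** p) ** (1.0 / p))
```

Matching then reduces to nearest-neighbour search over gallery feature vectors under this distance.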

    Automated analysis of non destructive evaluation data

    Interpretation of NDE data can be unreliable and difficult due to the complex interaction between the instrument, the object under inspection and noise, and due to uncertainties about the system or data. A common method of reducing the complexity and volume of data is to use thresholds. However, many of these methods are based on subjective assessments of the data or on assumptions about the system, which can be a source of error. Reducing data whilst retaining important information is difficult, and compromises normally have to be made. This thesis has developed methods that are based on sound mathematical and scientific principles and require the minimum of assumptions and subjective choices. Optimisation has been shown to reduce data acquired from a multilayer composite panel and hence reveal the ply layers. The problem can be ill-posed, yet it is possible to obtain a solution close to the optimum and to establish confidence in the result. Important factors are the size of the search space, the representation of the data, and any assumptions and choices made. Further work is required on the use of model-based optimisation to measure layer thicknesses from a metal laminate panel; a number of important factors that must be addressed have been identified. Two novel approaches to removing features from Transient Eddy-Current (TEC) data have been shown to improve the visibility of defects; the best approach to take depends on the available knowledge of the system. Principal Value Decomposition (PVD) has been shown to remove layer-interface reflections from ultrasonic data. However, PVD is not suited to all problems, such as the TEC data described, and is best suited to the later stages of data reduction. This thesis has demonstrated new methods, and a roadmap for solving multivariate problems; these methods may be applied to a wide range of data and problems.
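
The removal of layer-interface reflections can be pictured with an SVD-based analogue of the PVD approach (an assumption on our part; the thesis's exact decomposition may differ): across an ensemble of aligned ultrasonic A-scans, the leading singular components capture what all scans share, and subtracting them highlights scan-to-scan anomalies.

```python
import numpy as np

def suppress_common_reflections(scans, k=1):
    # scans: (n_scans, n_samples) array of aligned ultrasonic A-scans.
    # The leading k singular components capture energy shared by all
    # scans (e.g. layer-interface reflections); zeroing them leaves
    # the residual variation where defect indications stand out.
    U, s, Vt = np.linalg.svd(scans, full_matrices=False)
    s[:k] = 0.0
    return (U * s) @ Vt
```

This also illustrates why such a step belongs late in data reduction: it presupposes an aligned, denoised ensemble rather than raw instrument output.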