
    Modeling Covariate Effects in Group Independent Component Analysis with Applications to Functional Magnetic Resonance Imaging

    Independent component analysis (ICA) is a powerful computational tool for separating independent source signals from their linear mixtures. ICA has been widely applied in neuroimaging studies to identify and characterize underlying brain functional networks. An important goal in such studies is to assess the effects of subjects' clinical and demographic covariates on the spatial distributions of the functional networks. Currently, covariate effects are not incorporated in existing group ICA decomposition methods; hence, they can only be evaluated through ad-hoc approaches that may be inaccurate in many cases. In this paper, we propose a hierarchical covariate ICA model that provides a formal statistical framework for estimating and testing covariate effects in ICA decomposition. A maximum likelihood method is proposed for estimating the covariate ICA model, and we develop two expectation-maximization (EM) algorithms to obtain the maximum likelihood estimates. The first is an exact EM algorithm with analytically tractable E- and M-steps. Additionally, we propose a subspace-based approximate EM algorithm, which significantly reduces computational time while retaining high model-fitting accuracy. Furthermore, to test covariate effects on the functional networks, we develop a voxel-wise approximate inference procedure that eliminates the need for computationally expensive covariance estimation. The performance of the proposed methods is evaluated via simulation studies, and the application is illustrated through an fMRI study of Zen meditation. Comment: 36 pages, 5 figures.
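    A minimal toy sketch, in Python, of the kind of voxel-wise covariate-effect estimation described above; the simulated dimensions, covariates, and the plain least-squares test are illustrative assumptions, not the authors' hierarchical EM estimator.

```python
# Toy sketch (not the paper's model): subject-level spatial maps are modeled as a
# population map plus voxel-wise covariate effects, s_i(v) = s0(v) + x_i' beta(v) + e_i(v),
# and the covariate effects beta(v) are estimated and tested voxel by voxel.
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_vox = 40, 500
x = rng.normal(size=(n_subj, 2))                    # e.g. standardized age, clinical score
s0 = rng.normal(size=n_vox)                         # population-level spatial map
beta = np.zeros((2, n_vox))
beta[0, :50] = 0.8                                  # covariate 1 affects the first 50 voxels
maps = s0 + x @ beta + 0.5 * rng.normal(size=(n_subj, n_vox))

# voxel-wise least-squares estimates of covariate effects and z-like statistics
X = np.column_stack([np.ones(n_subj), x])           # intercept + 2 covariates
coef, *_ = np.linalg.lstsq(X, maps, rcond=None)     # shape (3, n_vox)
resid = maps - X @ coef
sigma2 = (resid ** 2).sum(axis=0) / (n_subj - X.shape[1])
se = np.sqrt(np.diag(np.linalg.inv(X.T @ X))[:, None] * sigma2)
z = coef / se
print("voxels with |z| > 3 for covariate 1:", int((np.abs(z[1]) > 3).sum()))
```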

    Reconstructing ERP amplitude effects after compensating for trial-to-trial latency jitter: A solution based on a novel application of residue iteration decomposition

    © 2016 The Authors. Stimulus-locked averaged event-related potentials (ERPs) are among the most frequently used signals in cognitive neuroscience. However, the late, cognitive or endogenous ERP components are often variable in latency from trial to trial in a component-specific way, compromising the stability assumption underlying the averaging scheme. Here we show that trial-to-trial latency variability of ERP components not only blurs the average ERP waveforms but may also attenuate existing condition effects in amplitude, or artificially induce them. Hitherto this problem has not been well investigated. Tackling it requires a method to measure and compensate for component-specific trial-to-trial latency variability. Here we first systematically analyze, by simulation, how single-trial latency variability affects condition effects. We then introduce a solution by applying residue iteration decomposition (RIDE) to experimental data. RIDE separates different clusters of ERP components according to their time-locking to stimulus onsets, response times, or neither, based on an algorithm of iterative subtraction. We suggest reconstructing ERPs by re-aligning the component clusters to their most probable single-trial latencies. We demonstrate that RIDE-reconstructed ERPs may recover amplitude effects that are diminished or exaggerated in conventional averages by trial-to-trial latency jitter. Hence, RIDE-corrected ERPs may be a valuable tool in situations where ERP effects are compromised by latency variability.
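    The jitter problem and the benefit of latency-corrected re-averaging can be illustrated with a small Python simulation; the waveform, jitter magnitude, and the cross-correlation latency estimator below are assumptions for illustration only, not the RIDE algorithm itself.

```python
# Illustrative sketch: a jittered late component is attenuated in the stimulus-locked
# average; re-aligning trials to crude single-trial latency estimates restores much of it.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0.0, 1.0, 0.002)                              # 1 s epoch at 500 Hz
component = np.exp(-((t - 0.5) ** 2) / (2 * 0.05 ** 2))     # "P3-like" waveform, peak = 1

trials = []
for _ in range(200):
    jitter = rng.normal(0, 0.08)                            # ~80 ms latency SD
    shifted = np.interp(t, t + jitter, component)           # component delayed by jitter
    trials.append(shifted + 0.3 * rng.normal(size=t.size))
trials = np.array(trials)

stim_locked = trials.mean(axis=0)                           # conventional average (blurred)

# crude single-trial latency estimate: cross-correlate each trial with the average
lags = [int(np.argmax(np.correlate(tr, stim_locked, mode="same"))) - t.size // 2
        for tr in trials]
# shift each trial back by its estimated lag (edge wrap-around ignored for simplicity)
realigned = np.array([np.roll(tr, -lag) for tr, lag in zip(trials, lags)]).mean(axis=0)

print("peak amplitude, stimulus-locked average :", round(float(stim_locked.max()), 2))
print("peak amplitude, latency-corrected average:", round(float(realigned.max()), 2))
```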

    Competing mechanisms of stress-assisted diffusivity and stretch-activated currents in cardiac electromechanics

    We numerically investigate the role of mechanical stress in modifying the conductivity properties of cardiac tissue and its impact on computational models of cardiac electromechanics. We follow a theoretical framework recently proposed in [Cherubini, Filippi, Gizzi, Ruiz-Baier, JTB 2017] in the context of general reaction-diffusion-mechanics systems, using multiphysics continuum mechanics and finite elasticity. In the present study, the adapted models are compared against preliminary experimental data from fluorescence optical mapping of pig right ventricle. These data contribute to the characterization of the observed inhomogeneity and anisotropy properties that result from mechanical deformation. Our novel approach simultaneously incorporates two mechanisms for mechano-electric feedback (MEF), stretch-activated currents (SAC) and stress-assisted diffusion (SAD), and we also identify their influence on the nonlinear spatiotemporal dynamics. It is found that (i) only specific combinations of the two MEF effects allow proper conduction velocity measurement; (ii) the expected heterogeneities and anisotropies are obtained via the novel stress-assisted diffusion mechanisms; and (iii) spiral wave meandering and drifting are strongly mediated by the applied mechanical loading. We provide an analysis of the intrinsic structure of the nonlinear coupling using computational tests conducted with a finite element method. In particular, we compare static and dynamic deformation regimes in the onset of cardiac arrhythmias and address other potential biomedical applications.
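    A schematic one-dimensional Python sketch of how the two MEF routes can enter a monodomain-style model; the FitzHugh-Nagumo kinetics, the prescribed static stretch field, and all coefficients are illustrative stand-ins for the paper's finite-elasticity formulation, not its actual equations.

```python
# 1D reaction-diffusion sketch with (a) a stretch-activated current
# I_SAC = g_sac*(lambda - 1)*(V - E_s) and (b) a stress/stretch-assisted diffusivity
# D(lambda) = D0*(1 + k_sad*(lambda - 1)); explicit finite differences.
import numpy as np

nx, dx, dt, T = 400, 0.05, 0.01, 60.0
lam = 1.0 + 0.1 * np.sin(np.linspace(0.0, np.pi, nx))    # prescribed static stretch field

def run(k_sad, g_sac, E_s=0.6):
    D = 0.05 * (1.0 + k_sad * (lam - 1.0))               # stress-assisted diffusivity
    v = np.zeros(nx)
    w = np.zeros(nx)
    v[:10] = 1.0                                          # stimulate the left edge
    for _ in range(int(T / dt)):
        flux = 0.5 * (D[1:] + D[:-1]) * (v[1:] - v[:-1]) / dx
        lap = np.zeros(nx)
        lap[1:-1] = (flux[1:] - flux[:-1]) / dx           # div(D grad v)
        i_sac = g_sac * (lam - 1.0) * (v - E_s)           # stretch-activated current
        v_new = v + dt * (lap + v * (v - 0.1) * (1.0 - v) - w - i_sac)
        w += dt * 0.01 * (0.5 * v - w)                    # slow recovery variable
        v = v_new
    idx = np.where(v > 0.5)[0]
    return int(idx.max()) if idx.size else 0              # rightmost activated node

print("wavefront node without stress-assisted diffusion:", run(k_sad=0.0, g_sac=0.05))
print("wavefront node with stress-assisted diffusion   :", run(k_sad=2.0, g_sac=0.05))
```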

    The low area probing detector as a countermeasure against invasive attacks

    Microprobing allows intercepting data from on-chip wires as well as injecting faults into data or control lines. This makes it a commonly used attack technique against security-related semiconductors, such as smart card controllers. We present the low area probing detector (LAPD) as an efficient approach to detecting microprobing. It compares delay differences between symmetric lines, such as bus lines, to detect the timing asymmetries introduced by the capacitive load of a probe. Compared with state-of-the-art microprobing countermeasures from industry, such as shields or bus encryption, the area overhead is minimal and no delays are introduced; in contrast to probing detection schemes from academia, such as the probe attempt detector, no analog circuitry is needed. We show Monte Carlo simulation results of mismatch variations as well as process, voltage, and temperature corners on a 65-nm technology and present a simple reliability optimization. Finally, we show that the detection of state-of-the-art commercial microprobes is possible even under extreme conditions and that the margin with respect to false positives is sufficient.
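    The delay-comparison principle lends itself to a toy Monte Carlo sketch in Python; the mismatch spread, probe-induced delay, and detection threshold are invented numbers, not figures from the paper.

```python
# Toy Monte Carlo: two nominally symmetric lines differ by a random mismatch delay;
# a probe's capacitive load adds extra delay on one line, and a probe is flagged when
# the measured delay difference exceeds a threshold.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
sigma_mismatch = 5e-12        # assumed std dev of delay mismatch between the lines (s)
probe_delay = 50e-12          # assumed extra delay from the probe's capacitive load (s)
threshold = 25e-12            # assumed detection threshold on |delay difference| (s)

diff_clean = rng.normal(0.0, sigma_mismatch, n)        # no probe attached
diff_probed = diff_clean + probe_delay                 # probe loading one line

false_positive_rate = np.mean(np.abs(diff_clean) > threshold)
detection_rate = np.mean(np.abs(diff_probed) > threshold)
print(f"false-positive rate: {false_positive_rate:.2e}, detection rate: {detection_rate:.3f}")
```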

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity arising from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and therefore time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the AI/ML approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the scope of future AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.

    On the predictivity of pore-scale simulations : estimating uncertainties with multilevel Monte Carlo

    A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can in fact be hindered by many factors, including sample heterogeneity, computational and imaging limitations, model inadequacy, and imperfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity, and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not fully reproducible with another “equivalent” sample and setup). This stochastic nature can arise from multi-scale heterogeneity, from the computational and experimental limitations on the sample sizes that can be considered, and from the complexity of the physical models. These approximations introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost of computing accurate statistics of effective parameters and other quantities of interest under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method also provides estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where sampling is performed by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A fully automatic workflow is developed in an open-source code [1] that includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, and extrapolation and post-processing techniques. The proposed method can be efficiently used in many porous media applications, for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, and robust estimation of the Representative Elementary Volume size for arbitrary physics.
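    The multilevel Monte Carlo idea itself can be sketched compactly in Python on a toy quantity of interest; the random field, level hierarchy, and per-level sample counts below are illustrative assumptions, not the pore-scale solver or its settings.

```python
# Minimal MLMC sketch: the quantity of interest is approximated on a hierarchy of grids,
# and the telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}] is estimated with many
# cheap coarse samples and progressively fewer expensive fine ones.
import numpy as np

rng = np.random.default_rng(3)

def qoi(omega, level):
    """Toy 'effective parameter': integral of a random field on 2**(level+2) grid points.
    The random input omega is shared between levels so the corrections stay coupled."""
    n = 2 ** (level + 2)
    x = np.linspace(0.0, 1.0, n)
    field = np.exp(np.sin(2.0 * np.pi * (x + omega)))
    return np.trapz(field, x)

n_samples = [4000, 1000, 250, 60]           # fewer samples on finer (costlier) levels
estimate = 0.0
for level, m in enumerate(n_samples):
    omegas = rng.uniform(0.0, 1.0, m)
    if level == 0:
        corrections = np.array([qoi(w, 0) for w in omegas])
    else:
        corrections = np.array([qoi(w, level) - qoi(w, level - 1) for w in omegas])
    estimate += corrections.mean()

print("MLMC estimate of the expected quantity of interest:", round(estimate, 4))
```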

    In-silico clinical trials for assessment of intracranial flow diverters

    In-silico trials refer to pre-clinical trials performed, entirely or in part, using individualised computer models that simulate some aspect of a drug, medical device, or clinical intervention. Such virtual trials reduce and optimise animal and clinical trials, and enable exploring a wider range of anatomies and physiologies. In the context of endovascular treatment of intracranial aneurysms, in-silico trials can be used to evaluate the effectiveness of endovascular devices over virtual populations of patients with different aneurysm morphologies and physiologies. However, this requires (i) a virtual endovascular treatment model to evaluate device performance based on a reliable performance indicator, (ii) models that represent intra- and inter-subject variations of a virtual population, and (iii) cost-effective and fully automatic workflows that enable a large number of simulations at a reasonable computational cost and time. Flow-diverting stents have been proven safe and effective in the treatment of large wide-necked intracranial aneurysms. The presented thesis aims to provide the ingredient models of a workflow for in-silico trials of flow-diverting stents and to enhance the general knowledge of how these ingredient models can be streamlined and accelerated to allow large-scale trials. This work contributed to the following aspects: 1) understanding the key ingredient models of a virtual treatment workflow for evaluating flow-diverter performance; 2) understanding the effect of input uncertainty and variability on the workflow outputs; 3) developing generative statistical models that describe variability in internal carotid artery flow waveforms, and investigating the effect of such uncertainties on the quantification of aneurysmal wall shear stress (a sketch of this ingredient follows below); 4) as part of a metric to evaluate the success of flow diversion, developing and validating a thrombosis model to assess FD-induced clot stability; and 5) understanding how a fully automatic aneurysm flow modelling workflow can be built and how computationally inexpensive models can reduce the computational costs.
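    One of the ingredients listed above, a generative statistical model of internal carotid artery flow-waveform variability, could in principle look like the following PCA-based Python sketch; the synthetic waveforms, the number of retained modes, and the Gaussian sampling of mode scores are assumptions made for illustration, not the thesis' actual model.

```python
# Sketch of a generative waveform model: PCA of a set of aligned inflow waveforms, then
# new virtual-patient waveforms are drawn by sampling the principal-component scores.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 100)                    # one cardiac cycle, normalized time
base = 1.0 + 0.5 * np.sin(2 * np.pi * t) * np.exp(-3 * t)   # stand-in mean waveform shape
data = np.array([base * (1 + 0.1 * rng.normal())
                 + 0.05 * rng.normal() * np.cos(2 * np.pi * t)
                 for _ in range(60)])             # 60 synthetic "measured" waveforms

mean = data.mean(axis=0)
U, s, Vt = np.linalg.svd(data - mean, full_matrices=False)
k = 3                                             # retain the first 3 shape modes
score_std = s[:k] / np.sqrt(len(data) - 1)        # std dev of each mode's score

new_scores = rng.normal(0.0, score_std)           # sample a virtual patient's scores
virtual_waveform = mean + new_scores @ Vt[:k]
print("virtual waveform peak value:", round(float(virtual_waveform.max()), 3))
```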

    Characterization and interpretation of cardiovascular and cardiorespiratory dynamics in cardiomyopathy patients

    The main objective of this thesis was to study the variability of the cardiac, respiratory and vascular systems through electrocardiographic (ECG), respiratory flow (FLW) and blood pressure (BP) signals in patients with idiopathic (IDC), dilated (DCM), or ischemic (ICM) cardiomyopathy. The aim of this work was to introduce new indices that could contribute to characterizing these diseases. With these new indices, we propose methods to classify cardiomyopathy patients (CMP) according to their cardiovascular risk or etiology. In addition, a new tool was proposed to reconstruct artifacts in biomedical signals. From the ECG, BP and FLW signals, different data series were extracted: beat-to-beat intervals (BBI - ECG), systolic and diastolic blood pressure (SBP and DBP - BP), and breathing duration (TT - FLW). Firstly, we propose a novel artifact reconstruction method applied to biomedical signals. The reconstruction process makes use of information from neighboring events while maintaining the dynamics of the original signal. The method is based on detecting the cycles and artifacts, identifying the number of cycles to reconstruct, and predicting the cycles used to replace the artifact segments. The reconstruction results showed that most of the artifacts were correctly detected, and physiological cycles were incorrectly detected as artifacts in fewer than 1% of the cases. The second part is related to cardiac death risk stratification of patients based on their left ventricular ejection fraction (LVEF), using Poincaré plot analysis, with patients classified as low (LVEF > 35%) or high (LVEF ≤ 35%) risk. The BBI, SBP, and TT series of 46 CMP patients were used, together with linear discriminant analysis and support vector machine (SVM) classification methods. When comparing low risk vs high risk, an accuracy of 98.12% was obtained. Our results suggest that a dysfunction in vagal activity could prevent the body from correctly maintaining circulatory homeostasis. Next, we studied cardiovascular couplings based on heart rate variability (HRV) and blood pressure variability (BPV) analyses in order to introduce new indices for noninvasive risk stratification in IDC patients. The ECG and BP signals of 91 IDC patients and 49 healthy subjects were used. The patients were stratified by their sudden cardiac death risk as high risk (IDCHR), when after two years the subject either died or suffered complications, or low risk (IDCLR) otherwise. Several indices were extracted from the BBI and SBP series and analyzed using segmented Poincaré plot analysis, high-resolution joint symbolic dynamics, and normalized short-time partial directed coherence methods. SVM models were built to classify these patients based on their sudden cardiac death risk. The SVM IDCLR vs IDCHR model achieved 98.9% accuracy with an area under the curve (AUC) of 0.96. Our results suggest that IDCHR patients have decreased HRV and increased BPV compared to both the IDCLR patients and the control subjects, indicating a decrease in their vagal activity and a compensating increase in sympathetic activity. Lastly, we analyzed the cardiorespiratory interactions associated with ICM and DCM disease. We propose an analysis based on vascular activity as the input and output of the baroreflex response. The aim was to analyze the suitability of cardiorespiratory and vascular interactions for the classification of ICM and DCM patients.
We studied 41 CMP patients and 39 healthy subjects. Three new sub-spaces were defined: 'up' for increasing values, 'down' for decreasing values, and 'no change' otherwise, and a three-dimensional representation was created for each sub-space and characterized statistically and morphologically. The resulting indices were used to classify the patients by their etiology through SVM models, achieving 92.7% accuracy for the ICM vs DCM comparison. The results reflected a more pronounced deterioration of autonomic regulation in DCM patients.
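    Two of the ingredients named in this abstract, Poincaré-plot descriptors and SVM classification, can be sketched as follows in Python; the synthetic BBI series, the SD1/SD2-only feature set, and the cross-validation setup are illustrative assumptions, not the thesis' full feature extraction or patient data.

```python
# Sketch: compute Poincare-plot descriptors SD1/SD2 from beat-to-beat interval (BBI)
# series and feed them to an SVM classifier; all data here are synthetic stand-ins.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

def poincare_sd(bbi):
    """SD1/SD2 of the Poincare plot of successive intervals (x_n, x_{n+1})."""
    x, y = bbi[:-1], bbi[1:]
    sd1 = np.std((y - x) / np.sqrt(2))            # dispersion across the identity line
    sd2 = np.std((y + x) / np.sqrt(2))            # dispersion along the identity line
    return sd1, sd2

def make_bbi(high_risk):
    """Synthetic BBI series; high-risk subjects are given lower beat-to-beat variability."""
    sd = 0.02 if high_risk else 0.05              # assumed variability levels (seconds)
    return 0.8 + 0.1 * np.cumsum(rng.normal(0, sd, 300)) + rng.normal(0, sd, 300)

labels = np.array([0] * 25 + [1] * 25)            # 0 = low risk, 1 = high risk
features = np.array([poincare_sd(make_bbi(bool(y))) for y in labels])

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
accuracy = cross_val_score(clf, features, labels, cv=5).mean()
print(f"cross-validated accuracy on synthetic data: {accuracy:.2f}")
```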