An accurate and time-efficient deep learning-based system for automated segmentation and reporting of cardiac magnetic resonance-detected ischemic scar
BACKGROUND AND OBJECTIVES: Myocardial infarction scar (MIS) assessment by cardiac magnetic resonance provides prognostic information and guides patients' clinical management. However, MIS segmentation is time-consuming and not performed routinely. This study presents a deep-learning-based computational workflow for the segmentation of left ventricular (LV) MIS, for the first time performed on state-of-the-art dark-blood late gadolinium enhancement (DB-LGE) images, and the computation of MIS transmurality and extent. METHODS: DB-LGE short-axis images of consecutive patients with myocardial infarction were acquired at 1.5T in two centres between Jan 1, 2019, and June 1, 2021. Two convolutional neural network (CNN) models based on the U-Net architecture were trained to sequentially segment the LV and MIS, by processing an incoming series of DB-LGE images. A 5-fold cross-validation was performed to assess the performance of the models. Model outputs were compared respectively with manual (LV endo- and epicardial border) and semi-automated (MIS, 4-Standard Deviation technique) ground truth to assess the accuracy of the segmentation. An automated post-processing and reporting tool was developed, computing MIS extent (expressed as relative infarcted mass) and transmurality. RESULTS: The dataset included 1355 DB-LGE short-axis images from 144 patients (MIS in 942 images). High performance (> 0.85) as measured by the Intersection over Union metric was obtained for both the LV and MIS segmentations on the training sets. The performance for both LV and MIS segmentations was 0.83 on the test sets. Compared to the 4-Standard Deviation segmentation technique, our system was five times quicker (<1 min versus 7 ± 3 min), and required minimal user interaction. CONCLUSIONS: Our solution successfully addresses different issues related to automatic MIS segmentation, including accuracy, time-effectiveness, and the automatic generation of a clinical report.
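The Intersection over Union metric used to assess segmentation accuracy in this study can be sketched as follows; the toy masks below are illustrative, not data from the paper.

```python
import numpy as np

def intersection_over_union(pred, truth):
    """IoU between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:  # both masks empty: define IoU as 1
        return 1.0
    return np.logical_and(pred, truth).sum() / union

# toy 4x4 masks standing in for a ground-truth and a predicted scar region
a = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
b = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(intersection_over_union(a, b))  # 3 overlapping / 4 total = 0.75
```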
Surfing on fitness landscapes: A boost on optimization by fourier surrogate modeling
Surfing in rough waters is not always as fun as wave riding the "big one". Similarly, in optimization problems, fitness landscapes with a huge number of local optima make the search for the global optimum a hard and generally annoying game. Computational Intelligence optimization metaheuristics use a set of individuals that "surf" across the fitness landscape, sharing and exploiting pieces of information about local fitness values in a joint effort to find out the global optimum. In this context, we designed surF, a novel surrogate modeling technique that leverages the discrete Fourier transform to generate a smoother, and possibly easier to explore, fitness landscape. The rationale behind this idea is that filtering out the high frequencies of the fitness function and keeping only its partial information (i.e., the low frequencies) can actually be beneficial in the optimization process. We validate this idea by combining surF with a settings-free variant of Particle Swarm Optimization (PSO) based on Fuzzy Logic, called Fuzzy Self-Tuning PSO. Specifically, we introduce a new algorithm, named F3ST-PSO, which performs a preliminary exploration on the surrogate model followed by a second optimization using the actual fitness function. We show that F3ST-PSO can lead to improved performances, notably using the same budget of fitness evaluations.
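The low-pass filtering idea behind surF can be sketched in one dimension: sample the fitness on a grid, keep only the lowest Fourier frequencies, and optimize on the smoothed reconstruction. This is a minimal illustration, not the actual surF implementation; the ripple frequency and cut-off below are arbitrary choices.

```python
import numpy as np

def fourier_surrogate(samples, keep):
    """Low-pass surrogate of a fitness function sampled on a uniform grid:
    keep only the `keep` lowest-frequency components and discard the rest."""
    coeffs = np.fft.rfft(samples)
    coeffs[keep:] = 0.0  # filter out the high frequencies
    return np.fft.irfft(coeffs, n=len(samples))

# rugged 1-D fitness: a smooth bowl plus high-frequency ripples
x = np.linspace(-5, 5, 256)
fitness = x**2 + 2.0 * np.sin(25 * x)
smooth = fourier_surrogate(fitness, keep=8)

# the surrogate's minimum lies near the true basin at x = 0,
# free of the local optima induced by the ripples
print(x[np.argmin(smooth)])
```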
Method for determining an optimal inversion time for a magnetic resonance “Inversion Recovery” sequence usable for the acquisition of late images after administration of a paramagnetic contrast agent
The inversion time (TI) is the measurement of the time span between radiofrequency and static (sampling) pulses, needed to obtain the magnetic resonance (MR) signal and detect, via imaging, the relaxation of the tissue under examination. To date, the TI is estimated by the MR operator, based on personal experience or often by trial and error, or by subjecting the patient to several inversion times in order to compare the acquired images, whose quality is in any case not guaranteed. The time and cost of administering the test, and the quality and readability of the images, can now be improved through a machine learning model capable of calibrating itself on the patient's relevant data and on the specific parameters of the examination to be performed. The model was trained on a multiplicity of sample evaluations from myocardial examinations and is potentially applicable to other MR examinations.
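The core idea of this patent, predicting a good inversion time from patient- and exam-specific data rather than by trial and error, can be sketched as a simple regression fitted to expert-chosen TI values. Every feature, number, and the linear model below are hypothetical illustrations, not the patented method.

```python
import numpy as np

# hypothetical training data: [heart rate (bpm), minutes since contrast injection]
# paired with the inversion time (ms) chosen by an expert operator
X = np.array([[60, 10], [75, 12], [80, 15], [65, 20], [90, 8], [70, 18]], float)
y = np.array([300.0, 290.0, 280.0, 320.0, 270.0, 310.0])

# least-squares linear fit with an intercept term
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_ti(heart_rate, minutes):
    """Predicted inversion time (ms) for a new patient/exam configuration."""
    return float(np.array([heart_rate, minutes, 1.0]) @ w)

print(round(predict_ti(72, 14), 1))  # predicted TI in ms
```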
Simplifying fitness landscapes using dilation functions evolved with genetic programming
Several optimization problems have features that hinder the capabilities of searching heuristics. To cope with this issue, different methods have been proposed to manipulate search spaces and improve the optimization process. This paper focuses on Dilation Functions (DFs), which are one of the most promising techniques to manipulate the fitness landscape, by "expanding" or "compressing" specific regions. The definition of appropriate DFs is problem dependent and requires a priori knowledge of the optimization problem. Therefore, it is essential to introduce an automatic and efficient strategy to identify optimal DFs. With this aim, we propose a novel method based on Genetic Programming, named GP4DFs, which is capable of evolving effective DFs. GP4DFs identifies optimal dilations, where a specific DF is applied to each dimension of the search space. Moreover, thanks to a knowledge-driven initialization strategy, GP4DFs converges to better solutions with a reduced number of fitness evaluations, compared to the state-of-the-art approaches. The performance of GP4DFs is assessed on a set of 43 benchmark functions mimicking several features of real-world optimization problems. The obtained results indicate the suitability of the generated DFs.
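The per-dimension dilation described above can be sketched as follows: each coordinate is passed through its own DF before the fitness is evaluated, reshaping the landscape the heuristic actually explores. The power-law DF and the shifted sphere function below are illustrative choices, not the DFs evolved by GP4DFs.

```python
import numpy as np

def dilation(x, k):
    """A simple power-law dilation on [0, 1]: for k > 1 the mapping
    compresses the region near 0 and expands the region near 1."""
    return x ** k

def dilated_fitness(fitness, dfs):
    """Apply one dilation function per dimension before evaluating fitness."""
    def wrapped(point):
        return fitness(np.array([df(p) for df, p in zip(dfs, point)]))
    return wrapped

# sphere function on [0, 1]^2 with its optimum at (0.81, 0.25)
sphere = lambda z: float(np.sum((z - np.array([0.81, 0.25])) ** 2))

# dilating each axis with x**2 moves the optimum to (0.9, 0.5) in the
# dilated space, so the heuristic searches a reshaped landscape
g = dilated_fitness(sphere, [lambda x: dilation(x, 2), lambda x: dilation(x, 2)])
print(g([0.9, 0.5]))  # 0.0 at the dilated optimum
```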
On the automatic calibration of fully analogical spiking neuromorphic chips
Nowadays, understanding the topology of biological neural networks and sampling their activity is possible thanks to various laboratory protocols that provide a large amount of experimental data, thus paving the way to accurate modeling and simulation. Neuromorphic systems were developed to simulate the dynamics of biological neural networks by means of electronic circuits, offering an efficient alternative to classic simulations based on systems of differential equations, in terms of both energy consumption and overall computational effort. Spikey is a configurable neuromorphic chip based on the Leaky Integrate-And-Fire model, which gives the user the possibility to model an arbitrary neural topology and simulate the temporal evolution of membrane potentials. To accurately reproduce the behavior of a specific biological network, a detailed parameterization of all neurons in the neuromorphic chip is necessary. Determining such parameters is a hard, error-prone, and generally time-consuming task. In this work, we propose a novel methodology for the automatic calibration of neuromorphic chips that exploits a given neural activity as target. Our results show that, in the case of small networks with a low complexity, the method can estimate a vector of parameters capable of reproducing the target activity. Conversely, in the case of more complex networks, the simulations with Spikey can be highly affected by noise, which causes small variations in the simulation outcomes even when identical networks are simulated, hindering the convergence to optimal parameterizations.
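The calibration-against-target-activity idea can be sketched in software with a single Leaky Integrate-And-Fire neuron: simulate a candidate parameterization, compare its activity to the target, and keep the best candidate. This toy random search on two parameters is only an analogue of the methodology; calibrating the actual Spikey hardware involves many more parameters and noise sources.

```python
import numpy as np

def lif_spike_count(tau, threshold, i_ext=1.5, dt=0.1, steps=2000):
    """Euler simulation of a leaky integrate-and-fire neuron;
    returns the number of spikes emitted (reset-to-zero on threshold)."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += dt * (-v / tau + i_ext)
        if v >= threshold:
            v, spikes = 0.0, spikes + 1
    return spikes

# target activity we want the calibrated model to reproduce
target = lif_spike_count(tau=10.0, threshold=1.0)

# random-search calibration of (tau, threshold) against the target activity
rng = np.random.default_rng(0)
best, best_err = None, float("inf")
for _ in range(200):
    tau, thr = rng.uniform(1, 20), rng.uniform(0.5, 2.0)
    err = abs(lif_spike_count(tau, thr) - target)
    if err < best_err:
        best, best_err = (tau, thr), err

print(best, best_err)  # best (tau, threshold) found and its spike-count error
```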
Shaping and Dilating the Fitness Landscape for Parameter Estimation in Stochastic Biochemical Models
The parameter estimation (PE) of biochemical reactions is one of the most challenging tasks in systems biology given the pivotal role of these kinetic constants in driving the behavior of biochemical systems. PE is a non-convex, multi-modal, and non-separable optimization problem with an unknown fitness landscape; moreover, the quantities of the biochemical species appearing in the system can be low, making biological noise a non-negligible phenomenon and mandating the use of stochastic simulation. Finally, the values of the kinetic parameters typically follow a log-uniform distribution; thus, the optimal solutions are situated in the lowest orders of magnitude of the search space. In this work, we further elaborate on a novel approach to address the PE problem based on a combination of adaptive swarm intelligence and dilation functions (DFs). DFs require prior knowledge of the characteristics of the fitness landscape; therefore, we leverage an alternative solution to evolve optimal DFs. On top of this approach, we introduce surrogate Fourier modeling to simplify the PE, by producing a smoother version of the fitness landscape that excludes the high frequency components of the fitness function. Our results show that the PE exploiting evolved DFs has a performance comparable with that of the PE run with a custom DF. Moreover, surrogate Fourier modeling allows for improving the convergence speed. Finally, we discuss some open problems related to the scalability of our methodology
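The log-uniform distribution of kinetic parameters mentioned above can be made concrete: sampling uniformly in log space gives every order of magnitude equal weight, so candidate solutions also cover the lowest magnitudes where the optima tend to lie. The bounds below are arbitrary illustrative choices.

```python
import numpy as np

def log_uniform(low, high, size, rng):
    """Sample values log-uniformly between `low` and `high`, so that
    every order of magnitude is equally represented."""
    return 10 ** rng.uniform(np.log10(low), np.log10(high), size)

rng = np.random.default_rng(42)

# candidate kinetic constants spanning [1e-6, 1e2]
params = log_uniform(1e-6, 1e2, size=10_000, rng=rng)

# roughly half of the samples fall below 1e-2, the geometric midpoint;
# a plain uniform sampler would place almost none there
print(np.mean(params < 1e-2))
```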
Estimation of Fuzzy Models from Mixed Data Sets with pyFUME
pyFUME is a Python package for the automatic estimation of fuzzy inference systems. Fuzzy models are considered among the most interpretable, understandable, and transparent methods that are currently available, making them ideal for the development of Interpretable AI systems. Such models are suitable for the creation of decision support systems in extremely sensitive domains where the right to an explanation is particularly important, like medicine and healthcare. pyFUME can automatically estimate the antecedent sets and the consequent parameters of a Takagi-Sugeno fuzzy model directly from data, and deliver an executable fuzzy model implemented with the Simpful Python library. The main limitation of pyFUME was that it was not well-equipped to deal with purely categorical, non-ordinal variables, since it used distance metrics suitable for continuous variables to cluster the data for determining the fuzzy model's structure. In this paper, we introduce a new version of pyFUME that supports mixed (i.e., continuous and categorical) data sets, relying on a novel version of fuzzy C-prototypes clustering. Our results show that our new approach is effective, leading to better fitting with respect to models based only on continuous features. We also present alternative plotting methods tailored for categorical variables, which improve the overall interpretability of the estimated discrete fuzzy sets.
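The clustering problem for mixed data described above hinges on a distance that treats continuous and categorical features differently. A Gower-style measure is one common way to do this; the sketch below is a generic illustration of the idea, not pyFUME's actual fuzzy C-prototypes implementation, and the records are hypothetical.

```python
def mixed_distance(a, b, categorical):
    """Gower-style distance for mixed records: absolute difference on
    (pre-scaled) continuous features, simple mismatch (0/1) on
    categorical ones, averaged over all features."""
    total = 0.0
    for x, y, is_cat in zip(a, b, categorical):
        total += (x != y) if is_cat else abs(x - y)
    return total / len(a)

# hypothetical records: [age scaled to [0,1], blood pressure scaled to [0,1], smoker]
p = (0.40, 0.70, "yes")
q = (0.50, 0.70, "no")
print(mixed_distance(p, q, categorical=(False, False, True)))
```

A plain Euclidean metric would need an arbitrary numeric encoding of "smoker", which imposes a spurious ordering; the mismatch term avoids that.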
If You Can't Beat It, Squash It: Simplify Global Optimization by Evolving Dilation Functions
Optimization problems represent a class of pervasive and complex tasks in Computer Science, aimed at identifying the global optimum of a given objective function. Optimization problems are typically noisy, multi-modal, non-convex, non-separable, and often non-differentiable. Because of these features, they mandate the use of sophisticated population-based meta-heuristics to effectively explore the search space. Additionally, computational techniques based on the manipulation of the optimization landscape, such as Dilation Functions (DFs), can be effectively exploited to either “compress” or “dilate” some target regions of the search space, in order to improve the exploration and exploitation capabilities of any meta-heuristic. The main limitation of DFs is that they must be tailored to the specific optimization problem under investigation. In this work, we propose a solution to this issue, based on the idea of evolving the DFs. Specifically, we introduce a two-layered evolutionary framework, which combines Evolutionary Computation and Swarm Intelligence to solve the meta-problem of optimizing both the structure and the parameters of DFs. We evolved optimal DFs on a variety of benchmark problems, showing that this approach yields considerably simpler versions of the original optimization problems.
The domination game: dilating bubbles to fill up Pareto fronts
Multi-objective optimization algorithms might struggle in finding optimal dominating solutions, especially in real-case scenarios where problems are generally characterized by non-separability, non-differentiability, and multi-modality issues. An effective strategy that has already been shown to improve the outcome of optimization algorithms consists in manipulating the search space, in order to explore its most promising areas. In this work, starting from a Pareto front identified by an optimization strategy, we exploit Local Bubble Dilation Functions (LBDFs) to manipulate a locally bounded region of the search space containing non-dominated solutions. We tested our approach on the benchmark functions included in the DTLZ and WFG suites, showing that the Pareto front obtained after the application of LBDFs is, in most cases, characterized by an increased hypervolume value. Our results confirm that LBDFs are an effective means to identify additional non-dominated solutions that can improve the quality of the Pareto front.
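The hypervolume indicator used above to compare Pareto fronts has a simple closed form in two objectives: sweep the non-dominated points in order of the first objective and sum the rectangles they dominate up to a reference point. The front and reference point below are toy values for illustration.

```python
def hypervolume_2d(front, ref):
    """Hypervolume dominated by a 2-objective (minimization) Pareto front
    with respect to the reference point `ref`."""
    pts = sorted(front)              # ascending first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)  # rectangle added by this point
        prev_f2 = f2
    return hv

# toy non-dominated front and reference point
front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(5.0, 5.0)))  # prints 12.0
```

A larger hypervolume means the front dominates more of the objective space, which is why it serves as a quality measure when comparing fronts before and after applying LBDFs.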