
    ReDO: Cross-Layer Multi-Objective Design-Exploration Framework for Efficient Soft Error Resilient Systems

    Designing soft-error-resilient systems is a complex engineering task that nowadays follows a cross-layer approach: it requires careful planning of different fault-tolerance mechanisms at different system layers, from the technology level up to the software domain. While these design decisions improve the reliability of the system, they usually have a detrimental effect on its size, power consumption, performance, and cost. Design space exploration for cross-layer reliability is therefore a multi-objective search problem in which reliability must be traded off against other design dimensions. This paper proposes a cross-layer multi-objective design space exploration algorithm developed to help designers build soft-error-resilient electronic systems. The algorithm exploits a system-level Bayesian reliability estimation model to analyze the effect of different cross-layer combinations of protection mechanisms on the reliability of the full system. A new heuristic based on extremal optimization theory is used to explore the design space efficiently. An extended set of simulations shows the capability of this framework when applied both to benchmark applications and to realistic systems, producing optimized systems that outperform those obtained by applying state-of-the-art cross-layer reliability techniques.
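    As an illustration of the search heuristic, below is a minimal sketch of a τ-extremal-optimization step over cross-layer protection choices. The layers, protection options, and the toy reliability/cost scores are invented for the example; the paper's actual Bayesian reliability model and design space are not reproduced here.

```python
import random

# Hypothetical design space: each layer picks one protection mechanism.
LAYERS = ["technology", "architecture", "software"]
OPTIONS = {
    "technology": ["none", "hardened_cells"],
    "architecture": ["none", "ecc", "tmr"],
    "software": ["none", "checkpointing", "instruction_duplication"],
}
# Illustrative per-option (reliability_gain, cost) scores, not real data.
SCORES = {
    "none": (0.0, 0.0), "hardened_cells": (0.3, 0.2),
    "ecc": (0.4, 0.3), "tmr": (0.7, 0.8),
    "checkpointing": (0.3, 0.25), "instruction_duplication": (0.5, 0.5),
}

def layer_fitness(design, layer):
    """Per-component fitness: reliability gain minus cost of that layer's choice."""
    gain, cost = SCORES[design[layer]]
    return gain - cost

def objectives(design):
    """Aggregate (reliability_gain, -cost); a real model would be the Bayesian estimator."""
    gain = sum(SCORES[design[l]][0] for l in LAYERS)
    cost = sum(SCORES[design[l]][1] for l in LAYERS)
    return gain, -cost

def extremal_optimization(steps=1000, tau=1.5, seed=0):
    rng = random.Random(seed)
    design = {l: rng.choice(OPTIONS[l]) for l in LAYERS}
    best, best_obj = dict(design), objectives(design)
    for _ in range(steps):
        # Rank layers from worst to best local fitness.
        ranked = sorted(LAYERS, key=lambda l: layer_fitness(design, l))
        # tau-EO: pick rank k with probability proportional to k^(-tau).
        weights = [(k + 1) ** -tau for k in range(len(ranked))]
        layer = rng.choices(ranked, weights=weights)[0]
        # Unconditionally replace the selected component (no acceptance test).
        design[layer] = rng.choice(OPTIONS[layer])
        # Lexicographic comparison is a simple stand-in for a Pareto archive.
        if objectives(design) > best_obj:
            best, best_obj = dict(design), objectives(design)
    return best, best_obj

print(extremal_optimization())
```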

    Automatic Classification of Epilepsy Lesions

    Epilepsy is a common and diverse set of chronic neurological disorders characterized by seizures. Epileptic seizures result from abnormal, excessive, or hypersynchronous neuronal activity in the brain. Seizure types are organized first according to whether the source of the seizure within the brain is localized or distributed. In this work, our objective is to validate the use of MRI (Magnetic Resonance Imaging) for localizing the seizure focus for improved surgical planning. We apply computer vision and machine learning techniques to the problem of epilepsy lesion classification. First, datasets of digitized histology images from the brain cortexes of different patients are obtained by medical imaging scientists and provided to us; some of the images are pre-labeled as normal or lesion. We evaluate a variety of image feature types that are popular in the computer vision community to find those appropriate for epilepsy lesion classification. Finally, we test Boosting, Support Vector Machines (SVM), and Nearest Neighbor machine learning methods to train classifiers that separate the images into normal and lesion ones. We obtain at least 90.0% accuracy in most of the classification experiments, and the best accuracy rate we achieve is 93.3%. We also automatically compute neuron densities. As far as we know, our work on histology image classification and automatic quantification of focal cortical dysplasia in the correlation study of MRI and epilepsy histopathology is the first of its kind. Our method could potentially provide useful information for surgical planning.
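    As a rough illustration of the classification step, the following sketch extracts features from image patches and cross-validates an SVM on them. The HOG descriptor, SVM parameters, 5-fold protocol, and the random stand-in patches are illustrative assumptions, not the paper's exact features or evaluation setup.

```python
import numpy as np
from skimage.feature import hog
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_features(images):
    """Compute one HOG descriptor per grayscale image patch."""
    return np.array([hog(img, pixels_per_cell=(16, 16)) for img in images])

def evaluate(images, labels):
    """labels: 0 = normal, 1 = lesion. Returns mean cross-validated accuracy."""
    X, y = extract_features(images), np.asarray(labels)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    return cross_val_score(clf, X, y, cv=5).mean()

# Random stand-in "patches" just to show the call pattern (accuracy ~ chance):
rng = np.random.default_rng(0)
imgs = [rng.random((64, 64)) for _ in range(40)]
labs = [i % 2 for i in range(40)]
print(f"cross-validated accuracy: {evaluate(imgs, labs):.3f}")
```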

    Advanced Signal Processing Techniques Applied to Power Systems Control and Analysis

    The work published in this book concerns the application of advanced signal processing in smart grids, including power quality, data management, stability, and economic management in the presence of renewable energy sources, energy storage systems, and electric vehicles. The distinct architecture of smart grids has prompted investigations into the use of advanced algorithms combined with signal processing methods to provide optimal results. The presented applications focus on data management with cloud computing, power quality assessment, photovoltaic power plant control, and electric vehicle charging stations, all supported by modern AI-based optimization methods.

    Improvement of SPIDER tomographic diagnostic

    Neutral Beam Injection (NBI) consists in firing a high-energy beam of neutral particles into the fusion plasma of a tokamak to heat the fuel and trigger fusion reactions. One such tokamak is ITER. For successful ITER operation, the beam must satisfy strict specifications, for example in terms of ion current throughput (40 A at 1 MV), homogeneity (more than 90%), and divergence (less than 7 mrad). To reach these requirements, two experiments are hosted at Consorzio RFX in Padua: SPIDER and MITICA. SPIDER is the full-size prototype of the ITER NBI negative ion source. This work focuses on one of its diagnostics, the visible tomography of the beam, which will in the future also be installed on MITICA, the full-scale prototype of the ITER NBI. Visible tomography is a non-invasive diagnostic that uses two-dimensional visible-light cameras to collect the light emitted by interactions between the ion beam and the background gas. The camera signal is an integrated measure of the two-dimensional beam emissivity along different Lines Of Sight (LOSs), from which the emissivity can be reconstructed using a suitable inversion algorithm. The reconstructed emissivity can then be used to characterise the beam divergence and homogeneity and, through suitable spectroscopic models, to estimate the beam current density. This work aims at improving the current SPIDER tomography by introducing two-dimensional LOSs in the reconstruction algorithm, to better account for the geometry of the diagnostic, and by further developing a model for the beam emission (introducing new reactions and accounting for the effect of secondary electrons on the beam light), to better interpret the reconstructed emissivity in terms of negative ion current density. Testing on experimental data shows good agreement between the previous reconstruction results and the improved 2D-LOS ones. Further testing on full-beam simulations shows that the algorithm's performance is not affected by the beam features (e.g. beamlet width and uniformity), and that the reconstruction error in ideal conditions (no background light and no signal noise) remains around 10% at 5-beamlet resolution and is lower at coarser resolutions (i.e. 10-, 20-, 40-, or 80-beamlet resolution), demonstrating that these upgrades can be applied successfully, with a sufficient level of detail, using two-dimensional LOSs. Background light is shown to have the largest impact on the reconstruction accuracy, degrading it by up to 30% in the worst case of 5% background intensity (relative to the nominal beamlet luminosity). The beam emissivity as a function of the beam energy is assessed, showing a reduction as the beam energy increases. The analysis also demonstrates that single stripping dominates the beam emission at all energies, with a share that grows as the beam energy increases. At SPIDER's nominal acceleration of 100 keV, single stripping processes account for 87.7% of the total emissivity, followed by excitation (5.7%) and secondary electrons (4.1%), the latter also representing a possible cause of background light.
    The cameras are calibrated using a calibrated source and an Hα filter, which together provide an equivalent Hα source of known emissivity. Placing the camera in front of the equivalent source links the signal collected by the fully illuminated pixels to the emissivity, yielding a calibration constant that converts the integrated camera counts into radiant power integrals. Using the results of the beam model, the reconstructed emissivity finally yields, for the first time, the 2D pattern of the beam current density, which matches, to the same order of magnitude, the direct electrical measurements of the STRIKE calorimeter and of the Beam Current Monitor.
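    For intuition, the inversion underlying such a diagnostic can be sketched as a regularized least-squares problem: each LOS signal is a weighted sum of cell emissivities, s = G·e, where G holds the path lengths of each line of sight through each reconstruction cell, and e is recovered by inverting that linear system. The grid size, the random stand-in geometry matrix, and the Tikhonov regularization below are illustrative assumptions, not the actual SPIDER algorithm.

```python
import numpy as np

def geometry_matrix(n_los, n_cells, rng):
    """Stand-in for the real LOS geometry: sparse random path lengths."""
    G = rng.random((n_los, n_cells))
    G[G < 0.7] = 0.0  # most cells are not crossed by a given LOS
    return G

def reconstruct(G, signals, alpha=1e-2):
    """Tikhonov-regularized least squares: min ||G e - s||^2 + alpha ||e||^2."""
    n = G.shape[1]
    A = G.T @ G + alpha * np.eye(n)
    return np.linalg.solve(A, G.T @ signals)

rng = np.random.default_rng(0)
n_los, n_cells = 200, 100
G = geometry_matrix(n_los, n_cells, rng)
true_emissivity = rng.random(n_cells)
# Synthetic "camera" signals: line integrals plus a little noise.
signals = G @ true_emissivity + 0.01 * rng.standard_normal(n_los)
e_hat = reconstruct(G, signals)
rel_err = np.linalg.norm(e_hat - true_emissivity) / np.linalg.norm(true_emissivity)
print(f"relative reconstruction error: {rel_err:.3f}")
```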

    A Multigrid Method for the Efficient Numerical Solution of Optimization Problems Constrained by Partial Differential Equations

    We study the minimization of a quadratic functional subject to constraints given by a linear or semilinear elliptic partial differential equation with distributed control. Further, pointwise inequality constraints on the control are accounted for. In the linear-quadratic case, the discretized optimality conditions yield a large, sparse, and indefinite system with saddle point structure. One main contribution of this thesis is a coupled multigrid solver that avoids full constraint elimination. To this end, we define a smoothing iteration incorporating elements from constraint preconditioning. A local mode analysis shows that for discrete optimality systems, we can expect smoothing rates close to those obtained for the underlying constraint PDE. Our numerical experiments include problems with constraints for which standard pointwise smoothing is known to fail for the underlying PDE; in particular, we consider anisotropic diffusion and convection-diffusion problems. The framework of our method allows the inclusion of line smoothers or ILU factorizations, which are suitable for such problems. In all cases, numerical experiments show that the convergence rates do not depend on the mesh size of the finest level and that discrete optimality systems can be solved at a small multiple of the computational cost required to solve the underlying constraint PDE. Employing the full multigrid approach, the computational cost is proportional to the number of unknowns on the finest grid level. We discuss the role of the regularization parameter in the cost functional and show that the convergence rates are robust with respect to both the fine-grid mesh size and the regularization parameter under a mild restriction on the next-to-coarsest mesh size. Incorporating spectral filtering for the reduced Hessian in the control smoothing step allows us to weaken the mesh size restriction. As a result, problems with a near-vanishing regularization parameter can be treated efficiently with a negligible amount of additional computational work. For fine discretizations, robust convergence is obtained with rates that are independent of the regularization parameter, the coarsest mesh size, and the number of levels. In order to treat linear-quadratic problems with pointwise inequality constraints on the control, the multigrid approach is modified to solve the subproblems generated by a primal-dual active set strategy (PDAS). Numerical experiments demonstrate the high efficiency of this approach due to the mesh-independent convergence of both the outer PDAS method and the inner multigrid solver. The PDAS-multigrid method is then incorporated into a sequential quadratic programming (SQP) framework. Inexact Newton techniques further enhance the computational efficiency. Globalization is implemented with a line search based on the augmented Lagrangian merit function. Numerical experiments highlight the efficiency of the resulting SQP-multigrid approach. In all cases, locally superlinear convergence of the SQP method is observed; in combination with the mesh-independent convergence rate of the inner solver, a solution method with optimal efficiency is obtained.
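    For concreteness, the saddle-point structure referred to above can be written out for the standard discretized linear-quadratic model problem; this is a generic textbook formulation, and the exact operators and scalings in the thesis may differ.

```latex
% Model problem: minimize over (y, u)
%   \tfrac12 (y - y_d)^\top M (y - y_d) + \tfrac{\nu}{2}\, u^\top M u
% subject to the discretized state equation  K y = M u,
% where M is the mass matrix, K the stiffness matrix of the elliptic
% operator, y_d the target state, and \nu > 0 the regularization parameter.
% The first-order optimality (KKT) conditions in state y, control u, and
% adjoint p form the large, sparse, indefinite saddle-point system:
\[
\begin{pmatrix}
M & 0 & K^{\top} \\
0 & \nu M & -M^{\top} \\
K & -M & 0
\end{pmatrix}
\begin{pmatrix} y \\ u \\ p \end{pmatrix}
=
\begin{pmatrix} M y_d \\ 0 \\ 0 \end{pmatrix}
\]
```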