87 research outputs found

    Convergent plug-and-play with proximal denoiser and unconstrained regularization parameter

    Full text link
    In this work, we present new proofs of convergence for Plug-and-Play (PnP) algorithms. PnP methods are efficient iterative algorithms for solving image inverse problems where regularization is performed by plugging a pre-trained denoiser into a proximal algorithm, such as Proximal Gradient Descent (PGD) or Douglas-Rachford Splitting (DRS). Recent research has explored convergence by incorporating a denoiser that can be written exactly as a proximal operator. However, the corresponding PnP algorithm must then be run with a stepsize equal to 1. The stepsize condition for nonconvex convergence of the proximal algorithm in use then translates into restrictive conditions on the regularization parameter of the inverse problem, which can severely degrade the restoration capacity of the algorithm. In this paper, we present two remedies for this limitation. First, we provide a novel convergence proof for PnP-DRS that does not impose any restriction on the regularization parameter. Second, we examine a relaxed version of the PGD algorithm that converges across a broader range of regularization parameters. Our experimental study, conducted on deblurring and super-resolution experiments, demonstrates that both solutions enhance the accuracy of image restoration. (Comment: arXiv admin note: substantial text overlap with arXiv:2301.1373)
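    As an illustration of the generic PnP-PGD scheme described above, the sketch below alternates a gradient step on the data-fidelity term with a call to a plugged-in denoiser. It is a minimal, schematic Python example: the `grad_f` and `denoiser` callables are placeholders for the gradient of the chosen fidelity term and a pre-trained network, not the exact implementation studied in the paper.

```python
import numpy as np

def pnp_pgd(grad_f, denoiser, x0, tau=1.0, n_iter=100):
    """Schematic Plug-and-Play Proximal Gradient Descent.

    grad_f   : gradient of the data-fidelity term f
    denoiser : pre-trained Gaussian denoiser, used in place of a proximal map
    tau      : stepsize (the proximal-denoiser analysis requires tau = 1)
    """
    x = x0.copy()
    for _ in range(n_iter):
        # gradient step on the data-fidelity term, then denoising step
        x = denoiser(x - tau * grad_f(x))
    return x

# Toy usage with an identity forward model and a no-op stand-in denoiser
y = np.random.rand(32, 32)
x_hat = pnp_pgd(grad_f=lambda x: x - y,      # gradient of 0.5 * ||x - y||^2
                denoiser=lambda x: x,        # placeholder for a learned denoiser
                x0=y)
```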

    A relaxed proximal gradient descent algorithm for convergent plug-and-play with proximal denoiser

    Full text link
    This paper presents a new convergent Plug-and-Play (PnP) algorithm. PnP methods are efficient iterative algorithms for solving image inverse problems formulated as the minimization of the sum of a data-fidelity term and a regularization term. PnP methods perform regularization by plugging a pre-trained denoiser into a proximal algorithm, such as Proximal Gradient Descent (PGD). To ensure convergence of PnP schemes, many works study specific parametrizations of deep denoisers. However, existing results require either unverifiable or suboptimal hypotheses on the denoiser, or assume restrictive conditions on the parameters of the inverse problem. Observing that these limitations can be due to the proximal algorithm in use, we study a relaxed version of the PGD algorithm for minimizing the sum of a convex function and a weakly convex one. We show that, when plugged with a relaxed proximal denoiser, the proposed PnP-αPGD algorithm converges for a wider range of regularization parameters, thus allowing more accurate image restoration.
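    For concreteness, one simple way to relax PGD is to average each denoised gradient step with the previous iterate. The Python sketch below illustrates this generic relaxation; it is only an assumed, schematic variant, and the exact PnP-αPGD update analyzed in the paper may interleave the relaxation differently.

```python
def relaxed_pnp_pgd(grad_f, denoiser, x0, tau=1.0, alpha=0.5, n_iter=100):
    """Schematic relaxed PnP-PGD: each iterate is a convex combination of the
    previous iterate and the denoised gradient step; alpha = 1 recovers the
    plain PnP-PGD update."""
    x = x0.copy()
    for _ in range(n_iter):
        z = denoiser(x - tau * grad_f(x))   # standard PnP-PGD update
        x = (1 - alpha) * x + alpha * z     # relaxation with parameter alpha
    return x
```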

    Convergent Bregman Plug-and-Play Image Restoration for Poisson Inverse Problems

    Full text link
    Plug-and-Play (PnP) methods are efficient iterative algorithms for solving ill-posed image inverse problems. PnP methods are obtained by using deep Gaussian denoisers instead of the proximal operator or the gradient-descent step within proximal algorithms. Current PnP schemes rely on data-fidelity terms that have either a Lipschitz gradient or a closed-form proximal operator, neither of which applies to Poisson inverse problems. Based on the observation that Gaussian noise is not the adequate noise model in this setting, we propose to generalize PnP using the Bregman Proximal Gradient (BPG) method. BPG replaces the Euclidean distance with a Bregman divergence that can better capture the smoothness properties of the problem. We introduce the Bregman Score Denoiser, specifically parametrized and trained for the new Bregman geometry, and prove that it corresponds to the proximal operator of a nonconvex potential. We propose two PnP algorithms based on the Bregman Score Denoiser for solving Poisson inverse problems. Extending the convergence results of BPG in the nonconvex setting, we show that the proposed methods converge, targeting stationary points of an explicit global functional. Experimental evaluations conducted on various Poisson inverse problems validate the convergence results and showcase effective restoration performance.
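    To make the Bregman geometry concrete, the sketch below shows one PnP Bregman Proximal Gradient step using the Burg entropy h(x) = -Σ log x, a Bregman potential commonly associated with Poisson noise. The `denoiser` callable stands in for the paper's Bregman Score Denoiser; the choice of h and the surrounding details are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def bregman_pnp_step(x, grad_f, denoiser, tau=1.0, eps=1e-12):
    """One schematic PnP Bregman Proximal Gradient (BPG) step with Burg entropy.

    The Euclidean gradient step is replaced by a mirror step through grad h:
    grad h(z) = grad h(x) - tau * grad f(x), and since grad h(x) = -1/x this
    gives z = 1 / (1/x + tau * grad f(x)). The proximal step is replaced by
    the plugged-in denoiser (the Bregman Score Denoiser in the paper).
    """
    z = 1.0 / (1.0 / np.maximum(x, eps) + tau * grad_f(x))  # mirror/Bregman step
    return denoiser(z)                                       # regularization step
```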

    Spatial and Morphological Control of Cyanine Dye Thin Films

    Get PDF
    Cyanine dyes are organic semiconductor compounds with light absorption and emission properties useful for emerging technologies such as solar cells and light-emitting devices. The characteristics of these materials in the solid state depend on the organization of their constituent building blocks. This thesis focuses on controlling the morphology of cyanine dye thin films at different length scales and clarifying the resulting properties. When microstructures present features whose size matches visible light wavelengths, new properties may arise from light-matter interactions. Here, the properties resulting from the light-matter interactions of cyanine droplet films cast from solution are studied. Based on experimental evidence, it is shown that dye droplet ensembles scatter light with different efficiencies and wavelength ranges depending on their dimensions. FDTD simulations are used to show that this effect results from scattering enhancement at the absorption edge of the dye, where the refractive index varies considerably. Simulations also provide a better understanding of individual droplets' interaction with light. While earlier work had hypothesized that the observed scattering phenomena were due to crystalline clusters within the droplets, this work highlights the contribution of the dye film morphology. Cyanines also form single crystals whose fabrication induces molecular-scale order in the material. Previous work demonstrated that thin single crystals could be grown by solvent vapor annealing of dye droplets. Here it is shown that, in uncontrolled conditions, cyanine single crystals destabilize to form dendritic crystals. In-situ microscopy observations highlight the solute-reservoir role of the droplet distribution surrounding a growing crystal; when the distance between the droplets and the crystal front is large, the solute supply is diffusion-limited. Moreover, variations in the local pressure equilibrium between the droplets and the crystal front lead to advection fluxes which perturb the crystal growth. These observations help design configurations that either prevent crystal destabilization or take advantage of the dendritic growth in a controlled manner. In addition, the patterning of crystals on a substrate is relevant for their application in devices. A practical challenge is to induce single-crystal growth at specific locations. Here, surfaces patterned with self-assembled monolayers (SAMs) of hydrophilic and hydrophobic thiols are used to create dye droplet arrays from which crystals can be grown. This method is shown to yield local crystallization of the dye and to prevent crystal destabilization through better control of the droplet distribution. By varying the dye solution concentration, partial control over crystal density is achieved; however, it proved difficult to control the number of nuclei per droplet. A more controlled evaporation and solvent vapor annealing system might be necessary to master the nucleation process. Finally, the structure and optical properties of cyanine single crystals are addressed. The crystal structure was determined by X-ray diffraction. Structural aspects are shown to lead to excitonic couplings, which are evidenced by orientation-dependent spectroscopic measurements of single crystals. Although further investigation of the absorption band structure is necessary, the results are promising for photovoltaic devices as they might improve exciton transport compared to amorphous layers.

    Use of integral experiments for the assessment of a new 235U evaluation

    Full text link
    The Working Party on International Nuclear Data Evaluation Co-operation (WPEC) subgroup 29 (SG 29) was established to investigate an issue with the 235U capture cross-section in the energy range from 0.1 to 2.25 keV, due to a possible overestimation of 10% or more. To improve the 235U capture cross-section, a new 235U evaluation has been proposed by the Institut de Radioprotection et de Sûreté Nucléaire (IRSN) and the CEA, mainly based on new time-of-flight 235U capture cross-section measurements and recent fission cross-section measurements performed at the n_TOF facility at CERN. IRSN and CEA Cadarache were in charge of the thermal to 2.25 keV energy range, whereas CEA DIF was responsible for the high-energy region. Integral experiments showing a strong 235U sensitivity are used to assess the new evaluation, using Monte Carlo methods. The keff calculations were performed with the 5.D.1 beta version of the MORET 5 code, using the JEFF-3.2 library and the new 235U evaluation, as well as the JEFF-3.3T1 library in which the new 235U evaluation has been included. The benchmark selection highlighted a significant improvement in keff due to the new 235U evaluation. The results of this data testing are presented here.

    Integral experiments at Sandia with molybdenum sleeves for testing 95Mo cross sections in thermal energy spectrum

    No full text
    Molybdenum plays an important role in nuclear criticality-safety, as it is a fission product found in reprocessing plants in the form of highly concentrated UPuMoZr deposits, but also in research reactors and naval and space reactors. 95Mo, 96Mo and 97Mo are the main isotopes of interest in terms of abundance; 95Mo has the largest capture cross section. Few integral experiments sensitive to the cross sections of molybdenum are available in the literature. The French MIRTE experiments [1], consisting of a 10 mm molybdenum screen between four lattices of Valduc U(4.738 %)O2 rods, are sensitive to the capture cross section of 95Mo, mainly in the thermal energy range. However, due to the limited number of available rods, the MIRTE experiments are not sensitive enough to molybdenum, in particular around the first resonance of 95Mo near 45 eV. Unfortunately, high sensitivity in this range is important to improve the accuracy of criticality computations for some French reprocessing plant application cases. As a result, to address the needs of criticality-safety, an IER sheet was submitted in 2015 and approved by the NCSP program management team. A preliminary design report (CED-1) was issued in 2018 and a final design report (CED-2) at the end of 2020. The latter design proposes to test natural 0.762-mm-thick molybdenum sleeves (external diameter = 12.7 mm) around 397 7uPCX rods (UO2 rods enriched at 6.90%), in a test zone surrounded by un-sleeved 7uPCX rods. The UO2 rods are arranged on a hexagonal 1.55-cm pitch, for a total of 877 rods. The approach to criticality is reached through the addition of rods, the rods being totally immersed in water. Reference experiments without molybdenum are planned to provide feedback on cross sections, and reproducibility experiments are proposed to confirm the estimate of uncertainties. The material supply study analyzing the costs was completed at the beginning of 2021 and the sleeves were ordered in April 2021; they were sent to Sandia National Laboratories last September. Experiments are expected to be performed by the end of 2021 or at the beginning of 2022. The final paper will address the final design, give an estimation of the experimental uncertainties, and report the calculated keff values obtained with various codes and nuclear data libraries, as well as the sensitivities of keff to the main reactions. These experiments will also help improve molybdenum cross sections in the thermal energy range, as they will complement the work performed on differential measurements (transmission, capture) from J-PARC, n_TOF, GELINA and RPI on natural and enriched molybdenum samples and the assessment work of IRSN with the SAMMY and CONRAD codes aimed at producing new resonance parameters and covariance data.
    [1] International Handbook of Evaluated Criticality Safety Benchmark Experiments, NEA/NSC/DOC(95)/03, OECD NEA, Paris, France – LEU-COMP-THERM-106 – "4.738-Wt.%-Enriched-Uranium-Dioxide-Fuel-Rod Arrays In Water Reflected Or Separated By Various Structural Materials (Sodium Chloride, Rhodium, PVC, Molybdenum, Chromium, Manganese)", N. Leclaire, F.X. Le Dauphin, I. Duhamel, J. Bess.

    Hardware and Software architectures for deep learning acceleration on embedded multi-processor

    No full text
    Convolutional neural networks (CNNs) are widely used in the field of image recognition and give very good results compared to classical algorithms. To improve the recognition rate and increase the number of recognizable classes, neural networks are becoming increasingly deep and computationally demanding. In the embedded context, resource constraints do not allow the most expensive neural networks to be executed in real time: the platforms may lack the computing power and/or the memory to store all the parameters. This manuscript proposes several optimizations to improve the performance of convolution operations in the embedded multiprocessor context. The first is the optimization of the hardware resources of the platform: the number of computing units, the number of registers, the memory size, and the bandwidth must all be taken into account to make the best use of the available computing power. The second is the optimization of the convolution algorithms. The Winograd algorithm reduces the arithmetic complexity of the convolution by a factor of up to 2.25 by transforming the convolution operations into element-wise multiplications. However, Winograd requires more registers to hold the intermediate data during the execution of the algorithm, which penalizes its performance compared to the direct convolution algorithm. To solve this problem, architectural evolutions are proposed, with the addition of extra registers and new instructions, in order to improve the acceleration factor of Winograd over direct convolution. Performance was evaluated on the ASMP (Application Specific Multi Processors) platform, based on 8 STMicroelectronics STxP70 processors with a vector co-processor allowing up to 8 MAC (multiply-accumulate) operations per cycle on a single core and up to 64 MACs per cycle across the 8 processors.
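    The arithmetic saving mentioned above comes from Winograd's minimal filtering algorithms. The Python sketch below shows the standard 1D case F(2,3), which computes two outputs of a 3-tap filter with 4 multiplications instead of 6; tiling the 2D variant F(2x2, 3x3) gives the 2.25x reduction cited in the abstract (16 versus 36 multiplications). This is the textbook transform, not the ASMP/STxP70 implementation developed in the thesis.

```python
import numpy as np

def winograd_f23(d, g):
    """Winograd F(2,3): two convolution outputs from a 3-tap filter using
    4 multiplications instead of 6.

    d : length-4 input tile, g : length-3 filter.
    Returns the two valid correlation outputs of g over d.
    """
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

# Check against the direct computation on a random tile
d, g = np.random.rand(4), np.random.rand(3)
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
assert np.allclose(winograd_f23(d, g), direct)
```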

    Comparison of two methods for assessment of the rod positioning uncertainty and consequences on the evaluation of correlation factors

    Get PDF
    In this paper, two major families of methods for assessing the rod positioning uncertainty in a lattice are tested: a traditional one described in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE), and another consisting in sampling the position of the rods with Monte Carlo techniques (ISO Uncertainty Guidelines). They are applied to a benchmark with a tight-packed lattice of UO2 rods that is sensitive to rod positioning, as it is clearly under-moderated. It is shown that the choice of method has a great impact on the propagated uncertainty: the traditional one leads to a significant overestimation of the overall uncertainty and can also contribute to a bias in the correlation factors used for assessing biases due to nuclear data with GLLSM methodologies. The paper briefly describes the tight-packed lattice experimental program performed at the Valduc Research Centre, which is at the origin of these concerns. It then proposes a simple model on which simulations of rod positioning are performed with the MORET 5 Monte Carlo code using the Prométhée tool. Results demonstrate that the Monte Carlo methodology provides more realistic uncertainty estimates for the fuel pitch, consistent with repeatability/reproducibility experiments. The current comparisons use light water reactor systems, which are directly relevant to some small modular reactor designs. However, accurate prediction and estimation of pitch uncertainties is also relevant for advanced reactor systems; applying unrealistic uncertainty analysis methods can incur larger margins in advanced reactor design, safety, and operation than are necessary.
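    The Monte Carlo family of methods discussed above propagates positioning tolerances by sampling many perturbed configurations, rather than shifting every rod coherently to its bounding position. The Python sketch below is purely illustrative: `keff_model` is a hypothetical surrogate standing in for a full transport calculation (MORET 5 driven by Prométhée in the paper), and the tolerance and rod-count values are made up.

```python
import numpy as np

def mc_pitch_uncertainty(keff_model, nominal_pitch, tol, n_rods, n_samples=200, seed=0):
    """Monte Carlo propagation of rod-positioning uncertainty: each rod position
    is perturbed independently within the tolerance, keff is re-evaluated for
    every sampled configuration, and the spread gives the propagated uncertainty."""
    rng = np.random.default_rng(seed)
    keffs = [keff_model(nominal_pitch + rng.uniform(-tol, tol, size=n_rods))
             for _ in range(n_samples)]
    return float(np.mean(keffs)), float(np.std(keffs))

# Toy surrogate: in an under-moderated lattice keff grows with the average pitch
toy_keff = lambda pitches: 0.95 + 0.02 * (pitches.mean() - 1.55)
mean_k, sigma_k = mc_pitch_uncertainty(toy_keff, nominal_pitch=1.55, tol=0.05, n_rods=100)

# The traditional bounding approach instead shifts all rods coherently by +/- tol,
# which yields a much larger (over-estimated) spread than the sampled sigma above.
bound_spread = abs(toy_keff(np.full(100, 1.55 + 0.05))
                   - toy_keff(np.full(100, 1.55 - 0.05))) / 2
```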

    Validation of the MORET 5 Monte Carlo Transport Code on Reactor Physics Experiments

    No full text
    The MORET 5 code, which has been developed over more than 50 years at IRSN, has recently evolved, in its continuous-energy version, from a criticality-oriented code to a code also focused on reactor physics applications. Some developments, such as the implementation of kinetics parameters, contribute to that evolution. The aim of the paper is to present the validation of the code for the keff multiplication factor used in criticality studies as well as for other parameters commonly used in reactor physics applications. Special attention is paid to commissioning tests performed in the CABRI French reactor (CABRI is a pool-type research reactor operated by CEA at the Cadarache site in southern France, used to simulate a sudden and instantaneous increase in power, known as a power transient, typical of a reactivity-initiated accident (RIA)) and to the IPEN/MB-01 LCT-077 benchmark.