
    When Does Re-initialization Work?

    Full text link
Re-initializing a neural network during training has been observed to improve generalization in recent works. Yet it is neither widely adopted in deep learning practice nor often used in state-of-the-art training protocols. This raises the question of when re-initialization works, and whether it should be used together with regularization techniques such as data augmentation, weight decay and learning rate schedules. In this work, we conduct an extensive empirical comparison of standard training with a selection of re-initialization methods to answer this question, training over 15,000 models on a variety of image classification benchmarks. We first establish that such methods are consistently beneficial for generalization in the absence of any other regularization. However, when deployed alongside other carefully tuned regularization techniques, re-initialization methods offer little to no added benefit for generalization, although optimal generalization performance becomes less sensitive to the choice of learning rate and weight decay hyperparameters. To investigate the impact of re-initialization methods on noisy data, we also consider learning under label noise. Surprisingly, in this case, re-initialization significantly improves upon standard training, even in the presence of other carefully tuned regularization techniques.
    Comment: Published in PMLR Volume 187; spotlight presentation at the I Can't Believe It's Not Better Workshop at NeurIPS 2022
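    The specific re-initialization methods compared in the paper vary, but the basic mechanic is easy to sketch. Below is a minimal, hypothetical PyTorch example of one such scheme, periodically re-initializing only the output head during training; the architecture, the schedule and the `reinit_last_layers` helper are illustrative assumptions, not the paper's protocol.

```python
# Hypothetical sketch: periodically re-initialize the last module(s) of a
# network while keeping earlier layers, then continue training.
import torch
import torch.nn as nn

def reinit_last_layers(model: nn.Sequential, n_last: int = 1) -> None:
    """Re-initialize the parameters of the last `n_last` modules in-place."""
    for module in list(model)[-n_last:]:
        if hasattr(module, "reset_parameters"):
            module.reset_parameters()

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
reinit_every = 10  # epochs between re-initializations (arbitrary choice)

for epoch in range(30):
    # ... one epoch of training on the task would go here ...
    if (epoch + 1) % reinit_every == 0:
        reinit_last_layers(model, n_last=1)  # re-init the output head only
```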

    A study on the plasticity of neural networks

    Full text link
One aim shared by multiple settings, such as continual learning or transfer learning, is to leverage previously acquired knowledge to converge faster on the current task. Usually this is done through fine-tuning, with the implicit assumption that the network maintains its plasticity, meaning that the performance it can reach on any given task is not negatively affected by previously seen tasks. It has recently been observed that a model pretrained on data from the same distribution as the one it is fine-tuned on might not reach the same generalisation as a freshly initialised one. We build on and extend this observation, providing a hypothesis for the mechanics behind it. We discuss the implications of losing plasticity for continual learning, which relies heavily on optimising pretrained models.
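    The observation the abstract builds on can be tested with a simple comparison. The sketch below is a hypothetical Python/PyTorch outline, not the paper's experimental setup; `final_accuracy`, the optimiser choice and the data loader are all assumptions for illustration.

```python
# Hypothetical sketch: train a freshly initialised network and fine-tune a
# pretrained one on the same data, then compare what each can reach.
import torch

def final_accuracy(model, loader, epochs, lr=1e-3):
    """Train `model` on `loader` and return its final training accuracy."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    correct = sum((model(x).argmax(1) == y).sum().item() for x, y in loader)
    return correct / sum(len(y) for _, y in loader)

# pretrained = model already trained on data from the same distribution
# fresh     = freshly initialised copy of the same architecture
# If final_accuracy(pretrained, loader, E) < final_accuracy(fresh, loader, E),
# the pretrained model has lost plasticity in the sense described above.
```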

    Implementing Argumentation-enabled Empathic Agents

    Full text link
In a previous publication, we introduced the core concepts of empathic agents as agents that use a combination of utility-based and rule-based approaches to resolve conflicts when interacting with other agents in their environment. In this work, we implement proof-of-concept prototypes of empathic agents with the multi-agent systems development framework Jason and apply argumentation theory to extend the previously introduced concepts to account for inconsistencies between the beliefs of different agents. We then analyze the feasibility of different admissible set-based argumentation semantics to resolve these inconsistencies. As a result of the analysis, we identify the maximal ideal extension as the most feasible argumentation semantics for the problem in focus.
    Comment: Accepted for/presented at the 16th European Conference on Multi-Agent Systems (EUMAS 2018)
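    For readers unfamiliar with the semantics the abstract compares, the brute-force Python sketch below illustrates admissible sets, preferred extensions and the ideal extension on a tiny abstract argumentation framework. The example framework is invented for illustration; this is not the paper's Jason-based implementation.

```python
# Tiny abstract argumentation framework AF = (arguments, attacks).
from itertools import chain, combinations

args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "a"), ("b", "c")}  # hypothetical example AF

def conflict_free(s):
    return not any((x, y) in attacks for x in s for y in s)

def defends(s, a):
    # s defends a if every attacker of a is attacked by some member of s.
    attackers = {x for (x, y) in attacks if y == a}
    return all(any((z, x) in attacks for z in s) for x in attackers)

def admissible(s):
    return conflict_free(s) and all(defends(s, a) for a in s)

subsets = [set(c) for c in chain.from_iterable(
    combinations(sorted(args), r) for r in range(len(args) + 1))]
adm = [s for s in subsets if admissible(s)]
# Preferred extensions: maximal admissible sets w.r.t. set inclusion.
preferred = [s for s in adm if not any(s < t for t in adm)]
# Ideal extension: the maximal admissible set contained in every preferred one.
common = set.intersection(*preferred) if preferred else set()
ideal = max((s for s in adm if s <= common), key=len)
print(adm, preferred, ideal)  # here the ideal extension is empty
```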

    Strengths-Weaknesses-Opportunities-Threats analysis of carbon footprint indicator and derived recommendations

    Get PDF
ABSTRACT: Demand for a low carbon footprint may be a key factor in stimulating innovation, while prompting politicians to promote sustainable consumption. However, the variety of methodological approaches and techniques used to quantify life-cycle emissions prevents their successful and widespread implementation. This study offers recommendations for researchers, policymakers and practitioners seeking a more consistent approach to carbon footprint analysis. The assessment is based on a comprehensive Strengths-Weaknesses-Opportunities-Threats (SWOT) analysis of the carbon footprint indicator, carried out by bringing together the collective experience of the Carbonfeel Project following the Delphi technique principles. The results include the detailed SWOT analysis, from which specific recommendations for coping with the threats and weaknesses are derived. In particular, the results highlight the importance of an integrated approach combining organizational and product carbon footprinting in order to achieve a more standardized and consistent methodology. These recommendations can therefore pave the way for the development of new, specific and highly detailed guidelines.

Role of impaired renal function in moderate anaemia of undetermined etiology in elderly subjects

    No full text
Moderate anaemia is common in frail, hospitalized elderly subjects. When an etiological work-up is performed, its origin often remains unexplained. The aim of our work was to investigate the links between moderate age-related renal insufficiency and moderate anaemia in a large population of hospitalized elderly subjects. This was a retrospective study of hospitalization reports over three years (1998-2000) in the Hamburger department of the R. Muret hospital in Sevran. All patients over 60 years of age hospitalized in short- and medium-stay units were included. The main parameters studied were administrative; clinical (sex, age, weight, weight loss, medical history, diagnosis on admission); treatments on admission; and biological (complete blood count, ESR, CRP, fibrinogen, total protein, albumin, prealbumin, thyroid hormones, serum creatinine, creatinine clearance calculated with the Cockcroft formula, vitamin B12, folate, ferritin). The study population comprised 643 patients (402 women and 241 men) with a mean age of 82.4 ± 8.2 years. 329 cases had a haemoglobin level below 12 g/dL (51.2%), of which 238 had a haemoglobin between 10 and 12 g/dL (moderate anaemia). Analysis of creatinine clearance found 204 cases with normal renal function (ClCr > 60 mL/min), 318 cases with moderate renal insufficiency (ClCr = 30-60 mL/min) and 118 cases with severe or end-stage renal insufficiency (ClCr < 30 mL/min). Comparing haemoglobin with creatinine clearance identified 150 patients with anaemia and moderate renal insufficiency (ClCr = 30-60 mL/min), of whom 115 had moderate anaemia (Hb = 10-12 g/dL), i.e. 17.9% of the overall population analysed. Linear and multivariate correlation analyses relate haemoglobin to creatinine clearance, and also to ESR and prealbumin, both in the subgroup of anaemic patients (Hb < 12 g/dL) and in that of patients with moderate anaemia (Hb = 10-12 g/dL). In conclusion, the study shows that creatinine clearance calculated with the Cockcroft formula is an independent factor associated with the haemoglobin level in this population. These moderate anaemias of the elderly appear multifactorial, linked to age-related renal insufficiency, inflammation and nutrition.
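    The creatinine clearance estimate used throughout the study is the standard Cockcroft-Gault formula. A minimal Python sketch is given below; the formula itself is standard, while the function and variable names are mine.

```python
# Cockcroft-Gault estimate of creatinine clearance (mL/min).
def cockcroft_gault(age_years: float, weight_kg: float,
                    serum_creatinine_mg_dl: float, female: bool) -> float:
    """ClCr = (140 - age) * weight / (72 * serum creatinine), x0.85 if female."""
    clcr = (140 - age_years) * weight_kg / (72 * serum_creatinine_mg_dl)
    return clcr * 0.85 if female else clcr

# Example: an 82-year-old woman weighing 60 kg with serum creatinine 1.0 mg/dL
print(round(cockcroft_gault(82, 60, 1.0, female=True), 1))  # ~41.1 mL/min
```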

    Electromagnetic Tracker Measurement Error Simulation and Tool Design

    No full text
Abstract. Electromagnetic tracking is prone to errors due to the distortions caused by metallic objects in the proximity of the electromagnetic field. Hence, a tool with electromagnetic sensors must first be tested in different environments before it can be safely used in a surgical intervention. Such tests are often followed by design iterations meant to minimize the effect of these distortions. To accelerate this process, a simulation software was proposed to virtually test the tools before they are physically built. The simulated tests are based on a priori measurements which yield a characterization of the error field for the specific environments.
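    One plausible reading of "a priori measurements which yield a characterization of the error field" is a lookup table of measured errors on a spatial grid, interpolated at simulated tool poses. The Python sketch below illustrates that idea only; the grid, the random stand-in data and the use of SciPy interpolation are my assumptions, not the paper's method.

```python
# Hypothetical sketch: predict tracker error at an arbitrary tool position by
# interpolating an "error field" characterized by a priori grid measurements.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# A priori measurements: position error magnitude (mm) on a coarse 3-D grid.
xs = ys = zs = np.linspace(-200.0, 200.0, 5)  # tracker workspace, in mm
rng = np.random.default_rng(0)
error_mm = rng.uniform(0.2, 2.0, size=(5, 5, 5))  # stand-in for real data

error_field = RegularGridInterpolator((xs, ys, zs), error_mm)
tool_position = np.array([[35.0, -80.0, 120.0]])  # simulated sensor position
print(error_field(tool_position))  # predicted distortion error at that pose
```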

    Research on the Degree of Contamination of Surface and Groundwater used as Sources for Drinking Water Production

    No full text
The paper presents detailed results of the analysis of surface water (the Mureş river) and groundwater (the Criş basin) for their content of ammonium, nitrites, nitrates, arsenic, iron, zinc, cadmium and copper, compared with the maximum permissible values set by law.
    Keywords: drinking water, pollutants, hydrogeology, contamination, UV-VIS and Atomic Absorption Spectrometry
    Drinking water is used for domestic needs (drinking, cooking, washing), public needs (trade, institutions, maintenance of streets and green spaces, ornamental fountains), the needs of livestock, industrial needs and firefighting. The recent specialized literature includes similar studies dealing with the geochemical and microbiological evolution of water quality indicators, carried out on different water bodies [1, 2]; recent studies on water quality in lakes and river systems in South-Eastern Europe have also been carried out. To become drinking water, the water extracted from rivers or from underground wells must undergo rigorous treatment, whose scheme depends on the water quantity and quality required by the consumer on one hand, and on the available water sources and the proportion of pollutants in these sources on the other. Drinking water quality is regulated by law: water intended for human consumption must comply with maximum permissible parameter values, presented in table 1. On this basis, the paper presents the results of detailed analyses of water from the Mureş river (surface water) and the Criş basin (underground wells), which are used as sources of drinking water for human consumption. At the same time, knowing the concentrations of toxic chemical compounds in these waters is the starting point for determining the proper treatment technology. Hydrogeological considerations: in the western part of Romania, Neogene deposits include medium-deep and deep aquifers, with a total thickness exceeding 2500 m. The shallow (groundwater) aquifer is fed mainly by atmospheric precipitation and surface waters, and shows large variations in flow.

    Spectral normalisation for deep reinforcement learning: an optimisation perspective

    No full text
Most of the recent deep reinforcement learning advances take an RL-centric perspective and focus on refinements of the training objective. We diverge from this view and show we can recover the performance of these developments not by changing the objective, but by regularising the value-function estimator. Constraining the Lipschitz constant of a single layer using spectral normalisation is sufficient to elevate the performance of a Categorical-DQN agent to that of a more elaborate Rainbow agent on the challenging Atari domain. We conduct ablation studies to disentangle the various effects normalisation has on the learning dynamics, and show that modulating the parameter updates is sufficient to recover most of the performance of spectral normalisation. These findings hint at the need to also focus on the neural component and its learning dynamics to tackle the peculiarities of deep reinforcement learning.
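    The intervention the abstract describes, spectral normalisation on a single layer of the value network, is a one-line change in PyTorch. The sketch below uses `torch.nn.utils.spectral_norm`; the architecture is a generic illustration, not the paper's Categorical-DQN network.

```python
# Minimal sketch: constrain the Lipschitz constant of one layer of a
# value-function estimator via spectral normalisation.
import torch.nn as nn
from torch.nn.utils import spectral_norm

value_net = nn.Sequential(
    nn.Linear(128, 512), nn.ReLU(),
    # Spectral normalisation applied to just this single hidden layer:
    spectral_norm(nn.Linear(512, 512)), nn.ReLU(),
    nn.Linear(512, 18),  # e.g. one output per Atari action
)
```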