
    Influence of chemical speciation on the separation of metal ions from chelating agents by nanofiltration membranes

    The simultaneous separation of various metal ions (nickel, copper, calcium, and iron) from chelating agents (EDTA and citric acid) in water streams using nanofiltration membranes is analyzed. Assuming that multiply-charged species are highly rejected, chemical speciation computations reproduce the observed patterns of metal and ligand rejection at different pH values and concentrations.
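
    As a rough illustration of what such a speciation computation involves, the sketch below (Python; the approximate literature pKa values for EDTA and the helper names are our assumptions, not taken from the paper) estimates the fraction of multiply-charged EDTA species as a function of pH, which is what drives the predicted rejection.

        import numpy as np

        # Approximate stepwise pKa values for EDTA (H4Y); illustrative only.
        PKAS = [2.0, 2.7, 6.2, 10.3]

        def speciation_fractions(ph, pkas=PKAS):
            """Fractions of the protonation states H(4-j)Y^(j-) at a given pH."""
            h = 10.0 ** (-ph)                     # proton activity
            kas = [10.0 ** (-pk) for pk in pkas]  # stepwise dissociation constants
            n = len(kas)
            # Term j corresponds to the species with j protons removed (charge -j).
            terms = np.array([h ** (n - j) * np.prod(kas[:j]) for j in range(n + 1)])
            return terms / terms.sum()

        # Fraction of multiply-charged species (charge -2 or lower) across pH:
        for ph in (2.0, 4.0, 7.0, 10.0):
            frac = speciation_fractions(ph)[2:].sum()
            print(f"pH {ph}: multiply-charged fraction = {frac:.2f}")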

    Adaptive temperature scaling for robust calibration of deep neural networks

    In this paper, we study the post-hoc calibration of modern neural networks, a problem that has drawn much attention in recent years. Despite the plethora of calibration methods proposed, there is no consensus yet on the inherent complexity of the task: while some authors claim that simple functions solve the problem, others suggest that more expressive models are needed to capture miscalibration. As a first approach, we focus on the task of confidence scaling, specifically on post-hoc methods that generalize Temperature Scaling, which we refer to as the Adaptive Temperature Scaling family. We begin by demonstrating that while complex models like neural networks provide an advantage when data is ample, they fail when it is limited, a scenario common in fields like medical diagnosis. We then show that, under ideal data conditions, the more expressive methods learn a relationship between the entropy of a prediction and its level of overconfidence. Based on this observation, we propose Entropy-based Temperature Scaling, a simple method that scales the confidence of a prediction according to this relationship. Results show that our method obtains state-of-the-art performance and is robust against data scarcity. Moreover, our proposed model enables a deeper understanding of the calibration process through the interpretation of the entropy as a measure of uncertainty in the network outputs. Supported by grants PID2021-125943OBI00 and PID2019-106827GB-I0.
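
    A minimal sketch of the idea in code (PyTorch; the log-linear parameterization of the temperature as a function of the entropy is our assumption for illustration, not necessarily the paper's exact form):

        import torch
        import torch.nn.functional as F

        def entropy(logits):
            """Shannon entropy of the softmax distribution, per sample."""
            p = F.softmax(logits, dim=-1)
            return -(p * p.clamp_min(1e-12).log()).sum(dim=-1)

        def fit_entropy_temperature(logits, labels, steps=500, lr=0.05):
            """Fit T(x) = exp(a * H(x) + b) by minimizing NLL on a validation set."""
            a = torch.zeros(1, requires_grad=True)
            b = torch.zeros(1, requires_grad=True)
            opt = torch.optim.Adam([a, b], lr=lr)
            h = entropy(logits).detach()
            for _ in range(steps):
                opt.zero_grad()
                t = torch.exp(a * h + b).unsqueeze(-1)   # per-sample temperature
                loss = F.cross_entropy(logits / t, labels)
                loss.backward()
                opt.step()
            return a.detach(), b.detach()

        # Calibrated probabilities for new logits:
        # t = torch.exp(a * entropy(new_logits) + b).unsqueeze(-1)
        # probs = F.softmax(new_logits / t, dim=-1)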

    Automatic generation of camouflage evaluation sets

    Background subtraction has become a key step in several computer vision algorithms, and there are plenty of studies proposing varied approaches. However, the problem is not yet fully solved. One reason might be that each method has been developed for a different task, e.g. video surveillance or optical motion capture. The recent appearance of comprehensive datasets provides a common framework for evaluating background subtraction algorithms. These datasets offer a balanced repertoire of sequences featuring common challenges, which leads to extensive overall scores in which robustness against different challenges is considered but not particularized to each of them. A barely studied challenge, and the focus of our work, is camouflage: the resemblance between background and foreground samples. The research community agrees that there is not yet a commonly accepted approach to handle camouflage. In this work, we propose a novel solution for modeling camouflage based on Jung's theorem. Based on this solution, we generate camouflage likelihoods for every foreground pixel in a sequence, using available ground-truth information to discriminate the background from the foreground. The evaluation of the proposed solution is performed in discrepancy terms by thresholding the camouflage likelihoods to obtain a binary mask, on which we apply classical classification metrics. Thereby, we are able to further analyze the effect of the features selected by different background subtraction algorithms in handling camouflage. Furthermore, the proposed solution also permits ranking a set of sequences in terms of camouflage. The experiments carried out on the popular CDNET2014 dataset suggest that the use of certain alternative features to color, e.g. motion, is beneficial for robustly handling camouflage.
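
    A hypothetical sketch of how a per-pixel camouflage likelihood could be built around Jung's theorem (the specific formulation, function names, and the linear decay below are our assumptions for illustration; the paper's exact construction may differ):

        import numpy as np

        def jung_radius(samples):
            """Upper bound on the radius of the minimal enclosing ball of a set
            in R^n via Jung's theorem: r <= d * sqrt(n / (2 * (n + 1))),
            where d is the diameter of the set."""
            n = samples.shape[1]
            # Pairwise distances; fine for small per-pixel sample sets.
            diffs = samples[:, None, :] - samples[None, :, :]
            d = np.sqrt((diffs ** 2).sum(-1)).max()
            return d * np.sqrt(n / (2.0 * (n + 1)))

        def camouflage_likelihood(fg_feature, bg_samples):
            """Score near 1 when the foreground feature falls inside the ball
            enclosing the background samples, decaying to 0 outside it."""
            center = bg_samples.mean(axis=0)
            r = jung_radius(bg_samples)
            dist = np.linalg.norm(fg_feature - center)
            return float(np.clip(1.0 - dist / (r + 1e-9), 0.0, 1.0))

        # Example: RGB background samples at one pixel vs. a foreground colour.
        bg = np.random.default_rng(0).normal(120, 5, size=(50, 3))
        print(camouflage_likelihood(np.array([121.0, 119.0, 122.0]), bg))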

    Gaussianization of LA-ICP-MS features to improve calibration in forensic glass comparison

    The forensic comparison of glass aims to compare a glass sample of an unknown source with a control glass sample of a known source. In this work, we use multi-elemental features from Laser Ablation Inductively Coupled Plasma with Mass Spectrometry (LA-ICP-MS) to compute a likelihood ratio. This calculation is a complex procedure that generally requires a probabilistic model including the within-source and between-source variabilities of the features. Assuming the within-source variability to be normally distributed is a practical premise with the available data. However, the between-source variability is generally assumed to follow a much more complex distribution, typically described with a kernel density function. In this work, instead of modeling distributions with complex densities, we propose the use of simpler models together with a data pre-processing step consisting of the Gaussianization of the glass features. In this context, to obtain a better fit of the features to the Gaussian model assumptions, we explore different normalization techniques for the LA-ICP-MS glass features, namely marginal Gaussianization based on histogram matching, marginal Gaussianization based on the Yeo-Johnson transformation, and a more complex joint Gaussianization using normalizing flows. We report an improvement in the performance of the likelihood ratios computed with the previously Gaussianized feature vectors, particularly relevant in their calibration, which implies a more reliable forensic glass comparison. This work has been supported by the Spanish Ministerio de Ciencia e Innovación through grant PID2021-125943OB-I0.
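
    The two marginal Gaussianization techniques map directly onto standard tooling. A minimal sketch (scikit-learn; the synthetic data below stands in for real LA-ICP-MS features, which are not reproduced here):

        import numpy as np
        from sklearn.preprocessing import PowerTransformer, QuantileTransformer

        # X: one row per measurement; hypothetical skewed data as a stand-in.
        X = np.random.default_rng(0).lognormal(mean=0.0, sigma=1.0, size=(200, 10))

        # Marginal Gaussianization via the Yeo-Johnson power transform.
        yj = PowerTransformer(method="yeo-johnson", standardize=True)
        X_yj = yj.fit_transform(X)

        # Marginal Gaussianization via histogram (quantile) matching to a normal.
        qt = QuantileTransformer(output_distribution="normal", n_quantiles=200)
        X_qt = qt.fit_transform(X)

        # After Gaussianization, a simple two-level Gaussian model of within- and
        # between-source variability can replace kernel density estimates when
        # computing likelihood ratios.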

    Gaussian Processes for radiation dose prediction in nuclear power plant reactors

    In nuclear power plants, there are high-exposure jobs, like refuelling and maintenance, that require getting close to the reactor between operation cycles. Reducing radiation dose during these periods is therefore of paramount importance for safety regulations. While there are some manipulable variables, like the levels of certain corrosion products, that can influence the final level of radiation dose, there is no principled way to determine it. In this work, we propose to use Machine Learning to predict the radiation dose in the reactor at the end of the cycle based on information available during cycle operation. In particular, we use a Gaussian Process to model the relation between cobalt radioisotopes (a certain kind of corrosion product) and radiation dose levels. Gaussian Processes acknowledge the uncertainty of their predictions, a desirable property considering the high-risk nature of the present application. We report experiments on real data gathered from five different power plants in Spain. Results show that these models can be used to estimate future values of radiation dose in a data-driven way. Moreover, tools based on these models are currently in development for their application in power plants. The authors from the UAM are funded by the Spanish Ministerio de Ciencia, Innovación y Universidades (MCIU) and Agencia Estatal de Investigación (AEI), and also by the European Regional Development Fund (FEDER in Spanish, ERDF in English), through project RTI2018-098091-B-I00. The work has been conducted in the context of a signed collaboration agreement between AUDIAS-UAM and ENUSA Industrias Avanzadas S.A.
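
    A minimal sketch of the modeling step (scikit-learn; the synthetic data and the kernel choice are our assumptions, as the abstract does not spell out the exact configuration):

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Hypothetical stand-in data: cobalt radioisotope activities measured
        # during the cycle (features) and end-of-cycle radiation dose (target).
        rng = np.random.default_rng(0)
        X = rng.uniform(0.0, 1.0, size=(40, 2))     # e.g. Co-58 and Co-60 levels
        y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.05, 40)

        kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(X, y)

        # Predictions come with a standard deviation, i.e. the model's own
        # uncertainty estimate, valuable in a high-risk setting like this one.
        mean, std = gp.predict([[0.4, 0.6]], return_std=True)
        print(f"predicted dose: {mean[0]:.2f} +/- {std[0]:.2f}")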

    Minding the gap between secondary school and university

    The renewal of engineering education requires teaching that is more attentive to students' circumstances, which, when known, help guide them into the future. It is about channelling students towards learning, taking into account the factors related to the acquisition of knowledge and how students can share this knowledge with their teachers. The specific aim of the current study was to examine what the transition from secondary school to university means for students, and to introduce changes that reduce the failures it generates. The causes of low grades in the initial phase of university are analysed, and some remedies are then proposed. First, to gather information, student surveys and interview activities, led by an expert, were conducted. Subsequently, compensatory actions were organized by experts for students and teachers. The surveys were designed to provide a self-assessment of new students regarding dedication and performance, and were given to those who failed the first important exam, capturing how they experienced university entrance and their first failure. They point to some personal causes of low performance: deficient time organization, impediments to continuous study, and difficulties adapting. Half of the students believe their dedication merits better learning and marks, and they stress the difficulties associated with an insufficient level of secondary education and with the types of exams. This study, encompassed within the framework of the activities dedicated to educational improvement at UPC, highlights the need to implement guidance and accompaniment actions for first-year students.