
    Influence of chemical speciation on the separation of metal ions from chelating agents by nanofiltration membranes

    The simultaneous separation of various metal ions (nickel, copper, calcium, and iron) from chelating agents (EDTA and citric acid) in water streams using nanofiltration membranes is analyzed. Assuming that multiply charged species are highly rejected, chemical speciation computations reproduce the observed patterns of metal and ligand rejection at different pH values and concentrations.
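    As a rough illustration of the kind of speciation calculation such an analysis relies on (not the model used in the paper), the sketch below solves the 1:1 metal-chelator mass balance for the fraction of metal present as the charged complex; the conditional stability constants and concentrations are placeholder values chosen only to show the pH dependence.

```python
import numpy as np

def complexed_fraction(c_metal, c_ligand, log_k_cond):
    """Fraction of total metal bound in a 1:1 metal-chelator complex.

    Solves K' = [MY] / ([M][Y]) with [M] = c_metal - x and
    [Y] = c_ligand - x, i.e. the quadratic
    K'x^2 - (K'(c_M + c_Y) + 1)x + K' c_M c_Y = 0,
    taking the physically meaningful (smaller) root.
    """
    k = 10.0 ** log_k_cond
    a = k
    b = -(k * (c_metal + c_ligand) + 1.0)
    c = k * c_metal * c_ligand
    x = (-b - np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return x / c_metal

# Placeholder conditional constants (log10 K') at two pH values;
# illustrative numbers only, not taken from the study.
for ph, log_k in [(4.0, 2.0), (10.0, 10.0)]:
    frac = complexed_fraction(c_metal=1e-3, c_ligand=1e-3, log_k_cond=log_k)
    print(f"pH {ph}: {100 * frac:.1f}% of the metal is in the charged complex")
```

    Under such a mass-balance picture, a higher conditional constant at high pH drives the metal into the multiply charged complex, which is the species assumed to be strongly rejected by the membrane.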

    Automatic generation of camouflage evaluation sets

    Background subtraction has become a key step in several computer vision algorithms, and there are plenty of studies proposing varied approaches. However, the problem is not yet fully solved, partly because each method has been developed for a different task, e.g. video surveillance or optical motion capture. The recent appearance of comprehensive datasets provides a common framework for evaluating background subtraction algorithms. These datasets offer a balanced repertoire of sequences in which common challenges are present, which leads to overall scores that account for robustness against different challenges but are not broken down by challenge. A particular, barely studied challenge, and the focus of our work, is camouflage: the resemblance between background and foreground samples. The research community agrees that there is not yet a commonly accepted approach to handling camouflage. In this work, we propose a novel solution for modeling camouflage based on Jung's theorem. Based on this solution, we generate camouflage likelihoods for every foreground pixel in a sequence, using the available ground-truth information to discriminate the background from the foreground. The proposed solution is evaluated in discrepancy terms by thresholding the camouflage likelihoods to obtain a binary mask, on which we apply classical classification metrics. Thereby, we are able to further analyze how the features selected by different background subtraction algorithms affect their handling of camouflage. Furthermore, the proposed solution also permits ranking a set of sequences in terms of camouflage. The experiments carried out on the popular CDNET2014 dataset suggest that certain features other than color, e.g. motion, are beneficial for robustly handling camouflage.
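    The evaluation step described above can be illustrated with a minimal sketch (not the authors' code): per-pixel camouflage likelihoods are thresholded into a binary mask and scored with classical classification metrics against a reference mask. The arrays, the threshold, and the reference mask here are hypothetical stand-ins for the CDNET2014 ground truth.

```python
import numpy as np

def evaluate_camouflage(likelihood, reference_mask, threshold=0.5):
    """Threshold per-pixel camouflage likelihoods into a binary mask
    and score it against a reference mask with precision/recall/F1."""
    pred = likelihood >= threshold          # binary camouflage mask
    ref = reference_mask.astype(bool)

    tp = np.sum(pred & ref)
    fp = np.sum(pred & ~ref)
    fn = np.sum(~pred & ref)

    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Toy frame: random maps standing in for a real sequence and its ground truth.
rng = np.random.default_rng(0)
likelihood = rng.random((240, 320))
reference_mask = rng.random((240, 320)) > 0.7
print(evaluate_camouflage(likelihood, reference_mask))
```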

    Gaussianization of LA-ICP-MS features to improve calibration in forensic glass comparison

    The forensic comparison of glass aims to compare a glass sample of unknown source with a control glass sample of known source. In this work, we use multi-elemental features from Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS) to compute a likelihood ratio. This calculation is a complex procedure that generally requires a probabilistic model including the within-source and between-source variabilities of the features. Assuming the within-source variability to be normally distributed is a practical premise with the available data. However, the between-source variability is generally assumed to follow a much more complex distribution, typically described with a kernel density function. In this work, instead of modeling distributions with complex densities, we propose the use of simpler models together with a data pre-processing step consisting of the Gaussianization of the glass features. In this context, to obtain a better fit of the features to the Gaussian model assumptions, we explore different normalization techniques for the LA-ICP-MS glass features, namely marginal Gaussianization based on histogram matching, marginal Gaussianization based on the Yeo-Johnson transformation, and a more complex joint Gaussianization using normalizing flows. We report an improvement in the performance of the likelihood ratios computed with the previously Gaussianized feature vectors, particularly relevant in their calibration, which implies a more reliable forensic glass comparison. This work has been supported by the Spanish Ministerio de Ciencia e Innovación through grant PID2021-125943OB-I0
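    A minimal sketch of the two marginal Gaussianization variants named above, using scikit-learn's QuantileTransformer (quantile/histogram matching to a standard normal) and PowerTransformer (Yeo-Johnson); the feature matrix is a random stand-in for LA-ICP-MS elemental measurements, and the joint normalizing-flow variant is omitted.

```python
import numpy as np
from scipy.stats import skew
from sklearn.preprocessing import PowerTransformer, QuantileTransformer

rng = np.random.default_rng(0)
# Stand-in for an (n_samples, n_elements) matrix of LA-ICP-MS features,
# deliberately skewed so the effect of Gaussianization is visible.
X = rng.lognormal(mean=0.0, sigma=1.0, size=(500, 10))

# Marginal Gaussianization via quantile (histogram) matching to N(0, 1).
quantile_gauss = QuantileTransformer(output_distribution="normal",
                                     n_quantiles=200, random_state=0)
X_quantile = quantile_gauss.fit_transform(X)

# Marginal Gaussianization via the Yeo-Johnson power transformation.
yeo_johnson = PowerTransformer(method="yeo-johnson", standardize=True)
X_yeojohnson = yeo_johnson.fit_transform(X)

# After either transform each feature is closer to Gaussian, so a simple
# Gaussian between-source model can replace kernel density estimates.
print("raw skewness:        ", np.round(skew(X, axis=0)[:3], 2))
print("Yeo-Johnson skewness:", np.round(skew(X_yeojohnson, axis=0)[:3], 2))
```

    In a forensic workflow of this kind, either transform would be fitted on the background population and then applied to casework samples before likelihood ratios are computed with the simpler Gaussian model.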

    Gaussian Processes for radiation dose prediction in nuclear power plant reactors

    In nuclear power plants, there are high-exposure jobs, like refuelling and maintenance, that require getting close to the reactor between operation cycles. Reducing the radiation dose during these periods is therefore of paramount importance for safety regulations. While there are some manipulable variables, like the levels of certain corrosion products, that can influence the final radiation dose, there is no principled way to determine it in advance. In this work, we propose to use Machine Learning to predict the radiation dose in the reactor at the end of the cycle based on information available during cycle operation. In particular, we use a Gaussian Process to model the relation between cobalt radioisotopes (a certain kind of corrosion product) and radiation dose levels. Gaussian Processes acknowledge the uncertainty in their predictions, a desirable property considering the high-risk nature of the present application. We report experiments on real data gathered from five different power plants in Spain. Results show that these models can be used to estimate future values of radiation dose in a data-driven way. Moreover, tools based on these models are currently in development for application in power plants. The authors from the UAM are funded by the Spanish Ministerio de Ciencia, Innovación y Universidades (MCIU) and Agencia Estatal de Investigación (AEI), and also by the European Regional Development Fund (FEDER in Spanish, ERDF in English), through project RTI2018-098091-B-I00. The work has been conducted in the context of a signed collaboration agreement between AUDIAS-UAM and ENUSA Industrias Avanzadas S.A.
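    As a minimal sketch of the modelling idea (with synthetic data and hypothetical feature names, not the plant data or the deployed tool), the example below fits a Gaussian Process regressor mapping in-cycle cobalt radioisotope levels to end-of-cycle dose and reports the predictive uncertainty alongside the mean.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic stand-in: two features per cycle, e.g. Co-58 and Co-60
# activity levels measured during operation (arbitrary units).
X = rng.uniform(0.2, 2.0, size=(40, 2))
# Synthetic end-of-cycle dose with noise; the real relation is unknown.
y = 1.5 * X[:, 0] + 0.8 * X[:, 1] ** 2 + rng.normal(0.0, 0.05, size=40)

kernel = (ConstantKernel(1.0) * RBF(length_scale=[1.0, 1.0])
          + WhiteKernel(noise_level=1e-2))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X, y)

# Predict the end-of-cycle dose for a new cycle, with an uncertainty
# band that can inform planning of high-exposure maintenance work.
x_new = np.array([[1.0, 0.7]])
mean, std = gp.predict(x_new, return_std=True)
print(f"predicted dose: {mean[0]:.2f} +/- {2 * std[0]:.2f} (2 sigma)")
```

    The white-noise term and the anisotropic length scales are one reasonable kernel choice for noisy, multi-feature plant measurements; the predictive standard deviation is what makes the model's uncertainty explicit for this high-risk application.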

    Minding the gap between secondary school and university

    The renewal of engineering education requires teaching that is more attentive to students' circumstances, which, if known, help to guide them into the future. It is about channelling students towards learning, taking into account the factors related to the acquisition of knowledge and how students can share this knowledge with their teachers. The specific aim of the current study was to examine what the transition from secondary school to university means for students and to introduce changes that reduce the failures it generates. The causes of low grades in the initial phase of university are analysed, and some remedies are then proposed. First, to gather information, student surveys and interview activities, led by an expert, were conducted. Subsequently, compensatory actions for students and teachers were organized by experts. The surveys were designed to provide a self-assessment of new students regarding dedication and performance, and were given to those who failed the first important exam, capturing how they experienced university entrance and their first failure. The responses point to some personal causes of low performance: deficient time organization, impediments to continuous study, and difficulties adapting. Half of the students believe their dedication merits better learning and marks, and they stress the difficulties associated with an insufficient level of secondary education and with the types of exams. This study, framed within the activities dedicated to educational improvement at UPC, highlights the need to implement guidance and accompaniment actions for first-year students.

    A snapshot of cancer-associated thromboembolic disease in 2018-2019: First data from the TESEO prospective registry

    BACKGROUND: The ever-growing complexity of cancer-associated thrombosis (CAT), with new antineoplastic drugs and anticoagulants, distinctive characteristics, and decisions supported by low levels of evidence, justifies this registry. METHOD: TESEO is a prospective registry promoted by the Spanish Society of Medical Oncology to which 34 centers contribute cases. It seeks to provide an epidemiological description of CAT in Spain. RESULTS: Participants (N=939) with CAT diagnosed between July 2018 and December 2019 were recruited. The most frequent diagnoses were advanced colon (21.4%), non-small cell lung (19.2%), and breast (11.1%) cancers, treated with dual-agent chemotherapy (28.4%), monochemotherapy (14.4%), or immune checkpoint inhibitors (3.6%). Half (51%) of the events were unsuspected, albeit only 57.1% were truly asymptomatic. Pulmonary embolism (PE) was recorded in 571 participants (58.3%); in 120/571 (21.0%), there was a concurrent deep venous thrombosis. Most initially received low molecular weight heparin (89.7%). Suspected and unsuspected venous thromboembolism (VTE) had overall survival of 9.9 months (95% CI, 7.3-not computable) and 14.4 months (95% CI, 12.6-not computable), respectively (p=0.00038). Six-month survival was 80.9%, 55.9%, and 55.5% for unsuspected PE, unsuspected PE admitted for another reason, and suspected PE, respectively (p<0.0001). The 12-month cumulative incidence of venous rethrombosis was 7.1% (95% CI, 4.7-10.2) in stage IV vs 3.0% (95% CI, 0.9-7.1) in stages I-III. The 12-month cumulative incidence of major/clinically relevant bleeding was 9.6% (95% CI, 6.1-14.0) in the presence of risk factors. CONCLUSION: CAT continues to be a relevant problem in the era of immunotherapy and targeted therapies. The initial TESEO data highlight the evolution of CAT, with new agents and thrombotic risk factors.