
    Absorption, distribution, metabolism and excretion of selenium following oral administration of elemental selenium nanoparticles or selenite in rats

    A suspension of nanoparticles of BSA-stabilized red amorphous elemental selenium (Se) or an aqueous solution of sodium selenite was administered repeatedly by oral gavage for 28 days at 0.05 mg/kg bw/day (low dose) or 0.5 mg/kg bw/day (high dose) as Se to female rats. Prior to administration, the size distribution of the Se nanoparticles was characterized by dynamic light scattering and transmission electron microscopy, which showed a mean particle diameter of 19 nm and a size range of 10–80 nm. Following administration of the high dose of Se nanoparticles or selenite, the concentration of Se was determined by ICP-MS in liver, kidney, urine, feces, stomach, lungs, and plasma at the µg/g level, and in brain and muscle tissue at the sub-µg/g level. To test whether any elemental Se was present in liver, kidney, or feces, an in situ derivatization selective for elemental Se was performed by treatment with sulfite, which converts elemental Se into the selenosulfate anion. This Se species was selectively and quantitatively determined by anion exchange HPLC with ICP-MS detection. The results showed that elemental Se was present in the livers, kidneys, and feces of animals exposed to low and high doses of elemental Se nanoparticles or selenite, and was also detected in the same samples from control animals. The fraction of Se present as elemental Se in livers and kidneys from the high-dose animals was significantly larger than the corresponding fraction in samples from the low-dose animals or the controls, suggesting that the natural metabolic pathways of Se were saturated at the high dose of elemental Se or selenite, leaving a non-metabolized pool of elemental Se. Both dosage forms of Se were bioavailable, as demonstrated by the blood biomarker selenoprotein P, which was equally up-regulated in the high-dose animals for both dosage forms. Finally, the excretion of Se in urine and its occurrence as Se-methylseleno-N-acetyl-galactosamine and the trimethylselenonium ion demonstrated that both dosage forms were metabolized and excreted. The study showed that both forms of Se were equally absorbed, distributed, metabolized, and excreted, but the detailed fate of the administered elemental Se or selenite in the gastro-intestinal tract of rats remains unclear.

    Synthesizers: A Meta-Framework for Generating and Evaluating High-Fidelity Tabular Synthetic Data

    Synthetic data is widely expected to have a significant impact on data science by enhancing data privacy, reducing biases in datasets, and enabling the scaling of datasets beyond their original size. However, the current landscape of tabular synthetic data generation is fragmented, with numerous frameworks available, only some of which include integrated evaluation modules. synthesizers is a meta-framework that simplifies the process of generating and evaluating tabular synthetic data. It provides a unified platform that lets users select generative models and evaluation tools from open-source implementations in the research field and apply them to datasets of any format. The aim of synthesizers is to consolidate the diverse efforts in tabular synthetic data research, making it more accessible to researchers from different sub-domains, including those with less technical expertise, such as health researchers. This could foster collaboration and increase the use of synthetic data tools, ultimately leading to more effective research outcomes.
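
    To make the plug-in idea concrete, here is a minimal sketch of a registry-based meta-framework design. All names in it (the registries, register, run, and the toy generator and metric) are hypothetical illustrations of the architecture described above, not the actual synthesizers API.

    ```python
    # Minimal sketch of a meta-framework plug-in design. All names here are
    # hypothetical illustrations, NOT the actual `synthesizers` API.
    from typing import Callable, Dict
    import numpy as np
    import pandas as pd

    GENERATORS: Dict[str, Callable] = {}  # wrapped open-source generative models
    EVALUATORS: Dict[str, Callable] = {}  # wrapped evaluation tools

    def register(registry: Dict[str, Callable], name: str) -> Callable:
        """Decorator registering an implementation under a common name."""
        def wrap(fn: Callable) -> Callable:
            registry[name] = fn
            return fn
        return wrap

    @register(GENERATORS, "jitter")
    def jitter(real: pd.DataFrame, n: int) -> pd.DataFrame:
        """Toy generator: resample rows and add noise to numeric columns."""
        synth = real.sample(n, replace=True).reset_index(drop=True)
        num = synth.select_dtypes("number").columns
        synth[num] += np.random.normal(0, 0.1, size=synth[num].shape)
        return synth

    @register(EVALUATORS, "mean_gap")
    def mean_gap(real: pd.DataFrame, synth: pd.DataFrame) -> float:
        """Toy fidelity metric: mean absolute gap between column means."""
        num = real.select_dtypes("number").columns
        return float((real[num].mean() - synth[num].mean()).abs().mean())

    def run(real: pd.DataFrame, generator: str, evaluator: str) -> float:
        """Unified entry point: pick any registered generator and evaluator."""
        synth = GENERATORS[generator](real, n=len(real))
        return EVALUATORS[evaluator](real, synth)
    ```

    The point of the registry pattern is that a new open-source model or metric only needs a thin wrapper to become selectable by name, which is what allows such a framework to consolidate otherwise fragmented tooling.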

    Sharing is CAIRing: Characterizing principles and assessing properties of universal privacy evaluation for synthetic tabular data

    Data sharing is a necessity for innovative progress in many domains, especially in healthcare. However, the ability to share data is hindered by regulations protecting the privacy of natural persons. Synthetic tabular data provides a promising solution to data sharing difficulties but does not inherently guarantee privacy. Moreover, there is a lack of agreement on appropriate methods for assessing the privacy-preserving capabilities of synthetic data, making it difficult to compare results across studies. To the best of our knowledge, this is the first work to identify the properties that constitute good universal privacy evaluation metrics for synthetic tabular data. The goal of universally applicable metrics is to enable comparability across studies and to allow non-technical stakeholders to understand how privacy is protected. We identify four principles for the assessment of metrics: Comparability, Applicability, Interpretability, and Representativeness (CAIR). To quantify and rank the degree to which evaluation metrics conform to the CAIR principles, we design a rubric using a scale of 1–4: each of the four properties is scored on four parameters, yielding 16 dimensions in total. We study the applicability and usefulness of the CAIR principles and rubric by assessing a selection of metrics popular in other studies. The results provide granular insights into the strengths and weaknesses of existing metrics that not only rank the metrics but also highlight areas of potential improvement. We expect that the CAIR principles will foster agreement among researchers and organizations on which universal privacy evaluation metrics are appropriate for synthetic tabular data.
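
    A small sketch of how such a rubric could be tallied. The parameter scores and the simple averaging used here are assumptions for illustration; the abstract does not specify how the 16 dimensions are aggregated.

    ```python
    # Illustrative tally of a CAIR-style rubric: 4 properties x 4 parameters,
    # each scored 1-4 (16 dimensions). The scores and the simple averaging
    # used here are assumptions; the paper's rubric may aggregate differently.
    from statistics import mean

    scores = {  # hypothetical scores for one privacy evaluation metric
        "Comparability":      [3, 4, 2, 3],
        "Applicability":      [4, 4, 3, 4],
        "Interpretability":   [2, 3, 2, 2],
        "Representativeness": [3, 2, 3, 3],
    }
    assert all(len(v) == 4 and all(1 <= s <= 4 for s in v) for v in scores.values())

    per_property = {prop: mean(vals) for prop, vals in scores.items()}
    overall = mean(per_property.values())  # single number used to rank metrics

    for prop, avg in per_property.items():
        print(f"{prop:<20} {avg:.2f}")
    print(f"Overall: {overall:.2f}")
    ```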

    A Dynamic Evaluation Metric for Feature Selection: Feature Selection Dynamic Evaluation Metric

    Expressive evaluation metrics are indispensable for informative experiments in all areas, and while several metrics are established in some areas, in others, such as feature selection, only indirect or otherwise limited evaluation metrics are found. In this paper, we propose a novel evaluation metric that addresses several problems of its predecessors and allows for flexible and reliable evaluation of feature selection algorithms. The proposed metric is a dynamic metric with two properties that can be used to evaluate both the performance and the stability of a feature selection algorithm. We conduct several empirical experiments to illustrate the use of the proposed metric in the successful evaluation of feature selection algorithms, and we provide a comparison and analysis to show the different aspects involved in evaluating them. The results indicate that the proposed metric is successful in carrying out the evaluation task for feature selection algorithms.
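
    The abstract does not define the metric itself, so the sketch below shows a common baseline of the kind such work improves upon: measuring stability as the mean pairwise Jaccard similarity of the feature subsets selected across repeated runs, with performance tracked separately. This is a generic illustration, not the proposed dynamic metric.

    ```python
    # Generic baseline for feature-selection evaluation (NOT the paper's metric):
    # stability = mean pairwise Jaccard similarity of the feature subsets an
    # algorithm selects across repeated runs; performance is tracked separately.
    from itertools import combinations

    def jaccard(a: set, b: set) -> float:
        return len(a & b) / len(a | b) if (a | b) else 1.0

    def stability(selected_subsets: list[set]) -> float:
        """Mean pairwise Jaccard over all runs; 1.0 = identical selections."""
        pairs = list(combinations(selected_subsets, 2))
        return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

    # Three runs of a (hypothetical) selector on resampled data:
    runs = [{"age", "bmi", "bp"}, {"age", "bmi", "chol"}, {"age", "bp", "chol"}]
    print(f"stability = {stability(runs):.2f}")  # 0.50: selections only partly overlap
    ```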

    SynthEval: A Framework for Detailed Utility and Privacy Evaluation of Tabular Synthetic Data

    With the growing demand for synthetic data to address contemporary issues in machine learning, such as data scarcity, data fairness, and data privacy, robust tools for assessing the utility and potential privacy risks of such data become crucial. SynthEval, a novel open-source evaluation framework, distinguishes itself from existing tools by treating categorical and numerical attributes with equal care, without assuming any special kind of preprocessing steps. This makes it applicable to virtually any synthetic dataset of tabular records. Our tool leverages statistical and machine learning techniques to comprehensively evaluate synthetic data fidelity and privacy-preserving integrity. SynthEval integrates a wide selection of metrics that can be used independently or in highly customisable benchmark configurations, and can easily be extended with additional metrics. In this paper, we describe SynthEval and illustrate its versatility with examples. The framework facilitates better benchmarking and more consistent comparisons of model capabilities.
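
    Below is a hedged usage sketch. The import reflects the open-source package named above, but the constructor and method arguments are assumptions for illustration; the released API may differ, so consult the project documentation.

    ```python
    # Hedged usage sketch of SynthEval. Constructor/method arguments below are
    # assumptions for illustration; the released API may differ.
    import pandas as pd
    from syntheval import SynthEval

    real = pd.read_csv("real_records.csv")        # hypothetical file names
    synth = pd.read_csv("synthetic_records.csv")

    evaluator = SynthEval(real)          # mixed categorical/numerical columns,
                                         # no special preprocessing assumed
    results = evaluator.evaluate(synth)  # assumed: runs a default metric benchmark
    print(results)
    ```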

    New colleague or gimmick hurdle? A user-centric scoping review of the barriers and facilitators of robots in hospitals

    Healthcare systems are confronted with a multitude of challenges, including the imperative to enhance accessibility, efficiency, cost-effectiveness, and the quality of healthcare delivery. These challenges are exacerbated by current healthcare personnel shortages, prospects of future shortfalls, insufficient recruitment efforts, the increasing prevalence of chronic diseases, global viral concerns, and ageing populations. To address this escalating demand for healthcare services, healthcare systems are increasingly adopting robotic technology and artificial intelligence (AI), which promise to optimise costs, improve working conditions, and increase the quality of care. This article focuses on deepening our understanding of the barriers and facilitators associated with integrating robotic technologies into hospital environments. To this end, we conducted a scoping literature review to consolidate emerging themes pertaining to the experiences, perspectives, and behaviours of hospital employees as professional users of robots in hospitals. By screening 501 original research articles from Web of Science, we identified and reviewed in full text 40 pertinent user-centric studies of the integration of robots into hospitals. Our review revealed and analysed 14 themes in depth, of which we identified seven as barriers and seven as facilitators. By structuring the barriers and facilitators, we reveal a notable misalignment between them: finding that organisational aspects are at the core of most barriers, we suggest that future research should investigate the dynamics between hospital employees as professional users and the procedures and workflows of hospitals as institutions, as well as the ambivalent role of the anthropomorphisation of hospital robots and emerging issues of privacy and confidentiality raised by increasingly communicative robots. Ultimately, this perspective on the integration of robots in hospitals transcends debates on the capabilities and limits of the robotic technology itself, shedding light on the complexity of integrating new technologies into hospital environments and contributing to an understanding of possible futures in healthcare innovation.

    A systematic review of privacy-preserving techniques for synthetic tabular health data

    The amount of tabular health data being generated is rapidly increasing, which forces regulations to be put in place to ensure the privacy of individuals. However, these regulations restrict how data can be shared, limiting the research that can be conducted. Synthetic Data Generation (SDG) aims to solve that issue by generating data that mimics the statistical properties of real data without privacy concerns. Yet privacy is often assumed to exist in synthetic data without evaluating the model or the data, so it is unclear how well various SDG methods actually preserve privacy. This review aims to uncover how well privacy is preserved in synthetic tabular health data for different SDG methods and how privacy can be explicitly implemented in the SDG process. Relevant literature published between January 1, 2018 and October 31, 2023 was reviewed with a focus on privacy, and the reported results and methods are compared to provide a standard frame of reference for future literature. The review covers 32 identified articles, many of which explicitly implement privacy constraints and all of which evaluate the privacy level. We found that methods for explicitly implementing privacy vary across generative models, and we identified a lack of standardization of privacy evaluation as an overarching theme. Our results show that SDG is a viable approach for ensuring patient confidentiality in tabular data. Still, to establish a solid foundation for future research, standardization of privacy evaluation is needed.
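
    One concrete way privacy can be explicitly implemented in an SDG pipeline, shown as a hedged sketch: perturbing the marginal counts a generator is fitted on with the Laplace mechanism, which gives ε-differential privacy for counting queries (sensitivity 1). This illustrates the general technique only, not any specific method from the reviewed articles.

    ```python
    # Sketch: epsilon-differentially-private marginal counts via the Laplace
    # mechanism. Adding/removing one person changes a count by at most 1
    # (sensitivity 1), so noise ~ Laplace(0, 1/epsilon) suffices. Illustrative
    # only; the reviewed SDG methods implement privacy in model-specific ways.
    import numpy as np

    def dp_counts(values: list, epsilon: float) -> dict:
        """Return noisy category counts satisfying epsilon-DP."""
        rng = np.random.default_rng()
        counts = {}
        for v in set(values):
            noisy = values.count(v) + rng.laplace(loc=0.0, scale=1.0 / epsilon)
            counts[v] = max(0.0, noisy)  # clamping is post-processing, still DP
        return counts

    diagnoses = ["flu", "flu", "asthma", "flu", "copd", "asthma"]
    print(dp_counts(diagnoses, epsilon=1.0))  # a generator would sample from these
    ```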

    Systematic Review of Generative Modelling Tools and Utility Metrics for Fully Synthetic Tabular Data

    Sharing data with third parties is essential for advancing science, but it is becoming increasingly difficult with the rise of data protection regulations, ethical restrictions, and growing fear of misuse. Fully synthetic data, which transcends anonymisation, may be the key to unlocking valuable untapped insights stored away in secured data vaults. This review examines current synthetic data generation methods and how their utility is measured. We found that more traditional generative models, such as Classification and Regression Tree models and Bayesian Networks, remain highly relevant and are still capable of surpassing deep learning alternatives such as Generative Adversarial Networks. However, our findings also reveal the same lack of agreement on evaluation metrics uncovered in earlier reviews, posing a persistent obstacle to advancing the field. We propose a tool for evaluating the utility of synthetic data and illustrate how it can be applied to three synthetic data generation models. By streamlining evaluation and promoting agreement on metrics, researchers can explore novel methods and generate compelling results that will convince data curators and lawmakers to embrace synthetic data. Our review emphasises the potential of synthetic data and highlights the need for greater collaboration and standardisation to unlock its full potential.
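
    For readers unfamiliar with how synthetic-data utility is typically quantified, here is a brief sketch of one widely used measure in this literature, the propensity mean squared error (pMSE): a classifier tries to distinguish real from synthetic rows, and if it cannot, its predicted probabilities hover at the synthetic share of the pooled data and pMSE approaches zero. This is a standard metric from the field, not necessarily the tool proposed in the review.

    ```python
    # pMSE utility sketch: a discriminator tries to tell real from synthetic rows.
    # pMSE = mean((p_i - c)^2), where c is the synthetic share of the pooled data;
    # values near 0 mean the synthetic data is hard to distinguish (high utility).
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    def pmse(real: pd.DataFrame, synth: pd.DataFrame) -> float:
        pooled = pd.concat([real, synth], ignore_index=True)
        labels = np.r_[np.zeros(len(real)), np.ones(len(synth))]
        c = len(synth) / len(pooled)
        model = LogisticRegression(max_iter=1000).fit(pooled, labels)
        p = model.predict_proba(pooled)[:, 1]  # probability "synthetic"
        return float(np.mean((p - c) ** 2))

    rng = np.random.default_rng(0)
    real = pd.DataFrame({"x": rng.normal(size=500), "y": rng.normal(size=500)})
    synth = real + rng.normal(scale=0.05, size=real.shape)  # near-copy: low pMSE
    print(f"pMSE = {pmse(real, synth):.4f}")
    ```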