
    Data Shepherding in Nanotechnology : The Exposure Field Campaign Template

    In this paper, we demonstrate the realization of a pragmatic approach to developing a template for capturing field monitoring data in nanomanufacturing processes. The template serves the fundamental principles that make data Findable, Accessible, Interoperable and Reusable (the FAIR principles), while encouraging others to reuse it. In our case, the data shepherd's (the data guide's) template-creation workflow consists of the following steps: (1) identify relevant stakeholders, (2) distribute questionnaires to capture a general description of the data to be generated, (3) understand the needs and requirements of each stakeholder, (4) communicate simply and interactively with stakeholders to select variables/descriptors, and (5) design the template and annotate its descriptors. We provide an annotated template for capturing exposure field campaign monitoring data, increasing their interoperability, and compare it with existing templates. This paper enables creators of exposure field campaign data to store data in a FAIR way, and helps the scientific community, such as data shepherds, avoid extensive template-creation steps by reusing the pragmatic structure and/or the template proposed herein, in the case of a nanotechnology project (Anticipating Safety Issues at the Design of Nano Product Development, ASINA). Peer reviewed
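    As an illustration of step (5), a minimal sketch of how an annotated descriptor in such a data-capture template could be represented programmatically; all field names and ontology IRIs below are invented for the example and are not the ASINA template itself:

```python
# A minimal sketch (not the ASINA template) of an annotated data-capture
# template: each variable carries a human-readable label, a unit, and an
# ontology link so that captured data stay FAIR. IRIs are assumptions.

from dataclasses import dataclass

@dataclass
class Descriptor:
    name: str          # machine-readable column name
    label: str         # human-readable label shown to data creators
    unit: str          # measurement unit, or "" for text fields
    ontology_iri: str  # semantic annotation for interoperability

TEMPLATE = [
    Descriptor("sampling_location", "Sampling location", "",
               "http://example.org/onto/SamplingLocation"),  # assumed IRI
    Descriptor("particle_number_conc", "Particle number concentration",
               "1/cm^3", "http://example.org/onto/NumberConcentration"),
    Descriptor("sampling_duration", "Sampling duration", "min",
               "http://example.org/onto/Duration"),
]

def validate_row(row: dict) -> list[str]:
    """Return the names of template descriptors missing from a data row."""
    return [d.name for d in TEMPLATE if d.name not in row]

if __name__ == "__main__":
    print(validate_row({"sampling_location": "packing line A"}))
    # -> ['particle_number_conc', 'sampling_duration']
```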

    Assessment of exposure determinants and exposure levels by using stationary concentration measurements and a probabilistic near-field/far-field exposure model

    Funding Information: The authors thank Prof. Paul Hewett (Exposure Assessment Solutions, Inc., Morgantown, WV) for his assistance with revising the probabilistic exposure model parametrization and interpretation of the results. Publisher Copyright: © 2021 Koivisto AJ et al. Background: The Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) regulation requires the establishment of Conditions of Use (CoU) for all exposure scenarios to ensure good communication of safe working practices. Setting CoU requires the risk assessment of all relevant Contributing Scenarios (CSs) in the exposure scenario. A new CS has to be created whenever an Operational Condition (OC) is changed, resulting in an excessive number of exposure assessments. An efficient solution is to quantify OC concentrations and to identify reasonable worst-case scenarios with probabilistic exposure modeling. Methods: Here, we establish CoU for powder pouring during the industrial manufacturing of a paint batch by quantifying OC exposure levels and exposure determinants. The quantification was performed using stationary measurements and a probabilistic Near-Field/Far-Field (NF/FF) exposure model. Work-shift and OC concentration levels were quantified for pouring TiO₂ from big bags and small bags, pouring Micro Mica from small bags, and cleaning. The impact of exposure determinants on the NF concentration level was quantified by (1) assessing the correlation of exposure determinants with the NF exposure level and (2) performing simulations with different OCs. Results: Emission rate, air mixing between NF and FF, and local ventilation were the most relevant exposure determinants affecting NF concentrations. Potentially risky OCs were identified by performing Reasonable Worst Case (RWC) simulations and by comparing the 95th percentile of the exposure distribution with 10% of the occupational exposure limit value (OELV). The CS was shown to be safe except in the RWC scenario (ventilation rate from 0.4 to 1.6 h⁻¹, 100 m³ room, no local ventilation, and NF ventilation of 1.6 m³/min). Conclusions: The CoU assessment was considered to comply with European Chemicals Agency (ECHA) legislation and the EN 689 exposure assessment strategy for testing compliance with OEL values. One RWC scenario would require measurements, since the exposure level was 12.5% of the OELV. Peer reviewed
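    For illustration, a minimal sketch of the steady-state NF/FF two-box model applied probabilistically. At steady state the standard mass balances give C_FF = G/Q and C_NF = G/Q + G/β, where G is the emission rate, Q the room ventilation flow and β the NF/FF air exchange flow; all parameter distributions below are illustrative assumptions, not the study's measured values:

```python
# Sketch of a probabilistic steady-state Near-Field/Far-Field exposure model.
# Steady-state solutions: C_FF = G/Q, C_NF = G/Q + G/beta.
# Parameter distributions are illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

room_volume = 100.0                                 # m^3 (RWC room above)
ach = rng.uniform(0.4, 1.6, n)                      # air changes per hour
Q = ach * room_volume / 60.0                        # m^3/min ventilation flow
beta = rng.uniform(1.6, 4.0, n)                     # m^3/min NF/FF exchange (assumed)
G = rng.lognormal(mean=np.log(0.5), sigma=0.5, size=n)  # mg/min emission (assumed)

c_nf = G / Q + G / beta                             # mg/m^3 near-field concentration

oelv = 10.0                                         # mg/m^3, illustrative OELV
p95 = np.percentile(c_nf, 95)
print(f"NF 95th percentile: {p95:.2f} mg/m^3")
# Decision rule used in the paper: compare the 95th percentile to 10% of OELV.
print("Measurements required" if p95 > 0.1 * oelv else "CS acceptable")
```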

    A roadmap towards safe and sustainable by design nanotechnology: implementation for nano-silver-based antimicrobial textile coatings production by ASINA project

    This report demonstrates a case study within the ASINA project, aimed at instantiating a roadmap with quantitative metrics for Safe(r) and (more) Sustainable by Design (SSbD) options. We begin with a description of ASINA's methodology across the product lifecycle, outlining the quantitative elements within: Physical-Chemical Features (PCFs), Key Decision Factors (KDFs), and Key Performance Indicators (KPIs). Subsequently, we delve into a proposed decision-support tool for implementing the SSbD objectives across several dimensions (functionality, cost, environment, and human health safety) within a broader European context. We then provide an overview of the technical processes involved, including the design rationales, experimental procedures, and tools/models developed within ASINA to deliver nano-silver-based antimicrobial textile coatings. The result is a set of pragmatic, actionable metrics intended to be estimated and assessed in future SSbD applications and adopted in a common SSbD roadmap aligned with the EU's Green Deal objectives. The methodological approach is described transparently and thoroughly to inform similar projects through the integration of KPIs into SSbD and to foster data-driven decision-making. Specific results and project data are beyond the scope of this work, which is to demonstrate the ASINA roadmap and thus foster SSbD-oriented innovation in nanotechnology.
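    One plausible shape of such a decision-support computation, sketched minimally: each design option carries normalised KPIs per SSbD dimension, and a weighted aggregate supports comparison. The weights and KPI values below are invented for illustration and are not ASINA's:

```python
# Minimal sketch of a multi-criteria SSbD score: weighted aggregation of
# KPIs normalised to [0, 1] (1 = best). Weights and values are illustrative.

KPI_WEIGHTS = {"functionality": 0.3, "cost": 0.2,
               "environment": 0.25, "human_health": 0.25}

def ssbd_score(kpis: dict[str, float]) -> float:
    """Weighted aggregate of normalised KPIs for one design option."""
    return sum(KPI_WEIGHTS[k] * v for k, v in kpis.items())

options = {
    "coating_A": {"functionality": 0.9, "cost": 0.6,
                  "environment": 0.5, "human_health": 0.7},
    "coating_B": {"functionality": 0.7, "cost": 0.8,
                  "environment": 0.8, "human_health": 0.8},
}

best = max(options, key=lambda name: ssbd_score(options[name]))
print({name: round(ssbd_score(k), 3) for name, k in options.items()}, "->", best)
```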

    Metadata stewardship in nanosafety research: learning from the past, preparing for an "on-the-fly" FAIR future

    Introduction: Significant progress has been made in terms of best practice in research data management for nanosafety. Some of the underlying approaches to date are, however, overly focussed on the needs of specific research projects or aligned to a single data repository, and this "silo" approach is hampering their general adoption by the broader research community and individual labs. Methods: State-of-the-art data/knowledge collection, curation, management, FAIRification and sharing solutions applied in the nanosafety field are reviewed, focusing on unique features which should be generalised and integrated into a functional FAIRification ecosystem that addresses the needs of both data generators and data (re)users. Results: The development of data capture templates has focussed on standardised single-endpoint Test Guidelines, which does not reflect the complexity of real laboratory processes, where multiple assays are interlinked into an overall study, and where non-standardised assays are developed to address novel research questions and probe mechanistic processes to generate the basis for read-across from one nanomaterial to another. By focussing on the needs of data providers and data users, we identify how existing tools and approaches can be re-framed to enable "on-the-fly" (meta)data definition, data capture, curation and FAIRification that are sufficiently flexible to address the complexity in nanosafety research, yet harmonised enough to facilitate integration of datasets from different sources generated for different research purposes. By mapping the available tools for nanomaterials safety research (including nanomaterials characterisation, non-standard (mechanistic-focussed) methods, measurement principles and experimental setup, environmental fate, and requirements from new research foci such as safe and sustainable by design), a strategy for integration and bridging between silos is presented. The NanoCommons KnowledgeBase has shown how data from different sources can be integrated into a one-stop shop for searching, browsing and accessing data (without copying), and thus how to break the boundaries between data silos. Discussion: The next steps are to generalise the approach by defining a process to build consensus (meta)data standards, develop solutions to make (meta)data more machine actionable (on-the-fly ontology development) and establish a distributed FAIR data ecosystem maintained by the community beyond specific projects. Since other multidisciplinary domains might also struggle with data siloification, the learnings presented here may be transferable to facilitate data sharing within other communities and support harmonization of approaches across disciplines to prepare the ground for cross-domain interoperability. Visit WorldFAIR online at http://worldfair-project.eu. WorldFAIR is funded by the EC HORIZON-WIDERA-2021-ERA-01-41 Coordination and Support Action under Grant Agreement No. 101058393.
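    To make the "on-the-fly", machine-actionable (meta)data idea concrete, a minimal sketch of a JSON-LD-style record in which each experimental field is linked to a vocabulary term at capture time, so curation and FAIRification need not wait for a post-hoc template; the context IRIs and field values are illustrative assumptions:

```python
# Minimal sketch of a machine-actionable metadata record captured "on the fly".
# The @context maps local field names to ontology terms; IRIs are assumptions.

import json

record = {
    "@context": {
        "material": "http://example.org/onto/Nanomaterial",  # assumed IRI
        "assay": "http://example.org/onto/Assay",
        "endpoint": "http://example.org/onto/Endpoint",
    },
    "material": "Ag nanoparticle, 20 nm",
    "assay": "cell viability (MTS)",
    "endpoint": {"name": "EC50", "value": 12.3, "unit": "ug/mL"},
}

print(json.dumps(record, indent=2))  # ready for deposition in a knowledge base
```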

    Health and environmental safety of nanomaterials: O data, where art thou?

    Nanotechnology keeps drawing attention due to the highly tunable properties of nanomaterials in comparison to their conventional bulk counterparts. The growth of nanotechnology, in combination with the digitization era, has led to an increased need for safety-related data. In addition to safety, new data-driven paradigms on safe and sustainable by design materials are stressing the necessity of data even more. Data are a fundamental asset to the scientific community in studying and analysing the entire life-cycle of nanomaterials. Unfortunately, data exist in a scattered fashion, in different sources and formats. To our knowledge, there is no study that surveys the actual structure of the data available in the literature and in databases. The purpose of this review is to display, transparently and comprehensively, to the nanoscience community the datasets readily available for machine learning purposes, making it convenient and more efficient for the next users, such as modellers or data curators, to retrieve information. We systematically recorded the features and descriptors available in the datasets and provide synopsised information on their ranges, forms and metrics in the supplementary material.
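    A minimal sketch of the kind of bookkeeping described above: recording, for each dataset, which descriptors it contains and the range of each numeric descriptor, so that modellers can quickly judge reusability. The descriptor names and values here are invented for illustration:

```python
# Sketch: summarise the descriptors of one dataset (min, max, record count)
# so its coverage can be compared across sources. Values are invented.

import pandas as pd

dataset = pd.DataFrame({
    "core_size_nm": [15, 22, 40, 80],
    "zeta_potential_mV": [-32.0, -18.5, 4.2, 11.0],
    "cell_viability_pct": [92, 75, 48, 30],
})

summary = dataset.agg(["min", "max", "count"]).T
summary.index.name = "descriptor"
print(summary)  # one row per descriptor with its range and count
```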

    A machine learning examination of nanomaterial safety

    Nanotechnology is an emerging technology with enormous potential for innovative applications. The introduction of nanoparticles (NPs) offers significant societal benefits and economic opportunities while posing major challenges to research and regulatory bodies regarding their safety. NPs display high heterogeneity in their physicochemical and quantum-mechanical properties and, as such, in their toxicological effects, narrowing their risk assessment to an ad hoc testing process. Traditional toxicological risk assessment relies heavily on costly, ethically disputed animal testing. One alternative for testing the hazard of NPs is in silico techniques. Given that risk assessment as a subject of academic research is multidisciplinary in character, this thesis provides multidimensional research within the challenge triangle of nanoscience, toxicology and machine learning. Over the last decades, various types of Machine Learning (ML) tools have been developed for predicting the toxicological effects of nanoforms. In this thesis, I first document the work that has been carried out, systematically: we investigate in detail and bookmark the ML methodologies used to predict toxicological outcomes, and provide a review of the sequence of steps involved in implementing a model. Additionally, this thesis records the data used in published studies that predict endpoints and maps the pathways followed, involving biological features in relation to NP exposure, their physicochemical characteristics and the most commonly predicted outcomes. The results, derived from published research of the last decade, are summarized visually, providing prior-based data mining paradigms ready to be used by the nanotoxicology community in computational studies. A bridging of the physicochemical properties of NPs, experimental exposure conditions and in vitro characteristics with the biological effects of NPs at a molecular and cellular level, from transcriptomics studies, is demonstrated. The bridging is achieved by developing and implementing Bayesian Networks with or without data preprocessing. Early-stage nanotoxicity measurements represent a challenge, not least when attempting to predict adverse outcomes, and modeling is critical to understanding the biological effects of exposure to NPs. In this thesis, categories of ML classifiers are compared to investigate their performance in predicting the in vitro toxicity of NPs. Physicochemical properties, toxicological and quantum-mechanical attributes and experimental conditions were used as input variables to predict the toxicity of NPs based on cell viability. Voting, an ensemble meta-classifier, was used to combine base models to optimize the classification prediction of toxicity. To facilitate inter-comparison, a Copeland Index was applied that ranks the classifiers according to their performance and suggests the optimal classifier. In summary, this thesis explores past work in the field, systematically capturing information regarding the data used in computational tools (Chapter 2); demonstrates methodologies and state-of-the-art approaches (Chapter 3); creates an original Bayesian tool that can predict multiple toxicological outcomes at a molecular level from transcriptomics data (Chapter 4); and, finally, develops and demonstrates a compact methodology for researchers to compare and choose the optimal classifiers for their particular data (Chapter 5).
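    As a sketch of the Copeland-style ranking mentioned above: each pair of classifiers is compared on every performance metric, and a classifier's Copeland score is its pairwise wins minus losses. The classifiers and metric values below are invented for illustration, not the thesis results:

```python
# Minimal sketch of a Copeland ranking of classifiers across several metrics.
# A classifier beats another if it wins on more metrics; score = wins - losses.

from itertools import combinations

# classifier -> metric scores (higher is better; values are invented)
scores = {
    "random_forest": {"accuracy": 0.86, "f1": 0.84, "auc": 0.91},
    "svm":           {"accuracy": 0.83, "f1": 0.85, "auc": 0.88},
    "naive_bayes":   {"accuracy": 0.78, "f1": 0.74, "auc": 0.82},
}

copeland = {name: 0 for name in scores}
for a, b in combinations(scores, 2):
    wins_a = sum(scores[a][m] > scores[b][m] for m in scores[a])
    wins_b = sum(scores[b][m] > scores[a][m] for m in scores[b])
    if wins_a > wins_b:
        copeland[a] += 1; copeland[b] -= 1
    elif wins_b > wins_a:
        copeland[b] += 1; copeland[a] -= 1

# Highest Copeland score suggests the optimal classifier for this data.
print(sorted(copeland.items(), key=lambda kv: kv[1], reverse=True))
```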

    Predicting In Vitro Neurotoxicity Induced by Nanoparticles Using Machine Learning

    The practice of non-testing approaches in nanoparticle hazard assessment is necessary to identify and classify potential risks in a cost-effective and timely manner. Machine learning techniques have been applied in the field of nanotoxicology with encouraging results. A neurotoxicity classification model for diverse nanoparticles is presented in this study. A data set compiled from multiple literature sources, consisting of nanoparticle physicochemical properties, exposure conditions and in vitro characteristics, is used to predict cell viability. Pre-processing techniques were applied, such as normalization methods and two supervised instance methods: a synthetic minority over-sampling technique to address biased predictions, and the production of subsamples via bootstrapping. The classification model was developed using random forest, and goodness-of-fit together with additional robustness and predictability metrics was used to evaluate its performance. Information gain analysis identified the exposure dose and duration, toxicological assay, cell type, and zeta potential as the five most important attributes for predicting neurotoxicity in vitro. This is the first tissue-specific machine learning tool for the prediction of neurotoxicity caused by nanoparticles in in vitro systems, and the model performs better than non-tissue-specific models.
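    A minimal sketch in the spirit of the pipeline described: synthetic minority over-sampling (SMOTE) followed by a random-forest classifier, evaluated with cross-validation. It uses scikit-learn and imbalanced-learn, with synthetic data standing in for the literature-derived nanoparticle set:

```python
# Sketch of an imbalanced-classification pipeline: SMOTE + random forest.
# imblearn's Pipeline applies the over-sampler to training folds only,
# avoiding leakage into the evaluation folds.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline

# Synthetic imbalanced data standing in for the compiled literature set.
X, y = make_classification(n_samples=400, n_features=10,
                           weights=[0.8, 0.2], random_state=0)

model = Pipeline([
    ("smote", SMOTE(random_state=0)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
])

print("balanced accuracy:",
      cross_val_score(model, X, y, cv=5, scoring="balanced_accuracy").mean())
```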