    Constraining work fluctuations of non-Hermitian dynamics across the exceptional point of a superconducting qubit

    Thermodynamics constrains changes to the energy of a system, both deliberate and random, via its first and second laws. When the system is not in equilibrium, fluctuation theorems such as the Jarzynski equality further restrict the distributions of deliberate work done. Such fluctuation theorems have been experimentally verified in small, non-equilibrium quantum systems undergoing unitary or decohering dynamics. Yet, their validity in systems governed by a non-Hermitian Hamiltonian has long been contentious, due to the false premise of the Hamiltonian's dual and equivalent roles in dynamics and energetics. Here we show that work fluctuations in a non-Hermitian qubit obey the Jarzynski equality even if its Hamiltonian has complex or purely imaginary eigenvalues. With post-selection on a dissipative superconducting circuit undergoing a cyclic parameter sweep, we experimentally quantify the work distribution using projective energy measurements and show that the fate of the Jarzynski equality is determined by the parity-time symmetry of, and the energetics that result from, the corresponding non-Hermitian, Floquet Hamiltonian. By distinguishing the energetics from non-Hermitian dynamics, our results provide the recipe for investigating the non-equilibrium quantum thermodynamics of such open systems. Comment: 7 pages, 5 figures.
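
    As context, the Jarzynski equality invoked above relates the distribution of work W performed during a non-equilibrium protocol to the equilibrium free-energy difference \Delta F at inverse temperature \beta. In its standard (Hermitian, unitary) form it reads

        \langle e^{-\beta W} \rangle = e^{-\beta \Delta F}

    and for a cyclic parameter sweep, where \Delta F = 0, it reduces to \langle e^{-\beta W} \rangle = 1. This is only the textbook statement, given here for orientation; the abstract's point is that the same equality can survive for a non-Hermitian, Floquet Hamiltonian once the energetics are properly distinguished from the non-Hermitian dynamics.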

    Transcriptomics in Toxicogenomics, Part III: Data Modelling for Risk Assessment

    Transcriptomics data are relevant for addressing a number of challenges in Toxicogenomics (TGx). After careful planning of exposure conditions and data preprocessing, TGx data can be used in predictive toxicology, where more advanced modelling techniques are applied. The large volume of molecular profiles produced by omics-based technologies allows the development and application of artificial intelligence (AI) methods in TGx. Indeed, publicly available omics datasets are constantly growing, together with a plethora of methods made available to facilitate their analysis and interpretation and the generation of accurate and stable predictive models. In this review, we present the state of the art of data modelling applied to transcriptomics data in TGx. We show how benchmark dose (BMD) analysis can be applied to TGx data. We review read-across and adverse outcome pathway (AOP) modelling methodologies. We discuss how network-based approaches can be successfully employed to clarify the mechanism of action (MOA) or to identify specific biomarkers of exposure. We also describe the main AI methodologies applied to TGx data to create predictive classification and regression models, and we address current challenges. Finally, we present a short description of deep learning (DL) and data integration methodologies applied in these contexts. Modelling of TGx data represents a valuable tool for more accurate chemical safety assessment. This review is the third part of a three-article series on Transcriptomics in Toxicogenomics.
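
    As a toy illustration of the benchmark dose (BMD) analysis mentioned above, the sketch below fits a dose-response curve to hypothetical expression data and reads off the dose at a chosen benchmark response. The four-parameter Hill model, the example data, and the 10% benchmark response are illustrative assumptions, not the specific BMD workflow recommended in the review.

        # Sketch: benchmark dose (BMD) estimation from a fitted dose-response curve.
        # The Hill model, the toy data and the 10% benchmark response (BMR) are
        # illustrative assumptions, not the TGx pipeline described in the review.
        import numpy as np
        from scipy.optimize import curve_fit, brentq

        def hill(dose, bottom, top, ec50, n):
            """Four-parameter Hill dose-response model."""
            return bottom + (top - bottom) * dose**n / (ec50**n + dose**n)

        doses = np.array([0.0, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])      # hypothetical doses
        resp = np.array([1.00, 1.02, 1.10, 1.35, 1.70, 1.92, 1.98])  # hypothetical fold changes

        params, _ = curve_fit(hill, doses, resp, p0=[1.0, 2.0, 1.0, 1.0],
                              bounds=([0.5, 1.0, 1e-3, 0.5], [1.5, 3.0, 100.0, 4.0]))
        bottom, top, ec50, n = params

        # BMD = dose at which the fitted response exceeds baseline by the BMR (here 10%)
        bmr_level = bottom * 1.10
        bmd = brentq(lambda d: hill(d, *params) - bmr_level, 1e-6, doses.max())
        print(f"Estimated BMD at a 10% benchmark response: {bmd:.3f}")

    In practice, dedicated BMD tools fit several candidate models and report lower confidence bounds on the BMD (BMDL), which this sketch omits.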

    Transcriptomics in Toxicogenomics, Part I: Experimental Design, Technologies, Publicly Available Data, and Regulatory Aspects

    The starting point of successful hazard assessment is the generation of unbiased and trustworthy data. Conventional toxicity testing relies on extensive observations of phenotypic endpoints in vivo and complementing in vitro models. The increasing development of novel materials and chemical compounds dictates the need for a better understanding of the molecular changes occurring in exposed biological systems. Transcriptomics enables the exploration of organisms’ responses to environmental, chemical, and physical agents by observing the molecular alterations in more detail. Toxicogenomics integrates classical toxicology with omics assays, thus allowing the characterization of the mechanism of action (MOA) of chemical compounds, novel small molecules, and engineered nanomaterials (ENMs). A lack of standardization in data generation and analysis currently hampers the full exploitation of toxicogenomics-based evidence in risk assessment. To fill this gap, TGx methods need to take into account appropriate experimental design and possible pitfalls in the transcriptomic analyses, as well as data generation and sharing that adhere to the FAIR (Findable, Accessible, Interoperable, and Reusable) principles. In this review, we summarize the recent advancements in the design and analysis of DNA microarray, RNA sequencing (RNA-Seq), and single-cell RNA-Seq (scRNA-Seq) data. We provide guidelines on exposure time, dose and complex endpoint selection, sample quality considerations, and sample randomization. Furthermore, we summarize publicly available data resources and highlight applications of TGx data to understand and predict chemical toxicity potential. Additionally, we discuss the efforts to implement TGx into regulatory decision making to promote alternative methods for risk assessment and to support the 3R (reduction, refinement, and replacement) concept. This review is the first part of a three-article series on Transcriptomics in Toxicogenomics. These initial considerations on experimental design, technologies, publicly available data, and regulatory aspects are the starting point for the rigorous and reliable data preprocessing and modelling described in the second and third parts of the review series.
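
    As a small, hypothetical illustration of the sample-randomization guidance above, the sketch below shuffles a dose-by-time design across processing batches so that exposure groups are not systematically confounded with batch or run order; the group labels, replicate count, and batch size are assumptions made for the example.

        # Sketch: randomising samples across processing batches so that dose and
        # time point are not systematically confounded with batch or run order.
        # Group labels, replicate count and batch size are hypothetical.
        import random
        from itertools import product

        random.seed(42)  # fixed seed so the example layout is reproducible

        doses = ["control", "low", "mid", "high"]
        times = ["6h", "24h"]
        replicates = 3
        batch_size = 8

        samples = [f"{d}_{t}_rep{r}"
                   for d, t in product(doses, times)
                   for r in range(1, replicates + 1)]
        random.shuffle(samples)  # break the association between group and processing order

        batches = [samples[i:i + batch_size] for i in range(0, len(samples), batch_size)]
        for i, batch in enumerate(batches, start=1):
            print(f"batch {i}: {', '.join(batch)}")

    A fully blocked design would additionally constrain each batch to contain every exposure group; a plain shuffle only avoids systematic confounding on average.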

    Transcriptomics in Toxicogenomics, Part II: Preprocessing and Differential Expression Analysis for High Quality Data

    Preprocessing of transcriptomics data plays a pivotal role in the development of toxicogenomics-driven tools for chemical toxicity assessment. The generation and exploitation of large volumes of molecular profiles, following an appropriate experimental design, allows the employment of toxicogenomics (TGx) approaches for a thorough characterisation of the mechanism of action (MOA) of different compounds. To date, a plethora of data preprocessing methodologies have been suggested. However, in most cases, building the optimal analytical workflow is not straightforward. The right tools must be selected carefully, since the choice affects the downstream analyses and modelling approaches. Transcriptomics data preprocessing spans multiple steps, such as quality checks, filtering, normalization, and batch effect detection and correction. Currently, there is a lack of standard guidelines for data preprocessing in the TGx field. Defining the optimal tools and procedures to be employed in transcriptomics data preprocessing will lead to the generation of homogeneous and unbiased data, allowing the development of more reliable, robust and accurate predictive models. In this review, we outline methods for the preprocessing of three main transcriptomic technologies: microarray, bulk RNA-Sequencing (RNA-Seq), and single-cell RNA-Sequencing (scRNA-Seq). Moreover, we discuss the most common methods for identifying differentially expressed genes and performing functional enrichment analysis. This review is the second part of a three-article series on Transcriptomics in Toxicogenomics.
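
    To make the steps listed above concrete, the sketch below runs a deliberately minimal bulk RNA-Seq pass on simulated counts: low-count filtering, counts-per-million normalisation with a log transform, a per-gene test between two groups, and Benjamini-Hochberg correction. The thresholds and the per-gene Welch t-test are illustrative stand-ins, not the dedicated tools the review surveys.

        # Sketch: minimal bulk RNA-Seq preprocessing and differential expression on
        # simulated counts. Thresholds and the per-gene Welch t-test are illustrative
        # stand-ins for the dedicated tools surveyed in the review.
        import numpy as np
        from scipy import stats
        from statsmodels.stats.multitest import multipletests

        rng = np.random.default_rng(0)
        counts = rng.negative_binomial(5, 0.3, size=(1000, 6))  # genes x samples, toy data
        groups = np.array([0, 0, 0, 1, 1, 1])                   # control vs exposed

        # Abundance filter: drop genes with very low total counts across samples
        counts = counts[counts.sum(axis=1) >= 10]

        # Library-size normalisation to counts-per-million, then log2 transform
        cpm = counts / counts.sum(axis=0, keepdims=True) * 1e6
        logcpm = np.log2(cpm + 1)

        # Per-gene Welch t-test between groups, with Benjamini-Hochberg FDR correction
        pvals = np.array([stats.ttest_ind(g[groups == 0], g[groups == 1],
                                          equal_var=False).pvalue
                          for g in logcpm])
        _, padj, _, _ = multipletests(pvals, method="fdr_bh")
        print(f"{(padj < 0.05).sum()} genes called differentially expressed (toy data)")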

    Prevalence and Risk Factors of Porcine Cysticercosis in Angónia District, Mozambique

    Taenia solium cysticercosis is an important zoonosis in many developing countries. Cysticercosis poses a serious public health risk and causes sizeable economic losses to pig production. Because data on the epidemiology of porcine cysticercosis in Mozambique are scarce, the present study was conducted to determine its prevalence and risk factors. A cross-sectional survey was carried out in 11 villages in Angónia district, Tete province, in northwestern Mozambique. Between September and November 2007, a total of 661 pigs were tested serologically and examined by tongue inspection. Serum samples were tested for the presence of circulating parasite antigen using a monoclonal antibody-based sandwich enzyme-linked immunosorbent assay (Ag-ELISA). In addition, a questionnaire survey collecting information on pig production, occurrence and transmission of porcine cysticercosis, risk factors, and awareness of porcine cysticercosis was conducted in the selected households from which pigs were sampled. Two hundred thirty-one samples (34.9%) were found positive by the Ag-ELISA, while tongue inspection of the same animals detected cysticerci in 84 pigs (12.7%). Increasing age (OR = 1.63; 95% CI = 1.13–2.37) and a free-range pig husbandry system (OR = 3.81; 95% CI = 2.08–7.06) were important risk factors for porcine cysticercosis in the district. The present findings indicate that porcine cysticercosis is endemic in the region and that increasing pig age and pig husbandry practices contribute significantly to its transmission. Further epidemiological studies on the prevalence and transmission of porcine cysticercosis in rural communities in Mozambique are needed to enable the collection of more baseline data and the implementation of effective control strategies within the country.
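
    As a worked reminder of the arithmetic behind odds ratios such as those quoted above, the sketch below computes an odds ratio and its 95% confidence interval from a 2x2 exposure-by-outcome table; the counts are hypothetical and are not the survey's data.

        # Sketch: odds ratio and 95% confidence interval from a 2x2 table of
        # husbandry system (exposure) versus cysticercosis status (outcome).
        # The counts are hypothetical, not the data from the Angónia survey.
        import math

        a, b = 50, 100   # free-range pigs:  Ag-ELISA positive, negative
        c, d = 30, 180   # confined pigs:    Ag-ELISA positive, negative

        odds_ratio = (a * d) / (b * c)
        se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
        ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
        print(f"OR = {odds_ratio:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")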

    A FAIR guide for data providers to maximise sharing of human genomic data

    It is generally acknowledged that data sharing is critical for the reproducibility and progress of human genomic research. Each successful sharing transaction involves a data exchange between a data consumer and a data provider. Providers of human genomic data (e.g., publicly or privately funded repositories and data archives) fulfil their social contract with data donors when their shareable data conform to the FAIR (findable, accessible, interoperable, reusable) principles. Based on our experience with Repositive (https://repositive.io), a leading discovery platform cataloguing all shared human genomic datasets, we propose guidelines for data providers wishing to maximise the FAIRness of their shared data.
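
    As a toy illustration of what FAIR-conformant shareable data can mean in practice, the sketch below builds a minimal machine-readable record for a dataset, with fields touching each principle; the field names are illustrative and are not the schema proposed by the authors or used by Repositive.

        # Sketch: a minimal machine-readable dataset record with fields touching each
        # FAIR principle. Field names are illustrative, not any specific metadata schema.
        import json

        record = {
            "identifier": "doi:10.1234/example-dataset",      # Findable: persistent identifier
            "title": "Example human genomic dataset",
            "keywords": ["human", "genomics", "WGS"],          # Findable: rich metadata
            "access_url": "https://example.org/datasets/1",    # Accessible: standard protocol
            "access_conditions": "managed access via a data access committee",
            "file_format": "CRAM",                             # Interoperable: open, documented format
            "metadata_standard": "schema.org/Dataset",         # Interoperable: shared vocabulary
            "usage_terms": "data-use agreement required",      # Reusable: explicit conditions
            "provenance": "sequencing centre, 2016 release",   # Reusable: origin recorded
        }
        print(json.dumps(record, indent=2))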

    Electrostatic free energy landscapes for nucleic acid helix assembly

    Metal ions are crucial for nucleic acid folding. Using free energy landscapes, we investigate the detailed mechanism of ion-induced collapse for a paradigm system: loop-tethered short DNA helices. We find that Na+ and Mg2+ play distinct roles in helix–helix assembly. High [Na+] (>0.3 M) reduces the helix–helix electrostatic repulsion and leads to a disordered packing of helices. In contrast, Mg2+ at concentrations above 1 mM is predicted to induce helix–helix attraction, resulting in a more compact and ordered helix–helix packing; Mg2+ is much more efficient at causing nucleic acid compaction. In addition, the free energy landscape shows that the tethering loops between the helices also play a significant role. A flexible loop, such as a neutral loop or a polynucleotide loop at high salt concentration, promotes close approach of the helices in order to gain loop entropy. A rigid loop, such as a polynucleotide loop at low salt concentration, instead tends to de-compact the helices. A polynucleotide loop therefore significantly sharpens the ion-induced compaction transition. Moreover, we find that a larger number of helices in the system, or a smaller radius of the divalent ions, causes a more abrupt compaction transition and a more compact state at high ion concentration, and the ion size effect becomes more pronounced as the number of helices is increased.
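
    As an assumption-laden aside on the Na+ versus Mg2+ comparison above, the sketch below computes mean-field Debye screening lengths at the two quoted concentrations (0.3 M NaCl and 1 mM MgCl2). The much longer screening length at 1 mM MgCl2 underlines how efficiently the divalent ion drives compaction relative to what simple charge screening alone would suggest; the paper's own analysis rests on free energy landscapes, not on this estimate.

        # Sketch: mean-field Debye screening lengths for the two salt conditions quoted
        # above (0.3 M NaCl vs 1 mM MgCl2), as a back-of-envelope aside. The paper's
        # analysis uses free energy landscapes, not this simple estimate.
        import math

        E_CHARGE = 1.602176634e-19           # elementary charge, C
        K_B = 1.380649e-23                   # Boltzmann constant, J/K
        EPS_WATER = 78.5 * 8.8541878128e-12  # permittivity of water near 25 C, F/m
        N_A = 6.02214076e23                  # Avogadro constant, 1/mol
        T = 298.15                           # temperature, K

        def debye_length_nm(ions):
            """ions: list of (molar concentration, valence) for every ionic species."""
            # sum over species of number density times z^2, in 1/m^3
            weighted_density = sum(1000.0 * N_A * c * z**2 for c, z in ions)
            return math.sqrt(EPS_WATER * K_B * T / (E_CHARGE**2 * weighted_density)) * 1e9

        # 0.3 M NaCl dissociates to 0.3 M Na+ and 0.3 M Cl-;
        # 1 mM MgCl2 dissociates to 1 mM Mg2+ and 2 mM Cl-.
        print(f"0.3 M NaCl : {debye_length_nm([(0.3, 1), (0.3, -1)]):.2f} nm")
        print(f"1 mM MgCl2 : {debye_length_nm([(0.001, 2), (0.002, -1)]):.2f} nm")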