The mu problem and sneutrino inflation
We consider sneutrino inflation and post-inflation cosmology in the singlet extension of the MSSM with an approximate Peccei-Quinn (PQ) symmetry, assuming that supersymmetry breaking is mediated by gauge interactions. The PQ symmetry is broken by the intermediate-scale VEVs of two flaton fields, which are determined by the interplay between radiative flaton soft masses and higher-order terms. From the flaton VEVs we obtain the correct mu term and the right-handed (RH) neutrino masses for the see-saw mechanism. We show that the RH sneutrino with a non-minimal gravity coupling drives inflation, thanks to the same flaton coupling that gives rise to the RH neutrino mass. After inflation, the extra vector-like states that are responsible for the radiative breaking of the PQ symmetry result in thermal inflation with the flaton field, solving the gravitino problem caused by a high reheating temperature. Our model predicts the spectral index to be n_s ≃ 0.96 due to the additional e-foldings from thermal inflation. We show that the right dark matter abundance comes from a gravitino with a mass of 100 keV and that successful baryogenesis is possible via Affleck-Dine leptogenesis.
Comment: 27 pages, no figures, To appear in JHE
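For context, an illustrative estimate (not a result quoted from the paper beyond n_s ≃ 0.96): in plateau-type inflation with a non-minimal gravity coupling, the spectral index depends on the number of observable e-foldings N roughly as n_s ≃ 1 - 2/N, so the extra e-foldings contributed by thermal inflation lower the effective N toward ~50 and pull n_s down to about 0.96:

\[
  n_s \simeq 1 - \frac{2}{N}, \qquad
  N \approx 50 \;\Rightarrow\; n_s \approx 1 - \frac{2}{50} = 0.96 .
\]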
Transcriptomics in Toxicogenomics, Part III: Data Modelling for Risk Assessment
Transcriptomics data are relevant to address a number of challenges in Toxicogenomics (TGx). After careful planning of exposure conditions and data preprocessing, TGx data can be used in predictive toxicology, where more advanced modelling techniques are applied. The large volume of molecular profiles produced by omics-based technologies allows the development and application of artificial intelligence (AI) methods in TGx. Indeed, the publicly available omics datasets are constantly increasing, together with a plethora of different methods made available to facilitate their analysis, interpretation, and the generation of accurate and stable predictive models. In this review, we present the state of the art of data modelling applied to transcriptomics data in TGx. We show how benchmark dose (BMD) analysis can be applied to TGx data. We review read-across and adverse outcome pathway (AOP) modelling methodologies. We discuss how network-based approaches can be successfully employed to clarify the mechanism of action (MOA) or to identify specific biomarkers of exposure. We also describe the main AI methodologies applied to TGx data to create predictive classification and regression models, and we address current challenges. Finally, we present a short description of deep learning (DL) and data integration methodologies applied in these contexts. Modelling of TGx data represents a valuable tool for more accurate chemical safety assessment. This review is the third part of a three-article series on Transcriptomics in Toxicogenomics.
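To illustrate how a BMD analysis can be applied to a single gene's dose-response, the following minimal sketch fits a Hill-type curve and solves for the dose at which the fitted curve reaches a benchmark response of one control standard deviation. The model choice, the benchmark-response definition, and the data are assumptions made for the example, not taken from the review.

import numpy as np
from scipy.optimize import curve_fit, brentq

def hill(dose, bottom, top, ec50, h):
    # Four-parameter Hill model: expression as a monotone function of dose.
    return bottom + (top - bottom) * dose**h / (ec50**h + dose**h)

# Synthetic example: log2 expression of a single gene over a dose series.
doses = np.array([0, 0, 0, 0.1, 0.1, 0.1, 1, 1, 1, 10, 10, 10], dtype=float)
expr  = np.array([5.0, 5.1, 4.9, 5.2, 5.3, 5.1, 6.0, 6.2, 5.9, 7.1, 7.0, 7.2])

# Fit the dose-response curve (bounds keep ec50 and the Hill slope positive).
p0 = [expr[doses == 0].mean(), expr.max(), 1.0, 1.0]
bounds = ([-np.inf, -np.inf, 1e-9, 0.1], [np.inf, np.inf, np.inf, 10.0])
params, _ = curve_fit(hill, doses, expr, p0=p0, bounds=bounds)

# Benchmark response: fitted control level plus one control standard deviation.
ctrl_sd = expr[doses == 0].std(ddof=1)
bmr = hill(0.0, *params) + ctrl_sd

# Benchmark dose: the dose at which the fitted curve crosses the benchmark response.
bmd = brentq(lambda d: hill(d, *params) - bmr, 1e-9, doses.max())
print(f"Estimated BMD: {bmd:.3g} (same units as the dose axis)")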
Transcriptomics in Toxicogenomics, Part I: Experimental Design, Technologies, Publicly Available Data, and Regulatory Aspects
The starting point of successful hazard assessment is the generation of unbiased and trustworthy data. Conventional toxicity testing deals with extensive observations of phenotypic endpoints in vivo and complementary in vitro models. The increasing development of novel materials and chemical compounds dictates the need for a better understanding of the molecular changes occurring in exposed biological systems. Transcriptomics enables the exploration of organisms' responses to environmental, chemical, and physical agents by observing the molecular alterations in more detail. Toxicogenomics (TGx) integrates classical toxicology with omics assays, thus allowing the characterization of the mechanism of action (MOA) of chemical compounds, novel small molecules, and engineered nanomaterials (ENMs). Lack of standardization in data generation and analysis currently hampers the full exploitation of toxicogenomics-based evidence in risk assessment. To fill this gap, TGx methods need to take into account appropriate experimental design and possible pitfalls in the transcriptomic analyses, as well as data generation and sharing that adhere to the FAIR (Findable, Accessible, Interoperable, and Reusable) principles. In this review, we summarize the recent advancements in the design and analysis of DNA microarray, RNA sequencing (RNA-Seq), and single-cell RNA-Seq (scRNA-Seq) data. We provide guidelines on exposure time, dose and complex endpoint selection, sample quality considerations, and sample randomization. Furthermore, we summarize publicly available data resources and highlight applications of TGx data to understand and predict chemical toxicity potential. Additionally, we discuss the efforts to implement TGx into regulatory decision making to promote alternative methods for risk assessment and to support the 3R (reduction, refinement, and replacement) concept. This review is the first part of a three-article series on Transcriptomics in Toxicogenomics. These initial considerations on Experimental Design, Technologies, Publicly Available Data, and Regulatory Aspects are the starting point for the rigorous and reliable data preprocessing and modelling described in the second and third parts of the review series.
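The guidance on sample randomization can be made concrete with a short, hypothetical sketch: assign each dose group's replicates to processing batches in randomized order, so that no batch contains only one exposure condition and batch effects are not confounded with dose. The design (4 dose groups, 6 replicates, 4 batches) and the code are illustrative assumptions, not taken from the paper.

import random

rng = random.Random(42)  # fixed seed so the randomization is reproducible and auditable

dose_groups = ("control", "low", "mid", "high")
samples = [f"{dose}_rep{r}" for dose in dose_groups for r in range(1, 7)]  # 24 samples
n_batches = 4

batches = {b: [] for b in range(1, n_batches + 1)}
for dose in dose_groups:
    group = [s for s in samples if s.startswith(dose)]
    rng.shuffle(group)                              # randomize replicate order within each dose group
    for i, sample in enumerate(group):
        batches[i % n_batches + 1].append(sample)   # spread the dose group evenly over the batches

for b, members in batches.items():
    print(f"batch {b}: {members}")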
Jejunal Variceal Bleeding Successfully Treated with Percutaneous Coil Embolization
A 52-yr-old male with alcoholic liver cirrhosis was hospitalized for hematochezia. He had undergone small-bowel resection due to trauma 15 yr previously. Esophagogastroduodenoscopy showed grade 1 esophageal varices without bleeding. No bleeding lesion was seen on colonoscopy, but capsule endoscopy showed suspicious bleeding from angiodysplasia in the small bowel. After 2 weeks of conservative treatment, the hematochezia stopped. However, 1 week later, the patient was re-admitted with hematochezia and a hemoglobin level of 5.5 g/dL. Capsule endoscopy was performed again and showed active bleeding in the mid-jejunum. Abdominal computed tomography revealed a varix in the jejunal branch of the superior mesenteric vein. A direct portogram performed via the transhepatic route showed portosystemic collaterals at the distal jejunum. The patient underwent coil embolization of the superior mesenteric vein just above the portosystemic collaterals and was subsequently discharged without re-bleeding. At 8 months after discharge, his condition has remained stable without further bleeding episodes.
Transcriptomics in Toxicogenomics, Part II: Preprocessing and Differential Expression Analysis for High Quality Data
Preprocessing of transcriptomics data plays a pivotal role in the development of toxicogenomics-driven tools for chemical toxicity assessment. The generation and exploitation of large volumes of molecular profiles, following an appropriate experimental design, allows the employment of toxicogenomics (TGx) approaches for a thorough characterisation of the mechanism of action (MOA) of different compounds. To date, a plethora of data preprocessing methodologies have been suggested. However, in most cases, building the optimal analytical workflow is not straightforward. A careful selection of the right tools must be carried out, since it will affect the downstream analyses and modelling approaches. Transcriptomics data preprocessing spans multiple steps, such as quality checking, filtering, normalization, and batch effect detection and correction. Currently, there is a lack of standard guidelines for data preprocessing in the TGx field. Defining the optimal tools and procedures to be employed in transcriptomics data preprocessing will lead to the generation of homogeneous and unbiased data, allowing the development of more reliable, robust and accurate predictive models. In this review, we outline methods for the preprocessing of three main transcriptomic technologies: microarray, bulk RNA-Sequencing (RNA-Seq), and single-cell RNA-Sequencing (scRNA-Seq). Moreover, we discuss the most common methods for identifying differentially expressed genes and for performing functional enrichment analysis. This review is the second part of a three-article series on Transcriptomics in Toxicogenomics.
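To make the preprocessing steps concrete, here is a deliberately minimal sketch on a synthetic count matrix: low-count filtering, library-size normalization to log2 counts-per-million, a per-gene Welch t-test between two groups, and Benjamini-Hochberg correction. It is not the workflow discussed in the review; real analyses would normally rely on dedicated packages (e.g. limma, edgeR, or DESeq2 in R) rather than plain per-gene t-tests.

import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
counts = rng.negative_binomial(n=10, p=0.1, size=(2000, 6))  # genes x samples (synthetic)
group = np.array([0, 0, 0, 1, 1, 1])                         # control vs exposed

# 1) Filter genes with consistently low counts.
keep = (counts >= 10).sum(axis=1) >= 3
counts = counts[keep]

# 2) Library-size normalization: log2 counts-per-million with a pseudocount.
lib_size = counts.sum(axis=0)
logcpm = np.log2(counts / lib_size * 1e6 + 1.0)

# 3) Differential expression: Welch t-test per gene, exposed vs control.
t, p = stats.ttest_ind(logcpm[:, group == 1], logcpm[:, group == 0], axis=1, equal_var=False)

# 4) Multiple-testing correction (Benjamini-Hochberg FDR).
reject, p_adj, _, _ = multipletests(p, alpha=0.05, method="fdr_bh")
print(f"{counts.shape[0]} genes tested, {reject.sum()} flagged at FDR < 0.05")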
Physical Passaging of Embryoid Bodies Generated from Human Pluripotent Stem Cells
Spherical three-dimensional cell aggregates called embryoid bodies (EBs) have been widely used in in vitro differentiation protocols for human pluripotent stem cells, including human embryonic stem cells (hESCs) and human induced pluripotent stem cells (hiPSCs). Recent studies highlight new devices and techniques for human EB (hEB) formation and expansion, but do not address the passaging or subculture process. Here, we provide evidence that simple periodic passaging markedly improved hEB culture conditions and thus allowed the size-controlled mass production of hEBs derived from both hESCs and hiPSCs. hEBs maintained in prolonged suspension culture without passaging (>2 weeks) showed a progressive decrease in cell growth and proliferation and an increase in apoptosis compared with 7-day-old hEBs. However, when serially passaged in suspension, hEB cell populations increased significantly in number while maintaining normal rates of cell proliferation and apoptosis and their differentiation potential. Uniform-sized hEBs produced by manual passaging at a 1:4 split ratio have been successfully maintained for over 20 continuous passages. This passaging method, which is simple, readily expandable, and reproducible, could be a powerful tool for developing a robust and scalable in vitro differentiation system for human pluripotent stem cells.
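As a rough back-of-the-envelope figure (not stated in the abstract), serial passaging at a fixed 1:4 split compounds geometrically, so the theoretical cumulative expansion after p passages is 4^p, assuming every passage achieves the full split:

\[
  \text{expansion after } p \text{ passages} = 4^{p}, \qquad
  4^{20} = 2^{40} \approx 1.1 \times 10^{12}.
\]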