    The context of gene expression regulation

    Recent advances in sequencing technologies have uncovered a world of RNAs that do not code for proteins, known as non-coding RNAs, which play important roles in gene regulation. Along with histone modifications and transcription factors, non-coding RNA is part of a layer of transcriptional control on top of the DNA code. This layer of components and their interactions enables (or disables) the modulation of three-dimensional chromatin folding to create a context for transcriptional regulation that underlies cell-specific transcription. In this perspective, we propose a structural and functional hierarchy in which the DNA code, proteins, and non-coding RNAs act as context creators to fold chromosomes and regulate genes.

    Ultra-Stretchable Interconnects for High-Density Stretchable Electronics

    The exciting field of stretchable electronics (SE) promises numerous novel applications, particularly in-body and medical diagnostics devices. However, future advanced SE miniature devices will require high-density, extremely stretchable interconnects with micron-scale footprints, which calls for proven, standardized (complementary metal-oxide semiconductor (CMOS)-type) process recipes using bulk integrated circuit (IC) microfabrication tools and fine-pitch photolithography patterning. Here, we address this combined challenge of microfabrication with extreme stretchability for high-density SE devices by introducing CMOS-enabled, free-standing, miniaturized interconnect structures that fully exploit their 3D kinematic freedom through an interplay of buckling, torsion, and bending to maximize stretchability. Integration with standard CMOS-type batch processing is assured by utilizing the Flex-to-Rigid (F2R) post-processing technology to make the back-end-of-line interconnect structures free-standing, thus enabling the routine microfabrication of highly stretchable interconnects. The performance and reproducibility of these free-standing structures are promising: an elastic stretch beyond 2000% and an ultimate (plastic) stretch beyond 3000%, with 10 million cycles at 1000% stretch with <1% resistance change. This generic technology provides a new route to exciting highly stretchable miniature devices.

    Hi-C 3.0: Improved Protocol for Genome-Wide Chromosome Conformation Capture

    The intricate folding of chromatin enables living organisms to store genomic material in an extremely small volume while facilitating proper cell function. Hi-C is a chromosome conformation capture (3C)-based technology to detect pair-wise chromatin interactions genome-wide, and has become a benchmark tool to study genome organization. In Hi-C, chromatin conformation is first captured by chemical cross-linking of cells. Cells are then lysed and subjected to restriction enzyme digestion, before the ends of the resulting fragments are marked with biotin. Fragments within close 3D proximity are ligated, and the biotin label is used to selectively enrich for ligated junctions. Finally, isolated ligation products are prepared for high-throughput sequencing, which enables the mapping of pair-wise chromatin interactions genome-wide. Over the past decade, next-generation sequencing has become cheaper and easier to perform, enabling more interactions to be sampled to obtain higher resolution in chromatin interaction maps. Here, we provide an in-depth guide to performing an up-to-date Hi-C procedure on mammalian cell lines. These protocols include recent improvements that increase the resolution potential of the assay, namely by enhancing cross-linking and using a restriction enzyme cocktail. These improvements result in a versatile Hi-C procedure that enables the detection of genome folding features at a wide range of distances.
    Basic Protocol 1: Fixation of nuclear conformation
    Basic Protocol 2: Chromosome conformation capture
    Basic Protocol 3: Hi-C sequencing library preparation
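
    The end point of the procedure, detecting pair-wise interactions genome-wide, is usually summarized as a binned contact matrix. Below is a minimal sketch of that binning step, assuming read pairs have already been aligned to genomic coordinates; the bin size, chromosome length, and function name are illustrative and not part of the protocol itself.

    ```python
    # Minimal sketch: binning mapped Hi-C read pairs into a genome-wide
    # contact matrix. Coordinates, bin size, and names are illustrative.
    import numpy as np

    def contact_matrix(pairs, chrom_length, bin_size=100_000):
        """Count pair-wise interactions in fixed-size genomic bins."""
        n_bins = chrom_length // bin_size + 1
        matrix = np.zeros((n_bins, n_bins), dtype=np.int64)
        for pos_a, pos_b in pairs:       # one ligation junction per pair
            i, j = pos_a // bin_size, pos_b // bin_size
            matrix[i, j] += 1
            if i != j:                   # keep the matrix symmetric
                matrix[j, i] += 1
        return matrix

    # Example: three read pairs on a hypothetical 1 Mb chromosome
    pairs = [(120_000, 480_000), (130_000, 470_000), (900_000, 910_000)]
    print(contact_matrix(pairs, chrom_length=1_000_000))
    ```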

    Transformation and integration of heterogeneous health data in a privacy-preserving distributed learning infrastructure

    Problem statement: A growing volume and variety of personal health data are being collected by different entities, such as healthcare providers, insurance companies, and wearable device manufacturers. Combining heterogeneous health data offers unprecedented opportunities to augment our understanding of human health and disease. However, a major challenge to research lies in the difficulty of accessing and analyzing health data that are dispersed in format (e.g., CSV, XML), sources (e.g., medical records, laboratory data), representation (unstructured, structured), and governance (e.g., data collection and maintenance) [2]. Such considerations are crucial when we link and use personal health data across multiple legal entities with different data governance and privacy concerns.
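
    To make the format heterogeneity concrete, the sketch below maps records from a CSV source and an XML source onto one common representation before linkage; the field names and file layouts are hypothetical and not taken from the paper's infrastructure.

    ```python
    # Minimal sketch: normalizing heterogeneous records (CSV and XML)
    # into one common representation before linkage and analysis.
    # Field names and file layouts are hypothetical.
    import csv
    import xml.etree.ElementTree as ET

    def from_csv(path):
        """Yield records from a CSV source with 'id' and 'value' columns."""
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                yield {"patient_id": row["id"],
                       "measurement": float(row["value"])}

    def from_xml(path):
        """Yield records from <record><id/><value/></record> elements."""
        for rec in ET.parse(path).getroot().iter("record"):
            yield {"patient_id": rec.findtext("id"),
                   "measurement": float(rec.findtext("value"))}

    def combine(*streams):
        # Downstream code sees one schema regardless of source format.
        for stream in streams:
            yield from stream
    ```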

    Annotation of existing databases using Semantic Web technologies:Making data more FAIR

    Making data FAIR is an elaborate task. Hospitals and/or departments have to invest in technologies that are usually unfamiliar to them, and often do not have the resources to make data FAIR. Our work aims to provide a framework and tooling with which users can easily make their data (more) FAIR. This framework uses RDF and OWL-based inferencing to annotate existing databases or comma-separated files. For every database, a custom ontology is built based on the database schema, which can then be annotated with matching standardized terminologies. In this work, we describe the tooling developed and its current implementation in an institutional data warehouse containing records of over 3000 rectal cancer patients. We report on the performance (time) of the extraction and annotation process of the developed tooling. We show that annotation of existing databases using OWL2-based reasoning is possible, and that the ontology extracted from existing databases can provide a description framework to describe and annotate existing data sources. This targets mostly the “Interoperable” aspect of FAIR.
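
    A minimal sketch of the schema-to-ontology idea, using rdflib, is shown below: one OWL class per table, one per column, and an equivalentClass annotation linking a column to a standardized terminology, so that an OWL reasoner can classify the data under that terminology. The URIs, table, and terminology code are hypothetical examples rather than the actual tooling described here.

    ```python
    # Minimal sketch of schema-to-ontology extraction with rdflib: one
    # class per table, one class per column, and an equivalentClass link
    # to a standard terminology. URIs and the table are hypothetical.
    from rdflib import Graph, Namespace, RDF, RDFS, OWL

    DB = Namespace("http://example.org/dbschema/")
    NCIT = Namespace("http://purl.obolibrary.org/obo/NCIT_")

    g = Graph()
    g.add((DB.Patient, RDF.type, OWL.Class))           # table -> class
    g.add((DB.Patient_gender, RDF.type, OWL.Class))    # column -> class
    g.add((DB.Patient_gender, RDFS.subClassOf, DB.Patient))
    # Annotation step: map the local column class to an assumed standard
    # term, so an OWL reasoner can classify rows under the terminology.
    g.add((DB.Patient_gender, OWL.equivalentClass, NCIT["C17357"]))
    print(g.serialize(format="turtle"))
    ```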

    Colorectal cancer health and care quality indicators in a federated setting using the Personal Health Train

    Objective: Hospitals and healthcare providers should assess and compare the quality of care given to patients and, based on this, improve that care. In the Netherlands, hospitals provide data to national quality registries, which in return provide annual quality indicators. However, this process is time-consuming and resource-intensive, and it risks patient privacy and confidentiality. In this paper, we present a multicentric ‘Proof of Principle’ study for the federated calculation of quality indicators in patients with colorectal cancer. The findings suggest that the proposed approach is highly time-efficient and consumes significantly fewer resources. Materials and methods: Two quality indicators are calculated in an efficient and privacy-preserving federated manner by i) applying the Findable, Accessible, Interoperable and Reusable (FAIR) data principles and ii) using the Personal Health Train (PHT) infrastructure. Instead of sharing data with a centralized registry, PHT enables analysis by sending algorithms to the data and sharing only insights from the data. Results: An ETL process extracted data from the Electronic Health Record systems of the hospitals, converted them to FAIR data, and hosted them in RDF endpoints within each hospital. Finally, the quality indicators from each center were calculated using PHT, and the mean result was plotted along with the individual results. Discussion and conclusion: PHT and the FAIR data principles can efficiently calculate quality indicators in a privacy-preserving federated approach, and the work can be scaled up both nationally and internationally. Despite this, application of the methodology was largely hampered by ethical, legal, and social implications (ELSI) issues. However, the lessons learned from this study can help other hospitals and researchers adapt the process easily and take effective measures in building quality-of-care infrastructures.
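
    The core of the federated pattern is that the algorithm travels to the data and only aggregate results travel back. A minimal sketch of that pattern follows; the site data and the indicator definition are hypothetical stand-ins for the PHT tasks used in this study.

    ```python
    # Minimal sketch of the federated pattern: the algorithm runs at
    # each site and only the aggregate indicator leaves. Site data and
    # the indicator definition are hypothetical stand-ins.
    def indicator(records):
        """Share of patients whose treatment started within 30 days."""
        timely = sum(1 for r in records if r["days_to_treatment"] <= 30)
        return timely / len(records)

    def run_federated(sites):
        # Each "site" executes the function locally and returns one
        # number; patient-level records never leave the hospital.
        results = {name: indicator(data) for name, data in sites.items()}
        mean = sum(results.values()) / len(results)
        return results, mean

    sites = {
        "hospital_A": [{"days_to_treatment": d} for d in (12, 25, 41)],
        "hospital_B": [{"days_to_treatment": d} for d in (8, 33)],
    }
    print(run_federated(sites))
    ```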

    Segmentation uncertainty estimation as a sanity check for image biomarker studies

    SIMPLE SUMMARY: Radiomics refers to quantitative image biomarker analysis. Due to uncertainty in image acquisition, processing, and segmentation (delineation) protocols, radiomic biomarkers lack reproducibility. In this manuscript, we show how this protocol-induced uncertainty can drastically reduce prognostic model performance and offer insights on how to use it to develop better prognostic models. ABSTRACT: Problem. Image biomarker analysis, also known as radiomics, is a tool for tissue characterization and treatment prognosis that relies on routinely acquired clinical images and delineations. Due to the uncertainty in image acquisition, processing, and segmentation (delineation) protocols, radiomics often lacks reproducibility. Radiomics harmonization techniques have been proposed as a solution to reduce these sources of uncertainty and/or their influence on prognostic model performance. A relevant question is how to estimate the protocol-induced uncertainty of a specific image biomarker, what its effect is on model performance, and how to optimize the model given that uncertainty. Methods. Two non-small cell lung cancer (NSCLC) cohorts, composed of 421 and 240 patients, respectively, were used for training and testing. Per patient, a Monte Carlo algorithm was used to generate three hundred synthetic contours with a surface Dice tolerance measure of less than 1.18 mm with respect to the original GTV. These contours were subsequently used to derive 104 radiomic features, which were ranked on their relative sensitivity to contour perturbation, expressed in the parameter η. The top four (low η) and the bottom four (high η) features were selected for two models based on the Cox proportional hazards model. To investigate the influence of segmentation uncertainty on the prognostic model, we trained and tested the setup on 5000 augmented realizations (using a Monte Carlo sampling method); the log-rank test was used to assess stratification performance and stability under segmentation uncertainty. Results. Although both the low and high η setups showed significant test-set log-rank p-values (p = 0.01) on the original GTV delineations (without segmentation uncertainty introduced), in the model with a high uncertainty-to-effect ratio only around 30% of the augmented realizations resulted in model performance with p < 0.05 in the test set. In contrast, the low η setup performed with a log-rank p < 0.05 in 90% of the augmented realizations. Moreover, the high η setup was uncertain in its predictions for 50% of the subjects in the testing set (at an 80% agreement rate), whereas the low η setup was uncertain in only 10% of the cases. Discussion. Estimating image biomarker model performance based only on the original GTV segmentation, without considering segmentation uncertainty, may be deceiving. The model might show significant stratification performance, yet be unstable under delineation variations, which are inherent to manual segmentation. Simulating segmentation uncertainty using the method described allows for more stable image biomarker estimation, selection, and model development. The segmentation uncertainty estimation method described here is universal and can be extended to estimate other protocol uncertainties (such as image acquisition and pre-processing).
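
    The abstract does not spell out how η is computed, so the sketch below uses the coefficient of variation of a feature across Monte Carlo-perturbed contours as an assumed surrogate for the contour-sensitivity ranking; the data and features are synthetic.

    ```python
    # Minimal sketch: ranking radiomic features by sensitivity to
    # contour perturbation. The exact definition of eta is not given in
    # the abstract; the coefficient of variation across perturbed
    # contours is an assumed surrogate. Data and features are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)

    def sensitivity(feature_fn, volumes):
        """Spread of a feature across Monte Carlo contour realizations."""
        values = np.array([feature_fn(v) for v in volumes])
        return values.std() / abs(values.mean())

    # Synthetic stand-in for 300 perturbed contours of one lesion: each
    # realization yields a slightly different volume measurement.
    volumes = 100.0 + rng.normal(0.0, 3.0, size=300)

    eta = {
        "volume": sensitivity(lambda v: v, volumes),        # robust
        "texture": sensitivity(lambda v: v ** 3, volumes),  # noise-amplifying
    }
    ranked = sorted(eta, key=eta.get)  # low eta first, as in the setup
    print(eta, ranked)
    ```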

    Distributed learning: Developing a predictive model based on data from multiple hospitals without data leaving the hospital – A real life proof of concept

    Purpose: One of the major hurdles in enabling personalized medicine is obtaining sufficient patient data to feed into predictive models. Combining data originating from multiple hospitals is difficult because of ethical, legal, political, and administrative barriers associated with data sharing. In order to avoid these issues, a distributed learning approach can be used. Distributed learning is defined as learning from data without the data leaving the hospital. Patients and methods: Clinical data from 287 lung cancer patients, treated with curative intent with chemoradiation (CRT) or radiotherapy (RT) alone, were collected from and stored in 5 different medical institutes (123 patients at MAASTRO (Netherlands, Dutch), 24 at Jessa (Belgium, Dutch), 34 at Liege (Belgium, Dutch and French), 48 at Aachen (Germany, German) and 58 at Eindhoven (Netherlands, Dutch)). A Bayesian network model was adapted for distributed learning (watch the animation: http://youtu.be/nQpqMIuHyOk). The model predicts dyspnea, a common side effect after radiotherapy treatment of lung cancer. Results: We show that it is possible to use the distributed learning approach to train a Bayesian network model on patient data originating from multiple hospitals without these data leaving the individual hospitals. The AUC of the model is 0.61 (95% CI, 0.51–0.70) on 5-fold cross-validation and ranges from 0.59 to 0.71 on external validation sets. Conclusion: Distributed learning can allow the learning of predictive models on data originating from multiple hospitals while avoiding many of the data sharing barriers. Furthermore, the distributed learning approach can be used to extract and employ knowledge from routine patient data from multiple hospitals while remaining compliant with the various national and European privacy laws.
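
    To see why learning without moving data is possible, note that for a discrete Bayesian network with a fixed structure, maximum-likelihood parameters are conditional counts, and counts are additive across sites. The sketch below exploits this so that only counts, never patient records, leave each hospital; the variables and records are hypothetical, and the paper's actual model and training procedure are more involved.

    ```python
    # Minimal sketch of the idea behind distributed Bayesian network
    # learning: parameters are conditional counts, and counts are
    # additive, so each hospital shares only counts, never records.
    # Variables and records are hypothetical.
    from collections import Counter

    def local_counts(records):
        """Count (dose_level, dyspnea) pairs inside one hospital."""
        return Counter((r["dose_level"], r["dyspnea"]) for r in records)

    def combine(all_counts):
        total = Counter()
        for c in all_counts:
            total.update(c)          # only counts cross hospital walls
        return total

    def p_dyspnea_given(total, dose_level):
        yes = total[(dose_level, True)]
        n = yes + total[(dose_level, False)]
        return yes / n if n else float("nan")

    hospital_a = [{"dose_level": "high", "dyspnea": True},
                  {"dose_level": "high", "dyspnea": False}]
    hospital_b = [{"dose_level": "high", "dyspnea": True}]
    total = combine([local_counts(hospital_a), local_counts(hospital_b)])
    print(p_dyspnea_given(total, "high"))   # 2/3
    ```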