
    Rotary replication for freeze-etching.


    Development of a risk score for early saphenous vein graft failure: An individual patient data meta-analysis

    Objectives: Early saphenous vein graft (SVG) occlusion is typically attributed to technical factors. We aimed to explore the clinical, anatomical, and operative factors associated with the risk of early SVG occlusion (within 12 months postsurgery). Methods: Published literature in MEDLINE was searched for studies reporting the incidence of early SVG occlusion. Individual patient data (IPD) on early SVG occlusion were used from the SAFINOUS-CABG Consortium. A derivation (n = 1492 patients) and a validation (n = 372 patients) cohort were used for model training (with 10-fold cross-validation) and external validation, respectively. Results: In an aggregate data meta-analysis (48 studies, 41,530 SVGs), the pooled estimate for early SVG occlusion was 11%. The developed IPD model for early SVG occlusion, which included clinical, anatomical, and operative characteristics (age, sex, dyslipidemia, diabetes mellitus, smoking, serum creatinine, endoscopic vein harvesting, use of complex grafts, grafted target vessel, and number of SVGs), had good performance in the derivation (c-index = 0.744; 95% confidence interval [CI], 0.701-0.774) and validation cohorts (c-index = 0.734; 95% CI, 0.659-0.809). Based on this model, we constructed a simplified 12-variable risk score system (SAFINOUS score) with good performance for early SVG occlusion (c-index = 0.700; 95% CI, 0.684-0.716). Conclusions: From a large international IPD collaboration, we developed a novel risk score to assess the individualized risk of early SVG occlusion. The SAFINOUS risk score could be used to identify patients who are more likely to benefit from aggressive treatment strategies.
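
    A minimal sketch of the derivation-and-validation loop this abstract describes, assuming a tabular dataset whose column and file names (hypothetical here; the SAFINOUS data are not public) match the listed predictors. For a binary outcome, the c-index coincides with the area under the ROC curve:

    # Hypothetical sketch: logistic risk model with 10-fold cross-validation
    # and a c-index computed from out-of-fold predicted probabilities.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import StratifiedKFold, cross_val_predict

    df = pd.read_csv("svg_cohort.csv")  # hypothetical file name
    predictors = ["age", "sex", "dyslipidemia", "diabetes", "smoking",
                  "creatinine", "endoscopic_harvest", "complex_graft",
                  "target_vessel", "n_svgs"]
    X = pd.get_dummies(df[predictors], drop_first=True)  # encode categoricals
    y = df["early_occlusion"]

    model = LogisticRegression(max_iter=1000)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    proba = cross_val_predict(model, X, y, cv=cv, method="predict_proba")[:, 1]
    print(f"cross-validated c-index: {roc_auc_score(y, proba):.3f}")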

    The development of a multidisciplinary system to understand causal factors in road crashes

    The persistent lack of crash causation data to help inform and monitor road and vehicle safety policy is a major obstacle. Data are needed both to assess the performance of road and vehicle safety stakeholders and to support the development of further actions. A recent analysis conducted by the European Transport Safety Council identified that there was no single system in place that could meet all of these needs and that there were major gaps, including in-depth crash causation information. This paper describes the process of developing a data collection and analysis system designed to fill these gaps. A project team with members from seven countries was set up to devise appropriate variable lists for collecting crash causation information under the following topic levels: accident, road environment, vehicle, and road user, using two quite different sets of resources: retrospective detailed police reports (n=1300) and prospective, independent, on-scene accident research investigations (n=1000). Data categorisation and human factors analysis methods based on the Cognitive Reliability and Error Analysis Method (Hollnagel, 1998) were developed to enable the causal factors to be recorded, linked, and understood. A harmonised, prospective “on-scene” method for recording the root causes and critical events of road crashes was developed; where appropriate, this includes interviewing road users alongside more routine accident investigation techniques. The typical level of detail recorded is a minimum of 150 variables for each accident. The project will enable multidisciplinary information on the circumstances of crashes to be interpreted to provide information on the causal factors. This has major applications in the areas of active safety systems, infrastructure and road safety, as well as in tailoring behavioural interventions. There is no direct model available internationally that uses such a systems-based approach.
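
    As a rough illustration of the record structure implied above, here is a hedged sketch of a crash record grouped by the four topic levels; all field names are invented, and the real system records at least 150 variables per accident:

    # Illustrative only: a crash record organised by the four topic levels.
    from dataclasses import dataclass, field

    @dataclass
    class CrashRecord:
        accident: dict = field(default_factory=dict)          # e.g. date, severity
        road_environment: dict = field(default_factory=dict)  # e.g. surface, layout
        vehicles: list = field(default_factory=list)          # one dict per vehicle
        road_users: list = field(default_factory=list)        # incl. CREAM-based
                                                              # causal-factor codes

    record = CrashRecord(accident={"severity": "serious"})
    record.road_users.append({"role": "driver", "cream_code": "M1"})  # invented code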

    An organoid biobank for childhood kidney cancers that captures disease and tissue heterogeneity

    Kidney tumours are among the most common solid tumours in children, comprising distinct subtypes that differ in many aspects, including cell-of-origin, genetics, and pathology. Pre-clinical cell models capturing this disease heterogeneity are currently lacking. Here, we describe the first paediatric cancer organoid biobank. It contains tumour and matching normal kidney organoids from over 50 children with different subtypes of kidney cancer, including Wilms tumours, malignant rhabdoid tumours, renal cell carcinomas, and congenital mesoblastic nephromas. Paediatric kidney tumour organoids retain key properties of the native tumours, which is useful for revealing patient-specific drug sensitivities. Using single-cell RNA sequencing and high-resolution 3D imaging, we further demonstrate that organoid cultures derived from Wilms tumours consist of multiple different cell types, including epithelial, stromal, and blastemal-like cells. Our organoid biobank captures the heterogeneity of paediatric kidney tumours, providing a representative collection of well-characterised models for basic cancer research, drug screening, and personalised medicine.

    Measuring, in solution, multiple-fluorophore labeling by combining Fluorescence Correlation Spectroscopy and photobleaching

    Determining the number of fluorescent entities coupled to a given molecule (DNA, protein, etc.) is a key point of numerous biological studies, especially those based on a single-molecule approach. Reliable methods are important in this context, not only to characterize the labeling process but also to quantify interactions, for instance within molecular complexes. We combined Fluorescence Correlation Spectroscopy (FCS) and photobleaching experiments to measure the effective number of molecules and the molecular brightness as a function of the total fluorescence count rate in solutions of cDNA (containing a few percent of C bases labeled with Alexa Fluor 647). Here, photobleaching is used as a control parameter to vary the experimental outputs (brightness and number of molecules). Assuming a Poissonian distribution of the number of fluorescent labels per cDNA, the FCS-photobleaching data could readily be fitted to yield the mean number of fluorescent labels per cDNA strand (approximately 2). This number could not be determined solely from the cDNA brightness, because of both the statistical distribution of the number of fluorescent labels and their unknown brightness when incorporated in cDNA. The statistical distribution of the number of fluorophores labeling cDNA was confirmed by analyzing the photon count distribution (with the cumulant method), which showed clearly that the brightness of cDNA strands varies from one molecule to another. Comment: 38 pages (with figures).
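
    The abstract's argument can be made concrete with the standard FCS moment relations, written here as a sketch under the stated Poisson assumption (fluorophore brightness $q$; $N$ counts all cDNA molecules, labelled or not):

    \[
    N_{\mathrm{app}} \;=\; N\,\frac{\langle n\rangle^{2}}{\langle n^{2}\rangle},
    \qquad
    B_{\mathrm{app}} \;=\; q\,\frac{\langle n^{2}\rangle}{\langle n\rangle},
    \]
    and for $n \sim \mathrm{Poisson}(\lambda)$, where $\langle n^{2}\rangle = \lambda^{2}+\lambda$,
    \[
    N_{\mathrm{app}} \;=\; N\,\frac{\lambda}{\lambda+1},
    \qquad
    B_{\mathrm{app}} \;=\; q\,(\lambda+1).
    \]
    Photobleaching that leaves each fluorophore active with probability $p$ simply rescales $\lambda \to p\lambda$, so recording $(N_{\mathrm{app}}, B_{\mathrm{app}})$ at several bleaching levels traces out these curves and determines $\lambda$ (here, approximately 2) and $q$ jointly, which a single brightness measurement cannot.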

    Lectures on the functional renormalization group method

    These introductory notes are about functional renormalization group equations and some of their applications. It is emphasised that the applicability of this method extends well beyond critical systems: it actually provides a general-purpose algorithm for solving strongly coupled quantum field theories. The renormalization group equation of F. Wegner and A. Houghton is shown to resum the loop expansion. Another version, due to J. Polchinski, is obtained by the method of collective coordinates and can be used for the resummation of the perturbation series. The genuinely non-perturbative evolution equation is obtained in a manner reminiscent of the Schwinger-Dyson equations. Two variants of this scheme are presented, in which the scale that determines the order of the successive elimination of modes is extracted from external and internal spaces, respectively. The renormalization of composite operators is discussed briefly as an alternative way to arrive at the renormalization group equation. The scaling laws and fixed points are considered from local and global points of view. Instability-induced renormalization and new scaling laws are shown to occur in the symmetry-broken phase of the scalar theory. The flattening of the effective potential of a compact variable is demonstrated in the case of the sine-Gordon model. Finally, a manifestly gauge-invariant evolution equation is given for QED. Comment: 47 pages, 11 figures, final version.
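
    As a concrete anchor, one commonly quoted form of the Wegner-Houghton equation in the local potential approximation for a single scalar field in $d$ dimensions is (conventions differ between references, so take this as a sketch rather than the notes' exact normalisation):

    \[
    k\,\partial_k U_k(\phi)
    \;=\;
    -\,\frac{\Omega_d\,k^{d}}{2\,(2\pi)^{d}}\,
    \ln\!\left[k^{2}+U_k''(\phi)\right],
    \]
    where $\Omega_d$ is the surface area of the unit sphere in $d$ dimensions and the argument of the logarithm may be normalised by any field-independent scale without changing the physics. The sharp-cutoff elimination of the momentum shell behind this equation is what resums the loop expansion mentioned above.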

    Hybrid Correlation and Causal Feature Selection for Ensemble Classifiers

    PC and TPDA are robust and well-known prototype algorithms incorporating constraint-based approaches for causal discovery. However, neither algorithm can scale up to deal with high-dimensional data, that is, more than a few hundred features. This chapter presents hybrid correlation and causal feature selection for ensemble classifiers to deal with this problem. Redundant features are removed by correlation-based feature selection, and irrelevant features are then eliminated by causal feature selection. The number of eliminated features, accuracy, area under the receiver operating characteristic curve (AUC), and false negative rate (FNR) of the proposed algorithms are compared with correlation-based feature selection algorithms (FCBF and CFS) and causal feature selection algorithms (PC, TPDA, GS, IAMB).
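
    A much-simplified sketch of the two-stage pipeline the chapter describes; the chapter itself uses FCBF/CFS for the correlation stage and PC/TPDA-style constraint-based tests for the causal stage, whereas the stand-ins below only illustrate the shape of the hybrid:

    # Simplified stand-in for the hybrid pipeline: a correlation pass removes
    # redundant features, then a crude statistical-relevance pass removes
    # irrelevant ones. Not the FCBF/PC implementation.
    import pandas as pd
    from scipy.stats import pearsonr

    def drop_redundant(X: pd.DataFrame, threshold: float = 0.9) -> list:
        """Keep a feature only if not highly correlated with one already kept."""
        kept = []
        for col in X.columns:
            if all(abs(X[col].corr(X[k])) < threshold for k in kept):
                kept.append(col)
        return kept

    def drop_irrelevant(X: pd.DataFrame, y, alpha: float = 0.05) -> list:
        """Keep features significantly associated with the target."""
        return [c for c in X.columns if pearsonr(X[c], y)[1] < alpha]

    def hybrid_select(X: pd.DataFrame, y) -> list:
        X_nonredundant = X[drop_redundant(X)]      # stage 1: remove redundancy
        return drop_irrelevant(X_nonredundant, y)  # stage 2: remove irrelevance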

    A survey on independence-based Markov networks learning

    This work reports the most relevant technical aspects of the problem of learning the \emph{Markov network structure} from data. This problem has become increasingly important in machine learning and in its many application fields. Markov networks, together with Bayesian networks, are probabilistic graphical models, a widely used formalism for handling probability distributions in intelligent systems. Learning graphical models from data has been extensively studied for Bayesian networks, but learning Markov networks is not tractable in practice. However, this situation is changing with time, given the exponential growth of computing capacity, the plethora of available digital data, and research on new learning technologies. This work focuses on a technology called independence-based learning, which allows the independence structure of such networks to be learned from data in an efficient and sound manner, whenever the dataset is sufficiently large and the data are a representative sample of the target distribution. In its analysis of this technology, this work surveys the current state-of-the-art algorithms for learning Markov network structure, discusses their current limitations, and proposes a series of open problems where future work may produce advances in the area in terms of quality and efficiency. The paper concludes by opening a discussion about how to develop a general formalism for improving the quality of the structures learned when data is scarce. Comment: 35 pages, 1 figure.
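
    A compact sketch of the independence-based idea the survey covers: decide each candidate edge of the Markov network by conditional-independence tests against small conditioning sets, in the spirit of GSMN-style algorithms. The Fisher-z partial-correlation test used here assumes roughly Gaussian data and is only one of many possible tests; the input is an (samples x variables) NumPy array:

    # Sketch: keep an edge (i, j) only if no small conditioning set
    # renders variables i and j conditionally independent.
    import numpy as np
    from itertools import combinations
    from scipy import stats

    def indep(data, i, j, cond, alpha=0.05):
        """True if i and j test independent given cond (Fisher-z test)."""
        idx = [i, j] + list(cond)
        prec = np.linalg.inv(np.corrcoef(data[:, idx], rowvar=False))
        r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])
        z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(len(data) - len(cond) - 3)
        return 2 * (1 - stats.norm.cdf(abs(z))) > alpha  # p-value above alpha

    def learn_structure(data, max_cond=2, alpha=0.05):
        n = data.shape[1]
        edges = {frozenset(e) for e in combinations(range(n), 2)}
        for i, j in combinations(range(n), 2):
            others = [k for k in range(n) if k not in (i, j)]
            for size in range(max_cond + 1):
                if any(indep(data, i, j, c, alpha)
                       for c in combinations(others, size)):
                    edges.discard(frozenset((i, j)))  # independence found
                    break
        return edges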