17 research outputs found

    A new view of the spin echo diffusive diffraction on porous structures

    Analysis based on the characteristic functional of stochastic motion is applied to gradient spin-echo measurements of restricted motion to clarify the details of the diffraction-like effect in porous structures. It describes diffusive diffraction as an interference of spin phase shifts arising from the back-flow of spins bouncing off the boundaries, occurring when the mean displacement of the scattered spins matches the spin phase grating prepared by the applied magnetic field gradients. The diffraction patterns convey information about the morphology of the surrounding medium at times long enough for opposite boundaries to restrict the displacements. The method explains the dependence of the diffraction on the timing and width of the gradient pulses, as observed in experiments and simulations. It also informs the analysis of transport properties by spin echo, particularly in systems where the motion is restricted by structure or configuration.
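
    For orientation only, a sketch of the standard narrow-gradient-pulse, long-time q-space limit (not the characteristic-functional analysis of the abstract, which also covers finite pulse width and timing): the echo attenuation reduces to the power spectrum of the normalized pore density, so that for parallel plates separated by a the diffraction minima fall at q = n/a,

        E(q,\Delta) \;\xrightarrow{\;\Delta\to\infty\;}\; \bigl|S(q)\bigr|^{2},
        \qquad
        S(q) = \int \rho(\mathbf{r})\, e^{\,i 2\pi \mathbf{q}\cdot\mathbf{r}}\, d\mathbf{r},
        \qquad
        q = \frac{\gamma \delta g}{2\pi},
        \qquad
        \bigl|S(q)\bigr|^{2}_{\text{plates}} = \operatorname{sinc}^{2}(\pi q a).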

    Dynamics of Air-Fluidized Granular System Measured by the Modulated Gradient Spin-echo

    The power spectrum of the displacement fluctuations of beads in an air-fluidized granular system is measured by a novel NMR technique, the modulated gradient spin-echo. The measurements, together with the related spectrum of the velocity fluctuation autocorrelation function, fit well to an empirical formula based on the model of bead caging between nearest neighbours, in which the cage breaks up after a few collisions \cite{Menon1}. The fit yields the characteristic collision time, the size of the bead cage and a diffusion-like constant for different degrees of fluidization of the system. The resulting mean squared displacement grows with the square of time in the short-time ballistic regime and linearly with time in the long-time diffusion regime, as already confirmed by other experiments and simulations. Comment: 4 figures. Submitted to Physical Review Letters, April 200
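
    As a hedged illustration of the ballistic-to-diffusive crossover described above (a generic exponentially decaying velocity autocorrelation, not necessarily the empirical caging formula fitted in the paper), take \(\langle v(0)v(t)\rangle = v_0^{2} e^{-|t|/\tau_c}\); its spectrum and the resulting mean squared displacement are

        D(\omega) = \tfrac{1}{2}\int_{-\infty}^{\infty} \langle v(0)v(t)\rangle\, e^{-i\omega t}\, dt
                  = \frac{v_0^{2}\tau_c}{1+\omega^{2}\tau_c^{2}},
        \qquad
        \langle \Delta x^{2}(t)\rangle = 2 v_0^{2}\tau_c^{2}\!\left(\frac{t}{\tau_c} - 1 + e^{-t/\tau_c}\right)
        \;\longrightarrow\;
        \begin{cases}
          v_0^{2} t^{2}, & t \ll \tau_c \ \text{(ballistic)},\\
          2 D t,\quad D = v_0^{2}\tau_c, & t \gg \tau_c \ \text{(diffusive)}.
        \end{cases}

    In this generic picture the correlation time plays the role of the collision (cage) time, and the crossover length scale is of order \(v_0\tau_c\); the paper's fitted empirical formula refines this.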

    Wet-dry-wet drug screen leads to the synthesis of TS1, a novel compound reversing lung fibrosis through inhibition of myofibroblast differentiation

    Therapies that halt the progression of fibrosis are limited and ineffective. Activated myofibroblasts are emerging as important targets in the progression of fibrotic diseases. Previously, we performed a high-throughput screen on lung fibroblasts and subsequently demonstrated that inhibiting myofibroblast activation can prevent lung fibrosis in bleomycin-treated mice. High-throughput screens are an ideal method for repurposing drugs, yet they carry an intrinsic limitation: the size of the library itself. Here, we exploited the data from our “wet” screen and used “dry” machine-learning analysis to virtually screen millions of compounds, identifying novel anti-fibrotic hits that target myofibroblast differentiation, many of which were structurally related to dopamine. We synthesized and validated several compounds ex vivo (“wet”) and confirmed that both dopamine and its derivative TS1 are powerful inhibitors of myofibroblast activation. We further used RNAi-mediated knock-down to demonstrate that both molecules act through dopamine receptor 3 and exert their anti-fibrotic effect by inhibiting the canonical transforming growth factor β pathway. Furthermore, molecular modelling confirmed the capability of TS1 to bind both human and mouse dopamine receptor 3. The anti-fibrotic effect on human cells was confirmed using primary fibroblasts from idiopathic pulmonary fibrosis patients. Finally, TS1 prevented and reversed disease progression in a murine model of lung fibrosis. Both our interdisciplinary approach and our novel compound TS1 are promising tools for understanding and combating lung fibrosis.
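
    For illustration only, a minimal, hypothetical sketch of the kind of “dry” step described above: train a classifier on fingerprints of the “wet”-screen compounds and rank a virtual library by predicted activity. This is not the authors' pipeline; the RDKit/scikit-learn choices, the toy SMILES lists and every variable name are assumptions made for the sketch.

        from rdkit import Chem
        from rdkit.Chem import AllChem
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def fingerprint(smiles, n_bits=2048):
            # Morgan (ECFP4-like) bit fingerprint for one SMILES string.
            mol = Chem.MolFromSmiles(smiles)
            return np.array(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits))

        # Toy placeholders standing in for the "wet" screen results
        # (label 1 = inhibits myofibroblast activation); the third entry is dopamine,
        # which the abstract reports as an inhibitor. The real screen is far larger.
        train_smiles = ["CCO", "CC(=O)Oc1ccccc1C(=O)O", "NCCc1ccc(O)c(O)c1", "c1ccccc1"]
        train_labels = [0, 0, 1, 0]
        library_smiles = ["NCCc1ccc(O)c(O)c1O", "CCN", "OCCc1ccc(O)c(O)c1"]  # hypothetical "dry" library

        X_train = np.array([fingerprint(s) for s in train_smiles])
        model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, train_labels)

        # Rank the virtual library by predicted probability of being an anti-fibrotic hit;
        # top-ranked compounds would go back to synthesis and "wet" validation.
        scores = model.predict_proba(np.array([fingerprint(s) for s in library_smiles]))[:, 1]
        ranking = scores.argsort()[::-1]
        print([(library_smiles[i], round(float(scores[i]), 2)) for i in ranking])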

    The role of tissue microstructure and water exchange in biophysical modelling of diffusion in white matter


    Ensembles of extremely randomized predictive clustering trees for predicting structured outputs

    We address the task of learning ensembles of predictive models for structured output prediction (SOP). We focus on three SOP tasks: multi-target regression (MTR), multi-label classification (MLC) and hierarchical multi-label classification (HMC). In contrast to standard classification and regression, where the output is a single (discrete or continuous) variable, in SOP the output is a data structure: a tuple of continuous variables (MTR), a tuple of binary variables (MLC) or a tuple of binary variables with hierarchical dependencies (HMC). SOP is gaining increasing interest in the research community due to its applicability in a variety of practically relevant domains. In this context, we consider the Extra-Tree ensemble learning method, the overall top performer in the DREAM4 and DREAM5 challenges for gene network reconstruction. We extend this method to SOP tasks and call the extension Extra-PCTs ensembles. As base predictive models we propose using predictive clustering trees (PCTs), a generalization of decision trees for predicting structured outputs. We conduct a comprehensive experimental evaluation of the proposed method on a collection of 41 benchmark datasets: 21 for MTR, 10 for MLC and 10 for HMC. We first investigate the influence of the size of the ensemble and the size of the feature subset considered at each node. We then compare the performance of Extra-PCTs to other ensemble methods (random forests and bagging), as well as to single PCTs. The experimental evaluation reveals that Extra-PCTs achieve optimal performance in terms of predictive power and computational cost with 50 base predictive models across the three tasks. The recommended feature subset sizes vary across the tasks and also depend on whether the dataset contains only binary and/or sparse attributes. The Extra-PCTs give better predictive performance than a single tree (the differences are typically statistically significant). Moreover, the Extra-PCTs are the best-performing ensemble method (except for the MLC task, where performance is similar to that of random forests), and Extra-PCTs can be used to learn good feature rankings for all of the tasks considered here.
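
    The sketch below is not the authors' Extra-PCTs implementation; as a rough, hedged analogue for the MTR task only, scikit-learn's ExtraTreesRegressor handles multi-target regression natively and illustrates an extremely randomized ensemble with 50 base trees and a random feature subset considered at each split. The synthetic dataset and all parameter choices are illustrative assumptions, not the paper's setup.

        from sklearn.datasets import make_regression
        from sklearn.ensemble import ExtraTreesRegressor
        from sklearn.model_selection import train_test_split

        # Synthetic multi-target regression data: 4 continuous targets per example.
        X, Y = make_regression(n_samples=500, n_features=20, n_targets=4, noise=0.1, random_state=0)
        X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

        # 50 extremely randomized trees, each split chosen among a random feature subset.
        model = ExtraTreesRegressor(n_estimators=50, max_features="sqrt", random_state=0)
        model.fit(X_tr, Y_tr)

        print("aggregated R^2 over the 4 targets:", model.score(X_te, Y_te))
        print("crude feature ranking (importances):", model.feature_importances_[:5])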

    Semi-supervised regression trees with application to QSAR modelling

    Despite the ease of collecting an abundance of data about various phenomena, obtaining the labeled data needed to learn models with high predictive performance remains a difficult and expensive task in many domains. This issue is particularly present in the analysis of scientific data, where obtaining labeled data typically requires expensive experiments. Moreover, in the analysis of scientific data, another issue is of fundamental importance: the interpretability of the models and the explainability of their decisions. Taking these considerations into account, we propose a novel semi-supervised method for learning regression trees. Thanks to the semi-supervised machine-learning approach, the method is able to exploit information coming not only from labeled data but also from unlabeled data, thus alleviating the lack of labeled data. The method is based on the predictive clustering trees paradigm, which extends regression trees towards structured output prediction. This allows us to obtain interpretable regression trees. The proposed method is particularly suited for the chemoinformatics task of quantitative structure-activity relationship (QSAR) modelling, which is the main application context considered in this paper. Specifically, we evaluate the proposed method on 4 QSAR modelling datasets and illustrate its use in a case study of predicting farnesyltransferase inhibitors. Additionally, we evaluate our approach on 10 benchmark datasets not related to the QSAR modelling problem. The evaluation reveals the following: semi-supervised trees and ensembles thereof have better predictive performance than their supervised counterparts (especially when the number of labeled examples is very small); different datasets and different amounts of labeled data require different amounts of unlabeled data to be included in the learning process; and the learned semi-supervised regression trees can be used to better understand the problem at hand and the way predictions are made.
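
    As a hedged stand-in only, the sketch below illustrates the general semi-supervised regression idea with a simple self-training loop (pseudo-labelling the unlabeled examples on which a small bagged tree ensemble is most stable, then refitting a single interpretable tree), rather than the authors' predictive-clustering-tree algorithm. The data, the 10% labeling rate, the stability threshold and all names are illustrative assumptions.

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.ensemble import BaggingRegressor
        from sklearn.tree import DecisionTreeRegressor

        X, y = make_regression(n_samples=400, n_features=10, noise=5.0, random_state=0)
        rng = np.random.RandomState(0)
        labeled = rng.rand(len(y)) < 0.1              # only ~10% of examples carry labels
        X_lab, y_lab = X[labeled], y[labeled]
        X_unl = X[~labeled]

        # Bagged trees on the labeled data; per-example spread of predictions serves
        # as a rough (un)certainty measure for pseudo-labelling.
        ens = BaggingRegressor(DecisionTreeRegressor(max_depth=5), n_estimators=25, random_state=0)
        ens.fit(X_lab, y_lab)
        preds = np.stack([est.predict(X_unl) for est in ens.estimators_])
        spread = preds.std(axis=0)
        stable = spread < np.percentile(spread, 25)   # keep the most stable 25% of unlabeled points

        # Refit a single, interpretable regression tree on labeled + pseudo-labeled examples.
        X_aug = np.vstack([X_lab, X_unl[stable]])
        y_aug = np.concatenate([y_lab, preds.mean(axis=0)[stable]])
        tree = DecisionTreeRegressor(max_depth=5).fit(X_aug, y_aug)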