
    Small-Molecule RORγt Antagonists Inhibit T Helper 17 Cell Transcriptional Network by Divergent Mechanisms

    We identified three retinoid-related orphan receptor gamma t (RORγt)-specific inhibitors that suppress T helper 17 (Th17) cell responses, including Th17-cell-mediated autoimmune disease. We systematically characterized RORγt binding in the presence and absence of drugs with corresponding whole-genome transcriptome sequencing. RORγt acts as a direct activator of Th17 cell signature genes and a direct repressor of signature genes from other T cell lineages; its strongest transcriptional effects are on cis-regulatory sites containing the RORα binding motif. RORγt is central in a densely interconnected regulatory network that shapes the balance of T cell differentiation. Here, the three inhibitors modulated the RORγt-dependent transcriptional network to varying extents and through distinct mechanisms. Whereas one inhibitor displaced RORγt from its target loci, the other two affected transcription predominantly without disrupting DNA binding. Our work illustrates the power of system-scale analysis of transcriptional regulation for characterizing potential therapeutic compounds that inhibit pathogenic Th17 cells and suppress autoimmunity.

    Controlled antibody release from gelatin for on-chip sample preparation

    A practical way to realize on-chip sample preparation for point-of-care diagnostics is to store the required reagents on a microfluidic device and release them in a controlled manner upon contact with the sample. For the development of such diagnostic devices, a fundamental understanding of the release kinetics of reagents from suitable materials in microfluidic chips is therefore essential. Here, we study the release kinetics of fluorophore-conjugated antibodies from (sub-)µm-thick gelatin layers and several ways to control the release time. The observed antibody release is well described by a diffusion model. Release times ranging from ~20 s to ~650 s were determined for layers with thicknesses (in the dry state) between 0.25 µm and 1.5 µm, corresponding to a diffusivity of 0.65 µm²/s (in the swollen state) for our standard layer preparation conditions. By modifying the preparation conditions, we can influence the properties of gelatin to realize faster or slower release. Faster drying at increased temperatures leads to shorter release times, whereas slower drying at increased humidity yields slower release. As expected in a diffusive process, the release time increases with the size of the antibody. Moreover, the ionic strength of the release medium has a significant impact on the release kinetics. Applying these findings to cell counting chambers with on-chip sample preparation, we can tune the release to control the antibody distribution after inflow of blood in order to achieve homogeneous cell staining.
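    In the diffusion picture described above, the characteristic release time scales as the square of the swollen layer thickness divided by the diffusivity. The sketch below uses the diffusivity quoted in the abstract but assumes a hypothetical dry-to-swollen swelling factor; it is an order-of-magnitude estimate, not the paper's full diffusion model.

```python
# Characteristic diffusive release time from a thin gelatin film,
# tau ~ L^2 / D. Order-of-magnitude sketch only.
D = 0.65          # diffusivity in the swollen layer, µm^2/s (from the abstract)
SWELLING = 10.0   # hypothetical dry-to-swollen thickness ratio (assumption)

def release_time(dry_thickness_um, swelling=SWELLING, diffusivity=D):
    """Order-of-magnitude release time (s) for a film of given dry thickness (µm)."""
    swollen = dry_thickness_um * swelling
    return swollen ** 2 / diffusivity

for t_dry in (0.25, 0.75, 1.5):
    print(f"{t_dry:4.2f} µm dry -> ~{release_time(t_dry):6.1f} s")
```

    Note the quadratic scaling: doubling the layer thickness quadruples the estimated release time, consistent with the trend reported in the abstract.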

    Critical current density: Measurements vs. reality

    Different experimental techniques are employed to evaluate the critical current density (Jc), namely transport current measurements and two different magnetisation measurements forming quasi-equilibrium and dynamic critical states. Our technique-dependent results for superconducting YBa2Cu3O7 (YBCO) film and MgB2 bulk samples show that Jc and associated interpretations, such as irreversibility fields and Kramer plots, are extremely sensitive to the measurement technique and lose meaning without a universal approach. We propose such an approach for YBCO films based on their unique pinning features. This approach allows us to accurately recalculate the magnetic-field-dependent Jc obtained by any technique into the Jc behaviour that would have been measured by any other method, without performing the corresponding experiments. We also discovered low-frequency-dependent phenomena governing flux dynamics that contradict those considered in the literature. The understanding of these phenomena, relevant to applications with moving superconductors, can clarify their dramatic impact on the electric-field criterion through flux diffusivity and corresponding measurements. © Copyright EPLA, 2013
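    For context, the most common way to extract Jc from a magnetisation loop is the Bean critical-state model; for a long cylindrical sample in a parallel field the textbook conversion is Jc = 3ΔM/(2R), with ΔM the loop width. The sketch below is this standard formula with illustrative numbers, not the technique-reconciliation approach the paper proposes.

```python
# Bean critical-state estimate of Jc from a magnetization hysteresis loop,
# for a fully penetrated long cylinder in a parallel field (SI units).
def bean_jc_cylinder(delta_M, radius):
    """Jc (A/m^2) from loop width delta_M (A/m) and sample radius (m)."""
    return 3.0 * delta_M / (2.0 * radius)

# Illustrative numbers (assumptions, not values from the paper):
delta_M = 1.0e5   # A/m, width of the magnetization loop
radius = 1.0e-3   # m, 1 mm sample radius
print(f"Jc ~ {bean_jc_cylinder(delta_M, radius):.2e} A/m^2")
```

    The abstract's point is precisely that such single-technique estimates can disagree strongly with transport measurements unless the critical-state dynamics are accounted for.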

    Robust estimation of bacterial cell count from optical density

    Optical density (OD) is widely used to estimate the density of cells in liquid culture, but it cannot be compared between instruments without a standardized calibration protocol and is challenging to relate to actual cell count. We address this with an interlaboratory study comparing three simple, low-cost, and highly accessible OD calibration protocols across 244 laboratories, applied to eight strains of constitutive GFP-expressing E. coli. Based on our results, we recommend calibrating OD to estimated cell count using serial dilution of silica microspheres, which produces highly precise calibration (95.5% of residuals <1.2-fold), is easily assessed for quality control, additionally measures the instrument's effective linear range, and can be combined with fluorescence calibration to obtain units of Molecules of Equivalent Fluorescein (MEFL) per cell, allowing direct comparison and data fusion with flow cytometry measurements: in our study, fluorescence per cell measurements showed only a 1.07-fold mean difference between plate reader and flow cytometry data.
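    The core of such a calibration can be sketched as a least-squares fit of OD against known particle counts from a serial dilution; the slope then converts sample OD readings into estimated counts. All numbers below are hypothetical illustrations, not study data.

```python
# Sketch of OD-to-count calibration via a serial dilution of particles
# with known counts. Dilution and OD values are illustrative only.
def fit_od_per_particle(counts, od_readings):
    """Least-squares slope through the origin: OD per particle."""
    num = sum(c * od for c, od in zip(counts, od_readings))
    den = sum(c * c for c in counts)
    return num / den

# Hypothetical 2-fold serial dilution of microspheres (particles per well):
counts = [3.0e8 / 2 ** i for i in range(6)]
true_od_per_particle = 1.2e-9                     # assumed ground truth
ods = [c * true_od_per_particle for c in counts]  # idealized, noise-free

slope = fit_od_per_particle(counts, ods)
sample_od = 0.25
print(f"estimated count at OD {sample_od}: {sample_od / slope:.3e}")
```

    In practice one would also subtract a media blank and restrict the fit to the instrument's linear range, which the recommended protocol assesses directly.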

    Monte Carlo Simulation and Integration

    In this paper, we introduce the Tootsie Pop Algorithm and explore its use in different contexts. It can be used to estimate quantities in general problems where a measure is defined or, in statistical applications, to perform high-dimensional integration. The Tootsie Pop Algorithm was introduced by Huber and Schott [2]. The general process of the Tootsie Pop Algorithm, just as its name suggests, is one of peeling down the outer shell, which is the larger enclosing set, to the center, which is the smaller enclosed set. We obtain the average number of peels, which gives us an understanding of the ratio between the size of the shell and the size of the center. Each peel is generated by a random draw within the outer shell: if the drawn point is located in the center, we are done; otherwise we shrink the outer shell so that the drawn point lies exactly on its edge.
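    The peeling loop above can be sketched for the simplest case of two concentric disks, where the mean number of peels estimates the log of the area ratio, ln(area(shell)/area(center)) = 2·ln(R/r):

```python
# Minimal Tootsie Pop Algorithm sketch: shell = disk of radius R,
# center = disk of radius r. Each peel draws a point uniformly from the
# current disk; a miss shrinks the disk so the point lies on its edge.
import math
import random

def tpa_peels(R, r, rng=random):
    """Number of draws that miss the center before one lands inside it."""
    current, peels = R, 0
    while True:
        # Uniform point in a disk of radius `current`: only its distance
        # from the origin matters, sampled as current * sqrt(U).
        d = current * math.sqrt(rng.random())
        if d <= r:
            return peels
        peels += 1
        current = d  # peel: shrink the shell to pass through the point

random.seed(0)
trials = 20000
avg = sum(tpa_peels(2.0, 1.0) for _ in range(trials)) / trials
print(f"mean peels: {avg:.3f}  (theory: {2 * math.log(2):.3f})")
```

    With R = 2 and r = 1 the area ratio is 4, so the mean peel count should approach ln 4 ≈ 1.386; exponentiating the estimate recovers the ratio of the two set sizes.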

    Adverse Childhood Experiences (ACEs): Development of an ACEs Knowledge Scale

    BACKGROUND AND PURPOSE: Adverse childhood experiences (ACEs) negatively impact children's health, both in childhood and later in life, which warrants educating nursing students about ACEs. The purposes of this study were to evaluate (a) nursing students' understanding of key concepts of ACEs using the ACEs Knowledge Scale (AKS) and (b) the psychometric properties of the AKS. METHODS: A survey using the AKS was conducted with randomly selected student participants (n = 344) to evaluate students' understanding of ACEs knowledge. Empirical validation of the AKS included content validity using the Content Validity Index (CVI), reliability, and construct validity analyses. RESULTS: The results showed that students in the Bachelor of Science in Nursing (BSN) program had greater knowledge of ACEs than pre-nursing students, and that graduating BSN students had greater knowledge related to trauma-informed care and building resilience. There were no significant differences in ACEs knowledge between Family Nurse Practitioner (FNP) and BSN students. Evaluation of the psychometric properties of the AKS revealed S-CVI/Ave = 0.912, indicating excellent content validity based on the expert panel's ratings. A Cronbach's alpha coefficient of .84 for the overall instrument indicated good reliability. Factor analyses showed that the 5-factor model gives good fit indexes, supporting the hypothesized factor structure of five key concepts. CONCLUSIONS: The AKS has shown promising implications for future research, nursing education, and nursing practice.

    Study on the Unconventional Water Subsidy Policy in the Arid Area of Northwest China

    The arid regions of Northwest China are facing water shortages and ecological fragility. Making full use of unconventional water is one of the effective ways of solving water issues and achieving high-quality regional development. The high cost of unconventional water utilization is the main obstacle to its uptake and technological development, and subsidy policy may become a breaking point. Taking the Ningdong Energy and Chemical Industry Base (NECI Base) as a case study, the article proposes raising the Yellow River water price to subsidize the utilization of mine water, so that the development and utilization of mine water can be effectively improved. Considering the optimal allocation of multiple water sources and the substitution relationship between Yellow River water and mine water, this paper extends the water resources module (WRM) of the Computable General Equilibrium (CGE) model. The model can reflect the substitution of water sources and the linkage between water prices and the economy. Ten different subsidy policy scenarios are simulated through the extended CGE model, and the laws and mechanisms of the subsidy policy's effects on the economy and water usage are summarized. The results show that increasing the price of Yellow River water by 8% to subsidize mine water achieves optimal socio-economic output. Under this scenario, the industrial value added (IVA) is basically unaffected, the water-use efficiency (WUE) is significantly improved, and the affordability of the enterprise is satisfied. Yellow River water usage decreased from 319.03 million m³ (Mm³) to 283.58 Mm³ (an 11.1% saving), and mine water usage increased from 27.88 Mm³ to 47.15 Mm³ (a 69.1% increase).

    Reconciling opposite trends in the observed and simulated equatorial Pacific zonal sea surface temperature gradient

    The reasons for large discrepancies between observations and simulations, as well as for uncertainties in projections of the equatorial Pacific zonal sea surface temperature (SST) gradient, are controversial. We used CMIP6 models and large ensemble simulations to show that model bias and internal variability affected, i.e., strengthened, the SST gradient between 1981 and 2010. The underestimation of strengthened trends in the southeast trade wind belt, the insufficient cooling effect of eastern Pacific upwelling, and the excessive westward extension of the climatological cold tongue in models jointly caused a weaker SST gradient than the recent observations. The phase transformation of the Interdecadal Pacific Oscillation (IPO) could explain ~51% of the observed SST gradient strengthening. After adjusting the random IPO phase to the observed IPO change, the adjusted SST gradient trends were closer to observations. We further constrained the projection of SST gradient change by using climate models' ability to reproduce the historical SST gradient intensification or the phase of the IPO. These models suggest a weakened SST gradient in the middle of the twenty-first century.

    Evaluation of temporal compositing algorithms for annual land cover classification using Landsat time series data

    In this paper, four widely used temporal compositing algorithms, i.e., median, maximum NDVI, medoid, and weighted scoring-based algorithms, were evaluated for annual land cover classification using monthly Landsat time series data. Four study areas located in California, Texas, Kansas, and Minnesota, USA were selected for image compositing and land cover classification. Results indicated that images composited using the weighted scoring-based algorithms have the best spatial fidelity of the four. In addition, the weighted scoring-based algorithms have superior classification accuracy, followed by median, maximum NDVI, and medoid in descending order. However, the median algorithm has a significant advantage in computational efficiency, being ~70 times faster than the weighted scoring-based algorithms, with overall classification accuracy only slightly lower (~0.13% on average). We therefore recommend the weighted scoring-based compositing algorithms for small-area land cover mapping, and the median compositing algorithm for large-area land cover mapping, balancing computational complexity against classification accuracy. The findings of this study provide insights into the performance differences between compositing algorithms and can inform the selection of pixel-based image compositing techniques for land cover mapping based on Landsat time series data.
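    The simplest of the four algorithms, per-pixel median compositing, can be sketched as follows; array sizes are illustrative, and cloud-masked observations are represented as NaN so they are skipped in the composite.

```python
# Per-pixel median composite of a monthly image stack (single band).
# Masked (cloudy) observations are NaN and excluded via nanmedian.
import numpy as np

def median_composite(stack):
    """stack: (time, rows, cols) float array with NaN for masked pixels."""
    return np.nanmedian(stack, axis=0)

rng = np.random.default_rng(0)
stack = rng.uniform(0.0, 1.0, size=(12, 2, 2))  # 12 months, 2x2 pixels
stack[0:3, 0, 0] = np.nan                       # simulate cloudy observations
composite = median_composite(stack)
print(composite.shape)                          # -> (2, 2)
```

    The medoid and weighted scoring-based alternatives compared in the paper differ in how they pick or weight observations per pixel, which is what buys their better spatial fidelity at higher computational cost.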