615 research outputs found

    Hunting a New Ocean Tracer


    Racial differences in neurocognitive outcomes post-stroke: The impact of healthcare variables

    Objectives: The present study examined differences in neurocognitive outcomes among non-Hispanic Black and White stroke survivors using the NIH Toolbox-Cognition Battery (NIHTB-CB), and investigated the roles of healthcare variables in explaining racial differences in neurocognitive outcomes post-stroke. Methods: One hundred seventy adults (91 Black; 79 White) who participated in a multisite study were included (age: M = 56.4, SD = 12.6; education: M = 13.7, SD = 2.5; 50% male; years post-stroke: 1–18; stroke type: 72% ischemic, 28% hemorrhagic). Neurocognitive function was assessed with the NIHTB-CB, using demographically corrected norms. Participants completed measures of socio-demographic characteristics, health literacy, and healthcare use and access. Stroke severity was assessed with the Modified Rankin Scale. Results: An independent-samples t test indicated that Blacks showed more neurocognitive impairment (NIHTB-CB Fluid Composite T-score: M = 37.63, SD = 11.67) than Whites (Fluid T-score: M = 42.59, SD = 11.54; p = .006). This difference remained significant after adjusting for reading level (NIHTB-CB Oral Reading), and when stratified by stroke severity. Blacks also scored lower on health literacy, reported differences in insurance type, and reported decreased confidence in the doctors treating them. Multivariable models adjusting for reading level and injury severity showed that health literacy and insurance type were statistically significant predictors of the Fluid cognitive composite (p < .001 and p = .02, respectively) and significantly mediated racial differences in neurocognitive impairment. Conclusions: We replicated prior work showing that Blacks are at increased risk for poorer neurocognitive outcomes post-stroke than Whites. Health literacy and insurance type might be important modifiable factors influencing these differences. (JINS, 2017, 23, 640–652)
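The headline group comparison is an independent-samples t test on Fluid Composite T-scores. A minimal sketch of that test, on synthetic data generated to match the published group means, SDs, and sample sizes (these are illustrative draws, not the study data):

```python
# Synthetic illustration of the reported group comparison (not the study data).
# Group parameters are taken from the abstract: Black M=37.63, SD=11.67, n=91;
# White M=42.59, SD=11.54, n=79.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
black = rng.normal(37.63, 11.67, size=91)   # simulated Fluid Composite T-scores
white = rng.normal(42.59, 11.54, size=79)

t, p = stats.ttest_ind(black, white)        # independent-samples t test
print(f"t = {t:.2f}, p = {p:.4f}")
```

With the published effect size, such a test is well powered at n = 170; the mediation analyses in the abstract go further by modelling health literacy and insurance type as intermediate variables.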

    Bayesian Gaussian distributional regression models for more efficient norm estimation

    A test score on a psychological test is usually expressed as a normed score, representing its position relative to test scores in a reference population. Normed scores typically depend on predictor(s) such as age. The test score distribution conditional on predictors is estimated using regression, which may need large normative samples to estimate the relationships between the predictor(s) and the distribution characteristics properly. In this study, we examine to what extent this burden can be alleviated by using prior information in the estimation of new norms with Bayesian Gaussian distributional regression. In a simulation study, we investigate to what extent this norm estimation is more efficient and how robust it is to prior model deviations. We varied the prior type, prior misspecification, and sample size. In our simulated conditions, using a fixed effects prior resulted in more efficient norm estimation than a weakly informative prior as long as the prior misspecification was not age dependent. With the proposed method and reasonable prior information, the same norm precision can be achieved with a smaller normative sample, at least in empirical problems similar to our simulated conditions. This may help test developers to achieve cost-efficient high-quality norms. The method is illustrated using empirical normative data from the IDS-2 intelligence test.
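The efficiency gain from informative priors can be seen already in the simplest conjugate case: a normal mean with known SD, where the posterior precision is the sum of prior and data precisions. A minimal sketch under hypothetical numbers (the paper's distributional regression is far richer, with predictor-dependent means and SDs):

```python
# Conjugate-normal illustration of why an informative prior buys norm precision.
# All numbers are hypothetical; this is not the authors' model.
import numpy as np

def posterior(prior_mean, prior_sd, data, sigma):
    """Posterior mean/SD for a normal mean with known data SD `sigma`."""
    n = len(data)
    prior_prec = 1.0 / prior_sd**2
    data_prec = n / sigma**2
    post_var = 1.0 / (prior_prec + data_prec)           # precisions add
    post_mean = post_var * (prior_prec * prior_mean + data_prec * np.mean(data))
    return post_mean, np.sqrt(post_var)

rng = np.random.default_rng(1)
sample = rng.normal(100, 15, size=25)                   # small normative sample

# A well-specified informative prior (e.g. from an earlier norming study)
# yields a tighter posterior than a weakly informative one at the same n.
_, sd_info = posterior(100, 3.0, sample, 15)
_, sd_weak = posterior(100, 50.0, sample, 15)
print(sd_info < sd_weak)
```

The same logic underlies the paper's finding: a fixed-effects prior acts like extra pseudo-observations, so fewer real normative cases are needed, provided the prior is not misspecified in an age-dependent way.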

    Generating Shigella that internalize into glioblastoma cells

    Introduction: The use of microorganisms as drug delivery systems to treat cancer has expanded recently, including FDA approval of certain viruses as oncolytics. Microorganisms have several unique benefits compared to traditional pharmacologic agents, including dose independence, the ability to produce therapeutic proteins locally within the tumor, and simplicity of administration. However, current microbial delivery systems such as AAV9 and herpes virus have limited cassette sizes, minimal cancer cell selectivity, and low innate cytotoxicity. To address these issues, we sought to generate a strain of Shigella flexneri that selectively internalizes into glioblastoma (GBM) brain tumor cells as an initial step toward a bacterial-based drug delivery system. Methods: We generated S. flexneri that selectively internalize into GBM cells using iterative co-culture assays. Results: After 50 rounds of co-culture, the new strain infected 95 percent of GBM cells in 2 hours. GBM-infecting Shigella demonstrate a 124-fold preference for internalizing in nine different GBM cell lines compared to normal astrocyte (NA) controls. Additionally, we developed an in-cell western to identify GBM-infecting Shigella clones that preferentially internalize in patient samples without iterative co-culture. Finally, we demonstrate that internalization into GBM cells is mediated via a factor modified by myristoylation. Discussion: In conclusion, we present a novel bacterial platform that preferentially internalizes in brain tumor cells. This system provides numerous potential benefits over current interventions and other microbial strategies for treating brain tumors.
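The iterative co-culture scheme works because any heritable internalization advantage compounds across selection rounds. A toy model of that enrichment dynamic, with made-up rates and round counts (this is not the authors' protocol, just the selection arithmetic):

```python
# Toy enrichment model: each round, internalizing bacteria are recovered at
# `advantage`-fold the rate of non-internalizers and re-expanded. The starting
# fraction (1%) and 1.5-fold advantage are invented for illustration.
def enrich(frac_internalizing, advantage, rounds):
    f = frac_internalizing
    for _ in range(rounds):
        f = (f * advantage) / (f * advantage + (1 - f))
    return f

print(round(enrich(0.01, 1.5, 20), 3))   # → 0.971
```

Even a modest per-round advantage drives a rare phenotype to near-fixation within tens of rounds, which is why 50 rounds sufficed to produce a strain infecting 95 percent of GBM cells.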

    Financial Transaction Tax: Small is Beautiful

    The case for taxing financial transactions merely to raise more revenues from the financial sector is not particularly strong. Better alternatives to tax the financial sector are likely to be available. However, a tax on financial transactions could be justified in order to limit socially undesirable transactions when more direct means of doing so are unavailable for political or practical reasons. Some financial transactions are indeed likely to do more harm than good, especially when they contribute to the systemic risk of the financial system. However, such a financial transaction tax should be very small, much smaller than the negative externalities in question, because it is a blunt instrument that also drives out socially useful transactions. There is a case for taxing over-the-counter derivative transactions at a somewhat higher rate than exchange-based derivative transactions. More targeted remedies to drive out socially undesirable transactions should be sought in parallel; once implemented, they would allow financial transaction taxes to be reduced or even phased out.

    The Baltic Sea Tracer Release Experiment. Part I: Mixing rates

    In this study, results from the Baltic Sea Tracer Release Experiment (BATRE) are described, in which deep water mixing rates and mixing processes in the central Baltic Sea were investigated. In September 2007, an inert tracer gas (CF3SF5) was injected at approximately 200 m depth in the Gotland Basin, and the subsequent spreading of the tracer was observed during six surveys until February 2009. These data describe the diapycnal and lateral mixing during a stagnation period without any significant deep water renewal due to inflow events. As one of the main results, vertical mixing rates were found to increase dramatically after the tracer had reached the lateral boundaries of the basin, suggesting boundary mixing as the key process for basin-scale vertical mixing. Basin-scale vertical diffusivities were of the order of 10⁻⁵ m² s⁻¹ (about 1 order of magnitude larger than interior diffusivities), with evidence for seasonal and vertical variability. In contrast to tracer experiments in the open ocean, the basin geometry (hypsography) was found to have a crucial impact on the vertical tracer spreading. The e-folding time scale for deep water renewal due to mixing was slightly less than 2 years; the time scale for the lateral homogenization of the tracer patch was of the order of a few months. Key Points: Mixing rates in the Gotland Basin are dominated by boundary mixing processes; The time scale for Gotland Basin deep water renewal is approximately 2 years; Mixing rates determined from the tracer CF3SF5
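An e-folding renewal time scale like the one quoted above can be recovered from a decaying tracer series by a log-linear fit. A minimal sketch on synthetic, noise-free data (the 1.9-year decay constant and survey schedule below are invented for illustration):

```python
# Estimate an e-folding time scale tau from c(t) = exp(-t / tau) by fitting a
# straight line to log-concentration. Data are synthetic, not BATRE observations.
import numpy as np

t_years = np.linspace(0, 1.5, 7)        # seven surveys over ~1.5 years (assumed)
true_tau = 1.9                          # e-folding time in years (assumed)
conc = np.exp(-t_years / true_tau)      # idealized tracer decay

slope, _ = np.polyfit(t_years, np.log(conc), 1)
tau_hat = -1.0 / slope
print(f"estimated e-folding time: {tau_hat:.2f} years")
```

With real survey data the concentrations carry noise and the inventory must first be integrated over the basin hypsography, but the log-linear fit is the core of the time-scale estimate.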

    Energy- and flux-budget (EFB) turbulence closure model for the stably stratified flows. Part I: Steady-state, homogeneous regimes

    We propose a new turbulence closure model based on the budget equations for the key second moments: turbulent kinetic and potential energies, TKE and TPE (comprising the turbulent total energy, TTE = TKE + TPE), and vertical turbulent fluxes of momentum and buoyancy (proportional to potential temperature). Besides the concept of TTE, we take into account the non-gradient correction to the traditional buoyancy flux formulation. The proposed model ensures the existence of turbulence at any gradient Richardson number, Ri. Instead of a critical value separating, as usually assumed, the turbulent and the laminar regimes, it reveals a transition interval, 0.1 < Ri < 1, which separates two regimes of essentially different nature but both turbulent: strong turbulence at Ri << 1; and weak turbulence, capable of transporting momentum but much less efficient in transporting heat, at Ri > 1. Predictions from this model are consistent with available data from atmospheric and lab experiments, direct numerical simulation (DNS) and large-eddy simulation (LES). Comment: 40 pages, 6 figures, Boundary-Layer Meteorology, resubmitted, revised version
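The regimes above are organized by the gradient Richardson number, Ri = N²/S², the ratio of squared buoyancy frequency to squared shear. A minimal finite-difference estimate from two-level profile data (the profile values below are invented for illustration):

```python
# Gradient Richardson number from a two-level profile (hypothetical values).
import numpy as np

g, theta0 = 9.81, 290.0                 # gravity (m/s^2), reference pot. temp (K)
z     = np.array([10.0, 50.0])          # heights (m)
theta = np.array([290.0, 291.5])        # potential temperature (K)
u     = np.array([5.0, 8.0])            # wind speed (m/s)

N2 = (g / theta0) * np.diff(theta) / np.diff(z)   # buoyancy frequency squared
S2 = (np.diff(u) / np.diff(z))**2                 # shear squared
Ri = float(N2 / S2)
print(f"Ri = {Ri:.2f}")
```

For these illustrative numbers Ri falls inside the 0.1 < Ri < 1 transition interval, where the closure predicts turbulence that is neither strongly mixing nor laminar.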

    On the structure of maximal solvable extensions and of Levi extensions of nilpotent algebras

    We establish an improved upper estimate on the dimension of any solvable algebra s with its nilradical isomorphic to a given nilpotent Lie algebra n. Next we consider Levi decomposable algebras with a given nilradical n and investigate restrictions on possible Levi factors originating from the structure of characteristic ideals of n. We present a new perspective on Turkowski's classification of Levi decomposable algebras up to dimension 9. Comment: 21 pages; major revision, one section added, another erased; author's version of the published paper

    NWP-based lightning prediction using flexible count data regression

    A method to predict lightning by postprocessing numerical weather prediction (NWP) output is developed for the region of the European Eastern Alps. Cloud-to-ground (CG) flashes – detected by the ground-based Austrian Lightning Detection & Information System (ALDIS) network – are counted on the 18×18 km² grid of the 51-member NWP ensemble of the European Centre for Medium-Range Weather Forecasts (ECMWF). These counts serve as the target quantity in count data regression models for the occurrence of lightning events and flash counts of CG. The probability of lightning occurrence is modelled by a Bernoulli distribution. The flash counts are modelled with a hurdle approach where the Bernoulli distribution is combined with a zero-truncated negative binomial. In the statistical models the parameters of the distributions are described by additive predictors, which are assembled using potentially nonlinear functions of NWP covariates. Measures of location and spread of 100 direct and derived NWP covariates provide a pool of candidates for the nonlinear terms. A combination of stability selection and gradient boosting identifies the nine (three) most influential terms for the parameters of the Bernoulli (zero-truncated negative binomial) distribution, most of which turn out to be associated with either convective available potential energy (CAPE) or convective precipitation. Markov chain Monte Carlo (MCMC) sampling estimates the final model to provide credible inference of effects, scores, and predictions. The selection of terms and MCMC sampling are applied for data of the year 2016, and out-of-sample performance is evaluated for 2017. The occurrence model outperforms a reference climatology – based on 7 years of data – up to a forecast horizon of 5 days. The flash count model is calibrated and also outperforms climatology for exceedance probabilities, quantiles, and full predictive distributions.
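The hurdle likelihood described above combines a Bernoulli occurrence probability with a zero-truncated negative binomial for the positive flash counts. A minimal sketch of that probability mass function, with arbitrary parameter values (the paper estimates them from NWP covariates via additive predictors):

```python
# Hurdle pmf: Bernoulli(pi) occurrence + zero-truncated negative binomial for
# counts >= 1. Parameter values here are arbitrary, for illustration only.
import numpy as np
from scipy import stats

def hurdle_pmf(y, pi, mu, alpha):
    """P(Y = y) under a Bernoulli(pi) hurdle with ZTNB(mu, alpha) counts."""
    # scipy's nbinom uses (n, p); convert from mean mu and dispersion alpha
    n = 1.0 / alpha
    p = n / (n + mu)
    if y == 0:
        return 1.0 - pi                            # no lightning occurs
    p0 = stats.nbinom.pmf(0, n, p)                 # NB mass at zero
    return pi * stats.nbinom.pmf(y, n, p) / (1.0 - p0)   # zero-truncation

pmf = [hurdle_pmf(y, pi=0.3, mu=4.0, alpha=0.5) for y in range(200)]
print(round(sum(pmf), 4))   # → 1.0 (the hurdle pmf is a proper distribution)
```

Separating the occurrence and count components lets different covariates drive "will there be lightning?" and "how many flashes?", which is exactly what the stability selection in the paper exploits (nine terms for the Bernoulli part, three for the zero-truncated negative binomial).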