
    Design and Rationale of the Cognitive Intervention to Improve Memory in Heart Failure Patients Study

    BACKGROUND: Memory loss is an independent predictor of mortality among heart failure patients. Between 23% and 50% of heart failure patients have comorbid memory loss, but few interventions are available to treat it. The aims of this 3-arm randomized controlled trial were to (1) evaluate the efficacy of a computerized cognitive training intervention, BrainHQ, in improving the primary outcomes of memory and serum brain-derived neurotrophic factor levels and the secondary outcomes of working memory, instrumental activities of daily living, and health-related quality of life among heart failure patients; (2) evaluate the incremental cost-effectiveness of BrainHQ; and (3) examine depressive symptoms and genomic moderators of the BrainHQ effect. METHODS: A sample of 264 heart failure patients, stratified into 4 equal-sized blocks (normal/low baseline cognitive function and gender), will be randomly assigned to (1) BrainHQ, (2) an active control of computer-based crossword puzzles, or (3) a usual care control group. BrainHQ is an 8-week, 40-hour program individualized to each patient's performance. Data collection will be completed at baseline and at 10 weeks, 4 months, and 8 months. Descriptive statistics, mixed model analyses, and cost-utility analysis using an intent-to-treat approach will be computed. CONCLUSIONS: This research will provide new knowledge about the efficacy of BrainHQ to improve memory and increase serum brain-derived neurotrophic factor levels in heart failure. If efficacious, the intervention will provide a new, easily disseminated therapeutic approach to treat a serious comorbid condition of heart failure.
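
    The stratified allocation described above (264 patients split across 4 strata defined by baseline cognitive function and gender, then assigned 1:1:1 to the three arms) can be outlined as follows; this is an illustrative sketch with hypothetical names, not the trial's actual randomization procedure.

    ```python
    # Illustrative sketch of stratified 1:1:1 randomization (hypothetical names,
    # not the trial's actual allocation code): 264 participants in 4 equal strata
    # of 66, each stratum balanced across the three study arms.
    import random

    ARMS = ["BrainHQ", "crossword_active_control", "usual_care"]
    STRATA = [
        ("normal_cognition", "female"), ("normal_cognition", "male"),
        ("low_cognition", "female"), ("low_cognition", "male"),
    ]

    def allocate_stratum(n_per_stratum=66, seed=0):
        """Return a shuffled, balanced list of arm assignments for one stratum."""
        rng = random.Random(seed)
        assignments = ARMS * (n_per_stratum // len(ARMS))  # 22 per arm
        rng.shuffle(assignments)
        return assignments

    allocation = {s: allocate_stratum(seed=i) for i, s in enumerate(STRATA)}
    for stratum, arms in allocation.items():
        print(stratum, {arm: arms.count(arm) for arm in ARMS})
    ```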

    Instrumental neutron activation analysis of an enriched 28Si single-crystal

    The determination of the Avogadro constant plays a key role in the redefinition of the kilogram in terms of a fundamental constant. The present experiment makes use of a silicon single crystal highly enriched in 28Si that must have a total impurity mass fraction smaller than a few parts in 10^9. To verify this requirement, we previously developed a relative analytical method based on neutron activation for the elemental characterization of a sample of the precursor natural silicon crystal WASO 04. The method is now extended to fifty-nine elements and applied to a monoisotopic 28Si single crystal that was grown to test the achievable enrichment. Since this crystal was likely contaminated, the measurement also tested the detection capabilities of the analysis. The results quantified contamination by Ge, Ga, As, Tm, Lu, Ta, W, and Ir and, for a number of the detectable elements, demonstrated that we can already reach the targeted 1 ng/g detection limit.
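
    For context, the relative (comparator) standardization on which such an analysis is typically based can be written as below; the notation is assumed here and is not taken from the paper.

    ```latex
    % Relative INAA standardization (generic form, notation assumed): the mass
    % fraction of analyte a in the sample follows from a co-irradiated standard,
    % so neutron flux and nuclear data cancel.
    w_{a,\mathrm{sample}}
      = w_{a,\mathrm{std}}\,
        \frac{\left(A_a/m\right)_{\mathrm{sample}}}
             {\left(A_a/m\right)_{\mathrm{std}}}
    % A_a : decay-corrected count rate of the activation product of element a
    % m   : mass of the sample or standard
    % w   : mass fraction of element a
    ```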

    Modal analysis of the lysozyme protein considering all-atom and coarse-grained finite element models

    Proteins are the fundamental entities of several organic activities. They are essential for a broad range of tasks, and their shapes and folding processes are crucial to achieving proper biological functions. Low-frequency modes, generally associated with collective movements at terahertz (THz) and sub-terahertz frequencies, have been identified as critical for the conformational processes of many proteins. Dynamic simulations, such as molecular dynamics, are widely applied by biochemical researchers in this field. In recent years, however, approaches that model the protein as a simplified elastic macrostructure have shown appealing results for this type of problem. In this context, modal analysis based on different modelization techniques, i.e., considering both an all-atom (AA) and a coarse-grained (CG) representation, is proposed to analyze the hen egg-white lysozyme. This work presents new considerations and conclusions compared with previous analyses. Experimental values of the B-factor, considering all heavy atoms or only one representative point per amino acid, are used to evaluate the validity of the numerical solutions. In general terms, this comparison allows the assessment of the regional flexibility of the protein. In addition, the low computational requirements make this approach a quick method to extract the dynamic properties of the protein under scrutiny.
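
    The comparison with experimental B-factors mentioned above typically relies on the standard relation between the crystallographic B-factor and the mean-square fluctuation predicted by the normal modes; a generic form (notation assumed, not taken from the paper) is:

    ```latex
    % Crystallographic B-factor vs. mean-square fluctuation from normal modes
    % (generic relation, notation assumed):
    B_i = \frac{8\pi^2}{3}\,\langle \Delta \mathbf{r}_i^2 \rangle ,
    \qquad
    \langle \Delta \mathbf{r}_i^2 \rangle
      \approx k_B T \sum_{k=7}^{3N} \frac{\lVert \mathbf{a}_{ik} \rVert^2}{m_i\,\omega_k^2}
    % B_i         : B-factor of atom (or residue representative) i
    % a_{ik}      : component of the k-th mass-weighted eigenvector on atom i
    % omega_k     : angular frequency of mode k (the six rigid-body modes are skipped)
    % m_i, k_B, T : atomic mass, Boltzmann constant, absolute temperature
    ```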

    Intercomparison of oceanic and atmospheric forced and coupled mesoscale simulations. Part I: Surface fluxes

    A mesoscale non-hydrostatic atmospheric model has been coupled with a mesoscale oceanic model. The case study is a four-day simulation of a strong storm event observed during the SEMAPHORE experiment over a 500 × 500 km² domain. This domain encompasses a thermohaline front associated with the Azores current. In order to analyze the effect of mesoscale coupling, three simulations are compared: the first with the atmospheric model forced by realistic sea surface temperature analyses; the second with the ocean model forced by atmospheric fields derived from weather forecast re-analyses; and the third with the two models coupled. For all three simulations the surface fluxes were computed with the same bulk parametrization. All three simulations succeed in representing the main oceanic and atmospheric features observed during the storm. Comparison of surface fields with in situ observations reveals that the winds of the fine-mesh atmospheric model are more realistic than those of the weather forecast re-analyses. The low-level winds simulated with the atmospheric model in the forced and coupled simulations are appreciably stronger than the re-analyzed winds, and they also generate stronger fluxes. The coupled simulation has the strongest surface heat fluxes: the difference in the net heat budget with respect to the ocean-forced simulation reaches on average 50 W m⁻² over the simulation period. Sea surface temperature cooling is too weak in both simulations, but it is improved in the coupled run and better matches the cooling observed with drifters. The spatial distributions of sea surface temperature cooling and surface fluxes are strongly inhomogeneous over the simulation domain. The amplitude of the flux variation is largest in the coupled run. Moreover, the weak correlation between the cooling and heat flux patterns indicates that the surface fluxes are not responsible for the whole cooling and suggests that the response of the ocean mixed layer to the atmosphere is highly non-local and enhanced in the coupled simulation.
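
    The "same bulk parametrization" used for all three runs is not spelled out in the abstract; for orientation, the generic bulk aerodynamic forms of the surface fluxes are given below (notation assumed, and the specific SEMAPHORE parametrization may differ in its transfer coefficients).

    ```latex
    % Generic bulk aerodynamic formulas for air-sea surface fluxes (notation assumed):
    \tau = \rho_a\,C_D\,U^2, \qquad
    H    = \rho_a\,c_p\,C_H\,U\,(\theta_s - \theta_a), \qquad
    LE   = \rho_a\,L_v\,C_E\,U\,(q_s - q_a)
    % tau, H, LE    : momentum, sensible-heat and latent-heat fluxes
    % U             : near-surface wind speed
    % theta, q      : potential temperature and specific humidity (surface s, air a)
    % C_D, C_H, C_E : stability-dependent transfer coefficients
    ```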

    Do longer sequences improve the accuracy of identification of forensically important Calliphoridae species?

    Species identification is a crucial step in forensic entomology. In several cases the calculation of the larval age allows the estimation of the minimum Post-Mortem Interval (mPMI), and a correct identification of the species is the first step toward a correct mPMI estimation. To overcome the difficulties of morphological identification, especially of the immature stages, a molecular approach can be applied. However, separation of closely related species remains an unsolved problem. Sequences of 4 different genes (COI, ND5, EF-1α, PER) from 13 different fly species collected during forensic experiments (Calliphora vicina, Calliphora vomitoria, Lucilia sericata, Lucilia illustris, Lucilia caesar, Chrysomya albiceps, Phormia regina, Cynomya mortuorum, Sarcophaga sp., Hydrotaea sp., Fannia scalaris, Piophila sp., Megaselia scalaris) were evaluated for their capability to correctly identify the species. Three concatenated sequences were obtained by combining the four genes in order to verify whether longer sequences increase the probability of a correct identification. The results showed that this rule does not hold for the species L. caesar and L. illustris. Future work on other DNA regions is suggested to solve this taxonomic issue.
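
    The concatenation step described above, joining the four single-gene sequences of each specimen into one longer marker, can be sketched as follows; the function and data layout are hypothetical and do not reproduce the study's pipeline.

    ```python
    # Hypothetical sketch: build concatenated markers per specimen from
    # single-gene sequences, mirroring the idea of combining COI, ND5,
    # EF-1alpha and PER into longer sequences for identification.
    GENES = ["COI", "ND5", "EF1a", "PER"]

    def concatenate(per_gene_seqs):
        """per_gene_seqs[gene][specimen] -> sequence; returns specimen -> concatenated sequence."""
        specimens = set.intersection(*(set(per_gene_seqs[g]) for g in GENES))
        return {
            sp: "".join(per_gene_seqs[g][sp] for g in GENES)
            for sp in sorted(specimens)
        }

    # Toy fragments only, to show the data shape:
    toy = {
        "COI":  {"Lucilia_caesar_01": "ATGGC", "Lucilia_illustris_01": "ATGGT"},
        "ND5":  {"Lucilia_caesar_01": "TTAGC", "Lucilia_illustris_01": "TTAGC"},
        "EF1a": {"Lucilia_caesar_01": "GGCAA", "Lucilia_illustris_01": "GGCAA"},
        "PER":  {"Lucilia_caesar_01": "CCTTA", "Lucilia_illustris_01": "CCTTA"},
    }
    print(concatenate(toy))
    ```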

    Comparative genome-wide analysis of repetitive DNA in the genus Populus L.

    Genome skimming was performed, using Illumina sequence reads, in order to obtain a detailed comparative picture of the repetitive component of the genome of Populus species. Read sets of seven Populus and two Salix species (as outgroups) were subjected to clustering using RepeatExplorer (Novák et al., BMC Bioinformatics 11:378, 2010). The repetitive portion of the genome ranged from 33.8% in Populus nigra to 46.5% in Populus tremuloides. The large majority of repetitive sequences were long terminal repeat (LTR) retrotransposons. Gypsy elements were over-represented compared with Copia elements, with a mean Gypsy-to-Copia ratio of 6.7:1. Satellite DNAs showed a mean genome proportion of 2.2%. DNA transposons and ribosomal DNA showed genome proportions of 1.8% and 1.9%, respectively. The other repeat types accounted for less than 1% each. LTR retrotransposons were further characterized by identifying the lineage to which they belong and studying the proliferation times of each lineage in the different species. The most abundant lineage was Athila, which showed large differences among species. Concerning Copia lineages, similar transpositional profiles were observed among all the analysed species; by contrast, differences in the transpositional peaks of Gypsy lineages were found. The genome proportions of repeats were compared across the seven species, and a phylogenetic tree was built, showing species separation according to the botanical section to which each species belongs, although significant differences could be found within sections, possibly related to the different geographical origins of the species. Overall, the data indicate that the repetitive component of the genome in the poplar genus is still rapidly evolving.
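
    As a hedged sketch of how genome proportions of repeat types are commonly derived from read clustering (the fraction of analysed reads assigned to clusters of each annotation), the snippet below uses a hypothetical cluster table; it is not RepeatExplorer's actual output format.

    ```python
    # Hedged sketch (hypothetical cluster table): estimate the genome proportion
    # of each repeat type as the percentage of analysed reads falling into
    # clusters annotated with that repeat type.
    from collections import defaultdict

    def genome_proportions(clusters, total_reads):
        """clusters: iterable of (annotation, n_reads) per cluster; returns annotation -> percent."""
        by_type = defaultdict(int)
        for annotation, n_reads in clusters:
            by_type[annotation] += n_reads
        return {ann: 100.0 * n / total_reads for ann, n in by_type.items()}

    example = [("LTR/Gypsy", 120_000), ("LTR/Copia", 18_000), ("satellite", 9_000)]
    print(genome_proportions(example, total_reads=400_000))
    ```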

    Evaluation of the pharmacodynamic and pharmacokinetic interaction between pagoclone and ethanol

    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/109936/1/cptclpt2003353.pd

    Hyperparameter optimization for recommender systems through Bayesian optimization

    Recommender systems represent one of the most successful applications of machine learning in B2C online services, helping users make choices in many web services. A recommender system aims to predict user preferences from a huge amount of data, essentially the past behaviour of the user, using an efficient prediction algorithm. One of the most widely used is the matrix-factorization algorithm. Like many machine learning algorithms, its effectiveness depends on the tuning of its hyper-parameters; the associated optimization problem is called hyper-parameter optimization. This is a noisy, time-consuming, black-box optimization problem: the objective function maps any possible hyper-parameter configuration to a numeric score quantifying the algorithm's performance. In this work, we show how Bayesian optimization can help the tuning of three hyper-parameters: the number of latent factors, the regularization parameter, and the learning rate. Numerical results are obtained on a benchmark problem and show that Bayesian optimization obtains a better result than both the default setting of the hyper-parameters and random search.
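
    A minimal sketch of Bayesian optimization over the three hyper-parameters named above, using scikit-optimize; the search ranges and the synthetic objective are assumptions for illustration, not the paper's benchmark or implementation.

    ```python
    # Bayesian optimization of three matrix-factorization hyper-parameters with
    # scikit-optimize (assumed ranges; synthetic noisy objective stands in for
    # training the recommender and returning a held-out error such as RMSE).
    import math
    import random

    from skopt import gp_minimize
    from skopt.space import Integer, Real

    search_space = [
        Integer(8, 256, name="n_latent_factors"),
        Real(1e-5, 1e-1, prior="log-uniform", name="regularization"),
        Real(1e-4, 1e-1, prior="log-uniform", name="learning_rate"),
    ]

    def objective(params):
        """Stand-in for fitting and validating the recommender; lower is better."""
        n_factors, reg, lr = params
        # Synthetic, noisy score so the sketch runs end-to-end.
        return ((math.log10(lr) + 2.5) ** 2
                + (math.log10(reg) + 3.0) ** 2
                + abs(n_factors - 64) / 256
                + random.gauss(0, 0.05))

    result = gp_minimize(objective, search_space, n_calls=30, random_state=0)
    print("best configuration:", result.x, "best score:", round(result.fun, 3))
    ```

    In practice the objective would train the matrix-factorization model with the proposed configuration and return its validation error, which is exactly the noisy black-box setting the abstract describes.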

    Blood Pressure and Cognitive Decline Over 8 Years in Middle-Aged and Older Black and White Americans

    Although the association between high blood pressure (BP), particularly in midlife, and late-life dementia is known, less is known about variations by race and sex. In a prospective national study of 22 164 black and white adults aged ≥45 years without baseline cognitive impairment or stroke from the REGARDS cohort study (Reasons for Geographic and Racial Differences in Stroke), enrolled 2003 to 2007 and followed through September 2015, we measured changes in cognition associated with baseline systolic and diastolic BP (SBP and DBP), as well as pulse pressure (PP) and mean arterial pressure, and we tested whether age, race, and sex modified the effects. Outcomes were global cognition (Six-Item Screener; primary outcome), new learning (Word List Learning), verbal memory (Word List Delayed Recall), and executive function (Animal Fluency Test). Median follow-up was 8.1 years. Significantly faster declines in global cognition were associated with higher SBP, lower DBP, and higher PP with increasing age (P<0.001 for the age×SBP×follow-up-time, age×DBP×follow-up-time, and age×PP×follow-up-time interactions). Declines in global cognition were not associated with mean arterial pressure after adjusting for PP. Blacks, compared with whites, had faster declines in global cognition associated with SBP (P=0.02) and mean arterial pressure (P=0.04). Men, compared with women, had faster declines in new learning associated with SBP (P=0.04). BP was not associated with decline of verbal memory or executive function after controlling for the effect of age on cognitive trajectories. Significantly faster declines in global cognition over 8 years were associated with higher SBP, lower DBP, and higher PP with increasing age, and SBP-related cognitive declines were greater in blacks and men.
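
    A hedged sketch of the kind of longitudinal mixed-model analysis described above, with the age × SBP × follow-up-time interaction and participant-level random effects, using statsmodels; the variable names and simplified covariate set are hypothetical, not the study's actual model specification.

    ```python
    # Hedged sketch (hypothetical variable names, simplified covariates): a linear
    # mixed model for repeated cognitive scores with a three-way
    # SBP x age x follow-up-time interaction; random intercept and time slope
    # per participant.
    import pandas as pd
    import statsmodels.formula.api as smf

    def fit_cognition_model(df: pd.DataFrame):
        """df: long format, one row per participant-visit, with columns
        'global_cognition', 'sbp', 'age', 'time', 'participant_id'."""
        model = smf.mixedlm(
            "global_cognition ~ sbp * age * time",   # interaction of interest
            data=df,
            groups=df["participant_id"],             # random intercept per participant
            re_formula="~time",                      # random slope for follow-up time
        )
        return model.fit()

    # result = fit_cognition_model(cohort_long)     # hypothetical long-format dataframe
    # print(result.summary())
    ```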