Professional and community satisfaction with the Brazilian family health strategy
OBJECTIVE: To analyze the strengths and limitations of the Family Health Strategy from the perspective of health care professionals and the community. METHODS: Between June and August 2009, in the city of Vespasiano, Minas Gerais State, Southeastern Brazil, a questionnaire was used to evaluate the Family Health Strategy (ESF) with 77 health care professionals and 293 caregivers of children under five. Health care professional training, community access to health care, communication with patients, and delivery of health education and pediatric care were the main points of interest in the evaluation. Logistic regression analysis was used to obtain odds ratios and 95% confidence intervals and to assess the statistical significance of the variables studied. RESULTS: The majority of health care professionals reported that their program training was insufficient in quantity, content, and method of delivery. Caregivers and professionals identified similar weaknesses (services not accessible to the community, lack of health care professionals, poor training for professionals) and strengths (community health worker-patient communication, provision of educational information, and pediatric care). Recommendations for improvement included more doctors and specialists, more and better training, and scheduling improvements. Caregiver satisfaction with the ESF was related to perceived benefits such as community health agent household visits (OR 5.8, 95%CI 2.8;12.1), good professional-patient relationships (OR 4.8, 95%CI 2.5;9.3), and family-focused health care (OR 4.1, 95%CI 1.6;10.2), and to perceived problems such as lack of personnel (OR 0.3, 95%CI 0.2;0.6), difficulty with access (OR 0.2, 95%CI 0.1;0.4), and poor quality of care (OR 0.3, 95%CI 0.1;0.6). Overall, 62% of caregivers reported being generally satisfied with the ESF services. CONCLUSIONS: Identifying the limitations and strengths of the Family Health Strategy from the health care professional and caregiver perspective may serve to advance primary community health care in Brazil.
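As a hedged illustration of the analysis described above, the sketch below fits a logistic regression with statsmodels and exponentiates the coefficients to obtain odds ratios with 95% confidence intervals. The dataset and variable names (satisfied, household_visits, access_difficulty) are invented stand-ins, not the study's data.

```python
# A minimal sketch, assuming a binary satisfaction outcome and binary
# predictors; the data and names below are hypothetical, not the study's.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "satisfied":         [1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1],
    "household_visits":  [1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1],
    "access_difficulty": [0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1],
})

# Fit the logistic model; the intercept is added explicitly.
X = sm.add_constant(df[["household_visits", "access_difficulty"]])
fit = sm.Logit(df["satisfied"], X).fit(disp=0)

# Exponentiated coefficients are odds ratios; exponentiated confidence
# bounds give 95% CIs like those reported in the abstract above.
summary = pd.concat([np.exp(fit.params).rename("OR"),
                     np.exp(fit.conf_int())], axis=1)
summary.columns = ["OR", "2.5%", "97.5%"]
print(summary)
```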
Absolute risk representation in cardiovascular disease prevention: comprehension and preferences of health care consumers and general practitioners involved in a focus group study
Background
Communicating risk is part of primary prevention of coronary heart disease and stroke, collectively referred to as cardiovascular disease (CVD). In Australia, health organisations have promoted an absolute risk approach, thereby raising the question of suitable standardised formats for risk communication.
Methods
Sixteen formats of risk representation were prepared, including statements, icons, and graphical formats, alone or in combination, and with variable use of colour. All presented the same risk: the absolute risk for a 55-year-old woman, a 16% risk of CVD in five years. Preferences for a five- or ten-year timeframe were explored. Australian GPs and consumers were recruited for participation in focus groups, with the data analysed thematically and preferred formats tallied.
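One common way to present an absolute risk such as the 16% five-year figure above is an icon array. The sketch below is a toy rendering of that idea; the 10x10 layout and symbols are illustrative assumptions, not one of the study's sixteen formats.

```python
# A toy icon array: 16 filled icons out of 100 for a 16% five-year risk.
# The grid layout and symbols are illustrative assumptions.
RISK_PERCENT = 16

icons = ["#"] * RISK_PERCENT + ["."] * (100 - RISK_PERCENT)
for row in range(10):
    print(" ".join(icons[row * 10:(row + 1) * 10]))
print(f"\n{RISK_PERCENT} out of 100 people like this are expected "
      "to develop CVD within five years.")
```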
Results
Three focus groups with health consumers and three with GPs were held, involving 19 consumers and 18 GPs.
Consumers and GPs had similar views on which formats were most easily comprehended and which conveyed a 16% risk as high. A simple summation of preferences yielded three graphical formats (thermometers, vertical bar chart) and one statement format as the top choices. The use of colour to distinguish risk levels (red, yellow, green) and of comparative information (age, sex, smoking status) were important ingredients. Consumers found it helpful when formats combined information, such as colour, the effect of changing behaviour on risk, or comparison with a healthy older person. GPs preferred formats that helped them relate the information about CVD risk to their patients and that could be used to motivate patients to change behaviour.
Several formats were reported as confusing, such as a percentage risk with no contextual information, line graphs, and icons, particularly those with larger numbers.
Whilst consumers and GPs shared preferences, the use of one format for all situations was not recommended. Overall, people across groups felt that risk expressed over five years was preferable to a ten-year risk, the latter being too remote.
Conclusions
Consumers and GPs shared preferences for risk representation formats. Both groups liked the option to combine formats and tailor the risk information to reflect a specific individual's risk, to maximise understanding and provide a good basis for discussion.
Achieving temperature-size changes in a unicellular organism
The temperature-size rule (TSR) is an intraspecific phenomenon describing the phenotypically plastic response of organism size to temperature: individuals reared at cooler temperatures mature into larger adults than those reared at warmer temperatures. The TSR is ubiquitous, affecting >80% of species, including both unicellular and multicellular groups. How the TSR is established has received attention in multicellular organisms, but not in unicells. Further, conceptual models suggest the mechanism of size change differs between these two groups. Here, we test these theories using the protist Cyclidium glaucoma. We measure cell sizes, along with population growth during temperature acclimation, to determine how and when the temperature-size changes are achieved. We show that mother and daughter sizes become temporarily decoupled from the 2:1 ratio during acclimation, but return to their coupled state (where daughter cells are half the size of the mother cell) once acclimated. Thermal acclimation is rapid, being completed within approximately a single generation. Further, we examine the impact of increased temperature on carrying capacity and total biomass, to investigate potential adaptive strategies of size change. We demonstrate no temperature effect on carrying capacity, but show that the maximum supported biomass decreases with increasing temperature.
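To make the 2:1 coupling concrete, here is a minimal sketch that flags when mother and daughter cell sizes deviate from the coupled state during acclimation. All sizes and the 0.2 tolerance are invented for illustration.

```python
# Hypothetical cell sizes (arbitrary units) across an acclimation period;
# the 2:1 mother:daughter coupling breaks transiently, then recovers.
import numpy as np

mother = np.array([2000.0, 2300.0, 2100.0, 2000.0])
daughter = np.array([1000.0, 1350.0, 1200.0, 1010.0])

ratios = mother / daughter
for day, r in enumerate(ratios):
    state = "coupled (~2:1)" if abs(r - 2.0) < 0.2 else "decoupled"
    print(f"generation {day}: mother/daughter = {r:.2f} -> {state}")
```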
Quantitative gait analysis under dual-task in older people with mild cognitive impairment: a reliability study
Background
Reliability of quantitative gait assessment while dual-tasking (walking while doing a secondary task such as talking) in people with cognitive impairment is unknown. Dual-task gait assessment is becoming increasingly important in mobility research with older adults, since it better reflects performance in the basic activities of daily living. Our purpose was to establish the test-retest reliability of assessing quantitative gait variables using an electronic walkway in older adults with mild cognitive impairment (MCI) under single- and dual-task conditions.
Methods
The gait performance of 11 elderly individuals with MCI was evaluated using an electronic walkway (GAITRite® System) in two sessions, one week apart. Six gait parameters (gait velocity, step length, stride length, step time, stride time, and double support time) were assessed under two conditions: single-task (sG: usual walking) and dual-task (dG: counting backwards from 100 while walking). Test-retest reliability was determined using the intra-class correlation coefficient (ICC). Gait variability was measured using the coefficient of variation (CoV).
Results
Eleven participants (average age = 76.6 years, SD = 7.3) were assessed. They were high functioning (Clinical Dementia Rating score = 0.5) with a mean Mini-Mental Status Exam (MMSE) score of 28 (SD = 1.56) and a mean Montreal Cognitive Assessment (MoCA) score of 22.8 (SD = 1.23). Under dual-task conditions, mean gait velocity (GV) decreased significantly (sGV = 119.11 ± 20.20 cm/s; dGV = 110.88 ± 19.76 cm/s; p = 0.005). Additionally, under dual-task conditions, higher gait variability was found for stride time, step time, and double support time. Test-retest reliability was high (ICC > 0.85) for the six parameters evaluated under both conditions.
Conclusion
In older people with MCI, variability of time-related gait parameters increased with dual-tasking, suggesting cognitive control of gait performance. Assessment of quantitative gait variables using an electronic walkway is highly reliable under single- and dual-task conditions. The presence of cognitive impairment did not preclude performance of dual-tasking in our sample, supporting that this methodology can be reliably used in cognitively impaired older individuals.
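For readers who want to see the reliability statistics named above in code, the sketch below computes ICC(2,1) (two-way random effects, absolute agreement, single measurement, per Shrout and Fleiss) and a per-subject coefficient of variation. The gait velocities are invented, and the assumption that the study used this particular ICC model is mine.

```python
# ICC(2,1): two-way random effects, absolute agreement, single measure
# (Shrout & Fleiss). Gait velocities (cm/s) below are invented.
import numpy as np

scores = np.array([   # rows = subjects, columns = sessions 1 and 2
    [118.2, 120.1],
    [105.4, 103.9],
    [132.0, 129.5],
    [ 98.7, 101.2],
    [121.5, 119.8],
])
n, k = scores.shape
grand = scores.mean()

ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()
ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()
ss_total = ((scores - grand) ** 2).sum()
ms_rows = ss_rows / (n - 1)
ms_cols = ss_cols / (k - 1)
ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))

icc_2_1 = (ms_rows - ms_err) / (
    ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Within-subject coefficient of variation, as a simple variability index.
cov = scores.std(axis=1, ddof=1) / scores.mean(axis=1) * 100
print(f"ICC(2,1) = {icc_2_1:.3f}")
print("per-subject CoV (%):", np.round(cov, 2))
```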
Gait stability and variability measures show effects of impaired cognition and dual tasking in frail people
Background
Falls in the frail elderly are a common problem with a rising incidence. Gait and postural instability are major risk factors for falling, particularly in geriatric patients. As walking requires attention, cognitive impairments are likely to contribute to an increased fall risk. An objective quantification of gait and balance ability is required to identify persons with a high tendency to fall. Recent studies have shown that stride variability is increased in the elderly and under dual-task conditions, and might be more sensitive for detecting fall risk than walking speed. In the present study we complemented stride-related measures with measures that quantify trunk movement patterns as indicators of dynamic balance ability during walking. The aim of the study was to quantify the effect of impaired cognition and dual tasking on gait variability and stability in geriatric patients.
Methods
Thirteen elderly people with dementia (mean age: 82.6 ± 4.3 years) and thirteen without dementia (79.4 ± 5.55 years), recruited from a geriatric day clinic, walked at self-selected speed with and without performing a verbal dual task. The Mini Mental State Examination and the Seven Minute Screen were administered. Trunk accelerations were measured with an accelerometer. In addition to walking speed and the mean and variability of stride times, gait stability was quantified using stochastic dynamical measures, namely regularity (sample entropy, long-range correlations) and local stability exponents of trunk accelerations.
Results
Dual tasking significantly (p < 0.05) decreased walking speed, while stride time variability increased and the stability and regularity of lateral trunk accelerations decreased. Cognitively impaired elderly showed significantly (p < 0.05) greater changes in gait variability than cognitively intact elderly. Differences in dynamic parameters between groups were more discernible under dual-task conditions.
Conclusions
The observed trunk adaptations were a consistent indicator of instability. These results support the concept that changes in cognitive functions contribute to changes in the variability and stability of the gait pattern. Walking under dual-task conditions and quantifying gait using dynamical parameters can improve the detection of walking disorders, and might help to identify those elderly who are able to adapt their walking ability and those who are not and are thus at greater risk of falling.
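Sample entropy, one of the regularity measures named above, can be sketched compactly. The implementation below uses common defaults (m = 2, r = 0.2 × SD), which are assumptions rather than the paper's settings, and synthetic signals in place of trunk accelerations.

```python
# Sample entropy of a time series: lower = more regular. Defaults
# m = 2 and r = 0.2 * SD are common choices, assumed here.
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()

    def count_matches(length):
        # All templates of the given length, compared pairwise with
        # the Chebyshev (max-abs) distance.
        tpl = np.array([x[i:i + length] for i in range(len(x) - length)])
        matches = 0
        for i in range(len(tpl) - 1):
            dist = np.max(np.abs(tpl[i + 1:] - tpl[i]), axis=1)
            matches += np.sum(dist < r)
        return matches

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 2000)
print("regular:", round(sample_entropy(np.sin(t)), 3))
print("noisy:  ", round(sample_entropy(np.sin(t)
                        + 0.5 * rng.standard_normal(t.size)), 3))
```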
Effects of an attention demanding task on dynamic stability during treadmill walking
Background
People exhibit increased difficulty balancing when they perform secondary attention-distracting tasks while walking. A previous study by Grabiner and Troy (J. Neuroengineering Rehabil., 2005) found, however, that young healthy subjects performing a concurrent Stroop task while walking on a motorized treadmill exhibited decreased step width variability. Measures of variability, though, do not directly quantify how a system responds to perturbations. This study re-analyzed data from Grabiner and Troy (2005) to determine whether performing the concurrent Stroop task directly affected the dynamic stability of walking in these same subjects.
Methods
Thirteen healthy volunteers walked on a motorized treadmill at their self-selected constant speed for 10 minutes, both while performing the Stroop test and during undisturbed walking. The Stroop test consisted of projecting images of the name of one color, printed in text of a different color, onto a wall and asking subjects to verbally identify the color of the text. Three-dimensional motions of a marker attached to the base of the neck (C5/T1) were recorded. Marker velocities were calculated over three equal intervals of 200 s each in each direction. Mean variability was calculated for each time series as the average standard deviation across all strides. Both "local" and "orbital" dynamic stability were quantified for each time series using previously established methods. These measures directly quantify how quickly small perturbations grow or decay, either continuously in real time (local) or discretely from one cycle to the next (orbital). Differences between Stroop and control trials were evaluated using a two-factor repeated measures ANOVA.
Results
Mean variability of trunk movements was significantly reduced during the Stroop tests compared to normal walking. Conversely, local and orbital stability results were mixed: some measures showed slight increases, while others showed slight decreases. In many cases, different subjects responded differently to the Stroop test. While some of our comparisons reached statistical significance, many did not. In general, measures of variability and dynamic stability reflected different properties of walking dynamics, consistent with previous findings.
Conclusion
These findings demonstrate that the decreased movement variability associated with the Stroop task did not translate into greater dynamic stability.
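The "mean variability" measure described in the Methods (the standard deviation across strides, averaged over the time series) can be illustrated in a few lines; the synthetic stride data below are purely illustrative.

```python
# Synthetic marker-velocity strides: a shared cyclic pattern plus
# stride-to-stride noise. Mean variability = SD across strides,
# averaged over the normalised stride cycle.
import numpy as np

rng = np.random.default_rng(1)
n_strides, samples_per_stride = 50, 100

phase = np.linspace(0, 2 * np.pi, samples_per_stride)
strides = np.sin(phase) + 0.1 * rng.standard_normal(
    (n_strides, samples_per_stride))

mean_variability = strides.std(axis=0, ddof=1).mean()
print(f"mean variability: {mean_variability:.4f}")
```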
The interest of gait markers in the identification of subgroups among fibromyalgia patients
Background
Fibromyalgia (FM) is a heterogeneous syndrome, and its classification into subgroups calls for broad-based discussion. FM subgrouping, which aims to adapt treatment to different subgroups, relies in part on psychological and cognitive dysfunctions. Since motor control of gait is closely related to cognitive function, we hypothesized that gait markers could be of interest for identifying subgroups of FM patients. This controlled study aimed to characterize gait disorders in FM and to subgroup FM patients according to gait markers such as stride frequency (SF), stride regularity (SR), and cranio-caudal power (CCP), which measures kinesia.
Methods
A multicentre, observational open trial enrolled patients with primary FM (44.1 ± 8.1 y) and matched controls (44.1 ± 7.3 y). Outcome measurements and gait analyses were available for 52 pairs. A three-step statistical analysis was carried out. A preliminary single-blind analysis using k-means clustering was performed as an initial validation of the gait markers. Then, in order to characterize FM patients according to psychometric and gait variables, an open descriptive analysis comparing patients and controls was made, and correlations between gait variables and main outcomes were calculated. Finally, using cluster analysis, we described subgroups for each gait variable and looked for significant differences in self-reported assessments.
Results
SF was the most discriminating gait variable (discriminating 73% of patients and controls). SF, SR, and CCP differed between patients and controls. There was a non-significant association between SF, the FIQ, and the physical components of the Short-Form 36 (p = 0.06). SR was correlated with the FIQ (p = 0.01) and catastrophizing (p = 0.05), while CCP was correlated with pain (p = 0.01). The SF cluster analysis identified three subgroups, including one characterized by normal SF, low pain, high activity, and hyperkinesia. The SR cluster analysis identified two distinct subgroups; the one with reduced SR was distinguished by a high FIQ score, poor coping, and altered affective status.
Conclusion
Gait analysis may provide additional information for identifying subgroups among fibromyalgia patients. It provided relevant information about physical and cognitive status and pain behavior. Further studies are needed to better understand the implications of gait analysis in FM.
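As a hedged sketch of the clustering step, the code below applies k-means to the three gait markers after standardisation. The values, units, and the use of scikit-learn are illustrative assumptions; the study's actual software and preprocessing are not specified here.

```python
# Hypothetical gait-marker matrix for 52 patients; scikit-learn's
# KMeans stands in for whatever clustering tool the study used.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
gait = np.column_stack([
    rng.normal(0.95, 0.10, 52),  # SF: stride frequency (Hz)
    rng.normal(250, 60, 52),     # SR: stride regularity (arbitrary units)
    rng.normal(2.0, 0.7, 52),    # CCP: cranio-caudal power (arbitrary units)
])

X = StandardScaler().fit_transform(gait)   # put markers on one scale
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for k in range(3):
    print(f"subgroup {k}: n = {(labels == k).sum()}")
```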
Cell line-dependent variability in HIV activation employing DNMT inhibitors
Long-lived reservoirs of Human Immunodeficiency Virus (HIV) latently infected cells present the main barrier to a cure for HIV infection. Much interest has focused on identifying strategies to activate HIV, which would be used together with antiretrovirals to attack reservoirs. Several HIV-activating agents, including Tumor Necrosis Factor alpha (TNFα) and other agents that activate via NF-κB, are not fully effective in all latent infection models due to epigenetic restrictions, such as DNA methylation and the state of histone acetylation. DNA methyltransferase (DNMT) inhibitors like 5-aza-2'-deoxycytidine (Aza-CdR) and histone deacetylase (HDAC) inhibitors like Trichostatin A (TSA) have been proposed as agents to enhance reactivation and have shown activity in model systems. However, it is not clear how the activities of DNMT and HDAC inhibitors vary across different latently infected cell lines, which serve as potential models for the many different latently infected cells within an HIV patient. We determined HIV activation following treatment with TNFα, TSA and Aza-CdR across a range of well-known latently infected cell lines. We assessed the activity of these compounds in four Jurkat T cell-derived J-Lat cell lines (6.3, 8.4, 9.2 and 10.6), which carry a latent HIV provirus in which GFP replaces the Nef coding sequence, and in the ACH-2 and J1.1 (T cell-derived) and U1 (promonocyte-derived) cell lines, which carry full-length provirus. We found that Aza-CdR plus TNFα activated HIV at least twice as well as TNFα alone in almost all J-Lat cells, as previously described, but not in J-Lat 10.6, in which TNFα plus Aza-CdR moderately decreased activation compared to TNFα alone. Surprisingly, a much greater reduction of TNFα-stimulated activation with Aza-CdR was detected in ACH-2, J1.1 and U1 cells, reaching the largest reduction, 75%, in U1 cells. Interestingly, Aza-CdR not only decreased TNFα induction of HIV expression in certain cell lines, but also decreased activation by TSA. Since DNMT inhibitors reduce the activity of provirus activators in some HIV latently infected cell lines, the use of epigenetic modifying agents may need to be carefully optimized if they are to find clinical utility in therapies aimed at attacking latent HIV reservoirs.
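The cell-line comparison above reduces to fold-activation relative to TNFα alone. The toy sketch below shows that arithmetic; all activation values are invented (chosen only to echo the qualitative pattern reported), and the %GFP+ readout is an assumption.

```python
# Invented activation values (e.g. %GFP+ cells) chosen only to echo
# the qualitative pattern described above; not measured data.
activation = {
    # cell line: (TNF-alpha alone, TNF-alpha + Aza-CdR)
    "J-Lat 6.3": (5.0, 12.5),
    "J-Lat 10.6": (40.0, 33.0),
    "U1": (20.0, 5.0),
}

for line, (tnf, combo) in activation.items():
    change = (combo - tnf) / tnf * 100
    print(f"{line}: {combo / tnf:.2f}x vs TNF-alpha alone ({change:+.0f}%)")
```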
Clinical and cost-effectiveness of contingency management for cannabis use in early psychosis: the CIRCLE randomised clinical trial
Background
Cannabis is the most commonly used illicit substance among people with psychosis. Continued cannabis use following the onset of psychosis is associated with poorer functional and clinical outcomes. However, finding effective ways of intervening has been very challenging. We examined the clinical and cost-effectiveness of adjunctive contingency management (CM), which involves incentives for abstinence from cannabis use, in people with a recent diagnosis of psychosis.
Methods
CIRCLE was a pragmatic multi-centre randomised controlled trial. Participants were recruited via Early Intervention in Psychosis (EIP) services across the Midlands and South East of England. They had had at least one episode of clinically diagnosed psychosis (affective or non-affective); were aged 18 to 36; reported cannabis use in at least 12 of the previous 24 weeks; and were not currently receiving treatment for cannabis misuse or subject to a legal requirement for cannabis testing. Participants were randomised 1:1, via a secure web-based service, to either an experimental arm, involving 12 weeks of CM plus a six-session psychoeducation package, or a control arm receiving the psychoeducation package only. The total potential voucher reward in the CM intervention was £240. The primary outcome was time to acute psychiatric care, operationalised as admission to an acute mental health service (including community alternatives to admission). Primary outcome data were collected from patient records at 18 months post-consent by assessors masked to allocation. The trial was registered with the ISRCTN registry, number ISRCTN33576045.
Results
551 participants were recruited between June 2012 and April 2016. Primary outcome data were obtained for 272 (98%) participants in the CM (experimental) group and 259 (95%) in the control group. There was no statistically significant difference between groups in time to acute psychiatric care, the primary outcome (HR 1.03, 95% CI 0.76, 1.40). By 18 months, 90 (33%) participants in the CM group and 85 (30%) in the control group had been admitted at least once to an acute psychiatric service. Amongst those who experienced an acute psychiatric admission, the median time to admission was 196 days (IQR 82, 364) in the CM group and 245 days (IQR 99, 382) in the control group. Cost-effectiveness analyses suggested an 81% likelihood that the intervention was cost-effective, mainly resulting from higher mean inpatient costs in the control group than in the CM group; however, the cost difference between groups was not statistically significant. There were 58 adverse events: 27 in the CM group and 31 in the control group.
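The hazard ratio reported above comes from a time-to-event comparison; a minimal sketch of such an analysis with lifelines' Cox proportional hazards model is shown below. The toy dataset (durations, censoring at roughly 18 months = 540 days, arm indicator) is invented for illustration, not trial data.

```python
# Toy time-to-event data: days to acute admission, an event flag
# (0 = censored at ~18 months), and the trial arm as the covariate.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "days_to_admission": [196, 540, 82, 364, 245, 540, 99, 382, 150, 540],
    "admitted":          [1,   0,   1,  1,   1,   0,   1,  1,   1,   0],
    "cm_arm":            [1,   1,   1,  1,   0,   0,   0,  0,   1,   0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_admission", event_col="admitted")
cph.print_summary()   # the exp(coef) column is the hazard ratio with 95% CI
```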
Conclusions
Overall, these results suggest that CM is not an effective intervention for delaying admission to acute psychiatric care or for reducing cannabis use in psychosis, at least at the level of voucher reward offered.