    A computer-based medical record system and personal digital assistants to assess and follow patients with respiratory tract infections visiting a rural Kenyan health centre

    BACKGROUND: Clinical research can be facilitated by the use of informatics tools. We used an existing electronic medical record (EMR) system and personal digital assistants (PDAs) to assess the characteristics and outcomes of patients with acute respiratory illnesses (ARIs) visiting a rural Kenyan health centre. METHODS: We modified the existing EMR to include details on patients with ARIs. The EMR database was then used to identify patients with ARIs, who were prospectively followed up by a research assistant who rode a bicycle to patients' homes and entered data into a PDA. RESULTS: A total of 2986 clinic visits for 2009 adult patients with respiratory infections were registered in the database between August 2002 and January 2005; 433 patients were selected for outcome assessments. These patients were followed up in the villages and assessed 7 and 30 days later. Complete follow-up data were obtained for 381 patients (88%) and merged with data from the enrollment visit's electronic medical records and subsequent health centre visits to assess duration of illness and complications. Symptoms improved at 7 and 30 days, but a substantial minority of patients had persistent symptoms. Eleven percent of patients sought additional care for their respiratory infection. CONCLUSION: EMRs and PDAs are useful tools for performing prospective clinical research in resource-constrained developing countries.
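
    The record-linkage step described above, merging enrollment EMR records with PDA follow-up data, amounts to a join on a patient identifier. A minimal sketch follows; the column names and toy data are hypothetical, not the study's actual schema.

        import pandas as pd

        # Enrollment-visit EMR records (hypothetical columns).
        emr = pd.DataFrame({"patient_id": [1, 2, 3], "symptom_days": [3, 5, 2]})
        # 7- and 30-day PDA follow-up assessments.
        followup = pd.DataFrame({"patient_id": [1, 3], "improved_day7": [True, False]})

        # Keep only patients with both an enrollment record and follow-up data.
        merged = emr.merge(followup, on="patient_id", how="inner")
        print(len(merged) / len(emr))  # follow-up completeness; 381/433 = 88% in the study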

    A public health response to the methamphetamine epidemic: the implementation of contingency management to treat methamphetamine dependence

    BACKGROUND: In response to increases in methamphetamine-associated sexually transmitted diseases, the San Francisco Department of Public Health implemented a contingency management (CM) field program called the Positive Reinforcement Opportunity Project (PROP). METHODS: Methamphetamine-using men who have sex with men (MSM) in San Francisco qualified for PROP following expressed interest in the program, provision of an observed urine sample that tested positive for methamphetamine metabolites, and self-report of recent methamphetamine use. For 12 weeks, PROP participants provided observed urine samples on Mondays, Wednesdays and Fridays and received vouchers of increasing value for each consecutive sample that tested negative for methamphetamine metabolites. Vouchers were exchanged for goods and services that promoted a healthy lifestyle. No cash was provided. Primary outcomes included acceptability (number of enrollments over time), impact (clinical response to treatment) and cost-effectiveness (cost per patient treated). RESULTS: Enrollment in PROP was brisk, indicating its acceptability. During the first 10 months of operation, 143 men sought treatment, and of these 77.6% were HIV-infected. Of those screened, 111 began CM treatment and averaged 15 (42%) methamphetamine-free urine samples out of a possible 36 samples during the 12-week treatment period; 60% completed 4 weeks of treatment, 48% 8 weeks and 30% 12 weeks. Across all participants, an average of $159 (SD = $165) in vouchers, or 35.1% of the maximum possible ($453), was provided. The average cost per participant of the 143 treated was $800. CONCLUSION: Clinical responses to CM in PROP were similar to CM delivered in drug treatment programs, supporting the adaptability and effectiveness of CM in non-traditional drug treatment settings. Costs were reasonable and less than or comparable to other methamphetamine outpatient treatment programs. Further expansion of programs like PROP could address the increasing need for acceptable, feasible and cost-effective methamphetamine treatment in this group with exceptionally high rates of HIV infection.
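
    The escalating-voucher schedule described above (increasing value for each consecutive negative sample, resetting after a positive or missed sample) can be sketched in a few lines. The start value, increment and reset rule below are illustrative assumptions; the abstract states only that 36 samples were possible and that the maximum attainable total was $453.

        def voucher_value(streak, start=2.50, increment=1.25):
            # Assumed schedule: the n-th consecutive negative sample earns
            # start + increment * (n - 1) dollars.
            return start + increment * (streak - 1)

        def total_earned(results, start=2.50, increment=1.25):
            """Total voucher earnings for a sequence of urine-test results.

            results: iterable of booleans, True = methamphetamine-negative.
            A positive or missed sample resets the escalation.
            """
            total, streak = 0.0, 0
            for negative in results:
                if negative:
                    streak += 1
                    total += voucher_value(streak, start, increment)
                else:
                    streak = 0  # reset on a positive or missed sample
            return total

        # Example: 15 negative samples out of 36 (the observed average),
        # here assumed to occur as a single unbroken run.
        print(total_earned([True] * 15 + [False] * 21))  # 168.75 under these assumed parameters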

    Hospitalisation with Infection, Asthma and Allergy in Kawasaki Disease Patients and Their Families: Genealogical Analysis Using Linked Population Data

    Background: Kawasaki disease results from an abnormal immunological response to one or more infectious triggers. We hypothesised that heritable differences in immune responses in Kawasaki disease-affected children and their families would result in different epidemiological patterns of other immune-related conditions. We investigated whether hospitalisation for infection and asthma/allergy were different in Kawasaki disease-affected children and their relatives. Methods/Major Findings: We used Western Australian population-linked health data from live births (1970-2006) to compare patterns of hospital admissions in Kawasaki disease cases, age- and sex-matched controls, and their relatives. There were 295 Kawasaki disease cases and 598 age- and sex-matched controls, with 1,636 and 3,780 relatives, respectively. Compared to controls, cases were more likely to have been admitted at least once with an infection (cases, 150 admissions (50.8%) vs controls, 210 admissions (35.1%); odds ratio (OR) = 1.9, 95% confidence interval (CI) 1.4-2.6, P = 7.2×10⁻⁶), and with asthma/allergy (cases, 49 admissions (16.6%) vs controls, 42 admissions (7.0%); OR = 2.6, 95% CI 1.7-4.2, P = 1.3×10⁻⁵). Cases also had more admissions per person with infection (cases, median 2 admissions, 95% CI 1-5, vs controls, median 1 admission, 95% CI 1-4, P = 1.09×10⁻⁵). The risk of admission with infection was higher in the first-degree relatives of Kawasaki disease cases compared to those of controls, but the differences were not significant. Conclusion: Differences in the immune phenotype of children who develop Kawasaki disease may influence the severity of other immune-related conditions, with some similar patterns observed in relatives. These data suggest the influence of shared heritable factors in these families.
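
    The unadjusted odds ratios and confidence intervals reported above can be reproduced from the admission counts given in the abstract. A minimal sketch using the standard Woolf (log) method follows; the small differences in the upper bounds (2.5 vs 2.6, 4.1 vs 4.2) may simply reflect a different interval method or rounding in the paper.

        from math import exp, log, sqrt

        def odds_ratio_ci(a, b, c, d, z=1.96):
            """Odds ratio with a Woolf (log-method) 95% confidence interval.

            a, b: admitted / not-admitted counts among cases
            c, d: admitted / not-admitted counts among controls
            """
            or_ = (a * d) / (b * c)
            se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
            return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

        # Admission with infection: 150 of 295 cases vs 210 of 598 controls.
        print(odds_ratio_ci(150, 295 - 150, 210, 598 - 210))  # ~ (1.9, 1.4, 2.5)
        # Admission with asthma/allergy: 49 of 295 cases vs 42 of 598 controls.
        print(odds_ratio_ci(49, 295 - 49, 42, 598 - 42))      # ~ (2.6, 1.7, 4.1)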

    Medical conditions in autism spectrum disorders

    Autism spectrum disorder (ASD) is a behaviourally defined syndrome whose etiology and pathophysiology are only partially understood. In a small proportion of children with the condition a specific medical disorder is identified, but the causal significance in many instances is unclear. Currently, the medical conditions best established as probable causes of ASD include Fragile X syndrome, Tuberous Sclerosis and abnormalities of chromosome 15 involving the 15q11-13 region. Various other single-gene mutations, genetic syndromes, chromosomal abnormalities and rare de novo copy number variants have been reported as possibly implicated in etiology, as have several ante- and postnatal exposures and complications. However, in most instances the evidence base for an association with ASD is very limited and largely derives from case reports or findings from small, highly selected and uncontrolled case series. Not only, therefore, is there uncertainty over whether the condition is truly associated, but the potential basis for the association is very poorly understood. In some cases the medical condition may be a consequence of autism or may simply represent an associated feature deriving from an underlying shared etiology. Nevertheless, it is clear that in a growing proportion of individuals potentially causal medical conditions are being identified, and clarification of their role in etio-pathogenesis is necessary. Indeed, investigations into the causal mechanisms underlying the association between ASD and conditions such as Tuberous Sclerosis, Fragile X and chromosome 15 abnormalities are beginning to cast light on the molecular and neurobiological pathways involved in the pathophysiology of ASD. It is evident, therefore, that much can be learnt from the study of probably causal medical disorders, as they represent simpler and more tractable model systems in which to investigate causal mechanisms. Recent advances in genetics, molecular and systems biology and neuroscience mean that there are now unparalleled opportunities to test causal hypotheses and gain fundamental insights into the nature of autism and its development.

    Early T Cell Recognition of B Cells following Epstein-Barr Virus Infection: Identifying Potential Targets for Prophylactic Vaccination

    Epstein-Barr virus (EBV), a B-lymphotropic herpesvirus, is the cause of infectious mononucleosis, has strong aetiologic links with several malignancies and has been implicated in certain autoimmune diseases. Efforts to develop a prophylactic vaccine to prevent or reduce EBV-associated disease have, to date, focused on the induction of neutralising antibody responses. However, such vaccines might be further improved by inducing T cell responses capable of recognising and killing recently-infected B cells. In that context, EBNA2, EBNA-LP and BHRF1 are the first viral antigens expressed during the initial stage of B cell growth transformation, yet they have been poorly characterised as CD8+ T cell targets. Here we describe CD8+ T cell responses against each of these three "first wave" proteins, identifying target epitopes and HLA restricting alleles. While EBNA-LP and BHRF1 each contained one strong CD8 epitope, epitopes within EBNA2 induced immunodominant responses through several less common HLA class I alleles (e.g. B*3801 and B*5501), as well as subdominant responses through common class I alleles (e.g. B7 and C*0304). Importantly, such EBNA2-specific CD8+ T cells recognised B cells within the first day post-infection, prior to CD8+ T cells against well-characterised latent target antigens such as EBNA3B or LMP2, and effectively inhibited outgrowth of EBV-transformed B cell lines. We infer that "first wave" antigens of the growth-transforming infection, especially EBNA2, constitute potential CD8+ T cell immunogens for inclusion in prophylactic EBV vaccine design.

    Behavioural indicators of welfare in farmed fish

    Behaviour represents a reaction to the environment as fish perceive it and is therefore a key element of fish welfare. This review summarises the main findings on how behavioural changes have been used to assess welfare in farmed fish, using both functional and feeling-based approaches. Changes in foraging behaviour, ventilatory activity, aggression, individual and group swimming behaviour, and stereotypic and abnormal behaviour have been linked with acute and chronic stressors in aquaculture and can therefore be regarded as likely indicators of poor welfare. By contrast, measurements of exploratory behaviour, feed anticipatory activity and reward-related operant behaviour are beginning to be considered as indicators of positive emotions and welfare in fish. Despite the lack of scientific agreement about the existence of sentience in fish, the possibility that they are capable of both positive and negative emotions may contribute to the development of new strategies (e.g. environmental enrichment) to promote good welfare. Numerous studies that use behavioural indicators of welfare show that behavioural changes can be interpreted as either good or poor welfare depending on the fish species. It is therefore essential to understand the species-specific biology before drawing any conclusions in relation to welfare. In addition, different individuals within the same species may exhibit divergent coping strategies towards stressors, and what is tolerated by some individuals may be detrimental to others. Therefore, the assessment of welfare in a few individuals may not represent the average welfare of a group, and vice versa. This underlines the need to develop on-farm, operational behavioural welfare indicators that can be easily used to assess not only individual welfare but also the welfare of the whole group (e.g. spatial distribution). With the ongoing development of video technology and image processing, the on-farm surveillance of behaviour may in the near future represent a low-cost, noninvasive tool to assess the welfare of farmed fish. Funding: Fundação para a Ciência e Tecnologia, Portugal [SFRH/BPD/42015/2007].

    Laparoscopy in management of appendicitis in high-, middle-, and low-income countries: a multicenter, prospective, cohort study.

    BACKGROUND: Appendicitis is the most common abdominal surgical emergency worldwide. Differences between high- and low-income settings in the availability of laparoscopic appendectomy, alternative management choices, and outcomes are poorly described. The aim was to identify variation in surgical management and outcomes of appendicitis within low-, middle-, and high-Human Development Index (HDI) countries worldwide. METHODS: This is a multicenter, international prospective cohort study. Consecutive sampling of patients undergoing emergency appendectomy over 6 months was conducted. Follow-up lasted 30 days. RESULTS: 4546 patients from 52 countries underwent appendectomy (2499 high-, 1540 middle-, and 507 low-HDI groups). Surgical site infection (SSI) rates were higher in low-HDI (OR 2.57, 95% CI 1.33-4.99, p = 0.005) but not middle-HDI countries (OR 1.38, 95% CI 0.76-2.52, p = 0.291), compared with high-HDI countries after adjustment. A laparoscopic approach was common in high-HDI countries (1693/2499, 67.7%), but infrequent in low-HDI (41/507, 8.1%) and middle-HDI (132/1540, 8.6%) groups. After accounting for case-mix, laparoscopy was still associated with fewer overall complications (OR 0.55, 95% CI 0.42-0.71, p < 0.001) and SSIs (OR 0.22, 95% CI 0.14-0.33, p < 0.001). In propensity-score matched groups within low-/middle-HDI countries, laparoscopy remained associated with fewer overall complications (OR 0.23, 95% CI 0.11-0.44) and SSI (OR 0.21, 95% CI 0.09-0.45). CONCLUSION: A laparoscopic approach is associated with better outcomes, and its availability appears to differ by country HDI. Despite the profound clinical, operational, and financial barriers to its widespread introduction, laparoscopy could significantly improve outcomes for patients in low-resource environments. TRIAL REGISTRATION: NCT02179112.
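
    The adjusted odds ratios above ("after adjustment", "after accounting for case-mix") are the kind of estimate produced by multivariable logistic regression. The sketch below shows the general approach with statsmodels; the covariate set and column names are hypothetical and the data synthetic, since the study's actual case-mix variables are not listed in the abstract.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        def adjusted_ors(df):
            # Logistic model for surgical site infection; exponentiated
            # coefficients are adjusted odds ratios with 95% CIs.
            model = smf.logit(
                "ssi ~ laparoscopic + C(hdi_group, Treatment(reference='high')) + age + C(sex)",
                data=df,
            ).fit(disp=False)
            ors = np.exp(model.params).rename("OR")
            ci = np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
            return pd.concat([ors, ci], axis=1)

        # Synthetic illustration only, not the study's data.
        rng = np.random.default_rng(1)
        n = 500
        df = pd.DataFrame({
            "laparoscopic": rng.integers(0, 2, n),
            "hdi_group": rng.choice(["low", "middle", "high"], n),
            "age": rng.normal(35, 12, n),
            "sex": rng.choice(["F", "M"], n),
            "ssi": rng.binomial(1, 0.1, n),
        })
        print(adjusted_ors(df))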

    Pooled analysis of WHO Surgical Safety Checklist use and mortality after emergency laparotomy

    Background The World Health Organization (WHO) Surgical Safety Checklist has fostered safe practice for 10 years, yet its place in emergency surgery has not been assessed on a global scale. The aim of this study was to evaluate reported checklist use in emergency settings and examine the relationship with perioperative mortality in patients who had emergency laparotomy. Methods In two multinational cohort studies, adults undergoing emergency laparotomy were compared with those having elective gastrointestinal surgery. Relationships between reported checklist use and mortality were determined using multivariable logistic regression and bootstrapped simulation. Results Of 12 296 patients included from 76 countries, 4843 underwent emergency laparotomy. After adjusting for patient and disease factors, checklist use before emergency laparotomy was more common in countries with a high Human Development Index (HDI) (2455 of 2741, 89.6 per cent) compared with that in countries with a middle (753 of 1242, 60.6 per cent; odds ratio (OR) 0.17, 95 per cent c.i. 0.14 to 0.21, P < 0.001) or low (363 of 860, 42.2 per cent; OR 0.08, 0.07 to 0.10, P < 0.001) HDI. Checklist use was less common in elective surgery than for emergency laparotomy in high-HDI countries (risk difference -9.4 (95 per cent c.i. -11.9 to -6.9) per cent; P < 0.001), but the relationship was reversed in low-HDI countries (+12.1 (+7.0 to +17.3) per cent; P < 0.001). In multivariable models, checklist use was associated with a lower 30-day perioperative mortality (OR 0.60, 0.50 to 0.73; P < 0.001). The greatest absolute benefit was seen for emergency surgery in low- and middle-HDI countries. Conclusion Checklist use in emergency laparotomy was associated with a significantly lower perioperative mortality rate. Checklist use in low-HDI countries was half that in high-HDI countries.
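
    The "multivariable logistic regression and bootstrapped simulation" mentioned in the Methods can be illustrated with a minimal nonparametric bootstrap of an odds ratio. This is a sketch of the general technique only; the study's actual resampling scheme and covariates are not given in the abstract, and the arrays below are synthetic stand-ins.

        import numpy as np

        rng = np.random.default_rng(0)

        def bootstrap_or(checklist, died, n_boot=2000):
            """Percentile-bootstrap CI for the checklist-vs-mortality odds ratio.

            checklist, died: equal-length 0/1 arrays, one entry per patient.
            """
            checklist, died = np.asarray(checklist), np.asarray(died)
            n, ors = len(died), []
            for _ in range(n_boot):
                idx = rng.integers(0, n, n)      # resample patients with replacement
                c, d = checklist[idx], died[idx]
                a = np.sum((c == 1) & (d == 1))  # checklist used, died
                b = np.sum((c == 1) & (d == 0))  # checklist used, survived
                e = np.sum((c == 0) & (d == 1))  # no checklist, died
                f = np.sum((c == 0) & (d == 0))  # no checklist, survived
                if min(a, b, e, f) > 0:          # skip degenerate resamples
                    ors.append((a * f) / (b * e))
            return np.percentile(ors, [2.5, 50, 97.5])

        # Synthetic illustration only, not the study's records.
        checklist = rng.integers(0, 2, 1000)
        died = rng.binomial(1, np.where(checklist == 1, 0.05, 0.08))
        print(bootstrap_or(checklist, died))     # [lower, median, upper] for the OR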

    Developmental delay in early childhood is associated with visual-constructive skills at school age in a Brazilian cohort

    We investigated differences in IQ and visual-constructive skills in school-age children evaluated as developmentally delayed or typically developed in early childhood. Sixty-four participants from a Brazilian cohort were evaluated on IQ (Wechsler Abbreviated Scale of Intelligence) and on tasks of visual-spatial memory and visual-constructive skills through the Benton Visual Retention Test (BVRT) at school age. Neuropsychomotor development at 4 years of age was measured by the Denver II. Developmentally delayed children showed lower IQs, lower scores, and more errors on copy and memory BVRT tasks when compared to typically developed children. Delay in neuropsychomotor development in early childhood may affect the subsequent cognitive development of children.