A retrospective study evaluating the efficacy of identification and management of sepsis at a district-level hospital internal medicine department in the Western Cape Province, South Africa, in comparison with the guidelines stipulated in the 2012 Surviving Sepsis Campaign
Background. Currently there is little information on the identification, management and outcomes of patients with sepsis in developing countries. Simple, cost-effective measures such as accurate identification of patients with sepsis and early antibiotic administration are achievable targets, within reach without having to rely on unsustainable protocols constructed in developed countries.
Objectives. To assess the ability of clinicians at a district-level hospital to identify and manage sepsis, and to assess patient outcome in terms of in-hospital mortality and length of hospital stay given the above management.
Methods. A retrospective descriptive study design was used, analysing data from the routine burden-of-disease audit done on a 3-monthly basis at Karl Bremer Hospital (KBH) in the Western Cape Province, South Africa.
Results. The total sample size was 70 patients, of whom 18 (25.7%) had an initial triage blood pressure indicative of sepsis-induced hypotension. However, only 1 (5.6%) of these 18 patients received an initial crystalloid fluid bolus of at least 30 mL/kg. The median time to antibiotic administration in septic shock was 4.25 hours, and a significant delay in antibiotic administration (p=0.0039) was demonstrated. The data also showed that 8/12 patients (66.7%) with septic shock received inappropriate amounts of fluids. The in-hospital mortality rate for sepsis was 4/24 (16.7%), for severe sepsis 11/34 (32.4%) and for septic shock a staggering 9/12 (75.0%).
Conclusions. The initial classification and management of sepsis by clinicians at KBH is flawed. This inevitably leads to an increase in in-hospital mortality.
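The screening and bolus figures audited above can be sketched as a minimal check, assuming the 2012 Surviving Sepsis Campaign thresholds as summarized in the abstract (hypotension at SBP < 90 mmHg or MAP < 70 mmHg; an initial crystalloid bolus of at least 30 mL/kg). The function names are illustrative, not taken from the study.

```python
# Hedged sketch of the 2012 Surviving Sepsis Campaign numbers the study audits.
# Thresholds are assumptions based on the guideline as summarized here;
# function names are illustrative.

def is_sepsis_induced_hypotension(sbp_mmhg: float, map_mmhg: float) -> bool:
    """Triage blood pressure indicative of sepsis-induced hypotension."""
    return sbp_mmhg < 90 or map_mmhg < 70

def initial_bolus_ml(weight_kg: float, dose_ml_per_kg: float = 30.0) -> float:
    """Minimum initial crystalloid bolus (at least 30 mL/kg)."""
    return weight_kg * dose_ml_per_kg

# A 70 kg patient triaged at 85/65 mmHg would qualify and need >= 2100 mL.
print(is_sepsis_induced_hypotension(sbp_mmhg=85, map_mmhg=65))  # True
print(initial_bolus_ml(70))  # 2100.0
```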
Speech intelligibility for target and masker with different spectra
The speech intelligibility index (SII) calculation is based on the assumption that the effective range of signal-to-noise ratio (SNR) for speech intelligibility is [−15 dB; +15 dB]. In a specific frequency band, speech intelligibility would remain constant when varying the SNR above +15 dB or below −15 dB. These assumptions were tested in four experiments measuring speech reception thresholds (SRTs) with a speech target and speech-spectrum noise, while attenuating the target or the noise above or below 1400 Hz, with different levels of attenuation in order to test different SNRs in the two bands. SRT varied linearly with attenuation at low attenuation levels, and an asymptote was reached at high attenuation levels. However, this asymptote was reached (i.e. intelligibility was no longer influenced by further attenuation) at different attenuation levels across experiments. The −15-dB SII limit was confirmed for high-pass filtered targets, whereas for low-pass filtered targets, intelligibility was further impaired by decreasing the SNR below −15 dB (down to −37 dB) in the high-frequency band. For high-pass and low-pass filtered noises, speech intelligibility kept improving when increasing the SNR in the rejected band beyond +15 dB (up to +43 dB). Before the asymptote was reached, a 10-dB increase of SNR obtained by filtering the noise resulted in a larger decrease of SRT than a corresponding 10-dB decrease of SNR obtained by filtering the target (the SRT/attenuation slopes differed depending on which source was filtered). These results question the SNR range and the importance function adopted by the SII when considering sharply filtered signals.
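The clipping assumption under test can be sketched as follows, assuming the standard SII form in which each band's SNR is clipped to [−15, +15] dB, mapped linearly to an audibility in [0, 1], and weighted by a band-importance value (the importance weights below are placeholders, not the standard's values):

```python
# Minimal sketch of the SII band-audibility rule the experiments probe.
# The clipping to [-15, +15] dB is the assumption under test; the band
# importance weights used here are made-up placeholders.

def band_audibility(snr_db: float) -> float:
    """Clip the band SNR to [-15, +15] dB and map linearly to [0, 1]."""
    clipped = max(-15.0, min(15.0, snr_db))
    return (clipped + 15.0) / 30.0

def sii(snrs_db, importances):
    """SII = sum of importance-weighted band audibilities."""
    return sum(w * band_audibility(s) for s, w in zip(snrs_db, importances))

# Under this assumption, lowering a band's SNR from -15 dB to -37 dB changes
# nothing -- exactly the behaviour the low-pass-target results contradict.
print(band_audibility(-15.0) == band_audibility(-37.0))  # True
```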
Generalization of auditory sensory and cognitive learning in typically developing children
Despite the well-established involvement of both sensory ("bottom-up") and cognitive ("top-down") processes in literacy, the extent to which auditory or cognitive (memory or attention) learning transfers to phonological and reading skills remains unclear. Most research has demonstrated learning of the trained task or learning transfer to a closely related task. However, few studies have reported "far transfer" to a different domain, such as the improvement of phonological and reading skills following auditory or cognitive training. This study assessed the effectiveness of auditory, memory or attention training on far-transfer measures involving phonological and reading skills in typically developing children. Mid-transfer was also assessed through untrained auditory, attention and memory tasks. Sixty 5- to 8-year-old children with normal hearing were quasi-randomly assigned to one of five training groups: attention group (AG), memory group (MG), auditory sensory group (SG), placebo group (PG; drawing, painting), and a control, untrained group (CG). Compliance, mid-transfer and far-transfer measures were evaluated before and after training. All trained groups received 12 × 45-min training sessions over 12 weeks. The CG did not receive any intervention. All trained groups, especially the older children, exhibited significant learning of the trained task. On pre- to post-training measures (test-retest), most groups exhibited improvements on most tasks. There was significant mid-transfer for a visual digit span task, with the highest span in the MG relative to the other groups. These results show that both sensory and cognitive (memory or attention) training can lead to learning of the trained task and to mid-transfer learning on a task (visual digit span) within the same domain as the trained tasks. However, learning did not transfer to measures of language (reading and phonological awareness), as the PG and CG improved as much as the other trained groups. Further research is required to investigate the effects of various stimuli and lengths of training on the generalization of sensory and cognitive learning to literacy skills.
Africa's drylands in a changing world: Challenges for wildlife conservation under climate and land-use changes in the Greater Etosha Landscape
Proclaimed in 1907, Etosha National Park in northern Namibia is an iconic dryland system with a rich history of wildlife conservation and research. A recent research symposium on wildlife conservation in the Greater Etosha Landscape (GEL) highlighted, based on participant responses to a questionnaire, increasing concern about how the intensification of global change will affect wildlife conservation. The GEL includes Etosha and surrounding areas, the latter divided by a veterinary fence into large, private farms to the south and communal areas of residential and farming land to the north. Here, we leverage our knowledge of this ecosystem to provide insight into the broader challenges facing wildlife conservation in this vulnerable dryland environment. We first look backward, summarizing the history of wildlife conservation and research trends in the GEL based on a literature review, providing a broad-scale understanding of the socioecological processes that drive dryland system dynamics. We then look forward, focusing on eight key areas of challenge and opportunity for this ecosystem: climate change, water availability and quality, vegetation and fire management, adaptability of wildlife populations, disease risk, human-wildlife conflict, wildlife crime, and human dimensions of wildlife conservation. Using this model system, we summarize key lessons and identify critical threats, highlighting future research needs to support wildlife management. Research in the GEL has followed a trajectory seen elsewhere, reflecting an increase in complexity and integration across biological scales over time. Yet, despite these trends, a gap exists between the scope of recent research efforts and the needs of wildlife conservation to adapt to climate and land-use changes. Given the complex nature of climate change, in addition to locally existing system stressors, a framework of forward-thinking adaptive management to address these challenges, supported by integrative and multidisciplinary research, could be beneficial. One critical area for growth is to better integrate research and wildlife management across land-use types. Such efforts have the potential to support wildlife conservation efforts and human development goals while building resilience against the impacts of climate change. While our conclusions reflect the specifics of the GEL ecosystem, they have direct relevance for other African dryland systems impacted by global change.
Emotion as opportunity: reflections on multiple concurrent partnerships among young men in South Africa
Partner reduction has been shown to be one of the most important aspects of any programme that seeks to contain the spread of HIV. In South Africa, however, multiple concurrent sexual partnerships are a common feature of township life for young people, especially young men. Following on from Swartz & Bhana's (2009) study on young fathers, this small qualitative study comprised a series of in-depth and frank discussions about multiple concurrent sexual partnerships with a group of four young men living in Langa, Cape Town, who had been involved in the previous study of young fathers as either key informants or community recruiters. Three discussion themes emerged: the social dynamics around multiple concurrent partnerships; the reasons for their high prevalence and persistence in the face of HIV; and the emotional complexities and costs of having multiple concurrent partnerships. These conversations highlighted the fact that the literature has tended to focus on the social, historical and practical reasons for multiple concurrent partnerships, rather than exploring their gendered emotional aspects. We suggest that a greater focus on the latter, especially among young men, will offer possibilities for effective partner reduction programmes.
Some memories are odder than others: Judgments of episodic oddity violate known decision rules
Current decision models of recognition memory are based almost entirely on one paradigm: single-item old/new judgments accompanied by confidence ratings. This task results in receiver operating characteristics (ROCs) that are well fit by both signal-detection and dual-process models. Here we examine an entirely new recognition task, the judgment of episodic oddity, whereby participants select the mnemonically odd member of a triplet (e.g., a new item hidden among two studied items). Using the only two known signal-detection rules of oddity judgment, derived from the sensory perception literature, the unequal-variance signal-detection model predicted that an old item among two new items would be easier to discover than a new item among two old items. In contrast, four separate empirical studies demonstrated the reverse pattern: triplets with two old items were the easiest to resolve. This finding was anticipated by the dual-process approach, as the presence of two old items affords the greatest opportunity for recollection. Furthermore, a bootstrap-fed Monte Carlo procedure using two independent datasets demonstrated that the dual-process parameters typically observed during single-item recognition correctly predict the current oddity findings, whereas unequal-variance signal-detection parameters do not. Episodic oddity judgments represent a case where dual- and single-process predictions qualitatively diverge, and the findings demonstrate that novelty is "odder" than familiarity.
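The kind of Monte Carlo comparison described above can be sketched as follows. This is a hedged illustration, not the paper's procedure: the distance-based oddity rule (pick the item farthest from the mean of the other two) and the unequal-variance parameters (old-item mean 1.0, SD 1.25; new-item mean 0.0, SD 1.0) are assumptions chosen for the example.

```python
# Illustrative Monte Carlo for a distance-based oddity rule under an
# unequal-variance signal-detection model. Parameters and the decision rule
# are assumptions for this sketch, not the paper's fitted values.
import random

def oddity_accuracy(odd_mu, odd_sd, pair_mu, pair_sd, trials=20000, seed=1):
    """Fraction of triplets where the odd item (index 0) is correctly chosen."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        items = [rng.gauss(odd_mu, odd_sd),
                 rng.gauss(pair_mu, pair_sd),
                 rng.gauss(pair_mu, pair_sd)]
        # Choose the item whose familiarity is farthest from the mean
        # of the other two items.
        def oddness(i):
            others = [items[j] for j in range(3) if j != i]
            return abs(items[i] - sum(others) / 2)
        choice = max(range(3), key=oddness)
        correct += (choice == 0)
    return correct / trials

# One old item among two new items vs. one new item among two old items.
acc_old_odd = oddity_accuracy(odd_mu=1.0, odd_sd=1.25, pair_mu=0.0, pair_sd=1.0)
acc_new_odd = oddity_accuracy(odd_mu=0.0, odd_sd=1.0, pair_mu=1.0, pair_sd=1.25)
print(acc_old_odd, acc_new_odd)
```

Comparing the two accuracies under different parameter settings is the spirit of the model comparison in the abstract; the empirical result reported there is that two-old triplets were in fact the easiest.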