    Cooperating teachers' images: a study in early childhood settings

    Childcare provision: Whose responsibility? Who pays?

    Recent debates about the provision of child care for children below school age have focused on issues relating to children, to families, to social capital building and to financial return on investment. The first of these is concerned with providing for children’s growth and development and focuses on the enhancement of skills and experiences conducive to furthering children’s capacity as learners. Early learning provides a critical underpinning for subsequent social and academic success (Shonkoff & Phillips 2000). For example, the Longitudinal Study of Australian Children (LSAC) identified that 4–5 year olds who had not participated in educational programs prior to school performed less well on measures of early literacy and numeracy (Harrison & Ungerer 2005). Issues around social capital building recognise that a focus on the early years, particularly for socially disadvantaged families, reaps long-term benefits in terms of improved educational outcomes, increased economic self-sufficiency, reduced crime, and improved family relationships and health (Bruner 2004; Karoly et al. 1998; Lynch 2004; Schweinhart 2005). Issues relating to families include circumstances associated with social disadvantage, child protection and disability. Martin (2003) found that the childcare system in Australia returned over $1.86 per dollar spent to the government’s ‘bottom line’ through increased taxation revenue and reduced social assistance outlays. Martin also recognised the potential for such investment to have a ripple effect through society and, consequently, to facilitate social capital building. The Australian Government’s Stronger Families and Communities Strategy and the NSW Department of Community Services Early Intervention Program both have welfare and social reform agendas, but little attention has been given to financial and social return on investment.
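
    As a rough illustration of the return-on-investment arithmetic behind the Martin (2003) figure cited above, the sketch below computes gross and net returns from the reported $1.86-per-dollar ratio; the program spend used is a hypothetical value, not a number from the study.

        # Illustrative ROI arithmetic for the Martin (2003) figure cited above.
        # The spend amount is a hypothetical example value, not from the study.
        RETURN_PER_DOLLAR = 1.86  # reported gross return per dollar of childcare spending

        def net_return(spend: float) -> float:
            """Net gain to the government's 'bottom line' for a given spend."""
            return spend * (RETURN_PER_DOLLAR - 1.0)

        spend = 100_000_000.0  # hypothetical $100M program
        print(f"Gross return: ${spend * RETURN_PER_DOLLAR:,.0f}")  # $186,000,000
        print(f"Net gain:     ${net_return(spend):,.0f}")          # $86,000,000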

    Pathology caused by persistent murine norovirus infection.

    Subclinical infection with murine norovirus (MNV) was detected in a mixed breeding group of WT and Stat1(-/-) mice with no outward evidence of morbidity or mortality. Investigations revealed the presence of an attenuated MNV variant that did not cause cytopathic effects in RAW264.7 cells or death in Stat1(-/-) mice. Histopathological analysis of tissues from WT, heterozygous and Stat1(-/-) mice revealed a surprising spectrum of lesions. An infectious molecular clone (MNV-O7) was derived directly from faeces, and sequence analysis confirmed it was a member of norovirus genogroup V. Experimental infection with MNV-O7 induced a subclinical infection with no weight loss in Stat1(-/-) or WT mice, and recapitulated the clinical and pathological picture of the naturally infected colony. Unexpectedly, by day 54 post-infection, 50% of Stat1(-/-) mice had cleared MNV-O7. In contrast, all WT mice remained persistently infected. Most significantly, persistence was associated with liver lesions in all the subclinically infected WT mice. These data confirmed that long-term persistence in WT mice is established with specific variants of MNV and that, despite a subclinical presentation, active foci of acute inflammation persist within the liver. The data also showed that STAT1-dependent responses are not required to protect mice from lethal infection with all strains of MNV.

    Combined point of care nucleic acid and antibody testing for SARS-CoV-2 following emergence of D614G Spike Variant

    Rapid COVID-19 diagnosis in hospital is essential, though complicated by the 30–50% of nose/throat swabs that are negative by SARS-CoV-2 nucleic acid amplification testing (NAAT). Furthermore, the D614G spike mutant now dominates the pandemic, and it is unclear how serological tests designed to detect anti-Spike antibodies perform against this variant. We assess the diagnostic accuracy of combined rapid antibody point-of-care (POC) and nucleic acid assays for suspected COVID-19 disease due to either wild-type or D614G spike mutant SARS-CoV-2. The overall detection rate for COVID-19 is 79.2% (95% CI 57.8–92.9%) by rapid NAAT alone. The combined POC antibody test and rapid NAAT is unaffected by D614G and achieves very high sensitivity and specificity for COVID-19 diagnosis.
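
    The gain from combining the two tests can be made concrete with a simple probability sketch: under an "either test positive" rule, a case is missed only if both tests miss it. The calculation below assumes the two tests miss cases independently, which is an illustrative assumption of this sketch rather than a claim from the study, and the antibody sensitivity value is hypothetical.

        # Combined sensitivity of an "either-positive" rule for two tests,
        # assuming independent misses (illustrative assumption only).
        def combined_sensitivity(sens_naat: float, sens_ab: float) -> float:
            """P(at least one test positive | disease) under independence."""
            return 1.0 - (1.0 - sens_naat) * (1.0 - sens_ab)

        sens_naat = 0.792  # rapid NAAT alone, from the abstract
        sens_ab = 0.70     # hypothetical POC antibody sensitivity

        print(f"Combined sensitivity: {combined_sensitivity(sens_naat, sens_ab):.1%}")
        # -> 93.8%; a case is missed only when both tests miss it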

    Complement lectin pathway activation is associated with COVID-19 disease severity, independent of MBL2 genotype subgroups

    Introduction: While complement is a contributor to disease severity in severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections, all three complement pathways might be activated by the virus. Lectin pathway activation occurs through different pattern recognition molecules, including mannan-binding lectin (MBL), a protein shown to interact with SARS-CoV-2 proteins. However, the exact role of lectin pathway activation and its key pattern recognition molecule MBL in COVID-19 is still not fully understood.
    Methods: We therefore investigated activation of the lectin pathway in two independent cohorts of SARS-CoV-2-infected patients, while also analysing MBL protein levels and potential effects of the six major single nucleotide polymorphisms (SNPs) found in the MBL2 gene on COVID-19 severity and outcome.
    Results: We show that the lectin pathway is activated in acute COVID-19, as indicated by the correlation between COVID-19 severity and levels of the complement activation products MASP-1/C1-INH complex (p=0.0011) and C4d (p<0.0001). Despite this, genetic variations in MBL2 are not associated with susceptibility to SARS-CoV-2 infection or with disease outcomes such as mortality and the development of Long COVID.
    Conclusion: Activation of the MBL-mediated lectin pathway (MBL-LP) plays only a minor role in COVID-19 pathogenesis, since no clinically meaningful, consistent associations with disease outcomes were noted.
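
    The severity association reported in the Results can be illustrated with a generic rank-correlation test; the abstract does not state which statistical method the authors used, and the marker values below are invented for the example.

        # Generic illustration: association between a complement activation
        # marker and an ordinal COVID-19 severity grade. All values invented.
        from scipy.stats import spearmanr

        masp1_c1inh = [12.1, 18.4, 9.7, 25.3, 30.8, 14.2, 22.5, 28.0]  # marker level (arbitrary units)
        severity = [1, 2, 1, 3, 4, 2, 3, 4]                            # ordinal severity grade

        rho, p = spearmanr(masp1_c1inh, severity)
        print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")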

    The evolving SARS-CoV-2 epidemic in Africa: Insights from rapidly expanding genomic surveillance

    INTRODUCTION: Investment in Africa over the past year with regard to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) sequencing has led to a massive increase in the number of sequences, which, to date, exceeds 100,000 sequences generated to track the pandemic on the continent. These sequences have profoundly affected how public health officials in Africa have navigated the COVID-19 pandemic.
    RATIONALE: We demonstrate how the first 100,000 SARS-CoV-2 sequences from Africa have helped monitor the epidemic on the continent, how genomic surveillance expanded over the course of the pandemic, and how we adapted our sequencing methods to deal with an evolving virus. Finally, we also examine how viral lineages have spread across the continent in a phylogeographic framework to gain insights into the underlying temporal and spatial transmission dynamics for several variants of concern (VOCs).
    RESULTS: Our results indicate that the number of countries in Africa that can sequence the virus within their own borders is growing and that this is coupled with a shorter turnaround time from the time of sampling to sequence submission. Ongoing evolution necessitated the continual updating of primer sets, and, as a result, eight primer sets were designed in tandem with viral evolution and used to ensure effective sequencing of the virus. The pandemic unfolded through multiple waves of infection that were each driven by distinct genetic lineages, with B.1-like ancestral strains associated with the first pandemic wave of infections in 2020. Successive waves on the continent were fueled by different VOCs, with Alpha and Beta cocirculating in distinct spatial patterns during the second wave and Delta and Omicron affecting the whole continent during the third and fourth waves, respectively. Phylogeographic reconstruction points toward distinct differences in viral importation and exportation patterns associated with the Alpha, Beta, Delta, and Omicron variants and subvariants, when considering both Africa versus the rest of the world and viral dissemination within the continent. Our epidemiological and phylogenetic inferences therefore underscore the heterogeneous nature of the pandemic on the continent and highlight key insights and challenges, for instance, recognizing the limitations of low testing proportions. We also highlight the early warning capacity that genomic surveillance in Africa has had for the rest of the world with the detection of new lineages and variants, the most recent being the characterization of various Omicron subvariants.
    CONCLUSION: Sustained investment for diagnostics and genomic surveillance in Africa is needed as the virus continues to evolve. This is important not only to help combat SARS-CoV-2 on the continent but also because it can be used as a platform to help address the many emerging and reemerging infectious disease threats in Africa. In particular, capacity building for local sequencing within countries or within the continent should be prioritized because this is generally associated with shorter turnaround times, providing the most benefit to local public health authorities tasked with pandemic response and mitigation and allowing for the fastest reaction to localized outbreaks. These investments are crucial for pandemic preparedness and response and will serve the health of the continent well into the 21st century.
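
    One surveillance metric highlighted above, turnaround time from sampling to sequence submission, is simple to compute from sequence metadata. A minimal sketch follows; the column names and the tiny dataset are hypothetical, not the study's data.

        # Turnaround time (days from collection to submission) per sequence,
        # compared between locally and externally sequenced samples.
        # Column names and values are hypothetical.
        import pandas as pd

        meta = pd.DataFrame({
            "strain": ["A/2021", "B/2021", "C/2021"],
            "date_collected": ["2021-06-01", "2021-06-10", "2021-07-02"],
            "date_submitted": ["2021-06-20", "2021-08-01", "2021-07-15"],
            "sequenced_locally": [True, False, True],
        })

        for col in ("date_collected", "date_submitted"):
            meta[col] = pd.to_datetime(meta[col])

        meta["turnaround_days"] = (meta["date_submitted"] - meta["date_collected"]).dt.days
        print(meta.groupby("sequenced_locally")["turnaround_days"].median())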