
    Is Personalized Learning Meeting Its Productivity Promise? Early Lessons From Pioneering Schools

    Blending computer-based and teacher-led instruction promises to help schools meet students' individual needs by organizing and prioritizing staff and technology in more productive ways. However, this fiscal analysis of eight new charter schools that implemented personalized learning this year finds that early difficulty in forecasting enrollment and revenue can undermine implementation of the model. As a result of missed enrollment and revenue projections:
    - The schools spent less on technology and more on personnel than planned: instead of a combined $1.7 million on technology in the early stages, they spent just $650,000.
    - Student-to-computer ratios were higher, and schools spent less than planned on instructional and performance-reporting software.
    - Projected budget deficits in five of the schools threaten their ability to sustain operations on public funding.
    Among the brief's recommendations for those hoping to implement personalized-learning models in the future:
    - Invest in student recruitment efforts up front to ensure enrollment targets are met.
    - Develop a "worst-case scenario" budget in which fundraising and enrollment estimates fall 20-25 percent below target.
    - Manage contracts proactively: be explicit about needs, establish performance requirements, and negotiate trial periods to make sure products meet the school's needs.
    The eight personalized-learning schools included in this analysis were chosen to receive Next Generation Learning Challenges (NGLC) competitive start-up grants. CRPE is midway through a study of twenty personalized-learning schools that received NGLC grants. The study examines how the schools allocate their resources, how they manage the new costs of technology, and whether they can become financially sustainable on public revenues. CRPE will continue to track spending in all twenty schools this year and publish its findings next spring. This study is funded by the Bill & Melinda Gates Foundation.
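
    As a rough illustration of the "worst-case scenario" budgeting the brief recommends, the sketch below recomputes a projected balance with fundraising and enrollment cut 20-25 percent below target; all figures and parameter names are hypothetical placeholders, not data from the CRPE study.

```python
# Hypothetical worst-case budget check, following the brief's 20-25 percent rule of thumb.
# All figures are illustrative placeholders, not data from the CRPE study.

def worst_case_balance(enrollment, per_pupil_revenue, fundraising, fixed_costs,
                       variable_cost_per_pupil, shortfall=0.25):
    """Recompute the budget balance with enrollment and fundraising cut by `shortfall`."""
    low_enrollment = enrollment * (1 - shortfall)
    revenue = low_enrollment * per_pupil_revenue + fundraising * (1 - shortfall)
    costs = fixed_costs + low_enrollment * variable_cost_per_pupil
    return revenue - costs

for shortfall in (0.20, 0.25):
    balance = worst_case_balance(enrollment=300, per_pupil_revenue=9_000,
                                 fundraising=400_000, fixed_costs=1_800_000,
                                 variable_cost_per_pupil=4_500, shortfall=shortfall)
    print(f"{int(shortfall * 100)}% shortfall -> projected balance: {balance:,.0f} dollars")
```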

    Making School Choice Work Series: How Parents Experience Public School Choice

    A growing number of cities now provide a range of public school options for families to choose from. Choosing a school can be one of the most stressful decisions parents make on behalf of their child. Getting access to the right public school will determine their child's future success. How are parents faring in cities where choice is widely available? This report answers this question by examining how parents' experiences with school choice vary across eight "high-choice" cities: Baltimore, Cleveland, Denver, Detroit, Indianapolis, New Orleans, Philadelphia, and Washington, D.C. Our findings suggest parents are taking advantage of the chance to choose a non-neighborhood-based public school option for their child, but there is more work to be done to ensure choice works for all families.

    Irregular speech rate dissociates auditory cortical entrainment, evoked responses, and frontal alpha

    The entrainment of slow rhythmic auditory cortical activity to the temporal regularities in speech is considered to be a central mechanism underlying auditory perception. Previous work has shown that entrainment is reduced when the quality of the acoustic input is degraded, but has also linked rhythmic activity at similar time scales to the encoding of temporal expectations. To understand these bottom-up and top-down contributions to rhythmic entrainment, we manipulated the temporal predictive structure of speech by parametrically altering the distribution of pauses between syllables or words, thereby rendering the local speech rate irregular while preserving intelligibility and the envelope fluctuations of the acoustic signal. Recording EEG activity in human participants, we found that this manipulation did not alter neural processes reflecting the encoding of individual sound transients, such as evoked potentials. However, the manipulation significantly reduced the fidelity of auditory delta (but not theta) band entrainment to the speech envelope. It also reduced left frontal alpha power, and this alpha reduction was predictive of the reduced delta entrainment across participants. Our results show that rhythmic auditory entrainment in the delta and theta bands reflects functionally distinct processes. Furthermore, they reveal that delta entrainment is under top-down control and likely reflects prefrontal processes that are sensitive to acoustical regularities rather than the bottom-up encoding of acoustic features.
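
    As an illustration of the entrainment measure discussed here, the sketch below estimates band-limited phase coherence between a toy EEG channel and a speech envelope in the delta and theta bands; the sampling rate, filter settings, and simulated signals are assumptions for the example, not the study's recordings or exact analysis pipeline.

```python
# Illustrative "entrainment" estimate: phase coherence between an EEG channel and
# the speech envelope in a given frequency band. All signals and settings are
# assumed toy values, not the study's data or parameters.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 250                                    # Hz, assumed common sampling rate
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / fs)
envelope = np.abs(np.sin(2 * np.pi * 2.0 * t)) + 0.1 * rng.standard_normal(t.size)
eeg = np.roll(envelope, 25) + rng.standard_normal(t.size)   # toy "brain" signal, ~100 ms lag

def bandpass(x, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def phase_coherence(x, y, lo, hi, fs):
    """Consistency of the band-limited phase lag between x and y (0 = none, 1 = perfect)."""
    px = np.angle(hilbert(bandpass(x, lo, hi, fs)))
    py = np.angle(hilbert(bandpass(y, lo, hi, fs)))
    return np.abs(np.mean(np.exp(1j * (px - py))))

print("delta (1-4 Hz):", phase_coherence(eeg, envelope, 1, 4, fs))
print("theta (4-8 Hz):", phase_coherence(eeg, envelope, 4, 8, fs))
```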

    Frontal top-down signals increase coupling of auditory low-frequency oscillations to continuous speech in human listeners

    Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1 and 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3 and 4], and this entrainment increases with intelligibility [5]. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify the top-down signals driving this coupling more strongly during intelligible speech than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception.
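
    Transfer entropy, the causal connectivity measure named in the abstract, can be estimated in its simplest binned form as sketched below; the binning, lag, and toy signals are assumptions for illustration, not the MEG analysis itself.

```python
# Toy binned transfer entropy TE(X -> Y): how much the past of X improves
# prediction of Y beyond Y's own past. Binning and lag are illustrative choices.
import numpy as np

def transfer_entropy(x, y, bins=8, lag=1):
    x = np.digitize(x, np.histogram_bin_edges(x, bins))     # discretize to bin indices
    y = np.digitize(y, np.histogram_bin_edges(y, bins))
    y_future, y_past, x_past = y[lag:], y[:-lag], x[:-lag]

    def prob(*cols):
        """Joint probability table estimated by counting co-occurrences."""
        keys, counts = np.unique(np.column_stack(cols), axis=0, return_counts=True)
        return {tuple(k): c / len(cols[0]) for k, c in zip(keys, counts)}

    p_xyz = prob(y_future, y_past, x_past)
    p_yz = prob(y_past, x_past)
    p_xy = prob(y_future, y_past)
    p_y = prob(y_past)
    te = 0.0
    for (yf, yp, xp), p in p_xyz.items():
        # TE = sum p(yf, yp, xp) * log2[ p(yf | yp, xp) / p(yf | yp) ]
        te += p * np.log2(p * p_y[(yp,)] / (p_yz[(yp, xp)] * p_xy[(yf, yp)]))
    return te

rng = np.random.default_rng(1)
x = rng.standard_normal(20_000)
y = np.roll(x, 1) + 0.5 * rng.standard_normal(20_000)       # y driven by the past of x
print("TE(x->y):", transfer_entropy(x, y))                   # clearly above zero
print("TE(y->x):", transfer_entropy(y, x))                   # near zero
```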

    Comprehensive Model for Sustaining Community Projects

    Local sustainability is a goal from the first day of any community-based project. A family literacy project team conceptualized a comprehensive model for sustainability that includes six strategies. The two community sites are documented as successful beyond the term of federal funding. They are housed, staffed, and funded for program delivery to clients. The site directors share their knowledge, experience, insights, results, and recommendations. The best results have come from community investment and grant writing. The biggest challenge has been local fundraising. Marketing, fee-for-service, and 501(c)(3) status have contributed positively.

    Simple acoustic features can explain phoneme-based predictions of cortical responses to speech

    When we listen to speech, we have to make sense of a waveform of sound pressure. Hierarchical models of speech perception assume that, to extract semantic meaning, the signal is transformed into unknown, intermediate neuronal representations. Traditionally, studies of such intermediate representations are guided by linguistically defined concepts, such as phonemes. Here, we argue that in order to arrive at an unbiased understanding of the neuronal responses to speech, we should focus instead on representations obtained directly from the stimulus. We illustrate our view with a data-driven, information theoretic analysis of a dataset of 24 young, healthy humans who listened to a 1 h narrative while their magnetoencephalogram (MEG) was recorded. We find that two recent results (the improved performance of an encoding model in which annotated linguistic and acoustic features were combined, and the decoding of phoneme subgroups from phoneme-locked responses) can be explained by an encoding model that is based entirely on acoustic features. These acoustic features capitalize on acoustic edges and outperform Gabor-filtered spectrograms, which can explicitly describe the spectrotemporal characteristics of individual phonemes. By replicating our results in publicly available electroencephalography (EEG) data, we conclude that models of brain responses based on linguistic features can serve as excellent benchmarks. However, we believe that in order to further our understanding of human cortical responses to speech, we should also explore low-level and parsimonious explanations for apparent high-level phenomena.
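
    A minimal sketch of an acoustic-feature encoding model of the kind described here: ridge regression from time-lagged acoustic features (the envelope plus its rectified derivative as a crude "acoustic edge" feature) to a recorded channel. Feature choices, lags, regularization, and the simulated data are assumptions, not the authors' exact model.

```python
# Toy encoding model: predict a neural channel from lagged acoustic features.
# The envelope and its rectified derivative ("acoustic edges") are illustrative
# features; lags, regularization, and signals are assumptions for the sketch.
import numpy as np

fs = 100                                    # Hz, assumed feature/MEG sampling rate
rng = np.random.default_rng(2)
envelope = np.convolve(rng.random(6_000), np.ones(10) / 10, mode="same")
edges = np.clip(np.diff(envelope, prepend=envelope[0]), 0, None)    # rectified derivative
meg = np.convolve(edges, np.hanning(20), mode="same") + 0.2 * rng.standard_normal(envelope.size)

def lagged_design(features, max_lag):
    """Stack each feature at lags 0..max_lag-1 into a design matrix."""
    cols = [np.roll(f, lag) for f in features for lag in range(max_lag)]
    X = np.column_stack(cols)
    X[:max_lag] = 0                          # zero out wrapped-around samples
    return X

def ridge_fit(X, y, alpha=1.0):
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

X = lagged_design([envelope, edges], max_lag=30)        # lags covering 0-290 ms
half = len(meg) // 2
w = ridge_fit(X[:half], meg[:half])                     # train on the first half
pred = X[half:] @ w                                     # predict the held-out half
print("held-out correlation:", np.corrcoef(pred, meg[half:])[0, 1])
```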

    Unambiguous determination of spin dephasing times in ZnO

    Time-resolved magneto-optics is a well-established optical pump-probe technique to generate and probe spin coherence in semiconductors. By this method, spin dephasing times T_2^* can easily be determined if their values are comparable to the available pump-probe delays. If T_2^* exceeds the laser repetition time, however, resonant spin amplification (RSA) can equally be used to extract T_2^*. We demonstrate that in ZnO these techniques have several pitfalls that can yield deceptive values for T_2^*, and we show how to avoid them. We show that the temperature dependence of the amplitude ratio of two separate spin species can easily be misinterpreted as a strongly temperature-dependent T_2^* of a single spin ensemble, while the two spin species have T_2^* values that are nearly independent of temperature. Additionally, consecutive pump pulses can significantly diminish the spin polarization remaining from previous pump pulses. While this barely affects T_2^* values extracted from delay-line scans, it results in seemingly shorter T_2^* values in RSA.
    Comment: 11 pages, 10 figures
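
    A minimal sketch of how an RSA trace can be simulated as the sum of decaying, precessing spin packets from all previous pump pulses, including a second spin species; the g-factors, T_2^* values, amplitudes, and delays below are illustrative choices, not fitted ZnO parameters.

```python
# Toy resonant spin amplification (RSA) trace: the Kerr signal just before a pump
# pulse is the sum of decaying, precessing contributions from all previous pulses.
# g-factors, T2* values, and amplitudes are illustrative, not ZnO fit results.
import numpy as np

MU_B = 9.274e-24           # Bohr magneton, J/T
HBAR = 1.0546e-34          # reduced Planck constant, J*s

def rsa_trace(B, g, T2, amp, t_rep=12.5e-9, delay=-50e-12, n_pulses=400):
    """Signal at a fixed, slightly negative pump-probe delay versus magnetic field B."""
    omega_L = g * MU_B * B[:, None] / HBAR             # Larmor frequency for each field
    t = delay + t_rep * np.arange(1, n_pulses + 1)     # age of each previous pulse's packet
    return amp * np.sum(np.exp(-t / T2) * np.cos(omega_L * t), axis=1)

B = np.linspace(-50e-3, 50e-3, 2001)                   # field sweep in tesla
signal = (rsa_trace(B, g=1.96, T2=20e-9, amp=1.0)      # electron-like species near g = 1.96
          + rsa_trace(B, g=1.98, T2=15e-9, amp=0.5))   # second, hypothetical spin species
print("fields of the three largest RSA maxima:", B[np.argsort(signal)[-3:]])
```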

    Efficient Multi-Task Scene Analysis with RGB-D Transformers

    Scene analysis is essential for enabling autonomous systems, such as mobile robots, to operate in real-world environments. However, obtaining a comprehensive understanding of the scene requires solving multiple tasks, such as panoptic segmentation, instance orientation estimation, and scene classification. Solving these tasks given limited computing and battery capabilities on mobile platforms is challenging. To address this challenge, we introduce an efficient multi-task scene analysis approach, called EMSAFormer, that uses an RGB-D Transformer-based encoder to simultaneously perform the aforementioned tasks. Our approach builds upon the previously published EMSANet. However, we show that the dual CNN-based encoder of EMSANet can be replaced with a single Transformer-based encoder. To achieve this, we investigate how information from both RGB and depth data can be effectively incorporated in a single encoder. To accelerate inference on robotic hardware, we provide a custom NVIDIA TensorRT extension enabling highly optimized inference for our EMSAFormer approach. Through extensive experiments on the commonly used indoor datasets NYUv2, SUNRGB-D, and ScanNet, we show that our approach achieves state-of-the-art performance while still enabling inference at up to 39.1 FPS on an NVIDIA Jetson AGX Orin 32 GB.
    Comment: To be published in IEEE International Joint Conference on Neural Networks (IJCNN) 202
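
    The abstract does not detail how RGB and depth are combined in the single encoder; the sketch below shows one plausible arrangement (an assumption, not the published EMSAFormer design): separate patch projections for RGB and depth whose token embeddings are summed before a shared Transformer encoder.

```python
# One possible way to feed RGB + depth into a single Transformer encoder:
# project RGB and depth patches separately and sum the token embeddings.
# Illustrative sketch only, not the published EMSAFormer architecture.
import torch
import torch.nn as nn

class RGBDEncoder(nn.Module):
    def __init__(self, embed_dim=256, patch=16, num_layers=4, heads=8):
        super().__init__()
        self.rgb_patch = nn.Conv2d(3, embed_dim, kernel_size=patch, stride=patch)
        self.depth_patch = nn.Conv2d(1, embed_dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(embed_dim, heads, dim_feedforward=4 * embed_dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, rgb, depth):
        tokens = self.rgb_patch(rgb) + self.depth_patch(depth)    # fuse the two modalities
        tokens = tokens.flatten(2).transpose(1, 2)                # (B, N_patches, C)
        return self.encoder(tokens)        # shared features for downstream task heads
                                           # (positional embeddings omitted for brevity)

rgb = torch.randn(1, 3, 480, 640)
depth = torch.randn(1, 1, 480, 640)
features = RGBDEncoder()(rgb, depth)
print(features.shape)                      # torch.Size([1, 1200, 256]) with 16x16 patches
```

    Summing the two patch embeddings keeps the token count, and therefore the attention cost, the same as for RGB alone, which is one way a single shared encoder could stay fast enough for a Jetson-class device.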

    Converging evidence for the processing costs associated with ambiguous quantifier comprehension.

    Traditional neuroanatomic models of language comprehension have emphasized a core language network situated in peri-Sylvian cortex. More recent evidence appears to extend the neuroanatomic network beyond peri-Sylvian cortex to encompass other aspects of sentence processing. In this study, we evaluate the neuroanatomic basis for processing the ambiguity in doubly-quantified sentences. For example, a sentence like "All the dogs jumped in a lake" can be interpreted with a collective interpretation (e.g., several dogs jumping into a single lake) or a distributive interpretation (e.g., several dogs each jumping into a different lake). In Experiment 1, we used BOLD fMRI to investigate neuroanatomic recruitment by young adults during the interpretation of ambiguous doubly-quantified sentences in a sentence-picture verification task. We observed that young adults exhibited a processing cost associated with interpreting ambiguous sentences, and this cost was related to frontal and parietal cortex recruitment. In Experiment 2, we investigated ambiguous sentence processing with the identical materials in non-aphasic patients with behavioral variant frontotemporal dementia (bvFTD), who have frontal cortex disease and executive and decision-making limitations. bvFTD patients were insensitive to the ambiguity associated with doubly-quantified sentences, and this insensitivity was related to the magnitude of their frontal cortex disease. These studies provide converging evidence that cortical regions extending beyond peri-Sylvian cortex help support the processing costs associated with the interpretation of ambiguous doubly-quantified sentences.