4 research outputs found

    The Baby Elmo Program: Improving teen father-child interactions within juvenile justice facilities

    No full text
    The aim of the Baby Elmo Program is to establish a low-cost, sustainable parenting and structured-visitation program for non-custodial incarcerated teen parents. The program is taught and supervised by probation staff in juvenile detention facilities; unlike traditional programs, the intervention is based not on increasing the teen's abstract parenting knowledge but on building a relationship between the teen and his child. The sessions target the interactional quality of the relationship by introducing techniques that enhance the relationship, communication, and socio-emotional engagement. Because the intervention is conducted in the context of parent-child visits, it fosters hands-on learning and increases the opportunity for contact between these young parents and their children, a benefit in itself. Twenty father-infant dyads, with infants ranging in age from 6 to 36 months, participated in this preliminary evaluation of the program. Individual growth-curve analyses showed significant gains on five of six measures of emotional responsiveness, with infant age as a significant covariate. These results indicate improvements in positive, high-quality interaction and communication during sessions between infants and their incarcerated parents; this increase in the interactional quality of the relationship makes it more likely that the incarcerated teen and child will form and maintain a positive relationship with one another.
    Keywords: Juvenile justice; Parental incarceration; Parent-child interactions

    StarCoder: may the source be with you!

    Full text link
    The BigCode community, an open scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder and StarCoderBase: 15.5B-parameter models with 8K context length, infilling capabilities, and fast large-batch inference enabled by multi-query attention. StarCoderBase is trained on 1 trillion tokens sourced from The Stack, a large collection of permissively licensed GitHub repositories with inspection tools and an opt-out process. We fine-tuned StarCoderBase on 35B Python tokens, resulting in StarCoder. We perform the most comprehensive evaluation of Code LLMs to date and show that StarCoderBase outperforms every open Code LLM that supports multiple programming languages and matches or outperforms the OpenAI code-cushman-001 model. Furthermore, StarCoder outperforms every model that is fine-tuned on Python, can be prompted to achieve 40% pass@1 on HumanEval, and still retains its performance on other programming languages. We take several important steps towards a safe open-access model release, including an improved PII redaction pipeline and a novel attribution tracing tool, and make the StarCoder models publicly available under a more commercially viable version of the Open Responsible AI Model license.
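
    A side note on the multi-query attention mentioned in the abstract: all query heads share a single key/value head, which shrinks the per-token KV cache by a factor of the head count and is what enables fast large-batch inference. The sketch below is a minimal NumPy illustration of that mechanism under assumed toy shapes, not the StarCoder implementation; the function name and dimensions are hypothetical.

        import numpy as np

        def softmax(x, axis=-1):
            x = x - x.max(axis=axis, keepdims=True)
            e = np.exp(x)
            return e / e.sum(axis=axis, keepdims=True)

        def multi_query_attention(q, k, v):
            # q: (h, t, d) per-head queries; k, v: (t, d) are shared across
            # all h heads -- the single shared K/V head is the multi-query idea.
            d = q.shape[-1]
            scores = q @ k.T / np.sqrt(d)       # (h, t, t) scaled dot products
            weights = softmax(scores, axis=-1)  # per-head attention weights
            return weights @ v                  # (h, t, d) attended values

        # Toy usage: 4 heads, 8 tokens, head dimension 16.
        rng = np.random.default_rng(0)
        h, t, d = 4, 8, 16
        out = multi_query_attention(rng.normal(size=(h, t, d)),
                                    rng.normal(size=(t, d)),
                                    rng.normal(size=(t, d)))
        print(out.shape)  # (4, 8, 16)

    Compared with standard multi-head attention, only the queries remain per-head, so the cached key/value tensors are h times smaller and decoding batches can grow accordingly.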

    Natural history notes

    No full text