
    Primers used for the gene-driven screen and mouse Foxf2 sequencing.

    Primer sets Exon1a-1e were used to screen the first exon and flanking regions; primer set Exon2a was used to screen exon 2. * Primer set Exon1e replaced Exon1d for the screen of the MRC archive.

    Summary of exercises that ChatGPT did not solve.

    ChatGPT failed to solve 5 of the exercises within 10 attempts. This table summarizes the characteristics of these exercises and briefly describes the complications ChatGPT faced when attempting to solve them.

    Example prompt for a programming exercise delivered to ChatGPT.

    Example prompt for a programming exercise delivered to ChatGPT.

    Phenotype of a homozygous Foxf2 W174R pup compared with wildtype.

    A, A noticeable difference in gross size is visible in the mutant pup by 3 days of age. B and C, In contrast to the homozygous knockout, homozygous Foxf2 W174R individuals appear normal at birth, although the amount of milk consumed (arrow) is reduced compared with wildtype littermates.

    Summary of interactions between the human user and ChatGPT.

    For the exercises that ChatGPT failed to solve on the first attempt, we categorized the subsequent interactions between the human user and the model. This table indicates the frequency of each interaction type.

    Recommendations for using LLMs to generate code.

    Computer programming is a fundamental tool for life scientists, allowing them to carry out essential research tasks. However, despite various educational efforts, learning to write code can be a challenging endeavor for students and researchers in life-sciences disciplines. Recent advances in artificial intelligence have made it possible to translate human-language prompts to functional code, raising questions about whether these technologies can aid (or replace) life scientists’ efforts to write code. Using 184 programming exercises from an introductory-bioinformatics course, we evaluated the extent to which one such tool, OpenAI’s ChatGPT, could successfully complete programming tasks. ChatGPT solved 139 (75.5%) of the exercises on its first attempt. For the remaining exercises, we provided natural-language feedback to the model, prompting it to try different approaches. After 7 or fewer attempts, ChatGPT solved 179 (97.3%) of the exercises. These findings have implications for life-sciences education and research. Instructors may need to adapt their pedagogical approaches and assessment techniques to account for these new capabilities that are available to the general public. For some programming tasks, researchers may be able to work in collaboration with machine-learning models to produce functional code.
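
    The study’s iterative workflow (prompt the model, test the returned code, and feed failures back as natural-language feedback for up to 10 attempts) was carried out by a human user interacting with ChatGPT. The sketch below shows how a similar loop could be automated; it is a minimal illustration, not the authors’ pipeline. The model name, the extract_code and run_tests helpers, and the assumption that each exercise ships with a test command that exits nonzero on failure are all placeholders.

```python
import re
import subprocess
import tempfile
from pathlib import Path

from openai import OpenAI  # assumes the openai Python package (v1+) and an API key in the environment

client = OpenAI()
MAX_ATTEMPTS = 10  # the study allowed up to 10 attempts per exercise


def extract_code(reply_text: str) -> str:
    """Pull the first fenced code block out of a chat reply; fall back to the raw text."""
    match = re.search(r"```(?:python)?\n(.*?)```", reply_text, re.DOTALL)
    return match.group(1) if match else reply_text


def run_tests(code: str, test_command: list[str]) -> subprocess.CompletedProcess:
    """Write the candidate solution to a temporary file and run the exercise's automated tests."""
    solution = Path(tempfile.mkdtemp()) / "solution.py"
    solution.write_text(code)
    return subprocess.run(test_command + [str(solution)], capture_output=True, text=True)


def evaluate_exercise(prompt: str, test_command: list[str]) -> int | None:
    """Return the attempt number on which the tests passed, or None if every attempt failed."""
    messages = [{"role": "user", "content": prompt}]
    for attempt in range(1, MAX_ATTEMPTS + 1):
        response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
        answer = response.choices[0].message.content
        result = run_tests(extract_code(answer), test_command)
        if result.returncode == 0:
            return attempt
        # Feed the failure output back to the model, mirroring the natural-language
        # feedback the study describes providing after unsuccessful attempts.
        messages += [
            {"role": "assistant", "content": answer},
            {"role": "user",
             "content": f"The tests failed with this output:\n{result.stdout}\n{result.stderr}\n"
                        "Please try a different approach."},
        ]
    return None
```

    Because the conversation history grows with each failed attempt, the model sees the original exercise prompt, its earlier answers, and the latest test output whenever it retries, which roughly mirrors the feedback loop described above.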

    Number of Bard iterations per exercise.

    For 66 exercise prompts, we gave Google Bard up to 10 attempts at generating a code solution that successfully passed the tests. The counts above each bar represent the number of exercises that required a particular number of attempts.

    Comparison of code-solution lengths for instructor solutions versus ChatGPT solutions.

    This plot illustrates the relationship between instructor and ChatGPT solution lengths for each exercise, measured as A) the number of characters and B) the number of lines of code, after removing comment lines. The dashed red line is the identity line.

    Lines of Python code per instructor solution.

    Course instructors provided a solution for each exercise. This plot illustrates the number of lines of code for each solution, after removing comment lines.
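
    Both length comparisons above count characters and lines of code only after removing comment lines. The sketch below shows one way that preprocessing could be done; it is illustrative only, assumes Python solution files, treats only whole-line # comments as comments (inline comments and docstrings would need extra handling), and uses hypothetical file paths.

```python
from pathlib import Path


def strip_comment_lines(source: str) -> list[str]:
    """Drop blank lines and lines containing only a '#' comment."""
    kept = []
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            kept.append(line)
    return kept


def solution_lengths(path: Path) -> tuple[int, int]:
    """Return (character count, line count) for a solution file, ignoring comment lines."""
    lines = strip_comment_lines(path.read_text())
    return sum(len(line) for line in lines), len(lines)


# Example usage with hypothetical paths for one exercise's two solutions.
for label, path in [("instructor", Path("solutions/instructor/exercise_01.py")),
                    ("chatgpt", Path("solutions/chatgpt/exercise_01.py"))]:
    chars, loc = solution_lengths(path)
    print(f"{label}: {chars} characters, {loc} lines of code")
```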