Darwin’s wind hypothesis: does it work for plant dispersal in fragmented habitats?
Using the wind-dispersed plant Mycelis muralis, we examined how landscape fragmentation affects variation in seed traits contributing to dispersal.
Inverse terminal velocity (Vt−1) of field-collected achenes was used as a proxy for individual seed dispersal ability. We related this measure to different metrics of landscape connectivity, at two spatial scales: in a detailed analysis of eight landscapes in Spain and along a latitudinal gradient using 29 landscapes across three European regions.
In the highly patchy Spanish landscapes, seed Vt−1 increased significantly with increasing connectivity. A common garden experiment suggested that differences in Vt−1 may be in part genetically based. Vt−1 also increased with landscape occupancy, a coarser measure of connectivity, at the much broader (European) scale. Finally, Vt−1 increased along a south–north latitudinal gradient.
Our results for M. muralis are consistent with ‘Darwin’s wind dispersal hypothesis’ that high cost of dispersal may select for lower dispersal ability in fragmented landscapes, as well as with the ‘leading edge hypothesis’ that most recently colonized populations harbour more dispersive phenotypes.
Integrating Personalized Parsons Problems with Multi-Level Textual Explanations to Scaffold Code Writing
Novice programmers need to write basic code as part of the learning process,
but they often face difficulties. To assist struggling students, we recently
implemented personalized Parsons problems, code puzzles in which students
arrange blocks of code into a working solution, as pop-up scaffolding. Students
found them more engaging and preferred them for learning over simply receiving
the correct answer, such as the response they might get from generative AI
tools like ChatGPT. However, a drawback of using Parsons problems
as scaffolding is that students may be able to put the code blocks in the
correct order without fully understanding the rationale of the correct
solution. As a result, the learning benefits of scaffolding are compromised.
Can we improve the understanding of personalized Parsons scaffolding by
providing textual code explanations? In this poster, we propose a design that
incorporates multiple levels of textual explanations for the Parsons problems.
This design will be used for future technical evaluations and classroom
experiments. These experiments will explore the effectiveness of adding textual
explanations to Parsons problems to improve instructional benefits.
Comment: Peer-Reviewed, Accepted for publication in Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 2 (SIGCSE 2024).
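The Parsons-problem mechanic used as scaffolding here can be sketched in a few lines of Python. This is an illustrative toy, not code from the poster: the function name and block contents are our own. The solution's code blocks are shuffled, and the learner's task is to restore the original order.

```python
import random

# Illustrative toy Parsons problem (blocks are our own, not from the poster):
# the correct solution, split into the blocks a learner would rearrange.
SOLUTION_BLOCKS = [
    "def total(numbers):",
    "    result = 0",
    "    for n in numbers:",
    "        result += n",
    "    return result",
]

def make_puzzle(blocks):
    """Shuffle the blocks to present them in mixed-up order."""
    puzzle = blocks[:]
    random.shuffle(puzzle)
    return puzzle

def is_solved(attempt, blocks=SOLUTION_BLOCKS):
    """The puzzle is solved when the blocks are back in solution order."""
    return attempt == blocks

puzzle = make_puzzle(SOLUTION_BLOCKS)
# Putting the shuffled blocks back into the solution order solves the puzzle.
print(is_solved(sorted(puzzle, key=SOLUTION_BLOCKS.index)))  # True
```

A personalized variant, as described above, would derive the solution blocks from a corrected version of the student's own incorrect attempt rather than from a fixed reference solution.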
Understanding the Effects of Using Parsons Problems to Scaffold Code Writing for Students with Varying CS Self-Efficacy Levels
Introductory programming courses aim to teach students to write code
independently. However, transitioning from studying worked examples to
generating their own code is often difficult and frustrating for students,
especially those with lower CS self-efficacy. Therefore, we
investigated the impact of using Parsons problems as a code-writing scaffold
for students with varying levels of CS self-efficacy. Parsons problems are
programming tasks where students arrange mixed-up code blocks in the correct
order. We conducted a between-subjects study with undergraduate students (N=89)
on a topic where students have limited code-writing expertise. Students were
randomly assigned to one of two conditions. Students in one condition practiced
writing code without any scaffolding, while students in the other condition
were provided with scaffolding in the form of an equivalent Parsons problem. We
found that, for students with low CS self-efficacy levels, those who received
scaffolding achieved significantly higher practice performance and in-practice
problem-solving efficiency compared to those without any scaffolding.
Furthermore, when given Parsons problems as scaffolding during practice,
students with lower CS self-efficacy were more likely to solve them. In
addition, students with higher pre-practice knowledge on the topic were more
likely to effectively use the Parsons scaffolding. This study provides evidence
for the benefits of using Parsons problems to scaffold students' code-writing
activities. It also has implications for optimizing the Parsons scaffolding
experience for students, including providing personalized and adaptive Parsons
problems based on the student's current problem-solving status.
Comment: Peer-Reviewed, Accepted for publication in the proceedings of the 2023 ACM Koli Calling International Conference on Computing Education Research.
CodeTailor: LLM-Powered Personalized Parsons Puzzles for Engaging Support While Learning Programming
Learning to program can be challenging, and providing high-quality and timely
support at scale is hard. Generative AI and its products, like ChatGPT, can
create a solution for most intro-level programming problems. However, students
might use these tools to just generate code for them, resulting in reduced
engagement and limited learning. In this paper, we present CodeTailor, a system
that leverages a large language model (LLM) to provide personalized help to
students while still encouraging cognitive engagement. CodeTailor provides a
personalized Parsons puzzle to support struggling students. In a Parsons
puzzle, students place mixed-up code blocks in the correct order to solve a
problem. A technical evaluation with previous incorrect student code snippets
demonstrated that CodeTailor could deliver high-quality (correct, personalized,
and concise) Parsons puzzles based on their incorrect code. We conducted a
within-subjects study with 18 novice programmers. Participants perceived
CodeTailor as more engaging than just receiving an LLM-generated solution (the
baseline condition). In addition, participants applied more supported elements
from the scaffolded practice to the posttest when using CodeTailor than
baseline. Overall, most participants preferred using CodeTailor versus just
receiving the LLM-generated code for learning. Qualitative observations and
interviews also provided evidence for the benefits of CodeTailor, including
thinking more about solution construction, fostering continuity in learning,
promoting reflection, and boosting confidence. We suggest future design ideas
to facilitate active learning opportunities with generative AI techniques.
Comment: Accepted to the Eleventh ACM Conference on Learning @ Scale (L@S 24) as a full paper.
Subgoals Help Students Solve Parsons Problems
We report on a study that used subgoal labels to teach students how to write while loops, with a Parsons problem as the learning assessment. Subgoal labels were used to aid learning of programming while not overloading students' cognitive abilities. We wanted to compare giving learners subgoal labels versus asking learners to generate subgoal labels. As an assessment for learning, we asked students to solve a Parsons problem – to place code segments in the correct order. We found that students who were given subgoal labels performed significantly better than the groups that did not receive subgoal labels or were asked to generate subgoal labels. We conclude that a low cognitive load assessment, Parsons problems, can be more sensitive to student learning gains than traditional code generation problems.
How Novices Use LLM-Based Code Generators to Solve CS1 Coding Tasks in a Self-Paced Learning Environment
As Large Language Models (LLMs) gain in popularity, it is important to
understand how novice programmers use them. We present a thematic analysis of
33 learners, aged 10-17, independently learning Python through 45
code-authoring tasks using Codex, an LLM-based code generator. We explore
several questions related to how learners used these code generators and
provide an analysis of the properties of the written prompts and the generated
code. Specifically, we explore (A) the context in which learners use Codex, (B)
what learners are asking from Codex, (C) properties of their prompts in terms
of relation to task description, language, and clarity, and prompt crafting
patterns, (D) the correctness, complexity, and accuracy of the AI-generated
code, and (E) how learners utilize AI-generated code in terms of placement,
verification, and manual modifications. Furthermore, our analysis reveals four
distinct coding approaches when writing code with an AI code generator: AI
Single Prompt, where learners prompted Codex once to generate the entire
solution to a task; AI Step-by-Step, where learners divided the problem into
parts and used Codex to generate each part; Hybrid, where learners wrote some
of the code themselves and used Codex to generate others; and Manual coding,
where learners wrote the code themselves. The AI Single Prompt approach
resulted in the highest correctness scores on code-authoring tasks, but the
lowest correctness scores on subsequent code-modification tasks during
training. Our results provide initial insight into how novice learners use AI
code generators and the challenges and opportunities associated with
integrating them into self-paced learning environments. We conclude with
various signs of over-reliance and self-regulation, as well as opportunities
for curriculum and tool development.Comment: 12 pages, Peer-Reviewed, Accepted for publication in the proceedings
of the 2023 ACM Koli Calling International Conference on Computing Education
Researc
The Iowa Homemaker vol.5, no.6
Table of Contents
The Thanksgiving Dinner by Barbara Dewell, page 1
Safe and Adequate Food Supply by Mildred Rodgers, page 2
Real Lace by Grace Bonnell, page 3
When in Doubt – Try Apples by Beth Bailey McLean, page 4
“In the Candle Light”, page 5
With Iowa State Home Economics Association, page 6
The Mechanical Maid by Grace Heidbreder, page 7
Girls’ 4-H Clubs, page 8
Editorial, page 9
Who’s There and Where, page 10
The Eternal Question, page 12
New Faculty Members by Virginia Reck, page 14
Birch Hall by Margaret Ericson, page 15
Recipes – Old and New by Muriel Moore, page 1
Insights from Social Shaping Theory: The Appropriation of Large Language Models in an Undergraduate Programming Course
The capability of large language models (LLMs) to generate, debug, and
explain code has sparked the interest of researchers and educators in
undergraduate programming, with many anticipating their transformative
potential in programming education. However, decisions about why and how to use
LLMs in programming education may involve more than just the assessment of an
LLM's technical capabilities. Using the social shaping of technology theory as
a guiding framework, our study explores how students' social perceptions
influence their own LLM usage. We then examine the correlation of self-reported
LLM usage with students' self-efficacy and midterm performances in an
undergraduate programming course. Triangulating data from an anonymous
end-of-course student survey (n = 158), a mid-course self-efficacy survey
(n = 158), student interviews (n = 10), self-reported LLM usage on homework, and
midterm performances, we discovered that students' use of LLMs was associated
with their expectations for their future careers and their perceptions of peer
usage. Additionally, early self-reported LLM usage in our context correlated
with lower self-efficacy and lower midterm scores, while students' perceived
over-reliance on LLMs, rather than their usage itself, correlated with
decreased self-efficacy later in the course.
Comment: Accepted to the ACM Conference on International Computing Education Research V.1 (ICER '24 Vol. 1).
Conducting Multi-Institutional Studies of Parsons Problems
Many novice programmers struggle to write code from scratch and get frustrated when their code does not work.
Parsons problems can reduce the difficulty of a coding problem by providing mixed-up blocks that the learner assembles in the correct order. Parsons problems can also include distractor blocks that are not needed in a correct solution, but which may help students learn to recognize and fix errors. Evidence indicates that students find Parsons problems engaging, easier than writing code from scratch, useful for learning patterns, and typically faster to solve than writing code from scratch with equivalent learning gains.
This working group leverages the work of the 2022 ITiCSE working group, which published an extensive literature review of Parsons problems and designed and piloted several studies based on the gaps identified by the literature review. The 2023 working group is revising, conducting, and creating new studies. We will analyze the data from these multi-institutional and multi-national studies and publish the results as well as recommendations for future working groups.
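The distractor blocks described above can be folded into a minimal Python sketch. This is a hypothetical illustration, not code from the working group's studies, and the block contents are our own: a submission is correct only if it uses the solution blocks in order and avoids every distractor.

```python
import random

# Hypothetical illustration (block contents are ours): a Parsons problem
# whose block pool mixes solution blocks with distractors that look
# plausible but belong to no correct solution.
SOLUTION = [
    "def is_even(n):",
    "    return n % 2 == 0",
]
DISTRACTORS = [
    "    return n % 2 == 1",  # plausible-looking logic error
    "def iseven(n):",         # misnamed-function variant
]

def block_pool():
    """All blocks, shuffled together, as the learner would see them."""
    pool = SOLUTION + DISTRACTORS
    random.shuffle(pool)
    return pool

def check(attempt):
    """Correct iff the solution blocks appear in order and no distractor is used."""
    if any(block in DISTRACTORS for block in attempt):
        return False
    return attempt == SOLUTION
```

Recognizing and setting aside the distractors is itself the intended exercise in error detection: the learner must decide which blocks are wrong before ordering the rest.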
A Novel Function of DELTA-NOTCH Signalling Mediates the Transition from Proliferation to Neurogenesis in Neural Progenitor Cells
A complete account of the whole developmental process of neurogenesis involves understanding a number of complex underlying molecular processes. Among them, those that govern the crucial transition from proliferative (self-replicating) to neurogenic neural progenitor (NP) cells remain largely unknown. Due to its sequential rostro-caudal gradients of proliferation and neurogenesis, the prospective spinal cord of the chick embryo is a good experimental system to study this issue. We report that the NOTCH ligand DELTA-1 is expressed in scattered cycling NP cells in the prospective chick spinal cord preceding the onset of neurogenesis. These Delta-1-expressing progenitors lie between the proliferating caudal neural plate (stem zone) and the rostral neurogenic zone (NZ) where neurons are born, and thus define a proliferation-to-neurogenesis transition zone (PNTZ). Gain- and loss-of-function experiments carried out by electroporation demonstrate that the expression of Delta-1 in individual progenitors of the PNTZ is necessary and sufficient to induce neuronal generation. The activation of NOTCH signalling by DELTA-1 in the adjacent progenitors inhibits neurogenesis and is required to maintain proliferation. However, rather than inducing cell cycle exit and neuronal differentiation by a typical lateral inhibition mechanism as in the NZ, DELTA-1/NOTCH signalling functions in a distinct manner in the PNTZ: the inhibition of NOTCH signalling arrests proliferation, but it is not sufficient to elicit neuronal differentiation. Moreover, after expressing Delta-1, PNTZ NP cells continue cycling and induce the expression of Tis21, a gene that is upregulated in neurogenic progenitors, before generating neurons. Together, these experiments unravel a novel function of DELTA–NOTCH signalling that regulates the transition from proliferation to neurogenesis in NP cells. We hypothesize that this novel function is evolutionarily conserved.
