Teaching programming (with or without AI)

In 2024, our seminars focus on the latest research into how best to teach programming at school — with and without the use of AI tools. Watch recordings, read our summary blogs, and download speakers' slides.


Generative AI in programming education: Bridging the gap from school to what lies ahead (16 April 2024)

Speaker: Brett A. Becker (University College Dublin)

Secondary school students and teachers have particular needs and face specific constraints that result in a unique landscape in terms of programming education. The rise of generative AI complicates this scene but also provides several potential opportunities. One of the greatest challenges in this arena is that while university students studying computing have a high likelihood of continuing on a path in computing, most school students choose to go into non-computing disciplines for further/higher education and/or their careers. This talk will focus on how generative AI may be effectively utilised in secondary school programming education and how it can be leveraged so that students can be best prepared for continuing their education or beginning their careers, regardless of discipline.

Dr Brett A. Becker is an Assistant Professor at University College Dublin in the School of Computer Science. His research area is computing education broadly construed, normally focusing on the psychology of programming, programming error messages, novice programmer behaviour, metacognition, and generative AI in education. In 2020, he was awarded a National Forum Teaching & Learning Research Fellowship, Ireland’s most prestigious national award in higher education.

The impact of AI tools on the student experience in programming courses: A preliminary study with an intersectional analysis approach (12 March 2024)

Speakers: Yash Tadimalla and Mary Lou Maher (University of North Carolina at Charlotte)

AI tools like ChatGPT have significant potential benefits for students: they allow students to work constructively on their own schedule and they provide an easy way to ask for help. However, before these tools can be meaningfully introduced into a course, their impact on a student’s ability to learn must be assessed and concerns that they lead to lower competence and poorer programming skills must be addressed.

Yash Tadimalla and Prof. Mary Lou Maher have performed a preliminary study to understand university students’ experiences of learning to program using AI tools. The students were asked to perform a programming task under one of three conditions:

  1. With no external help

  2. With the help of an AI chatbot

  3. With the help of a generative AI tool like GitHub Copilot

The mixed-methods study also included pre- and post-task assessments and interviews with the students about their experiences and perceptions of the tools. The students demonstrated a range of attitudes and behaviours towards the AI tools, and these preliminary findings provide insights for future research and for plans to incorporate such AI tools into courses.

Yash Tadimalla is a PhD student in the College of Computing and Informatics at UNC Charlotte, where he is pursuing an interdisciplinary degree in Computer Science and Sociology. His research explores how an individual's identity influences their interaction with and learning of technology, particularly in the domains of artificial intelligence and computer science education.

Professor Mary Lou Maher is a professor in the Department of Software and Information Systems at UNC Charlotte, and also directs the Center for Education Innovation and co-directs the Human-Centered Computing Lab. Her research spans integrated cognitive and deep learning models in AI, ethical human–AI interaction, cognitive models of curiosity for education technology, AI-based co-creativity design, and inclusive computer science education.

Can large language models that generate code help K–12 students effectively learn Python programming? (13 February 2024)

Speaker: Majeed Kazemitabaar (University of Toronto)

Large language models (LLMs) can generate Python code from natural language descriptions, which may help students learn to program by reducing frustration and increasing self-efficacy. Majeed Kazemitabaar explores whether this is true by using Codex, an LLM code generator. Majeed asked two groups of students — one with access to Codex and one without — to perform a range of coding tasks that consisted of a code-authoring part and a code-modification part. He found that the Codex students completed more tasks, scored higher, and performed better in unassisted assessments a week later, with no detriment to their performance on the manual code-modification tasks.

Majeed identified four main approaches employed by the students. The ‘AI single prompt’ approach resulted in both the highest scores on code-authoring tasks and the lowest correctness scores on code-modification tasks. This highlights both the potential and the pitfalls of LLM code generators in educational settings, and the need for balanced integration and curriculum development.

Majeed Kazemitabaar is a Computer Science PhD student at the University of Toronto, supervised by Professor Tovi Grossman, and explores the intersection of HCI and computing education. With a focus on developer and creativity support tools, his work covers tangible programming, data-driven visual programming, and the impact of LLM-powered coding assistants on computing education. His PhD thesis focuses on scaling computing education in self-paced environments.

Using generative AI to create personalised Parsons Problems and explanations (9 January 2024)

Speakers: Barbara Ericson and Xinying Hou (University of Michigan)

Parsons Problems can be used to scaffold students who struggle to write code from scratch. In a Parsons Problem, learners are given mixed-up code fragments that they must place in the correct order to solve a problem. Dr Barbara Ericson and Xinying Hou first tested Parsons Problems created from the most common student solution and found that students with lower computer science self-efficacy achieved significantly higher practice performance and in-practice problem-solving efficiency than students without Parsons Problems as scaffolding. They are now testing the use of generative AI to create a personalised Parsons Problem from a student's incorrect code, as well as different levels of explanation.
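To make the format concrete, here is a minimal sketch of what a Parsons Problem looks like in code. The fragments, function names, and seed below are illustrative assumptions, not taken from the speakers' study: a correct solution is split into lines, shuffled for the learner, and a simple check compares the learner's ordering against the original.

```python
import random

# A hypothetical mini-problem: the correct solution, split into fragments.
solution = [
    "def mean(numbers):",
    "    total = 0",
    "    for n in numbers:",
    "        total += n",
    "    return total / len(numbers)",
]

def make_problem(fragments, seed=0):
    """Present the learner with the solution's fragments in shuffled order."""
    shuffled = fragments[:]
    random.Random(seed).shuffle(shuffled)
    return shuffled

def check_answer(ordering, correct=solution):
    """The learner's ordering is correct if it restores the original sequence."""
    return ordering == correct
```

In a real interface the learner drags the shuffled fragments back into order, and the check runs on the resulting sequence; the generative AI variants the speakers describe would instead build `solution` from a student's own incorrect attempt.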

Dr Barbara Ericson is a Professor in the School of Information at the University of Michigan. She creates and studies free ebooks with innovative types of practice problems. She is a Distinguished Member of the ACM and won the 2022 SIGCSE Award for Outstanding Contributions to Computer Science Education.  

Xinying Hou is currently a third-year PhD student in Information at the University of Michigan, working with Professor Barbara Ericson. She develops innovative learning techniques to support programming learning and applies mixed methods to evaluate their effectiveness. Currently, she is investigating the use of generative AI to provide engaging, comprehensive, personalized, and high-quality scaffolding for novice programmers as they write code.