Comparison of Three Programming Error Measures for Explaining Variability in CS1 Grades

Valdemar Švábenský, Maciej Pankiewicz, Jiayi Zhang, Elizabeth B. Cloude, Ryan S. Baker, Eric Fouh

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Abstract

Programming courses can be challenging for first-year university students, especially those without prior coding experience. Students initially struggle with code syntax, but as more advanced topics are introduced across a semester, the difficulty in learning to program shifts to learning computational thinking (e.g., debugging strategies). This study examined the relationships between students' rate of programming errors and their grades on two exams. Using an online integrated development environment, data were collected from 280 students in a Java programming course. The course had two parts. The first focused on introductory procedural programming and culminated in exam 1, while the second covered more complex topics and object-oriented programming and ended with exam 2. To measure students' programming abilities, 51,095 code snapshots were collected while students completed assignments that were autograded based on unit tests. Compiler and runtime errors were extracted from the snapshots, and three measures (Error Count, Error Quotient, and Repeated Error Density) were compared to identify which best explains variability in exam grades. Models using Error Quotient outperformed models using the other two measures, both in explained variability in grades and in the Bayesian Information Criterion. Compiler errors were significant predictors of exam 1 grades but not exam 2 grades; only runtime errors significantly predicted exam 2 grades. The findings indicate that Error Quotient computed over multiple error types (compiler and runtime) may be a better measure of students' introductory programming abilities, though it still does not explain most of the observed variability.
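Since this record only names the three measures, a minimal sketch of the best-performing one, Error Quotient, may help clarify how such a measure is computed from consecutive code snapshots. The Snapshot class, the function name, and the scoring constants (8 points when both snapshots in a pair end in an error, 3 more when the error type matches, normalized by the maximum pair score of 11) follow the commonly cited formulation of Jadud's Error Quotient for compiler errors; they are illustrative assumptions, not the exact implementation used in this paper, which also extends the analysis to runtime errors.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Snapshot:
    """One compilation event from a student's session (illustrative fields)."""
    has_error: bool            # did this snapshot end in an error?
    error_type: Optional[str]  # e.g. "cannot find symbol"; None if no error

def error_quotient(snapshots: list[Snapshot]) -> float:
    """Jadud-style Error Quotient: the mean normalized score over all
    consecutive pairs of snapshots, ranging from 0 (no consecutive errors)
    to 1 (every pair repeats the same error)."""
    pairs = list(zip(snapshots, snapshots[1:]))
    if not pairs:
        return 0.0  # a session with fewer than two snapshots has no pairs
    total = 0.0
    for first, second in pairs:
        score = 0
        if first.has_error and second.has_error:
            score += 8  # both snapshots in the pair end in an error
            if first.error_type == second.error_type:
                score += 3  # the same error persists across the pair
        total += score / 11  # normalize by the maximum pair score
    return total / len(pairs)

# Example: a student stuck on the same compiler error before fixing it.
session = [
    Snapshot(True, "cannot find symbol"),
    Snapshot(True, "cannot find symbol"),
    Snapshot(False, None),
]
print(error_quotient(session))  # (11/11 + 0/11) / 2 = 0.5
```

For contrast, Error Count would simply tally the snapshots with has_error set; a pair-based measure such as Error Quotient can instead separate persistent struggling with one error from occasional unrelated slips.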

Original language: English
Title of host publication: ITiCSE 2024 - Proceedings of the 2024 Conference on Innovation and Technology in Computer Science Education
Publisher: ACM
Pages: 87-93
Number of pages: 7
Volume: 1
ISBN (Electronic): 9798400706004
DOIs
Publication status: Published - 3 Jul 2024
Publication type: A4 Article in conference proceedings
Event: Innovation and Technology in Computer Science Education - Milan, Italy
Duration: 8 Jul 2024 - 10 Jul 2024

Publication series

Name: Annual Conference on Innovation & Technology in Computer Science Education
ISSN (Electronic): 1942-647X

Conference

Conference: Innovation and Technology in Computer Science Education
Country/Territory: Italy
City: Milan
Period: 8/07/24 - 10/07/24

Keywords

  • computer science education
  • introduction to programming
  • introductory programming
  • novice programming
  • programming education

Publication forum classification

  • Publication forum level 1

ASJC Scopus subject areas

  • Management of Technology and Innovation
  • Education
