Using AI Tools to Give Learners Feedback
The challenge of feedback for facilitators
We know that for learning to be effective, learners need frequent, targeted feedback on their output.
The problem? This can be burdensome and time-consuming for an individual facilitator, especially as the number of learners grows.
Can AI tools help?
In a conversation about advanced use cases of AI for instructional design, ChatGPT suggested to me that it could help “generate targeted feedback and learning activities to help [learners] improve in areas where they need it most.” I wanted to test this theory.
I asked ChatGPT to assess a work sample from my time as an educator: a poetry analysis (the most human assignment I could think of!) of William Blake’s poem “London,” using a four-point rubric that I had written.
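For facilitators who would rather script this kind of request than paste everything into a chat window, here is a minimal sketch of the same rubric-plus-sample prompt using the OpenAI Python SDK. I used the ChatGPT interface itself, so treat the placeholder variables, the model name, and the prompt wording as illustrative assumptions rather than my exact setup.

```python
# Minimal sketch: asking the model to assess a work sample against a rubric.
# RUBRIC and WORK_SAMPLE are hypothetical placeholders for your own
# four-point rubric and the learner's submission.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = """<paste your four-point rubric here>"""
WORK_SAMPLE = """<paste the learner's poetry analysis here>"""

response = client.chat.completions.create(
    model="gpt-4o",  # any current chat model you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "Assess the learner's poetry analysis against the rubric below. "
                "Name the rubric level you would assign and explain which "
                "criteria drove that rating.\n\n" + RUBRIC
            ),
        },
        {"role": "user", "content": WORK_SAMPLE},
    ],
)

print(response.choices[0].message.content)
```

Whether you use the chat window or a script, the key input is the same: a detailed rubric alongside the work sample.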
How it went
When given a detailed rubric, ChatGPT assessed the work sample as Proficient, citing depth of analysis and mechanical errors as areas for improvement. I agreed with its assessment.
But even when pressed with follow-up prompts, ChatGPT was unable to offer meaningful paths forward for the learner based on that feedback. For example, it suggested that the learner could improve their analysis of poetic devices by “us[ing] quotes from the poem to support your analysis of how the poetic device develops the theme.” The learner had already done this.
By contrast, as an instructor, I’d offer my learners further questions about the poetic devices they had identified, like, “Based on the rest of the stanza, what kinds of things does Blake consider to make up the ‘mind-forg’d manacles,’ and why? What are the manacles chaining the people of London to?”
My takeaways (using a medical metaphor)
ChatGPT can provide learners with an initial diagnosis
At the moment, ChatGPT may be useful for giving learners a diagnosis of their current performance and a general sense of the areas they need to improve. It would be most effectively leveraged on formative assessments, before a learner submits any final work.
Don’t let ChatGPT update a patient’s medical chart without oversight
But without getting another set of human eyes on the feedback, I still wouldn’t feel comfortable letting it update a patient’s chart, or using its assessments as formal scores or grades; we know it can make mistakes. So for scored submissions, I’m not sure how much time this tool would really save, since facilitators would still need to carefully review both the original submission and ChatGPT’s output.
ChatGPT struggled to move from diagnosis to specific treatment plan
With the inputs I provided, ChatGPT wasn’t able to turn that diagnosis into a meaningful treatment plan: specific feedback for improvement that learners could take and act on. Facilitators and educators still have a valuable role to play in coaching learners up.
Facilitators might be able to improve ChatGPT’s effectiveness here by expanding its knowledge base around the specific learning topic and/or pedagogical principles, but again, that would require an up-front time investment that facilitators may not be able to make.
Eventually, custom GPTs, trained on a specific knowledge base, will likely address this limitation, but for now these are inaccessible to most facilitators and educators.
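In the meantime, for facilitators comfortable with a little scripting, here is a rough sketch of what that up-front investment could look like: folding the rubric, some coaching notes, and an exemplar of your own feedback into the prompt so the model has something concrete to imitate. Everything here (the placeholder variables, the model name, the prompt framing) is an illustrative assumption, not a tested recipe.

```python
# Sketch of "expanding the knowledge base" through the prompt itself:
# pedagogical guidance and an exemplar of facilitator-written feedback are
# supplied as context. All of the placeholder variables are hypothetical.
from openai import OpenAI

client = OpenAI()

RUBRIC_TEXT = """<your four-point rubric>"""
COACHING_NOTES = """<notes on your coaching moves, e.g. ask follow-up
questions that push the learner back into the text rather than
restating the rubric criteria>"""
EXEMPLAR_FEEDBACK = """<feedback you wrote for a similar submission,
including the follow-up questions you asked>"""
WORK_SAMPLE = """<the learner's poetry analysis>"""

messages = [
    {
        "role": "system",
        "content": (
            "You are helping a facilitator coach a learner on a poetry "
            "analysis. Use the rubric, the coaching notes, and the exemplar "
            "feedback below as your guide.\n\n"
            f"RUBRIC:\n{RUBRIC_TEXT}\n\n"
            f"COACHING NOTES:\n{COACHING_NOTES}\n\n"
            f"EXEMPLAR FEEDBACK:\n{EXEMPLAR_FEEDBACK}"
        ),
    },
    {
        "role": "user",
        "content": (
            "Assess this submission against the rubric, then write feedback "
            "in the style of the exemplar, including two or three follow-up "
            "questions the learner could work on next:\n\n" + WORK_SAMPLE
        ),
    },
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```

Even with this kind of scaffolding, the output would still need a facilitator’s review before it reaches a learner.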
Further learning
To explore some other use cases of ChatGPT in L&D, check out Dr. Luke Hobson’s video “15 Ways to Use ChatGPT as an Instructional Designer, Instructor, and Teacher,” in which he explores strategies like using ChatGPT to write learning objectives, generate course outlines, write scripts, and more.