At UTP, I was tasked with creating the interfaces and workflow for a tool designed by our Technology team to facilitate the review of student assignments by instructors of online courses. Ideally, this would free instructors from tedious tasks (reviewing many short- and mid-length papers and written assignments that often display similar error patterns) and let them use that time for higher-value work, such as one-on-one sessions with students or dedicated sessions to reinforce topics the class might be struggling with.
Initially, the generator was created for ESL teachers to use, but we also wanted it to be scalable to 100+ courses.
Despite that pretty massive ask, the initial desired feature list for the generator was fairly simple:
My team had already designed a product flowchart for the generator - my job was to turn it into clean interfaces and suggest extra improvements.
In general, it allows course instructors to generate written comments for students' graded assignments. It uses GPT 3.5 to parse the assignment we feed it and then writes a short paragraph of feedback responding to what the student wrote. This usually contains a supportive or motivational message for the student and a list of recommendations to amend the errors the generator detected. It detects errors based on a shortlist of common mistakes we found in the ESL courses. At the time, error detection was not up to standard, so we still relied on instructors to go over the text and add any errors the generator missed.
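To make that generation step concrete, here is a minimal sketch of the idea: one prompted call to the model, with the assignment text and the error shortlist folded into the prompt. This is illustrative only, not the actual implementation; the prompt wording, the COMMON_ESL_ERRORS list, and the function name are all assumptions.

```python
# Minimal sketch of the generation step (illustrative, not the production code).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical shortlist of common ESL error patterns the prompt asks the model to check.
COMMON_ESL_ERRORS = [
    "subject-verb agreement",
    "article usage (a/an/the)",
    "verb tense consistency",
    "preposition choice",
]

def generate_feedback(assignment_text: str) -> str:
    """Return a short, supportive feedback paragraph for one student assignment."""
    prompt = (
        "You are an ESL course instructor. Read the student assignment below and write "
        "a short paragraph of feedback: start with a supportive or motivational message, "
        "then list concrete recommendations for the errors you find.\n"
        f"Focus on these common error types: {', '.join(COMMON_ESL_ERRORS)}.\n\n"
        f"Assignment:\n{assignment_text}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    return response.choices[0].message.content
```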
Once the program generates said feedback, the instructor can copy and paste it into their LMS' assignment review page (which allows comments to be added when entering a grade), or download it and send it to the student directly. Since the platform was a basic PoC on its way to becoming an MVP, it was not yet directly connected to the LMS, so this process had to be manual.
I received some sketches of how my team wanted the generator to integrate with the LMS, but since this was a basic PoC that would be tested outside the LMS, I redesigned the prototype slightly to make fast development and future integration easier. It was also important to make sure the instructors who used it understood this was not an official part of their workflow, at least not yet. Speed was also of the essence, so I skipped a lot of unnecessary detail and kept the general look fairly barebones.
After the initial deployment of the feedback generator, ESL instructors observed a ~33% reduction in the total time spent reviewing assignments per class (which has a maximum of 60 students). Since the course load of some instructors can reach up to 5 classes per semester, the generator can offer significant time savings for them. Additionally, students reported an increase in the closeness and trust they felt with their course instructors, possibly because the comments they received were more personalized. This likely translates into a better perception of their relationship with the professor, though we didn't explore this topic in much depth.
As for scalability to other courses, we managed to make the generator usable for one humanities course as well. However, we ran into problems with the results and realized that catering to very different courses would probably require a different implementation approach: one that would let us control the input prompt with much more finesse and, ideally, fill in or contextualize parts of that prompt with course-specific material (a rough sketch of that idea follows below). This is, in part, how we started work on Gradient.
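To illustrate what "course-specific prompt control" could look like, here is a hedged sketch: a shared feedback template whose slots (instructor role, focus areas, rubric) are filled from per-course configuration instead of being hard-coded for ESL. The course IDs, fields, and function name are hypothetical.

```python
# Hypothetical per-course prompt configuration (illustrative only).
COURSE_PROMPTS = {
    "esl_writing": {
        "role": "an ESL writing instructor",
        "focus": "grammar, vocabulary, and sentence structure",
        "rubric": "clarity, correctness, task completion",
    },
    "intro_humanities": {
        "role": "a humanities instructor",
        "focus": "argument structure, use of sources, and critical analysis",
        "rubric": "thesis clarity, evidence, originality",
    },
}

def build_prompt(course_id: str, assignment_text: str) -> str:
    """Fill the shared feedback template with course-specific material."""
    cfg = COURSE_PROMPTS[course_id]
    return (
        f"You are {cfg['role']}. Review the assignment below, focusing on {cfg['focus']}. "
        f"Grade against this rubric: {cfg['rubric']}. Write a supportive paragraph of "
        "feedback followed by concrete recommendations.\n\n"
        f"Assignment:\n{assignment_text}"
    )
```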