Using AI to Revise Instructions
When I first started at my last institution, our pre-designed courses included an assignment (the second project) called a Comparative Rhetorical Analysis (CRA).
The first project was an annotated bibliography (AB) in which students gathered interdisciplinary articles on their topic; for the CRA, students wrote using sources they had already selected and read for that first assignment. On paper, it was a beautiful assignment sequence with lots of scaffolding to help students get from AB to CRA.
It didn’t always work out that way, though. In fact, most of the time, students didn’t complete a comparative rhetorical analysis. They completed a comparative content analysis, no matter how many rhetorical analysis scaffolding activities they were given.
The class ran only 7.5 weeks, which added to the chaos of trying to teach a complicated concept, so I quickly received permission to have my students create a persuasive digital poster instead (I’m a big fan of multimodal assignments).
The flop of that assignment has haunted me, though, so I wanted to see how AI might have helped me back in 2017.
This is intended to be a “do it with me” piece where I’ll walk you through how I used an AI writing tool to help me revise an assignment instruction sheet with a specific focus on the issues students had with the project.
I happen to be lucky enough to have access to the new AI writing tools in the PowerNotes platform, so that’s what I used, but with some extra prompting you can do the same in ChatGPT.
If you’d like to try out PowerNotes’ AI features for yourself, reach out here.
TL;DR: You need to feed the expertise, the context, and the EXACT problem to the AI to get what you want. And don’t forget: as the expertise you’re looking for changes, you need to tell the AI to put on a different hat.
Getting Started:
To start, I copied and pasted the assignment instructions into a ‘freeform note’ in PowerNotes and prompted it to:
Act as a rhetoric and writing teacher, and revise the instructions in the note to help with the issue of students not comparing the rhetoric used in the two articles; your goal is to revise for student understanding of the assignment.
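If you’re following along in ChatGPT instead, here’s a minimal sketch of that first prompt as an API call. It assumes the OpenAI Python SDK; the model name and the file holding the assignment sheet are my own stand-ins, and PowerNotes handles all of this behind the scenes.

```python
# A minimal sketch, not PowerNotes' actual implementation. Assumes the
# OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY
# environment variable; "cra_instructions.txt" is a hypothetical file
# holding the assignment instruction sheet.
from openai import OpenAI

client = OpenAI()

with open("cra_instructions.txt") as f:
    assignment_instructions = f.read()

messages = [
    # The ethos: who the AI should "be" while it edits.
    {"role": "system", "content": "Act as a rhetoric and writing teacher."},
    # The problem and the goal, plus the text to revise.
    {
        "role": "user",
        "content": (
            "Revise the instructions below to help with the issue of "
            "students not comparing the rhetoric used in the two articles; "
            "your goal is to revise for student understanding of the "
            "assignment.\n\n" + assignment_instructions
        ),
    },
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
first_rewrite = response.choices[0].message.content
print(first_rewrite)
```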
The first output largely kept my language the same, which is a breath of fresh air compared to many editors who prefer their own style and tone. The changes fell into two categories:
- Language additions that refocused students on the current assignment after an example was given.
- Cuts that made the instructions more concise.
I’ve included the original in the first column and the changes in the second (and noted whether each change is type 1 or type 2).
While I appreciated the re-centering and the added concision, those changes likely weren’t going to have a huge impact on my issue. I responded with:
This is better, but students often focus on comparing the ideas in the articles instead of focusing on comparing the rhetorical approaches. Rewrite the previous prompt to help students focus more on comparing rhetorical approaches.
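If you’re running this through the API, that kind of follow-up works because the whole conversation goes back with each request; continuing the sketch from above:

```python
# Continuing the same conversation: the model's first rewrite goes back
# into the message list so it can revise its own output.
messages.append({"role": "assistant", "content": first_rewrite})
messages.append({
    "role": "user",
    "content": (
        "This is better, but students often focus on comparing the ideas "
        "in the articles instead of focusing on comparing the rhetorical "
        "approaches. Rewrite the previous prompt to help students focus "
        "more on comparing rhetorical approaches."
    ),
})

response = client.chat.completions.create(model="gpt-4o", messages=messages)
second_rewrite = response.choices[0].message.content
```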
That second round actually produced some changes that could be helpful, and it continued to focus on the same section. Interestingly, it made the language less concise, which I noted below, but it did add a sentence that pointed out the focus on rhetorical approaches. It also didn’t include the focusing language from earlier, instead just instructing students on the type of articles to choose.
Another instance in the rewrite that was better with this second round was in the goals of analyzing rhetorical approaches.
Down the Rabbit Hole
Right now, there are a lot of conversations happening about prompt literacy and AI literacy. One of the conversations I recently had was about how specific is ‘specific enough’ in terms of defining the rhetorical situation for the AI. That’s part of what I’ve been doing with these “do-it-with-me” style posts lately.
It only took me two rounds, and if I had been more specific about what students were doing versus what they were supposed to be doing, I would have gotten there faster. But how much did the AI need to know about my ethos (my role and the authority from which I write)?
To find out, I revised my prompt to:
In the final project, students often focus on comparing the ideas in the articles instead of focusing on comparing the rhetorical approaches. Rewrite the instructions in this note to help students focus more on comparing rhetorical approaches.
Interestingly, the first thing it did this time, which it hadn’t done any other time, was add to my title:
It also rewrote quite a bit more of the instructions, cutting more than was useful. For example, in the table below, the AI removed a pretty crucial piece of the assignment: students must choose two articles from two different disciplines. That loss comes down to a lack of context about who to “be.”
Overall, it wasn’t an improvement; removing that much important information would only have confused my students further.
I then tried to replace the ethos with the context. I put in:
Use the instructions in the note that are for a project in a freshman writing course. In the final project, students often focus on comparing the ideas in the articles instead of focusing on comparing the rhetorical approaches. Rewrite the instructions in the note to help students focus more on comparing rhetorical approaches.
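For anyone reproducing this comparison via the API, the three variants differ only in how the opening message frames the task (same sketch assumptions as before):

```python
# The three framings I compared; the task itself never changes.
task = (
    "Students often focus on comparing the ideas in the articles instead "
    "of focusing on comparing the rhetorical approaches. Rewrite the "
    "instructions below to help students focus more on comparing "
    "rhetorical approaches.\n\n" + assignment_instructions
)

with_ethos = [
    {"role": "system", "content": "Act as a rhetoric and writing teacher."},
    {"role": "user", "content": task},
]
no_ethos = [{"role": "user", "content": task}]
context_only = [{
    "role": "user",
    "content": "These instructions are for a project in a freshman "
               "writing course. " + task,
}]
```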
Replacing the ethos with the context of the class was a bit better than removing the ethos entirely, but it still wasn’t as good as the version where I had supplied an ethos.
Clearly, it needs an ethos. That isn’t surprising given my experience so far and what I’ve been reading, but I think testing the impact is important. I also wanted to see how narrowly I needed to focus that ethos (how the AI understood different roles). To my original session (where I had already given it the ethos of a rhet/comp specialist), I added the prompt:
Please provide a list of scaffolding assignments for this project.
Yes, I can’t help but be polite; I say please to my Google Assistant, too. As expected, it gave me a list of generic activities like:
Clearly, it still thinks it’s a specialist in rhet/comp, but a rather boring and inefficient one. So I gave it another ethos and asked for more detail:
Update that scaffolding list as if you were an instructional designer and provide more detailed scaffolding activities.
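In an API session, that hat-switch is just another turn in the conversation, with the new role named right in the request (same assumptions as the sketches above):

```python
# Switching hats mid-conversation: the new ethos rides in the follow-up
# turn and overrides the rhet/comp role for this request.
messages.append({
    "role": "user",
    "content": (
        "Update that scaffolding list as if you were an instructional "
        "designer and provide more detailed scaffolding activities."
    ),
})

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```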
The result was much better. Not only did it give me nine options instead of seven, but the activities themselves were stronger.
The whole list was actually pretty good for a starting point before my first project (the AB), and the ones focused on Rhetorical Analysis were much more detailed and got at the pain points students have when doing this kind of work.
So basically, as an Instructional Designer (ID), the AI provided actual knowledge-building activities instead of generic references to analysis. Not gonna lie, I’m a little disappointed that it didn’t “see” that in the rhet/comp ethos.
What I Learned:
Focusing on what I wanted the specific output to be reaffirmed the need to give the AI an area of expertise to pull from, a context to work in, and a goal to achieve. As your “hat” changes, though, you need to hand it that new expertise, too.
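If you want to bottle that lesson up for reuse, a tiny helper like this keeps all three pieces explicit; it’s my own sketch, not a PowerNotes or OpenAI feature:

```python
# A reusable shape for the takeaway above: every request names an
# expertise (the ethos), a context, and a goal. Purely illustrative.
def build_revision_messages(expertise: str, context: str, goal: str,
                            text: str) -> list[dict]:
    """Package ethos, context, and goal into a chat-message list."""
    return [
        {"role": "system", "content": f"Act as {expertise}."},
        {"role": "user",
         "content": f"Context: {context}\nGoal: {goal}\n\n{text}"},
    ]

# Swapping hats is just swapping the expertise argument:
teacher = build_revision_messages(
    "a rhetoric and writing teacher",
    "a freshman writing course",
    "help students compare rhetorical approaches, not ideas",
    "<assignment instructions here>",
)
designer = build_revision_messages(
    "an instructional designer",
    "a freshman writing course",
    "provide detailed scaffolding activities for this project",
    "<assignment instructions here>",
)
```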