After facilitating a Classroom/Course Design Conversation about AI and higher education back in June, a few things have occurred to me. I promise there are things other than AI I’d like to write about, but AI is the thing that keeps coming up for me right now. (Also yes, the June thoughts are coming out in July; it’s summer, that’s how fast I think right now.)
You are the product
If you don’t own the software you rely on, you are putting yourself at gigantic and unmanageable risk. ChatGPT is not guaranteed to stick around forever; Google can kill Bard at a moment’s notice, and if you are relying on those tools, everything you’ve built on them will break. This is true in a lot of areas around technology, but I think it’s really coming back as a problem right now.
We used to say that if you’re not paying for the product, you are the product; now, I think most software companies are looking to have you pay to be the product. Institutional licensing maybe makes this a little better, but I still think any adoption of any software is a concern for viable teaching.
At a minimum, it’s important to ensure that a teaching plan doesn’t depend on a particular piece of software, but rather on a tool that can do a task. Better yet, where we can, instructors and institutions should be looking at open software (and open educational resources) as the tools we bring into the classroom. And yes, that does mean we need to put more resources into supporting and updating (and building) our tools.
This last year has been a good reminder for me that when you work with for-profit technology platforms (even ones with an altruistic, educational bent), you are at the mercy of the profit motive. Your LMS could be better, but there’s no profit in improving it. I think AI makes that problem worse, because there’s a lot of faith that “the system is learning and improving” when that’s not really what’s happening at all.
Student-centred discussion
The majority of the discussion I’ve seen around AI is very instructor- and institution-centred. We’ve talked about Academic Integrity, using AI to improve your teaching workflow, and how students might use AI in a course, but I don’t think we’ve really talked about how students are thinking about AI, how they are actually using it, and how they could benefit from it.
In general, it seems to me that students aren’t really on board with higher education. Looking at reddit discussions and other anonymous places, the whole idea of higher ed seems to be poorly understood, including its purpose, and that trickles down to any of the actual choices that instructors make in the classroom. Obviously that’s cherry-picking from the people most inclined to complain, but it still suggests that whatever changes we’re considering, we’re already behind in having students on board with us.
I think a lot more work needs to be done to understand where students are at with AI (and higher ed in general), what they want, and how we can make reasonable space for that in our classrooms. Internally we’ve been discussing how much more data and information we need when it comes to our learning technology, and it feels to me like there is a student-shaped hole in the information that comes to us. At the same time, having been reading student feedback on a new learning tool, it’s clear that students don’t always have the experience and perspective necessary to give really helpful information, but that doesn’t mean we shouldn’t try. It seems to me that it simply means we need to adjust how we scaffold their input, the same way we do for assessment.
There is probably a lot more change coming
One of the questions raised in the session I was moderating was (roughly): if AI is going to replace all low-level work, why are we wasting our time teaching low-level things? Responses varied, but the thing that stuck out to me was that there’s a lot of focus right now on LLMs as the only possible form of AI and, beyond that, on ChatGPT as the only viable LLM.
I don’t think either of those things is true. I do think in the next few months we’ll see successful alternative LLMs come out to the public (beyond whatever it is that Google and Microsoft are doing). I don’t know that they’ll be that useful, or that they’ll grab public attention and hype the way ChatGPT has, but they will be somewhat viable. I also think we’ll see a lot more AI tools come to the fore, and I think we’ll see them go beyond a lot of the current “We added a GPT call to our tool to give you some extra text which might or might not be useful (or even correct).”
I also think it’s worth keeping in mind that GPT / ChatGPT is going to keep getting better, at least at sounding like a reasonable writer. I recently saw someone insisting that it was impossible for ChatGPT to ever improve, and while I think most of that argument was aimed towards the “implement AI everywhere” folks, I do think that the quality of the writing will improve. That’s not to say that an LLM is going to develop the ability to do math, or generate critical or creative ideas in the text it writes, but the text is going to get better.
To that end, on the academic front, any approach that suggests we can rely on tools, TAs, or even ourselves to detect AI-based writing is apt to fail eventually. As academics, our best response is to continue to improve our pedagogy and course design. The best thing we can do for our students is to be upfront, discuss what’s going on and why we’re assessing them the way we are, make the work matter, give more lower-stakes assessments, and assess them much more on the progress and process of their work rather than on the outcomes alone.
Comments
4 Responses to “Thinking more about AI – June 2023 edition”
This is really important – thanks for taking the time to formalize your thinking on this. You’ve sparked a few ideas, but maybe the most important (or at least the one that we might be able to make the biggest impact with) is the shift back toward open source tools. We used to build our own tools, then we hosted other people’s tools, and now we rent access to other people’s cloud services. We’re so far removed from being able to have any meaningful control over our learning environment – beyond deciding which checkboxes to click in an admin interface. We’ve been talking about ecosystems of tools, and I think it’s time to lean into that and the “small pieces loosely joined” model. I think we have a chance of showing what that might look like in our context, perhaps with a prototype service to kick the tires…
Thanks to the commenter above (Hey D’Arcy) I read your excellent post from early in the year explaining AI. This one resonates as well re: tools. When I taught Digital storytelling we never prescribed software; if a student had access to Photoshop, fine, but otherwise we’d suggest Gimp or web-based tools. It was freeing not to teach software mechanics and to focus more on the ideas they were trying to create. Plus these tools are churning way too rapidly to be worth teaching the tools themselves.
I am completely doubtful AI detection will ever be reliable: these tools are using statistical means to classify something statistically generated that has no absolute markers to identify it. It’s been interesting to hear about students taking ChatGPT text and rinsing it through another AI summarizer to fool detection.
If Fool Detection were reliable, I’d never be able to publish anything…
When I was teaching undergraduate programming, one of the things I delighted in was removing the tools from the curriculum. Giving the students time to figure out how they liked to work seemed really rewarding for all of us, and although I didn’t really measure it, it seems like it helped the students gain confidence.
I’m not sure, but my understanding is that there’s no reliable mathematical way to detect AI-generated text. Since AI-generated text tends toward the “median” of the text in the training data, more median writing by a human will look more AI-like, and a lot of academic writing is expected to be median. The articles about bias in AI detection that came out this week seem to agree:
https://arstechnica.com/information-technology/2023/07/why-ai-detectors-think-the-us-constitution-was-written-by-ai/
https://arxiv.org/abs/2304.02819
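To make that intuition a bit more concrete, here’s a toy sketch of the perplexity-style scoring these detectors lean on. This is my own illustration in Python, not any real detector’s code; the reference corpus, the tiny unigram model, and both sample sentences are made up for the example. The point is that writing sitting close to the statistical centre of the reference text scores low and would get flagged as “AI”, regardless of who wrote it:

```python
# Toy illustration only: a tiny unigram model stands in for the LLM a real
# detector would use. Text is scored by how "expected" each word is; low
# perplexity (very median writing) is what gets labelled "AI".
import math
from collections import Counter

def train_unigram(corpus: str):
    """Estimate word probabilities from a reference corpus (add-one smoothing)."""
    counts = Counter(corpus.lower().split())
    total, vocab = sum(counts.values()), len(counts)
    probs = {w: (c + 1) / (total + vocab) for w, c in counts.items()}
    unk = 1 / (total + vocab)  # probability assigned to unseen words
    return probs, unk

def perplexity(text: str, probs, unk) -> float:
    """Average 'surprise' per word; lower means more median, more 'AI-like'."""
    words = text.lower().split()
    log_prob = sum(math.log(probs.get(w, unk)) for w in words)
    return math.exp(-log_prob / max(len(words), 1))

reference = "the students will submit the essay and the essay will be graded"
probs, unk = train_unigram(reference)

print(perplexity("the essay will be graded", probs, unk))  # low: formulaic phrasing
print(perplexity("my llama ate the rubric", probs, unk))   # high: distinctive phrasing
```

Swap a large language model in for the unigram counts and you have roughly the shape of a real detector, which is also roughly why careful, formulaic academic prose keeps getting misclassified.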
Looking at some of the stress we’ve had around learning tools in the last few years, that misalignment between our priorities and the tool makers’ priorities seems to be a major contributor. And I’m not that sure we have that good an understanding of our own priorities: with thousands of faculty members and tens of thousands of students, there are a lot of different ways people want to use their tools.
It will be a real switch to go back to maintaining and creating our own tools though.