Transcript
Angela Kelly Robeck: I'm Angela Kelly Robeck, host of The Empowered Principal Podcast, a part of the Education Podcast Network. Just like the show you're listening to now, shows on the network are individually owned, and opinions expressed may not reflect others. Find other education podcasts at EduPodcastNetwork.com.
Will: Welcome, folks, to another episode of the High Tech Podcast. This is your host, William Blingworth, joined as always by my good friend, my colleague, my gaming pal, my Lord of the Rings watching friend, my Star Wars nerd, Joshua Swartz, my other co-host. How you doing, my friend?
Josh: That was such an intro. It was nice, you know.
Will: It was good. Just build me up. I don't know, I just started.
Josh: Yeah, yeah, yeah. All those things — right back at you, you know. Thanks. Yeah, exactly. Yeah. Anytime. Yeah. What's up, people? Welcome to High Tech Podcast. It's another week.
Will: Episode 133. We have a guest on here. I'm very excited.
Josh: We have a guest. How exciting.
Will: We don't always have guests — they're a rare occurrence. Today we have Priten Shah. Thank you, Priten, for being on the podcast.
Priten: Thank you. Thanks so much for having me.
Will: So Priten, what is it you do? Like, what is your purpose right now?
Priten: Yeah, so I do a lot of things. But basically, for the past several years I've been doing work in education and technology. The specific work we've been doing lately is kind of focused on the intersection of AI and education. The work is a combination of research — I wrote a book on the topic that came out last year with Jossey-Bass — and then our startup, Pedagogy Cloud, is doing two things. On the one hand, we're building AI tools for schools and colleges, and on the other hand, we are providing that training, that professional development to educators who want to be AI literate.
Josh: I love that. I like to start kind of getting into some questions here, asking like, what prompted your interest in artificial intelligence, particularly in relation to education? And was there a specific moment or event that made you realize the significant impact AI could have on the educational landscape?
Priten: Yeah, so I studied at the Harvard Graduate School of Education. One of the things I was doing during grad school was building lots of products — engineering products, mostly. And I started to notice that it was hard to build them in ways that were responsible. This was higher education, and even there it was not an amazing environment to try to do this responsibly. We kept running into things like data privacy issues and concerns about whether our tools were accessible — concerns I kept hitting as an engineer. And I imagined that lots of other folks in more commercial, or at least less education-centric, environments were facing the same things, probably in more dire situations.
And so I wrote a book about it. When ChatGPT came out in late 2022, the book was very timely, and the reason it was timely is that ChatGPT raised the stakes. A lot of the things I was talking about as educational technology problems suddenly became very big new ethical problems. And that's really where our interest lay, and where the startup started to form — OK, if we're going to help folks deal with this, what is the best way to do that? And I think the answer we came up with is training, professional development, and getting folks AI literate.
Josh: I love that. So I know that one of the things when you're kind of working with these educators — what are some of the key challenges that educators face when integrating AI technologies in their teaching methods?
Priten: Yeah, so I think one of the biggest challenges — and it honestly hasn't changed with AI — is that most educators are not AI literate. What I mean is that they haven't been learning about AI, and I don't think it was their job to have learned about it. But because they haven't, they're now being thrust into a world where they have to make lots of really big decisions: what AI tools to use with their students, how to handle students using AI tools outside the classroom. They're being put in a position where they think they should be experts when they don't yet have the foundational understanding. And so what we see is a lot of fear, confusion, and misunderstanding.
And then maybe what I think is most concerning — early on it was like, "Oh yeah, this thing can be a personal tutor." A lot of school districts I know ended up buying access to OpenAI or buying access to these products that were built on top of ChatGPT. And a lot of the promises there didn't really pan out in the way that lots of schools wanted. So I think that's also damaged the trust and the comfort level that a lot of teachers have with AI, because they've already been slightly burned by some early hype.
Josh: I think maybe an interesting follow-up is — a lot of the debate seems to center around whether we should use it at all. But it seems like the more valuable question is how we use it properly. So, in your opinion, what are some strategies to effectively integrate AI in educational settings while maintaining ethical standards?
Priten: Yeah, so I think the answer to this is different depending on the situation you're in, the age group of your students, the particular topic, and even the kind of school that it is. Like, the culture of the school matters a lot, the socioeconomic status matters a lot. So I think the first thing is for us to stop looking for one universal answer to how we ought to be using the technology.
And I think the second thing is specific to teachers. One of the most effective things we've found is having teachers learn AI for their own work first. The discussion of AI tends to be like, how do I use it with my students, how do I have it in my classroom? But most teachers have done very little exploration with AI for themselves outside the classroom. And there's a lot of value to the technology specifically in terms of teacher workflow. Being able to go to teachers and show them — this can help you differentiate and plan your lessons. Not just "here's a lesson plan from ChatGPT," right? That's the quick and dirty approach. But there are really intentional ways to use AI for differentiation, to use AI for drafting your rubrics, and things like that, that teachers might find are really meaningful to them.
And then once they learn to use it for their own personal use, I think it's really easy for teachers — like, teachers want to serve their students best. And so immediately, once you're seeing how it impacts your workflow, how you're thinking better, how it's helping you approach something in a better way, then you're like, OK, maybe my students can use this in a way that's productive and not just harmful.
Will: Yeah. And that comes down to modeling. I think one of the things many instructors are scared of is that students are just going to cheat. But what they don't understand is that if they use it well, they'll demonstrate to the students how to use it well. Like calculators — yes, some students are going to use the calculator to make a game on it and turn it into a Game Boy. Other students are just going to use the calculator to do the math for the assignment. Are we going to ban calculators because kids in the 90s got bored and made video games on their TI-83s? No, we're not going to ban calculators.
I think there's this natural — I can't remember the name of it at the moment — adoption curve, where people range from early adopters to laggards. The folks who are dealing right now with the shockwave of ChatGPT landing in their laps just aren't ready to know what it can do. They're just scared of it. And it's tough. I get it — there are parts of it I'm scared of. But I know for myself, if I'm stuck on an outline and trying to get something into a visual format, I can use ChatGPT for some things and Claude for others. They have their strengths and their weaknesses. As Josh and I always say, it's always going to need that human element — that's what I've been learning.
Priten: Yeah. And I think you're right about the modeling piece. This is where I've become a lot more cautious in my thinking lately, just because the initial reaction was let's have teachers model good AI usage and then have students adopt it. I do worry that when we are bringing it into the classroom, the discussion is too much on the technology and not enough on the underlying pedagogy. And so when I think about having students use it, I want to be even more careful now than I was before.
And then, to your point on modeling — there are lots of teachers who don't have to integrate AI into their classrooms to still be doing their jobs well. We should also leave space for teachers to not — not every teacher has to integrate AI into their classroom to prove that they're good at what they do. Really letting teachers be self-selective in terms of how they incorporate it, I think, is important.
Josh: Yeah. I'm gonna let you go ahead, Will.
Will: Well, I was going to ask you — along those lines, let's say AI is not your thing as a teacher. What would you say is the most effective old-school or traditional way to still connect with your students and get through to them, given that they have access to AI at home?
Priten: Sure, yeah. So there are a couple of things. One of them is something we don't want to hear, which is that homework in the traditional sense is probably over. Here's where I challenge people: think about your most tech-savvy student, and then ask yourself, is there any homework assignment I can come up with that that student couldn't use ChatGPT or Claude or any of these tools to cheat on? The honest answer is that most students know how to cheat on your assignments. Traditional assignments are a lost cause if you send them home with students — almost any assignment you can come up with, AI can do, and if it can't today, it likely will by the end of the year. There are very few homework assignments I can envision that these tools can't do.
The second is that this doesn't mean you have to suddenly become an expert in AI and adopt it in your classroom. You can return to lots of things we already know work in education. You can have debates in your classroom, you can have Socratic dialogues, you can use the case study method where you're cold-calling students.
These are approaches educators have used for hundreds of years — some for thousands — that work for our students and would still work in an AI world. You can still have students go home and prepare however they want for any of these activities, whether it's an in-class role-playing activity, an in-class simulation, or a presentation followed by a question-and-answer period. At least start there. If you want to make your class serve your students better, know that assigning traditional homework and trying to detect AI use on it is a lost cause. That doesn't mean you have to fully become an AI expert — you can go back to doing the things you're really good at. And then, for teachers who aren't ready to think about bringing AI into the classroom itself, there are really good reasons to start incorporating it anyway, specifically around teacher workflow: here's how it saves you time. I think that's the bridge from teacher to student.
Josh & Will: I love that. Priten, I'm curious — within the context of teacher librarians, or just librarians, or media specialists, there seems to be a lot of crossover. There's a huge opportunity for librarians, teacher librarians, and media specialists to really be part of this conversation and to leverage their skills around digital citizenship and media literacy. Are you seeing that, and what is your role in advocating for that crossover, especially when it comes to AI and digital literacy?
Priten: Yeah, so you brought up one of my favorite points, which is that AI literacy and the skills we need our students to have are not entirely brand new. A lot of what you just described — and I'd add the social-emotional learning movement, which has been doing a lot of this work — comes down to teaching responsible technology habits and teaching students how to evaluate sources. Librarians have been teaching this for a long time. The information literacy aspect is probably one of the most critical for AI: just knowing how to make sure you're finding good information.
And so what I think we need is for those voices to be part of the conversation. What I've noticed is that those voices aren't being heard or listened to, and the reason is that the voice of the tech industry is much louder — they're the ones getting all the media attention and all the funding. So the voices of the digital literacy experts are being drowned out. What I hope is that these folks realize their skills are critical here, and I think they should be much more vocal about it.
Josh & Will: Yeah, I love that. I see a lot of overlap, especially with the work that I do in libraries, and the crossover with AI is fascinating, because we were always having these conversations about what counts as trustworthy information and how we evaluate it. But now it's a whole new ballgame, because the tool itself can present information in a really convincing way. And that's a fun thing for a librarian to think about and tackle with the students in a school. So I love that.
Priten: And we need it.
Josh & Will: Yeah, yeah. Just to switch gears a little — what role do you think AI plays in addressing the diverse learning needs of students? Can it be a tool for more personalized education, or are there limitations to what AI can do? There's a lot of talk about the promise of differentiation. What has your experience and research shown?
Priten: Yeah, I think we're a long way from the dream being realized. I think the promise of AI as a tutor in the classroom, or as a teaching assistant in the classroom, is really powerful and I think it can get there. But what we're seeing right now is that a lot of the tools being developed for this purpose are not doing a great job, for a couple of reasons.
One is that the underlying technology hallucinates — it makes up information. That's being improved, but it still has a very real accuracy problem. So the idea that we'd be training our students on information that is unreliable is concerning.
The second, more practical concern I have is something that a lot of the research is showing, which is that a lot of the AI tutoring being done right now is effectively just making students dependent on the tool. They're not really learning. They're just getting the answers from the tool. And that's not really what we want. We want the students to learn. We don't want them to just get through the assignment. The goal of the assignment is not to get through the assignment — the goal of the assignment is to learn the material. So I think the dream is there and I think the technology is getting better. But as of right now, and I think for the next several years, I am not comfortable saying that AI tutors or AI teaching assistants are a good replacement for in-classroom interactions.
Josh & Will: And it goes back to what you said before — I love this idea that you mentioned, which is that for the teacher, there are a lot of benefits. But when we flip it around and look at the learner side, it doesn't always translate the same way. And I think that's a really important distinction that a lot of people miss.
Priten: Yeah, exactly. And I think this is where I've been really encouraging teachers to use it for yourself first. The dream of AI is that it helps us do our sort of operational work faster and better, and it's doing a lot of that for teachers. But the magic of education, the thing that actually gets students to learn, is still that human interaction in the classroom. And so anything that takes away from that human interaction, I think we should be really cautious about.
Josh & Will: That's beautiful. One of the things — just a conversation that Josh and I have had a lot, and it's been in a lot of conversations on this podcast — is what is the responsible way for AI to be used? What are some principles of responsible AI? And in the context of education, what would you say to that?
Priten: Yeah, so I think the biggest principle of responsible AI is transparency. What I mean by this is that we know what the technology is doing. Right now, a lot of the AI tools that are being marketed to schools — we don't know what they're doing with student data, we don't know how they're making their decisions, we don't know whether they're biased in the responses they give to students. And I think that's a really big problem. So the first principle I'd advocate for is we need a lot more transparency from the companies building these tools about what exactly is happening under the hood.
The second thing I'd say is that the decisions about how AI is used in schools should be made by educators and communities, not by tech companies. Right now, a lot of these decisions are being driven by marketing and by the availability of the technology rather than by an intentional pedagogical decision. I think the most responsible thing we can do is slow down and make these decisions intentionally — involve parents, involve students, involve teachers — and make sure that the technology is serving the needs of the community rather than the needs of the tech company that developed it.
Josh & Will: Yeah, I love that. One follow-up question, Will — can I? Go ahead, please. I love this. I think AI governance is kind of what I'm hearing — this idea of governance in a school setting. Is this something that you're seeing as a trend? Are schools creating governance structures or policies specifically for AI? And what would you say is best practice for schools who are looking at starting to create these governance frameworks?
Priten: Yeah, so the short answer is yes. We're definitely seeing more schools develop AI policies, but they're at very different stages. Some schools have banned it, some schools have embraced it, and most are somewhere in between.
What I'd say is best practice — and this is what I try to help schools with — is that it shouldn't be a top-down decision. It shouldn't be that the superintendent or the school board decides and everyone else just follows. It should be a collaborative process that involves teachers, parents, and students where appropriate, and really reflects the values of that community. And the other piece is that any policy needs to be a living document. This technology is changing so fast that any policy you write today is going to be outdated in six months. So building in a mechanism for reviewing and updating these policies is really critical.
Josh & Will: Wonderful, I love it. I just want to follow up on that point. You mentioned transparency, and I think that is the answer to so many things — not just from the vendor perspective, but from the educator perspective. If we're going to use it, we need to model transparency. If you're using AI, tell your students you're using it. If you're using it to help grade, tell them. If you're using it to help plan, tell them. Because that models the behavior we want from them.
Priten: 100%. Yeah, I couldn't agree more. And I think that's where the trust piece comes in. Students are more likely to be honest about their AI use if they see their teachers being honest about theirs. Right now there's a lot of teachers who are using it but aren't telling their students, and there's a lot of students who are using it and aren't telling their teachers. If we just had more open conversations about it, I think a lot of these issues would start to resolve themselves.
Josh & Will: Yeah, it's funny — I use it quite a bit. And what my students have seen, which was unintended, is that it doesn't give a perfect answer. When I make a quiz, or whatever else I'm using it for, sometimes the content that comes back needs to be modified or adjusted. It actually generates a pretty good discussion about why I still need to edit it and what was wrong with the output. It became an organic teaching moment.
Priten: Yeah. And I think that's one of the most powerful things you can do — let students see you struggling with it. Let them see you being like, no, this isn't right, and then let them see you fixing it. Because that teaches them critical thinking. That teaches them, oh, I can't just take this at face value, I need to evaluate it, I need to think about whether this is right or not. And that's one of the most important skills we can teach them in an AI world.
Josh & Will: Beautiful. Well, Priten, I think we're getting close to the end here. But I do want to ask you one more thing — what do you think the next two to three years look like for AI in education? Where do you think we're headed?
Priten: Yeah, so I think the next two to three years are going to be really interesting. I think we're going to see a lot more tools developed specifically for education. Some of them are going to be really good and some of them are going to be really bad. And I think the schools that fare best are going to be the ones that are intentional, slow, and thoughtful about how they adopt these tools.
I think we're also going to see a lot more pressure from the tech industry to integrate AI into every aspect of education. And I think that pressure is going to be really hard to resist for a lot of schools. But I think the schools that do resist and that make thoughtful decisions are going to be the ones that serve their students best in the long run.
The other thing I'd say is I think we're going to see a really interesting conversation about what we value in education. Like, is education about preparing students for jobs? Is it about preparing them to be good citizens? Is it about helping them find their passions? And I think AI is forcing us to have that conversation in a way that we haven't had to in a really long time. And I think that's actually a really good thing.
Will: One of the things that makes me think of — and I know Josh has some thoughts on this too — but as someone who teaches both online and in person, I think the conversation of traditional assessment versus alternative assessment is going to be one of the biggest conversations that I see coming. I don't know, Josh, what are your thoughts on that?
Josh: Yeah, I totally agree. I think the whole idea of, like, what does assessment look like in an AI world? — that is one of the most pressing questions we have. And I think there's a real opportunity to move towards more authentic assessments, more project-based, more portfolio-based, more conversation-based. And I think that's actually better for students anyway, regardless of AI. Like, I think we've been over-reliant on traditional assessments for a long time. And AI might be the push we need to finally make that shift.
Priten: Yeah, I'd agree with that.
Will: Beautiful. Well, Priten, thank you so much for joining us on the High Tech Podcast. Is there one thing you'd like to leave our listeners with? And where can people find you and stay in touch?
Priten: Yeah. So the one thing I'd say is just start. Like, if you haven't started playing with AI yet, just open ChatGPT and start a conversation. That's the best thing you can do. And don't be afraid of it. Like, it's a tool. It's not going to take over your job. But you should be familiar with it. In terms of finding me, you can find me at Pedagogy.Cloud, which is our website for our consulting work. My book is AI and the Future of Education, and it's available wherever books are sold. And then I'm on most social media as Priten Shah.
Will: Love it. Thank you so much.
Josh: Thanks, Priten.
Priten: Thank you both.
Will: And that's another episode, folks. Thank you so much for joining us. Please subscribe, rate, and review the High Tech Podcast wherever you listen to podcasts. We'll see you next week.