Transcript
We all went into education because we want what's best for students — not necessarily what's best for us. Think about that one more time: best for students, not necessarily what's best for us. So oftentimes we put in the hours, we put in the work, and we put off making sure things are efficient for us, as long as they don't hinder student progress.
Hey, this is Dr. Jones, host of the podcast Seeing to Lead — a podcast designed to help educators and leaders with the issues they face every day. This week I talked to Priten Shaw. He's the author of AI and the Future of Education: Teaching in the Age of Artificial Intelligence.
Now back to what I said about what's best for students, not necessarily what's best for us. We talk about AI now being part of society and coming onto the scene in education. Well, here's a news flash for you that Priten explained really well: AI isn't new. It's been with us. It's just now more prolific. The good thing, though, is that AI can be used to lighten the burden on teachers so that they can show up in a better state to provide improved environments for students — social emotional awareness and wellbeing. You see, AI can help teachers plan out and achieve goals they originally had, to make sure they are being realistic and not burning themselves out.
But there's always a downside, right? So here's the downside. We're in an age where we are offloading our skills to AI bots. We do it all the time with our smartphones, we do it all the time with our computers. So how do we help students be successful in this age, except to teach them the human side of skills — except to teach them to use AI in service of their skills instead of in lieu of their skills?
Look, you're going to love this conversation I have with Priten, because he actually addresses that in his book and in his discussion about the ethical side of AI. Because when it comes to it, it's about students — the original reason we got into education. So AI is here. How can we use it in education to promote the growth and success of students, instead of trying to lock it away, encouraging them to break the rules, making more work for ourselves, and falling short on that original goal of doing what's best for students?
So look, after you gain all the value I know you're going to gain out of this show, do me a favor — hit subscribe, rate and review the show. It does so much for it, I'd be really appreciative. And then share it, especially on social media, with your biggest takeaway from this conversation with Priten Shaw about AI and the future of education on Seeing to Lead.
I find the argument that a lot of education is outdated to be cheap. It often comes from folks who think the role of education is very career oriented, who have a very particular view of what ought to be the aims of education. But I'm sitting here pretty reassured that the kinds of things we want for our students are still the same, the kinds of topics we want to teach them ought to be the same, the kinds of people we want them to be are still the same, and that the parts that might be getting outdated are our tools — and that as technology evolves and we have more tools at our disposal, we ought to update those to best serve those same aims.
Dr. Chris Jones here, and welcome to Seeing to Lead — a show designed to help leaders increase their ability to effectively support, engage, and empower their staff through intentional practices, so that they create an environment where everyone reaches their greatest level of success. On Seeing to Lead, communication rules the day as we hear voices from both teachers and leaders in an effort to examine perspectives, highlight misunderstandings, and provide steps to ultimately bridge the gap between what teachers need and what leaders provide, through thoughtful dialogue. This show is about amplifying voices, creating understanding, and providing information to help everyone continually improve. I want to personally thank you for taking the time. Now let's get to getting better.
Priten Shaw is an education entrepreneur and the author of AI and the Future of Education: Teaching in the Age of Artificial Intelligence. He is the founder of Pedagogy.cloud, an educational consulting firm that supports educators in K–12 schools, higher education, and the nonprofit sector to adapt to the increasing capabilities of AI. Priten is also the founder of the civic-focused nonprofit United for Social Change. He has a bachelor's in philosophy and a master's in education policy from Harvard University.
Personally, I'm excited to welcome Priten to Seeing to Lead today because the topic of AI is huge, and his recent book covers in depth what I consider to be the biggest conversation we should be having around AI — how can we use it in education to promote the growth and success of students? And folks, if you haven't picked up a copy of this book yet, you really have to. I'm really happy I did. I thoroughly enjoyed it, and it really provides the nuts and bolts that we're always looking for to help teachers be more successful with students in the classroom. So all of that said, Priten, welcome to the show.
Priten: Thank you for having me.
Dr. Jones: You know, in your book, fairly early on, you talk about the idea of education being outdated. Well, that's kind of been an argument going back and forth about the industrial model versus what students need now. But now with AI, some say that argument might even be more important to be having. What are your thoughts around that, with AI now being where it's at?
Priten: Yeah, this is one where I think I side with the philosophers more than the technologists. And I find the argument that a lot of education is outdated to be cheap. It often comes from folks who think the role of education is very career oriented, who have a very particular view of what ought to be the aims of education. But I'm sitting here pretty reassured that the kinds of things we want for our students are still the same, the kinds of topics we want to teach them ought to be the same, the kinds of people we want them to be are still the same, and that the parts that might be getting outdated are our tools — and that as technology evolves and we have more tools at our disposal, we ought to update those to best serve those same aims.
But when people say education is outdated, that's not what they mean. They're trying to tell you that the content you're teaching is no longer relevant, or that we should refocus our schools to be little pods where everybody's learning how to use a chatbot so they can go use it in their careers later. And those folks I vehemently disagree with, and I'm happy to argue that point with any of them.
Dr. Jones: Yeah, no, and that's an important distinction, because I think about that argument all the time — how we go from "well, you should be teaching students content" to "no, you should be teaching students skills," and everywhere in between, and how people say that a traditional, well-rounded education, even the idea of a liberal arts education where students see multiple subjects, is what's important. So you state that in your book — that the primary goal of education remains the same. However, I like what you just said, it really resonates with me: the tools with which we deliver that are what needs to change and what has changed. So when we're talking about AI — and this is something that's really key in your book — you go into the different types of AI, because people are even confused about that, and what they're used for, and then you talk about what that could look like in a classroom. Can you go into a little bit about that, and how AI can start to play more of a significant role tool-wise, without changing things philosophy-wise?
Priten: Yeah, so there's a couple of different ways that we like to talk about where AI can intersect with our classroom. And the first and easiest place is in the teacher workflow. We like to start folks there, and we think this is probably the least risky place to get some AI benefits in the picture. And this means everything from using it in your personal workflow — so you have a lesson plan and you want to adapt it, you want to create lots of formative exercises really quickly, you want to come up with a creative project idea for your students, you want to draft a case study on a problem or a course simulation activity — all those kinds of things. We find that really reduces the level of effort that teachers have to put in to get the ends that they want for that material, and then they can go focus on implementing those in the classroom, modifying them to be appropriate for their exact context. And those use cases are the ones that I'm most excited about. Those allow teachers to kind of free up some of their own time so they can focus on the human elements of education, engage their students in creative ways, and hopefully solve some of the perennial problems in education that we have.
But there are other use cases, and I think there are some limited uses that we kind of want to think about. In the book, I kind of go through all the possibilities of what people are going to make — like, folks are going to keep throwing out all sorts of AI tools, and so I want us to all be prepared to think about them. So I talk about everything from like a little discussion bot that keeps track of who contributed how much in a classroom discussion and gives feedback on how they might contribute more — and there's already all sorts of AI surveillance coming out, lots of AI bots coming out, some doing poorly, some doing better — and all those kinds of tools will have some place. We might want to use some AI bots with our students, because a lot of students kind of deep dive into an area of interest of theirs that they really can't when it's a 1-to-30 ratio between teacher and student. And so there are some use cases where I see these tools could be avenues for students to kind of do some personal exploration and get some formative feedback on their own.
At the same time, I'm hoping that we can keep refocusing our classroom time on the more human elements, and being prepared for what folks are going to dream up — and what's even already possible, which I think is a huge thing, because a lot of folks think these things are far off in the future, they're not doable now, but they're all there and they're all going to be pitched to our schools. And I know — like, I build, I have an AI tool for student users as well — and so I know it's not like I'm saying there's no scope for these tools. There are really fun ways that students will find them engaging, I think there are ways to use them productively, but also we have to be prepared to know when these are serving our goals and when they're working against our goals. I think that's the key distinction that I'm hoping we can all spend some time thinking about and talking about.
Dr. Jones: Yeah, so I want to revisit that, because I want to sit on that for a little bit — the idea of helping or harming our current goals. But one of the things you said really stuck out to me. So AI — somebody my age, and you're quite a bit younger than me, somebody my age hears "AI, artificial intelligence" and we're like, Terminator, oh my God. And so that's a very real fear, with the ethics that play into that. With that said, AI makes us think of moving away from the humanistic pieces. But you're saying something different. You're saying that AI could actually help us with the humanistic aspect of education. And that to me speaks to social emotional learning, and how we could put more of our focus on that while not losing academics. So it doesn't become an either-or, it becomes an and. Am I getting that right for you?
Priten: Some of my work through the civics nonprofit has been on trying to imagine what it might be like if we took the philosophy that grounds liberal arts higher education and tried to incorporate that into the K–12 setting. And I can talk a lot about why I think it's important, but I think there's an equity piece to this in terms of — if we imagine liberal arts education is the best preparation for being an active member of a democratic society, we should do that without the enormous price tags and burdens of student loans, and we should do that in our K–12 setting. Part of that is using the moments in our classroom to make our students better people and better equipped to be happy. And that sounds preachy, but it is fundamentally why we build public education systems and why most teachers go into education and why all of us care about education. That speaks to what's best for the student and not necessarily what's best for their future work. So making sure that they have the skills to understand their own emotions, their peers, other humans, and navigate those circumstances in high-stakes situations and really low-stakes situations — all of them — is important. And I think that's maybe more fundamental to what most folks will think about when they think about, like, oh, why do we want to school our children. That is pretty far up on the list, I think, of folks' deep-down concerns and wants.
So I'm hoping that these AI tools can help us do some of that. A big part can just be allowing our teachers to feel less burdened, and that alone makes them more equipped to show up for their students, to be there, to empathize, not feel burnt out — because at the end of the day, the teacher's own emotional state will dramatically change how much they can show up and provide that human element to their students in the classroom. So that's why I'm excited about seeing, okay, what are the ways we can talk about these AI tools that can help alleviate some of the burdens that our schooling system has created.
Now I don't think that AI is going to be the one solution to all our fixes for teacher burnout. Higher pay rates and fundamental problems in education are still there to be solved, and I'll keep thinking about those. But it could help with one particular aspect of teacher burnout, and I'm hoping we can use it in those ways so that we have more time to spend interacting with each other.
Dr. Jones: It's really interesting that you say that, because when you talk about teacher burnout and teacher stress and anxiety, a lot of times that stress and anxiety comes from: am I being a good teacher? So not only am I there for the students relationship-wise, but when it does come down to it, am I teaching them what they need to know, or are they understanding what I need them to understand before they move on? Specifically, and I'm going to take a stab at it here, I believe it was chapter 4 in your book where you talk about adapting pedagogy, and you hit the three big ones, because everybody believes that students learn differently, largely in three camps: the constructivist, behaviorist, and sociocultural approaches. If we can help teachers see, understand — or do I even say feel — that they are being a good teacher and have that confidence, no matter what pedagogy they believe in, how does AI help them do that by learning these tools?
Priten: So I think oftentimes when teachers have an idea of the pedagogical strategy they lean towards, they find their teaching style — because I think we talk a lot about learning styles, but there are also teaching styles, and there are ways that you find yourself to be most effective in the classroom. And then in the context of your individual students and your school setting, you kind of have an intuitive response to, okay, this would work in my school setting. There are some folks who say, okay, look, my students don't have the skills, or the class time, to use project-based learning, and I don't have the resources in my school to create that kind of time to really tailor the pedagogical approach to center project-based learning, and no way to get around that. But that might be someone's intuitive response when they read some strategies.
The great thing about AI tools is they can be the sounding board, and that's what I try to encourage teachers to use in the short term. Put in a lesson plan you have, say you might want to modify it to be project-based learning, say you have a classroom with 30 students with only 45 minutes three times a week, and see what it comes up with. Those kinds of ways, I think, will help our teachers achieve the goals that they have already set for themselves — pre-AI, regardless of what kinds of changes society is going through because of AI — and now use these tools to meet those goals of their own that they have already decided are good ones to pursue, in light of what their students' needs are and their own styles.
And then there's also the reality that all these kinds of shifts are not easy. So it's very easy to say, okay, we're gonna do a PD day or bring someone in who's going to talk about inquiry-based learning, and that plays a role — teachers, I know, oftentimes do find some value in that — but it also creates a lot of work, and all the work needs to be done by somebody. So having a partner in doing some of that work is useful. And so if that just means you're coming up with more creative projects, coming up with assignments and rubrics, having someone to do all that with — and I used "someone" in quotes, unfortunately I can't air-quote on an audio-based podcast, but there's an air quote for that one — that does make all of this more possible in a way that really isn't otherwise, because of the amount of effort it would take to implement some of these things.
A great example is mastery-based learning. Oftentimes — I work with a nonprofit that works on creating mastery-based learning for reasoning skills — the amount of exercises you have to build in order to provide students enough formative exercises, and then summative assessment checks, because the idea is they can take as many assessments as they want until they show mastery — there's a huge burden of content production. And these tools are great for that. So if you're someone who's like, okay, I really want to pursue standards-based grading in my classroom, and I'm going to provide unlimited opportunities for them to get formative feedback and take the summative assessment, having a tool that will generate all those assessments for you makes that real — because you cannot possibly find it feasible to, if you're a math teacher, every unit have like 15 different unit tests. It's just not doable as a single human, but that's doable with AI.
And so that makes all these things that we know are research-backed — we know there are reasons why that makes students more incentivized to learn, why that holds them more accountable, why they have more retention year after year — but they're just not practical right now because of the amount of workload it takes, and we don't have the ability to do that as individual humans. Then these tools can help us in those ways.
So that's where I'm hoping that folks can kind of see that these tools don't have to necessarily be at odds with the kinds of things you always wanted to do. They'll require you to reshift your pedagogical strategy, but let's figure out: what are the things you always wanted to do that you've struggled to do because you don't have enough time, you don't have enough resources, and how might AI be helpful in that way? And then we start seeing doors kind of open up, and see, okay, this is not the scary Terminator — this is a tool like any other that I've already used, but maybe one with a little bit more power than the ones I had access to.
Dr. Jones: And I mean, that's a fantastic explanation about how it helps teachers with efficiency, and helps teachers — I would say — lean into their beliefs and their pedagogical beliefs and practices to a degree that they aren't able to otherwise. So that would lower that level of frustration. But one of the things, going through your book, you talked about it being about knowledge, skills, and mindsets. To me the mindsets piece is huge. So as a leader of a school, I have the ability — through our scheduling process — to grab time and set time aside to help teachers with this, help staff members with this. How do I, as a leader, change the mindsets of my staff? Do you have any practical ways to do that so that they do grasp onto this, start to play with it a little bit, start to practice with it a little bit?
You offer a fantastic thing — another great part of your book — you offer examples and you have a database of prompts, and one of the exit tickets — yes, folks, he's got exit tickets in his book — one of the exit tickets after the chapter is that you tell people to play around with it, join one of these generative AI sites, practice with their prompts, and keep a database of the prompts that work for future use. So that's all great. How do I get somebody to start doing that as a leader?
Priten: Yeah, and we do in-school PDs, so the problem that we've been thinking about — and it's honestly shifted as time has gone on, and folks' narratives around it have shifted — when we initially started being asked to do in-person PDs, a lot of the focus was: start with the pain point, which is plagiarism. That gets folks' interest really quickly. Most teachers are struggling with figuring out what is doable by these AI tools outside of the classroom. It's just starting with, okay, look, regardless of how you feel about it, what tools you want to use or not use, your students are using it. Starting there often gets those folks to just pay attention and see what's possible with the tools, how students are using it or misusing it, and then that conversation naturally leads to the how — how might you use it in a productive way, once you start to see that it is a pretty powerful tool. So that's an important starting place, to say we're past the time of, like, oh, do I bring this tool into my classroom, because our students are bringing it in whether we know it or not.
The second piece of this is — and this is where I think there's some fad nature to the AI topic — everybody is trying to figure out how we can put AI in this tool, in this process, in this project, and some of that will fizzle away. We won't necessarily need everything to be AI-based forever, because hopefully we recenter some of our aims. But there are larger implications of AI in the real world. So I think it helps to show teachers how major careers have shifted in the last two years — careers that are normally not quick to shift, like medicine and law — and how they're already adapting to the world of AI. There's a great New York Times article today about how white-collar jobs are going to shift dramatically in the next few years as this continues to grow, and there's lots of literature coming out about how the world will look different; we're already starting to see that. We already hear from lots of employees that they're either using it and their employers don't know, so it's making their life more efficient, or you're seeing folks say that their employer is hiring for an AI role, or their employer is in fact providing training so that they can all be more efficient in the workplace. So that helps folks see that, okay, there are some career reasons why this is important. And then we can talk about the scary parts of AI — and there are scary parts.
So maybe it's not Terminator, but it gets pretty close when you start thinking about the present. When you talk about the principal who had a fake audio generated of their voice saying something inappropriate, we talk about deepfakes and how that might be used for cyberbullying and how that might be used during election campaigns for misinformation — then teachers start to see that okay, this is not even just about cheating, there are larger dangers to our students if they don't know how to navigate this era. We want to make sure that they're not using it to bully, and we want to make sure that they're consumers of this technology that are critical. So when they see a video on the internet, they don't have the resources to navigate that media literacy that we have been teaching them — with a new element of how does AI change that space. And I think having those conversations about okay, there are dangers that these technologies pose to our students that are much more stark than whether or not they're cheating on a homework assignment, and we need to make sure that they're equipped to navigate those and have the resources to do that — then we start to see teachers say okay, well tell me more about how this is happening, how they might do this, how I can help students build that literacy. And that's sort of that mindset shift — this is not just, you know, let's talk about the cool new chatbot that can do your essay for you, but there are larger societal implications.
Dr. Jones: So it's even about — and the idea that really hammers it home is: if you want to help students, we have to address this and adapt to it rather than try and lock it in a box, because we're never going to lock it in a box. And in reality, students are going to be dealing with this. It almost reminds me of the cell phone argument — how damaging cell phones are to students through addiction and so forth — but for people who just lock out the idea of a cell phone, well, that's not reality once the students leave the walls of the school. So it's not a far leap to think about AI being the same thing.
It makes me think of something you said — or that was a deeper explanation, I think, of something you said before we hit record — which is that even if teachers aren't going to use AI, they need to know about it. And that goes to the heart of navigating this era. With deepfakes, with the idea that we need to double down on having students that leave our schools with a stronger ability to analyze pieces of information and have enough background knowledge or depth of knowledge to at least begin to question that — so that they don't just get, and "get taken" is probably the wrong phrase, but get taken by these different things.
Priten: No, and I think this is where I was just starting to paint the picture of where the technology is today and how prevalent it is amongst all the tools that students are using. From the beginning of the school year to the end of the school year, we saw a dramatic shift in how much AI was available to students. iPhone apps came out; Instagram, WhatsApp, and Facebook all integrated their AI within the apps themselves. Snapchat had its AI in the app already last year, but student usage went up. You started to see new abilities, like the ability to screen share with the AI tool. Apple announced that every single iPhone and Mac device in the country will have AI built in. And so there is no getting around it, there's no figuring out how to ban it or how to just ignore it, because it is everywhere now. It's not just going to be everywhere — it is everywhere. Making sure that students have the skills to navigate that is really important, and they need to know how much they can and cannot rely on it.
We talk about this, and it's a topic that comes up quite a bit, especially in the circles of philosophers that I tend to hang out with. We talk about offloading our skills through these tools. We saw this with calculators — our basic mathematical skills have gone down dramatically. So most of us, when we're sitting at the dining table, will pull out a calculator to calculate the tip, when maybe 20 years ago that would have been mental math that most people could do pretty easily. Granted, tip percentages have gone up to the point where even 20% makes it a little harder, but still, people reach for the calculator to do that basic thing, or what was considered a basic thing in the past. Same with GPS — our capacity for spatial reasoning and awareness has gone down substantially. There are times when, if my GPS is not on, I couldn't even get to my parents' home from here, which would normally have been a very easy trip. Those kinds of reductions in our ability to do things — we know that happens when technology is introduced.
These AI tools can think and write and read, or mimic all of those things. And if we start offloading all of those things to these tools, that's a danger that we need to make sure we're safeguarding our students from. So this is where we talk about making sure we can find ways for them to use these tools in service of their skills, and figure out which skills do we know they really need outside of the school setting — so that they don't become so dependent on the tool that they no longer have those skills themselves.
And there are folks — and there's so much disagreement on this — there'll be folks who think that the essay writing skill is no longer relevant because it can now do the writing. And then there are folks, and I think I fall in this camp, who think the process of writing is important. The ability to think about your words and word choice and rhetoric and grammar all do say a lot about your ability to think, but also to think about how someone else is going to receive information. And those are fundamental human things that I think we all want to make sure our students can still do. And then that dependency is scary, especially when — and again, lots of great literature on what truth means in an AI world where the deepfakes are there and where it can contour to your exact worldview. We want to make sure that our students have the background knowledge and skills to do something with that knowledge, so that even if these AI tools are producing more content or doing more of our economic work, they know when to chime in as a human and say okay, the ethics of this is wrong, or the harm this is causing is not worth it. Those kinds of things are fundamentally human decisions and human calls, and they need the skills to navigate that.
I know I was on my soapbox for a little bit, but those are the kinds of conversations I'm hoping we can have about how we keep serving those goals. And I don't think the skills are very different from the ones we already want our students to walk away with.
Dr. Jones: It makes me think of AI in service to the student instead of in lieu of — or in service to the skills instead of in lieu of. Because if you don't handle that, that's when you run into the ethical issues. But in your book — and I just want to turn this over to you, because as I keep mentioning different things in your book I always fear that I may be missing something, because you talk about ethics, you talk about professional development for teachers, you talk about adapting and growing and how to teach students about it — what do you think was the most impactful part, or most important part, of the book as you wrote it and published it?
Priten: Yeah, oh, that's a great question — I don't think I've been asked that one before. I think the ethics chapter is my favorite chapter, and the reason is because it's almost the first chapter folks ought to walk away knowing best. It doesn't make sense without everything else, which is why it shows up later in the book — but that's not about priorities, that's more about scaffolding the learning for the reader. The ethics chapter outlines some of my biggest concerns when it comes to AI and education, and I think those are the ones that we need to be proactive about. So I'm worried about what this does for achievement gaps across the country. I'm worried about what the existing digital divide means for how these tools are implemented across the country and what access students have to them. And I'm worried about giving up too much of our students' information and privacy in service of what may not really be much in return.
Figuring out the right balance here is a dilemma that I think we'll all have to navigate as these tools become more powerful. What I tell folks is that the more powerful these tools are, the more data they will need on the individual user. And there will be benefits sometimes that we will decide are worth it — for our students, for ourselves as teachers, or even as individual consumers. And then there will be times where we'll have to decide that no, giving up my private information is not worth what I'm getting in return. We do some of that already — we decided that giving up all our data to Facebook is worth it because we get to stay in touch with long-lost friends and family. So we're okay posting all that publicly, or at least within some sort of protected framework.
But these are the new dilemmas we'll have to navigate. What am I comfortable with — the risk that my data is used for training, for advertising, or for political ends? Those are conversations we need to have as individual consumers of technology, regardless of our role in education, and that we need to have as educational systems about what tools we want to allow and what guidelines we want to set for instructors and for students. And we want to empower students to make the right calls outside the classroom — because, and this is what I think you alluded to earlier, maybe we can create a bubble within our classrooms. Maybe some IT person finds the magic way to block every single AI tool on every single platform — that would be a remarkable achievement. But even then, we still need to have the conversation within our school system about making sure our students use these technologies responsibly outside the school system. And this is where the cell phone example is great —
And I'm someone who leans lightly toward banning student cell phones and providing alternative tech to use in schools when resources and logistics allow — but not at the expense of also helping students learn to navigate those decisions on their own. There's scaffolding we can do as educational systems around which tools we allow — that's where we start — but the ultimate goal has to remain: can you navigate the wild west? It can't be "can you navigate the bubble and only the bubble." It has to be: can the bubble help you navigate the wild west? That's how I hope we can start thinking about both sides of that debate.
And thank you for that. The ethics chapter definitely rang true to me because they are considerations, and they are things that people are legitimately worried about. It's easy to say, "Oh, you're just a stick in the mud and you don't want to move forward," but they are very serious things that we need to think about as we move forward.
You talked about how the chapters weren't ordered by importance but were scaffolded — and that's another strength of the book. It's scaffolded. I found myself going through it, and right off the bat you explain the different types of AI, how we can talk to teachers about it, and how we can talk to students about it. It's just really helpful.
So again, I have to tell my listeners — and I'm going to say this a couple of times — AI and the Future of Education: Teaching in the Age of Artificial Intelligence is really something you need to pick up. I'm going to be offering a book study with it to my faculty when they come back this next year. I was just really impressed with the amount of resources, the way it's laid out, and the usefulness. I guess that's a word that comes to mind. Oftentimes we read these books and they're bigger picture, and I know it's difficult in a 35-minute podcast episode to dive into it, but it's really worth the read. So I really appreciate being able to connect with you and get through that book.
I appreciate that — it sounds like it's doing its job, because the big questions are important but the goal was: can you pick this up and make something of it for your classroom on September 1st? So I'm glad it's — yeah, sorry.
No, that's fine. I heard something one time — I was sitting in a workshop, a smaller offsite workshop for an administrative group, and we had an individual come in who was going to work with us for two days, which is kind of a tall order for a PD provider. The first thing he did was go around the room and ask what we hoped to get out of the day. One of the first admins — and I'm so glad it wasn't me, because it was going to be my answer — said, "I want to learn one, maybe two things I can take back and utilize." And his response was, "Really? That's where the bar is? Just one or two things you can utilize, and the rest is talk? We're here for two days."
Your book throws that bar out the window. There are so many things that are takeaways that you can immediately put into practice in the classroom, because it helps hold your hand until you're ready to stand on your feet on your own.
That might be the best thing someone's said about my book, so I really appreciate that. Thank you.
No problem. So like I said, we are limited by time — 35 minutes or so, it's tough to get into the book — but that's why I wanted to make sure to reiterate that people need to go out and get this, and lead by example that I'm actually using it with my faculty.
I do ask two questions of everybody at the end of the podcast, and we're getting to the end, so I'll jump right into those. One of them is: you're in an educational service field — if you weren't in education, who, not what, would you be?
I think my gut is to say I would be a monk. Taking some time and space away from the hustle and the constant "what is the next big development that's going to influence the world" — and taking some time to think about big-picture things. I think I would cherish that time to sit in silence and think about some questions. Maybe after being steeped in AI, we should all spend some time in monkhood to know our humanity.
100%. I could get on board with that — a step away and time to reflect on humanity.
So through all this, with your book and your current experiences, what's the most important piece of advice you would give to leaders as they work to better support, engage, and empower those they serve?
I think everybody taking a collective deep breath ought to be common practice — maybe the monk is coming out in me — but just remember that this is scary. If it's scary for you, that's totally reasonable, and then it's definitely scary for those you're leading in your school systems: students and educators. Recognizing that fear and anxiety makes everybody feel heard and seen, and that's a fundamentally good human thing to do. It also helps us brace ourselves for the changes these developments are going to keep bringing.
Expecting that we will all feel confident or comfortable with this material tonight, in a month, or in six months is unrealistic, and setting those unrealistic bars for ourselves and our educators is a disservice. Everybody should recognize that there will always be a lot to learn when it comes to AI — this stuff will keep moving as fast as it has; it's not going to plateau, and it's not going to slow down. Every morning we'll wake up thinking, "I have no clue what the world looks like anymore" or "I don't even know what this technology can or can't do anymore." We'll take it a piece at a time, and learning together from that same place removes the pressure to catch up, or to push everybody else to embrace it quickly. The fear is warranted, and we have to navigate it while validating it.
What's the best way people can get in touch with you?
Our Instagram is pretty active. I like to post updates on any podcasts I'm on and any conferences we're going to be at, but also lots of AI news breaks — we try to post the parts that are relevant to educators, along with tips and prompts. So that's definitely a great starting place. If you get a copy of the book, there's a free book discussion guide on our website that shows you how you might lead a professional learning community around each chapter of the book. If you go to our site, which is pedagogue.ai, there are plenty of other resources for folks to check out while they're there. But also send me a message — my email is brennan at pedagogue.ai. I'd love to hear from anybody who's navigating a particular question, dealing with a particular policy, or wants to have me come speak. Whatever that may look like, I'm happy to have the conversation.
Fantastic. And you said Instagram — what's your handle?
It's at Pedagogic Cloud.
Okay, let me just put that down because I want to make sure I put that in the show notes so people can reach out to you that way. I find if people hear it, see it, and it's easy to click, they tend to click a little more. So we'll see what we can do there.
Everybody, make sure you get a copy of AI and the Future of Education: Teaching in the Age of Artificial Intelligence. Priten, I can't thank you enough for being on today. I really enjoyed the book, and I can't imagine people won't benefit from it in big ways if they pick up a copy. Thanks for coming on today.
Thank you for having me and for the platform to talk more about these important topics.
Well, that's a wrap, but not the end. Next step: be sure to take action on something you heard here today.
Hey, thanks for listening to the Seeing to Lead podcast. If you would like to connect for any reason, email me at [email protected] or catch me on Twitter at Dr. C. Jones. If you've gotten any value from the Seeing to Lead podcast today, you can help me and other leaders create a world-class environment through a teacher-centric approach by subscribing to the show, leaving an honest rating and review, and sharing this episode on social media with your most valuable takeaway.
Also, one last thing — have you had a chance to pick up my latest five-star-rated book yet? Grab your copy of Seeing to Lead anywhere you buy books, or at seeingtolead.com — that's S-E-E-I-N-G-T-O-L-E-A-D.com — where you can learn more and continue to improve. Now go have a successful week.