Ethical Ed Tech: How Educators Can Lead on AI & Digital Safety in K-12
Transcript
Welcome to the Principal's Handbook, your go-to resource for principals looking to revamp their leadership approach and prioritize self-care. I'm Barb Flowers, a certified life coach with eight years of experience as an elementary principal.
Tune in each week as we delve into strategies for boosting mental resilience, managing time effectively, and nurturing overall wellness. From tackling daily challenges to maintaining a healthy work-life balance, we'll navigate the complexities of school leadership together. Join me in fostering your sense of purpose as a principal and reigniting your passion for the job. Welcome to a podcast where your well-being is the top priority.
Barb Flowers: Well, welcome everyone to the Principal's Handbook. I'm excited. Today we have a guest with us. It's Priten Saundershaw. He's with us today talking about his new book, Ethical EdTech. So Priten, do you just want to start by telling us about yourself, your experience in the edtech industry?
Priten: Yeah, my experience in edtech started back in high school — I started my first edtech nonprofit at that point, and I really haven't deviated from that plan since. Since then, I've earned a bachelor's degree in philosophy and a master's in education policy, and I've spent a lot of that time also developing edtech tools for educational institutions of all sorts — nonprofits, universities, and startups. During the pandemic, we built a lot of tools to help educational institutions transition to online learning, specifically for extracurricular activities. And then since 2022-2023, we've been doing a lot of work helping educators in schools navigate what AI means for the classroom.
Barb Flowers: That's exciting stuff, and I love the conversation about AI because I feel like people have such strong opinions about it. So talk to us a little bit about your new book that is talking about ethical EdTech and how AI fits into that, and just the conversation of the book.
Priten: Yeah, so definitely some strong opinions here, and that's kind of what gives rise to the book. My first book really laid out all the different possibilities for what AI might mean for education — how we might think about student usage and teacher usage, what might be on the horizon. But it didn't really tell us what we should do with the technology, more what it could do.
In the last few years, I've become increasingly concerned with how we've been approaching technology in our schools, frankly. Largely it's driven by marketing pressures, funding pressures, political pressures, and we've been moving really fast, I think because of those pressures. And so the book is largely a call to slow down and to think about exactly why we're integrating technology when we do. A lot of the book itself is focused on AI. But of course the kinds of skills and the kind of vocabulary that I'm hoping to build through the book are timeless. And so it would apply to any sort of technology that we see in the future or that we've already been seeing, whether that's social media, cell phones, devices, things like that.
Barb Flowers: OK, so you just said something — I'm a principal of a kindergarten, 1st, and 2nd grade building, and you said we need to slow down, and time on computers was the first thought that came to my mind. I'm an elementary principal, but I'm sure even in K to 12, right — kids are in different classes all day and they're using their devices. What are your thoughts on that — on how much time kids are on computers and how much technology is being used in schools?
Priten: Yeah, so in the book I don't take a stance. The book is largely meant to help equip individual people to make their own decisions that are right for their context. Holistically, though, I have concerns about that much technology in our classrooms, especially at the younger age levels. And I think there's a couple of different things. I think there's good documented evidence about what the technology does to attention spans and social-emotional development, and so I think that those are especially important at the K-5 level — probably later on as well.
And I think the second concern is that our students are using technology almost every minute they're not in schools. And so one of the only places where there's a structured time where we can control whether or not they're on a screen is within a school building, and I think we should take advantage of that. So those are my quick thoughts on it.
I know there are lots of folks who want to make sure that our students are prepared to use the technology effectively, and I think that's a great argument for later in schooling. By the end of middle school and in high school, we ought to start teaching them responsible usage of the tools. But I think most of the pedagogical goals we have for our younger students really can be achieved better without the technology, when we think about all the different aspects of learning beyond whether they're getting the right number of personalized math questions — which is an important part of this, but not the only part.
Barb Flowers: Yeah, yeah. And I think it's so important. What is your take on computer usage? We always make comments like AI will eventually take over teachers and things like that. I don't believe that to be true, but I do believe how we're using technology is super important — that if we don't want it to take over our jobs, we need to make sure we're not just throwing a kid on the computer and letting them self-pace their learning, right? So what are your suggestions for principals leading teachers on this?
Priten: Yeah, I think that's the exact right framing, because I think one of the things that I've been trying to show educators is that a lot of this is like intentionally giving up our agency and giving into the dialogue of the tech industry, largely. And so I think the way you framed it — that the more we integrate the technology into our classrooms, the more likely it is that the teacher becomes less important — is true, but that's not because the teacher is less important, right? We're downplaying the importance of the human in the classroom by doing that.
And so I think the right approach is to figure out how we can get our teachers to not feel the pressure to integrate that much technology. And the second part is helping folks see what the value of that human interaction is in the classroom. So one of the arguments for an AI tutor bot is that students won't have to wait for the teacher to come and give them an answer — everybody can get instant support at the same time. And that might be true for a very particular academic goal, but especially in K through 5, we're teaching them a lot of other things, right? You also want them to learn patience and the ability to wait. And so if they are used to getting instant answers in every moment, they're not learning that. Having to wait two minutes for a teacher to come answer your question — that's a great opportunity to learn some patience and some emotional regulation, right? The opportunity to ask your friend for their opinion or help on how they're approaching the question — there's another opportunity to learn how to solve problems on your own rather than, again, be given all your answers.
And so I think the more we can show that there are other parts of learning that are happening when we keep our classrooms human, I think that would be very effective.
The other part of this, largely, is — and this is not really within the control of any individual educator, or maybe sometimes even school buildings — but some of the larger pushes for technology integration are driven by standardized tests. And that's the reality of the systems we work in. The more we can advocate for getting away from standardized testing, I think the less incentive we will have to do that kind of rote drilling that I think a lot of times the technology ends up being a good use case for.
Barb Flowers: Yeah, I think that's a great point. Well, let's dive into the ethical piece of EdTech. What are the things that you think that maybe we're not thinking about when it comes to ethics and education? I know a big one is AI and plagiarism, things like that, but what other pieces do you discuss in the book?
Priten: Yeah, so I borrow from bioethics in the approach I take on ethics. And that's because, very similar to medicine, education also makes a lot of really important decisions in the moment. And so there aren't universal answers that we can necessarily say apply to every single case of edtech use, similar to how there's not one universal answer for how organ donation ought to be handled in medicine. And what medicine has done is empowered individual decision makers — doctors and hospitals — to make those decisions based on their individual context, because so much of it depends on who are your patients, what's your community, what are your resources at your particular institution, all of which apply to education as well.
And what bioethics does is provide four principles that we can kind of use to think about what the ethical problems with decisions might be. So those four things are: does it actually do something good? An example of this is I think oftentimes we hear "innovation" and we hear exciting new technology developments, and we rush to integrate it because there's all this narrative around like this is the future, this is exciting, this is innovative — but we're not really asking if it's doing something concretely good, and we're not asking for evidence of that good. And so step one is to make sure, before we do anything, that we know it's going to benefit someone.
The second step is the do-no-harm principle in medicine — what are the risks we're exposing our students to? And there are two types of risks. There are risks that we absolutely cannot tolerate — those are our hardline boundaries. One instance of that would be if a student becomes dependent on a technology tool in the classroom; we might think that emotional dependence by the student is not something we want to ever risk having in our classroom. The second type is we kind of have to weigh the risks and benefits, right? So there are harms that maybe we have to accept as part of the integration. An example might be that we sacrifice some level of peer relationships because the academic gains are so great. So it's not an all-or-nothing — like if there's any sort of benefit we should automatically say yes, we'll do this. Nor is it that if there's any sort of harm we automatically say we won't do it. That still keeps a pretty good general framework in terms of how to make those decisions.
The other two principles start to give us a little bit more clarity. The third is autonomy and informed consent — how much are the parents involved in making these decisions and knowing what's happening in our schools, how much of that is happening proactively rather than reactively when they're asking questions or when they're concerned, and how much say are we giving our students as developmentally appropriate? So the amount of say that we might give our younger students might be very different than the say we give our high school students. But are they getting the opportunity to make the call about whether or not they want to use technology?
One of the things that, especially with AI, we're seeing is that older students have their own ethical concerns about the technology. They're worried about the environmental impact. They're worried about what it's funding, right? So when they express those concerns, are we allowing them to opt out of usage, or are we requiring them to use it?
And then the 4th component from bioethics is justice. And that's the question of, when we're thinking about all these harms and benefits, who is being affected by them? Who's getting all the benefits, who's getting all the harms? And how are we really making sure that those are allocated fairly? And so every time you integrate technology, if it's always helping your gifted students but your special needs students are being left out of that conversation, that might be a reason to think about whether or not the justice angle is being thought about.
The final piece of this — and this is what I argue that we need in education — is the care element, which is that, you know, in medicine, your relationship with your doctor oftentimes doesn't really matter. The example I give is if you really hate your surgeon, they can still take out your appendix and you can walk away healthier than you were before. But if you hate your teacher, it's a very different situation, right? It does actually impact your ability to gain the goods and the benefits that schooling ought to bring you. And so we need to make sure that the decisions we're making are good for the relationships we have with our students. And that's everything from: do they trust us? Do they feel surveilled by us, or do they feel like they have some safety with us? Are we actually getting to know our students in that personal way? Do our students feel like we are invested in their own education? Right. All those kinds of concerns that are really relationship-based. I think that's often the one that's really left out of the conversation with EdTech, and especially with AI these days.
Barb Flowers: Yeah, I love that. And I love how you relate it to medicine, because I guess with technology I wouldn't have thought of these different areas. Talk to me a little bit — and I know I'm behind on this — about the topics with AI and how it's become so controversial.
Priten: Yeah. So I think that there's all sorts of use cases we're seeing, and the headlines are getting scarier and scarier — that's, I think, the least hyperbolic way I can say it. And so one of the things we're seeing is that there is a lot of pressure to train students on how to use AI tools. Schools are integrating prompt engineering into different classes. They're figuring out: how do we teach students about AI, to use it for writing, to use it for historical research, to use it in art classes to generate images and art. And there's some value in students learning the newest and latest technology. But I think the pressure that schools are facing right now is an all-or-nothing pressure where they feel the need to reclaim the relevancy of academic material. They're feeling the need to tie everything into AI. And I think that's largely because that's the narrative from the tech industry. They're saying that it's not important to write anymore because AI can write, and so there's no economic value to you writing.
But as educators — and as, you know, most researchers will tell you — there are other reasons to teach these skills that aren't about immediate economic output. And of course, look, we don't live in an ideal society. Getting a job is a really important part of going to school, at any level of schooling; most high school motivations are based in career prospects, as are college students'. But the reality is that teaching prompt engineering isn't actually preparing them for their careers. A 9th grader learning what to do with an AI tool today — that's not what they're going to be doing when they graduate and step into a workplace. And so we need to step back and think: what are the skills that are actually relevant that we want them to learn, and not just what the headlines feel like they're telling us we need to be concerned about?
And of all the things that we teach our students, the one that has stood the test of time is reading, writing, right? The humanities have lasted 3,000 years. This is not something that becomes less valuable as society progresses. Whereas these particular tech skills — at some point typing classes were really popular, and at some point Excel classes were really popular. But as technology is adopted, we've said, well, that's not really important anymore. We haven't done that with our core basic academic material. And so that's one part that I am really concerned about.
The second part is figuring out the plagiarism and integrity angle — which I think you mentioned earlier. I think that schools are struggling because right now, outside-of-classroom assessments are basically not really reliable. And I think that's only getting worse. I think in the last six months we've seen the advent of agents where entire courses can be taken autonomously by very easily available AI tools. And so all the barriers that we've been trying to build for outside-of-classroom assessments are really falling apart. Folks are trying to figure out: what does this mean for homework, for assessments? How do we still assign a take-home essay?
And I think those concerns are important. And this is where figuring out what our values are and why we're having these conversations is so important, because some folks are saying, oh, that's a sign that what you're teaching is irrelevant, right? If the student can do it with AI at home, that's probably a sign that you should be teaching something else. Why are you teaching something that AI can do? But that's similar to the earlier conversation I was having — there's a mismatch between whether a technology tool can do something and whether our students need to learn it. Those are not directly correlated. That's never been the case, right? We still teach our students how to walk, and cars can travel much faster, much better, much more efficiently. That's, you know, that's definitely a strawman argument. But the point being that we do still teach our students skills that technology can do, and we have good reason to do that. And that's probably some of the answer here.
The second is, I do think we're going to have to rethink education. We mentioned standardized testing a little bit earlier, but I think in all sorts of ways, figuring out how we can do more assessment in the classroom that's effective — but also keeps students motivated and engaged and understanding the value of what they're learning — I think is really important. And these are not things that are new to education, right? Math classrooms have done this for decades now. Calculators have existed. Students have had the option to go home and do basic times tables with calculators for their homework. But math is structured such that the more you practice outside the classroom, the more you do your homework, the better you will do when you show up in class and are assessed on those skills in class. So homework isn't really like, oh, did you get this right or wrong necessarily — especially at the lower levels. It's more: here's an opportunity for you to practice. But the actual assessment, the actual demonstration of that skill, is going to happen in the classroom. And that's worked effectively. We've been able to still assess math skills despite the fact that most students have a calculator on their phone, let alone an actual physical calculator. And we can probably figure similar things out for other areas of education now as well.
Barb Flowers: Yeah, you brought up a good point. It has me thinking — we get into this all-or-nothing thinking. Right, like, oh, AI can do it, so these jobs aren't going to exist, or we're not going to need this anymore. I can remember in the 90s people saying, what's it going to be like in 2025, 2026? Are cars going to fly like the Jetsons? You know, and here we are — none of that happened. And I think you're right. You bring up such a good point as educators that we need to remember these foundational skills that we've always taught: reading, writing, math. Like you said, there's always been calculators, there's always been things that we can use, but we still have to have those skills. And it's important as educators not to go completely to one side of it and forget about that, because I think with new tools that come out, it's so easy to do that and think that things are irrelevant, but it's so important for our students still.
Priten: Yeah, and the pace is scary too. I think that's part of it — everything's moving much faster. And so while we faced some of these things with other technology integrations and innovations in the past, with AI you're hearing some new thing develop every week sometimes, or at least every few months you're hearing some massive development in this space. And so I think there's a lot of fear that drives this, a lot of concern and anxiety that drives this. And those are all valid emotions to have in response to something that's this groundbreaking and this fast. But I think the right response is to slow down and not speed up. And it's so counterintuitive for most folks, but I think that slowing down will help us actually figure out how much we need to speed up — what exactly needs to change to deal with external speed. If we're just trying to match that external speed, I think we're going to get lost and let those external forces control what happens in education. That's just not going to serve the purposes that we want it to serve.
Barb Flowers: Yeah, no, I agree. Well, thank you so much, Priten, for being here today. Are there any final things you want to share before we go?
Priten: Just that I think folks ought to try to have these conversations more often. These are not, you know, magic solutions to any of these problems. But the more we all talk about it, the more we can get closer to coming up with the right solutions for our communities as educators. My book, which hopes to help equip folks to do that, is coming out in May — that's Ethical EdTech, and it's at ethicaledtech.org. And then if folks want to follow along with any of my other work, where I talk about all these things, oftentimes on a soapbox, they can check out my website at pritin.org.
Barb Flowers: All right, awesome. Well, I will link that all in the show notes as well, so you'll be able to connect with Priten there.
Barb Flowers: Thank you so much. I appreciate you being on the podcast, and I look forward to your book when it comes out.
Priten: Thank you so much for having me.