“Ethics is not just a buzzword. People have spent thousands of years thinking about what it means and how humans ought to approach these questions.”
Related Projects
- Ethical Ed Tech: How Educators Can Lead on AI & Digital Safety in K-12
Transcript
Welcome, folks, to another episode of the Hi-Tech Podcast. Very excited. If you're listening, you know that this is a regular episode — nothing out of the ordinary. But if you're watching, you're already clued into the fact that this is a guest episode, and we're very excited to have Priten Shaw back to speak with us. Forgive me — Priten Shah. Got a whole new show. We'll get into that in a little bit.
You know, got to have some speed bumps. Can't just drive at full speed. But before we jump to you in one second: if you like this episode, or if you're interested in more information about Priten, about the book we're about to talk about, or about the Hi-Tech Podcast, head over to highpod.us. Find us on all the socials, follow us at Hi-Tech Pod or Hi-Tech Podcast, and reach out at inbox@hightechpod.us if you'd like to learn more about Priten's work or anything else about the podcast. Now, that's us out of the way. Priten, welcome back. Last time we had you on was episode 133, and this will be episode 238. I can't believe it's been two years. How are you?
We're super close to exactly 100 episodes apart. Sorry.
Yeah, that's impressive. That was not on purpose, so I'm kind of glad that worked out that way.
We'll schedule you in for episode 338, then. Right.
Exactly. I'll just, when I see the episode pop up, I'll be like, it's time to reach out to them again. No, but thank you so much for having me back. I'm excited for this. It was a blast last time and I'm sure we'll have a blast again today.
Yeah, we've always had — we find folks all over the world and then they become friends. We're just very happy to make the acquaintance of good people.
In that episode, we talked about your last book, AI and the Future of Education. Nothing's happened since then.
Yeah.
It's basically been stagnant, right? Like, that's basically it.
Complete plateau. It's been easy, you know. Yeah. Exactly.
Before we jump to Ethical EdTech — the book coming out ahead of us — what's happened? Did that book do well? Have you spun off five different AI companies? What's going on?
Gosh. Okay. So this is a complicated question. The book did well, at least by what I was expecting, which was awesome. We got translations in some interesting languages — I think Vietnamese, Mandarin, Turkish, and Arabic.
Okay.
Yeah. It was not the list of languages I expected to come first, but it's cool — I'm glad folks in those countries found it interesting. So yeah, it's been exciting. The biggest thing that's happened is just that a lot more has happened, very quickly, and I think that's true in general. There was a lot I talked about in the book that I thought, oh, this is going to happen at some point, and it all kind of just happened. So we've been figuring out what that means for the speed at which we adapt in education.
But I got a chance to speak to educators all around the country and the world — initially about what their hesitations were with AI, and that was interesting because it ended up being much more about, oh well, here's how the tech works, here's some cool things you can do with it. And I enjoyed that, because it gave me a chance to put on my technologist hat and share something cool and fancy. It's always more fun to be talking about the positives — "Oh, here's this really cool thing you can do." I spent a good year and a half, two years doing that after that book came out.
And I know we'll get to this in a little bit, but I think at some point I was like, "Oh gosh, people took this really seriously and are moving really fast." The problem kind of flipped — from everybody moving really slow to moving really fast. And I was like, "Okay, gosh, I've got to come back to the other side a little bit and slow folks down." So that's kind of a big picture overview, but we can dig into it.
I love that. Let me write a book: speed up! Oh no, no, no — let me write another book —
Call it Slow Down.
Like, okay, we hit the gas a little too hard, folks.
I'm curious, man. Obviously — Will and I keep joking on the podcast that if you stop listening, we'll stop talking about it. But I feel like all I've done is talk about AI — for us especially, the last year and a half has been non-stop AI conversations. And I've seen a lot of change in education and a lot of different reactions to it. When you say a lot changed really fast, what were some of the top things where you thought, man, this happened way faster than I expected? Where did you see people hit the gas and think, yeah, I did not expect this to get here so quickly?
Yeah. No, that's good. So I think the first part of this was that it went from being a conversation largely about plagiarism and detection to being, very quickly, about how do we integrate this in every single possible way. And that's a massive turning point — or change, rather. The conversations I wasn't fully expecting to happen this quickly are the ones completely questioning the relevance and value of everything we do in education. I thought we were a little farther away from that.
I think the large-scale contracts between school districts and universities and the large AI companies were definitely a moment where I thought, oh, we're no longer in control of this narrative. This is not coming from the education side; this is coming from the tech side. And that flip was one where I started to get more cautious — when they start to control the narrative very explicitly. Before, it was, what do educators want to get from the tech companies? Now it's tech companies knocking on your door saying, take it all. That was definitely a big change.
And then of course everything's just getting better and faster and more powerful. I'll be at a PD event and there will still be folks who think that AI can't do math. And that's the reality — folks can't really keep up with all the developments. No one can keep up with the developments, realistically. I'm sure I've missed a million things in the last two weeks, and this is all I do. So if you're teaching US history, I don't know when you would go and see what new capabilities the models have developed. But they have gotten better — way better. The deep reasoning models in particular were a real turning point.
I think agents have kind of been the thing where, every time I mention them in a room with teachers, I know I'm hated. That is definitely a moment where they're like, "Stop. We're barely dealing with the other part of this. Can we not?"
There's some fear, right? A student can now just say, "Oh, go take this whole online course for me." And I think that scares folks. Scares me. That's kind of the biggest change — the pace keeps quickening, and I don't think we're fully understanding it at the same rate, but we're also trying to keep up with it by bringing it everywhere. It's such a mess. A mess is probably the short answer.
Within a few episodes of yours — yours was 133, and 147 was a conversation with Simon Nicholas from Archer. Archer no longer exists, which is itself one of those markers of how fast AI is changing things. But that was a math tutoring platform, and the thing he was saying at the time was, "We still have humans check every math equation because we can't trust it." And today —
Maybe the higher maths, but arithmetic is done. I don't think twice about asking it to help me with arithmetic. So you're right — holy smokes, stuff's speeding along in that capacity.
And your comments about agents — I've been in those conversations too. It's interesting to see that happen. Mine wasn't an agent conversation, but one where we started to tell instructors and faculty members about AI browsers like Perplexity's Comet. I remember the visible moment of describing that to someone and seeing the realization land: oh, I thought all we had to deal with was chatbots — you're telling me this thing can just go do things on sites? Okay. Yeah.
The contract thing is interesting. I didn't think about that until you just mentioned it, but yeah, that happened so much faster than I expected as well. Especially OpenAI and some of those companies getting in — Instructure, the company behind Canvas, announced a giant contract with OpenAI just within the last year, and they're plugging it into the entire LMS.
Yeah, that's a lot more than any of us expected this quickly when we talked about AI two years ago.
Yeah. And it's bizarre. The Canvas integration in particular really bothers me, because we're seeing lots of university faculty struggle with the fact that these browser bots and agents can go and do things on Canvas. And there are lots of other folks in the space working on advocacy to get them to stop it. This is really within the control of the AI companies — they could very easily put in a guardrail that says, don't do assignments for students on Canvas. You have a massive contract with this company; maybe don't make their product irrelevant. I don't know what game they're playing, but it's on both sides, and I'm curious what their long game is here. Those are the kinds of things that are frustrating, because it would be such an easy thing for them to make the education part of this much easier, and they're refusing to do it while pretending to be an ally, which is just — I don't —
That's a plain enough segue to bring us to Ethical EdTech, right? I can't imagine it's the only thing the book's about, but we're raising questions of ethics just by discussing this. What brought Ethical EdTech, the book, into being?
Yeah. Largely this. Starting to realize that, oh gosh, we are not thinking critically about what we want for education. I think that was kind of the initial impetus for starting to have the conversation.
So I think this started as, oh, I'm going to redo my PD to actually bring in a lot of my philosophy background. That's the inner core of how I think about these things, and I had shut it out because what folks wanted for the first few years was, how does this stuff work? And I was like, oh, I can explain how it works — here you go. But deep down my philosophy side was there, thinking about all these things. And there was a moment where I realized, oh, teachers are actually very receptive to having those larger conversations now. That was really motivating — when I could walk into a room with teachers and they were no longer so hyperfixated on the assessment piece of this (which is still a concern, still something we're figuring out), but were noticing the larger questions we need to be asking to navigate the coming future. And philosophy, I think, is a great way to do that. Biased, but still.

One of the problems we have in education in particular is that we shy away from talking about values and about what undergirds our education system as a whole, when it's really the core of it. The reason we fund public education, the reason we structure universities the way we do in this country — there are massive value commitments underneath that. And if we slowed down and thought about what that means for how we approach decision-making, I think we would be making a lot of different decisions.
So that's the book's goal. It was scary for me, because it's a very different style of writing than the first book. The first book was more of a how-to — here are screenshots of how this thing works. I'm proud of it, and it did something, but it was really just a translation of what was possible, not the kind of personal intervention in the field that this one is. This one is me saying, well, here's what things ought to look like. And that's really hard to do and put out there — permanently, a physical thing that hopefully everybody's going to have in their pocket. So it was challenging. It still is challenging to even think about.
And the other part of this is that there are folks who want to ask the big questions, and then there are folks who are just like, I don't have the energy to ask the big questions. My goal isn't to speak only to the folks who have the time, or who want to make the time, for it — we really need everybody to be asking these questions. So I've tried to make the philosophy like a crash course: the bare minimum you need to know from ethics and political philosophy and philosophy of education to be having the relevant conversations we need to be having tomorrow. Not the I'm-going-to-sit-in-this-room-and-smoke-a-cigar-and-talk-to-other-philosophers vibe — that's not what the book is going for.
So that's the first third of the book. The second third talks about how we can build better infrastructure in our schools to create policies that are ethical. Again, this is not, oh, here's how you write one very specific tech policy. It's more, what are the procedures we put in place to make sure the policies we come up with are sound from an ethical standpoint? That's the second third, and then the final part is practice.
In the book as a whole, I give no answers. And again, people hating me is clearly the theme today — I'm not as hated as I'm making it sound. But there's some reality to the fact that I can't answer all the questions folks hope I'll provide easy answers to. So the book takes the stance that a lot of this is context specific. I borrow from bioethics — we can talk about that if we get to it. But the context specificity means folks need to practice the skill set. So the third section of the book is case studies: real stories of ethical dilemmas that teachers I interviewed last summer faced in their actual classrooms, or headlines that I found really interesting. Each one is broken down into a hypothetical, fictionalized scenario, and then the onus is on the teacher to reason through how they might approach it — an opportunity to practice that kind of ethical reasoning.
Okay. So one might argue that's the real teacher tactic — don't give the answers; get them to reason their way there on their own. I love that. That's super cool. I'm just going to ask the question, because Will put it as a question in our notes the second you said it. You just dropped "bioethics," and I'm pretty confident Will and I have no idea what the heck you're talking about. So I had other questions I was going to go to, but I'm going to immediately just ask you — what is bioethics?
And how does it connect into this conversation? Will, was that what you were going to ask, or were you just putting bio — you already know. You already know the definition for a reason.
Okay. So, okay, we're good.
No, I know, Josh. I wanted to make sure.
Oh, yeah. You wanted to make sure I knew. Yeah. Okay, that makes sense.
You're quizzing me.
Yes, clearly.
Perfect. All right, let's see if I pass this. I'll take a step back. Medicine as a field did not have a formal structure of ethics for a long time. When folks think about medicine, they think, oh, ethical decisions are part of how doctors are trained. But that's the 1970s, 1980s — that's when we first started formally thinking about what it means to make ethical decisions in medicine. A lot of that has to do with some of the really messed up, exploitative research studies that came to light around that time period. That kicks off the idea that, oh, we need to create a system for making sure our medical spaces are thinking ethically. So they build this whole field — there's an academic discipline that comes with it, hospitals actually have ethics officers, and every single doctor is trained in bioethics.
And it's kind of bizarre to me that we don't do the same for education. This is not an original idea of mine — there's a philosopher who used to be at Harvard and is now at Stanford who basically says we need to bring a formal field of ethics into education too, because we're also making really important decisions on the fly. We ought to have systems in place at our schools for that, and train every single teacher in the background it takes to make those decisions. I've been sold on that idea for a long time — I was exposed to her writing in college. And the edtech part of this just makes it so much more obvious; to me it's a perfect opportunity.
And I say this in the book pretty explicitly — my goal with the book is not necessarily to just talk about edtech ethics. It's like, can we think about ethical decision-making in education as a whole? But I'm hoping I can kind of get it in the door by making it about edtech, because that's like the problem of the hour. Everybody's trying to figure out how to navigate it. And I think that rather than saying, oh, we should rethink how we do grades, or how we do disciplinary policies, or how we do school lunches — the edtech stuff does start the conversation much more easily. And so that's kind of the approach the book takes.
I'm wondering if I want to do more bioethics stuff with y'all really quick —

Yeah, go for it.
We love our rabbit hole ethics.
Just because y'all asked — you opened a rabbit hole. In bioethics, there's one particular strand that's really popular, and its approach is: let's come up with four principles that help us navigate all the kinds of dilemmas doctors might face. The principles are: are we doing something good by doing this thing (beneficence)? Are we preventing harm — the do-no-harm principle (non-maleficence)? Are we respecting the autonomy of the patient? And are we doing this in a way that's fair to our community (justice)? Doctors are trained on this in med school, and it's a good mental heuristic when they're making pretty big decisions on the fly.
We ought to borrow a lot of that for education, and there are so many overlaps. A lot of the decisions we make in education are super context specific. You can't have one right answer for organ donation across the country, and you can't really have one right answer for your AI tutor across the country. It depends on who exactly is in your classroom, what resources you have, what your community looks like — all these factors that are really specific to the individual decision in the moment, and that's what medicine and education have in common. That's why I think borrowing a framework of thinking is much more effective than saying, okay, here's what I think all the answers are. Fixed answers just aren't going to stand the test of time, or really of location — someone will look at them later and say, okay, you said stuff, and it's just not true anymore. So we need to think about how we think about these questions.
The final piece is that I also incorporate a care principle, which I think is unique to education — medicine doesn't necessarily need it. My quick way of illustrating this: if your appendix needs to come out and you really hate your surgeon, your surgeon can still take out your appendix and you'll go home healthier than before, no problem. No matter how much you hate your surgeon, it won't change the outcome. If you really hate your history teacher, you are not going home with the same outcomes. You might struggle that year, and it may well change how you think about history for the rest of your life. The relationship we have with our educators matters much more, because we spend so much more time with them — it's not a one-hour-appointment-once-a-year scenario; you're often in front of your teacher every day. So I think we need to center the relationship as a fifth principle and make sure that every action we take strengthens relationships rather than weakening them. That's the approach the book advocates, borrowing from bioethics.
The principles themselves are fantastic and, of course, not original — but as you were listing each of the four, I was thinking, wouldn't it be great if those principles were being used when considering where to put a data center, or whether we should change the size of our language models, or whatever else. There are so many parts of what we're doing around AI, before we even touch the edtech space, where those principles would significantly help humanity navigate this moment.
Then you layer on the edtech side of it. The goal of edtech — excuse me, the goal of education — is transforming students' lives. That's something Josh and I at least share, and I think it holds for most folks: we're going to make students knowledgeable in something, which should ultimately make them a different or better or greater person. But if we're bringing in implicit philosophies inside a tool or a methodology that we're not aware of — something that's becoming part of our curricula, part of the school experience — it can have a negative effect without us ever intending it. And we see that with issues of bias, issues with hallucinations, issues with, previously, math and accuracy. AI has so many implicit issues that if we just drop it into a school, there may be problems. So taking those principles as a buffer — let's all come up with this manifesto and get these principles in place. And of course, folks should turn to your book and start there; if you've written them down, that sounds like a great place to get the ideas.
And the point you just made — everyone's talking about ethics. You hear it from every single talking head out there, every tech CEO, every principal, every superintendent. But ethics is not just a buzzword. People have spent thousands of years thinking about what it means and how humans ought to approach these questions. And this is a great moment to realize, oh, we actually have lots of writing on how to think about these questions well and effectively. We just need, generally, more ethical reasoning skills — point blank — whatever the problem is. We're here today talking about edtech, but it would help with a lot of things if we just slowed down and spent, you know, three hours reading the first section of my book, maybe.
I feel like that's a great recommendation — just start there, learn more about bioethics, dig into how it connects.
Please read. That's the lesson.
Please read — that's the lesson. But no, as I'm listening to you talk about this, even in this conversation right now, I'm in full agreement that ethics has become an interestingly buzzword-ish thing in the age of AI especially, in a way that we haven't seen in other — it's kind of like pivoting during COVID — but to your point, there's a lot more thought to this. Unprecedented —
Yes, unprecedented. Okay, I didn't mean to say that word. We're all tired of that word.
That hurts. No, I'm just joking. To your point, there's been a lot of thought about this over the years — from philosophy and from plenty of other places — about what ethics looks like. I remember sitting in ethics classes in the different philosophy tracks I took in college, and there are a lot of ways to think about this; a lot of people have done that thinking. And I love it — I didn't know about the bioethics connection. I'm fully with Will: I really love that framework and how it connects to what undergirds what we're doing in education.
But that was my immediate thought as you were talking — and probably, obviously, why you created Ethical EdTech as a book and a direction. Over the years of doing so much edtech integration, whether with other tools or in the conversations around AI, Will and I have talked about this a lot on the podcast: one of my bigger concerns with technology in general is that we don't always ask, okay, we can do this, but should we be doing this? And I'm guessing you've been seeing a lot of that in the AI space — like, okay, we need to set up some ethics for ourselves, because if we don't, we're going to do a bunch of stuff with this tool without ever asking whether we should. And for those listening, this is coming from people who use AI — I know from the last time you were on how much investigation you've done, and all three of us use it. We're not Luddites saying don't touch the tool. So is that where your heart's been coming into this — you're seeing places where maybe we should pause and think before we do this in education?
Yeah, the can-versus-should framing is exactly how I frame it in the introduction. And it came from a bizarre moment. I was in the middle of thinking about all this, and I was presenting in Singapore, and my wife and I went to a bioethics exhibit. In huge letters it said: just because we can doesn't mean we should. And I was like — really simply put, right? That is the message we all need to be thinking about.
And I think that's also the transition from the first book to the second. The first book was, here's everything we can do. Now it's, okay, which of these things should we do? But you're absolutely right. We often get so sold on what is possible that we're not really asking what we actually need, what we ought to do in our schools, what the long-term implications are — because we get caught up in a sense of FOMO. That's very much part of this: folks are scared of missing out, so they make rushed decisions. We need more discipline — to slow down and figure out the actual need before we integrate something into our schools.
And I think what's probably the most frustrating thing to me is that if we took that approach, we would actually end up getting better tools. Because in the rush to build and deploy, some of the stuff that these companies are producing is frankly pretty bad. You have all these ChatGPT wrappers that are being sold into schools for ten times more than they should be. And if we had that ethical framework in place and we had asked the question of, oh gosh, what do we actually need, we would probably not buy some of this stuff. Or we would say, oh, actually we have about 80% of this use case covered by our existing systems — can you build something that covers the other 20%? — instead of throwing everything out and buying some new thing.
So I think that the ethical approach to this is also the fiscal approach. You end up with better outcomes, and if we took our time on the front end to think through what we actually need, we're probably going to spend less money and get something that actually works better. That's one of the things I try to ground a lot of my points in — okay, this is not just ethically the right thing to do, it's also better for the institution, it's better for kids, it's better for everybody. And so I'm hopeful that that's a way to kind of reach folks who might not come to this from a pure ethics perspective. But yeah, just more discipline around how we approach the technology, and making sure that we're asking the right questions up front — I think that's critical at this moment, given the rate at which things are changing.
And now, with teachers feeling like all they're doing is reassessing and reassessing — every new tool that comes out raises new questions about what to do with it in the classroom — that fatigue is very real. A lot of teachers go into the profession because they want to teach kids. So when you're spending weeks or months or years reassessing how to teach given new tools, that takes away from the core mission. Your point about stepping back and thinking through what we actually need is probably really helpful for teachers too, because it gives them permission to say, I don't have to reassess everything every single time something new comes out. I can just be thoughtful about where this actually fits into my practice.
Yeah. And I think the thing I point to most is we should be asking the question of, is this actually solving a problem that we have? If you look at the vast majority of edtech, most of it is solving a problem that no one asked us to solve. A lot of edtech is like, oh, look at this cool thing the technology can do — and we should really think about how to sell it to schools. That's very backwards from how we should be thinking about it, which is, oh gosh, teachers, tell me about the problems that you have, and then we think about, well, how might we solve this problem, and maybe technology is that solution and maybe it's not.
So I think a lot of edtech is frankly unnecessary. And the companies that are building it are betting on the pressure that schools have to keep up with innovation to sell some of this stuff. And I think part of my goal with the book is to kind of give schools permission to say, well, no, we don't actually need this, and we're going to spend our time on the things that we do need. And I think that's going to make educators and students happier, because they're going to be focused on the things that actually matter, rather than having to deal with unnecessary tools. The biggest thing is, we should start from the problem, not from the solution.
Alright, so I appreciate your point about context specificity in ethical decision-making. You talked about how the exhibit you saw in Singapore kind of spurred this on, but I'm wondering: are you seeing any positive models — places where institutions or educators are asking these questions well and trying to make decisions from more of an ethical framework?
Yeah, actually. Within the last year and a half, I've talked to educators from Montessori schools, some charter schools, and some public schools who have all come to a moment where they've really brought students into the conversation. So they're bringing in student voice, and then having a pretty robust conversation about what they're trying to do with technology.
And I think that's really, really good. I've also seen some folks who have gone through a formal process that some schools use — it was designed by some folks at the Berkeley education school. I think it's maybe like 80 years old? I'm not fully sure. But it's a really robust process that has teachers and administrators and parents and students all think through together: What problems are we trying to solve? What are the likely solutions? Once you identify a solution, what are the trade-offs? What are the equity implications? And then, as you implement it, how do we know it's actually working? It's a really robust decision-making process, and whenever schools use it, good things happen — they tend to be a lot more thoughtful and intentional about what they bring in. It's cool to see.
It doesn't happen nearly enough, though. A lot of that is because schools don't have the time or the infrastructure or the expertise they need to go through these processes. So even though I've seen some positive examples, the vast majority of schools are still being swept up in the wave — implementing things because someone in the district made a decision, without that robust conversation. It's heartening to see some good examples, but also a little disheartening that it isn't happening more.
Josh & Will: And I also wonder, if you look at the contracts that these companies are making with school districts — you know, like you mentioned Canvas getting in with OpenAI, or different LMSes kind of bundling these tools — it looks like the infrastructure problem you just mentioned might actually be kind of turned into a feature, right? Because if you have a school district making a big contract and they get all these tools bundled in, now teachers don't have to evaluate whether they need it or not. It's automatically assumed that they need it.
Yeah, absolutely — that's exactly what's happening, and that's where there's a power dynamic that's really frustrating. School districts are sold on the idea that, oh, this is going to be so easy for teachers and kids, and then, boom, it's just pre-installed. Then we see teachers scrambling to figure out, how do I use this thing? That's where the infrastructure problem becomes really acute: suddenly teachers are responsible for figuring out how to use the tool, when they should be asking, does my classroom need this at all? But if it's pre-installed, the question has already been answered for them. That's a really important distinction, and it shows how lopsided the power dynamic is right now between school districts and tech companies.
Another related thing I notice is that a lot of these large deals happen at the district level or the state level, and teachers don't always get a voice in the process. That's a problem, because the people who are actually going to use the tool aren't part of the decision-making — it's a governance problem. Over the years I've noticed that when teachers are involved in the decision-making, the outcomes tend to be better. So that's a really important thing to keep in mind as districts make these decisions.
Josh & Will: Yeah, and I think what you're alluding to is that a lot of these decisions are being made at the administrative level and then teachers are kind of expected to implement them. And I think there's also a logistical problem where teachers just don't have the time to evaluate new tools because they're already doing a lot. And so I think what your book is trying to do is kind of give teachers and administrators an ethical framework and some guidance on how to think through these decisions, so that if they are expected to implement something new, at least they can kind of think through it ethically and ask the right questions about whether it's actually something they should be doing.
Yeah, absolutely. And like I mentioned earlier, the case study section is designed to give teachers and administrators a space to practice that kind of ethical reasoning. The book is structured so that the first section is the philosophical grounding, the second is about infrastructure and policies, and the third is case studies that let people practice the reasoning. The idea is that by the time you get through the book, you're able to think through these scenarios yourself and ask the right questions about which ethical frameworks you want to bring to your decision-making.
Josh & Will: I really appreciate that as a structure for a book like that. And I feel like there are a lot of folks out there who are kind of trying to tackle the ethical AI question and the ethical edtech question. So what would you say are some of the most important things to take away from your book? I mean, we've kind of touched on the four principles or five principles and the structure. But what would you say are kind of the biggest takeaways that folks should take from Ethical EdTech?
Yeah, so I think the first thing is just that ethics is not a buzzword — we should be taking it seriously. And that we as educators and as schools have the power to shape what this looks like. We don't have to just accept the decisions that large tech companies are making for us. That's the biggest thing: we do have power, and we have the ability to shape what our schools look like and how we approach technology and education.
And I think what your book is communicating is that this power is something we should be taking more seriously and thinking about more carefully. So the biggest takeaway is that ethical reasoning is a skill we can develop, and that if we slow down and think through these questions, we're probably going to end up with better outcomes for our schools and for our students. Slow down, think about these questions, and don't just accept the default.
And for folks who are listening, I would say: read Ethical EdTech, think about these frameworks, and think about how you might apply them to the decisions you're making in your own context. That's the biggest thing I would push folks toward.
Josh & Will: Yeah, I really appreciate that. And I think that's a really important message, especially given how quickly things are moving in the edtech space. So before we wrap up, I just want to ask a couple more things. You know, given all the changes that you've seen in the edtech space since the last time you were on the podcast, what would you say to educators who are feeling a little bit overwhelmed by all of these changes and kind of just trying to figure out where to start?
Yeah, so the first thing I would say is that the sense of overwhelm is completely warranted. There's nothing wrong with feeling overwhelmed — a lot of teachers are feeling that way right now, and the changes are happening so fast that it's reasonable. The first step is acknowledging that, and then taking a step back and identifying the core values that matter to you and to your classroom, and thinking about how you want technology to fit into those values. That's the approach the book advocates: slow down and think about these questions from first principles, rather than getting caught up in the whirlwind of changes happening around us.
Josh & Will: And I think there's also kind of a practical element to this too, which is, you know, start small. You don't have to adopt every new tool that comes out. You can just kind of adopt the things that really align with your values and your classroom and leave the rest alone. And I think that's kind of the message that I think a lot of teachers need to hear — that you don't have to adopt everything. And I think that's kind of the approach that the book is advocating for. And I think for educators who are feeling overwhelmed, the book is kind of designed to help you think through these questions and help you make more intentional decisions about what technology you bring into your classroom.
Yeah, this has been really valuable, and I appreciate you breaking down the philosophical foundations and how they apply to the edtech space. Ethical EdTech is a really important book, and I'd recommend it to anyone thinking through ethical questions as they approach technology in education. Thank you so much for coming back on the podcast and sharing your thoughts.
Thanks so much for having me back — I'm thrilled to be here. I hope folks see the value in taking these ethical questions seriously and in slowing down to think through these decisions from first principles. The book is a starting point for that conversation, and if folks engage with it and practice the ethical reasoning, I think we'll end up in a better place as a field. I really enjoyed the conversation.
Josh & Will: Well, Priten, thanks again for being here. And folks, remember, if you'd like more information about Priten or anything else about the podcast, head over to highpod.us.
Follow us at Hi-Tech Pod or Hi-Tech Podcast. Reach out to inbox@hightechpod.us if you have any questions. And of course, go pick up *Ethical EdTech* by Priten Shah. Check it out — it's a really important book for thinking about how we approach technology in education. All right, Priten, thanks again, and we'll catch you on the next episode.