Ethical Ed Tech book cover
New Book · Coming Soon

It’s Time to Put Ethics at the Center of Ed Tech

A practical guide for K–12 educators navigating AI and digital safety.

Podcast Appearances

Ethics and AI

EdTech Startup Show · Scott Schuette

April 16, 2025

ethics

Related Projects

  • Ethical Ed Tech: How Educators Can Lead on AI & Digital Safety in K-12

Transcript

Scott: Hey, you're listening to the Ed Tech Startup Showcase, an original series produced by the Be Podcast Network. Hi everyone, and thanks for joining us again. My name is Scott Schuette, co-host of the Fabulous Learning Nerds and your host for today. I'll be taking you through the stories of some wonderful emerging companies in ed tech. In this episode, we're going to check back in with our good friends from pedagogy.cloud and pedagogy.ai, learn how they're doing, how things have changed, and where they're heading in the future. And with us today is one of the visionaries behind pedagogy.cloud and pedagogy.ai, my friend, Priten Shah. Priten, welcome back to the show.

Priten Shah: Thank you for having me back.

Scott: Great. Hey, so it's been a minute, right? So let's get caught up, especially for those in our audience who maybe skipped the first episode. What have you been working on, and what's new in the last six months?

Priten Shah: Yeah. So when we initially started working in the AI space, a lot of our work was in teacher professional development. We were building tools for teachers, doing in-school PDs, building online courses, providing resources, prompt libraries, all those kinds of things. And the Chrome extension that gives you a walkthrough of ChatGPT from an educator's perspective. Everything and anything you might think about when it comes to how a teacher thinks about AI in the classroom. And we still do a lot of that work, and it's great. We find that, while there's a very small niche of us keeping up with what AI is and all the new developments, that's not true universally. So we've been going around the world, doing conferences, speaking to educators about what exactly is happening, how they can take advantage of it, and what it means for their students. But we're also starting to think about the big questions.
The technology is developing rapidly, more rapidly than I thought possible, even though I was seeing the evidence; in hindsight, I'm like, okay, of course it's moving this quickly. And I think it's going to bring bigger questions for how we think about education, technology, our schools, our relationship with our students. So we're trying to take a couple of steps back and ask: what's the five-year plan for educators, for students, and for us? What does that look like? Because learning what ChatGPT is, learning how to use it, learning how to use the newest AI tool, that's cool and folks need to do that. But that's not what we're going to be doing in five years.

Scott: Mm-hmm. Help me understand some of the ethical questions that you're pondering and maybe dealing with right now.

Priten Shah: Yeah, so we're trying to start by thinking about how we even approach ethical questions when it comes to classrooms and technology. This isn't a simple "is this ethical or not?" There are some decisions in life we get to make like that. Is murdering your neighbor ethical or not? Well, even that can get ambiguous, there are some philosophies out there, but in most instances you'd say, okay, it's not that complicated: don't go murder your neighbor. But it's not as simple when we ask: do we ban cell phones in schools? Do we provide AI devices to certain students and not others? Which AI device do we subscribe to? Which AI tools do we use? What risks are we willing to tolerate? What benefits do we really want? Can we get those benefits elsewhere? There's a lot more nuance to the ethical questions we're asking when we say, what is ethical AI in a classroom? So a lot of our work these days is thinking about those questions.
We're also making it very clear that we don't have the answers. I don't think anyone is going to come down and say, here is how you use tech ethically in a classroom, because, and every educator listening knows this, it's so context-dependent. What we think is more important is that folks learn how to think about these questions ethically: build the tool set to ask the right questions, think about how to even know what is ethical and what is not. We throw those words around really easily, but when it comes down to decision-making, we don't really have a structured apparatus inside of education to say, okay, this is an ethical decision for our school system, this is not. A lot of it comes down to our gut. And we're trying to make that a bit more robust.

Scott: How do you help educators think about those things? Because I imagine they need to start thinking about it, but if I were one, I wouldn't want to, and it wouldn't be on my radar.

Priten Shah: Yeah. So you're right, most folks want an answer, first of all. They're hoping to get one from their principal, their district administrator, state guidance, or federal guidance. And that's what I'm hoping we don't get. I'm hoping we don't get some top-down federal guidance that says this is what's allowed in our schools and what's not. As much as I think there are things we could be doing at the federal level, that's one of those things I hope we leave to our individual classrooms. And that means teachers have to do a little bit of hard work figuring out where this technology fits, whether it's AI, which is the primary focus these days, or future technology or technology that already exists, and how am I going to use it in a way that's productive for my students?
And so the way we approach this is by drawing on some of the scholarly work that exists in the area. We draw on work from Meira Levinson at the Harvard Graduate School of Education, who talks about thinking structurally about ethics and providing teachers with scaffolded ways to approach ethical questions in education at large. She focuses on things like detention and promotion policies: when do we decide to penalize a student for not having enough money for school lunch? Those kinds of larger questions that come up, that we don't really have systematic ways of thinking about. One of the arguments she makes is that we might learn something from bioethics. Healthcare is an industry that only recently structured how it thinks about ethics; the field only recently emerged, and ethicists only recently became official positions at hospitals and medical institutions. The idea is that there are larger ethical questions that need to be asked, that folks need to sit down and think about at the institutional level and make decisions about. And just as with bioethics, the answer isn't one-and-done. You don't say, okay, this is exactly how organ donation policies are going to work across the country; you have individual people within the institutions making those decisions. So we're trying to apply some of that work to the tech world. We're helping teachers think about what questions to ask, what factors to weigh, in order to decide whether or not something is ethical. Drawing from bioethics, we talk about four different factors: does it do any harm? What are the benefits? Does it reduce anyone's autonomy? And, at the end of the day, is it fair?
So those are the four factors bioethicists use to determine whether or not something is ethical and whether it's the right decision. And they're really easy for somebody to sit down with, grasp quickly, and reapply, which is why I love borrowing them for education as well. I'll give an example, because right now I'm talking like a philosopher, but we do a lot more than just philosophy. You have the question: do we use this AI tutor in our classroom? You might sit down and say, okay, here's a potential benefit: students are actually getting better test scores when they sit down with AI in the classroom. Maybe. The research on all of this is all over the place right now, and we probably won't have conclusive research for a bit. But let's pretend the evidence says conclusively that it helps students' test scores. You might also figure out that it hurts their social-emotional development. So now you're weighing two different factors. Is my goal to do no harm, the way a doctor's is, and not introduce a tool that might hurt my students? Is the benefit of their test scores going up worth something? And then, does that happen equally? Does every student's test score go up, or only some students'? That's the fairness question. Are we seeing benefits apply only to certain students, and how are we answering that? If we see our highest-achieving students do really well and do better on their tests after an AI tool is introduced, what does that mean for how we treat the students who aren't doing as well? And the final question is autonomy: do our students want to use AI? That's a fun question we're seeing a lot more of. While we can all sit here and say, okay, this is what's best for students and we've got to do it, we're getting a lot of feedback from teachers that students are tired of hearing about AI.
They've decided that it's not the right way for them to approach their work, and they want some ownership over their work, which is quite surprising. There is, of course, the cohort of students who is embracing it and finding ways to use it productively. But we're also hearing that students are coming to class and saying, can we not use technology? This is where we don't want to do this. So that's a question we have to think about when we're making policies about what we do in our schools.

Scott: That blows my mind. That's actually counterintuitive, because most of the kids I know are on their phones all day long, right? When I want to talk to them, they're on their phone; I have to pry it away, right? So they're, I hate to say it, addicted to their technology, but in the classroom they don't want the same technology they would use at home. That's interesting.

Priten Shah: No, that's right. Look, we're not seeing this in 90% of students globally, but we're starting to hear anecdotal evidence that students are saying, we're spending way too much time on our technology. This is some of the only human interaction we're having. Can we not just sit at our computers all day and plug away? And maybe that won't be universal. Maybe we'll still have folks who want to sit on their phones, and there is real evidence of addictive habits among students. But if we show them what the alternative might look like and help them enjoy the process of being in a classroom with humans, with their phones disconnected, that might allow them to start making those decisions themselves and advocate for themselves and say, you know what, I spend enough time on my device outside of school.
I find myself learning better when I'm doing something with a peer, when I'm hearing from a human teacher, when I'm interacting in a group setting. But the problem is, I think a lot of folks are rapidly embracing the technology and bringing it into the classroom because they see their students addicted to their cell phones, scrolling on TikTok, and they say, okay, we just need to bring the tech into the classroom as well. And that's what I'm worried about. Maybe instead we can say: let's keep this a space that allows students to experiment with what a world that isn't fully tech-enabled means, and see if they like that. Because I'm at least hearing anecdotes that some students like it, and I'm hoping that as more students get to explore that and more teachers try it, we hear more of that.

Scott: Yeah, that's really interesting. So one of the things, from an adult learning perspective, is we're experimenting in a world where we allow them to use their phones, because we know they're going to have them anyway. And one of the nice advantages of that is I can actually get data from an engagement perspective, right? As I'm following along with whatever you're doing, I can, just like I would on Instagram, give you a nice little thumbs-up or a nice little heart. And as somebody who creates tools for adults to learn,

Priten Shah: Yeah.

Scott: that's really powerful data for us, right? So are we creating an environment where people are actually going to engage? I wonder if that's part of it, you know, just the tools. We don't see the same kind of engagement when they're sitting on a laptop; the engagement goes up exponentially when we're dealing with a device that is a little more personal. Let's put it bluntly: if you and I were going to have a conversation and I was going to share some things with you, I'd want to use my phone to do that, not my laptop.
So maybe it's that interpersonal thing. There's something that hasn't been invented yet, that I'm sure you would invent, right? Something that kind of brings those two worlds together, because I feel like that's where this needs to be.

Priten Shah: Right. Yeah, we agree, and we are working on stuff in that space right now, because I think that is where we need to be. We need to figure out the alternative. A world in which every student is at their individual computer doing their own thing is a world that no one wants, right? No educator out there thinks that's what the classroom ought to be, no education policymaker thinks that's a great idea, and I don't think most students would say that makes them excited about going to school. But we also can't avoid technology completely, and there are advantages to it. And so this is where the ethical question comes in. Where is the right balance of how much data we can collect from students to do better for them, to get more evidence about their engagement, to get more evidence about where they are in their learning journey, without harming them, without exposing them to later data leaks, without exposing them to misinformation campaigns because some random AI tool now has data about their learning process? Those are questions we can ask. But there are also questions about how we use this technology to make their human interactions better.