Ethical Ed Tech: How Educators Can Lead on AI & Digital Safety in K-12 Schools
Transcript
Announcer:
Welcome to Principal Center Radio, helping you build capacity for instructional leadership. Here's your host, Director of the Principal Center, Dr. Justin Baeder.
Justin Baeder:
Welcome everyone to Principal Center Radio. I'm your host, Justin Baeder, and I'm honored to welcome back to the program Priten Soundar Shah. Priten Soundar Shah is an educator, philosopher, and entrepreneur working at the intersection of humanistic values and frontier technology. He's the CEO of Pedagogy Ventures and leads three nonprofits: Pedagogy Futures, Academy for Social Civics, and Thinker Analytics. Priten is the author of AI and the Future of Education, which we've talked about previously on Principal Center Radio, and he's the author of the new book, Ethical Ed Tech, which we're here to talk about today. He teaches courses on the ethics of ed tech and epistemic justice at College Unbound, and is a visiting researcher at the Harvard Department of Philosophy. He holds a BA in philosophy from Harvard College and an Ed.M. from the Harvard Graduate School of Education.
Announcer:
And now, our feature presentation.
Justin Baeder:
Priten, welcome back to Principal Center Radio.
Priten Soundar Shah:
Thank you so much for having me back.
Justin Baeder:
Yeah, it's been a couple of years since we've spoken, and since our topic is artificial intelligence, those years feel more like decades. What have you seen happen in the field related to both education and AI since we last spoke, and kind of what prompted you to write this new book?
Priten Soundar Shah:
Yeah, so I think when we initially met, there was a very slow uptake of the technology in general in our schools. And so a lot of my work at that point was, you know, helping teachers see what the technology was capable of, helping them integrate it into their own workflow, and basically build the literacy they needed to navigate this in terms of assessments as well. Since then, there has been a rapid change, both in the technology, but also in terms of how much our educational institutions are embracing it. And a lot of it started to scare me, because it's happening at lightning speed without our slowing down and asking the right questions that we ought to be asking. The book's goal is to slow the conversation down: despite the speed of the technology, take a brief pause, ask what our fundamental values are, and then proceed to figuring out exactly how we want to build the technology into our schools.
Justin Baeder:
I've seen in news headlines recently a lot of districts rolling out a new technology, like an AI chatbot for students, and then realizing, oh, actually, this has some things maybe we should have thought about before rolling it out. So this need to pump the brakes a little bit and be thoughtful is running up against the constant forward motion; it feels like things are just happening so fast. And as a philosopher, as someone who thinks about what we should be doing and what we should be thinking about, what are some of your starting points for thinking about ethical ed tech and AI?
Priten Soundar Shah:
Yeah. So my approach here actually borrows from bioethics. The idea is that a lot of decisions that are ethically very meaningful in education are happening very fast, very locally, and in very context-dependent ways. And so that means that rather than come up with general rules that apply to every single educational institution, we need to empower our educators to make decisions by applying an ethical framework, which is what medicine did, right? They empowered doctors to make those calls on the fly rather than having some statewide or federal policy on every single possible ethical question that comes up in medicine.
What that largely means is that rather than focusing on what the technology can do, what we're being sold, what seems the most flashy and innovative, and what makes us feel not left behind, we recenter on: what kinds of schools do we want to build? What are we looking for in terms of what we want for our students? And what are the long-term implications of these decisions, beyond keeping up with the latest fads? And that's difficult. You've pointed out that with a lot of the actions we're seeing across the country, there is a felt need for speed here, right? There's marketing language that comes at you.
There are funding opportunities that come at you. And you're hearing about it from your students and your peers, and it's appealing; there is an appeal to embracing it. But we're already seeing some of the harms of that. And I think that's where we need to sit back and say, okay, when do we have the right evidence to make these decisions? Do we actually know what the implications of this are going to be? And who are we getting that information from?
I think that's also a very important question right now.
Justin Baeder:
And as much as we would like to say that we are an evidence-based profession, that we always wait for good research before we make a decision, I think you're absolutely right that the sales pitches, the opportunity to do something new and shiny, usually persuade us to run ahead of the evidence. And we've seen that come up with curriculum and instruction and all kinds of things that have turned out not to be successful, positive changes, and that have taken us in the wrong direction. And probably the pressure to do that when it comes to anything with technology is stronger, and the rate of change is faster. Do any particular examples come to mind from recent years for you with educational technology?
Priten Soundar Shah:
Gosh, there are many. Just this week, we saw a district in California make headlines because students were using a school-sponsored device to access an Adobe image creation tool, and a fourth grader went home and was told that they had the option of either illustrating a book cover themselves or using the image creator. And when they prompted the image creator, it created a very hyper-sexualized image of the character that the student wanted. And that's a great example, because I think when folks are thinking about the use of the technology, this was done according to best practices, right? It's a school-issued tool.
It's not like the student was independently finding a tool on their own. The student was given the option of using their own creative energy, but some students don't feel like they want to draw, and so it's nice to give them a different creative outlet. And so it's not the most unreasonable thing to use the tool, but in terms of safety considerations, we still just don't know everything about what guardrails work and what don't.
And especially at that age, the risk of exposure probably outweighs any benefit that we would gain.
Justin Baeder:
Yeah. And certainly an understandable reaction to any particular incident is, hey, maybe we shouldn't be using any technology at all with young kids. Maybe we should go back to paper. What's your take on that?
Priten Soundar Shah:
Yeah. So I think that there are two extremes right now in the conversation, and I think both of those are dangerous. While I end up being a very cautionary voice right now, given that there are a lot of loud voices talking about the positives of the technology, I do think that the right response is not to fully withdraw from the technology. And there are lots of reasons for that. Primarily, there are lots of benefits to be gained that actually solve problems we've been trying to solve for a long time. And so there are use cases where it meets our longstanding goals and isn't just flashy or new.
And that requires slowing down, right? It's asking: did we have a pre-existing problem before this technology sales pitch came at us? And does it solve that problem? Or am I being told there's a problem where there really isn't one, because they're trying to sell me a product? And, you know, we fall for that well outside of ed tech procurement. That's a marketing force that works very well when you're being sold any device.
I think Steve Jobs very explicitly said that his sales tactic was to convince the user that they had a need, because the user did not know what their needs were. I think this is a context in which that doesn't apply, right? Our teachers and our schools already know what problems exist, what their biggest weaknesses are, and what their biggest wishes are for reaching their pedagogical goals. Approaching technology procurement from "here's a problem I wish I could solve; which tools might solve it?" is step one. But then step two is what we were saying earlier: is there actually evidence that this both solves that problem and doesn't create new problems for us? And that's where I think the AI conversation needs to slow down, because we don't have evidence on either end of that. We don't fully know the risks, and we don't fully know that the benefits that are being promised are actually possible.
Justin Baeder:
Yeah. And I think the risks are the part that catches us off guard so often because we can anticipate, okay, we might use this and it might not work that well. We can't anticipate the scenario specifically that you described earlier with an AI photo generation tool returning something that's inappropriate. Those kinds of specific things are hard to anticipate just by their very nature. And of course, we feel bad when other people learn the hard way that those risks exist. But hopefully we're learning from each other and from the headlines to avoid some of our own missteps.
I wonder if we could get into some of the big principles, the big concepts that serve as the foundation of your thinking. Are these universal? Do people develop these locally? What do those kind of major principles look like?
Priten Soundar Shah:
Yeah, so I think there are three sets of ethical considerations that come up when we're thinking about these decisions in the educational context. The most obvious one is: who are all the different stakeholders? And I think this is where education in particular is really hard, because it's not just a doctor-patient relationship where you have one person you're responsible for in that ethical decision. You're now thinking about the individual student, your group of students, the impact on the local community, and the long-term impact on our society. You're thinking about parents, what rights they have, and what say they have in this. And of course administrators and policymakers. That's a lot of people to juggle, but we do have to make sure that we're taking all those perspectives into consideration.
And I have news anecdotes for all of this, so I'll spare you most of them. But there's a great example right now, also in California, unfortunately, of a district where there's supposed to be a public comment period, there's supposed to be a board vote, there's supposed to be parent input, and the school district rushed to sign a contract with OpenAI and then redacted all of the terms of the contract when they released it, which is not normal practice for the district. Some of these things make the headlines; others don't.
And this is not a unique, isolated case. Things like this happen across the country because we don't really have the systems in place to make ethical decision-making in our schools a systematic thing. So "who" is the first part. The second part, and I think this is going to be really important as AI continues to develop, is: what are our goals for our students, right? There are obviously three that everybody talks about. Some folks think it's about career readiness and economic opportunity.
Some folks think it's about civics and making sure that we are building an educated populace for our democracy. And then other folks talk about the intrinsic benefit of education for flourishing, for students to be happy and build meaningful lives. And I think those three are probably all important. When you're talking about a diverse society, we have to understand that all of those will be goals. We can't say, okay, because I believe it should all be about civics, we're going to reframe our ethical decision-making to talk only about civics.
And that's where we have to say, okay, there's a lot of rhetoric coming at us about career preparedness, but are we losing sight of those other two goals? And how do we balance all three? So question two is: how are we balancing the various goals that education serves, rather than focusing on just one that currently seems to be getting a lot of attention? The final piece of this is those principles, which I think make a good little mental checklist for asking the right questions about the technology. And I'll quickly go through them. The first is: are we actually doing something good by using the technology?
Are we avoiding harm? This is the do no harm principle of medicine. Are we respecting the agency of both the students and the parents? Are we doing this in a way that's fair? Are the benefits accruing only for a certain population or are they being spread out? Are the harms being concentrated in a certain population or are they being distributed in a way that's fair?
And then the final component of this, which I think is the most important, is the idea of relationships, right? Another unique thing about education is that a lot of the benefits we get in education are contingent upon good relationships. The example I like to give is: if you hate your surgeon, they can still take your appendix out, no problem; that's not going to affect the outcome. But if you hate your teacher, it does change your entire experience in the classroom, right? You don't gain the same benefits the way you do with your surgeon. And so the importance of that relationship is something we also need to keep in mind.
Justin Baeder:
And I think when it comes to the distribution of benefits and harms, that's something where we think about ourselves, we think about people with a good deal of privilege, and we think about the potential upsides of a technology. But one of the things that I think has caught a lot of educators off guard is just the potential for access to a device to be a net negative. We think about access in general as being a net positive. But I think a lot of districts have found out that putting a device in kids' hands at all times, sending a device home with kids, depending on how kids are spending their time, the level of supervision they have at home, and what they're interested in, can be a net negative. And I think there have been some new findings on how much students are accessing sites that maybe they shouldn't be, or at least games; spending their time on things that are not helpful.
And there is this kind of naive assumption that if you just give kids technology, they will do productive things with it. They will benefit in terms of their learning. And maybe we think, you know, when I was a kid, I just looked up, you know, advanced math on the internet and that was great for me. So shouldn't all kids have that? Well, that's not necessarily what everybody is going to do. There's much more potential for harm that's out there.
So let's talk about the agency piece, because I feel like technology in general is a pro-agency investment, right? The idea that students don't have to be passive. They can be active. They can pursue things. They can make things. And technology can help with that.
How do you think about agency for both students and parents? Because I'm glad you mentioned both.
Priten Soundar Shah:
Yeah. So I think the agency of parents is the easier one to start with. And I think this is where we're starting to see some parent advocacy groups come out and advocate for particular policies in terms of what they want to be informed about: when technology is and isn't used in the school, what the risks are, what data is being stored, what guardrails are up. And those conversations ought to be step one. We ought to be sharing that information not when parents ask for it, but proactively: telling them which new technologies we will be introducing and when, here are the risks that we've considered, and why the benefits outweigh them.
And I think that would avoid a lot of the backlash we're often seeing in communities where parents are caught off guard by not knowing how and when these technologies are being used. And so that's part of respecting parents' agency: making sure that they're fully informed and that you're gaining informed consent from them. In terms of students, that agency comes up in multiple different ways. And I think you're right that a lot of the framing around technology is that it increases access to information, and that is a form of agency. But I also think there are considerations when you ask: do our students want to use the technology? That's also an aspect of agency. Especially when you're looking at high schoolers and college students, you hear students who have their own qualms about the technology.
They are worried about the environmental damage. They're worried about their data being in there. They're worried about the cognitive offloading. And when they show up to school and they're told, hey, everybody's going to be using this tool, are we giving them the option to opt out? And how are we managing that? Because just saying you ought to give them the option to opt out is very easy.
But in practice, in terms of classroom management, that's difficult when you have, say, five students who are not going to use a tool and everybody else is using it. And so figuring out what that means for how we implement the tools, and what our opt-out rights for students are, that's another part of this. The final piece is that as much as the technology can be agency-promoting, there are also lots of ways, especially with AI, that it can take away agency from our students. A lot of this has to do with cognitive offloading: when they're making fewer of the decisions, when they're learning less, that is reducing their agency long term. And the second component of this is that a lot of the personalized learning, the algorithms, the risk assessments, all of that is built on large assumptions about who our students are, and that also takes away some of their individuality. And so as much as personalized learning is framed as something individualized, it's also very much making extracted, high-level assumptions about who our students are in order to quote-unquote personalize that learning, and that takes away some agency as well.
Justin Baeder:
There's certainly the potential for situations like, well, we've personalized the recommendations for you and therefore we're now giving you below grade level material. Like we're essentially lowering expectations in the name of personalizing in some cases. And again, I think the intentions can be good, but we have to look at those questions that you mentioned. Are we actually doing something good? Are we avoiding harm? You know, is this fair and beneficial to everyone?
Yeah. And I certainly appreciate your earlier comment about just slowing down and taking the time to think. I also want to acknowledge the role, necessary I suppose, but perhaps outsized, that sales has in this. Because certainly in most cases, there is a vendor relationship. There is sales pressure. People are calling schools and saying, hey, you should buy our thing.
Here's what's going to happen if you do. Here's what's going to happen if you don't. I mean, there really is economic pressure behind that sales process. How should educators think about that? Say I get a phone call about some really impressive-sounding technology. And I, as a school leader, know I have problems that I need to solve.
My students have opportunities that they are not receiving. There are real problems that we could solve here. So it's not like I'm just being silly and taking that call. How do you think about the educator's situation, like receiving that phone call and having problems to solve and connecting those dots?
Priten Soundar Shah:
Yeah. And I think you're right that a lot of this is sales driven. And we see that with the large contracts that all of the major AI companies are signing with universities. Those are all initiated by the AI companies predominantly. They go out and start to build those partnerships and sell them on that vision. I think that the right approach to this is first building that literacy.
And I think this is where there's a lot of work to be done. If you don't know how the technology works or what its potential risks are, that alone makes it hard for you to ask the right questions when a vendor approaches you. And so the goal isn't to shut ourselves off from this and not think about the technology at all, or to say, oh, I'm going to wait until we have all the information. I think we need to be seeking that information now. What is the technology capable of doing right now?
What are the real risks? What are the possible benefits of using this in the right way? And then: what evidence do these vendors actually have? I think oftentimes, again, we look for quick heuristics here. We might see a statistic in the email. We might see a quick anecdote or case study from a local district. But we should really push back and ask: is there evidence in my context?
Does it take into account my school population, my school size, the other resources I have, the other programs and curriculum I have going on? And then seeing, does that evidence still hold up? Are the kinds of benefits they're talking about, are they going to apply in your context? Do they have evidence that it works in your context? Or are they very much individualized to a very different school setting?
Justin Baeder:
Well, I think that's a good segue into the question of policy because, you know, often there are procurement policies and there are technology use policies, but we don't necessarily have a policy environment for tool adoption or for making purchases that is up to date with the state of the art. So how do you think about policy issues that can prevent districts from getting in the kind of hot water we've seen recently in the media where a tool has to be rolled back, there's some sort of scandal? How do we think about that from a policy perspective?
Priten Soundar Shah:
So this is where I think the second piece is building out policies that center our ethical concerns. Once we understand what risks we're worried about, we might be worried about the black-box effect of why these decisions are being made, we might be worried about third-party vendors getting access to the data, we might be worried about prediction algorithms and what that means for bias and how it's implemented, whatever that might be. Coming up with that list of concerns and integrating it into the vendor vetting process is important, because oftentimes our vendor vetting process is very much contingent on what state, district, or federal policy already requires. So it'll ask: is it meeting FERPA requirements?
Is it retaining data for this many months? Is there a way for me to delete the data? Those are all important considerations. But I think when we view federal and state regulation as the end-all be-all, that's when we get into more trouble, because at the end of the day, those are a baseline. That's a starting place.
Those should all be taken into account, but there are a lot more questions that we need to be asking of the tech vendors in particular before adopting. And I think having all that in writing as part of the process is important. The other thing is evaluation moments. Some of these contracts are three- or five-year contracts. And I know that's normal in education, and in large procurement in general, because when you're building and investing in something, you want to make sure that it's affordable long term, but also that there is a commitment on both ends. I think this is a case where the technology is moving so quickly, and we don't have enough evidence, to be making a commitment for that long.
I think we need to be making much smaller interventions: start with pilot programs, and have an evaluation moment of whether this worked in my school in this very small context before I open it up to every single class or every single teacher in the building. All of that needs to change how we make some of our procurement decisions in terms of the technology. It's going to require a much more experimental mindset that allows us to be agile and iterate much more quickly than I think we can right now.
Justin Baeder:
I wonder if I could run a couple of issues by you for a case that I've been reading a lot about lately, and that is iReady. And I don't have any particular ties to iReady; I don't know anybody who works with them directly, but I know that it's in a ton of schools. And what I have seen recently is some concern about the amount of instructional time being devoted to doing these exercises on computers. In some cases, it's like half, or 40%, of instructional time is now spent on iReady, and parents are asking questions about, how do I opt my kid out? Kids are saying, I had to do this when I was in school, or, I'm currently in school and I have to do this and I hate it. There are a lot of potential issues here where a big company that is very good at selling is essentially getting its way. And we as educators have to think, wait, is this how we want things to be? Is this something we want to use in our schools? Is this how we want decisions to be made?
What are some of the issues that we've talked about today, or that you talk about in the book, that come to bear on that kind of scenario, where there's starting to be backlash from parents and students and educators saying, wait a minute? We're in pretty deep with this particular tool, and I'm not sure we should be.
Priten Soundar Shah:
First, I think that the case studies are great. A third of my book is devoted to real-life scenarios that have happened, because I think that really allows us to exercise those muscles. And so I'm just happy that we're getting a chance to do that live on the show. I think you're right that this brings up a lot of the different things that we talked about today. And in some ways, this conversation ought to have happened before the tool was adopted in schools. If we were asking whether parents were consulted, if students and parents were brought in on a task force or committee before decisions were made about a drastic change in pedagogical strategy, some of this would have been preempted.
I think if we were also considering those five principles that I talked about earlier: what are the potential harms of this technology, and are we centering our relationships? And that relationships part is very important when we think about the move to increase screen time in our classrooms, because there is something appealing about "this will raise your math scores by 10%, 15%," right? We have real reasons to be concerned about math literacy and reading literacy in our schools, and tools that promise us benefits there have real appeal. But at the same time, we need to also think about other metrics that matter and other aspects of our students' development that matter. So what does that mean for our students' social-emotional development?
If they're spending that much screen time, they're not getting to interact with their peers. Maybe they're not struggling as much, or even just exercising patience while waiting for the teacher to come to them and answer their question. There are all these small things that are important for human development that I think we oftentimes don't realize are happening in a school building, but they're all actually happening, right? It's all part of how we learn to be in a social setting, how we learn to exercise and control our emotions, and that does require a level of interaction with other humans. And I think the more screen time we give our students, the less of that they're getting. The fact that students themselves are noticing it is important, and a clue for us that maybe we should step back a little bit and say, how important is that 10% increase in math scores if it means my students don't actually end up feeling connected to the school? What does that do for our dropout rates long-term, right?
There are larger, longer-term implications beyond this year's standardized test scores. And again, there are factors well beyond an individual teacher's control that create the focus on standardized testing and make all these sales pitches appealing. But I think as a community, we all need to be asking these questions, so that none of us feels alone in this and says, okay, I'm going to be the only one who doesn't care about standardized math scores. That's just not realistic.
Justin Baeder:
Yeah. To throw one more current event into the mix here, I saw recently that Australia is debating whether to move standardized testing online, or perhaps they've already announced that they are. And I've seen some really thoughtful discussion that echoes much of what you've been talking about, with regard to whether that's even a good idea in the first place because of the time it puts students on devices. If the standardized test that schools are held accountable for is given on a device, does that pressure schools to spend more time on devices in general, even if that's not a good thing? And I think it's a very good thing that people are actually thinking about that and saying, hey, wait a minute, should we pump the brakes on this because it's going to have these perhaps unintended consequences?
An unintended consequence does not have to be an unanticipated consequence if we're willing to slow down a little bit. Thoughts on that?
Priten Soundar Shah:
Yeah. I think thinking about that difference between the unanticipated and the unintended is very important when it comes to education technology, so I think that distinction is great. And this is a worry of mine in terms of where we'll move in general with education, because there are, again, lots of good efficiency reasons for moving assessments online. There might also be access reasons.
So when you think about some of the larger standardized tests that now happen online, there are good reasons why the SAT being online, or the LSAT being online, has made them a lot more accessible. If you don't have a testing center near you, you no longer have to drive three hours and stay in a hotel room if you're in a rural area, right? There are real accessibility gains to using the technology in some of these contexts. And it's much easier to build accessibility tools into the testing environment, right? Screen readers, all of that can make the test much more accessible.
And so those are real considerations for why we might want to think about the actual benefits of the technology. But here I don't know if the tool is the problem, or if we have a bigger problem, which is that when we have a standardized test, all of our pedagogical strategy and norms become centered on that test. And so there is a real fear that if the test is online, I imagine most schools will do a lot of prep work under what are now "normal testing conditions," meaning on devices. And I think that's a real concern and something we ought to think about. But the problem might not just be that the test is online. The problem might be that we care so much about the test that we're willing to redo our entire strategy around it.
And whether it's that the test is not testing the right things, or that we're not using the right approach, whatever it is, there is a bigger problem here than just that the test is online. In the meantime, that probably means we pump the brakes until we can figure out those larger questions, to avoid some of those harms. Because we, again, want to make sure that we're minimizing harms in the short term while we solve these bigger problems, so we can gain the benefits that we want to gain.
Justin Baeder:
You touched on an issue there that I think is really crucial: that as educators, we have a responsibility not to just allow these things to happen, not to just treat them as inevitable. And I was thinking about No Child Left Behind. People are very well aware of the narrowing of the curriculum to just reading and math that happened as a result of No Child Left Behind. Those are the areas that were tested. And as a result, especially at the elementary level, science and social studies largely disappeared from the elementary curriculum because they weren't tested. But I think it's important to recognize that No Child Left Behind did not make us do that.
There was no requirement at all that schools cut back on science or social studies instruction. That was just not in the law. It was an unintended consequence, but it was one where we had the opportunity, which we largely missed, to stop and say, wait a minute, we're not going to cut back on science and social studies. We don't have to do that. In fact, we might be shooting ourselves in the foot by doing that, because it's going to deprive students of the content they need, the background knowledge, the vocabulary they need to actually do well in reading and math, because we're cutting out something that's always been essential.
I just think there's so much to think about there when it comes to our responsibility as educators to be thoughtful and not to just be consumers of a product that's sold to us or not just pursue one goal without thinking about the whole picture. So I appreciate that you're doing this work. You're getting educators thinking and getting educators pausing and deliberating and consulting stakeholders. Any other big pieces in the book that you want to bring to the table before we close?
Priten Soundar Shah:
The last piece of this is just bringing relationships back into the conversation. And I think one takeaway can be that when we're making these decisions, we ought to really bring that back into the conversation, especially in conversations about AI, because I think we're seeing challenges to our humanity, right? This is partly going to be a question of what role our humanity plays in our school systems and in larger society. And that requires us to realize the importance of our relationships and really think about that for a second. And when you even think about the prioritization of reading and math, we lost something not just in terms of other disciplines, but also within those disciplines themselves, right?
It became very hyper-focused on those tests. And so the consequences are so far-reaching when we don't get a chance to slow down and say, what am I losing by making this change? The focus on gains is important and appealing, but oftentimes there's a lot we're accepting in terms of losses.
And I think if relationships end up being that loss in this next phase, we'll all be looking back and saying, oh, we made some wrong decisions. If folks want to find the book, they can find it at ethicaledtech.org. And they can find me and my podcast and my newsletter at priten.org.
Justin Baeder:
So the book is Ethical Ed Tech: How Educators Can Lead on AI and Digital Safety in K-12 Schools. Priten, thank you so much for joining me on Principal Center Radio. It's been a pleasure.
Priten Soundar Shah:
Thank you for having me.