Ethical Ed Tech book cover
New Book · Coming Soon

It’s Time to Put Ethics at the Center of Ed Tech

A practical guide for K–12 educators navigating AI and digital safety.

Podcast Appearances

Best of The Authority: AI and the Future of Education with Priten Shah

The Authority Podcast (Best Of Replay) · Ross Romano

March 21, 2024

Listen
AI-literacy

Related Projects

  • AI & The Future of Education: Teaching in the Age of Artificial Intelligence

Transcript

Ross Romano: AI, artificial intelligence. Sometimes it can seem like all anyone talks about nowadays, but if you've been listening to the Summer of AI series on Transformative Principal, for example, you know how important it is that we really do discuss AI thoughtfully and understand what it means for the future of education, for our students, for the economy, and for everything happening in our schools as we prepare students for that future. So today I'm really pleased to bring you a guest who has much to say on this topic, and he has a brand new book out on it. Priten Shah is the CEO of Pedagogy.Cloud, which provides innovative technology solutions to help educators navigate global challenges in a rapidly evolving world. He's also the founder of the civics education nonprofit United for Social Change, and he's the author of the book we're discussing today, AI and the Future of Education: Teaching in the Age of Artificial Intelligence. Priten, welcome to the show.

Priten Shah: Thanks, Ross. Thanks for having me.

Ross Romano: So, you know, among teachers in particular, but also for people in a lot of different lines of work and parts of society, there's a lot of rumor, confusion, and fear around the rise of AI. Certainly over the past year or so, it seems like all of a sudden it's everywhere. And even though the concepts of artificial intelligence and machine learning have been around for quite a while, it's become inescapable, and I think the more inescapable something is, the more some people want to escape it, right? So I thought maybe what we should start with, just to set the context for our conversation, is the baseline
definitions of AI and machine learning: understanding, just functionally, what does it mean? What is the technology? How is something designated as being in that bucket? That way it doesn't feel like it's everything; we have a good understanding of it, and then we can really talk about it more thoughtfully.

Priten Shah: Yeah, that's a great place to start. The key place to start is thinking about AI in general. The way we like to explain it is that it's the mimicking of human-like thinking and processing by a computer, and that seems to be an easy definition for most folks to start with. Now, this covers a whole range of things. Our intelligence as humans comprises a lot of different tasks. We can do something as simple as process auditory input when we're listening to each other talk. We can see each other's faces and facial expressions and recognize that there's a lamp in the background. And we can also do some thinking, processing, and synthesizing of the information that's fed to us. Artificial intelligence is the attempt to get computers to do all of that. There are subsets of it that technology has already handled in the last couple of decades: our Siri systems can artificially output voice the way a human might. Those aren't really great at actually processing the input and coming up with original thought, but the ability to mimic human voice and words is really good; that's the intelligent task they're trained on. Now, none of that is what's causing the uproar. No one's freaking out in schools because of Siri. Folks are freaking out because of a particular new implementation of this, which is the generative AI landscape.
And generative AI, it's kind of in the name there, is artificial intelligence that actually generates output. While a lot of these other AI systems are great at regurgitating and classifying, the new technology that's coming out is much better at creating original output. The way it does this, and I'm going to make this as simple as possible so we can get to the core of the impact, is that it takes a large data set. For example, OpenAI's ChatGPT has taken a large data set of text, and then it tries to figure out what's going on in that data set. It looks for patterns, for things it can recognize as hallmarks of different types of data within it, so that it can replicate those things. It might recognize that one word always shows up in this particular context and another word always shows up in that particular context, and then it starts to build bigger and bigger understandings of human language based on that. It uses that understanding to generate new text or new images, depending on the data it's been trained on. And that's where it starts to sound more human-like: it's not copying and pasting something, it's not giving you a list of ten links the way even some search engines do; it's producing brand-new original output that caters to what you put into it. Maybe I'll stop there for a second, but there's a lot more to dig into.

Ross Romano: Yeah. And when you talk about generative AI and the enormity of that data set and the ways in which it's used, I think that ties into the fact that there are a couple of different ways to group those who may be feeling fear around the rise of AI. There may be some who haven't engaged with it much at all and are just kind of hoping it goes away.
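The pattern-learning-and-generation process described above can be sketched in miniature. The toy model below is not how ChatGPT actually works (real systems use large neural networks trained on enormous corpora); it is a deliberately simple bigram model over a made-up ten-word corpus that only captures the core idea Priten describes: record which words follow which, then generate new text from those observed patterns.

```python
# Toy bigram "language model": record which word follows which in a tiny
# corpus, then generate new text by sampling from those observed patterns.
# The corpus and function names here are illustrative, not from any real system.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat ran".split()

# "Training": for each word, collect every word observed right after it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Produce new text by repeatedly picking a word seen after the current one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # dead end: no word was ever observed after this one
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("the"))
```

Even this crude version shows why the output is "original" rather than copied: the generated sequence need not appear anywhere in the training text, yet every transition in it was learned from that text, which is also why biases in the data reappear in the output.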
But then there are a lot of folks, including the creators of the technologies themselves, who are examining various ethical concerns and trying to think about, okay, as these technologies evolve, we understand that they're going to become a part of our daily reality and they're not going to just disappear. But how does that impact ethics in a variety of areas? When we're dealing with students, of course, we have even more concerns around their privacy, their data, and the ways in which they engage with technology. What are some of the ethical concerns that are important for us to think through as we're considering our implementation of AI tech?

Priten Shah: That's also another great place to start. Centering the conversation on ethics addresses folks' fears first, and it's a great place to begin. I think the ethical questions can be put into, well, maybe even three buckets. The first bucket is what students are doing within school systems as they use these AI technologies. The ethical considerations there end up being privacy and security concerns, data privacy concerns: really figuring out what kinds of data from our students we are sharing with third parties. So that's one bucket of ethical concerns we can definitely talk about. The second is what our students are going to do with this technology: how they're going to use it in their own lives, outside of the school systems, and what kinds of AI literacy we're building with our students so that when they're at home using ChatGPT, they're not using it in a way that's actually unethical, or damaging or harmful to somebody else.
And then, hopefully, as they all grow up and become pioneers in these fields, they'll start using this technology in their careers with that same mindset: what kinds of data sets are we going to use when I'm working as an engineer, to make sure the algorithm I'm building is not biased? So that's the AI literacy component: building students who are responsible users of AI technology and responsible creators of it in the future. The third bucket is the ethical concerns we have with any new technology in the education space: making sure that we don't make the achievement gap worse, making sure the digital divide is addressed so that we're not exacerbating its implications, and figuring out how not to leave students behind as we implement new technology, either by limiting their access or by moving too quickly. So those are three buckets of ethical areas. I don't know that we can cover all of them in depth, but I'd definitely want to start with those three.

Ross Romano: Yeah, let's go particularly to the student data privacy piece, because that's, of course, been an ongoing concern for as long as tech's been around, right? Every tech company has had to consider it, grapple with it, improve its processes. And now we have technology that's fast-moving, that's freely available, that is collecting more data than ever before, and that students are also, as you said, using on their own time, in addition to potentially in schools. So it's not just a classroom technology. And yet it's happening at such a rate that there may not have even been enough time for the students and the adults around them to even think about: okay, what data are we even giving to this thing?
And what does that mean? How might it be used? Is it personally identifiable? What is all that? I mean, where do those conversations stand?

Priten Shah: Yeah, I think this is where we're starting to see some of those conversations take place, and finally see some implications of them too. We have media literacy conversations with our students already, right? We talk to them about what they share on social media, what they chat with folks about, what they post on their profiles on the internet. And we try to help create healthy habits around this so that they're not sharing personally identifiable information with, like you said, existing technologies that we're either implementing in schools or that we know they're accessing outside of school. The reason this technology becomes a little more complicated is that it's collecting so much more data. In the context of knowing how much data Google is collecting, how much data Facebook is collecting, to say that it's collecting even more is kind of frightening. Think about the amount of data exchanged when you're having multiple naturally flowing conversations, especially in an educational context: students might be writing a personal statement for a college essay, with lots of personal information shared there; they might be reaching out for SEL help. However we start implementing these technologies in our classrooms, the amount of data they collect is going to get greater and greater. Now, data collection on its own is not bad. It might actually make this technology more powerful in productive ways, right?
In fact, as we discussed at the start, this technology relies on some form of data collection in order to be effective, and the way we reap its benefits long term is by giving it the data it needs and letting it personalize to the data we're providing. The concern is what is done with that data. And this is where the accountability structures come in, whether they be governmental, consumer-based, or self-initiated by these companies themselves, and most likely some combination of all three. Our teachers and schools will be making decisions that influence what kinds of safeguards the companies put in place. Right now these companies at least have a data-privacy-first rhetoric; we'll get to see how much of that becomes implemented in actual practice, and then we'll have to wait for some policy guidance from the top. But the tension really is this: we need to share as much data as possible to make the technology as effective as possible, while making sure the data is not used for any other purpose. That's always the key with any data privacy concern: what is done with the data besides what we are asking them to do with it. This is where we saw some of the opt-out clauses that started coming out, around allowing data to be used for training, or deleting the data used within these systems. And I think we'll see more of that conversation: the data is only stored for X, Y, Z days; it's not used to train anything; here's what access is provided to anybody else. Because my worry is, it's one thing to commercialize this data, and that alone is a massive concern given the amount of data these companies have, combined with the power of generative AI, right?
And now you have hyper-targeted commercial ads, hyper-targeted pamphlets going into the mail based on these conversations, and that's a concern on its own. And then, as a civics nonprofit founder, I have to mention that there's also a huge civics component to this. These companies having access to that much personally identifiable information, knowing how a student thinks, how a student writes, how a student processes the information being fed to them, can all be a massive tool in misinformation campaigns. So the concern really is: are we making sure these companies that are collecting this data are using it only for the educational purposes that we're opting into, that students are opting into, and that parents are opting into, obviously, at the K-12 level? And are we making sure this data is secure enough that it's not being used in any malicious way, whether for commercial purposes or civil rights abuses, right?

Ross Romano: Yeah. I think those are serious concerns that deserve serious consideration. You mentioned earlier some of the limitations of generative AI. Can you talk a little more about what you think are the most critical limitations?

Priten Shah: Yeah, absolutely. The first and foremost, which I always want to start with, is that the current technology is not at the point where there's actual active thinking happening. It's a language production model, and it says a lot about human language that we can mimic human thinking so well using just a word engine, but it doesn't say much about the computer's ability to think yet. So that's definitely a limitation we want to keep in mind. The other one is the bias in the data set. No matter what folks are trying to do to safeguard and come up with post-training intervention methods to reduce the bias in the outputs, that's still a significant limitation.
The data itself is human data for the most part. Even algorithms trained on AI-generated data are still pre-trained on human data first. So all the biases that exist get replicated in some fashion or another in our generative outputs, whether images, text, or audio. There's great work being done on how we can minimize and reduce that, but it's a limitation folks need to understand: it comes from the fact that the model is trained on human data, and that's inherently flawed right now.

Ross Romano: [inaudible] the creating of new art, creating poetry, and it's pretty incredible what ChatGPT and what generative AI can do on that front. What comes to your mind when we think about what we don't yet know? I think about what these technologies can and will become. Are there questions that you would still like to have answered about what will become possible in five years, in ten years?

Priten Shah: Yeah, there are loads of questions here. When I think about future iterations of this technology, I would like to know whether the technology can get smarter without having to be fed human data in order for it to develop. That would be meaningful innovation; that's what's potentially out there.

Ross Romano: [inaudible] requires somebody first. Or, you know: oh, now that I see how it can do this, it's making me think, what if we could do that?

Priten Shah: And there are definitely lots of ways that we can put this technology in the hands of teachers and students. But I think the way we're probably going to get the most impactful and most authentic use cases is by putting it in the hands of teachers first, right?
And so schools are going to have to make some real decisions about how they're going to implement this technology. You're muted, Ross.

Ross Romano: [inaudible] Educators think through what makes an engaging learning experience and then actually facilitate that learning. And, you know, that's going to naturally build their literacy. Thankfully there are accessible tools out there, so finding out from colleagues and from their networks which tools are the most accessible can help them get over those fears and find easy ways to jump in.

Priten Shah: Yeah, I think that's a great place to get started. And then, you know, these students are going to be with us in schools, using these tools already, right? In a few years, we're going to have a generation of students who have been using these tools from the time they start school. So the question is, do we want them [inaudible]. They're going to learn how to use them. They're going to develop those skills, whether they're using the tools directly for school assignments now or just developing the skills for the future. And the students who aren't going to access the tools elsewhere, or won't get appropriate information, guidance, and training on how to use them meaningfully, are going to fall behind.

Priten Shah: Yeah, and these are the conversations that I think are happening across the country right now. We like to talk about it in three buckets. First, there's the policy and procedures angle: figuring out what tools you're going to allow your students and teachers to use, and what is actually enforceable there. Consider these blanket bans: okay, let me just block ChatGPT or any URL with AI in it.
All of these are short-term band-aid measures that aren't really going to help anybody. I think we talked in our very first episode about the equity gap that gets created very rapidly when students with other devices and other network access go use these tools anyway. So we try to encourage schools to at least figure out which tools they're going to encourage teachers and students to use, and focus on that, rather than focusing so heavily on what folks can't use. And I think that's the right way to do this. Getting teachers to become fluent with the tools themselves, well versed in what biases and limitations might exist, means they can have better individual conversations with their students as the students start using them. Otherwise it becomes: hand it in, and let's see if you cheated, and we're not really having a conversation about what that means or how we should think about it. So this second bucket is about what kinds of policies we're going to put in place around how we allow students to use this technology and what that looks like academically. The way we're thinking about it is that we want to move away from a plagiarism framework and into more of a transparency and citation framework, where if a student is using AI, they're transparent about it, they're citing their use of it, and then we're assessing how well they understood the content that was produced with AI, or the process by which they used AI.

Priten Shah: And so the third bucket is what kinds of systems we're going to put in place. And one thing that we know is that today's decision-makers are only going to be in those roles so much longer.
And ultimately they're not going to feel the effects 30 years down the road as much as we are.

Ross Romano: Yeah. What do students think? What are you hearing from them, or what are teachers telling you about how their students are responding?

Priten Shah: Yeah, it's becoming a common thing for many students. I've heard from students who do use the technology that they're using it quite a bit [inaudible], and I hear those are popular. I haven't yet talked to students about that, but I've heard from students using it in school, and at the college level it's very common. I recently talked to a high school student who was actually afraid to use it, because they hadn't had that dialogue we were talking about around plagiarism. And we've gotten a chance to test [inaudible] in ways that are not cheating; there's no clear guidance about what is and isn't cheating just because AI has now entered the picture. But I think justifying how we teach, what we're teaching about it, and how we assess it is probably something we want to spend a bit more time on, so students can put it into perspective. It may be easy for a career English teacher to quickly articulate why writing an essay is still important, but that ninth-grade student who's writing their first real paper and can go home and have AI do it is having a much harder time figuring out: why am I bothering with this when AI is taking over the world, and I want to learn things that will prevent AI from taking my job?
Priten Shah: And, you know, this is where understanding the purpose of education comes in: a lot of the things we teach in schools aren't necessarily direct job skills, but foundational skills that help students understand they can do anything, and that actually go a long way toward letting you focus on the things you really care about.

Ross Romano: Yeah. In so many ways, it's just yet another opportunity for schools to think about: why do we teach what we teach, and how do we teach it? Is it still relevant? Is it still pertinent? Another opportunity to think about, okay, should we try some flipped learning models? Should we try having students represent knowledge through multiple modalities, et cetera? But what do schools need to have done, from an instructional perspective, a policy perspective, or an ethical perspective? What should schools be focusing on right now as they think about AI implementation? Priten, I'll let you start with this one.

Aatash Parikh: Thank you, Ross.