Ethical Ed Tech book cover
New Book · Coming Soon

It’s Time to Put Ethics at the Center of Ed Tech

A practical guide for K–12 educators navigating AI and digital safety.

Podcast Appearances

AI and Our Sustained Curiosity About the Bright Future of Education

EdTech Startup Showcase (with Aatash Parikh) · Ross Romano

June 5, 2024


Related Projects

  • AI & The Future of Education: Teaching in the Age of Artificial Intelligence

Transcript

EdTech Startup Showcase · June 5, 2024

Ross Romano: What do we mean, as a baseline, when we're talking about AI bringing something new to what we can do in education? It can classify, and generative AI can produce or recommend something from its data set; it produces things that look like its training data. And this is fast-developing, so even by the time people are listening to this in a week or two it might be a little different, but certain limitations currently exist. How important is it, in your perspective, to focus on and understand what those limitations are, versus focusing more exclusively on what we can do and what we're trying to do? Not necessarily limitations or risks from a safety angle.

At the current technology, it is not at the point where there is actual active thinking happening. It is a language production model, and that says a lot about human language, that we can mimic human thinking so well using just a word engine, but it doesn't say much about the computer's ability to think yet. So that's definitely a limitation we want to keep in mind.

The other one is the bias in the data set. No matter what folks are doing to safeguard and to come up with post-training intervention methods to reduce the bias in the outputs, that's still a significant limit. The data itself is human data, for the most part; even the algorithms that are trained on AI-generated data are still pre-trained on human data first. So all the biases that exist get replicated, in some fashion or another, in all our generative outputs, whether images, text, or audio. There's great work being done on how we can minimize and reduce it, but that's a limitation I think folks need to understand: it comes from the fact that the model is being trained on human data, and that data is inherently flawed right now.
Ross Romano: The creating of new art, the creating of poetry: it's pretty incredible what ChatGPT and what generative AI can do on that front. What comes to your mind when we think about what we do not yet know about what these technologies can and will become? Are there questions you'd still like to have answered about what will become possible in five years, in ten years? As in, this would be meaningful innovation, this is what's potentially out there, it just requires somebody to build it first. Or: now that I see how it can do this, it's making me think, what if we could do that?

Educators can think through what makes an engaging learning experience and then actually facilitate that learning, and that's going to naturally build and build their AI literacy. Thankfully there are tools that are accessible, so finding out from colleagues and from their networks which are the most accessible tools can help them get over those fears and find easy ways to jump in. Students are going to learn how to use these tools and develop those skills, whether they're using them directly for school assignments now or just developing the skills for the future. And the students who are not going to access these tools elsewhere, or not get appropriate information, guidance, and training on how to use them really well and meaningfully, are going to fall behind.

Priten Shah: Yeah, yeah. These are the conversations that I think are happening across the country right now, and we like to talk about them in three buckets. First, there's the policy and procedures angle: figuring out which tools you're going to allow your students and teachers to use, and also what is actually enforceable in that world.
These blanket bans, like "let me just block ChatGPT, or any URL with AI in it," are all short-term band-aid measures that aren't really going to help anybody. I think we talked in our very first episode about the equity gap that gets created very rapidly when students with other devices and other network access go use these tools anyway. So we try to encourage schools to at least figure out which tools they're going to encourage teachers and students to use, and focus on that, rather than focus so heavily on what folks can't use. And I think that is the right way to do this. Getting teachers to become fluent with the tools themselves, and well versed in what biases and limitations might exist, means they can have better conversations individually with their students as the students start using them. Otherwise it becomes "hand it in, and let's see if you cheated," and we're not really having a conversation about the skills. And folks are only going to be in those roles so much longer; ultimately they're not going to feel the impact 30 years down the road.

Ross Romano: Yeah. What do students think? What are you hearing from them, and what are teachers telling you about how their students are responding?

From students who do use the technology, I've heard that they're using it quite a bit; I hear those tools are popular. I haven't yet talked to students about every tool, but I've heard from students using it in school, and at the college level it's very common. I recently talked to a high school student who was actually afraid to use it, because they hadn't had that dialogue we were talking about around plagiarism. We've gotten a chance to test out ways of using it that are not cheating; there's just not clear guidance about what is and isn't cheating now that AI has entered the picture.
But I think justifying how we teach, what we're teaching, and how we assess it is probably something we want to spend a bit more time on, so the students can put it into perspective. It's maybe easy for a career English teacher to quickly articulate why writing an essay is still important, but a ninth-grade student who's writing their first real paper, and who can go home and have AI do it, is having a much harder time figuring out: why am I bothering with this when AI is taking over the world, and I want to learn things that will prevent AI from taking my job? Those skills actually go a long way toward letting you focus on the things you really care about.

Ross Romano: Yeah. In so many ways, it's just yet another opportunity for schools to think about why we teach what we teach and how we teach it. Is it still relevant? Is it still pertinent? Another opportunity to think: should we try some flipped learning models? Should we try having students represent knowledge through multiple modalities, et cetera?

Aatash Parikh: Thank you, Ross.