Today we’re joined by Meredith Broussard. Meredith is a data journalist whose research and reporting centers on ethical AI and data analysis for the social good. She’s an associate professor at the Arthur L. Carter Journalism Institute of New York University and research director at the NYU Alliance for Public Interest Technology. And she’s an author, including writing Artificial Unintelligence: How Computers Misunderstand the World and the recently released More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech.
We invited Meredith on to specifically talk about the intersection of the recent rapid growth of consumer-focused generative AI, such as ChatGPT, Midjourney, DALL-E, etc. as well as their integrations into commonly used education tools like Microsoft Office and soon, Google Documents. And I know that many educators are already worried about the implications of AI in classrooms…but it’s going to be quite jarring when Google Docs has a built-in AI text prompt. In our view, we’ll need to find ways to talk about AI and technology more broadly with students, guiding them in the use of these platforms and problematizing them — as opposed to just banning them outright.
Dr. Meredith Broussard, associate professor at the Arthur L. Carter Journalism Institute of New York University and research director at the NYU Alliance for Public Interest Technology, and author of Artificial Unintelligence: How Computers Misunderstand the World and the recently released More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech
Chris McNutt: Hello, and welcome to Episode 134 of our podcast. My name is Chris McNutt, and I'm part of the progressive education nonprofit Human Restoration Project. Before we get started, I want to let you know that this is brought to you by our supporters, three of whom are Kristina Daniele, James Jack, and Marcelo Viena Nieto. Thank you for your ongoing support. You can learn more about the Human Restoration Project on our website, humanrestorationproject.org, or find us on Twitter, Instagram, or Facebook. Today, we're joined by Meredith Broussard. Meredith is a data journalist whose research and reporting centers on ethical AI and data analysis for the social good. She's an associate professor at the Arthur L. Carter Journalism Institute of New York University and research director at the NYU Alliance for Public Interest Technology. And she's an author, including writing Artificial Unintelligence: How Computers Misunderstand the World and the recently released More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech. We invited Meredith on to specifically talk about the intersection of the recent rapid growth of consumer-focused generative AI, so Midjourney, ChatGPT, DALL-E, et cetera, as well as its integration into commonly used educational tools, so Microsoft Office, soon Google Docs, things like that. And I know that many educators are already worried about the implications of AI in classrooms, but I think it's only going to become more prominent come fall, when Google Docs has AI integrated into it and more folks are simply aware that it exists. In our view, we're going to need to find ways to talk about AI and technology more broadly with students, guiding them through that process and problematizing AI and educational technology generally, as opposed to just banning them outright and expecting people not to use them. But before we dive further into that conversation, we appreciate you being here, Meredith. Welcome to the program.
Meredith Broussard: Hi, thank you so much for having me.
CM: So I want to start off by just broadly introducing More Than a Glitch, where you write about how the problem with artificial intelligence, algorithms, educational technologies, technology generally, is that they reflect the biases and systemic oppression of society at large. The software is primarily developed by white male programmers, and the output of those programs is presented, and often accepted, as neutral because it comes from computers, and computers are assumed to be neutral. And in the book, you show the real-life implications of that, because the output is in fact highly subjective, biased, and discriminatory. You cover everything from school testing software, like surveillance software, which resonated a lot with me because that's something I saw all the time, especially for a lot of our kids taking college classes online, to future crime predictors, which blew my mind. Very Minority Report-esque. Apparently whoever built that watched the movie and missed its entire point. So connecting back to that intro: in the last few months, really since your book was released, there's been an explosion of AI marketed at consumers, ChatGPT being the big one. I'm curious how the work of More Than a Glitch connects to the recent growth of ChatGPT.
MB: Well, I am delighted that people want to have conversations about artificial intelligence now, because I've been thinking and writing about AI for many years, and so now I can go to cocktail parties and people actually want to talk to me, right? AI isn't an outlier topic anymore. But one of the things that I think we need to do in our conversations about artificial intelligence is dwell strictly in the realm of what's real about AI, as opposed to getting fixated on what's imaginary about it and caught up in imagining science fiction as reality. So what's real about AI is that it's math. It's very complicated, beautiful math, but it is not going to stage a robot takeover. There is a lot of hype right now that this new wave of generative AI is going to change everything and take your job. If listeners are feeling any fear about this, I would urge you to just let go of that fear. This new wave of AI is not substantially different from other kinds of AI we've had before. The interface and the popularity are a little different, but it's pretty much the same. It is not going to make everything different. It is going to make some things a little bit different.
CM: What's interesting to me is how much kids, but also teachers, are starting to use these platforms, and that fear sets in that everything I do in my class is now pointless or bunk, because students can just type an assignment into this platform and it will give them an answer, and that answer is very hard to detect as plagiarized. Or, in a worst-case scenario, I assume everything is plagiarized. I don't know if you saw that story about the professor in Texas who failed his entire class because he plugged all of their essays into, I don't know what it was called, Turnitin or something like TruthGPT, one of those AI programs that's supposed to catch GPT-written stuff.
MB: Yeah. It's kind of a mess. One of the dystopian future scenarios to me is this idea that we make kids write things, and then we are suspicious of the kids, so we run their writing through a GPT detector, and so people are making money off of generating text and then people are making money off of trying to detect cheating, and all of this wasted effort is happening. Whereas really we'd be so much better off taking all that money and putting it into schools, putting that money toward actually teaching kids as opposed to trying to catch kids doing something that we've decided is bad. I don't think that ChatGPT or generative AI is an apocalypse for education. Once you start looking at it as basically the same as the auto-complete we've already had for a while in Google Docs or in Gmail, it becomes a lot less scary. When you first use ChatGPT or generative AI, it seems really nifty. I definitely encourage everybody to try it out, because it's really cool at first, and the fact that you can type something in and get an answer is just neat. It's entertaining. Of course kids want to play with it. But it becomes really mundane really quickly. You play with it for half an hour and you get bored, because the output it makes is really boring. What it's doing is taking all of the text that has been scraped from the internet or grabbed from data repositories. The creators plunk this data into the computer and say, computer, make a model. The computer says, okay, and it makes a model. The model shows mathematical patterns in the data. Then you can use that model to generate new sentences or generate new images. In other methods, you can use the model to create predictions or suggest decisions. The technology is really flexible, but it's also a statistical technique. You can think of it as averaging together all of the writing that's out there on the web, and then it becomes pretty obvious what's going to happen. The writing is going to be mediocre. Yes, it could pass muster in a lot of situations, but it's not going to be good writing. It's going to be adequate writing. It's also going to privilege certain groups and certain voices over others. One way to think about it is to think about whose voices are overrepresented in the corpus of text that has been scraped from the internet and used to train these AI systems. You can anticipate whose voices are going to be privileged and whose voices are going to be suppressed, and you can make a value judgment about that.
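To make the "computer, make a model" step concrete, here is a toy sketch in Python of the statistical idea Broussard describes: record which words tend to follow which in a training corpus, then sample from those patterns to generate new text. Real large language models use neural networks rather than a lookup table like this word-level Markov chain, but the spirit is the same: patterns in, averaged-sounding text out.

```python
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Map each word to the list of words observed to follow it."""
    words = corpus.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model: dict, start: str, length: int = 20) -> str:
    """Generate text by repeatedly sampling an observed next word."""
    word, output = start, [start]
    for _ in range(length):
        followers = model.get(word)
        if not followers:  # dead end: this word was never followed by anything
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# A tiny stand-in corpus; a real model trains on billions of scraped sentences.
corpus = "the cat sat on the mat and the dog sat on the rug"
model = train(corpus)
print(generate(model, "the"))  # e.g. "the dog sat on the mat and the cat ..."
```

Whatever is overrepresented in the corpus dominates the output, which is exactly the bias dynamic Broussard points to: the generator can only remix the patterns it was fed.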
CM: Of course. It sounds like there are opportunities there for educators to walk that through with kids, because even if educators are aware of it, I don't think most kids really understand how the program is operating and what that means for the results they're getting. It's the same as Wikipedia 15 years ago, when people were banning it left and right because they were afraid of what it was, even though there are valid uses for Wikipedia at times. You've got to know how to use it, and I think you could argue the exact same thing for ChatGPT and other software like it.
MB: One thing I've heard about a way that teachers are using ChatGPT is they're having students write work and then plug it in for proofreading. That's exactly the same thing people are already doing with tools like Grammarly. That could be useful. I've also heard of a teacher who uses it to generate paragraphs for group editing, because when you're doing a group editing exercise, you don't really want to use a paragraph that's been written by a student in the class; the experience of being critiqued by everybody in the class is kind of intense, and it's not really helpful for every student. If ChatGPT makes a paragraph and then you get to tear it apart collectively and talk about why this paragraph stinks or why this paragraph is good, that seems like a pretty inoffensive use.
CM: That's brilliant. I love that. I think it also gets to the point that it helps you dissect how it writes. Whenever I generate something with ChatGPT, because it is so average, it's not only mundane but robotic, for lack of a better way of saying it. It just feels too sterile, and it helps us get to the heart of what it means to write as a human. What does it mean to be a creative writer who can say things in a powerful way, as opposed to just telling me what the facts are?
MB: Yeah, and that's what we're teaching in schools when we're teaching writing. One of the things that's really interesting to me is that the Washington Post did an analysis of what was in the training data for Bard and for some of the other generative AI systems. In their analysis, the data set that was most represented was the US Patent and Trademark Office data set, which explains in part why ChatGPT has the voice of a 47-year-old compliance lawyer.
CM: I mean, that definitely sounds about right. Not to shift gears, but in terms of that training data, I also want to talk about the stereotyping and biases that exist within it. There are so many different activities you could do with kids to help them recognize that as well. One that really resonated with me was a person using ChatGPT to generate recommended lists of things, like the top 10 best books to read in school. It gives you the most classical white male canon it could potentially give, because that's the top result on Google, from a site like study.com or something. Or, even more powerful, and we actually just made an activity about this for our own organization, is using Midjourney to generate stereotypes. For example, if you type in a "perfect first date" or a "great family meal" or a "bad part of town," Midjourney will give you some of the most biased, skin-crawling stereotypes almost 100% of the time. So what suggestions would you have for educators, beyond just the mechanical use of the software, to help them understand the ethical implications of AI when we're talking with kids?
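For teachers who want to try the list-generating version of this activity programmatically rather than in the chat window, here is a minimal sketch. It assumes the official openai Python client (pip install openai) and an OPENAI_API_KEY environment variable; the model name below is a placeholder, and any chat-capable model could be substituted. The idea is simply to collect the model's default answers across several phrasings so a class can interrogate whose canon they reflect.

```python
# A hypothetical classroom sketch: prompt a chat model for "recommended
# lists" and compare the outputs side by side to surface whose voices the
# training data privileges. Assumes the `openai` client and an API key.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "List the 10 best books to read in school.",
    "List the 10 best books to read in school by women.",
    "List the 10 best books to read in school by authors outside the US and Europe.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name; substitute as needed
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {prompt}\n{response.choices[0].message.content}\n")
```

Comparing the unqualified list with the qualified ones tends to make the default canon visible, which is the discussion the activity is after.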
MB: Well, I would absolutely urge everybody to read my book. One of the ways that it is written is that it's written so that you can use individual chapters. And in my work generally, I focus on explaining complex technical topics in plain language and then connecting these technical topics to very human considerations like race and gender and disability. So kids understand this stuff. I mean, I have done workshops on artificial intelligence for kids in pre-K through 12 and they get it when you explain that the computer is doing math and this is how the program you're using works and it's made by a human being. Like when you explain all of that, it demystifies it, it empowers the kids so that they can think critically about these tech tools that they're using. One of the things that was really gratifying to me when I started talking to more kids about artificial intelligence was to realize that the kids are noticing things like the Snapchat AI plugin or feature and they're curious about it and they have questions and they have opinions about their rights in the digital space. They have opinions about surveillance, right? So we should empower kids to have these more complicated conversations about technology and we should also empower them to have a voice in whether and how technology gets used in the classroom.
CM: That's a really interesting point, because something we find a lot, and I was guilty of this too when I was teaching, is how much educational technology software kids are typically required to use. A good example would be Flipgrid, which I think is just called Flip now. It's Microsoft-owned. It does surveil you. It takes all of your data; you have to plug in a lot of personal demographic information to use it, et cetera. And kids have to agree to various data policies in order to participate in the classroom for that activity.
MB: Yeah, and that's not right. Teachers should not be forcing kids to give up their private data. Schools should not be agreeing to these unbelievably complicated blanket contracts. I think part of what happened is that everybody got really excited about, oh yeah, let's use more technology in education. They got so excited about using the technology that they didn't read the fine print and also didn't think about the implications of using what seems like free technology. Because when you're not paying, you are the product. Also, it has to do with funding of schools. Our schools are vastly underfunded. Our public schools need more money. Our teachers need to be paid better. If teachers are using EdTech software because they don't have textbooks and learning materials, that's a really big problem.
CM: What worries me is that it's only going to get worse, in a sense. I was just at a tech conference a month or two ago, and I would venture that 50% or more of the vendors were offering ChatGPT-based solutions for the classroom aimed at, in my opinion, de-professionalizing teachers. A lot of them were pitching things like, you can work through our trained AI model to teach kids how to read better. Even Bill Gates recently endorsed this as a future possibility, and historically he has not had the greatest track record on educational reforms and what they mean for kids. And I worry about the dystopian future of kids being forced to sit in classrooms and learn through some kind of self-directed AI without any teacher supervision, at least not a trained teacher; someone just sits in the room and makes sure they do their word problems. It's one of those things where obviously it's not going to work, because AI isn't really designed to do that very well, but that doesn't mean people won't do it to save money, or to ensure that teachers don't teach, for example, critical reasoning or anything connected to culture war stuff, book ban stuff more broadly. So I think that demystification piece helps.
MB: You're making me feel kind of depressed.
CM: I know. Sadly, everything about education always has that tiptoeing into cynicism. But at the exact same time, I think that helping teachers and students understand how the software works and demystifying it also helps them organize and fight back against its implementation. It's not just about how to use it, but also understanding why to use it and what it means more broadly.
MB: So there are a bunch of other books that I would recommend in addition to More Than a Glitch and Artificial Unintelligence. I really love Race After Technology by Ruha Benjamin and Algorithms of Oppression by Safiya Noble, Black Software by Charlton McIlwain, Technically Wrong, Twitter and Tear Gas. There's a growing literature of what's sometimes called critical internet studies or critical technology studies, where people are looking at the social fallout of reliance on technology, of over-reliance on technology, and how we can dig ourselves out of the hole that we're in. Also, how can we understand bias? How can we understand the social forces at work inside our socio-technical systems?
CM: I would imagine that by being able to connect those books into really any kind of content, it could be science, math, English, doesn't really matter, you could find a way to make that work. That not only helps you understand technology more, but it also helps you understand systemic oppression, which a lot of kids sadly are maybe a little ignorant of depending on their background, but also just generally how technology tends to treat people. A lot of it's rooted in the idea that people are doing something wrong. This is especially the case for kids. A lot of it's generally based on rewards, punishment, surveillance, cheating. Rarely is it used to actually empower someone to do something more positive. How can helping educators and students understand how this AI works help them essentially be more human? How does it allow them to change the world and do better and fight against all these different injustices?
MB: My experience is that once you understand what is going on inside these computer systems, it empowers you and you can push back against algorithmic decisions that are unfair or unjust. That's been really important for me. That's been something important that I've seen in the folks that I've taught, that I've talked with. The more you know, the more you feel like you have agency. And to me, that agency, that ability to speak up, to be believed is a really important part of the democratic process of being an involved member of a democracy. And that is what I want for students. I want them to feel empowered. I want them to be critical thinkers. I want them to learn to write themselves. And I also want them to be really good users of technological tools. Most people are pretty bad at technology. So it's not clear to me that loading on more and more technology in our everyday lives is actually going to be useful because it's very hard to balance all of these programs and remember where all of the buttons are. And I feel like the more technology we've layered into our world, the more time gets wasted just pushing buttons and chasing after little blips that are malfunctioning. Like right now, for example, my computer is dying. And it's because the power strip is broken. So I have to go and like dig out another power strip somewhere else in my apartment. Like that's not the sleek digital future that I was promised.
CM: Speaking to that, it would be very funny if that was the last thing you said, and then the podcast just ended.
MB: It would be. But seriously, let me go get the power strip.
CM: Okay, sure…
The Conference to Restore Humanity 2023 is an invitation for K-12 and college educators to break the doom loop and build a platform for hopeful, positive action. Our conference is designed around the accessibility, sustainability, and affordability of virtual learning while engaging participants in a classroom environment that models the same progressive pedagogy we value with students. Instead of long Zoom presentations with a brief Q&A, keynotes are flipped, and attendees will have the opportunity for extended conversation with our speakers: Antonia Darder, with 40 years of insight as a scholar, artist, activist, and author of numerous works, including Culture and Power in the Classroom; Cornelius Minor, community-driven Brooklyn educator, co-founder of the Minor Collective, and author of We Got This; Jose Luis Vilson, New York City educator, co-founder and executive director of EduColor, and author of This Is Not a Test; and Iowa WTF, a coalition of young people fighting discriminatory legislation through advocacy, activism, and civic engagement. And instead of back-to-back online workshops, we are offering asynchronous learning tracks where you can engage with the content and the community at any time, on topics like environmental education for social impact, applying game design to education, and anti-racist universal design for learning. This year, we're also featuring daily events from organizations, educators, and activists to build community and sustain practice. The Conference to Restore Humanity runs July 24th through the 27th, and as of recording, early bird tickets are still available. See our website, humanrestorationproject.org, for more information. And let's restore humanity together.
CM: Yeah, so I guess the final question, pulling this all together: as we move into the fall, as folks are refreshing and recuperating over the summer, there are going to be a lot of discussions in schools about policies toward AI generally. I think educational technology companies will likely find ways to harness and leverage AI, for better or for worse, probably for worse, and implement it into the classroom in a different way than just giving kids ChatGPT or Midjourney or something. But we're seeing more and more that a lot of schools are banning AI outright, both at the college and K-12 level, because there's fear of what it means for the classroom. What suggestions or opinions do you have on banning these tools outright versus talking about them in class, using them, and problematizing them from there?
MB: I am not in favor of bans. I am in favor of enforcing existing anti-plagiarism policies in the context of AI. I think that it's important to have conversations about what plagiarism means and what acceptable uses of generative AI technology are, because kids are ready for those conversations. I don't think it helps to pretend that the technology is not making an impact. Kids have heard about it. They're curious about it. That makes a lot of sense. I think that it is a very useful technology for a very small and not that interesting set of things. So let's look at it. Let's talk about it. One of the lessons that I've heard that people really like is having a generative AI produce a paragraph or an essay and then having the kids do a critical response to it. That works really well. I think you only have to do it once or twice, though. I don't think that every class should have an assignment where you do this every semester, because that itself will get really boring, and then the kids will think that the teachers are out of it. We don't need to ban it. We need to have really honest conversations about what it can and can't do. What are the biases along the lines of race, gender, disability, and other factors? What are the biases that are baked into these systems? What are the biases that are baked into all technology systems? And we need to stop having so much faith in technology, because the technologies we use are very useful for some things, but they are not omnipotent. There is no educational technology that's going to get us away from the essential problems of being human. There is no technology that's going to replace teachers. All of the companies are trying to sell you things that are going to do, what do they call it, leveled learning? That has never worked, and it's probably not going to work, because the idiosyncrasies of how students learn are actually in opposition to the sleek path you would take through a technological system. I don't think we should waste money on it, and I think that decision makers should be really aware of how much of a money grab is happening right now around generative AI and be really cautious about investing in these things for schools, because a lot of the technology does not work as promised.
CM: That's such a powerful statement. It gets to the heart of what I was hoping we would get to in this conversation, which is that AI can be used to make us more human or less human. It could be used to help us understand systemic oppression and to have more open discussions about these things, as well as to understand creative writing and what makes our voice human, which sounds kind of weird. Or, on the other hand, it could make us very untrusting of other people, thinking that everything online is plagiarized and that we have to ban things and surveil kids because they're constantly trying to cheat, leaving us with a very negative view of the world. Sadly, that's the exact same debate we see over phone use, over social media use, even back to comic books 50 years ago; these discussions will continue. But before we wrap up, are there any other final thoughts you want to add, thoughts for teachers, et cetera?
MB: So one of the things that I have noticed lately in the discourse around generative AI is you'll have these stories or blog posts that are like, oh my God, AI is coming for all of our jobs, it's going to change everything, blah, blah, blah, we can't possibly have it in schools. Then you have students who are writing, well, I used ChatGPT and it was not bad. Look how good a job I'm doing using ChatGPT. Oh, this is so cool. So we've got this dialectic going on. But one of the things I've noticed inside this conversation is that the kids seem to think that teachers and professors assign essays because we want to read, like, 35 different amateur interpretations of the Iliad. We do not, unfortunately. The reason we assign essays is so students can practice writing, the same way a soccer coach assigns drills: if you do the drills, you're going to be really good in the game when it matters. So the idea that ChatGPT can just replace the effort of writing, that we can get away with something by having the computer do the work for us, doesn't give a lot of credit to the big goal of education. We're not assigning essays because we're really dying to grade them. We are assigning essays because that's how you practice writing, because that's how you practice thinking. And so when you try to take a shortcut and get away with something, yeah, that's a thing people do, but you're kind of missing the point of the whole educational endeavor.