Today, we are joined by Akil Bello. Akil is a supplemental education and test preparation expert. He's launched two companies, developed test preparation programs, and trained hundreds of instructors. He was the founding partner and CEO of Bell Curves, a test prep company based on community partnerships, worked for The Princeton Review, and now is the Senior Director of Advocacy and Advancement at FairTest.
Akil and I talk about the advent of "learning loss" after pandemic schooling, the way that testing companies are using this term to generate more tests and test prep software, what was lost in the pandemic, and what we can do as teachers to build back better.
Akil Bello, Senior Director of Advocacy and Advancement at FairTest, founding partner and former CEO of Bell Curves, and commentator on test equity, learning loss, and much more
Transcription graciously provided by John O'Briant.
0:06 Chris McNutt:
Hello, and welcome to episode 95 of our podcast at Human Restoration Project. My name is Chris McNutt and I'm a high school digital media instructor from Ohio. Before we get started, I want to let you know that this is brought to you by our supporters, three of whom are Emma, Steve Peterson, and Alicia Huculak. Thank you for your ongoing support. You can learn more about the Human Restoration Project on our website, humanrestorationproject.org, or find us on Twitter, Instagram, or Facebook.
Today, we are joined by Akil Bello. Akil is a supplemental education and test preparation expert. He's launched two companies, developed test preparation programs, and trained hundreds of instructors. He was the founding partner and CEO of Bell Curves, a test prep company based on community partnerships, worked for The Princeton Review, and now is the Senior Director of Advocacy and Advancement at FairTest. Akil and I talk about the advent of learning loss after pandemic schooling, the way that testing companies are now using that term to generate more tests and test prep software, what was lost in the pandemic, and what we can do as teachers to build back better.
1:21 Akil Bello:
So the learning loss narrative is fascinating. Like, there's - so the best I can track it is you actually have to go back to summer slide, if you remember those phrases, right? Which, again, if I can try to track the history and where these things all came from, right, somewhere along the way, they tried to measure how much students lose during the summer. That was driven largely by using test scores from spring to fall. And so what they're measuring is the spring-to-fall prediction of score change. So - and that's where things get really weird, right? Because like, is it an actual score dip? Or was it a prediction? So they're using predictions of what's happening, and then they're trying to turn it into months of loss. So it started as summer slide, it became summer learning loss. And then during the pandemic, they picked that narrative up, and just continued it, and it became learning loss, and then it was turned into unfinished learning. So it's just - and I think that if we look at it from a non-cynical point of view, essentially, it's an attempt to quantify what happens when the normal cycle and schedule of school is disrupted. That makes sense, and that's cool. Like, it makes sense. Let's see if we can quantify and identify where, and how much, disrupting the normal cycle of school has an impact.
However (laughs), and in all things education, it seems to come back to bubbles, right? So, however, the measurement of it was a test prediction, right? It was largely driven by test publishers based on their test results. So either they're saying “we are predicting” - which was the worst study I saw, I think it was NWEA's. Early in the pandemic, like June of 2020, NWEA released a report that was like, “Oh, we expect this to happen,” which is like - whoa, we just started the pandemic! And you're like, “Oh, yeah, kids are gonna lose this much learning.” And you know, and so that's bizarre. And then also, the term has been defined multiple ways. So there's the prediction of - basically, changes in test scores. There is the narrative definition of learning loss, which is not really based on research, which is, “when crap is messed up, students won't learn as much.” Like, okay, cool. I don't know that we need to, like - we know this! We all knew during the pandemic, things weren't good. Lots of places were struggling. We don't need to put a number and dates and months to that. And then there's the use of the term which almost suggested students were unlearning things - things they had previously known, they don't know. And that's also tied to testing in terms of, like, they were scoring this level on the test, but in the future, they're going to score at this other level on the test, which means that they've lost some learning. And to me, that just made me laugh, because I've done test prep for 30 years, you know, literally coaching. But it's like, yeah, if they fall out of practice, they'll probably dip a little bit. That's, like, not hard.
That's not learning loss so much as it is lack of practice of the temporary skills required to do well on your multiple choice test.
4:53 CM:
Is that the report that basically also said that - it implied that teachers didn't do anything? Like, once the pandemic started, it was just like, if no learning at all occurred from March until now, then this is what's going to happen, our scores are going to drop by 30 points, whatever.
5:11 AB:
Yeah, I’ve looked at so many of those reports, they all blur to me. But there's definitely some that are like, if no learning - if no teaching happens at all, here's what the result is. If hybrid teaching, I think they also use, and then if fully virtual. So, like, they created this weird split of, like, you know, there's virtual teaching, there's hybrid teaching, which excludes things like, you know, when you're going back and forth to in-person and not, so there's a lot of nuance that's left out in these reports. I don't remember specifically one that was like, yeah, we're just gonna assume no teaching whatsoever. Which, okay, that's bizarre, right? And the whole phrasing is problematic. Is it learning loss, or is it teaching loss?
5:56 CM:
It's also kind of surprising to me that, at least in a lot of the reports that we were looking at, the scores really don't drop by that much. Which - I don't know, as a teacher who doesn't really concern myself much with standardized tests, that kind of calls into question the validity of the measurement. Because if you're saying that, like, this is absolutely terrible, kids haven't been learning in months, and then the score drops by three points, or sometimes stays the same, I'm just like, “well, then why?” Like, what is the validity of the test to begin with, if we're going to take it by their dictation?
6:28 AB:
And that's - a lot of the problem with the narratives around this is that they aren't digestible by most people. And I find - I worry about how policymakers interpret these things, right? Because often policymakers are politicians, and not educators and not researchers and not psychometricians. So when you say “a three point drop,” does that mean three percentage points? Does that mean three percentile points? Does that mean three scaled score points? Is that three raw score points? Because if it's a five point scale for the test, and you drop three points, that's huge. But if it's a percentile scale, and you drop three points, that's irrelevant. Or not irrelevant, but it is not very relevant. Right? Now, it could also be a huge change from what's been happening from year to year: if a test is fairly stable from year to year, and a one point drop is almost never seen in that test, then a three point drop is significant in that context. I'm not sure if it's meaningful. What would I expect as a drop coming out of the pandemic?
So I'm pretty big - because I also wonder if, you know, if you're making kids test during a pandemic, how many of them are mailing it in? I absolutely would have. There's no way on God's green earth my mom's gonna say, “Today, you're not learning with your classmates and teachers in school - you're going to log on to a virtual school and take a bubble test that doesn't matter for you at all.” Oh, I'm getting out of that real fast, and going to play Nintendo.
8:14 CM:
Yeah, yeah, I mean, we've totally been there. At the end of last school year, our students took 17 tests in a two week period. And the last group of tests were the state tests, CTE tests, some other one, and then MAP testing, the one that is being reported out on for this narrative. And from what I could tell, pretty much every kid, if possible - they give you, like, an alert if they start just randomly hitting buttons, like it pops up on your computer. And the number of students in my room alone that I had to tell, like, “you gotta slow down, because it's gonna lock you out of the test” - they were just hitting random buttons, because it doesn't matter. It's irrelevant.
8:53 AB:
And I'm sure there's a percentage of kids who just slowed down enough so they didn't get locked out. I've done that. I'm like, you know, whatever assessment is online or something, they're making you wait and read it, it’s like, “Cool. I'll just wait and do something else. And then click the button randomly, and then wait some more.” Like, because I'm not gonna give this my attention.
9:11 CM:
Let's turn our attention now to, I guess, the more cynical side of this, which is the financial side of this, you know. What's the concern here with the fact that the testing companies that are selling this narrative - I mean, this is a multibillion dollar industry. What should we be worried about knowing that and knowing what this learning loss narrative is trying to do?
9:29 AB:
So let me try to go Pollyanna on this first. I'm going to be positive in my outlook. I'm turning over a new leaf here. So let's say that we know that school - the normal process of schooling was disrupted because of the pandemic. So if we want students on the normal schedule of age-based schooling that we have had for years, then we have to do something to try to get back on track for the normal process of schooling. Fine. So there's going to need to be supplemental things to what we normally would have done. We can't just do 8-3 in the classroom and make up for a year that was disrupted, potentially two years that were disrupted, right? So there needs to be external things and add-ons to catch up, so to speak.
Totally makes sense. I'm totally on board. What are those new things? Who - like, so the government has provided money to do the add-ons? Cool. Let's get some good add-ons in there. Right? So the question becomes: what are the add-ons that are going to make up for the disruptions? Who's going to provide them? Are they of quality? Are they aligned with what students need and want and will participate in? All of those sorts of things. So I think that, from a theoretical point of view, from a high level, it makes sense - things were disrupted, and in order to catch up, we're going to add on more, so let's choose some things to add on. That's sort of the theory that was behind No Child Left Behind as well. We have to catch up, we have to do some add-ons. I ran a tutoring company at the time of No Child Left Behind. The processes that were put in place for me as an entrepreneur to be paid for those programs almost required me to cheat. And that's a lot of my concern - all of the scandals that came out of No Child Left Behind, all of the EdTech miracle-working programs that are currently being sold, although they've never been proven or tested or this and that. Those are my concerns. Right? As a tutoring company under No Child Left Behind, the program I remember looking at, thinking about applying to, and deciding not to, was providing after-school tutoring, specifically focused on testing, which was my area, but paid based on attendance. Like, wait a minute. So I was, like - pay based on attendance immediately says to me, how am I going to prove that they were there, right? Now, everybody has to keep track of attendance. But what if the kids don't show? What if I send 10 teachers, and there's a PTA meeting and the kids don't show, can't show, or the kids decide to go to a football game? So the attendance sheet doesn't get signed, which happened. Or the attendance sheet is signed anyway.
So I never participated in No Child Left Behind; I just thought, you know, I would be taking too big of a risk to hope to get paid at the end of that, right? Because paying me based on attendance is crazy, because I'm gonna have to put out all these resources to get people to show up and to have programming in place, and materials, on the hope that there aren't conflicts after school that prevent the students from showing up.
How these programs are implemented is almost as important as whether the vendors are legitimate, so to speak, right? And then there's still the question of: do you really want an army of external vendors coming in and working with your kids?
13:03 CM:
Yeah, there's also, like the motivation component and the engagement component, because the people that come in, especially EdTech, that come in to solve these problems, although I'm sure they probably do increase test scores in some cases, it's like, at what cost? Because it's so boring. We've been in so many different circumstances where our test scores at our school are horrendous, they always have been. And the solution, quote unquote, has been: “Well, let's pay $20,000 for this EdTech software. They promise in two weeks test scores will go up.” Sometimes they do. But the kids are bored to death and they hate it. They no longer love math or reading or whatever the thing is that they're testing.
13:45 AB:
I mean, that's part of the challenge, right? Is that the external struggles to education impact the performance in the classroom. The underfunding of a lot of schools impacts performance of the classroom. And the solutions that are often developed don't alleviate those core problems. Right? So it's like if you say that there are five external things that are impacting classroom performance and therefore also impacting test scores - to put something in place to improve test scores ignores all the other things.
So, like, I'm not sure that's going to solve the problem. And also, think of standardized testing as impacted by two separate things: permanent skills that students were supposed to acquire in their regular school classroom, and temporary skills that are unique to timed standardized testing environments. Short-term, intentional test prep most immediately addresses the latter - the temporary, test-specific skills. If the problem is predominantly the long-term permanent skills, then test prep isn't doing much to address that.
15:07 CM:
It seems like, sadly, a lot of the funding that we're seeing - we have “Operation Reverse the Loss,” which I think is a very funny name, from the Department of Education - I'll put it in the show notes. FutureEd put together this summary of where all this funding is going in every single state, and a lot of it's in pretty generic, broad terms. But you see, for many of them, sadly, the number one thing that they're addressing is some kind of - I would call it, like, a test score buzzword, like accelerating learning, academic recovery, learning loss, these different words that basically mean increased test scores. And some of them do put in funding for counselors or SEL, or put money into local communities for things like free lunch - things that are very important, that we definitely saw from the pandemic, that were exacerbated because of the pandemic.
But at the end of the day, kind of what you're alluding to here, and you've written about this, is that the way that we're really going to solve a lot of these problems is by fixing all of the other things, like funding local communities, ensuring people have stable home lives, investing money into, I don't know, community organizations - because the kids coming in are struggling, and the last thing they're worried about is performing well on the MAP at the end of the year. Like, it's just - it's not relevant.
16:26 AB:
Which (a) is not relevant and (b) there's no penalty to it, right? So what if you don't perform well? Right, like, so if you're going to graduate anyway, if you're going to, you know - like, so who cares? Right? And I think that's, that's good and bad. You know, I don't think graduation, or moving to the next grade, should be held up by the performance on one of these standardized tests, right? But then I do understand the concerns of, like, then how do you get students to take them seriously? Okay, so it's like, yeah, I get it. But there's, I think, one of the struggles that exist right now in education, right - it's like, you can't - it's hard for policymakers to fund stable communities. Let me say that better. It's hard for education policymakers, in education law, to fund and address things that impact education but aren't directly education. Right? And it almost feels like the infrastructure bill, where people are like, healthcare is not infrastructure. Internet is not infrastructure, right? It's like, yeah, it is, but you can see those who are trying to push back on funding anything. There is a legitimate argument there. I don't know if the motivation for the argument is honest. But I can see the argument that says these things aren't what we traditionally define as infrastructure. That's what often happens in education as well: as we look more clearly at the impacts on education - all the social-emotional things going on at home - and say we have to solve those, you're gonna get pushback of, well, you know, I can't pay for somebody's home life to be better.
18:11 CM:
Is there a place there for educators to help solve that? Because to me, a lot of this issue just comes from the fact that the testing industry has convinced policymakers that this is the best way forward, and they have a lot of money and the ability to lobby, for better or for worse. I don't think that all politicians are corrupt, but even the ones that aren't corrupt probably think that this is the way forward, because that's the way things are done. It's just like, this is the thing that people do. Is there a way for educators to disrupt that narrative? To try to come up with a fix for that?
18:44 AB:
I think so. I think, one - I don't know, right? I don't know that it's corruption or incompetence, or any of those words; I don't know that it is those things. I have personal opinions, that sometimes I express on Twitter, about how connected those things are, right? I think the piece that we can address is the information. And that's part of why, like, learning loss just boggles my mind, because what often happens is there's research, and to make the research digestible to policymakers and the public, catchphrases emerge. Like, nobody is going to read the report that says “the quantified, numeric - the time-quantified implications of disrupted school learning over the past 18 months.”
No one's paying any attention to that, right? But that's really kind of what we're saying, right? Measuring, using time, the disruption of school, the changes in test scores since the start of the pandemic. That becomes “learning loss.” That becomes “unfinished learning.” That becomes “unfinished teaching” and all of those other phrases that are put out there, right? So I think it's important for educators to make sure the conversation being had gets beyond those phrases. I think that shifting from “learning loss” to “unfinished learning” is an improvement, at least. Right? It gets rid of the idea that something was lost, because nowhere are we saying something was lost. It's really a question of not yet done. Right? Which is fascinating to me, because somebody suggested - and I don't remember where - that after the pandemic, we just start over. Everybody starts back where they were, right? Which is a great idea, because it's like, yeah, we just didn't do that year right, let's start again. Except we have an age-driven, time-driven requirement in education, right? Like, it actually would really be hard to have everyone pause, because there's a whole crop of kids that were born and were supposed to be starting school. So what grade do you double up? Right? So there is that problem, right?
But to a certain extent, that's almost what we needed to do. That year was done sporadically, if at all. So for a lot of people, we just need a do-over. That's really hard to get driven into policy, right? So I think it's important for educators to make sure that parents, policymakers, whoever you're talking to, are clear what these things actually mean. I'm a parent of a tenth grader and an eighth grader. And, you know, being in test prep, for any test they're taking, my question is like, “Can I see it? Can I see the test? Can I have an actual detailed report?” Because I know what's on these tests. I know that - and I also know my kids, right? And so I know that sometimes, like, one of my kids will get pretty much every two-part question wrong. Ever. In his entire life. If you ask him to do two parts, he'll do the first part and he's done. So if the test is entirely, you know, “find this - oh yeah, and also that,” I know he's not gonna do X. That's just not how he does it. It's like, I'm gonna do the first thing and then move the hell on, right? It's also how he does dishes in the house.
So I want to know if the test is entirely two part, or if it's all one part, and you actually did know some stuff, right? Which - I think testing agencies don't give that information to parents. Right? So I think tests are hard for parents to use, because the level of detail provided is almost nothing, right? You get these vague categorizations of, you know, math, arithmetic skills using numeric digits, and it's like, great, maybe that means something. And who knows what it is?
22:40 CM:
Yeah, I mean, diving into - we were reading the different reports, just trying to understand the learning loss narrative, and I feel like you need, like, a doctorate just to dissect some of the paragraphs. And I started to wonder, honestly, like, is it just made up? Like, there were some things in there - I remember reading, I think it was Illuminate Education, had something that was, like, “they're losing .01 standard deviations of learning every day.” What in the world does that mean? Like, what is .01 standard deviations, and how do you lose that?
23:08 AB:
Yes. I mean, that's just fascinating, right? And to a certain extent, it is made up, right? Because basically - so here's the thing. I actually read the reports and, like, ignore the math. I accept their math, the calculus and crap that they put in there that, like, I don't really know, because I'm not a PhD or researcher. But I've learned the language of these things, right? And so, to me, the .01 is fascinating, because that says they've looked at test scores, they've taken the test score scale and the standard deviation of the test score scale, and they've divided it by the time between tests, so they can turn a score change into a number of days. Because somehow, it's important to that person to say “you've lost three weeks and a Wednesday.” Like, I'm not sure what I would do with that. Like, let's pretend it's true. How is that meaningful?
“You've lost two full weeks of school and a Monday and a Wednesday afternoon.” That's not how teaching works. I never quantified my teaching based on Wednesday afternoon versus Monday morning, versus three weeks or two. So it's a weird number to put in front of people, right? But from their research end of things, I think what they're trying to do, if we look at it in the most benevolent way possible, is take their test score scale and put it in a context that a school-based person will understand, right? It's not meaningful to say you've lost 10 points on the MAP, right? But it might be meaningful to say this person is three months behind where we thought he should be. And the piece that often gets left out is the end of that - where we thought he should be, right? Where we predicted him to be. Like, I'm not sure that's meaningful. And are your predictions meaningful when you're in a pandemic?
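The day-count arithmetic Akil describes - dividing a score change by an assumed per-day rate of growth to produce "months of loss" - can be sketched in a few lines. Everything below is hypothetical (the function name, the 180-day school year, the 12-point expected annual growth, the 10-point drop); it illustrates the kind of conversion these reports perform, not any vendor's actual formula.

```python
# A minimal sketch of turning a test score change into "days of learning."
# All numbers are hypothetical, chosen only to illustrate the conversion.

def score_change_to_days(score_change: float,
                         expected_annual_growth: float,
                         instructional_days: int = 180) -> float:
    """Convert a scaled-score change into equivalent instructional days,
    assuming (as these reports do) that growth is linear across the year."""
    # days = score_change / (expected_annual_growth / instructional_days)
    return score_change * instructional_days / expected_annual_growth

# A hypothetical 10-point drop against an expected 12 points of annual growth:
days_lost = score_change_to_days(-10, expected_annual_growth=12)
print(days_lost)  # -150.0, i.e. "150 instructional days behind prediction"
```

Note that the output is "days behind where the model predicted the student would be," not days of instruction that actually vanished - which is exactly the slippage Akil is pointing at.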
25:10 CM:
Is there an alternative to the way that we look at what is now framed as acceleration, but it also could be seen as remediation? Where instead of looking at this as “hey, throw a kid in a summer school for two weeks,” or “hey, get them involved on this specific online program to catch up their reading scores?” Is there a way that we can look at that, from an alternative lens where we do something that's more - I don't know - student-friendly? Something that students actually want to do that will keep them involved? Because my worry is that, I mean, if I were to go ask the students in my class next year “hey, next summer - I guess it'd be this summer at this point - you're coming in for two weeks, ‘cos we gotta make sure your reading scores go up,” I'm pretty sure, like, there would be an actual revolt.
Or if I told these kids that have used MindPlay in the past that this year they're going to use MindPlay again, because it's the adaptive learning suite that we purchased for this upcoming year, there's going to be groans, they're going to hate it. And I'm not entirely confident that they actually are gonna be learning anything. I think that they're just going to skip through that just like they skipped through the other stuff before. Is there an alternative means of engagement, knowing that there was disruption, but also recognizing that that disruption isn't going to be solved by just cramming a bunch of new stuff in there?
26:28 AB:
So my immediate response to that is almost on a parent level. Which - I'm going to say that carefully, because I know what I'm saying, right? I know I'm asking a crap ton of parents, many of whom won't have the resources, time, blah blah blah, to do it. But when I think about reading, I think about - there's technical skills to learning words, decoding words, definitions, blah blah blah. There is engagement and enjoyment. And then there's the meeting of the two. You're speaking, it seems, more to “how do I create more engagement, in order to allow practice of reading to be easier, in order to allow their skill with reading, their fluency with reading, to grow through the practice of reading?”
My immediate - one of my immediate thoughts was, I really like Newsela. I really like Newsela, right? Because I feel like it's relevant, it's easier, but they also do the multiple choice piece of it, so you can get the testing thing in there. I like that sort of theory of: take something loose and relevant and mutate it into test-ish, so that we can connect these things a little bit better without making it heavy and overwhelming and just rote practice. So as you're talking to me about things like that, I'm thinking about - is there a summer program where we can just, I don't know, hate-read news articles? One of my Twitter pastimes is hate-reading an article where, point by point, I show what's missing, what they've left out, things like that. And, you know, I started it because somebody just got so many things wrong, it annoyed me. But I also can see kids loving it, right? Like, “wait wait wait wait wait - this is about Jay-Z. I know that's not right!” And, like, getting it into something they're more engaged with, right? And it teaches sort of the decoding, and to pay attention. So you could create a program around that. I've just turned it into EdTech.
So - and that's like the problem with it, right? Also, it's really hard to do with your class of 50 kids where their interests are so varied, right? So that's why it comes back to me, okay - now this has to be a parent level, an individual level. But it's hard to fund “do something different for every kid.” It's hard to fund “trust the teachers,” right? Because to a certain extent, the answer is “trust you.” Give you funding for two weeks to do something good for the students you work with, because you know them. One of the challenges I always had as a test prep person is if you're bringing me after school, I don't know these kids. I actually don't really have any pull or penalties. So what if you don't do the homework for the after school test prep class? Kids will figure that out pretty quickly. “Oh, he can't give me detention. He’s not gonna tell my mama.”
Like, so that's the problem with all of these external folks. Do they have either the confidence or - not confidence, but the goodwill and connection with the student to get them engaged and to do the work? Or do they have the stick of penalties and parents and all of that sort of stuff? And generally speaking, outside vendors don't have either. We haven't had time to develop the relationship to motivate and encourage the students. And you're an outside vendor so you don't have access to the parents or the penalties or any of that sort of stuff. Even the best-intended external vendors are challenged if the students don't have the intrinsic motivation to do extra stuff during their vacation to make up for something they had no control over and disrupted their lives and made things miserable. It’s hard for adults to have the motivation to do extra, you know, given what we just came out of.
30:22 CM:
It’s also - I mean, it’s definitely not good for morale either to constantly feel like you’re behind or you did something wrong and then starting off a school year with that frame, I feel it would be so detrimental. Especially recognizing as we were talking about before, it’s not that kids didn’t do anything for an entire year, it was just different types of stuff. And those different types of things are valuable as well. We can highlight those things. The cynical side of me is, like, “the world’s falling apart, the world’s in flames, it’s terrible.” I also don’t want to communicate a message every single day when I walk in to a group of a hundred and twenty kids over the course of the day, like, “hey, the world sucks and you’re here.” Because that’s not going to be an effective teaching strategy. So this is kind of an interesting question, because a lot of your work has been in test prep, and FairTest, and looking at tests in a variety of different ways to make them more equitable. Is there a place for moving away from the test in general? You’ve written about, like, test-optional universities and how that can be good, and some issues with that as well. What, if anything - could there be a future that just doesn’t have the test? Is that a possibility?
31:35 AB:
I mean, I don’t know. One can dream. One can work for it. I think that the hopeful Pollyanna side of me says yes. The more realistic side of me says, in all likelihood, we can minimize and diminish the harmful impacts of testing. We can get rid of that seventeen-test week for your students. We can return to the 70s and the 80s, where it was testing every two years rather than testing every year. I don’t know that we can get rid of it, but I think we can minimize the harm and the damage. To me, what’s interesting is I kind of believe in testing. I actually don’t disbelieve in testing. I do believe tests show some things. I don’t agree with the narrative that tests show nothing. I disagree with the narrative that tests show everything - which is almost how we’ve been using them, right? So a lot of my work is in admissions tests, right, like the SAT, which is scored on a 400-1600 scale, in ten-point increments, where a lot of noise is made around the difference between a score of 1000 versus a score of 1010. Which is an absolutely irrelevant difference, according to the test makers. Right?
In K-12, testing is often a little bit better than that, where you have these proficiency levels. Right? I actually do think there’s some meaning in going, “okay, that person’s a 1, not a 5.” That means we expected them to do a certain level of work and they didn’t - as long as we don’t turn that into too much. Does a 1 mean they didn’t try? Or does a 1 mean they don’t know how to add, and we expected them to know how to add? Right? And what are we telling the student who got the 1? Often the problem becomes how we interpret those scores, how we convey those scores, and what we say to people who get those scores. Right? To look at somebody who got a 1 and say, “Oh, you can’t do first grade work,” as opposed to, “You blew that test.” We don’t do it anywhere else. We don’t take a kid, give them a piano test, and say, “Oh my god, you’ll never be a musician.” It’s like, no, you sucked at the piano! Okay? And if you value piano playing, you’d better work on that! My kids can play piano, and I can’t. I’m literally okay with that. I have a dream of playing piano, but I will never actually work at it, so I’m okay with having no ability to do it. So that’s what I think is lost in educational testing: the nuance of, like, this skill - you displayed that this is not a skill you currently possess. We think you should possess it. So you probably should work at it. Right? And I think if that were more the conversation, it would be healthier.
34:44 CM:
That makes a lot of sense to me, and it makes me think of - and I have to be very careful how I word this, because this could go haywire really poorly - but Susan Engel, an author and researcher who used to be on our board, convinced me that we should actually be testing more things, but in smaller increments. So for example, testing “do you feel comfortable at school,” testing “do you feel safe at school” - these other things, in addition to academics, to get a better read. And I was, like, “Well, how do you even measure that?” And she said, “Well, you pick up a Time magazine every year or whatever, and it has some new study about how kids are more engaged at home than ever, and you read that and you don’t really question it. Someone did a study of that, someone figured that out. And you’re going based off of that research, so why can’t that be applied to school?”
Again, I have to be very careful in saying that, because it could lead to manipulation, it could lead to a billion more tests, and with the way that it’s funded - there are a lot of different issues that could go into that. But it could also force schools to pay attention to some other things instead of being gauged only on their academic ability.
35:41 AB:
Not off their academic ability - off a narrow swath of academic performance. Right? Because we’re not sitting here talking about, like, science performance, right? For the most part we’re talking about reading, maybe grammar, and then math. So that’s part of the problem: we’re using a narrow swath of the curriculum and drawing broad conclusions. We’re using a narrow sampling of the curriculum to develop an estimate of ability and performance, and then drawing conclusions about students, school systems, whatever, based on that narrow sampling and an estimated score.
36:32 CM:
And also just the ability to understand how to take a test. Especially as you get older, even if you don’t necessarily know the answer, figuring out how to answer it in a proper way - that’s the reason why, like, a rich kid can go to some crazy tutoring camp and all of a sudden have an immaculate score.
36:50 AB:
Which, I mean, it ain’t hard to find me on the internet listed at $400 an hour. So people pay money to do it - I mean, if it can be tested, it can be taught. And someone who spends all their time and energy decoding the test will know things about the test that others won’t. Test prep will always work to some extent. The question is, how much can a test score be impacted in X amount of time? That’s the only real question. I can’t take a twelfth grader who’s never been in any kind of school whatsoever - well, not a twelfth grader, at that point we’re really talking about a seventeen-year-old, right? Because if you haven’t been through the grades, you’re not a twelfth grader. If you took a seventeen-year-old who’s never been through any schooling, you can’t give me six weeks to make that up and get him an X score on a test. But if you give me six years, one on one, I could probably make a huge impact. Give me two years, I could probably make an impact, right? So anything that can be tested can be taught. It’s really just a question of time, and I think that’s the one big part of this narrative that’s left out.
There is a time-driven - there is a dedication to precociousness, right, and age-related performance that is built into our educational system that COVID has disrupted. But that’s really often what we’re measuring, right? Think about, like, gifted and talented - that really just means precocious, right? But we’ve all had kids who walked faster, who talked faster, who did all those sort of things. They didn’t get tracked for life, you know? So the kid who tests better faster, or reads faster, gets tracked for life in the accelerated program? So there’s a lot of these time-driven, age-driven things that are baked into what we’re doing that are problematic. Because I’m not sure that they mean much more than “okay, did that faster.”
38:50 CM:
Yeah, yeah, and the labeling as well, and the weight those labels can carry. I guess the only other question I have for you, then, would just be: what’s next? What are you looking at in your work, both at FairTest and in your own personal interests? What are you pushing for?
39:07 AB:
So these days, the work is the same as it has always been: trying to influence policy and practice around testing to make sure it is as minimally harmful as possible. We were working hard to get opt-outs to happen around this year’s testing and to get the federal government to grant broader waivers for testing. Now the next phase is going to be how those test scores are used, and making sure that there is thought and care and nuance around the use of scores, if they’re used at all.
There’s one article I read from a psychometrician - it was actually Andrew Ho out of Harvard - who gave, I want to say, an eight-point checklist of the things that need to be done to use test scores appropriately. Which was funny to me, because the article was basically, “Here are eight things that you need to do in order to use test scores properly this year, so let’s test and do these eight things.” And I was like, because you have a list of eight things, let’s not test at all. He’s one of the psychometricians I actually agree with - he has nuance and thoughtfulness around the use of testing. His conclusion was, “Yes, go ahead and test, and I’m going to trust you to do this careful use of test scores.” I’m a little bit more of a cynic: “Yeah, y’all will never do those things.” If you did them, I’d probably be okay with it. But I’ve seen too many school districts, and they will never use the scores that carefully and with that nuance, so let’s just not test. Right? That ship has sailed. So the question now is holding districts and the federal government and different places to account around the use of test scores, right? And what are we trying to say with the use of test scores?
I feel like it was the IELTS that just came out - or maybe it was Illinois that just published their test score data. I haven’t dug into the report yet, but, oh, surprise surprise, there was a drop during the pandemic. And I’m just going to go in and look for the caveats, right? Like, 90% of the students in the study didn’t actually finish the test! There are going to be all of these weird caveats, so it’s like, why are you even reporting out this data if 80% of the students did it in twelve seconds? And I’m not saying that’s actually in the report, but I expect to find things like that: okay, this data is weird, it’s influenced by the pandemic, some of the kids did it at home, some of them did it in the building, wearing masks, socially distanced, and I’m not sure that this is meaningful data. And then, when you take all of that into account, a five-point drop - that’s not bad at all. So that’s going to be the work. The work is going to be: since they have tested in a lot of places, how do we show what the scores really mean beyond the big headline of “there was a drop”?
42:12 CM:
Thank you again for listening to the Human Restoration Project’s podcast. I hope this conversation leaves you inspired and ready to push the progressive envelope of education. You can learn more about progressive education, support our cause, and stay tuned to this podcast and other updates on our website, humanrestorationproject.org.