Assessing the Un-Testable

I think I mentioned the Talking Right Past Each Other Paradox in my last post. It’s the problem that some education researchers see where they want to do research that helps teachers, but they feel like teachers aren’t reading their work. At the same time, teachers have problems that they could use some help with—and it’s nice to have someone with lots of time to think about your problem give suggestions. But the suggestions don’t always come. Hence, Talking Right Past Each Other.

I had an encounter with said Paradox the other day in one of my classes. We were reading an article that another grad student (Kyle) picked out about “epistemic cognition.” If you feel like spending a half hour with your dictionary, by all means skip my so-so definition of epistemic cognition. If you’d rather not, here’s what I think it is: what people think about knowledge and learning.

The article was about the problems researchers have with figuring out exactly what people think about knowledge and learning. Researchers want to figure this out because they’d like to have a clear trajectory of the development of people’s thoughts about knowledge and learning, tracked as people get older and learn more about particular subjects. Researchers would also just like to be able to assess what people are thinking so that they can help them think better things. Thoughts about knowledge and learning have a lot to do with how you actually learn, so it would be nice to know what you’re thinking about what you’re learning to help you learn it better.

All of this makes sense. From the article, I could also clearly see how the researchers were having problems with their ways of assessing epistemic cognition. I won't go into that (by all means look it up; it was interesting: Greene and Yu, "Modeling and measuring epistemic cognition: A qualitative re-investigation"). But this was about as far as my thinking went. First, I agree that it's useful to know what people think about knowledge and learning. Second, I see how those researchers could be having a hard time.

We were wrapping up the class when Kyle made a comment that jogged my brain. To close the discussion, he said something about how he hopes that researchers figure out how to measure epistemic cognition. I hadn't had that thought. So I asked him why—why did he hope that researchers figured out how to measure epistemic cognition? His reply was so that he could do a better job of teaching. He wants his students to have nuanced and well-rounded ideas about knowledge and learning, and he felt like having a research-proven, efficient test for epistemic cognition would be helpful.

Somehow this didn’t make sense to me. I mean, it does make sense—I suppose a test for epistemic cognition would be awesome. But I was having two problems. First off, I couldn’t picture what a closed test of epistemic cognition—one that you could administer within one day—would be like. I really don’t know enough about designing epistemic cognition assessments to know, though. The second problem was the bigger one. Even if someone did come up with a test that measures epistemic cognition, I couldn’t see myself using one in a classroom in a natural way.

It’s not like I didn’t keep track of what my students were thinking about knowledge and learning while I was teaching. In fact, helping my students develop a particular set of thoughts about and attitudes towards math knowledge and learning was a major focus of my teaching. And I totally assessed it. I just didn’t give a one-off test for it.

I did raise these complaints in class. (Case-in-point of me being a terrible student—being argumentative right at the end of class.) We talked about them a bit, and then the professor asked Kyle whether he assessed epistemic cognition when he was teaching. Kyle said he did. How did he do it? Kyle thought for a moment and answered, by seeing what kinds of questions his students asked.

Yeah, I thought, that’s partly how I did it, too. I think that works really well. When kids ask probing, authority-challenging, curiosity-sparking questions, that shows they have top-notch epistemic cognition. (Not sure if “top-notch” is a correct measure of epistemic cognition, but oh well.) Kyle seemed to think that this way of measuring epistemic cognition worked really well, too.

But kids don’t ask their awesome, top-notch-epistemic-cognition questions all at the same time. They don’t even do it all the time, even if they do have that cream-of-the-crop epistemic cognition. That’s just not how it works. Sure, the fewer awesome questions they ask, the more likely it is that they think that knowledge is something you pour into brains. (Or that the teacher’s class isn’t open enough to students’ ideas.) But there’s also a personality thing. And the fact that asking awesome questions isn’t easy, even if you do have glorious epistemic cognition. And the fact that kids strut their epistemic cognition stuff in the context of class itself, not on a test that’s just about epistemic cognition.

So can you—should you—turn Kyle's question-asking way of measuring epistemic cognition into a closed assessment? Would it still work in a real-live classroom full of real-live kids? If I'm trying to teach my students robust ways of thinking about knowledge and learning, is closed, fast assessment the part I really want help with? I don't know! But here we go again, Talking Right Past Each Other.

I’m not saying that there’s anything wrong with researching a tool for measuring epistemic cognition. It’s an interesting problem. But I don’t know if it’s the most interesting question about epistemic cognition from a teaching perspective. I think the discussion we had, as thoughtful folks straddling the divide between teaching and research, demonstrates that very well.

There Are No Kids Here

I took a bit of a break from writing because I'm actually not teaching at Saint Ann's anymore! I started a Ph.D. program in math education at UC Berkeley in September, so the past six months have been consumed with homework, readings, papers—that sort of thing. The first thing I'll say about that is that since teaching, I have definitely become a worse student. I hear this is common. If anyone would like to do some research on the terrible students that former teachers become, I volunteer myself as a case study.

I’m writing again for two reasons. First of all, I re-found a great education blog by someone who is also on the other side—Ilana Horn! I was in the middle of a small bout of despair about education writing when I re-found her blog. I have small bouts of despair about things related to education research every now and then, and this one was sparked by what I’ll call the Talking Right Past Each Other Paradox: Education researchers say their main goal is to help teachers, but they feel like their work doesn’t often get read by teachers. My first thought when I heard this was, “Because the researchers don’t blog about it!” And then I remembered that one of them does blog about it! And I realized that it is possible to write something about education research in a human, intriguing, and useful way. And I wanted in.

I’m obviously not ready to do this on even 1% of the same level as Ilana—so that’s not really why I’m writing again. The second, and primary, reason why I’ve jumped back in is basically the same as why I started blogging in the first place—because I’m having teaching problems.

Just like I was totally unprepared to teach math to kiddos when I first started at Saint Ann’s (not the fault of a teacher prep program—I didn’t go to one), I am totally unprepared to teach teaching to college students. This also isn’t the fault of a teacher-teacher prep program, because there isn’t one. I’m really not sure why. It’s not like college students aren’t kids, too, who need care and personal attention. It’s also not like teacher educators have got the whole teaching thing figured out, either.

Anyway, a little back-story to my current teaching problem—I’ve been “TA-ing” and “researching” in an education class for undergrads who are in one of Berkeley’s teacher prep programs. But the other day, I was launched from my cozy position as TA to the much scarier position of actual teacher. The instructors didn’t have anything planned for an hour chunk of class and one of them was going to be late. So they pulled out their secret weapon, the TA, to fill in the gap.

What should I do with 40 pre-service teachers for an hour? I first approached the task like I would if they were 40 high school students with whom I was asked to share something cool and mathy. They’d just read some articles about inquiry learning, so I thought we could do a little math inquiry together. But then I realized that my lesson was missing something essential. I had a great math problem picked out. But they weren’t going to do any teaching. I didn’t have a “teaching problem.”

And this is when the sheer challenge of the task hit me. I needed a teaching problem—but there are no kids here.

If the material of math problems is math, then the material of teaching problems is kids, right? You use math to do math, and you use kids to do teaching. Sure, plenty of math problems you find are missing the “real math”—but if you dig around enough and know what you’re looking for, you’re sure to find something. But, search all you like, you will not find kids in this class.

You may now be wondering whether I’ve been paying attention at all during the last six months of grad school. Yes, I knew before yesterday that there are no kids in education grad school. And, yes, I have been reading my Pam Grossman and I know a bit about “approximations of practice” and that sort of thing. I knew that folks had already identified the lack of kids as a problem in courses about teaching. I guess it didn’t hit me how much of a challenge this really is for developing good activities for pre-service teachers until I had to develop one of my own.

I did come up with something to do with the pre-service teachers. It did not involve the miraculous appearance of kids. Like most spur-of-the-moment, first-time activities run by new teachers (because I’m definitely a new teacher now, as new as I’d be if I switched to teaching English), it didn’t go as planned. I want to try it again, with Round 1 under my belt, and then I’ll write about it. I’m not sure if it was any good.

Until then, though, I just wanted to reflect on this problem for myself and anyone who is interested. I also wanted to ask for help. Does anyone have any good teaching problems? I'm calling them that because it makes a parallel with math problems that's helpful for me. I don't want teaching exercises; I want teaching problems, ones that really make people think and engage with genuine teaching. Just as I tried my hardest not to give kids math exercises when I was a math teacher, I want to try my hardest not to give my students teaching exercises now that I'm a teaching teacher.

It seems like the task might be more difficult, though, because like I said, there are no kids here. That seems like a real problem to me.