Assessing the Un-Testable

I think I mentioned the Talking Right Past Each Other Paradox in my last post. It’s the problem that some education researchers see where they want to do research that helps teachers, but they feel like teachers aren’t reading their work. At the same time, teachers have problems that they could use some help with—and it’s nice to have someone with lots of time to think about your problem give suggestions. But the suggestions don’t always come. Hence, Talking Right Past Each Other.

I had an encounter with said Paradox the other day in one of my classes. We were reading an article that another grad student (Kyle) picked out about “epistemic cognition.” If you feel like spending a half hour with your dictionary, by all means skip my so-so definition of epistemic cognition. If you’d rather not, here’s what I think it is: what people think about knowledge and learning.

The article was about the problems researchers have with figuring out exactly what people think about knowledge and learning. Researchers want to figure this out because they’d like to have a clear trajectory of the development of people’s thoughts about knowledge and learning, tracked as people get older and learn more about particular subjects. Researchers would also just like to be able to assess what people are thinking so that they can help them think better things. Thoughts about knowledge and learning have a lot to do with how you actually learn, so it would be nice to know what you’re thinking about what you’re learning to help you learn it better.

All of this makes sense. From the article, I could also clearly see how the researchers were having problems with their ways of assessing epistemic cognition. I won’t go into that here (though it’s worth looking up—it was interesting: Greene and Yu, “Modeling and measuring epistemic cognition: A qualitative re-investigation”). But this was about as far as my thinking went. First, I agree that it’s useful to know what people think about knowledge and learning. Second, I see how those researchers could be having a hard time.

We were wrapping up the class when Kyle made a comment that jogged my brain. To close the discussion, he said something about how he hopes that researchers figure out how to measure epistemic cognition. I hadn’t had that thought. So I asked him why—why did he hope that researchers figured out how to measure epistemic cognition? His reply was so that he could do a better job of teaching. He wants his students to have nuanced and well-rounded ideas about knowledge and learning, and he felt like having a research-proven, efficient test for epistemic cognition would be helpful.

Somehow this didn’t make sense to me. I mean, it does make sense—I suppose a test for epistemic cognition would be awesome. But I was having two problems. First off, I couldn’t picture what a closed test of epistemic cognition—one that you could administer within one day—would be like. I really don’t know enough about designing epistemic cognition assessments to know, though. The second problem was the bigger one. Even if someone did come up with a test that measures epistemic cognition, I couldn’t see myself using one in a classroom in a natural way.

It’s not like I didn’t keep track of what my students were thinking about knowledge and learning while I was teaching. In fact, helping my students develop a particular set of thoughts about and attitudes towards math knowledge and learning was a major focus of my teaching. And I totally assessed it. I just didn’t give a one-off test for it.

I did raise these complaints in class. (Case in point of me being a terrible student: getting argumentative right at the end of class.) We talked about them a bit, and then the professor asked Kyle whether he assessed epistemic cognition when he was teaching. Kyle said he did. How did he do it? Kyle thought for a moment and answered: by seeing what kinds of questions his students asked.

Yeah, I thought, that’s partly how I did it, too. I think that works really well. When kids ask probing, authority-challenging, curiosity-sparking questions, that shows they have top-notch epistemic cognition. (Not sure if “top-notch” is a correct measure of epistemic cognition, but oh well.) Kyle seemed to think that this way of measuring epistemic cognition worked really well, too.

But kids don’t ask their awesome, top-notch-epistemic-cognition questions all at the same time. They don’t even do it all the time, even if they do have that cream-of-the-crop epistemic cognition. That’s just not how it works. Sure, the fewer awesome questions they ask, the more likely it is that they think that knowledge is something you pour into brains. (Or that the teacher’s class isn’t open enough to students’ ideas.) But there’s also a personality thing. And the fact that asking awesome questions isn’t easy, even if you do have glorious epistemic cognition. And the fact that kids strut their epistemic cognition stuff in the context of class itself, not on a test that’s just about epistemic cognition.

So can you—should you—turn Kyle’s question-asking way of measuring epistemic cognition into a closed assessment? Would it still work in a real-live classroom full of real-live kids? If I’m trying to teach my students robust ways of thinking about knowledge and learning, is closed, fast assessment the part I really want help with? I don’t know! But here we go again, Talking Right Past Each Other.

I’m not saying that there’s anything wrong with researching a tool for measuring epistemic cognition. It’s an interesting problem. But I don’t know if it’s the most interesting question about epistemic cognition from a teaching perspective. I think the discussion we had, as thoughtful folks straddling the divide between teaching and research, demonstrates that very well.
