Being a great lecturer doesn’t make you a great history teacher.

That’s something I wish I had known going into teaching.

I spent my first year trying to be engaging — better slides, better stories, better delivery. Students paid attention. Test scores looked fine. Evaluations were solid.

But when I asked students to explain why something happened — not what happened, but why — I got blank stares. They remembered everything. They understood nothing.

Most of education still rewards that model. Engaging delivery. High test scores. Great evaluations. But none of that tells you whether a student can actually reason through an ambiguous situation and come out the other side with a defensible position.

We tested this. Using adaptive simulations where students have to interpret evidence, synthesize competing perspectives, and defend their reasoning — not pick from a list:

Interpretation and synthesis scores went from a 69 to an 80. A D to a B. In six simulations. Not six months. Six simulations.

What takes a traditional curriculum 40 weeks to achieve, we did in four. The difference isn’t better content or better lecturing. It’s that the simulation forces the student to do the thinking — and the AI measures the quality of that thinking at every step.

Facts don’t create skills. Decisions do.

DM me a course. I’ll show you what your students’ reasoning actually looks like — and where it breaks down.
