Not all simulations are built the same.
Most show you what students decide. They should show you how they think.
Two students walk into a nursing simulation. Both choose to escalate a patient to the ICU. Same decision.
One recognized the early warning signs and weighed them against three competing diagnoses before acting. The other panicked and escalated because they didn’t know what else to do.
Same action, opposite competence. Most simulations can’t tell them apart. That’s the ceiling, and every program doing experiential learning has hit it.
If your simulations are multiple-choice, you get scale but you lose depth. Students click an answer. You see what they chose. You have no idea why. If your simulations are in-person (standardized patients, mock trials, classroom observations), you get depth but you lose scale.
A nursing professor told me she runs in-person simulations that are the best assessment in her program. But she can only evaluate a handful of students at a time. She knows it works. She just can’t do enough of them.
That’s not a pedagogy problem. It’s an infrastructure problem. And it’s one nobody has solved, because until recently the only thing that could assess open-ended reasoning in real time was a human brain.
We built something that does it at scale. Students respond in their own words with no answer choices and no scripts. The AI assesses how they reason at every decision point: what evidence they used, what they ignored, how they adapted when the situation changed.
And it does this for every student in the cohort simultaneously.
Faculty don’t get a score. They get a map of each student’s reasoning — where it was sharp, where it broke down, where it improved. Across 30 students or 300.
The two nursing students who both chose to escalate?
Now you can see the difference. The one who reasoned through it moves forward. The one who panicked gets targeted coaching on diagnostic reasoning before they’re anywhere near a real patient.
For the first time, you can see inside the thinking, not just the outcome.
DM me a course or program. I’ll build a simulation in 48 hours so you can see what granular reasoning data actually looks like.