Why negative feedback is better for improvement (or is it?)

In the years after World War II, a group of Israeli Air Force instructors believed they had discovered a hard truth about human nature. They noticed something strange during flight training. When a pilot executed a perfect maneuver and received praise, the next attempt was usually worse. But when a pilot performed badly and was harshly criticized, his next flight tended to improve. It felt like solid evidence. Praise seemed to weaken people, while criticism toughened them up. Many instructors became convinced that tough feedback was the only way to improve performance. The conclusion was neat, logical, and entirely wrong.

When the psychologist Daniel Kahneman (2011) visited the base years later, he realized what was really happening. The instructors had not stumbled on a new insight into motivation. They had simply been tricked by statistics. What they were seeing was something called regression to the mean. It sounds abstract, but the idea is simple once you see it. Performance always varies. A pilot who flies exceptionally well one day probably had a bit of luck on his side. On the next day, when luck returns to normal, his performance naturally falls closer to his personal average. The same happens in reverse. A pilot who has a bad flight will often do better next time, not because he was scolded into improvement, but because it is unlikely that things will go that badly twice in a row. So the pattern that looked like cause and effect was really just randomness evening itself out.
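To see how little the feedback itself mattered, here is a minimal Python sketch (the pilot count, skill level, and noise level are made-up numbers, not anything from Kahneman’s account). Every pilot has the same fixed skill, every flight is that skill plus random luck, and praise or criticism changes nothing at all:

```python
import random

random.seed(42)

N_PILOTS = 1000   # hypothetical number of pilot observations
SKILL = 70.0      # every pilot's true average score (arbitrary units)
NOISE = 15.0      # standard deviation of day-to-day luck

def flight():
    """One flight: fixed skill plus random noise, nothing else."""
    return random.gauss(SKILL, NOISE)

after_praise, after_criticism = [], []
for _ in range(N_PILOTS):
    first, second = flight(), flight()
    if first > SKILL + NOISE:        # unusually good flight -> "praise"
        after_praise.append(second - first)
    elif first < SKILL - NOISE:      # unusually bad flight -> "criticism"
        after_criticism.append(second - first)

print(f"average change after praise:    {sum(after_praise) / len(after_praise):+.1f}")
print(f"average change after criticism: {sum(after_criticism) / len(after_criticism):+.1f}")
```

Run it and the instructors’ pattern appears anyway: flights after a praised performance typically come out worse, and flights after a criticized one typically come out better, even though feedback never entered the model.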

Imagine a different setting. Suppose you have one hundred students, each asked to guess one hundred coin flips. A few will do astonishingly well. Maybe one of them correctly guesses sixty-five out of a hundred, far ahead of everyone else. It looks as if skill, not luck, is at work. However, if that same student tries again, the odds almost guarantee a return to somewhere near fifty correct guesses. The “gift” vanishes, because the first streak was never skill at all. That is regression to the mean at work. When something extraordinary happens, good or bad, what follows is usually more ordinary. Yet people rarely see it that way. Statistics does not care about our emotions; it quietly reminds us that extremes rarely last.
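A few lines of Python make this concrete (the seed is arbitrary; the student and flip counts follow the example above):

```python
import random

random.seed(7)
STUDENTS, FLIPS = 100, 100

def score():
    """Correct guesses on FLIPS fair coin flips: each guess is right
    with probability 1/2, so the score is Binomial(FLIPS, 0.5)."""
    return sum(random.random() < 0.5 for _ in range(FLIPS))

first_round = [score() for _ in range(STUDENTS)]
print(f"best first-round score: {max(first_round)}")

# The top scorer has no real skill, so the retest is just a fresh draw.
print(f"the 'gifted' student's retest: {score()}")
```

Across repeated runs the best first-round score typically lands in the low sixties, and the retest lands near fifty. That is the whole phenomenon, stripped to its bones.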

This misunderstanding runs through much more than pilot training. Teachers, managers, coaches, and investors are all tempted by the same illusion. A student who aces an exam is praised, then earns a lower score next time. A manager applauds a high-performing employee, only to be disappointed by a later dip. A stock that skyrockets in one quarter sinks the next. It feels like praise spoiled the student, pride weakened the worker, or confidence ruined the investor’s judgment. In truth, the numbers were just correcting themselves.

Even governments fall for this trick. Picture an intersection that usually records around three accidents a year. One year, by bad luck, there are eight. The sudden spike alarms city officials, who rush to act. They install new traffic lights, lower the speed limit, add cameras, and announce a safety campaign. A year later, the number of accidents drops back to three. To the officials, this looks like a triumph. They proudly report a reduction of more than 60 percent, convinced their intervention saved lives. Yet the truth is harder to pin down. Did the new measures really make the difference, or was the decline simply the road returning to its usual rhythm?
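A quick simulation makes the officials’ dilemma vivid. Here is a hedged sketch that assumes yearly accident counts follow a Poisson distribution with a long-run average of three, and that a year with eight or more accidents triggers alarm; no intervention exists anywhere in the model:

```python
import math
import random

random.seed(1)

def poisson(lam):
    """Draw from a Poisson distribution (Knuth's multiplication method)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

YEARS = 10_000
counts = [poisson(3) for _ in range(YEARS)]  # long-run average: 3 per year

# Years that would alarm the officials, and the untouched year after each.
following = [counts[i + 1] for i in range(YEARS - 1) if counts[i] >= 8]
print(f"spike years: {len(following)}")
print(f"average accidents the year after a spike: "
      f"{sum(following) / len(following):.1f}")
```

The year after a spike still averages about three accidents, the same drop the officials credited to their lights, cameras, and campaign.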

That is why it is worth being cautious with feedback, both in systems and in ourselves. When we try to “fix” extreme events, like a disastrous grade or a particularly bad presentation, we risk chasing noise rather than signal. Not every spike or slump demands intervention. The real challenge is to understand what shapes your average performance, not your highs or lows. Improve the mean, and the extremes will take care of themselves. You will still have peaks and valleys, but they will orbit a higher level of consistency. That, in the long run, is what real improvement looks like.

References 

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.