The Dunning-Kruger effect posits that those who know the least about a specific skill are most likely to overestimate their competence.
Critics of the White House point to the president and his cabinet to illustrate the principle at work. But that’s a joke, rather than a scientific demonstration. The Dunning-Kruger effect isn’t applicable to general competence; the concept involves specific, sharply defined skills.
Academic controversies over the concept are legion, and some are substantive. One mathematician has argued that the effect is a statistical artifact, which is to say that the concept is simply wrong.
I’m interested in the basic idea: self-assessments and objective measures of performance, such as standardized tests, are not the same thing. You’d expect them to diverge, and the patterns in how they diverge are interesting.
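The statistical critique mentioned above is easy to illustrate. The sketch below is a minimal simulation of my own, not any published author's actual analysis, and every number in it (means, standard deviations, sample size) is an arbitrary choice. It generates two independent noisy readings of the same underlying skill, bins people by one reading, and shows that a Dunning-Kruger-like pattern can emerge from regression to the mean alone:

```python
# Minimal sketch: two noisy measurements of one underlying skill can
# produce a Dunning-Kruger-like pattern with no psychology involved.
# All parameters are invented for illustration.
import random
import statistics

random.seed(0)
N = 10_000

true_skill = [random.gauss(50, 10) for _ in range(N)]
test_score = [t + random.gauss(0, 10) for t in true_skill]   # objective test, with noise
self_rating = [t + random.gauss(0, 10) for t in true_skill]  # self-assessment, with noise

# Sort people by their measured test score and split into quartiles.
people = sorted(zip(test_score, self_rating))
quartiles = [people[i * N // 4:(i + 1) * N // 4] for i in range(4)]

for i, q in enumerate(quartiles, 1):
    mean_test = statistics.mean(t for t, _ in q)
    mean_self = statistics.mean(s for _, s in q)
    print(f"Q{i}: test={mean_test:5.1f}  self-rating={mean_self:5.1f}")
```

In the bottom quartile the average self-rating comes out above the average test score, and in the top quartile it comes out below: "overconfident novices" and "modest experts" generated from pure noise.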
All tests are imperfect, but their flaws should not undermine our commitment to testing. You don’t often hear of a surgeon whose patients die on the operating table 90 percent of the time, because methods of assessment weed out people who cannot master certain skills before they reach the operating room. The Navy’s methods for weeding out people who were not competent at specific skills were once draconian. Perhaps they still are.
I also think that those small competencies add up in some way that is obvious, although perhaps not measurable. If you can’t master the basic skills for handling classified information that the lowest ranks are expected to master, you probably shouldn’t be secretary of defense. If you read the Internet constantly and can’t tell science from science fiction, you probably shouldn’t be in charge of public health.