Peaceful Dave:

The culling process that I mentioned was not necessarily about eliminating people who were unsuited to the field. The tests were curved, and each question was weighted by its history. If I missed a question considered harder (one more people historically got wrong) while someone else missed an easier one, the weighting could result in my acing the test while the other person got a 99. The people scoring at the bottom were culled, though it was a bit like ranking the Apostles. The people who were culled might have been capable, but the objective was a heartless best attempt to find the "most" capable.

The psychology part was (I think) pertaining to suitability for combat. Most of the military people who went to Vietnam (my war) never fired their rifle. Many of those people were subject to incoming fire and were wounded or killed even though they were not active combatants. You didn't want somebody who was likely to have a cold rifle barrel at the end of contact because he was lying low. Lying low during a mortar attack and lying low during a small-arms firefight are different: one is logical, the other is a hazard to your comrades. Psychology probably has some validity as a predictor, some.

While I would never argue that tests are flawless or free of bias, there are times when they are appropriate because they are the best that we can do.

Jacky Smith:

People who can design THAT level of testing get my complete admiration - and probably had experience on the front line as well as book learning. Marking those tests was as difficult as designing them, too - they must have been the result of many iterations of adjustment, re-testing the test & checking what happened in reality.

But that's a test for a very specific situation. General intelligence tests have to be more, well, general. And it's so much harder to evaluate how accurate they are - there are no validation tests that can be guaranteed to be correct and objective, whereas your military test had one very specific and very objective deciding factor: were the guys who passed able to operate under fire?

There's a big difference between using psychological testing in such a very specific situation with such a specific success criterion, & using data from thousands of tests to make grand statements about whole populations.

When you do that, you can make reasonable claims about the statistics you generate, but you need to know the background of how the tests were carried out in some detail to evaluate their real value. I've seen many examples of "standardised" tests that could easily be misinterpreted, and I've seen papers showing that people who had practised those tests & been given detailed feedback on their results were able to improve their scores significantly. In those circumstances, well-prepared candidates are going to appear more intelligent, and the results are going to be completely unreliable while still having the veneer of respectable science.
