Jacky Smith

Most therapists these days regard addiction as an illness rather than a behaviour, but I'm not saying there aren't tendencies - particularly to illness & sometimes even mental illness - that relate to genetics.

But the big general stuff, like "intelligence"? Far too complex to tie to any gene, or even a bunch of genes. Most people who spout off about it can't even define what they mean by it, in any rational sense. Environment, upbringing, what your grandma ate & how much practice you've had at intelligence tests all strongly affect the outcomes, even when the test was designed by someone who comes from the same cultural background as you.

I'm all for applying a battery of aptitude tests to see who might do well with the chance to train for a job, as long as the tests are broadly related to the job and there is a wide enough range of tests to give the employer some sort of general picture. Your example of the military using a battery of aptitude tests is a good one - and even then, half the people who passed the tests didn't make it through the training. But I do wonder whether some of the people who failed the tests would have done well anyway. Did they waste some good applicants? Testing is tricky stuff when you're dealing with people.

There are companies (and worse, schools) that use a "general intelligence test" to evaluate people & that's just silly. I did one for IBM many years ago, got offered the job but turned it down because I didn't want to work with the kind of person who thought those tests were fair. They were so clearly biased that I lost interest. Happy days when you could afford to turn down a job!

The trouble with testing people's psychology is that often it tells you more about the people who designed the test than the people taking it. Medical tests are different, as are the sort of tests you can do on a newly designed widget.

Peaceful Dave

The culling process that I mentioned was not necessarily about eliminating people who were unsuited to the field. The tests were curved, and each question was weighted by its history: if I missed a question considered harder (more people had historically got it wrong) while someone else missed an easier one, I could still ace the test while the other person got a 99. The people scoring at the bottom were culled, though it was a bit like ranking the Apostles. The people who were culled might have been capable, but the objective was a heartless best attempt to find the "most" capable.
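
Roughly, the weighting works like the sketch below - though I should stress that the pass rates, weights and normalisation here are invented for illustration; the actual scoring and curving formula was certainly more involved.

```python
# Illustrative sketch of difficulty-weighted scoring (all numbers made up).
# Each question's weight is its historical pass rate, so missing a "hard"
# question (low pass rate) costs less than missing an "easy" one.

def weighted_score(answers, historical_pass_rates):
    """answers: True where the candidate got the question right.
    historical_pass_rates: fraction of past candidates who got each
    question right (low rate = hard question = small penalty if missed)."""
    total_weight = sum(historical_pass_rates)
    earned = sum(rate for correct, rate
                 in zip(answers, historical_pass_rates) if correct)
    return 100.0 * earned / total_weight

# Candidate A misses only the hardest question; candidate B misses only
# the easiest. A comes out ahead despite both missing exactly one question.
rates = [0.95, 0.80, 0.60, 0.30]  # question 4 is hardest
print(weighted_score([True, True, True, False], rates))  # ~88.7
print(weighted_score([False, True, True, True], rates))  # ~64.2
# Real curving would compress this gap (my 100 vs. the other fellow's 99),
# but the ordering is the point.
```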

The psychology part was (I think) pertaining to suitability for combat. Most of the military people who went to Vietnam (my war) never fired their rifles. Many of them were subject to incoming fire and were wounded or killed even though they were not active combatants. You didn't want somebody who was likely to have a cold rifle barrel at the end of contact because he was lying low. Lying low during a mortar attack and lying low during a small-arms firefight are different things: one is logical, the other is a hazard to your comrades. Psychology probably has some validity as a predictor - some.

While I would never argue that tests are flawless or free of bias, there are times when they are appropriate because they are the best that we can do.

Jacky Smith

People who can design THAT level of testing get my complete admiration - and probably had experience on the front line as well as book learning. Marking those tests must have been as difficult as designing them, too - they had to be the result of many iterations of adjustment, re-testing the test & checking what happened in reality.

But that's a test for a very specific situation. General intelligence tests have to be more, well, general. And it's so much harder to evaluate how accurate they are - there are no validation tests that can be guaranteed to be correct and objective, whereas your military test had one very specific and very objective deciding factor: were the guys who passed able to operate under fire?

There's a big difference between using psychological testing in such a very specific situation with such a specific success criterion, & using data from thousands of tests to make grand statements about whole populations.

When you do that, you can make reasonable claims about the statistics you generate, but you need to know in some detail how the tests were carried out to evaluate their real value. I've seen many examples of "standardised" tests that could easily be misinterpreted, and seen papers showing that people who had practised those tests & been given detailed feedback on their results were able to improve their scores significantly. In those circumstances, well-prepared candidates are going to appear more intelligent, and the results are going to be completely unreliable while still having the veneer of respectable science.
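
To show the shape of that problem with a toy simulation (every figure here is invented - the practice bonus, the noise, all of it):

```python
import random

# Toy simulation: two groups with identical underlying ability, but one
# group has practised the test format and gains a few points from sheer
# familiarity. A naive comparison then "detects" an intelligence gap
# that does not exist. All parameters are invented for illustration.
random.seed(1)

def take_test(practised):
    ability = random.gauss(100, 15)          # same ability distribution
    practice_bonus = 6 if practised else 0   # assumed familiarity gain
    noise = random.gauss(0, 5)               # day-to-day test noise
    return ability + practice_bonus + noise

naive = [take_test(False) for _ in range(1000)]
prepped = [take_test(True) for _ in range(1000)]

print(f"unpractised mean: {sum(naive) / len(naive):.1f}")     # ~100
print(f"practised mean:   {sum(prepped) / len(prepped):.1f}") # ~106
# The ~6-point gap is pure test familiarity, not intelligence.
```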
