"First problem: define intelligence."
Any argument that begins with a demand for a definition of an uncontroversial term has already fallen on its face. Same for "who gets to decide" in all its incarnations.
Intelligence is the ability to process information, to discern patterns, to think in abstractions.
Your hostility to the idea seems extreme. Care to share why?
Edit: "At a time when there's confusion between "artificial intelligence" and "machine learning", this is not a trivial question."
Software development is a steaming pile of neologism manure, with imprecise terminology replacing the precise because the new terms sound "cool." You cannot draw any conclusion from this more profound than "programmers are compulsive conformists." The best I can venture is that ML is a subset of AI.
Let me give you an example ... "unfinished work" is now called "technical debt."
"At a time when there's confusion between "artificial intelligence" and "machine learning", this is not a trivial question."
This is one of the places where you can help me with this idea. My work with computerized things was heavily about controlling things. The code is subject to rigidly defined rules. There will never be a HAL telling Dave, "I'm sorry, I can't do that." A computer becoming sentient is fantasy in my mind.
There is software that incorporates heuristics which could be considered to be learning, but I don't know about a relationship to intelligence. Do chess-playing computers actually learn, or do they just have the power to map possibilities in a timely manner, thanks to ever-increasing computing power, and incorporate the results heuristically?
This is a bit astray from the original article, but what are your thoughts on artificial intelligence? I put the emphasis on "artificial" and don't conflate it with intelligence in a sentient being.
I would not be so sanguine about machine sentience. Right now software is structurally unsuited for it; software executes in response to some event, be it a user operation or a signal of some kind, runs whatever handler is triggered by the event, executes in a deterministic fashion (or, maddeningly, unpredictably), and finishes.
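To make that run-to-completion shape concrete, here's a minimal Python sketch; the event kinds and handler names are invented purely for illustration:

```python
# Minimal event-dispatch sketch: wait for an event, run the handler it
# triggers to completion, then go back to waiting. Nothing "thinks"
# between events; state persists only where handlers explicitly store it.
from queue import Queue

def on_click(payload):
    print("clicked:", payload)

def on_signal(payload):
    print("signal received:", payload)

HANDLERS = {"click": on_click, "signal": on_signal}

def event_loop(events: Queue):
    while True:
        kind, payload = events.get()   # block until something happens
        if kind == "quit":
            break                      # ... and finishes
        HANDLERS[kind](payload)        # deterministic, run to completion

q = Queue()
for event in [("click", "button-1"), ("signal", "SIGTERM"), ("quit", None)]:
    q.put(event)
event_loop(q)
```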
If you have not heard of John Horton Conway and the Life game, you should look it up. Even the most entry-level programmer can implement it (I wrote a version on a TRS-80 with Cassette-BASIC), and it does seriously Wonderful Things. It's a grid, with a clock, and on each tick the state of each cell in the grid changes from off to on or vice versa depending on the number of neighboring cells that are on.
It's an example of a cellular automaton.
The point is that complex behaviors emerge from a supremely simple set of rules. Consciousness is a complexity that emerges from the firing states of neurons, which have rules more complex than a flat grid but ... well, you get the picture.
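For anyone who wants to see just how simple "supremely simple" is, here is a minimal Python sketch of Life; the set-of-live-cells representation, the function name, and the glider coordinates are just one convenient choice:

```python
# Conway's Game of Life, minimal version. Standard rules: a live cell
# survives with 2 or 3 live neighbors; a dead cell is born with exactly 3.
from collections import Counter

def step(live):
    """Advance one clock tick; `live` is a set of (x, y) cells that are on."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: five cells that crawl diagonally across the grid forever.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))   # the same shape, shifted one cell diagonally
```

Fifteen lines of actual logic, and out of them come gliders, oscillators, and patterns that build other patterns.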
Machines do learn. Chess-playing machines can, yes, calculate vast permutation sets, but they also remember mistakes in self-modifying memory; if we insist on looking at everything as bits and data it's easy to say it can never exhibit Complexity ....
.... but then, we don't experience synapses, either.
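As a toy illustration of "remembering mistakes", here is a tiny sketch; the feature names, weights, and update rule are all invented, and a real engine is vastly more sophisticated, but it shows the shape of a self-modifying evaluation:

```python
# Toy self-modifying evaluator: after each game, nudge the weights of
# the features that drove the result. This only demonstrates the
# "remember mistakes" idea, not how any real chess engine works.
weights = {"material": 1.0, "mobility": 0.1}

def evaluate(features):
    """Score a position from a dict of feature values."""
    return sum(weights[name] * value for name, value in features.items())

def learn(features, outcome, rate=0.01):
    """Adjust weights after a game; outcome is +1 for a win, -1 for a loss."""
    for name, value in features.items():
        weights[name] += rate * outcome * value

# After a loss in which mobility was overvalued, its weight shrinks:
learn({"material": 0.0, "mobility": 5.0}, outcome=-1)
print(weights["mobility"])   # 0.05 -- the "mistake" persists in memory
```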
Be. Very. Afraid.
It will probably begin with simulated consciousness, which means machines that are never idle, always running, their processes distributed across vast numbers of ... cells.
In aviation the pilot is king or queen and can override automatic functions (a recent violation of that principle demonstrated its wisdom). But there was a case where it is believed that a pilot purposefully flew a commercial aircraft into the ground. Could "no, I can't let you do that" be implemented? Yes, it could, but mistrust of computers is greater than mistrust of pilots.
How can self-modifying code be certified where the safety of human life requires certification? It could already be happening in areas I have no visibility into, and there are valid arguments in some applications for crowning the computer king, but I am very afraid of that for reasons I don't need to describe to you.
You may have read https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Scream
I can't remember when I first played with that game of life. I think it was with a Commodore 64. I didn't think about it being a case of cellular automata, but now that you mention it...
I doubt that learning software would be allowed to modify its rules while actually in flight.
And computers don't exhibit suicidal tendencies like people do.
A pilot crashed a passenger plane? That was rude.
There is (political) controversy about this https://www.historicmysteries.com/egyptair-flight-990/
Preventing what happened on Flight 990 (assuming that it was an intentional crash) would require a major change in philosophy. When the pilot dropped the paddles, he took the computer controls out of the picture. The fix would be computer monitoring with the ability to overrule the pilot, where there is supposed to be no such capability. That disconnect is in hardware and is a specifically tested flight-safety function. A modification to change that (and it would be a major one) would have visibility.
When computers become more trusted than humans it will truly be a brave new world. Computers smarter than humans would be that. I do worry.
I think I was 14 the first time I read that Ellison piece. And then of course there're Colossus and SkyNet.
OTOH there is Neal Asher, whose stories take place centuries after a Quiet War in which the AIs bloodlessly took over everything and humanity entered what we would regard as a golden age: life extension, no war within the Polity (hostile aliens, though), a world where an AI executes on quantum crystal and a planetary administrator can be hidden in an ashtray.
The AIs do have emotions, they have incomprehensible priorities, and, well, there are some smashing good ideas. Also a hell of a lot of military SF, which isn't my cuppa, and a rogue AI called Penny Royal.
First of the series of three: https://www.amazon.com/Dark-Intelligence-Transformations-Book-1-ebook/dp/B07H51PRSM/ref=sr_1_1
Rather timely article here:
https://www.washingtonpost.com/opinions/2022/08/31/artificial-intelligence-worst-case-scenario-extinction/
The point about how poorly we predict the future is spot on. My background is limited to trying to make software that does something specific and constrained bug-free, which is challenge enough for me. I really have no idea about creating a generalized intelligence in software. The thought does occur that it would have an equivalence to genetics, since new software often depends on the old functions it is built upon, and a bug latent in ancestor code is revealed by the new software that uses it.
An issue made more worrisome when you consider that militaries are actually seeking autonomous warfare capability. What could go wrong?
"Uncontroversial"? In an era of machine learning vs artificial intelligence? When you want to "fix software"? As someone trained in maths, but also as an experienced systems analyst, I always start by clarifying what people mean by the terms we're discussing - it saves a lot of misunderstandings.
I do agree about the problems with software development, though - run by salesmen rather than developers, to a large extent. Currently we have "machine learning" that is totally not intelligent, because it's entirely determined by the data it's given to learn from. But the philosophical & neuroscientific questions that arise from asking whether we can develop genuinely intelligent machines are fascinating - and "what do we mean by intelligence?" is part of that, related to the hard question of "what is consciousness?".
That's an interestingly limited description of intelligence that you offer, though.
When you talk about "processing information", what do you include/exclude? Are you limiting that to information that can be written down, categorised, counted? Reading body language, for example, is a key human skill but it's a whole new dimension in itself, hard to measure without including a lot of cultural bias.
Patterns, ditto - if you talk to someone who's deeply aware of the ecosystem they inhabit, the patterns they perceive will be completely different to anything you could use in an IQ test: subtle, complex and ever-shifting. That habitual depth of perception bleeds through into every aspect of their thought, too.
How exactly can you measure people's ability to think in abstractions? Because what's an obvious superficial fact to one person is an abstraction to someone else. This whole discussion is a case in point.
As a mathematician, it annoys me when other would-be scientists (psychologists for example) misuse the tools provided and then claim authority for their crackpot theories. IQ testing is part of the whole eugenics thought-pattern: it assumes the superiority of the Western mindset, and yet it doesn't even respect the rules that mindset imposes, breaking them whenever it suits. Psychologists' use of statistics is famously "the way a drunk uses a lamp post: more for support than illumination". Attempting to apply a measure to an unmeasurable space is a bit special, though, even for them.
I have a friend with "learning difficulties" whose ability to work in 3-D is astonishing: he can look at an object like a car with a rusty inside wheel arch, and cut & weld a replacement with its complex sets of curves, faultlessly. And yet he can't read well enough to do one of your tests. His information processing, pattern perception & even thinking about abstractions are phenomenal in some ways, just not in the ways that suit IQ testing. His way of tracking how much cash he's got left in his bank account is dizzying - but it works.
I score highly on IQ tests, as long as I remind myself to give the answer the test compiler expected, rather than the many other possibilities that spring to mind. Offered a chance to discuss the results, as I have been when people were trying to develop new ones, it became clear that the people designing the tests wanted to reward people who think just the way they do, and to reject anyone who thinks differently. That's only human, of course, but it's a big problem if you're claiming universal validity for your test.
I came through college at a time & in a place where the leading mathematicians of the day were developing chaos & complexity theories, and I've carried on following developments in this topic. It's taken me into all sorts of areas of study and made me realise just how limited Western thought has been over the last couple of hundred years. It's achieved some wonderful things but only in limited areas, and if we are to move forward, we need to understand the gaps in our thinking. Assuming that everything that matters can be measured & counted is one of those gaps - some things are just not measurable, in maths or in real life.
ps - steaming piles of ideas manure are a vital part of a healthy ecosystem of ideas. The dead ones have to go somewhere to be broken down ready for reuse!
"That's an interestingly limited description of intelligence that you offer, though."
That wasn't a description, that was a few examples.
I come here for discussions, not for arguments; it seems you want to take almost everything I write as the springboard for an argument. I'm not interested in another dormitory bull session debate on what consciousness is or on the deficiencies of Western thought; I live in the East, and they don't seem to have figured out much more than we have.
A pity; this is an area that interests me a lot. I too am degreed in mathematics and have studied Devaney and Kauffman since that introductory article in Scientific American (before it turned into another Popular Mechanics).
The comparative measurement of intelligence doesn't require an infinite number of dimensions, and it can't account in generality for anomalies like savants. It can compare Weyl and Einstein but cannot account for a Galois.
But then, that's not what it's for.
Sorry, I never had a "dormitory bull session debate" so I don't understand the reference. Intelligence is a really interesting field, though, and the overlap between maths & psychology is fertile ground as far as I'm concerned.
I'm curious: why do you think we need to measure intelligence?
I'm not trying to answer for Chris, but I can tell you where it mattered. I was an active-duty Marine in the 1960s. At that time, we had no choice in the occupational specialty we would be assigned to. The ability to independently perform while under stress was critical. The decision process involved testing.
There was the GCT test, which was a rough equivalent of an IQ test. MENSA will accept, or at one time did, a Navy GCT score. There was also a battery of aptitude and psychological tests. All of those were used in the decision-making process. Notably, every MOS had a minimum GCT requirement, but it was not the only consideration. If you were qualified for a hard-to-fill field, you'd probably be assigned to that rather than one you scored higher in. After that, the training was set up for high attrition; some programs washed out nearly 50%, with those people reassigned to another field.
It was a highly successful method for getting the right people into the right jobs. The reason for requiring both a base level of general intelligence and aptitude should be obvious, since that zeroes in on smart for what, and able to do what. Over the years I saw people who were smart enough to get a degree in engineering, computer science or mathematics who ended up in management because they did not perform well in their field. The ability to apply knowledge is a big deal. I don't know if that can be predicted with any test other than actual performance.
Sorry that I rambled a bit, but history has shown that a general level of intelligence is required for success in some things, although I strongly think that a sharper focus is needed after that. You don't need to go to the level of idiot savant to see that ability is not level across a single-number measure of intelligence like IQ.
Every century or so the world coughs up a mathematician whose mind is off the scale. It's not about metrics; these people see relationships and make advances that change everything. Two interesting consistencies: (1) they are all men and (2) they do their best work before the age of 20. I mentioned Galois; he let himself be lured into a duel and was killed before he could produce any more.
Then there are people like Richard Feynman, Enrico Fermi, and Hermann Weyl (who assisted Einstein), whose brilliance allows them to see interconnectedness that others can't. At the first atomic bomb test Fermi walked around the tower tearing up a pad of paper; after the explosion he made eyeball estimates of how far the scraps had been blown, and in his head calculated the bomb yield. His estimate came within a factor of two of the measured value.
I wonder what it's like to live in a head like that.
To refute bigots?
That's a stupendously disingenuous response.
If I were really being difficult, I'd suggest that this means you accept that there's no rational reason to try to measure intelligence.
Try again?
No.
You're posing a lot of pointless questions that I am not going to engage with. You are being aggressively competitive with someone uninterested in competing.
Why measure intelligence?
Why not?
Because it's really expensive & tells you nothing useful. (Even you couldn't come up with a use case you thought would convince me.)
Because the results are unreliable, and that can be disastrous. (Never mind the wastefulness of really bright kids who fail; ever had a really incompetent boss who'd passed all the tests but knew nothing worth knowing?)
and finally
Because intelligence isn't measurable. (However you rejig the tests, they'll never be reliable. Sad but true - sometimes reality is a bitch like that.)
And by the way I notice that you accuse me of being "competitive" when you can't come up with answers to my questions. I've been polite throughout, but I do reply to the points you've made. If you see that as "aggressive", it's not me that has a problem.
" IQ testing is part of the whole eugenics thought-pattern"
This is unpardonable hyperbole; by that logic, intelligence testing is Heinrich Himmler and Zyklon-B. I really think you should dial that WAY back.
"he can't read well enough to do one of your tests."
One of MY tests? I'm not involved in evaluating anyone other than potential software hires. Is there some reason you are making this discussion so personal?
"... if we are to move forward, we need to understand the gaps in our thinking. Assuming that everything that matters can be measured & counted is one of those gaps"
You're about a century behind in your education. It is fundamental to science, and has been for a century, that there are limits to what is knowable. Heisenberg's Uncertainty Principle shook up science profoundly and required a fundamental rethinking that went far, far beyond measurement of position and momentum.
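For reference, the textbook statement of the principle, in its standard form (nothing here beyond the well-known inequality):

$$
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
$$

where $\hbar$ is the reduced Planck constant: the product of the uncertainties in position and momentum has a hard lower bound, however good the instruments.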
Some suggested reading:
https://www.amazon.com/Quantum-Theory-Schism-Physics-Postscript-ebook/dp/B00CDUUCEO/ref=sr_1_1
https://www.amazon.com/John-Bell-Foundations-Quantum-Mechanics/dp/9810246889/ref=sr_1_1
And before you object that this is irrelevant to the discussion, the calcium ion gates in the synapse are small enough to have quantum mechanical properties and while the debate over the role of the quantum in consciousness is unsettled, it is far from rejected.