At a time when there's confusion between "artificial intelligence" and "machine learning", this is not a trivial question.
This is one of the places that you can help me with this idea. My work with computerized things was heavily about controlling things, and the code is subject to rigidly defined rules. There will never be a HAL telling Dave, "I'm sorry, I can't do that." A computer becoming sentient is fantasy in my mind.
There is software that incorporates heuristics, which could be considered learning, but I don't know about a relationship to intelligence. Do chess-playing computers actually learn, or do they just have the power to map possibilities in a timely manner, thanks to ever-increasing computing power, and incorporate them heuristically?
This is a bit astray of the original article, but what are your thoughts on artificial intelligence? I put the emphasis on "artificial" and don't conflate it with intelligence in a sentient being.
I would not be so sanguine about machine sentience. Right now software is structurally unsuited for it; software executes in response to some event, be it a user operation or a signal of some kind, runs whatever handler is triggered by the event, executes in a deterministic fashion (or, maddeningly, unpredictably), and finishes.
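To make that "event arrives, handler runs, program finishes" shape concrete, here is a minimal sketch of the execution model I mean. The event names and handlers are invented for illustration, not any particular framework:

    import queue

    events = queue.Queue()

    # Each kind of event has a handler; nothing runs until an event arrives.
    handlers = {
        "button_click":  lambda data: print("handling click:", data),
        "sensor_signal": lambda data: print("handling signal:", data),
    }

    events.put(("button_click", {"x": 10, "y": 20}))
    events.put(("sensor_signal", {"value": 3.7}))

    while not events.empty():
        kind, data = events.get()
        handlers[kind](data)   # run whatever handler the event triggers...
    # ...and then the program simply finishes. Nothing is "thinking" in between.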
If you have not heard of John Horton Conway and the game of Life, you should look it up. Even the most entry-level programmer can implement it (I wrote a version on a TRS-80 with Cassette BASIC), and it does seriously Wonderful Things. It's a grid, with a clock, and on each tick every cell in the grid turns on or off depending on the number of cells around it that are on.
It's an example of a cellular automaton.
The point is that complex behaviors emerge from a supremely simple set of rules. Consciousness is a complexity that emerges from the firing states of neurons, which have rules more complex than a flat grid but ... well, you get the picture.
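For anyone who wants to see the emergence for themselves, here is a minimal sketch in Python, assuming the standard rules (a live cell survives with two or three live neighbours, a dead cell comes alive with exactly three; the grid wraps at the edges to keep the code short):

    def step(grid):
        rows, cols = len(grid), len(grid[0])
        nxt = [[0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                # count the eight neighbours, wrapping around the edges
                n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0))
                nxt[r][c] = 1 if n == 3 or (grid[r][c] == 1 and n == 2) else 0
        return nxt

    # Seed a "glider", one of the famous emergent patterns.
    grid = [[0] * 10 for _ in range(10)]
    for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
        grid[r][c] = 1

    for tick in range(4):   # advance the clock a few ticks and watch it move
        grid = step(grid)
        print("\n".join("".join("#" if x else "." for x in row) for row in grid))
        print()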
Machines do learn. Chess-playing machines can, yes, calculate vast permutation sets, but they also remember mistakes in self-modifying memory; if we insist on looking at everything as bits and data it's easy to say it can never exhibit Complexity ....
.... but then, we don't experience synapses, either.
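By "remember mistakes" I mean something like the toy sketch below: keep a record of positions that led to losses and score them down in later games. Real engines use far richer machinery (opening books, learned evaluation functions), and the names here are purely illustrative:

    # A toy memory of past mistakes: position -> accumulated penalty.
    mistake_memory = {}

    def record_game(positions_seen, lost):
        """After a game, penalise every position that appeared in a loss."""
        if lost:
            for key in positions_seen:
                mistake_memory[key] = mistake_memory.get(key, 0) + 1.0

    def evaluate(position_key, static_score):
        """Static evaluation, adjusted by what earlier games taught us."""
        return static_score - mistake_memory.get(position_key, 0.0)

    # Lose a game, and the same line is scored lower the next time around.
    record_game(["e4 e5 Qh5", "e4 e5 Qh5 Nc6"], lost=True)
    print(evaluate("e4 e5 Qh5", static_score=0.3))   # 0.3 before, -0.7 now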
Be. Very. Afraid.
It will probably begin with simulated consciousness, which means machines that are never idle, always running, their processes distributed across vast numbers of ... cells.
In aviation the pilot is king or queen and can override automatic functions (a violation of that principle not long ago demonstrated its wisdom). But there was a case where it is believed that a pilot purposefully flew a commercial aircraft into the ground. Could "no, I can't let you do that" be implemented? Yes, it could, but mistrust of computers is greater than mistrust of pilots.
How can self-modifying code be certified when the safety of human life depends on it? It could already be happening in areas I have no visibility into, and there are valid arguments in some applications for crowning the computer king, but I am very afraid of that for reasons I don't need to describe to you.
You may have read https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Scream
I can't remember when I first played with that game of life. I think it was with a Commodore 64. I didn't think about it being a case of cellular automata, but now that you mention it...
I doubt that learning software would be allowed to modify its rules while actually in flight.
And computers don't exhibit suicidal tendencies like people do.
A pilot crashed a passenger plane? That was rude.
There is (political) controversy about this https://www.historicmysteries.com/egyptair-flight-990/
To prevent what happened on flight 990 (assuming that it was an intentional crash), a preventative measure would require a major change in philosophy. When the pilot dropped paddles, he took the computer controls out of the picture. The fix would be computer monitoring and the ability to overrule the pilot, where today there is supposed to be no such capability. That disconnect is in hardware and is a specifically tested flight-safety function. A modification to change that (and it would be a major one) would have visibility.
When computers become more trusted than humans it will truly be a brave new world. Computers smarter than humans would be that. I do worry.
I think I was 14 the first time I read that Ellison piece. And then of course there're Colossus and SkyNet.
OTOH there is Neal Asher, whose stories take place centuries after a Quiet War in which the AIs bloodlessly took over everything and humanity enters what we would regard as a golden age: life extension, no war within the Polity (hostile aliens, though), a setting where an AI executes on quantum crystal and a planetary administrator can be hidden in an ashtray.
The AIs do have emotions, they have incomprehensible priorities, and, well, there are some smashing good ideas. Also a hell of a lot of military SF, which isn't my cuppa, and a rogue AI called Penny Royal.
First of the series of three: https://www.amazon.com/Dark-Intelligence-Transformations-Book-1-ebook/dp/B07H51PRSM/ref=sr_1_1
Rather timely article here:
https://www.washingtonpost.com/opinions/2022/08/31/artificial-intelligence-worst-case-scenario-extinction/
The point about how poorly we predict the future is spot on. My background is limited to trying to make software that does something specific and constrained be bug-free, which is challenge enough for me. I really have no idea about creating a generalized intelligence in software. The thought does occur that it would have an equivalence to genetics, since new software often depends on the old functions it is built upon, and a bug latent in that ancestor code is revealed in the new software that uses it.
An issue made more worrisome when you consider that militaries are actually seeking autonomous warfare capability. What could go wrong?