In aviation the pilot is king or queen and can override automatic functions (a recent violation of that principle demonstrated its wisdom). But there was a case in which a pilot is believed to have deliberately flown a commercial aircraft into the ground. Could "no, I can't let you do that" be implemented? Yes, it could, but mistrust of computers is greater than mistrust of pilots.
How can self-modifying code be certified when the safety of human life depends on it? It could already be happening in areas I have no visibility into, and there are valid arguments in some applications for crowning the computer king, but I am very afraid of that for reasons I don't need to describe to you.
You may have read https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Scream
I can't remember when I first played with that game of life. I think it was on a Commodore 64. I didn't think of it as a case of cellular automata, but now that you mention it...
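For anyone who hasn't revisited it since the C64 days, the whole rule set is tiny: a live cell survives with two or three live neighbors, and a dead cell comes alive with exactly three. A minimal sketch in Python (my own toy version, on a fixed grid where off-grid cells count as dead):

```python
# Conway's Game of Life: one generation step of the cellular automaton.
# Grid is a list of rows of 0/1; cells outside the grid are treated as dead.

def step(grid):
    rows, cols = len(grid), len(grid[0])

    def live_neighbors(r, c):
        # Count live cells among the 8 surrounding positions.
        return sum(
            grid[r + dr][c + dc]
            for dr in (-1, 0, 1)
            for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
            and 0 <= r + dr < rows
            and 0 <= c + dc < cols
        )

    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return [
        [1 if (live_neighbors(r, c) == 3
               or (grid[r][c] and live_neighbors(r, c) == 2)) else 0
         for c in range(cols)]
        for r in range(rows)
    ]

# A "blinker": three live cells in a row oscillate between
# horizontal and vertical with period 2.
blinker = [
    [0, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
]
```

Running `step` twice on the blinker brings it back to where it started, which is about as much state as that C64 screen could hold anyway.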
I doubt that learning software would be allowed to modify its rules while actually in flight.
And computers don't exhibit suicidal tendencies like people do.
A pilot crashed a passenger plane? That was rude.
There is (political) controversy about this: https://www.historicmysteries.com/egyptair-flight-990/
To prevent what happened on Flight 990 (assuming it was an intentional crash), a preventive measure would require a major change in philosophy. When the pilot dropped the paddles, he took the computer's controls out of the picture. The fix would be computer monitoring and the ability to overrule the pilot where there is supposed to be no such capability. That disconnect is in hardware and is a specifically tested flight-safety function. A modification to change that (and it would be a major one) would have visibility.
When computers become more trusted than humans, it will truly be a brave new world. Computers smarter than humans would be that. I do worry.
I think I was 14 the first time I read that Ellison piece. And then of course there are Colossus and SkyNet.
OTOH there is Neal Asher, whose stories take place centuries after a Quiet War in which the AIs bloodlessly took over everything, and humanity enters what we would regard as a golden age: life extension, no war within the Polity (hostile aliens, though), where an AI executes on quantum crystal and a planetary administrator can be hidden in an ashtray.
The AIs do have emotions, they have incomprehensible priorities, and, well, there are some smashing good ideas. Also a hell of a lot of military SF, which isn't my cuppa, and a rogue AI called Penny Royal.
First of the series of three: https://www.amazon.com/Dark-Intelligence-Transformations-Book-1-ebook/dp/B07H51PRSM/ref=sr_1_1