If you wanted to design an AI that would enrage woke and anti-woke Twitter, you couldn’t do much better than Google Gemini. Within hours of its release, users discovered that it refused to answer questions about Tiananmen Square and January 6th, clarified that
I wonder if you followed this controversy: https://www.essence.com/news/ai-racist-stereotypes/ where a user asked AI for photos of black doctors treating suffering white children. The AI was unable to generate any and kept producing pictures of black doctors with black children instead. The controversy was raised in a Microsoft AI training session I attended, and according to Microsoft this bias against depicting black doctors treating white children is 100% the result of a lack of diversity in the stock range of photos available to AI. Nothing to do with history or reality, just an inexplicable bias in the world of photography.
Apart from the fact that this explanation treats us all like idiots, how does obscuring history facilitate positive change? Surely an unflinching ability to face up to the truth is the faster track.
No! I hadn't heard about this. I mean, an AI image generator should be able to produce an image of a black doctor treating a white child. It knows what a black adult and a white child look like, and it can presumably understand concepts like "doctor" and "patient"; if it can produce abstract art of Michael Jordan playing basketball in a nebula, it can manage this.
As I said, I don't feel particularly strongly about specific requests. Creating an image of a black person dressed as a samurai or a viking is...fine, I guess. The issue is pretending that this is historically accurate.
"And the only way we can learn, the only way we can make tomorrow better than today, is by having the courage and the strength to look at all of it"
YES!
I had some fun with Google Gemini and posted it on Facebook. It wouldn't do images of humans by then, not even George Washington playing cards with Abraham Lincoln, but it did animals. Sometimes. I asked for a gerbil drinking a martini and it refused; it argued that alcohol is harmful to animals and that the image would depict animal abuse. I said, "I'm not going to give a gerbil a martini, I just want a picture of one!" And it still refused. So I asked, "Show me a gerbil flying a World War I Fokker airplane," and it obliged, slightly imperfectly, but I got gerbils in Fokkers. I posted those on Facebook saying, "Oh fine, Google Gemini will send helpless little animals off to WAR but they won't let them enjoy a martini!" Then I got Poe AI to generate the image (it gave me no crap about a kitten drinking a Christmas martini a few months ago). And I posted it on Facebook. Okay, the martini glass was separated from the bottom of the stem and floating in air, but to be fair, that gerbil's paws were up in a way that *could* suggest magic so...whatever :)
I discovered later that GG will speak French with me and correct me when I get stuff wrong. Since Duolingo eliminated the questions section, where someone else had invariably already asked the same question and others had answered it, this is good to have.
This Gemini fiasco started out amusing, then became hilarious, then quickly became extremely concerning. By the time it started committing libel against Matt Taibbi I was ready to shut down all the world's electricity to keep it from ever operating again.
We're walking into a post-truth world in real-time.
Artificial intelligence is mental Soylent Green.
Or...history is written by the victors. Jesus may not have been white and blue-eyed, but the victorious Christians have certainly made it that way. Today, we know Asian Nazis weren't real, but tomorrow? Who would have thought we would be arguing over gender definitions twenty years ago? I suspect the future will be filled with all manner of historical inaccuracies thanks to AI and whoever is governing at the time.
"Or...histoy is written by the victors"
I actually thought about including a line about this, but it would have ended up being a distraction from the main point.
In short, no. I think this is an outdated notion at best. History is written by the people it happened to. There was a time when those people lacked the resources to have their stories told. But today, we have access to so much information that it's almost impossible for the "victors" (however you define that term when it comes to most of history) to hide the other side of most stories. Genocides, for example, are by definition the stories of the losers. Yet we know about them.
What *is* possible though, is for people to be too complacent to seek out the other side of the story. Some people have never asked themselves why nobody in Bethlehem looks like Jesus and the apostles as they're usually depicted. And they ignore it when other people ask.
The arguments about gender are basically the opposite problem, where there are people who *have* asked themselves what gender means and thought about where the concept goes, and people who think it's better to ignore those questions and simply repeat the dogma.
I am left to wonder: is the current historical focus on victims leading to a victim mentality and scorn of the perceived victimizers? Or is the victim mentality leading to the half-truth history that is being presented at this time?