CMU School of Drama


Friday, February 23, 2024

Google Pauses AI Image Generation of People to Fix Racial Inaccuracies

variety.com: After critics slammed Google's "woke" generative AI system, the company halted the ability of its Gemini tool to create images of people to fix what it acknowledged were "inaccuracies in some historical image generation depictions."

6 comments:

Julia Adilman said...

I had no idea that Google even had a text-to-image AI tool called Gemini. It seems like an exciting feature; however, these problems are a bit concerning. I am glad that Google has made the effort to give the AI as equitable a view as possible. I would never have thought that compensating for racial and gender bias would cause problems like this. It makes perfect sense why this overcorrection leads to historical inaccuracies, but that is a real issue. I wonder how the Google team will be able to fix it without reintroducing bias into the tool. I am also glad that they have decided to pause the tool and not release it again until it is fixed. I think that was a wise decision that will prevent further issues going forward.

Carolyn Burback said...

This article’s examples of the false depictions of historical people the AI has generated make me think that future historians will have a harder time deciphering the truth and will be able to rely less and less on images. Currently, if you find an image of a woman from 1890, you can trust a majority of its details to reflect some truth, because there was no way to alter the image. An image generated in 2024 that has been printed out and found centuries later is not a reliable representation of whatever is depicted, because of our ability not only to photoshop images but to use artificial intelligence to make them whatever we command. I think instead of Google trying to fix its racial and gender parameters for what people can type, they should just not make the tool accessible to the public, because the internet is a hellscape.

Alex Reinard said...

I remember there was an article last year about some problem with Google’s Bard that made it seem like it wouldn’t be successful. I will say, that problem wasn’t as interesting as this one. As with most problems with AI, this one is probably going to be very difficult for Google to address. On the one hand, you want to maintain accuracy in historical images, but on the other hand you don’t want a lack of diversity in non-historical images. I wonder if this is a problem that other AI services, namely OpenAI, have had to deal with. I would be absolutely appalled if I asked an AI service to generate a photo of a German soldier and it returned the photos in the article. I’m not sure if, like the article says, this problem stems from Google’s “corporate culture.” I would imagine that working with cutting-edge AI is a daunting task, and frankly I’m inclined to believe that other companies might not have even thought about including diversity in the first place.

Karter LaBarre said...

I think it's really good that Google paused the AI image generator; there were definitely problems with its racial accuracy. That's one of the biggest concerns about AI: being able to portray things accurately and without bias. I think it definitely depends on who is designing the AI and what their views are. This means that the people who designed the AI have total control over what others get from their tool. I guess that's okay, but it is troubling, especially in cases like this. I just hope that people are able to recognize that Google's AI, and many other AIs, are not the peak of learning potential or knowledge. Also, I think calling any AI system "woke" is kind of insane, even though they are a technological advancement. I can't wait for people to realize how detrimental AI can be; some people do recognize this, but others are just going along with it.

Helen Maleeny said...

I’m unaware of many of the details of the Gemini tool, as I haven’t become very versed in the different AI generation programs out there at the moment. I also didn’t know about what was occurring here, though it’s clear Google now sees it as something to be fixed. I took a writing course first semester on AI, and we did a lot of reading about how these tools are programmed and how that can lead to errors such as this one. It’s all about the content the tool is fed and the information humans input into it: if the programmer or the person feeding data into the machine is biased, the machine will be too. A similar concept seems to apply to the Gemini tool, as the article mentions it learns from Google searches and the wider internet, and the internet is so broad now that consuming it without context of everything will probably not be beneficial to the programming of this tool.

willavu said...

I did not know about Google's Gemini AI image generator. I am not surprised by it, nor by how people are taking advantage of it. If you give the racist population a tool like this, they will be racist with it, simple as that. And honestly, I’m not sure if it should be ‘illegal’ to do so; I am saying this as a person of color and a Jew. They are not real people, it is just AI. But what is scary to me is that this may blur history and what actually happened. It seems to get easier and easier to morph people into whatever you want them to be. When people are looking at images of the Holocaust in 20 years, what will come up? Some AI Nazis? This is why I am glad Google got its hands on the tool before things maybe got worse. But it doesn’t make much of a difference; there are going to be more and more AI tools made, and people will just move to the next tab when they see one isn’t working anymore. AI is far too advanced at this point to go back and reverse the power it holds.