On Thursday, Google announced a temporary halt to its Gemini artificial intelligence chatbot's ability to generate images of people, following a day of apologizing for "inaccuracies" in the historical depictions it produced. The search giant faced widespread criticism for producing "diverse" images that were historically or factually inaccurate, such as depicting black Vikings, female popes, and Native Americans among the Founding Fathers, according to the New York Post.
This week, Gemini users shared screenshots on social media showcasing historically white-dominated settings featuring racially diverse characters purportedly generated by the AI, prompting critics to question whether the company is overcorrecting for the risk of racial bias in its AI model. Some users heavily criticized Gemini as being "absurdly woke" and "unusable" when requests for representative images resulted in oddly revisionist pictures.
"We're already working to address recent issues with Gemini's image generation feature," stated Google in a post on the social media platform X. "While we do this, we're going to pause the image generation of people and will re-release an improved version soon."
Examples cited by the New York Post included an AI-generated image of a black man resembling George Washington, complete with a white powdered wig and Continental Army uniform, and a Southeast Asian woman dressed in papal attire despite the historical fact that all 266 popes have been white men.
In another startling example reported by The Verge, Gemini generated "diverse" depictions of Nazi-era German soldiers, including an Asian woman and a black man dressed in 1943 military attire.
William A. Jacobson, a Cornell University Law professor and founder of the Equal Protection Project, a watchdog group, expressed concern to The New York Post: "In the name of anti-bias, actual bias is being built into the systems."
"This is a concern not just for search results, but real-world applications where 'bias free' algorithm testing actually is building bias into the system by targeting end results that amount to quotas."
The issue may stem from Google's "training process" for the large language model powering Gemini's image tool, as suggested by Fabio Motoki, a lecturer at the UK's University of East Anglia, who co-authored a paper last year identifying a noticeable left-leaning bias in ChatGPT.
"Remember that reinforcement learning from human feedback (RLHF) is about people telling the model what is better and what is worse, in practice shaping its 'reward' function (technically, its loss function)," Motoki told The Post.
"So, depending on which people Google is recruiting, or which instructions Google is giving them, it could lead to this problem."
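Motoki's point can be illustrated with a minimal sketch of the preference-learning idea behind RLHF. Here, hypothetical raters compare pairs of outputs, and their choices fit a toy reward function; everything below (the data, the single "feature," the fitting method) is an illustrative assumption, not Google's actual pipeline.

```python
# Toy sketch of the RLHF idea Motoki describes: raters say which of two
# outputs is "better," and those preferences fit a reward function that
# later shapes the generator's loss. All names and data are hypothetical.
import math

# Each comparison: (feature value of preferred output, feature value of
# rejected output), where the feature is some trait scored 0-1.
comparisons = [(0.9, 0.2), (0.8, 0.4), (0.7, 0.1)]

def reward(feature, w):
    """Linear reward model: larger w rewards the trait more strongly."""
    return w * feature

def preference_loss(w):
    """Bradley-Terry-style loss: the preferred output should score higher."""
    loss = 0.0
    for preferred, rejected in comparisons:
        margin = reward(preferred, w) - reward(rejected, w)
        loss += -math.log(1.0 / (1.0 + math.exp(-margin)))
    return loss

# Crude grid search: pick the weight that best explains the raters' choices.
w_best = min((w / 10 for w in range(-50, 51)), key=preference_loss)
print(f"learned reward weight: {w_best:.1f}")
# Because these raters consistently preferred the trait, the learned weight
# is positive, pushing the model toward producing more of it -- which is
# why the choice of raters and rating instructions shapes end behavior.
```

Which raters are recruited and what instructions they receive determine the `comparisons` data, and therefore the sign and size of the learned weight, which is the mechanism Motoki is pointing at.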
According to the Washington Post, the extent of the issue remains uncertain.
Prior to Google disabling the image-generation feature on Thursday morning, Gemini responded to prompts from a Washington Post reporter by generating images of white individuals when asked to depict various personas such as a beautiful woman, a handsome man, a social media influencer, an engineer, a teacher, and a gay couple.
Past research has indicated that AI image generators have the potential to magnify racial and gender stereotypes present in their training data. Furthermore, without proper filters, they tend to depict lighter-skinned men more frequently when tasked with generating images of people across different contexts.
On Wednesday, Google acknowledged "that Gemini is offering inaccuracies in some historical image generation depictions" and stated that it's "working to improve these kinds of depictions immediately."
Google also added that while Gemini's capacity to "generate a wide range of people" was "generally a good thing" due to Google's global user base, "it's missing the mark here," as conveyed in a post on X.
Sourojit Ghosh, a researcher at the University of Washington specializing in bias in AI image-generators, expressed support for Google's decision to temporarily halt the generation of people's faces. However, he expressed some conflicted sentiments regarding the process that led to this outcome.
Contrary to recent claims circulating on social media about "white erasure" and the notion that Gemini refuses to generate faces of white individuals, Ghosh's research has predominantly shown the opposite.
"The rapidness of this response, in the face of a lot of other literature and a lot of other research that has shown traditionally marginalized people being erased by models like this, I find a little difficult to square," he said.
Because Google has not published the parameters that govern the Gemini chatbot's behavior, it is difficult to get a clear explanation of why the software was creating diverse versions of historical figures and events.
When the AP asked Gemini to produce images of people or a large crowd, the chatbot responded that it was still working to improve this capability.
"We expect this feature to return soon and will notify you in release updates when it does," the chatbot said.
Ghosh suggested that Google could potentially develop a method to filter responses based on the historical context of a user's prompt. However, addressing the broader issues posed by image-generators constructed from extensive collections of photos and artwork available on the internet demands more than just a technical fix.
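The kind of context-aware filter Ghosh alludes to could, in the simplest form, route prompts before any demographic-diversity instruction is applied. The sketch below is purely hypothetical: the keyword list, function names, and routing logic are illustrative assumptions, not Google's implementation, and a real system would use a classifier rather than keywords.

```python
# Hypothetical sketch of a prompt filter: detect whether a prompt names a
# concrete historical setting, and only apply a "diversify depictions"
# instruction when it does not. The marker list and routing logic are
# illustrative assumptions, not any production system.

HISTORICAL_MARKERS = (
    "1943", "founding fathers", "viking", "pope", "medieval",
    "continental army", "ancient", "wwii", "19th century",
)

def is_historical(prompt: str) -> bool:
    """Crude check: does the prompt mention a known historical marker?"""
    text = prompt.lower()
    return any(marker in text for marker in HISTORICAL_MARKERS)

def build_instructions(prompt: str) -> str:
    """Route the prompt: skip demographic rewriting for historical asks."""
    if is_historical(prompt):
        return f"depict accurately per historical record: {prompt}"
    return f"depict a representative range of people: {prompt}"

print(build_instructions("a German soldier in 1943"))
print(build_instructions("a teacher in a classroom"))
```

Even this toy version shows the limits Ghosh raises: a keyword filter only patches specific prompts, while the underlying training data, scraped from the internet at large, still carries the representational skew.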
"You're not going to overnight come up with a text-to-image generator that does not cause representational harm," he said. "They are a reflection of the society in which we live."