
Google's AI generates images of people again after an embarrassing incident

After a six-month hiatus, Google is again letting its AI generate images of people, following an incident in which the software produced images of Black Nazi soldiers. The new feature will first be available to subscribers of the Gemini AI chatbot, according to a Google blog post.

Google's image model, Imagen 3, works much like rival programs, generating images from text prompts. In February, the option to create images of people was temporarily disabled after the AI applied present-day diversity standards to historical scenes, producing images such as Black soldiers in Nazi uniforms. Google admitted that its programming had failed to exclude cases where such diversity was inappropriate, and that the model had overshot the mark.

Addressing diversity in AI is no new concern. Facial recognition software long performed worse on darker skin tones, and AI-generated images predominantly featured white people. Tech companies, including Google, have made conscious efforts to promote diversity in response, which brings its own challenges, particularly in the US, where racial representation is a sensitive topic.

Google's AI will continue to decline requests for photorealistic images of well-known people, citing the risk that AI-generated fakes could manipulate public opinion, especially around elections. The incident with the Black Nazi soldier images sparked debate about more inclusive AI practices and highlighted existing diversity problems in AI applications.

Google can reduce biases in AI-generated images using various strategies:

  1. Embed explicit fairness constraints into prompts, using fairness frameworks such as FairCoT.
  2. Implement a Citizen Assembly approach, aligning the model's values with representative public input, such as US census data on political issues.
  3. Study adversarial techniques such as jailbreaking and meta-story prompting, which are used to bypass restrictions on generating specific types of images, and harden the model against them.
  4. Prioritize ethical considerations and community feedback when deploying AI image generators.
  5. Regularly monitor AI outputs for bias and update the systems continuously.

By combining these strategies, Google can make AI-generated images more diverse and reduce the risk of further controversial incidents.
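The first strategy above can be illustrated with a minimal sketch. This is not Google's or FairCoT's actual implementation; the marker list, suffix text, and function name are hypothetical, chosen to show the core idea: a fairness instruction is appended to the prompt, but only when the prompt does not specify a concrete historical context (the failure mode behind the Nazi-uniform images).

```python
# Hypothetical sketch of a prompt-level fairness constraint.
# HISTORICAL_MARKERS, FAIRNESS_SUFFIX, and add_fairness_constraint are
# illustrative names, not part of any real Google or FairCoT API.

HISTORICAL_MARKERS = {"nazi", "wehrmacht", "1940s", "medieval", "viking"}

FAIRNESS_SUFFIX = (
    " Depict people with a realistic range of ages, genders, and ethnicities"
    " unless the prompt specifies a concrete historical or cultural context."
)

def add_fairness_constraint(prompt: str) -> str:
    """Append a fairness instruction unless a historical marker is present."""
    lowered = prompt.lower()
    if any(marker in lowered for marker in HISTORICAL_MARKERS):
        # Historically specific prompts are passed through unchanged,
        # so the model is not pushed toward anachronistic diversity.
        return prompt
    return prompt + FAIRNESS_SUFFIX

print(add_fairness_constraint("a portrait of a software engineer"))
print(add_fairness_constraint("German soldiers in Wehrmacht uniforms"))
```

In a production system this keyword check would be replaced by a classifier, but the division of labor is the same: generic prompts receive the diversity instruction, historically anchored prompts do not.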
