Following the tech and AI community on X this week has offered an instructive look at the capabilities and limitations of Google’s latest consumer-facing AI chatbot, Gemini.
A number of tech workers, leaders, and writers have posted screenshots of their interactions with the chatbot, specifically examples of bizarre and inaccurate image generation that appears to pander toward diversity and/or “wokeness.”
Google initially unveiled Gemini late last year after months of hype, promoting it as a leading AI model comparable to, and in some cases surpassing, OpenAI’s GPT-4. Yet initial reviews found Gemini actually performed worse than OpenAI’s older GPT-3.5 model, prompting Google to release newer versions.
Now, even these newer Google AI models are being criticized for refusing some requests for historical imagery and for producing ahistorical imagery when prompted, sparking a debate over censorship in AI.
The attention on Gemini’s imagery has reignited the underlying debate over how AI models should handle prompts involving sensitive human issues such as diversity, colonization, and discrimination.
Notably, Google and other tech companies have faced controversies over the behavior of their AI systems before, prompting discussions about the balance between free speech and harmful content.
The advent of generative AI has intensified this debate, pushing technologists into the middle of a culture war that shows no signs of subsiding.
As discussions of AI censorship persist, the tech community continues to grapple with the ethical implications of models like Gemini and their broader societal impact. The future of such chatbots, and where the boundaries of AI censorship will be drawn, remains uncertain, leaving tech companies, users, and researchers to navigate these dilemmas as the technology evolves.