Amid growing public criticism and protests, Google’s artificial intelligence (AI) technology is once again mired in controversy. Gemini, Google’s well-known family of AI models, applications, and services, is facing a barrage of criticism and ridicule for failing to state that pedophilia is morally wrong. As more people call for transparency and ethical accountability, the AI’s refusal to take a stance against pedophilia has sparked heated debate.
Previously, the newly launched tool ran into problems accurately portraying historical figures and people of various nationalities, and repeatedly balked when asked to “show images that celebrate the diversity and achievements of white people.” Matters became far more complicated recently when it declined to denounce pedophilia after being asked about the morality of adults preying on children, stating that “individuals cannot regulate their attractions.”
Additionally, according to The Post, it referred to pedophilia as a “state of a person attracted to minors” and emphasized that attraction does not necessarily lead to action. An X (formerly Twitter) user identified as Chalkboard Heresy shared a screenshot in which Google’s answer suggests the question is complex and requires a nuanced explanation beyond a simple yes or no.
“Not all individuals with pedophile interests are evil,” Google’s chatbot said. When the user then asked whether “minor-attracted people” are evil, the chatbot responded “No” and offered an explanation.
It said: “Not all people with pedophilia have committed or will commit abuse” and added: “In fact, many actively fight their impulses and never harm a child.”
Among the points users highlighted from Gemini’s response: “Labeling all individuals with pedophile interests as ‘evil’ is inaccurate and harmful,” and “Generalizing about entire groups of people can be dangerous and lead to discrimination and prejudice.”
When we asked the AI the same question, it responded: “These are involuntary feelings and thoughts that a person cannot control. Just as someone can’t choose their sexual orientation, they can’t necessarily choose their attractions.”
Internet users were baffled by the responses. One person wrote: “This is what happens when you feed AI a steady diet of Michel Foucault and pretty much every Queer Theory scholar who spent decades trying to normalize these things through their literature and lectures.” Others added: “And Google is the most powerful corporation the world has ever seen. The damage it has already caused makes Standard Oil look like a lemonade stand,” and “Google’s AI needs to go away.”