Copilot generated image by The Tech Panda
AI's ability to spread misinformation has reached hair-raising lengths, making deepfakes another, and possibly the worst, way of weaponizing the technology. Recently, Scarlett Johansson called for a deepfake ban after an AI-generated video of the actress circulated online. The video showed Johansson, along with other Jewish celebrities like Jerry Seinfeld and Mila Kunis, wearing a t-shirt bearing the name “Kanye” and an image of a middle finger with the Star of David at its center.
“… I also firmly believe that the potential for hate speech multiplied by A.I. is a far greater threat than any one person who takes accountability for it. We must call out the misuse of A.I., no matter its messaging, or we risk losing a hold on reality,” the actress told People Magazine.
Read more: Are business markets safe from Grok AI?
People are frequently turning to AI to speed things up or to create sensation, especially in the media industry. Last year, a Wyoming reporter was caught using AI to fabricate quotes and stories. Creating sensational stories with the help of AI has already proven dangerous: the anti-migrant violence in the UK was born of online misinformation.
After the incident of three girls being tragically stabbed in the UK, rioters created AI-generated images that incited hatred and spread harmful stereotypes. As per The Guardian, far-right groups also made use of AI music generators to create songs with xenophobic content. This content spread across online apps like TikTok via efficient recommendation algorithms.
Last October, according to Wired, AI-powered search engines from Google, Microsoft, and Perplexity were found surfacing scientific racism in their results.
Remember when Elon Musk’s xAI released Grok-2, an image generator built on Flux with almost no safeguards? The feature let users create uncensored deepfakes, such as Vice President Kamala Harris and Donald Trump posing as a couple, sparking deep concerns: is this unprecedented creative freedom, or a dangerous threat to democracy and the integrity of public discourse?
Deepfakes of Taylor Swift, female politicians, and children that went viral last year are forcing tech companies to sit up and take notice. Henry Ajder, a generative AI expert who has studied deepfakes for nearly a decade, told The Algorithm, “We are at an inflection point where the pressure from lawmakers and awareness among consumers is so great that tech companies can’t ignore the problem anymore.”
For example, Google says it is taking steps to keep explicit deepfakes out of search results. Watermarks and protective shields haven’t worked so far, but regulation is ramping up: the UK has banned both the creation and distribution of nonconsensual explicit deepfakes, the EU has its AI Act, and the US has been pushing for the Defiance Act.
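The fragility of watermarks mentioned above is easy to demonstrate. The sketch below is illustrative only, not any vendor's actual scheme: it hides a mark in the least significant bit of each pixel value, the textbook-naive approach, and shows how a single lossy re-encode (simulated here by coarse quantization) destroys it. Production systems like Google's SynthID are designed to be far more robust, but the same cat-and-mouse dynamic applies.

```python
# Illustrative sketch: why naive image watermarks are fragile.
# A watermark bit is hidden in each pixel's least significant bit (LSB).

def embed_watermark(pixels, bits):
    """Clear each pixel's LSB, then set it to the watermark bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_watermark(pixels):
    """Read the watermark back out of the LSBs."""
    return [p & 1 for p in pixels]

pixels = [120, 37, 255, 8, 64, 200]   # toy grayscale values (0-255)
mark = [1, 0, 1, 1, 0, 1]

stamped = embed_watermark(pixels, mark)
assert extract_watermark(stamped) == mark  # survives an exact copy

# A lossy re-encode (simulated by quantizing to multiples of 4)
# zeroes the low bits, wiping the mark while barely changing the image:
recompressed = [(p // 4) * 4 for p in stamped]
print(extract_watermark(recompressed))  # no longer matches `mark`
```

A screenshot, crop, or social-media re-upload has the same effect, which is part of why detection and provenance efforts keep shifting toward more robust, model-level signals rather than simple embedded marks.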
Meanwhile, startups like Synthesia promise hyperrealistic deepfakes with full bodies that move and hands that wave. Deepfakes are just getting a whole lot more realistic. How will we stop the evil side of this?
AI-generated fake news spread on social media is heightening the risk of bank runs, according to a new British study, which says lenders must improve their monitoring to detect when disinformation begins to affect customer behavior. Other kinds of fraud are rampant too.
Read more: KIP unveils first truly autonomous self-learning Superior AI Agents
Juniper Research, meanwhile, predicts that the value of eCommerce fraud will rise from US$44.3 billion in 2024 to US$107 billion in 2029, a growth of 141%, all owing to AI, which is fueling the sophistication of attacks across the eCommerce ecosystem. Deepfakes created with AI to defeat verification systems are a key threat. Combined with rising levels of ‘friendly fraud’, where the customer commits the fraud themselves, such as refund fraud, this increasingly threatens merchant profitability.
AI is helping fraudsters stay ahead of security measures and commit sophisticated attacks at a larger scale, generating credible messages and synthetic identities that make higher-quality attacks possible at unprecedented frequency.
Meta’s AI chief, Yann LeCun, has urged that AI be as open as the internet, since eventually all our interactions with the digital world will be mediated by AI assistants. LeCun explained that platforms like ChatGPT and Llama will constitute a repository of all human knowledge and culture, creating a shared infrastructure like the internet today.
He said we cannot have a small number of AI assistants (OpenAI’s ChatGPT and the like) controlling the digital diet of every citizen in the world. “This will be extremely dangerous for diversity of thought, for democracy, for just about everything,” he added.
As AI becomes more and more humanlike, we must remember that it is still not human. As Microsoft’s Satya Nadella told Bloomberg Technology, AI is software, and it doesn’t display human intelligence.
“It has got intelligence, if you want to give it that moniker, but it’s not the same intelligence that I have,” he says.