How AI May Be Used to Create Custom Disinformation Ahead of 2024

“If I want to launch a disinformation campaign, I can fail 99 percent of the time. You fail all the time, but it doesn’t matter,” Farid says. “Every once in a while, the QAnon gets through. Most of your campaigns can fail, but the ones that don’t can wreak havoc.”

Farid says the 2016 election cycle showed how recommendation algorithms on platforms like Facebook radicalized people and helped spread disinformation and conspiracy theories. In the lead-up to the 2024 US election, Facebook's algorithm, itself a form of AI, will likely be recommending some AI-generated posts alongside content created by humans. We've reached the point where AI creates disinformation that another AI then recommends to you.

“We’ve been pretty well tricked by very low-quality content. We are entering a period where we’re going to get higher-quality disinformation and propaganda,” Starbird says. “It’s going to be much easier to produce content that’s tailored for specific audiences than it ever was before. I think we’re just going to have to be aware that that’s here now.”

What can be done about this problem? Unfortunately, only so much. DiResta says people need to be made aware of these potential threats and be more careful about the content they engage with: check, for example, whether a source is a website or social media profile that was created very recently. Farid says AI companies also need to be pressured to implement safeguards so less disinformation is created in the first place.
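For readers who want to automate the kind of check DiResta describes, here is a minimal sketch of a domain-age lookup. It assumes the third-party python-whois package, and the 90-day threshold is an arbitrary illustration; WHOIS records vary by registrar, so treat a missing or odd result as a reason for caution rather than proof of anything.

```python
from datetime import datetime, timezone

import whois  # third-party package: pip install python-whois


def domain_age_days(domain: str) -> int | None:
    """Return the age of a domain in days, or None if no registration date is on record."""
    record = whois.whois(domain)
    created = record.creation_date
    # Some registrars return a list of dates; take the earliest.
    if isinstance(created, list):
        created = min(created)
    if created is None:
        return None
    # WHOIS timestamps are often naive; assume UTC for comparison.
    if created.tzinfo is None:
        created = created.replace(tzinfo=timezone.utc)
    return (datetime.now(timezone.utc) - created).days


if __name__ == "__main__":
    age = domain_age_days("example.com")
    if age is None:
        print("No registration date on record; treat this source with extra caution.")
    elif age < 90:  # arbitrary cutoff for "very recently created"
        print(f"Registered only {age} days ago; be skeptical of this source.")
    else:
        print(f"Domain is {age} days old.")
```

A young domain is only one signal, of course, and plenty of legitimate sites are new; the point is that provenance checks like this take seconds and raise the cost of the throwaway websites disinformation campaigns rely on.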

The Biden administration recently struck a deal with some of the largest AI companies—ChatGPT maker OpenAI, Google, Amazon, Microsoft, and Meta—that encourages them to create specific guardrails for their AI tools, including external testing of AI tools and watermarking of content created by AI. These AI companies have also created a group focused on developing safety standards for AI tools, and Congress is debating how to regulate AI.

Despite such efforts, AI is accelerating faster than it’s being reined in, and Silicon Valley often fails to keep promises to only release safe, tested products. And even if some companies behave responsibly, that doesn’t mean all of the players in this space will act accordingly.

“This is the classic story of the last 20 years: Unleash technology, invade everybody’s privacy, wreak havoc, become trillion-dollar-valuation companies, and then say, ‘Well, yeah, some bad stuff happened,’” Farid says. “We’re sort of repeating the same mistakes, but now it’s supercharged because we’re releasing this stuff on the back of mobile devices, social media, and a mess that already exists.”
