Meta to Identify AI Images to Prevent Misinformation

(TheConservativeTimes.org) – Meta announced on Tuesday that it will identify and label images generated or altered by AI, an effort to help prevent the spread of misinformation and deepfakes, especially as the upcoming election approaches.

AI-generated images and deepfakes have become a major problem on the internet, fueling misinformation on many occasions. Meta said it will expand its tools to identify AI-generated images on Facebook, Instagram, and Threads. The company already labels images created with its own AI tools, but it will now also identify images generated with tools from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock.

Nick Clegg, Meta’s president of global affairs, said Meta will label AI images originating from these other sources and will “continue working on the problem in the coming months.”

Clegg also said that more work is needed with other AI companies to “align on common technical standards that signal when a piece of content has been created using AI.”

The challenge with AI content is that it is sometimes easy to spot, while at other times it is hard to tell whether an image is fake or has been doctored at all.

“We’re working hard to develop classifiers that can help us to automatically detect AI-generated content, even if the content lacks invisible markers. At the same time, we’re looking for ways to make it more difficult to remove or alter invisible watermarks,” Clegg wrote.
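For context, the markers Clegg describes generally refer to provenance signals embedded in the image file itself, such as the IPTC “DigitalSourceType” metadata term that some generators set to “trainedAlgorithmicMedia.” The Python sketch below is a minimal illustration of checking for that one marker; it is not Meta’s actual method, and the file path used is hypothetical.

```python
# A minimal sketch (not Meta's pipeline): check a file's embedded XMP
# metadata for the IPTC "DigitalSourceType" term that some AI image
# generators write as a machine-readable provenance marker.
AI_MARKER = b"trainedAlgorithmicMedia"  # IPTC term for AI-generated media

def has_ai_provenance_marker(path: str) -> bool:
    """Return True if the file's raw bytes contain the AI provenance term.

    XMP metadata is stored as plain XML inside the image file, so a byte
    search suffices for illustration. Such metadata is trivial to strip,
    which is why Clegg also points to invisible watermarks and classifiers.
    """
    with open(path, "rb") as f:
        return AI_MARKER in f.read()

if __name__ == "__main__":
    # "example.jpg" is a hypothetical file path used for demonstration.
    print(has_ai_provenance_marker("example.jpg"))
```

Because visible metadata like this can be removed, the harder detection work Clegg describes involves watermarks hidden in the pixels themselves and classifiers that need no marker at all.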

Clegg also wrote that detecting AI-generated video and audio is even more difficult. However, Meta says there will be a way for users to indicate whether their content was made with AI, and if someone uploads an AI-generated image without labeling it, the “company may apply penalties.”

Copyright 2024, TheConservativeTimes.org