Meta, the parent company of Facebook, has announced plans for technology that can detect and label images generated by other companies' artificial intelligence (AI) tools.
The system, which is still under development, will be rolled out across Meta's platforms: Facebook, Instagram, and Threads.
The initiative aims to expand Meta’s existing practice of labelling AI-generated images produced by its own systems. According to senior executive Sir Nick Clegg, Meta hopes this move will create momentum within the industry to address the issue of AI-generated fakery.
In a blog post, Sir Nick said Meta would step up its labelling of AI-generated content in the coming months. Experts, however, are sceptical that such measures will be effective.
Prof Soheil Feizi, director of the Reliable AI Lab at the University of Maryland, cautioned that such detection systems are often easy to evade with lightweight image processing, and that tuning them to be sensitive enough to catch processed images would risk wrongly flagging genuine ones, producing high rates of false positives.
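The kind of "lightweight image processing" Prof Feizi describes requires no special expertise. The sketch below is a hypothetical Python illustration using the Pillow and NumPy libraries (the filename and parameter values are assumptions for illustration, not drawn from his research); it applies three visually negligible transformations, faint noise, a slight resize, and lossy re-encoding, which are commonly reported to degrade invisible watermarks and confuse AI-image detectors.

```python
# Hypothetical sketch: visually minor edits that can weaken AI-image detection.
# Assumes an AI-generated image saved as "ai_generated.png" (illustrative name)
# and the Pillow and NumPy libraries.
import numpy as np
from PIL import Image

img = Image.open("ai_generated.png").convert("RGB")

# 1. Add faint Gaussian noise: a sigma of 2 on a 0-255 scale is
#    imperceptible to the eye but perturbs every pixel value.
arr = np.asarray(img).astype(np.float32)
arr += np.random.normal(loc=0.0, scale=2.0, size=arr.shape)
noisy = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

# 2. Resize slightly (2% smaller), which resamples the entire pixel grid.
w, h = noisy.size
resampled = noisy.resize((int(w * 0.98), int(h * 0.98)))

# 3. Re-encode with lossy JPEG compression, discarding the subtle
#    high-frequency signal in which invisible watermarks often reside.
resampled.save("processed.jpg", quality=85)
```

None of these steps noticeably changes how the image looks, which is why researchers argue that detection built on fragile embedded signals offers only limited protection.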
Meta has acknowledged that its technology cannot yet reliably detect AI-generated audio and video. For such media, the company will instead require users to label their own posts as AI-generated, and it may penalise those who fail to do so.
Sir Nick also conceded that it would be impractical to test for text generated by AI tools such as ChatGPT.
The announcement comes shortly after Meta's Oversight Board criticised the company's policy on manipulated media, describing it as "incoherent" and "lacking persuasive justification". The criticism followed Meta's decision not to remove a video of US President Joe Biden that had been manipulated to make him appear to behave inappropriately.
Sir Nick acknowledged that Meta's policy on manipulated media needs updating, given the growing prevalence of synthetic and hybrid content. Since January, the company has required political advertisements to disclose when they use digitally altered images or video.