Meta, the parent company of Facebook, Instagram, and Threads, has announced a new initiative to combat fake news and increase transparency on its social platforms. The company is developing tools that will enable it to identify and tag images generated by artificial intelligence (AI) as synthetic content.
Nick Clegg, president of global affairs at Meta, said in a statement that the move comes at a time when many countries around the world are holding elections and people want to know the difference between human and synthetic content. He added, “People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology.”
The rise and risks of AI-generated images
AI-generated images are images created or manipulated by deep learning algorithms, a branch of AI built on neural networks loosely inspired by the structure of the human brain. These algorithms can produce realistic-looking images of faces, objects, landscapes, and even events that never happened.
According to some estimates, nearly 20 billion AI-generated images have been uploaded to the Internet since 2022. Some of these images are harmless or even beneficial, such as those used for entertainment, education, or art. However, some of them are harmful or malicious, such as those that impersonate public or private individuals without their consent or spread misleading information with political motives to distort the truth.
These images pose a serious threat to the credibility of online information and the privacy and reputation of individuals. They can also influence public opinion, manipulate emotions, and undermine trust in institutions and democracy.
The need for regulation and detection of AI-generated images
Recognizing the potential dangers of AI-generated images, some governments and regulators have taken steps to curb their misuse and protect online safety. For instance, Britain's Online Safety Act, passed in 2023, makes it a crime to share intimate images of a person without their consent, including AI-generated deepfakes. The law also requires online platforms to remove harmful content and protect users from harm.
Similarly, US lawmakers have acknowledged the need for legislation to prevent the spread of fake news and protect internet users’ safety. They have proposed bills that would require social media platforms to take action against AI-generated images and videos that are deceptive or harmful.
However, regulation alone is not enough to address the challenge of AI-generated images. There is also a need for technology that can detect and label synthetic content, as well as educate and empower users to spot and report fake images.
Meta’s efforts to tag AI-generated images
Meta says it is working with industry partners to develop technology that can identify AI-generated images and label them as synthetic content on its social platforms. The labels will be based on industry-standard technical signals embedded in the images, such as metadata and invisible watermarks, and will appear in all languages. The company hopes this will help users make informed decisions about the content they see and share.
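Meta has not published the details of its detection pipeline, but the general idea of labeling based on embedded provenance signals can be sketched. The example below is a hypothetical illustration: it assumes an image's metadata has already been parsed into a dictionary, and it checks the `DigitalSourceType` field, a real field from the IPTC photo-metadata standard whose value `trainedAlgorithmicMedia` denotes media created entirely by AI. The function name, the dict-based interface, and the `user_disclosed` flag are illustrative inventions, not Meta's actual API.

```python
# Hypothetical sketch: deciding whether to apply a "synthetic content"
# label to an upload, given its already-parsed embedded metadata.
# "DigitalSourceType" / "trainedAlgorithmicMedia" come from the IPTC
# photo-metadata standard; everything else here is an assumption.

# IPTC's controlled-vocabulary term for media created entirely by AI.
# In practice the stored value may be a full URI ending in this term.
AI_SOURCE_TYPE = "trainedAlgorithmicMedia"

def should_label_synthetic(metadata: dict, user_disclosed: bool = False) -> bool:
    """Return True if the image should carry an AI-generated label.

    Labels if the metadata carries an AI-provenance marker, or if the
    uploader disclosed the content as AI-generated (as Meta asks users
    to do). Real systems also rely on invisible watermarks and
    classifiers, which this sketch does not model.
    """
    if user_disclosed:
        return True
    source_type = metadata.get("DigitalSourceType", "")
    return source_type.endswith(AI_SOURCE_TYPE)

# Example metadata, as a generator and a camera might embed it:
generated = {"DigitalSourceType": "trainedAlgorithmicMedia"}
camera = {"DigitalSourceType": "digitalCapture"}
```

A weakness this sketch shares with the real approach: metadata can be stripped from a file, which is why the standards bodies Meta works with also pursue invisible watermarking.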
Meta concedes that it cannot yet identify all AI-generated content and that some people will try to strip or bypass its tagging technology. The company says it is committed to improving its detection over time, and it is also asking users to disclose when the content they share was created with AI so that a label can be applied.
Meta’s initiative comes amid the growing sophistication of AI-generated images, which can be hard to distinguish from real photographs. In January 2024, for example, fake images of pop star Taylor Swift, believed to have been created using AI, were uploaded to social media. The images showed her with different hairstyles, outfits, and facial expressions.
In Britain, a slideshow of eight images depicting Prince William and Prince Harry during the coronation of King Charles went viral on Facebook, receiving over 78,000 likes. One of the images showed an apparent emotional hug between the brothers, despite reports of a rift between them. None of the eight images were real.
Another AI-generated image showed former US president Donald Trump with a bruised face and a bandaged nose after he was indicted on election fraud charges, implying that he had been assaulted.
The future of AI-generated images
AI-generated images are becoming more prevalent and realistic, thanks to the advances in AI and computing power. They have the potential to create new forms of expression, communication, and creativity, as well as new challenges and risks for online information and safety.
Meta’s plan to tag AI-generated images on its platforms is a welcome step in the fight against fake news and the promotion of transparency. However, it is not a silver bullet for the problems posed by synthetic content. More research, regulation, education, and collaboration among stakeholders are still needed to ensure that AI-generated images are used for good rather than for harm.