AI-generated images are especially troubling when they are used to spread dangerous narratives on social media. OpenAI, one of the leading producers of generative AI, today released a tool that can detect whether an image was generated by DALL·E.
According to OpenAI, the tool can identify images generated by its DALL·E model with 98.8% accuracy. It cannot, however, detect images generated by Midjourney or Stability AI.
OpenAI is also working to embed watermarks in audio generated by its AI, designed to be difficult to remove. In the United States, there has already been a case of AI-generated audio being used to defame a professor by making it appear he had made racist statements, in an attempt to get him fired.
At the same time, OpenAI has officially joined the Coalition for Content Provenance and Authenticity (C2PA), which facilitates the detection of fake images online in order to combat the spread of misinformation. C2PA is going from strength to strength, counting among its backers OpenAI, Google, Adobe, Microsoft, Sony, Meta, the BBC, Arm, Intel, Truepic and Twitter/X.