Image credit: Search Engine Journal
Since the advent of ChatGPT, teachers have found themselves in a tight spot when trying to determine whether the homework their students submit was produced with AI assistance. Last week, OpenAI published tips for educators in a blog post demonstrating how teachers have been leveraging ChatGPT as a teaching assistant. In an accompanying FAQ, OpenAI also formally admitted that AI detectors don't work.
Do AI Content Detectors Work or Not?
In a section of the FAQ titled “Do AI detectors work?”, OpenAI writes, “In short, no. While some (including OpenAI) have released tools that purport to detect AI-generated content, none of these have proven to reliably distinguish between AI-generated and human-generated content.”
OpenAI acknowledged that no AI writing detection system, including their own, has demonstrated consistent and reliable capabilities to distinguish between content created by AI and content generated by humans.
These detectors frequently produce inaccurate results, primarily because the signals they rely on have not been sufficiently validated. OpenAI’s own experimental AI Classifier, which was designed specifically for this task, correctly identified AI-written text only 26 percent of the time before it was discontinued.
In addition, OpenAI noted that ChatGPT itself has no way of knowing whether a given piece of text is AI-generated. If you ask it whether a particular essay was written by AI or by a human, its responses can be misleading.
Amid the failure of automated AI detectors, human discernment may play a key role in distinguishing between AI-generated and human-written responses.