Edit: Please refer to the title. I’m talking ONLY about schools that rely on the detection software by itself.
Why YSK: I work in education in an elevated role that has me working with multiple teachers, teams, admin, technology, and curriculum. I have had multiple meetings with companies such as Turnitin, GPTZero, etc., and none of them provide 100% reliability in their AI detection process. I’ll explain why in a moment, but what does this mean? It means that a school that only uses AI detection software to determine AI use will NEVER have enough proof to claim your work is AI-generated.
On average, there is a 2% false positive rate with these programs. Even Turnitin’s software, which can cost schools thousands of dollars for AI detection, has a 2% false positive rate.
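To put that 2% in perspective, here’s a quick back-of-the-envelope sketch. The essay counts below are hypothetical numbers I picked for illustration; the only figure taken from above is the 2% false positive rate.

```python
# Back-of-the-envelope math on a 2% false positive rate.
# The essay counts are hypothetical; the 2% figure is the rate cited above.
FALSE_POSITIVE_RATE = 0.02

for essays_scanned in (100, 1_000, 10_000):
    expected_false_flags = essays_scanned * FALSE_POSITIVE_RATE
    print(f"{essays_scanned:>6} human-written essays -> ~{expected_false_flags:.0f} falsely flagged")
```

Even one or two wrongly flagged students per class adds up fast across a school or district.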
Why does this happen? These detectors take a syntactical approach: they look for patterns, word choices, and phrases that are consistent with what LLMs put out and compare those against the writing they are analyzing. That means a person whose natural writing style happens to resemble LLM output can be flagged. Non-native English speakers are especially susceptible to false positives because of this approach.
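Turnitin and GPTZero don’t publish their exact methods, so the snippet below is nothing more than a toy sketch of the general “pattern matching” idea. The phrase list and scoring are made up for illustration and are not any vendor’s real algorithm.

```python
# Toy illustration of pattern-based flagging. This is NOT how Turnitin or
# GPTZero actually work; their methods are proprietary. It only shows why
# a human whose phrasing overlaps with common LLM phrasing can get flagged.
LLM_LIKE_PHRASES = [
    "in conclusion",
    "it is important to note",
    "delve into",
    "furthermore",
    "plays a crucial role",
]

def naive_ai_score(essay: str) -> float:
    """Fraction of the watch-list phrases that appear in the essay."""
    text = essay.lower()
    hits = sum(phrase in text for phrase in LLM_LIKE_PHRASES)
    return hits / len(LLM_LIKE_PHRASES)

essay = ("In conclusion, it is important to note that photosynthesis "
         "plays a crucial role in the carbon cycle.")
print(f"naive AI score: {naive_ai_score(essay):.0%}")  # 60%, a false flag
```

A student who naturally writes “in conclusion” and “plays a crucial role” scores high on this kind of naive metric without ever touching an LLM, which is exactly how false positives happen.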
If a school has no way to prove AI was used other than a report from an AI detection program, fight it. Straight up. Look up the software they use, find its error rate, point out the syntax-based detection method, and argue your case.
I’ll be honest, though: most of the time these programs do a pretty good job of identifying AI use through syntax. But that error rate is way too high for them to be the sole approach to combating unethical use.
It was enough for me to tell Turnitin, “we will not be paying an additional $6,000 for AI detection.”
Thought I would share this info with everyone because I would hate to see a hardworking student get screwed by faulty software.
TL;DR: AI detection software, even costly tools like Turnitin, isn’t 100% reliable; there is roughly a 2% false positive rate. These programs analyze writing patterns, which can mistakenly flag human work, especially from non-native speakers. Relying solely on AI detection to prove AI use is a flawed approach. If accused, students should challenge the results, citing error rates and software limitations. While these tools can often detect AI, the risk of false positives is too high for them to be the only method used.
Edit: As an educator and instructional specialist, I regularly advise teachers to check progress on writing and projects throughout the process so they can actually see where students struggle. Teachers, especially in K-12, should never allow the final product to be the first time they see a student’s writing or learning.
I also advise teachers to assign separate skills reflections after an assignment is turned in (in class and away from devices) so students can demonstrate their learning or explain their process.
This post is not designed to convince students to cheat, but I’ve worked with a fair number of teachers who would rather blindly trust AI detection than use other measures to check for cheating. Students, don’t use ChatGPT as a task completer. Use it as a brainstorming partner. I love AI in education. It’s an amazing learning tool when used ethically.
Related post: “70% of my essay is being detected as AI, despite not using any AI” by u/Affectionate-Oil2612 in r/mildlyinfuriating