- cross-posted to:
- hackernews
Now, with the help of AI, it’s even easier to waste the time of open source developers by creating fake security vulnerability reports.
Unfortunately, the methods for detecting AI-generated text and for training AI text generators are basically identical. Any reliable detection method can therefore be used to improve the generator’s output until it passes.
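To make the point concrete, here is a minimal sketch of why a public detector helps the spammer: with rejection sampling, you just keep regenerating until the detector is fooled. Both functions are invented stand-ins (any real detector and any real LLM sampler with these shapes would work the same way):

```python
import random

def detect_ai(text: str) -> float:
    # Hypothetical detector: returns the probability the text is AI-generated.
    # A toy rule here; any real classifier could be plugged in.
    return 1.0 if "as an ai language model" in text.lower() else 0.2

def generate_variants(prompt: str) -> list[str]:
    # Stand-in for sampling several rewrites of the same report from an LLM.
    return [
        "As an AI language model, I found a buffer overflow.",
        "I found a buffer overflow in the parser.",
    ]

def evade(prompt: str, threshold: float = 0.5) -> str:
    # Rejection sampling: discard anything the detector flags and keep
    # the rest. The better the detector, the better the surviving text.
    candidates = generate_variants(prompt)
    passing = [c for c in candidates if detect_ai(c) < threshold]
    return random.choice(passing) if passing else min(candidates, key=detect_ai)
```

The same loop also works as a training signal: score the generator’s outputs with the detector and optimize against it, which is why any published detector tends to have a short shelf life.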
You can, at least, detect low-grade attempts to use it. The default output has distinctive patterns, and those can be detected. The problem is twofold. First, some people genuinely write in that style (the LLM imitates an amalgam of human writing, and some humans write close to that amalgam). Second, it’s fairly trivial to ask the LLM to change its writing style.
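A pattern-based check for that “default style” might look like the sketch below. The marker phrases and the threshold are made up for illustration; real detectors use statistical features rather than a hand-picked list, but the failure modes are the same: humans who happen to use these phrases get flagged, and a one-line “write casually” prompt evades it.

```python
import re

# Telltale phrases of default LLM output (illustrative only).
MARKERS = [
    r"\bdelve\b",
    r"\bas an ai\b",
    r"\bit'?s important to note\b",
    r"\bin conclusion\b",
]

def looks_default_llm(text: str, min_hits: int = 2) -> bool:
    # Flag text that matches at least min_hits of the marker patterns.
    hits = sum(bool(re.search(p, text.lower())) for p in MARKERS)
    return hits >= min_hits
```

A human who naturally writes “it’s important to note that…” trips this (a false positive), while restyled LLM output sails through (a false negative).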
No matter your method, you have to accept a high rate of both false positives and false negatives.
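The base-rate problem makes this worse than the raw error rates suggest. With assumed numbers (a detector that catches 95% of AI reports, falsely flags 5% of human ones, and an inbox where only 2% of reports are AI spam), most flagged reports are still human-written:

```python
# Illustrative numbers only, chosen to show the base-rate effect.
sensitivity = 0.95   # P(flagged | AI-generated)
false_pos   = 0.05   # P(flagged | human-written)
base_rate   = 0.02   # fraction of incoming reports that are AI spam

# P(flagged) over all reports, then P(AI | flagged) by Bayes' rule.
p_flagged = sensitivity * base_rate + false_pos * (1 - base_rate)
precision = sensitivity * base_rate / p_flagged
print(round(precision, 2))  # roughly 0.28: ~72% of flags hit real humans
```

So even a seemingly accurate detector, applied to mostly legitimate reports, punishes far more genuine reporters than spammers.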