The official threat intelligence account for @Cloudflare. Follow for threat research, incident assessments, WAF rule updates for emerging threats, and more.
Cloudforce One researched how linguistic deception and file structure can be used to bypass AI-driven code auditors, testing them across 18,400 API calls. The findings show that detection rates for malicious code drop when deceptive comments make up less than 1% of a file, and that burying payloads in files larger than 3MB effectively blinds models to malicious intent. Read the full report here:
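The two reported blind spots suggest a simple defensive pre-screen before handing a file to an AI auditor. The sketch below is a hypothetical illustration only, not Cloudflare's methodology: the function name `needs_extra_review` and the single-prefix comment detection are assumptions, and the thresholds are taken directly from the figures above.

```python
# Hypothetical pre-screening heuristic based on the reported blind spots:
# files over 3 MB, or files where comments make up under 1% of lines,
# are routed to additional (non-AI) review rather than trusted to a model.

def needs_extra_review(source: str, comment_prefix: str = "#") -> bool:
    """Flag source files that fall into the reported AI-auditor blind spots."""
    # Reported finding: payloads buried in files larger than 3 MB evade models.
    size_bytes = len(source.encode("utf-8"))
    if size_bytes > 3 * 1024 * 1024:
        return True
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    if not lines:
        return False
    comments = sum(1 for ln in lines if ln.startswith(comment_prefix))
    # Reported finding: detection rates drop when (deceptive) comments
    # make up less than 1% of the file.
    return comments / len(lines) < 0.01
```

In practice a real screen would need per-language comment parsing and a tuned size cutoff; the point is only that both reported evasion signals are cheap to measure before invoking a model.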