Anti-Detect Automation Detection v1 (Active after December 15, 2025, 10:00 UTC)
Overview
ADA Detection v1 focuses on behavioral detection inside NST-Browser, where traditional static signals are masked. This version uses a payload-based flow with strict human-safety requirements.
For general challenge information, environment details, and plagiarism policies, please refer to the ADA README.
Target Frameworks
Participants must submit one detection script per framework:
- automation
- nodriver
- playwright
- patchright
- puppeteer
Missing scripts invalidate the submission.
Payload-Based Detection Flow
ADA v1 uses a runtime, payload-driven model:
- Scripts run in-page and confirm automation behavior.
- On confirmation, the script sends a payload to the local /_payload endpoint (see the sketch below).
- Silence is mandatory during human interactions; sending any payload during a human session is a critical mistake.
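As a concrete illustration, here is a minimal sketch of a detection file following this flow. The /_payload path comes from this document; the payload shape and the stand-in check are illustrative assumptions, not part of the spec:

// nodriver.js — minimal sketch of the confirm-then-report flow.
(() => {
  // Stand-in signal: navigator.webdriver is a classic static flag that
  // anti-detect browsers usually mask, so real submissions need behavioral
  // heuristics here. It only serves to demonstrate the reporting flow.
  const confirmed = navigator.webdriver === true;

  // Silence is mandatory in human sessions: exit without any network call.
  if (!confirmed) return;

  // Report only after automation behavior is confirmed.
  // The payload fields below are assumptions, not a specified schema.
  fetch("/_payload", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ framework: "nodriver", detected: true }),
  });
})();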
Submission Format
Submissions must follow this structure:
{
  "detection_files": [
    { "file_name": "nodriver.js", "content": "/* logic */" },
    { "file_name": "playwright.js", "content": "/* logic */" },
    { "file_name": "patchright.js", "content": "/* logic */" },
    { "file_name": "puppeteer.js", "content": "/* logic */" },
    { "file_name": "automation.js", "content": "/* logic */" }
  ]
}
Rules
- File names must match framework names exactly
- Each file detects only its own framework
- No extra files or outputs are allowed (see the validation sketch below)
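As a sanity check against these rules, the sketch below is a hypothetical pre-submission validator, not part of the official tooling; the function name and error messages are assumptions:

// Hypothetical pre-submission check: verifies that detection_files contains
// exactly the five required scripts and nothing else.
const REQUIRED_FILES = [
  "automation.js",
  "nodriver.js",
  "playwright.js",
  "patchright.js",
  "puppeteer.js",
];

function validateSubmission(submission) {
  const names = (submission.detection_files || []).map((f) => f.file_name);
  const missing = REQUIRED_FILES.filter((n) => !names.includes(n));
  const extra = names.filter((n) => !REQUIRED_FILES.includes(n));
  if (missing.length > 0) {
    throw new Error("Missing scripts: " + missing.join(", "));
  }
  if (extra.length > 0) {
    throw new Error("Extra files are not allowed: " + extra.join(", "));
  }
  return true;
}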
Scoring System
ADA scoring is continuous and strict: three main components are combined and then normalized into a final score.
- Human Accuracy: This is the most critical component. Your submission must not flag real human users as bots or automation. You are allowed at most 2 mistakes; exceeding this limit results in an immediate final score of 0.0 (the human-safety kill switch). Within the limit, this component starts at 1.0 and each mistake reduces it by 0.1.
- Automation Accuracy: Determined by the output of automation.js. A result counts as correct when the script reports false in a human session and true in any automation-framework session. Accuracy is the number of correct automation detections divided by the total number of sessions.
- Framework Detection: Your submission earns points for correctly identifying the specific automation framework being used. For each framework, you are tested multiple times. You only earn 1 full point for a framework if you detect it perfectly in all of its runs. A single missed detection or a collision (reporting more than one framework) for a given framework will result in 0 points for that framework.
Finally, all points are summed and normalized to produce your final score between 0.0 and 1.0, dividing the sum by its maximum attainable value (1.0 per accuracy component plus 1 point per framework), consistent with the example below:

final_score = (human_score + automation_accuracy + framework_points) / (2 + N)

where N is the number of frameworks.
Similarity & Time Decay
- Similarity check: Submissions are compared against historical solutions; high similarity incurs penalties.
- Score decay: Scores decay over 15 days to incentivize refreshed heuristics (see the sketch below).
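The decay curve itself is not specified here; the sketch below assumes a simple linear decay to zero across the 15-day window:

// Assumed linear decay: full value on day 0, zero from day 15 onward.
// The actual decay curve used by the platform may differ.
function decayedScore(finalScore, daysSinceSubmission) {
  const DECAY_WINDOW_DAYS = 15;
  const remaining = Math.max(0, 1 - daysSinceSubmission / DECAY_WINDOW_DAYS);
  return finalScore * remaining;
}

// e.g. under this assumed model, a 0.65 score is worth
// 0.65 * (1 - 5/15) ≈ 0.433 after 5 days.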
Example
Assume:
- 4 frameworks
- Perfect human accuracy → 1.0
- Automation accuracy → 0.9
- 2 frameworks detected perfectly → 2.0 points
Sum = 1.0 + 0.9 + 2.0 = 3.9; maximum attainable = 1 + 1 + 4 = 6; final score = 3.9 / 6 = 0.65.
Exceeding the human-mistake limit would reduce this to 0.0 regardless of the other components.
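The sketch below reproduces this arithmetic under the normalization given above; the function names and the null-based kill-switch signaling are illustrative assumptions:

// Human-accuracy component: starts at 1.0, minus 0.1 per mistake; more than
// two mistakes triggers the kill switch (final score forced to 0.0).
function humanScore(mistakes) {
  if (mistakes > 2) return null; // kill switch
  return 1.0 - 0.1 * mistakes;
}

// Final score: sum of components divided by the maximum attainable sum
// (1.0 + 1.0 + one point per framework).
function finalScore(mistakes, automationAccuracy, frameworkPoints, numFrameworks) {
  const human = humanScore(mistakes);
  if (human === null) return 0.0; // kill switch engaged
  return (human + automationAccuracy + frameworkPoints) / (2 + numFrameworks);
}

// The example above: (1.0 + 0.9 + 2.0) / 6 = 0.65
console.log(finalScore(0, 0.9, 2.0, 4)); // 0.65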
Submission Guide
To build and submit your solution, please follow the Building a Submission Commit guide.
Submission Templates
Templates and building instructions can be found in the ADA Detection repository.