Anti-Detect Automation Detection (Active after December 15, 2025, 10:00 UTC)
1. Overview
The Anti-Detect Automation Detection (AAD) challenge evaluates a participant's ability to reliably detect browser automation frameworks operating inside anti-detect browsers, while never flagging real human users.
All evaluations are executed inside NST-Browser, where:
- Browser fingerprints are masked
- Profiles are fresh per run
- Automation attempts to closely mimic real users
2. Anti-Detect Environment
Each evaluation run uses:
- A fresh NST-Browser profile
- An isolated Docker container per framework
- No shared state between runs
This environment simulates real-world anti-detect usage where static signals are unreliable. Only runtime behavior and orchestration patterns remain detectable.
NST-Browser Dependency
To test the ada_detection challenge, miners need an API key from the NSTBrowser dashboard and an active Professional plan for use during testing.
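For local testing, the key needs to be available to your tooling. A hypothetical way to expose it is shown below; the variable name is an assumption, not part of the official setup:

```bash
# Hypothetical variable name; check the challenge tooling for the expected setting
export NSTBROWSER_API_KEY="<your-nstbrowser-api-key>"
```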
3. Evaluation Flow
Each submission is evaluated as follows:
1. Submission Received: detection scripts are submitted via the /score endpoint.
2. Task Generation: a randomized sequence is created, consisting of:
   - Multiple runs per automation framework
   - Randomly injected human interactions
3. NST-Browser Launch: a clean NST-Browser instance is started for each task.
4. Automation / Human Execution:
   - Automation frameworks connect via WebSocket control
   - Humans manually interact with the page
5. Detection Phase:
   - Scripts may emit detection payloads to /_payload (a sketch of such a payload follows this list)
   - Silence during human interaction is expected
6. Scoring: results are aggregated, normalized, and returned.
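For illustration, the sketch below shows the general shape of a detection script that emits a payload to /_payload. The heuristic and the payload fields are assumptions made for the example, not the official contract:

```javascript
// Illustrative sketch only: the heuristic and payload shape below are assumptions.
(async () => {
  const signals = [];

  // Example heuristic: navigator.webdriver is a well-known (and easily spoofed)
  // hint that a WebDriver/CDP-driven browser is in control.
  if (navigator.webdriver === true) {
    signals.push('navigator.webdriver');
  }

  // Report only when something was actually observed; during human sessions
  // the script should stay silent.
  if (signals.length > 0) {
    await fetch('/_payload', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ framework: 'puppeteer', signals }) // hypothetical payload fields
    });
  }
})();
```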
5. Target Frameworks
Participants must submit one detection script per framework:
- nodriver
- playwright
- patchright
- puppeteer
In addition to these four framework scripts, a general automation.js detector is required (see Submission Format below). Missing scripts invalidate the submission.
6. Submission Format
Submissions must follow this structure:
{
  "detection_files": [
    { "file_name": "nodriver.js", "content": "/* logic */" },
    { "file_name": "playwright.js", "content": "/* logic */" },
    { "file_name": "patchright.js", "content": "/* logic */" },
    { "file_name": "puppeteer.js", "content": "/* logic */" },
    { "file_name": "automation.js", "content": "/* logic */" }
  ]
}
Rules
- File names must match framework names exactly
- Each file detects only its own framework
- No extra files or outputs are allowed
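As a rough illustration of how the structure above reaches the /score endpoint, here is a minimal sketch assuming the endpoint accepts a JSON POST; the host is a placeholder, and the real submission may instead go through the challenge's miner tooling:

```javascript
// Minimal sketch only: the validator host is a placeholder and the actual
// submission transport may be handled by the miner tooling rather than raw HTTP.
const submission = {
  detection_files: [
    { file_name: 'nodriver.js', content: '/* logic */' },
    { file_name: 'playwright.js', content: '/* logic */' },
    { file_name: 'patchright.js', content: '/* logic */' },
    { file_name: 'puppeteer.js', content: '/* logic */' },
    { file_name: 'automation.js', content: '/* logic */' }
  ]
};

fetch('https://<validator-host>/score', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(submission)
}).then((res) => console.log('submission status:', res.status));
```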
7. Scoring System (Code-Accurate)
AAD scoring is continuous and strict: three main components are combined and then normalized into a final score.
- Human Accuracy: This is the most critical component. Your submission must not flag real human users as bots or automation. You are allowed a maximum of 2 mistakes; exceeding this limit results in an immediate final score of 0.0. For scoring, you start with 1.0 point, and each mistake reduces this component by 0.1.
- Automation Accuracy: This is determined by the output of automation.js. A result is marked correct when automation.js returns false during human sessions and true during any automation framework session. Accuracy is the number of correct automation detections divided by the total number of sessions.
- Framework Detection: Your submission earns points for correctly identifying the specific automation framework being used. For each framework, you are tested multiple times. You only earn 1 full point for a framework if you detect it perfectly in all of its runs. A single missed detection or a collision (reporting more than one framework) for a given framework will result in 0 points for that framework.
Finally, all the points are summed and normalized to produce your final score between 0.0 and 1.0 using the formula:
Final Score = (Human Accuracy Score + Automation Score + Framework Points) / (Number of Frameworks + 1 Human + 1 Automation)
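The snippet below restates that formula as code. It is illustrative only; the evaluator's actual implementation may differ in naming and aggregation details:

```javascript
// Illustrative restatement of the scoring formula above; not the evaluator's code.
function finalScore({ humanMistakes, automationAccuracy, frameworkPoints, frameworkCount }) {
  if (humanMistakes > 2) return 0.0;            // more than 2 human mistakes zeroes the score
  const humanScore = 1.0 - 0.1 * humanMistakes; // each mistake costs 0.1 of the human component
  return (humanScore + automationAccuracy + frameworkPoints) / (frameworkCount + 2);
}

// Matches the example in the next section:
// finalScore({ humanMistakes: 0, automationAccuracy: 0.9, frameworkPoints: 2, frameworkCount: 4 })
// => (1.0 + 0.9 + 2.0) / 6 = 0.65
```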
9. Example
Assume:
- 4 frameworks
- Perfect human accuracy → 1.0
- Automation accuracy → 0.9
- 2 frameworks detected perfectly → 2.0 points
Final Score = (1.0 + 0.9 + 2.0) / (4 + 1 + 1) = 3.9 / 6 = 0.65
Any excessive human misclassification would instead reduce the final score to 0.0.
10. Technical Constraints
- JavaScript (ES6+)
- NST-Browser only
- Docker (amd64 recommended)
- Stateless execution
- No persistence between runs
11. Plagiarism Policy
- Submissions are compared against others
- 100% similarity → score = 0
- Similarity above 60% incurs proportional penalties
Submission Guide
Follow steps 1 through 6 below to submit your SDK.
1. Navigate to the AB Sniffer v5 Commit Directory
2. Build the Docker Image

To build the Docker image for the AB Sniffer v5 submission, run:
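The image name and tag below are placeholders; use your own Docker Hub repository:

```bash
# Placeholder image name and tag; replace with your own repository
docker build -t <your-dockerhub-username>/ab-sniffer-v5:latest .
```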
3. Log in to Docker

Log in to your Docker Hub account using the following command:
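```bash
# Standard Docker Hub login; you will be prompted for your username and password/token
docker login
```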
Enter your Docker Hub credentials when prompted.
4. Push the Docker Image

Push the tagged image to your Docker Hub repository:
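The image reference is a placeholder and must match the tag used at build time:

```bash
# Placeholder image reference; must match the tag used with docker build
docker push <your-dockerhub-username>/ab-sniffer-v5:latest
```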
5. Retrieve the SHA256 Digest

After pushing the image, retrieve the digest by running:
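One way to do this with a standard Docker command (the image reference is a placeholder):

```bash
# Prints the repo digest recorded after the push, e.g. <repo>@sha256:<digest>
docker inspect --format='{{index .RepoDigests 0}}' <your-dockerhub-username>/ab-sniffer-v5:latest
```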
6. Update active_commit.yaml

Finally, go to the neurons/miner/active_commit.yaml file and update it with the new image tag:
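The exact structure of the file depends on the repository; the entry below is only a hypothetical illustration of recording the pushed image together with its digest:

```yaml
# Hypothetical format; confirm the expected structure in the repository
image: <your-dockerhub-username>/ab-sniffer-v5:latest@sha256:<digest>
```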
References
- Docker - https://docs.docker.com