
Validator Custom

Custom Setup for Specific Challenges

Some challenges use LLMs to generate responses. We run a vLLM server to serve these challenges, and validators use it by default, so you can run the challenges without hosting your own server. If you do want to host your own vLLM server, follow the instructions below.

To set up the environment for the Response Quality Ranker and Response Quality Adversarial V2-V3 challenges, you will need to run a vLLM server.

1. Create a virtual environment and install the required dependencies:
python -m venv vllm
source vllm/bin/activate
pip install vllm==0.6.2
2. Run the vLLM server with the appropriate model (a sanity check for the running server follows this list):
export HF_TOKEN=<your-huggingface-token>
python -m vllm.entrypoints.openai.api_server --model unsloth/Meta-Llama-3.1-8B-Instruct --max-model-len 4096 --port <your-vllm-port> --gpu-memory-utilization <your-gpu-memory-utilization>
3. Set the necessary environment variables in the Dockerfile (an example request using these variables appears at the end of this section):
ENV VLLM_URL="http://127.0.0.1:8000/v1"
ENV VLLM_API_KEY="your-api-key"
ENV VLLM_MODEL="unsloth/Meta-Llama-3.1-8B-Instruct"
ENV OPENAI_API_KEY="your-api-key"
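
Once the server is running, you can sanity-check the endpoint before starting the validator. A minimal check, assuming the server listens on the default port 8000 used in VLLM_URL above:
# A healthy server returns a JSON payload listing the loaded model.
curl http://127.0.0.1:8000/v1/models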

You can find these Dockerfiles in each challenge folder under challenge_pool:

- redteam_core/challenge_pool/response_quality_adversarial_v4/Dockerfile
- redteam_core/challenge_pool/toxic_response_adversarial/Dockerfile
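
As an illustrative sketch of how a challenge container consumes these variables, the request below sends a single chat completion through vLLM's OpenAI-compatible API. The prompt is hypothetical, and the variables are assumed to be exported into the shell:
# Send a test chat completion using the same endpoint, key, and model
# configured in the Dockerfile ENV lines above.
curl "$VLLM_URL/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $VLLM_API_KEY" \
  -d "{\"model\": \"$VLLM_MODEL\", \"messages\": [{\"role\": \"user\", \"content\": \"Hello\"}]}"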