# LiteLLM Proxy - Locust Load Test
1. Add `fake-openai-endpoint` to your proxy config.yaml and start your LiteLLM proxy. LiteLLM provides a free hosted `fake-openai-endpoint` you can load test against:
   ```yaml
   model_list:
     - model_name: fake-openai-endpoint
       litellm_params:
         model: openai/fake
         api_key: fake-key
         api_base: https://exampleopenaiendpoint-production.up.railway.app/
   ```
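   Start the proxy pointing at this config (for example, `litellm --config config.yaml`). Before load testing, you can sanity-check the endpoint with a single request. This is a minimal sketch assuming the proxy's default port 4000 and no master key; adjust `base_url` and `api_key` for your deployment:

   ```python
   # Quick sanity check against the proxy's fake endpoint.
   # Assumes the default proxy address http://0.0.0.0:4000 and no auth;
   # both are assumptions, not part of the original guide.
   from openai import OpenAI

   client = OpenAI(api_key="fake-key", base_url="http://0.0.0.0:4000")

   response = client.chat.completions.create(
       model="fake-openai-endpoint",
       messages=[{"role": "user", "content": "ping"}],
   )
   print(response.choices[0].message.content)
   ```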
2. Install locust:

   ```shell
   pip install locust
   ```

3. Create a file called `locustfile.py` on your local machine. Copy the contents from the LiteLLM load test located here. A minimal sketch of what such a file can look like is shown below.
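   This sketch assumes the proxy exposes `/health/readiness` (the endpoint the expected results below are measured against) and `/chat/completions`; the linked load test is the authoritative version:

   ```python
   # Hypothetical minimal locustfile; the linked litellm load test is the reference.
   from locust import HttpUser, between, task


   class LiteLLMProxyUser(HttpUser):
       wait_time = between(0.5, 1)  # pause 0.5-1s between tasks per simulated user

       @task
       def readiness(self):
           # The endpoint the expected results below are measured against.
           self.client.get("/health/readiness")

       @task
       def chat_completion(self):
           self.client.post(
               "/chat/completions",
               json={
                   "model": "fake-openai-endpoint",
                   "messages": [{"role": "user", "content": "ping"}],
               },
           )
   ```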
4. Start locust. Run `locust` in the same directory as your `locustfile.py` from step 3:

   ```shell
   locust
   ```

   Output on terminal:
   ```
   [2024-03-15 07:19:58,893] Starting web interface at http://0.0.0.0:8089
   [2024-03-15 07:19:58,898] Starting Locust 2.24.0
   ```

5. Run the load test in locust:
   - Head to the locust UI on http://0.0.0.0:8089
   - Set Users=100, Ramp Up Users=10, Host=Base URL of your LiteLLM Proxy
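   Alternatively, locust can run the same test without the UI using its standard headless flags. A sketch, assuming the proxy runs at http://0.0.0.0:4000 (substitute your proxy's base URL):

   ```shell
   locust --headless -u 100 -r 10 -H http://0.0.0.0:4000
   ```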
## Expected Results

Expect to see the following response times for `/health/readiness`:

- Median: 150ms
- Average: 219ms