
How to Add a Custom AI Provider in OpenClaw

This guide walks you through adding a custom AI provider that is not listed in the standard setup-ai command, using the interactive openclaw configure tool.


This method is useful for providers like:


  • NVIDIA NIM
  • Together AI
  • Deepseek
  • Fireworks AI
  • Any OpenAI-compatible endpoint


In this example, we will add NVIDIA NIM with the model nvidia/nemotron-3-super-120b-a12b.
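For context, "OpenAI-compatible" means the provider accepts chat-completion requests with the same JSON shape the OpenAI API uses. Below is a minimal sketch of that request body, built with jq so quoting stays correct; the base URL and model are the NVIDIA NIM values used in this guide, and PROVIDER_API_KEY is a placeholder name, not a variable OpenClaw sets:

```shell
# Sketch of an OpenAI-compatible chat request body.
# BASE_URL and MODEL match the NVIDIA NIM example in this guide;
# substitute your own provider's values.
BASE_URL="https://integrate.api.nvidia.com/v1"
MODEL="nvidia/nemotron-3-super-120b-a12b"

# Build the JSON body with jq so quoting stays correct.
BODY=$(jq -n --arg model "$MODEL" \
  '{model: $model, messages: [{role: "user", content: "Hello"}]}')

echo "$BODY"
# To actually send it (requires a valid key in PROVIDER_API_KEY):
#   curl -s "$BASE_URL/chat/completions" \
#     -H "Authorization: Bearer $PROVIDER_API_KEY" \
#     -H "Content-Type: application/json" -d "$BODY"
```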


Prerequisites


  • Active OpenClaw VPS - Your VPS is running and accessible.
  • SSH Access - You can connect to your VPS via SSH.
  • API Key - You have an API key from your custom provider (if required).
  • Provider Endpoint - You know the API endpoint URL.
  • Model Name - You know the exact model identifier.


Step-by-Step Guide


  1. SSH into your VPS.


  2. Run the configure command:


openclaw configure


  3. When asked "Where will the Gateway run?", select Local (since the Gateway will run on your VPS).


  4. When asked "Select sections to configure", select Model (because we want to add an AI provider/model).


  5. When asked "Model/auth provider", choose your model provider. For this guide, select Custom Provider (since the provider is not listed in the default options).


  6. When asked "API Base URL", enter your provider's API endpoint URL.


  • For NVIDIA NIM:
https://integrate.api.nvidia.com/v1



  7. When asked "How do you want to provide this API key?", select Paste API key now.


  8. When asked "API Key (leave blank if not required)", paste your actual API key.


  • For NVIDIA, it typically starts with nvapi-



  9. When asked "Endpoint compatibility", select OpenAI-compatible (most custom providers use this format).


  10. When prompted for "Model ID", enter the exact model identifier provided by your AI provider.


⚠️ Most providers offer multiple models, so make sure you enter a valid model ID. If you encounter a "verification failed" error, run the following command to check which models are available and compatible:


curl <API Base URL>/models -H "Authorization: Bearer <API Key>" -s | jq '.data[].id'


  • Replace <API Base URL> with your AI provider's API endpoint
  • Replace <API Key> with your actual API key from the provider dashboard


Example:

curl https://integrate.api.nvidia.com/v1/models -H "Authorization: Bearer nvapi-<your-api-key>" -s | jq '.data[].id'
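For reference, an OpenAI-compatible /models endpoint returns JSON shaped roughly like the inline sample below (the second model ID is illustrative, not a confirmed NVIDIA listing). The snippet applies the same jq filter as the curl command above, so you can see what the filter extracts:

```shell
# Sample of the JSON an OpenAI-compatible /models endpoint returns.
# The second model id is illustrative only.
RESPONSE='{"object":"list","data":[{"id":"nvidia/nemotron-3-super-120b-a12b","object":"model"},{"id":"meta/llama-3.1-8b-instruct","object":"model"}]}'

# Same filter as the curl command above: print only the model ids.
echo "$RESPONSE" | jq -r '.data[].id'
```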


  11. When asked "Endpoint ID", you can accept the auto-generated value or modify it.


  12. When asked "Model alias (optional)", you can enter a shorter, friendlier name for this model.

  13. Once completed, the system will display a success message confirming that your custom provider has been configured.


  14. In your chat interface (Telegram, WhatsApp, or web chat), type /new to start a fresh session with your new custom model.


You have now successfully added a custom AI provider.



Verification


Check the configuration file:


cat /opt/openclaw/data/.openclaw/openclaw.json | grep -A 20 "custom"
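If the grep output is hard to read, jq can confirm the file parses and show exactly where a "custom" entry sits in the JSON tree. The snippet below demonstrates the filter on an inline sample (the key names are hypothetical; the real openclaw.json layout may differ by version); on your VPS, point jq at /opt/openclaw/data/.openclaw/openclaw.json instead of the sample:

```shell
# Demo on an inline sample config. Key names here are hypothetical;
# on the VPS, run the same jq filter against:
#   /opt/openclaw/data/.openclaw/openclaw.json
SAMPLE='{"models":{"custom-nim":{"baseUrl":"https://integrate.api.nvidia.com/v1","modelId":"nvidia/nemotron-3-super-120b-a12b"}}}'

# List every key path containing "custom", wherever it sits in the tree.
# (jq exits non-zero if the file is not valid JSON.)
echo "$SAMPLE" | jq -r 'paths | map(tostring) | join(".")' | grep -i custom
```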



Need Further Assistance?


If you face any issues or need assistance, don’t hesitate to reach out — our support team is always ready to help!


🔧 Need help? Submit a Support Ticket

💬 Chat with us on Live Chat via our website

Updated on: 23/04/2026
