Custom Model Configs: Grok, Gemini, and Any OpenAI-Compatible Provider
OpenCaddis has always supported multiple model configurations in fabr.json — the FabrCore infrastructure was built for it. What it didn't have was a way to manage them from the UI. Today we're shipping an Additional Models section in the Settings page that lets you add Grok, Gemini, OpenRouter, or any OpenAI-compatible provider and assign them to individual agents.
What Changed
The backend already supported arbitrary named model configs. ModelConfigController serves any config from fabr.json by name, the Agent Config tab already loaded all config names into its dropdown, and agents reference configs via Args["ModelConfig"]. The missing piece was the Settings UI.
We added an Additional Models section to the Model Configuration tab — between the Embeddings section and the Save button. It lets you:
- Add custom model configs — click "Add Model" to create a new card
- Configure each model — name, provider, endpoint, API key, model identifier, timeout, max tokens
- Remove models — trash icon on any card
- Import/Export — custom models are included automatically
No backend changes were required. This is a pure UI addition that unlocks functionality the framework already had.
Supported Providers
| Provider | Endpoint | Notes |
|---|---|---|
| Azure | https://your-resource.services.ai.azure.com/... | Azure AI Foundry |
| OpenAI | (blank for default) | Standard OpenAI API |
| Grok | https://api.x.ai/v1 | xAI's Grok API |
| Gemini | https://generativelanguage.googleapis.com/v1beta/openai/ | Google's OpenAI-compatible endpoint |
| OpenRouter | https://openrouter.ai/api/v1 | Multi-provider router |
Selecting a provider auto-fills the endpoint URI. Any provider with an OpenAI-compatible API can be used by entering its endpoint manually.
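The auto-fill behavior can be pictured as a small provider-to-endpoint map. This is an illustrative TypeScript sketch, not the actual OpenCaddis source — the names and the lookup shape are assumptions; the endpoint values mirror the table above.

```typescript
// Hypothetical provider -> default endpoint map the Settings UI could
// use to pre-fill the Uri field when a provider is selected.
const DEFAULT_ENDPOINTS: Record<string, string> = {
  OpenAI: "", // blank: the client library falls back to its built-in default
  Grok: "https://api.x.ai/v1",
  Gemini: "https://generativelanguage.googleapis.com/v1beta/openai/",
  OpenRouter: "https://openrouter.ai/api/v1",
};

// Returns the endpoint to pre-fill, or undefined for providers such as
// Azure whose resource-specific URI must be entered manually.
function defaultEndpoint(provider: string): string | undefined {
  return DEFAULT_ENDPOINTS[provider];
}
```

Providers not in the map (or any custom OpenAI-compatible service) simply leave the field for manual entry.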
How It Works End-to-End
The flow is straightforward: the Settings UI writes the config to fabr.json, the Agent Config tab assigns it to an agent, and at runtime the agent resolves the model with GetChatClient("grok"). The resulting fabr.json looks like this:
```json
{
  "ModelConfigurations": [
    { "Name": "default", "Provider": "Azure", "Model": "gpt-5-nano", ... },
    { "Name": "embeddings", "Provider": "Azure", ... },
    {
      "Name": "grok",
      "Provider": "Grok",
      "Uri": "https://api.x.ai/v1",
      "Model": "grok-4-latest",
      "ApiKeyAlias": "grok-key",
      "TimeoutSeconds": 60
    },
    {
      "Name": "gemini",
      "Provider": "Gemini",
      "Uri": "https://generativelanguage.googleapis.com/v1beta/openai/",
      "Model": "gemini-2.5-flash",
      "ApiKeyAlias": "gemini-key",
      "TimeoutSeconds": 60
    }
  ],
  "ApiKeys": [
    { "Alias": "default-key", "Value": "..." },
    { "Alias": "grok-key", "Value": "..." },
    { "Alias": "gemini-key", "Value": "..." }
  ]
}
```
API Key Deduplication
A small detail that matters: when saving, the system deduplicates API keys by value. If default and embeddings share the same Azure key, only one default-key entry is stored and both configs reference it. This keeps fabr.json clean and avoids the confusion of multiple key entries with identical values.
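The dedup-by-value step can be sketched like this. The interface and function names here are assumptions for illustration — the actual save logic lives in the OpenCaddis backend — but the behavior matches the description: identical key values collapse to one stored entry, and every original alias is remapped to the surviving one.

```typescript
// Illustrative sketch of deduplicating API keys by value before saving.
interface ApiKey { Alias: string; Value: string; }

function dedupeKeys(keys: ApiKey[]): { kept: ApiKey[]; aliasFor: Map<string, string> } {
  const kept: ApiKey[] = [];
  const byValue = new Map<string, string>(); // key value -> surviving alias
  for (const k of keys) {
    if (!byValue.has(k.Value)) {
      byValue.set(k.Value, k.Alias); // first alias for this value wins
      kept.push(k);
    }
  }
  // Map every original alias to its surviving alias, so model configs
  // can be rewritten to reference the entry that was actually stored.
  const aliasFor = new Map<string, string>();
  for (const k of keys) aliasFor.set(k.Alias, byValue.get(k.Value)!);
  return { kept, aliasFor };
}
```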
Mix and Match
The real value is running different models for different agents simultaneously. Some practical combinations:
- Research agent on Grok — use xAI's model for web-connected research tasks
- Code agent on Azure GPT — keep your code assistant on a model you've tuned for code
- Fast triage agent on Gemini Flash — use a fast, cheap model for initial routing and classification
- Delegate agent on default — the router agent uses your primary model to decide where to send requests
Each agent calls GetChatClient("name") with whatever config name was assigned in the Agent Config tab. The FabrCore ModelConfigController resolves the provider, endpoint, and credentials at runtime.
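The resolution step can be sketched as a lookup over the fabr.json shape shown earlier. The real FabrCore implementation is C#; this is a hedged TypeScript sketch of the same idea — find the named config, then join it with its API key via ApiKeyAlias. The function and field names beyond those shown in the JSON above are assumptions.

```typescript
// Mirrors the fabr.json structure from the example above.
interface ModelConfig {
  Name: string; Provider: string; Uri?: string;
  Model: string; ApiKeyAlias: string; TimeoutSeconds?: number;
}
interface FabrConfig {
  ModelConfigurations: ModelConfig[];
  ApiKeys: { Alias: string; Value: string }[];
}

// Sketch of what resolving GetChatClient("name") amounts to:
// look up the named config, then its credential by alias.
function resolveModel(cfg: FabrConfig, name: string) {
  const model = cfg.ModelConfigurations.find(m => m.Name === name);
  if (!model) throw new Error(`unknown model config: ${name}`);
  const key = cfg.ApiKeys.find(k => k.Alias === model.ApiKeyAlias);
  if (!key) throw new Error(`missing API key alias: ${model.ApiKeyAlias}`);
  return { endpoint: model.Uri, model: model.Model, apiKey: key.Value };
}
```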
Validation
The UI validates custom model names to prevent conflicts:
- Names must be lowercase alphanumeric with hyphens (e.g. `grok`, `claude-router`)
- Cannot use reserved names: `default` or `embeddings`
- Names must be unique across all custom models
- Model and API Key are required fields
Validation errors block saving and display an alert with the specific issue.
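The rules above can be expressed as a single check. This is a sketch following the text, not the actual UI code — the regex and the exact error strings are assumptions.

```typescript
// Reserved names and the name pattern described in the rules above.
const RESERVED = new Set(["default", "embeddings"]);
const NAME_RE = /^[a-z0-9]+(-[a-z0-9]+)*$/; // lowercase alphanumeric with hyphens

// Returns null when the name is valid, or an error message otherwise.
function validateName(name: string, existingNames: string[]): string | null {
  if (!NAME_RE.test(name)) return "Name must be lowercase alphanumeric with hyphens";
  if (RESERVED.has(name)) return `"${name}" is a reserved name`;
  if (existingNames.includes(name)) return `"${name}" is already in use`;
  return null;
}
```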
Try It
Pull the latest OpenCaddis, navigate to Settings > Model Configuration, and scroll down to Additional Models. Add a Grok or Gemini config, save, then assign it to an agent in the Agent Config tab. Full docs are on the Custom Models documentation page.
Builder of OpenCaddis and the FabrCore framework.