Providers Overview

AISCouncil connects to large language model (LLM) providers directly from your browser. There is no proxy server in between -- your API keys and conversations go straight to the provider's API endpoint. This is the BYOK (Bring Your Own Key) model.

How It Works

  1. You obtain an API key from a provider (e.g., Anthropic, OpenAI, Google)
  2. You paste the key into AISCouncil's settings
  3. The key is stored locally in your browser (localStorage) -- it never touches our servers
  4. When you send a message, the browser calls the provider's API directly
  5. Responses stream back to you in real time via Server-Sent Events (SSE)
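The streaming step above can be sketched in TypeScript. This is a minimal illustration of parsing SSE `data:` lines from an OpenAI-style streaming response, not AISCouncil's actual internals; the function and type names are invented for the example.

```typescript
// Minimal sketch of parsing Server-Sent Events from a streaming chat
// completion. Names here are illustrative, not AISCouncil internals.

interface SseEvent {
  data: string;
}

// Split a raw SSE text buffer into events. Each event is a block of
// lines separated by a blank line; "data:" lines carry the payload.
function parseSseChunk(buffer: string): SseEvent[] {
  const events: SseEvent[] = [];
  for (const block of buffer.split("\n\n")) {
    const dataLines = block
      .split("\n")
      .filter((line) => line.startsWith("data:"))
      .map((line) => line.slice(5).trim());
    if (dataLines.length > 0) {
      events.push({ data: dataLines.join("\n") });
    }
  }
  return events;
}

// Extract the streamed text delta from one OpenAI-style event payload.
function deltaFromEvent(event: SseEvent): string {
  if (event.data === "[DONE]") return "";
  const parsed = JSON.parse(event.data);
  return parsed.choices?.[0]?.delta?.content ?? "";
}
```

In the browser, the buffer would come from reading the `fetch` response body incrementally and decoding it with `TextDecoder`.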

Key Security

API keys are stored exclusively in your browser's localStorage. They are never included in shared bot URLs, never sent to AISCouncil servers, and never logged. Only the provider you are chatting with receives your key.

Provider Comparison

Providers with Free Tiers

| Provider | API Key Required | Notable Models | Reasoning | Vision |
| --- | --- | --- | --- | --- |
| Google Gemini | Yes | Gemini 2.5 Flash, 2.5 Pro, 3 Flash Preview | Yes | Yes |
| OpenRouter | Yes | 300+ models (20+ free models from all providers) | Yes | Yes |
| Groq | Yes | Llama 3.3 70B, DeepSeek R1 Distill, Compound Beta | Yes | Yes |
| Cloudflare | Yes | Llama 3.3 70B, Llama 4 Scout, Qwen 2.5, DeepSeek R1 | No | Yes |
| Ollama | No | Any model you install locally | Varies | Varies |

Pay-as-you-go Providers

| Provider | Notable Models | Reasoning | Vision |
| --- | --- | --- | --- |
| Anthropic | Claude Opus 4.6, Sonnet 4.5, Haiku 4.5 | Yes | Yes |
| OpenAI | GPT-5, GPT-4.1, o3, o4-mini | Yes | Yes |
| xAI | Grok 4.1 Fast, Grok 4, Grok 3 | Yes | Yes |
| DeepSeek | DeepSeek Chat, R1, R1-0528 | Yes | No |
| Mistral | Mistral Large 3, Codestral, Devstral 2 | No | Yes |
| Qwen | Qwen3 Max, Qwen3 Coder, Qwen Plus | No | No |
| Zhipu | GLM-5, GLM-4.7, GLM-4.6 | Yes | No |
| Perplexity | Sonar Pro, Sonar Deep Research, R1-1776 | Yes | No |
| Cohere | Command R+, Command A, Command R7B | No | No |
| Together AI | Llama 3.3 70B, DeepSeek R1, Qwen 2.5 72B | No | No |
| Fireworks AI | Llama 3.3 70B, DeepSeek R1, Llama 4 | No | No |
| AI21 | Jamba 2 Mini, Jamba 1.7 Large, Jamba 1.5 | No | No |
| Moonshot | Kimi v1 128K, Kimi v1 32K | No | No |
| Meta Llama | Llama 4 Maverick, Llama 4 Scout, Llama 3.3 | No | No |

Start for Free

The fastest way to get started is with Google Gemini (free API key, no credit card), OpenRouter (20+ free models including DeepSeek R1, Qwen 3, and Llama 3.3), Groq (ultra-fast free inference), or Cloudflare Workers AI (10K neurons/day free). See Getting Started for a walkthrough.

Adding an API Key

  1. Open AISCouncil at aiscouncil.net
  2. Click the Settings gear icon in the sidebar
  3. Go to the AI Model tab
  4. Find the provider you want and paste your API key
  5. The key is saved immediately and persisted in your browser

You can also enter API keys during the first-run wizard when creating your first bot profile.
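Conceptually, saving a key amounts to merging it into a JSON record persisted in localStorage. The sketch below assumes a storage key named `aiscouncil.apiKeys` and a flat provider-to-key record; the app's real schema may differ.

```typescript
// Sketch of persisting provider API keys as one JSON record.
// The storage key name "aiscouncil.apiKeys" is hypothetical.

type ApiKeys = Record<string, string>;

const STORAGE_KEY = "aiscouncil.apiKeys";

// Minimal storage interface so the logic also runs outside a browser;
// window.localStorage satisfies it directly.
interface KeyStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

function loadKeys(store: KeyStore): ApiKeys {
  const raw = store.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as ApiKeys) : {};
}

function saveKey(store: KeyStore, provider: string, apiKey: string): void {
  const keys = loadKeys(store);
  keys[provider] = apiKey;
  store.setItem(STORAGE_KEY, JSON.stringify(keys));
}
```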

Per-Bot API Keys

Each bot profile can have its own API key that overrides the global key for that provider. This is useful if you have separate keys for different projects or billing accounts. Set a per-bot key in the bot's configuration panel (right sidebar).
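The override rule is simple: a bot's own key, when set, wins over the global key for the same provider. A one-function sketch, with illustrative type names:

```typescript
// Sketch of per-bot key resolution. Field names are illustrative.

interface BotProfile {
  provider: string;
  apiKey?: string; // optional per-bot override
}

function resolveApiKey(
  bot: BotProfile,
  globalKeys: Record<string, string>
): string | undefined {
  // Per-bot key takes precedence; fall back to the global key.
  return bot.apiKey ?? globalKeys[bot.provider];
}
```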

How Provider Selection Works

When you create a bot profile, you choose a provider and a model. The provider determines which API endpoint is called, and the model determines which specific AI you are chatting with.

Models are loaded from the community model registry, which is updated independently of the app. New models appear automatically when the registry is refreshed (every 24 hours, or on page reload).

API Formats

Most providers use the OpenAI-compatible Chat Completions API format. Two exceptions:

| Format | Providers | Notes |
| --- | --- | --- |
| OpenAI-compatible | OpenAI, xAI, OpenRouter, DeepSeek, Mistral, Groq, Qwen, Zhipu, Moonshot, Cohere, AI21, Perplexity, Together AI, Fireworks AI, Meta Llama, Cloudflare, Ollama | Standard `POST /v1/chat/completions` with Bearer auth |
| Anthropic | Anthropic | Custom Messages API with `x-api-key` header |
| Gemini | Google Gemini | Native `generateContent` API with `?key=` query param |

These differences are handled automatically -- you do not need to think about API formats when using the app.
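To make the three formats concrete, here is a sketch of how each one differs in endpoint and authentication. The base URLs and header names follow the providers' public API documentation, but treat the code as illustrative rather than AISCouncil's actual client:

```typescript
// Sketch of the three request shapes. URLs and headers follow public
// provider docs; this is not AISCouncil's actual client code.

interface ChatRequest {
  url: string;
  headers: Record<string, string>;
}

function buildRequest(
  format: "openai" | "anthropic" | "gemini",
  apiKey: string,
  model: string
): ChatRequest {
  switch (format) {
    case "openai":
      // Bearer auth; endpoint shape shared by most providers.
      return {
        url: "https://api.openai.com/v1/chat/completions",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${apiKey}`,
        },
      };
    case "anthropic":
      // Messages API authenticates with an x-api-key header.
      return {
        url: "https://api.anthropic.com/v1/messages",
        headers: {
          "Content-Type": "application/json",
          "x-api-key": apiKey,
          "anthropic-version": "2023-06-01",
        },
      };
    case "gemini":
      // generateContent passes the key as a query parameter.
      return {
        url: `https://generativelanguage.googleapis.com/v1beta/models/${model}:generateContent?key=${apiKey}`,
        headers: { "Content-Type": "application/json" },
      };
  }
}
```

For OpenAI-compatible providers, only the base URL changes; the path and auth scheme stay the same.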

Reasoning / Thinking Support

Several providers support reasoning or "thinking" modes where the model shows its step-by-step thought process before answering:

| Provider | Feature Name | How to Enable |
| --- | --- | --- |
| Anthropic | Extended Thinking | Set reasoning effort in config panel (budget tokens or preset) |
| Google Gemini | Thinking Config | Set reasoning effort in config panel (budget tokens or preset) |
| OpenAI-compatible | Reasoning Effort | Set to low, medium, or high in config panel |

Reasoning output appears in a collapsible "thinking" block above the model's response.
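Under the hood, each format expresses reasoning settings differently in the request body. The field names below follow the providers' public documentation, but how AISCouncil maps presets to budgets is an assumption of this sketch:

```typescript
// Sketch of per-format reasoning fields in a request body. Field
// names follow public provider docs; the mapping is illustrative.

type Effort = "low" | "medium" | "high";

function reasoningFields(
  format: "openai" | "anthropic" | "gemini",
  effort: Effort,
  budgetTokens: number
): Record<string, unknown> {
  switch (format) {
    case "openai":
      // OpenAI-compatible APIs take a simple effort string.
      return { reasoning_effort: effort };
    case "anthropic":
      // Anthropic's extended thinking takes an explicit token budget.
      return { thinking: { type: "enabled", budget_tokens: budgetTokens } };
    case "gemini":
      // Gemini nests a thinking budget under generationConfig.
      return {
        generationConfig: { thinkingConfig: { thinkingBudget: budgetTokens } },
      };
  }
}
```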

Custom Providers

You can add any OpenAI-compatible API endpoint as a custom provider:

  1. Open Settings > AI Model
  2. Scroll to Custom Providers
  3. Enter a name, API endpoint URL, and API key
  4. The custom provider appears in the provider dropdown when creating bot profiles

Custom providers are persisted in localStorage and support all standard features (streaming, tool calling, etc.) as long as the endpoint implements the OpenAI Chat Completions format.
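One practical wrinkle with custom endpoints is URL normalization: users may paste a base URL with or without the chat-completions path or a trailing slash. A small sketch of how that might be handled (purely illustrative, not the app's actual logic):

```typescript
// Sketch of normalizing a custom provider's base URL into a full
// Chat Completions endpoint. Illustrative only.

function chatCompletionsUrl(baseUrl: string): string {
  // Drop trailing slashes, then append the path only if missing.
  const trimmed = baseUrl.replace(/\/+$/, "");
  return trimmed.endsWith("/chat/completions")
    ? trimmed
    : `${trimmed}/chat/completions`;
}
```

For example, an Ollama-style base URL of `http://localhost:11434/v1` and a fully specified endpoint would both normalize to the same URL.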

Usage Tracking

AISCouncil tracks token usage per provider in Settings > Usage. You can see input tokens, output tokens, and estimated costs across all your chat sessions. This helps you monitor spending without needing to check each provider's dashboard separately.
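Cost estimation from token counts is straightforward arithmetic. A sketch with placeholder pricing fields (real per-model rates vary and change over time):

```typescript
// Sketch of a per-session cost estimate from token counts.
// Prices are placeholders; real per-model rates vary.

interface Usage {
  inputTokens: number;
  outputTokens: number;
}

interface Pricing {
  inputPerMTok: number;  // USD per million input tokens
  outputPerMTok: number; // USD per million output tokens
}

function estimateCostUsd(usage: Usage, pricing: Pricing): number {
  return (
    (usage.inputTokens / 1_000_000) * pricing.inputPerMTok +
    (usage.outputTokens / 1_000_000) * pricing.outputPerMTok
  );
}
```

For example, 1M input tokens and 500K output tokens at $3/$15 per million would estimate to $10.50.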