AI Configuration

SetGet includes AI-powered features that help teams work faster — intelligent issue suggestions, automated summaries, content generation, and conversational assistance. The AI Configuration page in the Admin Panel lets you connect an LLM provider and control how AI features are made available across the instance.

Navigate to Admin Panel > AI Configuration or go directly to /backoffice/settings/ai.

How AI works in SetGet

SetGet's AI features communicate with a configurable large language model (LLM) through a standard API. The platform sends context-aware prompts (based on workspace data, project context, and user queries) to the LLM and presents the responses within the UI.

AI features include:

| Feature | Description |
| --- | --- |
| AI Assistant | Conversational thread-based assistant accessible from the sidebar |
| Issue Suggestions | AI-generated issue titles, descriptions, and property recommendations |
| Content Generation | Draft page content, comments, and descriptions using AI |
| Smart Summaries | Summarize long threads, cycles, or module progress |
| Action Suggestions | AI recommends actions like assigning, labeling, or prioritizing |

LLM provider configuration

SetGet supports any LLM provider that exposes an OpenAI-compatible API. This includes OpenAI, Anthropic (via compatible proxies), local models (Ollama, vLLM), and hosted alternatives.
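To make "OpenAI-compatible" concrete, here is a minimal sketch of the kind of request such an API accepts. The endpoint, key, and model values are placeholders, not SetGet defaults; the request is constructed but not sent:

```python
import json
import urllib.request

# Placeholder values -- substitute your provider's details.
ENDPOINT = "https://api.openai.com/v1"
API_KEY = "sk-..."
MODEL = "gpt-4o-mini"

payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Summarize this cycle's progress."}],
    "max_tokens": 256,
    "temperature": 0.3,
}

req = urllib.request.Request(
    ENDPOINT + "/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send the request; an OpenAI-compatible
# provider returns the completion under choices[0].message.content.
```

Any provider that accepts this request shape (Ollama, vLLM, hosted proxies) can back SetGet's AI features.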

Configuration fields

| Field | Description | Required | Environment Variable |
| --- | --- | --- | --- |
| Provider Name | Display label for the provider (e.g., "OpenAI", "Verinova") | Yes | SETGET_AI_PROVIDER_NAME |
| API Endpoint URL | Base URL of the LLM API | Yes | SETGET_AI_API_ENDPOINT |
| API Key | Authentication key for the LLM API | Yes | SETGET_AI_API_KEY |
| Model | Model identifier to use (e.g., gpt-4o, claude-sonnet-4-20250514) | Yes | SETGET_AI_MODEL |
| Max Tokens | Maximum tokens per response | No | SETGET_AI_MAX_TOKENS |
| Temperature | Creativity parameter (0.0 = deterministic, 1.0 = creative) | No | SETGET_AI_TEMPERATURE |
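When configuring via environment variables instead of the UI, a deployment script might read them like this. The fallback defaults shown (4096 tokens, temperature 0.3) are illustrative, not SetGet's documented defaults:

```python
import os

def load_ai_config(env=os.environ):
    """Read SetGet AI settings from the environment.

    Required keys raise KeyError if missing; optional keys fall back to
    illustrative defaults.
    """
    return {
        "provider_name": env["SETGET_AI_PROVIDER_NAME"],
        "api_endpoint": env["SETGET_AI_API_ENDPOINT"],
        "api_key": env["SETGET_AI_API_KEY"],
        "model": env["SETGET_AI_MODEL"],
        "max_tokens": int(env.get("SETGET_AI_MAX_TOKENS", "4096")),
        "temperature": float(env.get("SETGET_AI_TEMPERATURE", "0.3")),
    }
```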

Step-by-step setup

  1. Open Admin Panel > AI Configuration.
  2. Toggle Enable AI Features to on.
  3. Enter the API Endpoint URL (e.g., https://api.openai.com/v1 for OpenAI).
  4. Enter your API Key.
  5. Select or type the Model identifier.
  6. Optionally adjust Max Tokens and Temperature.
  7. Click Test Connection to verify the provider responds.
  8. Click Save to persist the configuration.

WARNING

The API key is stored encrypted on the server and is never exposed to the frontend. However, you should still rotate keys periodically and use keys with the minimum required permissions.

Provider examples

OpenAI

| Field | Value |
| --- | --- |
| Provider Name | OpenAI |
| API Endpoint URL | https://api.openai.com/v1 |
| API Key | Your OpenAI API key (starts with sk-) |
| Model | gpt-4o or gpt-4o-mini |
| Max Tokens | 4096 |
| Temperature | 0.3 |

Self-hosted (Ollama)

| Field | Value |
| --- | --- |
| Provider Name | Ollama |
| API Endpoint URL | http://localhost:11434/v1 |
| API Key | ollama (placeholder, not validated) |
| Model | llama3.1 or your preferred model |
| Max Tokens | 4096 |
| Temperature | 0.3 |

Custom / enterprise

| Field | Value |
| --- | --- |
| Provider Name | Your provider's name |
| API Endpoint URL | Your provider's API base URL |
| API Key | Your API key or token |
| Model | Consult your provider's documentation |

TIP

For air-gapped or high-security environments, use a self-hosted model via Ollama or vLLM. This keeps all data within your network and avoids sending workspace content to external APIs.

Workspace-level AI controls

After configuring the LLM provider at the instance level, you can control AI feature availability per workspace.

| Setting | Description | Default |
| --- | --- | --- |
| AI enabled globally | Master switch for all AI features | Off (until configured) |
| Per-workspace override | Allow workspace admins to enable/disable AI for their workspace | Enabled |
| Workspace allowlist | Restrict AI to specific workspaces only | Empty (all workspaces) |
| Workspace blocklist | Block AI for specific workspaces | Empty |

Controlling AI by workspace

  1. Navigate to Admin Panel > AI Configuration > Workspace Controls.
  2. You will see a list of all workspaces with their current AI status.
  3. Toggle AI on or off for individual workspaces.
  4. Alternatively, use the allowlist or blocklist to manage access at scale.
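The global switch, blocklist, and allowlist combine as a precedence chain. This sketch assumes one plausible ordering (global switch first, then blocklist, then allowlist); verify the actual precedence on your instance:

```python
def ai_enabled_for(workspace, global_enabled, allowlist=(), blocklist=()):
    # Master switch: nothing runs if AI is disabled globally.
    if not global_enabled:
        return False
    # Blocklist is an explicit deny and takes precedence (assumption).
    if workspace in blocklist:
        return False
    # A non-empty allowlist restricts AI to the listed workspaces only;
    # an empty allowlist means all workspaces are eligible.
    if allowlist and workspace not in allowlist:
        return False
    return True
```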

WARNING

When AI is disabled for a workspace, all AI-related UI elements (assistant, suggestions, generation buttons) are hidden from users in that workspace. Existing AI threads and generated content remain accessible, but new AI interactions are blocked.

AI usage monitoring

The AI Configuration page includes a usage dashboard that helps you track consumption and costs.

| Metric | Description |
| --- | --- |
| Total requests | Number of AI API calls in the selected period |
| Total tokens (input) | Sum of prompt tokens sent to the LLM |
| Total tokens (output) | Sum of completion tokens received from the LLM |
| Average response time | Mean latency of AI API calls |
| Error rate | Percentage of failed AI API calls |
| Top workspaces | Workspaces with the highest AI usage |
| Top users | Users with the highest AI usage |

Usage data can be filtered by date range and exported as CSV for billing or audit purposes.
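The dashboard metrics are straightforward aggregates over request logs. As a sketch of how you might recompute them from an exported CSV (the `latency_ms` and `ok` field names are hypothetical, not SetGet's export schema):

```python
def summarize(requests):
    """Aggregate request records into dashboard-style metrics.

    Each record is a dict with hypothetical 'latency_ms' and 'ok' keys.
    """
    total = len(requests)
    failed = sum(1 for r in requests if not r["ok"])
    return {
        "total_requests": total,
        "error_rate": failed / total if total else 0.0,
        "avg_response_ms": sum(r["latency_ms"] for r in requests) / total if total else 0.0,
    }
```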

Setting usage limits

To prevent unexpected costs, you can set usage limits:

| Limit | Description | Default |
| --- | --- | --- |
| Monthly token limit | Maximum total tokens per month | Unlimited |
| Per-user daily limit | Maximum AI requests per user per day | Unlimited |
| Per-workspace monthly limit | Maximum tokens per workspace per month | Unlimited |

When a limit is reached, AI features are temporarily disabled for the affected scope until the limit resets.
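Conceptually, each limit is a counter checked before a request is allowed and reset at the period boundary. A minimal sketch of that idea (not SetGet's internal implementation):

```python
class TokenBudget:
    """Track token usage against an optional cap (None = unlimited)."""

    def __init__(self, limit=None):
        self.limit = limit
        self.used = 0

    def allows(self, estimated_tokens):
        # With no limit configured, every request is allowed.
        if self.limit is None:
            return True
        return self.used + estimated_tokens <= self.limit

    def record(self, tokens):
        self.used += tokens

    def reset(self):
        # Called when the day or month rolls over.
        self.used = 0
```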

Test connection

The Test Connection button sends a minimal prompt to the configured LLM API and verifies:

  1. The endpoint URL is reachable.
  2. The API key is valid.
  3. The specified model is available.
  4. A response is received within the timeout period.

If the test fails, the error message indicates the issue (network error, authentication failure, model not found, etc.).
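These failure categories map naturally onto transport errors and HTTP status codes from an OpenAI-compatible API. A sketch of that classification (the exact messages SetGet displays may differ):

```python
def classify_failure(status=None, timed_out=False, unreachable=False):
    # Transport-level problems are reported before HTTP-level ones.
    if unreachable:
        return "network error: endpoint unreachable"
    if timed_out:
        return "timeout: no response within the configured period"
    if status == 401:
        return "authentication failure: check the API key"
    if status == 404:
        return "model not found: check the model identifier and /v1 path"
    if status == 429:
        return "rate limited by the provider"
    return f"provider error (HTTP {status})"
```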

Security and privacy

| Concern | Mitigation |
| --- | --- |
| Workspace data sent to external API | Use self-hosted models for sensitive data |
| API key exposure | Key is encrypted at rest and never sent to the browser |
| AI-generated content accuracy | AI responses include a disclaimer; users must review before acting |
| Cost control | Use usage limits to cap spending |

Context sent to the LLM

When a user interacts with AI features, SetGet sends contextual data to the LLM to produce relevant responses. Understanding what data leaves your instance is important for security and compliance.

| Context type | When sent | Data included |
| --- | --- | --- |
| User query | Every AI interaction | The user's message or prompt |
| Workspace context | Workspace-scoped queries | Workspace name, description |
| Project context | Project-scoped queries | Project name, description, member list |
| Issue context | Issue-related queries | Issue title, description, status, priority, labels, assignees |
| Cycle/Module context | Planning queries | Cycle or module name, date range, progress |
| Page context | Page-related queries | Page title and content |
| Conversation history | Follow-up messages | Previous messages in the AI thread |
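To make the table concrete, here is a sketch of how an issue-scoped prompt might be assembled before leaving the instance. The structure and field names are hypothetical, illustrating only which data is included:

```python
def build_issue_prompt(user_query, issue):
    """Combine issue context with the user's query into one prompt string.

    'issue' carries only the fields listed in the table above; nothing
    else leaves the instance for an issue-scoped query.
    """
    context_lines = [
        f"Issue: {issue['title']}",
        f"Status: {issue['status']} | Priority: {issue['priority']}",
        f"Labels: {', '.join(issue['labels'])}",
        f"Assignees: {', '.join(issue['assignees'])}",
        f"Description: {issue['description']}",
    ]
    return "\n".join(context_lines) + f"\n\nUser question: {user_query}"
```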

WARNING

If your organization handles sensitive data (healthcare, financial, classified), evaluate whether sending workspace content to an external LLM complies with your data handling policies. Self-hosted models eliminate this concern entirely.

Troubleshooting AI

| Problem | Cause | Solution |
| --- | --- | --- |
| Test connection fails | Wrong endpoint URL | Verify the API base URL format (e.g., it must include /v1) |
| "Model not found" error | Model identifier incorrect | Check the model name against the provider's documentation |
| Slow responses | Model or network latency | Try a smaller/faster model, or use a closer endpoint |
| Responses cut off | Max tokens too low | Increase the Max Tokens setting |
| Incoherent responses | Temperature too high | Lower the Temperature setting (try 0.2-0.3) |
| 429 rate limit errors | Too many concurrent requests | Set per-user daily limits or upgrade your API plan |