Compare commits

...

53 Commits

Author SHA1 Message Date
Raunak Bhagat
6597e94ee8 fix Formik context ordering in Internals components
Internals components called useFormikContext but rendered ModalWrapper
(which now wraps Formik) as a child — meaning useFormikContext ran
outside the Formik context. Move ModalWrapper to the outer component
so Internals renders inside the Formik context.
2026-04-07 16:42:46 -07:00
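
A minimal sketch of the corrected nesting (assuming the ModalWrapper and Internals components these commits describe; props and setup values are illustrative):

```tsx
// ModalWrapper (which now mounts <Formik>) sits in the outer component, so
// Internals, which calls useFormikContext(), renders inside the provider.
// Before the fix, Internals rendered ModalWrapper itself, so the hook ran
// before the Formik context existed.
function ExampleProviderModal() {
  return (
    <ModalWrapper
      initialValues={initialValues}
      validationSchema={validationSchema}
      onSubmit={handleSubmit}
    >
      <Internals />
    </ModalWrapper>
  );
}
```
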
Raunak Bhagat
ecb303a5fe absorb Formik into ModalWrapper
ModalWrapper now accepts initialValues, validationSchema, and onSubmit
as props and wraps children in Formik internally with validateOnMount
and enableReinitialize. Modals no longer need to import or configure
Formik directly.
2026-04-07 16:34:21 -07:00
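
A sketch of what ModalWrapper now encapsulates; the prop names and Formik flags come from the commit message, while the generic typing and modal chrome are assumptions:

```tsx
import { Formik, type FormikValues } from "formik";
import type { AnySchema } from "yup";
import type { ReactNode } from "react";

interface ModalWrapperProps<T extends FormikValues> {
  initialValues: T;
  validationSchema: AnySchema;
  onSubmit: (values: T) => void | Promise<void>;
  children: ReactNode;
}

function ModalWrapper<T extends FormikValues>(props: ModalWrapperProps<T>) {
  const { initialValues, validationSchema, onSubmit, children } = props;
  return (
    <Formik
      initialValues={initialValues}
      validationSchema={validationSchema}
      onSubmit={onSubmit}
      validateOnMount // submit-button state is correct on first render
      enableReinitialize // re-seed the form when editing an existing provider
    >
      {/* modal chrome (title, body, footer) would wrap children here */}
      {children}
    </Formik>
  );
}
```
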
Raunak Bhagat
c1de684cb5 Saving changes 2026-04-07 16:15:45 -07:00
Raunak Bhagat
a27ed40e23 promote buildInitialValues to useInitialValues hook
Absorbs test_model_name via useTestingModelFromLLMProvider so modals
no longer need to call the hook separately. Modals now only override
provider-specific fields like api_base defaults and custom_config.
2026-04-07 15:40:28 -07:00
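
A sketch of the promoted hook; the combined buildInitialValues signature and the LLMFormValues/LLMProviderView types are stand-ins for the codebase's own:

```tsx
// useInitialValues composes buildInitialValues with the testing-model hook,
// so modals only pass provider-specific overrides.
function useInitialValues(
  providerName: LLMProviderName,
  existingLlmProvider?: LLMProviderView,
  overrides: Partial<LLMFormValues> = {}
): LLMFormValues {
  const test_model_name = useTestingModelFromLLMProvider(
    providerName,
    existingLlmProvider
  );
  return {
    ...buildInitialValues(providerName, existingLlmProvider),
    test_model_name,
    ...overrides,
  };
}

// e.g. a modal with a provider-specific default URL (illustrative values):
// useInitialValues(LLMProviderName.OLLAMA, existing, { api_base: "http://localhost:11434" });
```
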
Raunak Bhagat
81d556ff19 move is_auto_mode into buildInitialValues with default true
The auto-update toggle visibility is already controlled by
shouldShowAutoUpdateToggle on ModelSelectionField, so defaulting
to true is safe for all providers.
2026-04-07 15:28:42 -07:00
Raunak Bhagat
5ca110ca94 consolidate buildInitialValues and add useTestingModelFromLLMProvider
buildInitialValues now takes providerName and handles provider, api_key,
and api_base centrally. Modals only override api_base when they have a
provider-specific default URL.

Add useTestingModelFromLLMProvider hook to resolve the test model name
from existing provider config or wellKnownLLMProvider fallback,
replacing duplicated logic across all modals.
2026-04-07 15:25:52 -07:00
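
A sketch of the resolution order described here; the lookup table and field names beyond "wellKnownLLMProvider" are assumptions:

```tsx
function useTestingModelFromLLMProvider(
  providerName: LLMProviderName,
  existingLlmProvider?: LLMProviderView
): string | undefined {
  // Prefer a model already configured on the existing provider...
  const configured = existingLlmProvider?.model_configurations?.[0]?.name;
  // ...else fall back to the wellKnownLLMProvider entry for this provider
  // (lookup shape assumed).
  const wellKnownLLMProvider = WELL_KNOWN_LLM_PROVIDERS.find(
    (p) => p.name === providerName
  );
  return configured ?? wellKnownLLMProvider?.default_model_name;
}
```
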
Raunak Bhagat
01cb644cb9 fix onboarding test_model_name propagation and unify validation schemas
Fix svc.ts to read test_model_name instead of default_model_name for
onboarding test requests and default model assignment. Fix is_auto_mode
in OpenAI/Anthropic/VertexAI to use the form value directly instead of
comparing model names against hardcoded constants.

Consolidate duplicated validation schema logic into a single
buildValidationSchema(isOnboarding, { apiKey?, apiBase?, extra? })
helper, replacing 12 copies of the same onboarding/admin branching
pattern.
2026-04-07 14:59:53 -07:00
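
A sketch of the consolidated helper; the signature is from the commit message, while the exact onboarding/admin branching shown is an assumption (in the admin flow an existing provider can keep its stored key, so credentials are optional):

```tsx
import * as Yup from "yup";

function buildValidationSchema(
  isOnboarding: boolean,
  opts: {
    apiKey?: boolean;
    apiBase?: boolean;
    extra?: Record<string, Yup.AnySchema>;
  } = {}
) {
  const shape: Record<string, Yup.AnySchema> = { ...opts.extra };
  if (opts.apiKey) {
    shape.api_key = isOnboarding
      ? Yup.string().required("API key is required")
      : Yup.string().optional();
  }
  if (opts.apiBase) {
    shape.api_base = isOnboarding
      ? Yup.string().required("API base is required")
      : Yup.string().optional();
  }
  return Yup.object().shape(shape);
}
```
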
Raunak Bhagat
cef966300e Fix docs 2026-04-07 14:18:54 -07:00
Raunak Bhagat
2f0c0d3074 Merge branch 'main' into fix/onboarding
# Conflicts:
#	web/src/sections/modals/llmConfig/OllamaModal.tsx
#	web/src/sections/modals/llmConfig/shared.tsx
2026-04-07 14:16:27 -07:00
Raunak Bhagat
5cea4297dc Edit spacing 2026-04-07 14:06:53 -07:00
Raunak Bhagat
3659925969 use shared APIBaseField and APIKeyField across all LLM modals
Replace inline api_base and api_key field implementations in
BifrostModal, OllamaModal, LMStudioForm, LiteLLMProxyModal,
OpenRouterModal, CustomModal, and VertexAIModal with the shared
APIBaseField and APIKeyField components from shared.tsx. Add a
name prop to APIKeyField for LM Studio's custom field name.
2026-04-07 14:02:44 -07:00
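
A sketch of the shared field with its new name prop; the default value and markup are assumptions (the real component lives in shared.tsx):

```tsx
import { useFormikContext } from "formik";

interface APIKeyFieldProps {
  name?: string; // LM Studio passes its custom field name here
  label?: string;
}

function APIKeyField({ name = "api_key", label = "API Key" }: APIKeyFieldProps) {
  const { values, handleChange, handleBlur } =
    useFormikContext<Record<string, string>>();
  return (
    <label>
      {label}
      <input
        type="password"
        name={name}
        value={values[name] ?? ""}
        onChange={handleChange}
        onBlur={handleBlur}
      />
    </label>
  );
}
```
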
Wenxi
3a8ba15c8d refactor(ollama): manual fetch and fix ollama cloud base url (#9973) 2026-04-07 20:22:02 +00:00
Jessica Singh
67b7d115db fix(fe): use Modal.Footer for token rate limit modal button (#9978) 2026-04-07 20:18:01 +00:00
Raunak Bhagat
4514728a81 fix ModalWrapper prop mismatch: displayName → llmProvider 2026-04-07 12:54:45 -07:00
Jamison Lahman
0e6759135f chore(docker): docker bake cache-from :edge images (#9976) 2026-04-07 19:51:38 +00:00
acaprau
a95e2fd99a fix(indexing, powerpoint files): Patch markitdown _convert_chart_to_markdown to no-op (#9970) 2026-04-07 19:51:06 +00:00
Justin Tahara
10ad7f92da chore(mt): Update cloud tasks (#9967) 2026-04-07 19:48:30 +00:00
Raunak Bhagat
60cadfcc62 use LLMProviderName enum directly instead of local provider name constants
Remove redundant local constants (ANTHROPIC_PROVIDER_NAME, etc.) from
all modals and use LLMProviderName.X directly for providerEndpoint,
providerName in submit calls, and initial values.
2026-04-07 12:45:56 -07:00
Justin Tahara
f9f8f56ec1 fix(groups): Global Curator Permissions (#9974) 2026-04-07 19:44:07 +00:00
Raunak Bhagat
48af4045d1 move isTesting/isFormValid/isDirty/isSubmitting into ModalWrapper
ModalWrapper now reads isValid, dirty, isSubmitting from useFormikContext
and isTesting from Formik status. Removes 4 props from every modal's
ModalWrapper call and eliminates isTesting useState from all 11 modals.
submitLLMProvider receives setStatus instead of setIsTesting.
2026-04-07 12:37:59 -07:00
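
A sketch of the state ModalWrapper now derives internally; the status shape is an assumption beyond what the commit states (isTesting rides on Formik status via setStatus):

```tsx
import { useFormikContext } from "formik";

function SubmitButton() {
  const { isValid, dirty, isSubmitting, status } = useFormikContext();
  // submitLLMProvider calls setStatus({ isTesting: true }) while the test
  // request is in flight (status shape assumed).
  const isTesting = Boolean(
    (status as { isTesting?: boolean } | undefined)?.isTesting
  );
  const disabled = !isValid || !dirty || isSubmitting || isTesting;
  return (
    <button type="submit" disabled={disabled}>
      {isTesting ? "Testing..." : "Save"}
    </button>
  );
}
```
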
Jamison Lahman
91ed204f7a feat: generic OpenAI Compatible LLM Provider setup (#9968) 2026-04-07 19:17:57 +00:00
Nikolas Garza
e519490c85 docs(celery): add Prometheus metrics integration guide (#9969) 2026-04-07 19:15:13 +00:00
Raunak Bhagat
065b38f9b7 unify initialValues builders and rename internal-only form fields
- Replace buildDefaultInitialValues + buildOnboardingInitialValues with
  a single buildInitialValues(existingLlmProvider?) function
- Remove isOnboarding ternary from all modal initialValues
- Rename default_model_name → test_model_name (only used for test request)
- Rename selected_model_names → visible_model_names (only used to compute
  model_configurations[].is_visible)
- Strip both from the request body in submitLLMProvider — they were being
  sent to the backend which silently ignored them
- Rename buildDefaultValidationSchema → buildValidationSchema
2026-04-07 12:06:47 -07:00
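
A sketch of the submit-time stripping; the surrounding request shape and helper name are assumptions, but the two renamed fields are exactly the ones the commit removes from the body:

```tsx
interface LLMFormValues {
  test_model_name: string; // drives the test request only
  visible_model_names: string[]; // drives is_visible only
  model_configurations: { name: string; is_visible?: boolean }[];
  [key: string]: unknown;
}

function toUpsertRequestBody(values: LLMFormValues) {
  // Drop the two form-only fields via rest destructuring; derive
  // is_visible from visible_model_names before sending.
  const { test_model_name, visible_model_names, ...rest } = values;
  return {
    ...rest,
    model_configurations: values.model_configurations.map((mc) => ({
      ...mc,
      is_visible: visible_model_names.includes(mc.name),
    })),
  };
}
```
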
Raunak Bhagat
713e7574ea restore ModelAccessField invocations removed by overzealous sed 2026-04-07 11:33:23 -07:00
Raunak Bhagat
173f65ec48 use useFormikContext instead of threading formikProps as props
ModelSelectionField, ModelAccessField, and all 6 Internals components
now pull from Formik context directly. Removes formikProps prop
threading through the entire modal component tree.

Also fix redundant double-imports from input-layouts — files with
`* as InputLayouts` now use InputLayouts.FieldSeparator etc. instead
of a separate named import line.
2026-04-07 11:28:41 -07:00
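
A sketch of the new pattern in a field component; the form-values type and option rendering are assumptions:

```tsx
import { useFormikContext } from "formik";

function ModelSelectionField({ modelNames }: { modelNames: string[] }) {
  // Pull form state from context instead of a threaded formikProps prop.
  const { values, setFieldValue } = useFormikContext<{
    test_model_name: string;
  }>();
  return (
    <select
      value={values.test_model_name}
      onChange={(e) => setFieldValue("test_model_name", e.target.value)}
    >
      {modelNames.map((name) => (
        <option key={name} value={name}>
          {name}
        </option>
      ))}
    </select>
  );
}
```
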
Raunak Bhagat
d9b122770c refactor: rename LLM modal components and move layout utilities
- Rename LLMConfigurationPage → LLMProviderConfigurationPage
- Rename ModelsField → ModelSelectionField
- Rename ModelsAccessField → ModelAccessField
- Rename LLMConfigurationModalWrapper → ModalWrapper
- Remove SingleDefaultModelField (flatten ModelSelectionField into all modals)
- Move FieldSeparator and FieldPadder (née FieldWrapper) to input-layouts.tsx
- Update all 11 modal files to import directly from input-layouts
2026-04-07 11:14:58 -07:00
Nikolas Garza
93251cf558 feat(chat): add multi-model response panels (#9855) 2026-04-07 16:08:58 +00:00
Jamison Lahman
c31338e9b7 fix: stop falsely rejecting owner-password-only PDFs as protected (#9953)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-07 04:11:46 +00:00
Raunak Bhagat
1c32a83dc2 fix: replace React context hover tracking with pure CSS (#9961) 2026-04-06 20:57:36 -07:00
Raunak Bhagat
4a2ff7e0ef fix: a proper revamp of "Custom LLM Configuration Models" (#9958) 2026-04-07 03:27:41 +00:00
Raunak Bhagat
c3f8fad729 refactor: conditionally render LLM modals instead of early-returning null (#9954) 2026-04-07 00:32:58 +00:00
Justin Tahara
d50a5e0e27 chore(helm): Bumping Python Sandbox to v0.3.2 (#9955) 2026-04-06 22:55:14 +00:00
Evan Lohn
697a679409 chore: context gitignore (#9949) 2026-04-06 22:44:23 +00:00
Raunak Bhagat
0c95650176 fix(llm-config): extract first-class fields from custom provider key-value list (#9945) 2026-04-06 22:00:44 +00:00
Raunak Bhagat
0d3a6b255b chore: update custom LLM modal descriptions (#9946) 2026-04-06 21:55:31 +00:00
Raunak Bhagat
01748efe6a refactor: clean up KeyValueInput and EmptyMessageCard (#9947) 2026-04-06 21:18:45 +00:00
dependabot[bot]
de6c4f4a51 chore(deps-dev): bump vite from 7.3.1 to 7.3.2 in /widget (#9950)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-06 14:22:24 -07:00
dependabot[bot]
689f61ce08 chore(deps-dev): bump vite from 6.4.1 to 6.4.2 in /web (#9944)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Jamison Lahman <jamison@lahman.dev>
2026-04-06 20:23:33 +00:00
acaprau
dec836a172 chore(db): Add env var for multiple postgres hosts (#9942) 2026-04-06 19:52:04 +00:00
dependabot[bot]
b6e623ef5c chore(deps): bump actions/stale from 10.1.1 to 10.2.0 (#9936)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-06 12:45:26 -07:00
Wenxi
ec9e340656 fix: set correct ee mode for mcp server (#9933) 2026-04-06 17:44:42 +00:00
dependabot[bot]
885006cb7a chore(deps): bump softprops/action-gh-release from 2.2.2 to 2.6.1 (#9935)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-06 10:47:44 -07:00
dependabot[bot]
472073cac0 chore(deps): bump azure/setup-helm from 4.3.1 to 5.0.0 (#9934)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-06 10:46:39 -07:00
Evan Lohn
5e61659e3a chore: bump sleep time in flaky test (#9900) 2026-04-06 16:22:29 +00:00
Alex Kim
7b18949b63 feat(helm): add optional CA certificate update step to api-server startup (#9378)
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2026-04-06 15:51:21 +00:00
Wenxi
efe51c108e refactor: remove dead LLM provider code from chat page load path (#9925) 2026-04-06 04:33:57 +00:00
Nikolas Garza
c092d16c01 feat(chat): add multi-model selector and chat hook (#9854) 2026-04-05 23:01:32 +00:00
Nikolas Garza
da715eaa58 fix(federated): prevent masked credentials from corrupting stored secrets (#9868) 2026-04-05 22:41:39 +00:00
Wenxi
bb18d39765 chore: rm remnants of old kombu psql broker code (#9924) 2026-04-05 20:18:47 +00:00
Raunak Bhagat
abc2cd5572 refactor: flatten opal card layouts, add children to CardHeaderLayout (#9907) 2026-04-04 02:50:55 +00:00
Raunak Bhagat
a704acbf73 fix: Edit AccountPopover + Separator's appearances when folded (#9906) 2026-04-04 01:24:59 +00:00
Jamison Lahman
8737122133 Revert "chore(deps): bump litellm from 1.81.6 to 1.83.0 (#9898)" (#9908) 2026-04-03 18:06:54 -07:00
Raunak Bhagat
c5d7cfa896 refactor: rework admin sidebar footer (#9895) 2026-04-04 00:08:42 +00:00
112 changed files with 4160 additions and 3079 deletions

View File

@@ -228,7 +228,7 @@ jobs:
- name: Create GitHub Release
id: create-release
uses: softprops/action-gh-release@da05d552573ad5aba039eaac05058a918a7bf631 # ratchet:softprops/action-gh-release@v2
uses: softprops/action-gh-release@153bb8e04406b158c6c84fc1615b65b24149a1fe # ratchet:softprops/action-gh-release@v2
with:
tag_name: ${{ steps.release-tag.outputs.tag }}
name: ${{ steps.release-tag.outputs.tag }}

View File

@@ -21,7 +21,7 @@ jobs:
persist-credentials: false
- name: Install Helm CLI
uses: azure/setup-helm@1a275c3b69536ee54be43f2070a358922e12c8d4 # ratchet:azure/setup-helm@v4
uses: azure/setup-helm@dda3372f752e03dde6b3237bc9431cdc2f7a02a2 # ratchet:azure/setup-helm@v5.0.0
with:
version: v3.12.1

View File

@@ -13,7 +13,7 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 45
steps:
- uses: actions/stale@997185467fa4f803885201cee163a9f38240193d # ratchet:actions/stale@v10
- uses: actions/stale@b5d41d4e1d5dceea10e7104786b73624c18a190f # ratchet:actions/stale@v10
with:
stale-issue-message: 'This issue is stale because it has been open 75 days with no activity. Remove stale label or comment or this will be closed in 15 days.'
stale-pr-message: 'This PR is stale because it has been open 75 days with no activity. Remove stale label or comment or this will be closed in 15 days.'

View File

@@ -36,7 +36,7 @@ jobs:
persist-credentials: false
- name: Set up Helm
uses: azure/setup-helm@1a275c3b69536ee54be43f2070a358922e12c8d4 # ratchet:azure/setup-helm@v4.3.1
uses: azure/setup-helm@dda3372f752e03dde6b3237bc9431cdc2f7a02a2 # ratchet:azure/setup-helm@v5.0.0
with:
version: v3.19.0

3
.gitignore vendored
View File

@@ -59,3 +59,6 @@ node_modules
# plans
plans/
# Added context for LLMs
onyx-llm-context/

View File

@@ -1,4 +1,4 @@
from typing import Any, Literal
from typing import Any
from onyx.db.engine.iam_auth import get_iam_auth_token
from onyx.configs.app_configs import USE_IAM_AUTH
from onyx.configs.app_configs import POSTGRES_HOST
@@ -19,7 +19,6 @@ from logging.config import fileConfig
from alembic import context
from sqlalchemy.ext.asyncio import create_async_engine
from sqlalchemy.sql.schema import SchemaItem
from onyx.configs.constants import SSL_CERT_FILE
from shared_configs.configs import (
MULTI_TENANT,
@@ -45,8 +44,6 @@ if config.config_file_name is not None and config.attributes.get(
target_metadata = [Base.metadata, ResultModelBase.metadata]
EXCLUDE_TABLES = {"kombu_queue", "kombu_message"}
logger = logging.getLogger(__name__)
ssl_context: ssl.SSLContext | None = None
@@ -56,25 +53,6 @@ if USE_IAM_AUTH:
ssl_context = ssl.create_default_context(cafile=SSL_CERT_FILE)
def include_object(
object: SchemaItem, # noqa: ARG001
name: str | None,
type_: Literal[
"schema",
"table",
"column",
"index",
"unique_constraint",
"foreign_key_constraint",
],
reflected: bool, # noqa: ARG001
compare_to: SchemaItem | None, # noqa: ARG001
) -> bool:
if type_ == "table" and name in EXCLUDE_TABLES:
return False
return True
def filter_tenants_by_range(
tenant_ids: list[str], start_range: int | None = None, end_range: int | None = None
) -> list[str]:
@@ -231,7 +209,6 @@ def do_run_migrations(
context.configure(
connection=connection,
target_metadata=target_metadata, # type: ignore
include_object=include_object,
version_table_schema=schema_name,
include_schemas=True,
compare_type=True,
@@ -405,7 +382,6 @@ def run_migrations_offline() -> None:
url=url,
target_metadata=target_metadata, # type: ignore
literal_binds=True,
include_object=include_object,
version_table_schema=schema,
include_schemas=True,
script_location=config.get_main_option("script_location"),
@@ -447,7 +423,6 @@ def run_migrations_offline() -> None:
url=url,
target_metadata=target_metadata, # type: ignore
literal_binds=True,
include_object=include_object,
version_table_schema=schema,
include_schemas=True,
script_location=config.get_main_option("script_location"),
@@ -490,7 +465,6 @@ def run_migrations_online() -> None:
context.configure(
connection=connection,
target_metadata=target_metadata, # type: ignore
include_object=include_object,
version_table_schema=schema_name,
include_schemas=True,
compare_type=True,

View File

@@ -1,11 +1,9 @@
import asyncio
from logging.config import fileConfig
from typing import Literal
from sqlalchemy import pool
from sqlalchemy.engine import Connection
from sqlalchemy.ext.asyncio import create_async_engine
from sqlalchemy.schema import SchemaItem
from alembic import context
from onyx.db.engine.sql_engine import build_connection_string
@@ -35,27 +33,6 @@ target_metadata = [PublicBase.metadata]
# my_important_option = config.get_main_option("my_important_option")
# ... etc.
EXCLUDE_TABLES = {"kombu_queue", "kombu_message"}
def include_object(
object: SchemaItem, # noqa: ARG001
name: str | None,
type_: Literal[
"schema",
"table",
"column",
"index",
"unique_constraint",
"foreign_key_constraint",
],
reflected: bool, # noqa: ARG001
compare_to: SchemaItem | None, # noqa: ARG001
) -> bool:
if type_ == "table" and name in EXCLUDE_TABLES:
return False
return True
def run_migrations_offline() -> None:
"""Run migrations in 'offline' mode.
@@ -85,7 +62,6 @@ def do_run_migrations(connection: Connection) -> None:
context.configure(
connection=connection,
target_metadata=target_metadata, # type: ignore[arg-type]
include_object=include_object,
)
with context.begin_transaction():

View File

@@ -5,6 +5,7 @@ from celery import Task
from celery.exceptions import SoftTimeLimitExceeded
from redis.lock import Lock as RedisLock
from ee.onyx.server.tenants.product_gating import get_gated_tenants
from onyx.background.celery.apps.app_base import task_logger
from onyx.background.celery.tasks.beat_schedule import BEAT_EXPIRES_DEFAULT
from onyx.configs.constants import CELERY_GENERIC_BEAT_LOCK_TIMEOUT
@@ -30,6 +31,7 @@ def cloud_beat_task_generator(
queue: str = OnyxCeleryTask.DEFAULT,
priority: int = OnyxCeleryPriority.MEDIUM,
expires: int = BEAT_EXPIRES_DEFAULT,
skip_gated: bool = True,
) -> bool | None:
"""a lightweight task used to kick off individual beat tasks per tenant."""
time_start = time.monotonic()
@@ -48,20 +50,22 @@ def cloud_beat_task_generator(
last_lock_time = time.monotonic()
tenant_ids: list[str] = []
num_processed_tenants = 0
num_skipped_gated = 0
try:
tenant_ids = get_all_tenant_ids()
# NOTE: for now, we are running tasks for gated tenants, since we want to allow
# connector deletion to run successfully. The new plan is to continously prune
# the gated tenants set, so we won't have a build up of old, unused gated tenants.
# Keeping this around in case we want to revert to the previous behavior.
# gated_tenants = get_gated_tenants()
# Per-task control over whether gated tenants are included. Most periodic tasks
# do no useful work on gated tenants and just waste DB connections fanning out
# to ~10k+ inactive tenants. A small number of cleanup tasks (connector deletion,
# checkpoint/index attempt cleanup) need to run on gated tenants and pass
# `skip_gated=False` from the beat schedule.
gated_tenants: set[str] = get_gated_tenants() if skip_gated else set()
for tenant_id in tenant_ids:
# Same comment here as the above NOTE
# if tenant_id in gated_tenants:
# continue
if tenant_id in gated_tenants:
num_skipped_gated += 1
continue
current_time = time.monotonic()
if current_time - last_lock_time >= (CELERY_GENERIC_BEAT_LOCK_TIMEOUT / 4):
@@ -104,6 +108,7 @@ def cloud_beat_task_generator(
f"cloud_beat_task_generator finished: "
f"task={task_name} "
f"num_processed_tenants={num_processed_tenants} "
f"num_skipped_gated={num_skipped_gated} "
f"num_tenants={len(tenant_ids)} "
f"elapsed={time_elapsed:.2f}"
)

View File

@@ -1,6 +1,7 @@
# Overview of Onyx Background Jobs
The background jobs take care of:
1. Pulling/Indexing documents (from connectors)
2. Updating document metadata (from connectors)
3. Cleaning up checkpoints and logic around indexing work (indexing checkpoints and index attempt metadata)
@@ -9,37 +10,41 @@ The background jobs take care of:
## Worker → Queue Mapping
| Worker | File | Queues |
|--------|------|--------|
| Primary | `apps/primary.py` | `celery` |
| Light | `apps/light.py` | `vespa_metadata_sync`, `connector_deletion`, `doc_permissions_upsert`, `checkpoint_cleanup`, `index_attempt_cleanup` |
| Heavy | `apps/heavy.py` | `connector_pruning`, `connector_doc_permissions_sync`, `connector_external_group_sync`, `csv_generation`, `sandbox` |
| Docprocessing | `apps/docprocessing.py` | `docprocessing` |
| Docfetching | `apps/docfetching.py` | `connector_doc_fetching` |
| User File Processing | `apps/user_file_processing.py` | `user_file_processing`, `user_file_project_sync`, `user_file_delete` |
| Monitoring | `apps/monitoring.py` | `monitoring` |
| Background (consolidated) | `apps/background.py` | All queues above except `celery` |
| Worker | File | Queues |
| ------------------------- | ------------------------------ | -------------------------------------------------------------------------------------------------------------------- |
| Primary | `apps/primary.py` | `celery` |
| Light | `apps/light.py` | `vespa_metadata_sync`, `connector_deletion`, `doc_permissions_upsert`, `checkpoint_cleanup`, `index_attempt_cleanup` |
| Heavy | `apps/heavy.py` | `connector_pruning`, `connector_doc_permissions_sync`, `connector_external_group_sync`, `csv_generation`, `sandbox` |
| Docprocessing | `apps/docprocessing.py` | `docprocessing` |
| Docfetching | `apps/docfetching.py` | `connector_doc_fetching` |
| User File Processing | `apps/user_file_processing.py` | `user_file_processing`, `user_file_project_sync`, `user_file_delete` |
| Monitoring | `apps/monitoring.py` | `monitoring` |
| Background (consolidated) | `apps/background.py` | All queues above except `celery` |
## Non-Worker Apps
| App | File | Purpose |
|-----|------|---------|
| **Beat** | `beat.py` | Celery beat scheduler with `DynamicTenantScheduler` that generates per-tenant periodic task schedules |
| **Client** | `client.py` | Minimal app for task submission from non-worker processes (e.g., API server) |
| App | File | Purpose |
| ---------- | ----------- | ----------------------------------------------------------------------------------------------------- |
| **Beat** | `beat.py` | Celery beat scheduler with `DynamicTenantScheduler` that generates per-tenant periodic task schedules |
| **Client** | `client.py` | Minimal app for task submission from non-worker processes (e.g., API server) |
### Shared Module
`app_base.py` provides:
- `TenantAwareTask` - Base task class that sets tenant context
- Signal handlers for logging, cleanup, and lifecycle events
- Readiness probes and health checks
## Worker Details
### Primary (Coordinator and task dispatcher)
It is the single worker which handles tasks from the default celery queue. Its singleton status is ensured by the `PRIMARY_WORKER` Redis lock,
which it touches every `CELERY_PRIMARY_WORKER_LOCK_TIMEOUT / 8` seconds (using Celery Bootsteps).
On startup:
- waits for redis, postgres, document index to all be healthy
- acquires the singleton lock
- cleans all the redis states associated with background jobs
@@ -47,34 +52,34 @@ On startup:
Then it cycles through its tasks as scheduled by Celery Beat:
| Task | Frequency | Description |
|------|-----------|-------------|
| `check_for_indexing` | 15s | Scans for connectors needing indexing → dispatches to `DOCFETCHING` queue |
| `check_for_vespa_sync_task` | 20s | Finds stale documents/document sets → dispatches sync tasks to `VESPA_METADATA_SYNC` queue |
| `check_for_pruning` | 20s | Finds connectors due for pruning → dispatches to `CONNECTOR_PRUNING` queue |
| `check_for_connector_deletion` | 20s | Processes deletion requests → dispatches to `CONNECTOR_DELETION` queue |
| `check_for_user_file_processing` | 20s | Checks for user uploads → dispatches to `USER_FILE_PROCESSING` queue |
| `check_for_checkpoint_cleanup` | 1h | Cleans up old indexing checkpoints |
| `check_for_index_attempt_cleanup` | 30m | Cleans up old index attempts |
| `kombu_message_cleanup_task` | periodic | Cleans orphaned Kombu messages from DB (Kombu being the messaging framework used by Celery) |
| `celery_beat_heartbeat` | 1m | Heartbeat for Beat watchdog |
| Task | Frequency | Description |
| --------------------------------- | --------- | ------------------------------------------------------------------------------------------ |
| `check_for_indexing` | 15s | Scans for connectors needing indexing → dispatches to `DOCFETCHING` queue |
| `check_for_vespa_sync_task` | 20s | Finds stale documents/document sets → dispatches sync tasks to `VESPA_METADATA_SYNC` queue |
| `check_for_pruning` | 20s | Finds connectors due for pruning → dispatches to `CONNECTOR_PRUNING` queue |
| `check_for_connector_deletion` | 20s | Processes deletion requests → dispatches to `CONNECTOR_DELETION` queue |
| `check_for_user_file_processing` | 20s | Checks for user uploads → dispatches to `USER_FILE_PROCESSING` queue |
| `check_for_checkpoint_cleanup` | 1h | Cleans up old indexing checkpoints |
| `check_for_index_attempt_cleanup` | 30m | Cleans up old index attempts |
| `celery_beat_heartbeat` | 1m | Heartbeat for Beat watchdog |
Watchdog is a separate Python process managed by supervisord which runs alongside celery workers. It checks the ONYX_CELERY_BEAT_HEARTBEAT_KEY in
Redis to ensure Celery Beat is not dead. Beat schedules the celery_beat_heartbeat for Primary to touch the key and share that it's still alive.
See supervisord.conf for watchdog config.
### Light
Fast, short-lived tasks that are not resource intensive. High concurrency:
Can have 24 concurrent workers, each with a prefetch of 8 for a total of 192 tasks in flight at once.
Tasks it handles:
- Syncs access/permissions, document sets, boosts, hidden state
- Deletes documents that are marked for deletion in Postgres
- Cleanup of checkpoints and index attempts
### Heavy
Long-running, resource-intensive tasks; handles pruning and sandbox operations. Low concurrency: a max of 4 workers with a prefetch of 1.
Does not interact with the Document Index; it handles syncs with external systems, making large-volume API calls for pruning, fetching permissions, etc.
@@ -83,16 +88,24 @@ Generates CSV exports which may take a long time with significant data in Postgr
Sandbox (new feature) for running Next.js, Python virtual env, OpenCode AI Agent, and access to knowledge files
### Docprocessing, Docfetching, User File Processing
Docprocessing and Docfetching are for indexing documents:
- Docfetching runs connectors to pull documents from external APIs (Google Drive, Confluence, etc.), stores batches to file storage, and dispatches docprocessing tasks
- Docprocessing retrieves batches, runs the indexing pipeline (chunking, embedding), and indexes into the Document Index
User Files come from uploads directly via the input bar
Docprocessing and Docfetching are for indexing documents:
- Docfetching runs connectors to pull documents from external APIs (Google Drive, Confluence, etc.), stores batches to file storage, and dispatches docprocessing tasks
- Docprocessing retrieves batches, runs the indexing pipeline (chunking, embedding), and indexes into the Document Index
- User Files come from uploads directly via the input bar
### Monitoring
Observability and metrics collection:
- Queue lengths, connector success/failure, lconnector latencies
- Queue lengths, connector success/failure, connector latencies
- Memory of supervisor managed processes (workers, beat, slack)
- Cloud- and multitenant-specific monitoring
## Prometheus Metrics
Workers can expose Prometheus metrics via a standalone HTTP server. Currently docfetching and docprocessing have push-based task lifecycle metrics; the monitoring worker runs pull-based collectors for queue depth and connector health.
For the full metric reference, integration guide, and PromQL examples, see [`docs/METRICS.md`](../../../docs/METRICS.md#celery-worker-metrics).

View File

@@ -317,7 +317,6 @@ celery_app.autodiscover_tasks(
"onyx.background.celery.tasks.docprocessing",
"onyx.background.celery.tasks.evals",
"onyx.background.celery.tasks.hierarchyfetching",
"onyx.background.celery.tasks.periodic",
"onyx.background.celery.tasks.pruning",
"onyx.background.celery.tasks.shared",
"onyx.background.celery.tasks.vespa",

View File

@@ -75,6 +75,8 @@ beat_task_templates: list[dict] = [
"options": {
"priority": OnyxCeleryPriority.LOW,
"expires": BEAT_EXPIRES_DEFAULT,
# Run on gated tenants too — they may still have stale checkpoints to clean.
"skip_gated": False,
},
},
{
@@ -84,6 +86,8 @@ beat_task_templates: list[dict] = [
"options": {
"priority": OnyxCeleryPriority.MEDIUM,
"expires": BEAT_EXPIRES_DEFAULT,
# Run on gated tenants too — they may still have stale index attempts.
"skip_gated": False,
},
},
{
@@ -93,6 +97,8 @@ beat_task_templates: list[dict] = [
"options": {
"priority": OnyxCeleryPriority.MEDIUM,
"expires": BEAT_EXPIRES_DEFAULT,
# Gated tenants may still have connectors awaiting deletion.
"skip_gated": False,
},
},
{
@@ -266,7 +272,7 @@ def make_cloud_generator_task(task: dict[str, Any]) -> dict[str, Any]:
cloud_task["kwargs"] = {}
cloud_task["kwargs"]["task_name"] = task["task"]
optional_fields = ["queue", "priority", "expires"]
optional_fields = ["queue", "priority", "expires", "skip_gated"]
for field in optional_fields:
if field in task["options"]:
cloud_task["kwargs"][field] = task["options"][field]
@@ -359,7 +365,13 @@ if not MULTI_TENANT:
]
)
tasks_to_schedule.extend(beat_task_templates)
# `skip_gated` is a cloud-only hint consumed by `cloud_beat_task_generator`. Strip
# it before extending the self-hosted schedule so it doesn't leak into apply_async
# as an unrecognised option on every fired task message.
for _template in beat_task_templates:
_self_hosted_template = copy.deepcopy(_template)
_self_hosted_template["options"].pop("skip_gated", None)
tasks_to_schedule.append(_self_hosted_template)
def generate_cloud_tasks(

View File

@@ -1,138 +0,0 @@
#####
# Periodic Tasks
#####
import json
from typing import Any
from celery import shared_task
from celery.contrib.abortable import AbortableTask # type: ignore
from celery.exceptions import TaskRevokedError
from sqlalchemy import inspect
from sqlalchemy import text
from sqlalchemy.orm import Session
from onyx.background.celery.apps.app_base import task_logger
from onyx.configs.app_configs import JOB_TIMEOUT
from onyx.configs.constants import OnyxCeleryTask
from onyx.configs.constants import PostgresAdvisoryLocks
from onyx.db.engine.sql_engine import get_session_with_current_tenant
@shared_task(
name=OnyxCeleryTask.KOMBU_MESSAGE_CLEANUP_TASK,
soft_time_limit=JOB_TIMEOUT,
bind=True,
base=AbortableTask,
)
def kombu_message_cleanup_task(self: Any, tenant_id: str) -> int: # noqa: ARG001
"""Runs periodically to clean up the kombu_message table"""
# we will select messages older than this amount to clean up
KOMBU_MESSAGE_CLEANUP_AGE = 7 # days
KOMBU_MESSAGE_CLEANUP_PAGE_LIMIT = 1000
ctx = {}
ctx["last_processed_id"] = 0
ctx["deleted"] = 0
ctx["cleanup_age"] = KOMBU_MESSAGE_CLEANUP_AGE
ctx["page_limit"] = KOMBU_MESSAGE_CLEANUP_PAGE_LIMIT
with get_session_with_current_tenant() as db_session:
# Exit the task if we can't take the advisory lock
result = db_session.execute(
text("SELECT pg_try_advisory_lock(:id)"),
{"id": PostgresAdvisoryLocks.KOMBU_MESSAGE_CLEANUP_LOCK_ID.value},
).scalar()
if not result:
return 0
while True:
if self.is_aborted():
raise TaskRevokedError("kombu_message_cleanup_task was aborted.")
b = kombu_message_cleanup_task_helper(ctx, db_session)
if not b:
break
db_session.commit()
if ctx["deleted"] > 0:
task_logger.info(
f"Deleted {ctx['deleted']} orphaned messages from kombu_message."
)
return ctx["deleted"]
def kombu_message_cleanup_task_helper(ctx: dict, db_session: Session) -> bool:
"""
Helper function to clean up old messages from the `kombu_message` table that are no longer relevant.
This function retrieves messages from the `kombu_message` table that are no longer visible and
older than a specified interval. It checks if the corresponding task_id exists in the
`celery_taskmeta` table. If the task_id does not exist, the message is deleted.
Args:
ctx (dict): A context dictionary containing configuration parameters such as:
- 'cleanup_age' (int): The age in days after which messages are considered old.
- 'page_limit' (int): The maximum number of messages to process in one batch.
- 'last_processed_id' (int): The ID of the last processed message to handle pagination.
- 'deleted' (int): A counter to track the number of deleted messages.
db_session (Session): The SQLAlchemy database session for executing queries.
Returns:
bool: Returns True if there are more rows to process, False if not.
"""
inspector = inspect(db_session.bind)
if not inspector:
return False
# With the move to redis as celery's broker and backend, kombu tables may not even exist.
# We can fail silently.
if not inspector.has_table("kombu_message"):
return False
query = text(
"""
SELECT id, timestamp, payload
FROM kombu_message WHERE visible = 'false'
AND timestamp < CURRENT_TIMESTAMP - INTERVAL :interval_days
AND id > :last_processed_id
ORDER BY id
LIMIT :page_limit
"""
)
kombu_messages = db_session.execute(
query,
{
"interval_days": f"{ctx['cleanup_age']} days",
"page_limit": ctx["page_limit"],
"last_processed_id": ctx["last_processed_id"],
},
).fetchall()
if len(kombu_messages) == 0:
return False
for msg in kombu_messages:
payload = json.loads(msg[2])
task_id = payload["headers"]["id"]
# Check if task_id exists in celery_taskmeta
task_exists = db_session.execute(
text("SELECT 1 FROM celery_taskmeta WHERE task_id = :task_id"),
{"task_id": task_id},
).fetchone()
# If task_id does not exist, delete the message
if not task_exists:
result = db_session.execute(
text("DELETE FROM kombu_message WHERE id = :message_id"),
{"message_id": msg[0]},
)
if result.rowcount > 0: # type: ignore
ctx["deleted"] += 1
ctx["last_processed_id"] = msg[0]
return True

View File

@@ -379,6 +379,14 @@ POSTGRES_HOST = os.environ.get("POSTGRES_HOST") or "127.0.0.1"
POSTGRES_PORT = os.environ.get("POSTGRES_PORT") or "5432"
POSTGRES_DB = os.environ.get("POSTGRES_DB") or "postgres"
AWS_REGION_NAME = os.environ.get("AWS_REGION_NAME") or "us-east-2"
# Comma-separated replica / multi-host list. If unset, defaults to POSTGRES_HOST
# only.
_POSTGRES_HOSTS_STR = os.environ.get("POSTGRES_HOSTS", "").strip()
POSTGRES_HOSTS: list[str] = (
[h.strip() for h in _POSTGRES_HOSTS_STR.split(",") if h.strip()]
if _POSTGRES_HOSTS_STR
else [POSTGRES_HOST]
)
POSTGRES_API_SERVER_POOL_SIZE = int(
os.environ.get("POSTGRES_API_SERVER_POOL_SIZE") or 40

View File

@@ -12,6 +12,11 @@ SLACK_USER_TOKEN_PREFIX = "xoxp-"
SLACK_BOT_TOKEN_PREFIX = "xoxb-"
ONYX_EMAILABLE_LOGO_MAX_DIM = 512
# The mask_string() function in encryption.py uses "•" (U+2022 BULLET) to mask secrets.
MASK_CREDENTIAL_CHAR = "\u2022"
# Pattern produced by mask_string for strings >= 14 chars: "abcd...wxyz" (exactly 11 chars)
MASK_CREDENTIAL_LONG_RE = re.compile(r"^.{4}\.{3}.{4}$")
SOURCE_TYPE = "source_type"
# stored in the `metadata` of a chunk. Used to signify that this chunk should
# not be used for QA. For example, Google Drive file types which can't be parsed
@@ -391,10 +396,6 @@ class MilestoneRecordType(str, Enum):
REQUESTED_CONNECTOR = "requested_connector"
class PostgresAdvisoryLocks(Enum):
KOMBU_MESSAGE_CLEANUP_LOCK_ID = auto()
class OnyxCeleryQueues:
# "celery" is the default queue defined by celery and also the queue
# we are running in the primary worker to run system tasks
@@ -577,7 +578,6 @@ class OnyxCeleryTask:
MONITOR_PROCESS_MEMORY = "monitor_process_memory"
CELERY_BEAT_HEARTBEAT = "celery_beat_heartbeat"
KOMBU_MESSAGE_CLEANUP_TASK = "kombu_message_cleanup_task"
CONNECTOR_PERMISSION_SYNC_GENERATOR_TASK = (
"connector_permission_sync_generator_task"
)

View File

@@ -8,6 +8,8 @@ from sqlalchemy.orm import selectinload
from sqlalchemy.orm import Session
from onyx.configs.constants import FederatedConnectorSource
from onyx.configs.constants import MASK_CREDENTIAL_CHAR
from onyx.configs.constants import MASK_CREDENTIAL_LONG_RE
from onyx.db.engine.sql_engine import get_session_with_current_tenant
from onyx.db.models import DocumentSet
from onyx.db.models import FederatedConnector
@@ -45,6 +47,23 @@ def fetch_all_federated_connectors_parallel() -> list[FederatedConnector]:
return fetch_all_federated_connectors(db_session)
def _reject_masked_credentials(credentials: dict[str, Any]) -> None:
"""Raise if any credential string value contains mask placeholder characters.
mask_string() has two output formats:
- Short strings (< 14 chars): "••••••••••••" (U+2022 BULLET)
- Long strings (>= 14 chars): "abcd...wxyz" (first4 + "..." + last4)
Both must be rejected.
"""
for key, val in credentials.items():
if isinstance(val, str) and (
MASK_CREDENTIAL_CHAR in val or MASK_CREDENTIAL_LONG_RE.match(val)
):
raise ValueError(
f"Credential field '{key}' contains masked placeholder characters. Please provide the actual credential value."
)
def validate_federated_connector_credentials(
source: FederatedConnectorSource,
credentials: dict[str, Any],
@@ -66,6 +85,8 @@ def create_federated_connector(
config: dict[str, Any] | None = None,
) -> FederatedConnector:
"""Create a new federated connector with credential and config validation."""
_reject_masked_credentials(credentials)
# Validate credentials before creating
if not validate_federated_connector_credentials(source, credentials):
raise ValueError(
@@ -277,6 +298,8 @@ def update_federated_connector(
)
if credentials is not None:
_reject_masked_credentials(credentials)
# Validate credentials before updating
if not validate_federated_connector_credentials(
federated_connector.source, credentials

View File

@@ -236,14 +236,15 @@ def upsert_llm_provider(
db_session.add(existing_llm_provider)
# Filter out empty strings and None values from custom_config to allow
# providers like Bedrock to fall back to IAM roles when credentials are not provided
# providers like Bedrock to fall back to IAM roles when credentials are not provided.
# NOTE: An empty dict ({}) is preserved as-is — it signals that the provider was
# created via the custom modal and must be reopened with CustomModal, not a
# provider-specific modal. Only None means "no custom config at all".
custom_config = llm_provider_upsert_request.custom_config
if custom_config:
custom_config = {
k: v for k, v in custom_config.items() if v is not None and v.strip() != ""
}
# Set to None if the dict is empty after filtering
custom_config = custom_config or None
api_base = llm_provider_upsert_request.api_base or None
existing_llm_provider.provider = llm_provider_upsert_request.provider
@@ -303,16 +304,7 @@ def upsert_llm_provider(
).delete(synchronize_session="fetch")
db_session.flush()
# Import here to avoid circular imports
from onyx.llm.utils import get_max_input_tokens
for model_config in llm_provider_upsert_request.model_configurations:
max_input_tokens = model_config.max_input_tokens
if max_input_tokens is None:
max_input_tokens = get_max_input_tokens(
model_name=model_config.name,
model_provider=llm_provider_upsert_request.provider,
)
supported_flows = [LLMModelFlowType.CHAT]
if model_config.supports_image_input:
@@ -325,7 +317,7 @@ def upsert_llm_provider(
model_configuration_id=existing.id,
supported_flows=supported_flows,
is_visible=model_config.is_visible,
max_input_tokens=max_input_tokens,
max_input_tokens=model_config.max_input_tokens,
display_name=model_config.display_name,
)
else:
@@ -335,7 +327,7 @@ def upsert_llm_provider(
model_name=model_config.name,
supported_flows=supported_flows,
is_visible=model_config.is_visible,
max_input_tokens=max_input_tokens,
max_input_tokens=model_config.max_input_tokens,
display_name=model_config.display_name,
)

View File

@@ -52,9 +52,21 @@ KNOWN_OPENPYXL_BUGS = [
def get_markitdown_converter() -> "MarkItDown":
global _MARKITDOWN_CONVERTER
from markitdown import MarkItDown
if _MARKITDOWN_CONVERTER is None:
from markitdown import MarkItDown
# Patch this function to effectively no-op because we were seeing this
# module take an inordinate amount of time to convert charts to markdown,
# making some powerpoint files with many or complicated charts nearly
# unindexable.
from markitdown.converters._pptx_converter import PptxConverter
setattr(
PptxConverter,
"_convert_chart_to_markdown",
lambda self, chart: "\n\n[chart omitted]\n\n", # noqa: ARG005
)
_MARKITDOWN_CONVERTER = MarkItDown(enable_plugins=False)
return _MARKITDOWN_CONVERTER
@@ -205,18 +217,26 @@ def read_pdf_file(
try:
pdf_reader = PdfReader(file)
if pdf_reader.is_encrypted and pdf_pass is not None:
if pdf_reader.is_encrypted:
# Try the explicit password first, then fall back to an empty
# string. Owner-password-only PDFs (permission restrictions but
# no open password) decrypt successfully with "".
# See https://github.com/onyx-dot-app/onyx/issues/9754
passwords = [p for p in [pdf_pass, ""] if p is not None]
decrypt_success = False
try:
decrypt_success = pdf_reader.decrypt(pdf_pass) != 0
except Exception:
logger.error("Unable to decrypt pdf")
for pw in passwords:
try:
if pdf_reader.decrypt(pw) != 0:
decrypt_success = True
break
except Exception:
pass
if not decrypt_success:
logger.error(
"Encrypted PDF could not be decrypted, returning empty text."
)
return "", metadata, []
elif pdf_reader.is_encrypted:
logger.warning("No Password for an encrypted PDF, returning empty text.")
return "", metadata, []
# Basic PDF metadata
if pdf_reader.metadata is not None:

View File

@@ -33,8 +33,20 @@ def is_pdf_protected(file: IO[Any]) -> bool:
with preserve_position(file):
reader = PdfReader(file)
if not reader.is_encrypted:
return False
return bool(reader.is_encrypted)
# PDFs with only an owner password (permission restrictions like
# print/copy disabled) use an empty user password — any viewer can open
# them without prompting. decrypt("") returns 0 only when a real user
# password is required. See https://github.com/onyx-dot-app/onyx/issues/9754
try:
return reader.decrypt("") == 0
except Exception:
logger.exception(
"Failed to evaluate PDF encryption; treating as password protected"
)
return True
def is_docx_protected(file: IO[Any]) -> bool:

View File

@@ -26,6 +26,7 @@ class LlmProviderNames(str, Enum):
MISTRAL = "mistral"
LITELLM_PROXY = "litellm_proxy"
BIFROST = "bifrost"
OPENAI_COMPATIBLE = "openai_compatible"
def __str__(self) -> str:
"""Needed so things like:
@@ -46,6 +47,7 @@ WELL_KNOWN_PROVIDER_NAMES = [
LlmProviderNames.LM_STUDIO,
LlmProviderNames.LITELLM_PROXY,
LlmProviderNames.BIFROST,
LlmProviderNames.OPENAI_COMPATIBLE,
]
@@ -64,6 +66,7 @@ PROVIDER_DISPLAY_NAMES: dict[str, str] = {
LlmProviderNames.LM_STUDIO: "LM Studio",
LlmProviderNames.LITELLM_PROXY: "LiteLLM Proxy",
LlmProviderNames.BIFROST: "Bifrost",
LlmProviderNames.OPENAI_COMPATIBLE: "OpenAI Compatible",
"groq": "Groq",
"anyscale": "Anyscale",
"deepseek": "DeepSeek",
@@ -116,6 +119,7 @@ AGGREGATOR_PROVIDERS: set[str] = {
LlmProviderNames.AZURE,
LlmProviderNames.LITELLM_PROXY,
LlmProviderNames.BIFROST,
LlmProviderNames.OPENAI_COMPATIBLE,
}
# Model family name mappings for display name generation

View File

@@ -327,12 +327,19 @@ class LitellmLLM(LLM):
):
model_kwargs[VERTEX_LOCATION_KWARG] = "global"
# Bifrost: OpenAI-compatible proxy that expects model names in
# provider/model format (e.g. "anthropic/claude-sonnet-4-6").
# We route through LiteLLM's openai provider with the Bifrost base URL,
# and ensure /v1 is appended.
if model_provider == LlmProviderNames.BIFROST:
# Bifrost and OpenAI-compatible: OpenAI-compatible proxies that send
# model names directly to the endpoint. We route through LiteLLM's
# openai provider with the server's base URL, and ensure /v1 is appended.
if model_provider in (
LlmProviderNames.BIFROST,
LlmProviderNames.OPENAI_COMPATIBLE,
):
self._custom_llm_provider = "openai"
# LiteLLM's OpenAI client requires an api_key to be set.
# Many OpenAI-compatible servers don't need auth, so supply a
# placeholder to prevent LiteLLM from raising AuthenticationError.
if not self._api_key:
model_kwargs.setdefault("api_key", "not-needed")
if self._api_base is not None:
base = self._api_base.rstrip("/")
self._api_base = base if base.endswith("/v1") else f"{base}/v1"
@@ -449,17 +456,20 @@ class LitellmLLM(LLM):
optional_kwargs: dict[str, Any] = {}
# Model name
is_bifrost = self._model_provider == LlmProviderNames.BIFROST
is_openai_compatible_proxy = self._model_provider in (
LlmProviderNames.BIFROST,
LlmProviderNames.OPENAI_COMPATIBLE,
)
model_provider = (
f"{self.config.model_provider}/responses"
if is_openai_model # Uses litellm's completions -> responses bridge
else self.config.model_provider
)
if is_bifrost:
# Bifrost expects model names in provider/model format
# (e.g. "anthropic/claude-sonnet-4-6") sent directly to its
# OpenAI-compatible endpoint. We use custom_llm_provider="openai"
# so LiteLLM doesn't try to route based on the provider prefix.
if is_openai_compatible_proxy:
# OpenAI-compatible proxies (Bifrost, generic OpenAI-compatible
# servers) expect model names sent directly to their endpoint.
# We use custom_llm_provider="openai" so LiteLLM doesn't try
# to route based on the provider prefix.
model = self.config.deployment_name or self.config.model_name
else:
model = f"{model_provider}/{self.config.deployment_name or self.config.model_name}"
@@ -550,7 +560,10 @@ class LitellmLLM(LLM):
if structured_response_format:
optional_kwargs["response_format"] = structured_response_format
if not (is_claude_model or is_ollama or is_mistral) or is_bifrost:
if (
not (is_claude_model or is_ollama or is_mistral)
or is_openai_compatible_proxy
):
# Litellm bug: tool_choice is dropped silently if not specified here for OpenAI
# However, this param breaks Anthropic and Mistral models,
# so it must be conditionally included unless the request is

View File

@@ -15,6 +15,8 @@ LITELLM_PROXY_PROVIDER_NAME = "litellm_proxy"
BIFROST_PROVIDER_NAME = "bifrost"
OPENAI_COMPATIBLE_PROVIDER_NAME = "openai_compatible"
# Providers that use optional Bearer auth from custom_config
PROVIDERS_WITH_SPECIAL_API_KEY_HANDLING: dict[str, str] = {
LlmProviderNames.OLLAMA_CHAT: OLLAMA_API_KEY_CONFIG_KEY,

View File

@@ -19,6 +19,7 @@ from onyx.llm.well_known_providers.constants import BIFROST_PROVIDER_NAME
from onyx.llm.well_known_providers.constants import LITELLM_PROXY_PROVIDER_NAME
from onyx.llm.well_known_providers.constants import LM_STUDIO_PROVIDER_NAME
from onyx.llm.well_known_providers.constants import OLLAMA_PROVIDER_NAME
from onyx.llm.well_known_providers.constants import OPENAI_COMPATIBLE_PROVIDER_NAME
from onyx.llm.well_known_providers.constants import OPENAI_PROVIDER_NAME
from onyx.llm.well_known_providers.constants import OPENROUTER_PROVIDER_NAME
from onyx.llm.well_known_providers.constants import VERTEXAI_PROVIDER_NAME
@@ -51,6 +52,7 @@ def _get_provider_to_models_map() -> dict[str, list[str]]:
OPENROUTER_PROVIDER_NAME: [], # Dynamic - fetched from OpenRouter API
LITELLM_PROXY_PROVIDER_NAME: [], # Dynamic - fetched from LiteLLM proxy API
BIFROST_PROVIDER_NAME: [], # Dynamic - fetched from Bifrost API
OPENAI_COMPATIBLE_PROVIDER_NAME: [], # Dynamic - fetched from OpenAI-compatible API
}
@@ -336,6 +338,7 @@ def get_provider_display_name(provider_name: str) -> str:
VERTEXAI_PROVIDER_NAME: "Google Vertex AI",
OPENROUTER_PROVIDER_NAME: "OpenRouter",
LITELLM_PROXY_PROVIDER_NAME: "LiteLLM Proxy",
OPENAI_COMPATIBLE_PROVIDER_NAME: "OpenAI Compatible",
}
if provider_name in _ONYX_PROVIDER_DISPLAY_NAMES:

View File

@@ -6,6 +6,7 @@ from onyx.configs.app_configs import MCP_SERVER_ENABLED
from onyx.configs.app_configs import MCP_SERVER_HOST
from onyx.configs.app_configs import MCP_SERVER_PORT
from onyx.utils.logger import setup_logger
from onyx.utils.variable_functionality import set_is_ee_based_on_env_variable
logger = setup_logger()
@@ -16,6 +17,7 @@ def main() -> None:
logger.info("MCP server is disabled (MCP_SERVER_ENABLED=false)")
return
set_is_ee_based_on_env_variable()
logger.info(f"Starting MCP server on {MCP_SERVER_HOST}:{MCP_SERVER_PORT}")
from onyx.mcp_server.api import mcp_app

View File

@@ -74,6 +74,8 @@ from onyx.server.manage.llm.models import ModelConfigurationUpsertRequest
from onyx.server.manage.llm.models import OllamaFinalModelResponse
from onyx.server.manage.llm.models import OllamaModelDetails
from onyx.server.manage.llm.models import OllamaModelsRequest
from onyx.server.manage.llm.models import OpenAICompatibleFinalModelResponse
from onyx.server.manage.llm.models import OpenAICompatibleModelsRequest
from onyx.server.manage.llm.models import OpenRouterFinalModelResponse
from onyx.server.manage.llm.models import OpenRouterModelDetails
from onyx.server.manage.llm.models import OpenRouterModelsRequest
@@ -1575,3 +1577,95 @@ def _get_bifrost_models_response(api_base: str, api_key: str | None = None) -> d
source_name="Bifrost",
api_key=api_key,
)
@admin_router.post("/openai-compatible/available-models")
def get_openai_compatible_server_available_models(
request: OpenAICompatibleModelsRequest,
_: User = Depends(current_admin_user),
db_session: Session = Depends(get_session),
) -> list[OpenAICompatibleFinalModelResponse]:
"""Fetch available models from a generic OpenAI-compatible /v1/models endpoint."""
response_json = _get_openai_compatible_server_response(
api_base=request.api_base, api_key=request.api_key
)
models = response_json.get("data", [])
if not isinstance(models, list) or len(models) == 0:
raise OnyxError(
OnyxErrorCode.VALIDATION_ERROR,
"No models found from your OpenAI-compatible endpoint",
)
results: list[OpenAICompatibleFinalModelResponse] = []
for model in models:
try:
model_id = model.get("id", "")
model_name = model.get("name", model_id)
if not model_id:
continue
# Skip embedding models
if is_embedding_model(model_id):
continue
results.append(
OpenAICompatibleFinalModelResponse(
name=model_id,
display_name=model_name,
max_input_tokens=model.get("context_length"),
supports_image_input=infer_vision_support(model_id),
supports_reasoning=is_reasoning_model(model_id, model_name),
)
)
except Exception as e:
logger.warning(
"Failed to parse OpenAI-compatible model entry",
extra={"error": str(e), "item": str(model)[:1000]},
)
if not results:
raise OnyxError(
OnyxErrorCode.VALIDATION_ERROR,
"No compatible models found from OpenAI-compatible endpoint",
)
sorted_results = sorted(results, key=lambda m: m.name.lower())
# Sync new models to DB if provider_name is specified
if request.provider_name:
_sync_fetched_models(
db_session=db_session,
provider_name=request.provider_name,
models=[
SyncModelEntry(
name=r.name,
display_name=r.display_name,
max_input_tokens=r.max_input_tokens,
supports_image_input=r.supports_image_input,
)
for r in sorted_results
],
source_label="OpenAI Compatible",
)
return sorted_results
def _get_openai_compatible_server_response(
api_base: str, api_key: str | None = None
) -> dict:
"""Perform GET to an OpenAI-compatible /v1/models and return parsed JSON."""
cleaned_api_base = api_base.strip().rstrip("/")
# Ensure we hit /v1/models
if cleaned_api_base.endswith("/v1"):
url = f"{cleaned_api_base}/models"
else:
url = f"{cleaned_api_base}/v1/models"
return _get_openai_compatible_models_response(
url=url,
source_name="OpenAI Compatible",
api_key=api_key,
)

View File

@@ -79,7 +79,9 @@ class LLMProviderDescriptor(BaseModel):
provider=provider,
provider_display_name=get_provider_display_name(provider),
model_configurations=filter_model_configurations(
llm_provider_model.model_configurations, provider
llm_provider_model.model_configurations,
provider,
use_stored_display_name=llm_provider_model.custom_config is not None,
),
)
@@ -156,7 +158,9 @@ class LLMProviderView(LLMProvider):
personas=personas,
deployment_name=llm_provider_model.deployment_name,
model_configurations=filter_model_configurations(
llm_provider_model.model_configurations, provider
llm_provider_model.model_configurations,
provider,
use_stored_display_name=llm_provider_model.custom_config is not None,
),
)
@@ -198,13 +202,13 @@ class ModelConfigurationView(BaseModel):
cls,
model_configuration_model: "ModelConfigurationModel",
provider_name: str,
use_stored_display_name: bool = False,
) -> "ModelConfigurationView":
# For dynamic providers (OpenRouter, Bedrock, Ollama), use the display_name
# stored in DB from the source API. Skip LiteLLM parsing entirely.
# For dynamic providers (OpenRouter, Bedrock, Ollama) and custom-config
# providers, use the display_name stored in DB. Skip LiteLLM parsing.
if (
provider_name in DYNAMIC_LLM_PROVIDERS
and model_configuration_model.display_name
):
provider_name in DYNAMIC_LLM_PROVIDERS or use_stored_display_name
) and model_configuration_model.display_name:
# Extract vendor from model name for grouping (e.g., "Anthropic", "OpenAI")
vendor = extract_vendor_from_model_name(
model_configuration_model.name, provider_name
@@ -464,3 +468,18 @@ class BifrostFinalModelResponse(BaseModel):
max_input_tokens: int | None
supports_image_input: bool
supports_reasoning: bool
# OpenAI Compatible dynamic models fetch
class OpenAICompatibleModelsRequest(BaseModel):
api_base: str
api_key: str | None = None
provider_name: str | None = None # Optional: to save models to existing provider
class OpenAICompatibleFinalModelResponse(BaseModel):
name: str # Model ID (e.g. "meta-llama/Llama-3-8B-Instruct")
display_name: str # Human-readable name from API
max_input_tokens: int | None
supports_image_input: bool
supports_reasoning: bool

View File

@@ -26,6 +26,7 @@ DYNAMIC_LLM_PROVIDERS = frozenset(
LlmProviderNames.OLLAMA_CHAT,
LlmProviderNames.LM_STUDIO,
LlmProviderNames.BIFROST,
LlmProviderNames.OPENAI_COMPATIBLE,
}
)
@@ -308,12 +309,15 @@ def should_filter_as_dated_duplicate(
def filter_model_configurations(
model_configurations: list,
provider: str,
use_stored_display_name: bool = False,
) -> list:
"""Filter out obsolete and dated duplicate models from configurations.
Args:
model_configurations: List of ModelConfiguration DB models
provider: The provider name (e.g., "openai", "anthropic")
use_stored_display_name: If True, prefer the display_name stored in the
DB over LiteLLM enrichments. Set for custom-config providers.
Returns:
List of ModelConfigurationView objects with obsolete/duplicate models removed
@@ -333,7 +337,9 @@ def filter_model_configurations(
if should_filter_as_dated_duplicate(model_configuration.name, all_model_names):
continue
filtered_configs.append(
ModelConfigurationView.from_model(model_configuration, provider)
ModelConfigurationView.from_model(
model_configuration, provider, use_stored_display_name
)
)
return filtered_configs

View File

@@ -461,7 +461,7 @@ lazy-imports==1.0.1
# via onyx
legacy-cgi==2.6.4 ; python_full_version >= '3.13'
# via ddtrace
litellm==1.83.0
litellm==1.81.6
# via onyx
locket==1.0.0
# via

View File

@@ -219,7 +219,7 @@ kiwisolver==1.4.9
# via matplotlib
kubernetes==31.0.0
# via onyx
litellm==1.83.0
litellm==1.81.6
# via onyx
mako==1.2.4
# via alembic

View File

@@ -154,7 +154,7 @@ jsonschema-specifications==2025.9.1
# via jsonschema
kubernetes==31.0.0
# via onyx
litellm==1.83.0
litellm==1.81.6
# via onyx
markupsafe==3.0.3
# via jinja2

View File

@@ -189,7 +189,7 @@ kombu==5.5.4
# via celery
kubernetes==31.0.0
# via onyx
litellm==1.83.0
litellm==1.81.6
# via onyx
markupsafe==3.0.3
# via jinja2

View File

@@ -186,7 +186,7 @@ class TestDocumentIndexNew:
)
document_index.index(chunks=[pre_chunk], indexing_metadata=pre_metadata)
time.sleep(1)
time.sleep(2)
# Now index a batch with the existing doc and a new doc.
chunks = [

View File

@@ -0,0 +1,58 @@
import pytest
from onyx.configs.constants import MASK_CREDENTIAL_CHAR
from onyx.db.federated import _reject_masked_credentials
class TestRejectMaskedCredentials:
"""Verify that masked credential values are never accepted for DB writes.
mask_string() has two output formats:
- Short strings (< 14 chars): "••••••••••••" (U+2022 BULLET)
- Long strings (>= 14 chars): "abcd...wxyz" (first4 + "..." + last4)
_reject_masked_credentials must catch both.
"""
def test_rejects_fully_masked_value(self) -> None:
masked = MASK_CREDENTIAL_CHAR * 12 # "••••••••••••"
with pytest.raises(ValueError, match="masked placeholder"):
_reject_masked_credentials({"client_id": masked})
def test_rejects_long_string_masked_value(self) -> None:
"""mask_string returns 'first4...last4' for long strings — the real
format used for OAuth credentials like client_id and client_secret."""
with pytest.raises(ValueError, match="masked placeholder"):
_reject_masked_credentials({"client_id": "1234...7890"})
def test_rejects_when_any_field_is_masked(self) -> None:
"""Even if client_id is real, a masked client_secret must be caught."""
with pytest.raises(ValueError, match="client_secret"):
_reject_masked_credentials(
{
"client_id": "1234567890.1234567890",
"client_secret": MASK_CREDENTIAL_CHAR * 12,
}
)
def test_accepts_real_credentials(self) -> None:
# Should not raise
_reject_masked_credentials(
{
"client_id": "1234567890.1234567890",
"client_secret": "test_client_secret_value",
}
)
def test_accepts_empty_dict(self) -> None:
# Should not raise — empty credentials are handled elsewhere
_reject_masked_credentials({})
def test_ignores_non_string_values(self) -> None:
# Non-string values (None, bool, int) should pass through
_reject_masked_credentials(
{
"client_id": "real_value",
"redirect_uri": None,
"some_flag": True,
}
)

View File

@@ -0,0 +1,76 @@
%PDF-1.3
%<25><><EFBFBD><EFBFBD>
1 0 obj
<<
/Producer <1083d595b1>
>>
endobj
2 0 obj
<<
/Type /Pages
/Count 1
/Kids [ 4 0 R ]
>>
endobj
3 0 obj
<<
/Type /Catalog
/Pages 2 0 R
>>
endobj
4 0 obj
<<
/Type /Page
/Resources <<
/Font <<
/F1 <<
/Type /Font
/Subtype /Type1
/BaseFont /Helvetica
>>
>>
>>
/MediaBox [ 0.0 0.0 200 200 ]
/Contents 5 0 R
/Parent 2 0 R
>>
endobj
5 0 obj
<<
/Length 42
>>
stream
,N<><6~<7E>)<29><><EFBFBD><EFBFBD><EFBFBD>u<EFBFBD> <0C><><EFBFBD>Zc'<27><>>8g<38><67><EFBFBD>n<EFBFBD><6E><EFBFBD><EFBFBD><EFBFBD>9"
endstream
endobj
6 0 obj
<<
/V 2
/R 3
/Length 128
/P 4294967292
/Filter /Standard
/O <6a340a292629053da84a6d8b19a5d505953b8b3fdac3d2d389fde0e354528d44>
/U <d6f0dc91c7b9de264a8d708515468e6528bf4e5e4e758a4164004e56fffa0108>
>>
endobj
xref
0 7
0000000000 65535 f
0000000015 00000 n
0000000059 00000 n
0000000118 00000 n
0000000167 00000 n
0000000348 00000 n
0000000440 00000 n
trailer
<<
/Size 7
/Root 3 0 R
/Info 1 0 R
/ID [ <6364336635356135633239323638353039306635656133623165313637366430> <6364336635356135633239323638353039306635656133623165313637366430> ]
/Encrypt 6 0 R
>>
startxref
655
%%EOF

View File

@@ -54,6 +54,12 @@ class TestReadPdfFile:
text, _, _ = read_pdf_file(_load("encrypted.pdf"), pdf_pass="wrong")
assert text == ""
def test_owner_password_only_pdf_extracts_text(self) -> None:
"""A PDF encrypted with only an owner password (no user password)
should still yield its text content. Regression for #9754."""
text, _, _ = read_pdf_file(_load("owner_protected.pdf"))
assert "Hello World" in text
def test_empty_pdf(self) -> None:
text, _, _ = read_pdf_file(_load("empty.pdf"))
assert text.strip() == ""
@@ -117,6 +123,12 @@ class TestIsPdfProtected:
def test_protected_pdf(self) -> None:
assert is_pdf_protected(_load("encrypted.pdf")) is True
def test_owner_password_only_is_not_protected(self) -> None:
"""A PDF with only an owner password (permission restrictions) but no
user password should NOT be considered protected — any viewer can open
it without prompting for a password."""
assert is_pdf_protected(_load("owner_protected.pdf")) is False
def test_preserves_file_position(self) -> None:
pdf = _load("simple.pdf")
pdf.seek(42)
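A minimal sketch of the distinction under test, assuming a pypdf-based implementation (the helper name and structure are illustrative, not the actual code):
```python
import io

from pypdf import PdfReader


def _requires_user_password(pdf_bytes: bytes) -> bool:
    """True only when a *user* password is needed to open the PDF.

    Owner-password-only PDFs have an empty user password, so
    decrypt("") succeeds and any viewer opens them without a prompt.
    """
    reader = PdfReader(io.BytesIO(pdf_bytes))
    if not reader.is_encrypted:
        return False
    # PasswordType.NOT_DECRYPTED is 0 (falsy); user/owner matches are truthy
    return not reader.decrypt("")
```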

View File

@@ -0,0 +1,79 @@
import io
from pptx import Presentation # type: ignore[import-untyped]
from pptx.chart.data import CategoryChartData # type: ignore[import-untyped]
from pptx.enum.chart import XL_CHART_TYPE # type: ignore[import-untyped]
from pptx.util import Inches # type: ignore[import-untyped]
from onyx.file_processing.extract_file_text import pptx_to_text
def _make_pptx_with_chart() -> io.BytesIO:
"""Create an in-memory pptx with one text slide and one chart slide."""
prs = Presentation()
# Slide 1: text only
slide1 = prs.slides.add_slide(prs.slide_layouts[1])
slide1.shapes.title.text = "Introduction"
slide1.placeholders[1].text = "This is the first slide."
# Slide 2: chart
slide2 = prs.slides.add_slide(prs.slide_layouts[5]) # Blank layout
chart_data = CategoryChartData()
chart_data.categories = ["Q1", "Q2", "Q3"]
chart_data.add_series("Revenue", (100, 200, 300))
slide2.shapes.add_chart(
XL_CHART_TYPE.COLUMN_CLUSTERED,
Inches(1),
Inches(1),
Inches(6),
Inches(4),
chart_data,
)
buf = io.BytesIO()
prs.save(buf)
buf.seek(0)
return buf
def _make_pptx_without_chart() -> io.BytesIO:
"""Create an in-memory pptx with a single text-only slide."""
prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[1])
slide.shapes.title.text = "Hello World"
slide.placeholders[1].text = "Some content here."
buf = io.BytesIO()
prs.save(buf)
buf.seek(0)
return buf
class TestPptxToText:
def test_chart_is_omitted(self) -> None:
# Precondition
pptx_file = _make_pptx_with_chart()
# Under test
result = pptx_to_text(pptx_file)
# Postcondition
assert "Introduction" in result
assert "first slide" in result
assert "[chart omitted]" in result
# The actual chart data should NOT appear in the output.
assert "Revenue" not in result
assert "Q1" not in result
def test_text_only_pptx(self) -> None:
# Precondition
pptx_file = _make_pptx_without_chart()
# Under test
result = pptx_to_text(pptx_file)
# Postcondition
assert "Hello World" in result
assert "Some content" in result
assert "[chart omitted]" not in result

View File

@@ -19,6 +19,6 @@ dependencies:
version: 5.4.0
- name: code-interpreter
repository: https://onyx-dot-app.github.io/python-sandbox/
version: 0.3.1
digest: sha256:4965b6ea3674c37163832a2192cd3bc8004f2228729fca170af0b9f457e8f987
generated: "2026-03-02T15:29:39.632344-08:00"
version: 0.3.2
digest: sha256:74908ea45ace2b4be913ff762772e6d87e40bab64e92c6662aa51730eaeb9d87
generated: "2026-04-06T15:34:02.597166-07:00"

View File

@@ -5,7 +5,7 @@ home: https://www.onyx.app/
sources:
- "https://github.com/onyx-dot-app/onyx"
type: application
version: 0.4.39
version: 0.4.40
appVersion: latest
annotations:
category: Productivity
@@ -45,6 +45,6 @@ dependencies:
repository: https://charts.min.io/
condition: minio.enabled
- name: code-interpreter
version: 0.3.1
version: 0.3.2
repository: https://onyx-dot-app.github.io/python-sandbox/
condition: codeInterpreter.enabled

View File

@@ -67,6 +67,9 @@ spec:
- "/bin/sh"
- "-c"
- |
{{- if .Values.api.runUpdateCaCertificates }}
update-ca-certificates &&
{{- end }}
alembic upgrade head &&
echo "Starting Onyx Api Server" &&
uvicorn onyx.main:app --host {{ .Values.global.host }} --port {{ .Values.api.containerPorts.server }}

View File

@@ -504,6 +504,18 @@ api:
tolerations: []
affinity: {}
# Run update-ca-certificates before starting the server.
# Useful when mounting custom CA certificates via volumes/volumeMounts.
# NOTE: Requires the container to run as root (runAsUser: 0).
# CA certificate files must be mounted under /usr/local/share/ca-certificates/
# with a .crt extension (e.g. /usr/local/share/ca-certificates/my-ca.crt).
# NOTE: Python HTTP clients (requests, httpx) use certifi's bundle by default
# and will not pick up the system CA store automatically. Set the following
# environment variables via configMap values (loaded through envFrom) to make them use the updated system bundle:
# REQUESTS_CA_BUNDLE: /etc/ssl/certs/ca-certificates.crt
# SSL_CERT_FILE: /etc/ssl/certs/ca-certificates.crt
runUpdateCaCertificates: false
######################################################################
#

View File

@@ -30,7 +30,10 @@ target "backend" {
context = "backend"
dockerfile = "Dockerfile"
cache-from = ["type=registry,ref=${BACKEND_REPOSITORY}:latest"]
cache-from = [
"type=registry,ref=${BACKEND_REPOSITORY}:latest",
"type=registry,ref=${BACKEND_REPOSITORY}:edge",
]
cache-to = ["type=inline"]
tags = ["${BACKEND_REPOSITORY}:${TAG}"]
@@ -40,7 +43,10 @@ target "web" {
context = "web"
dockerfile = "Dockerfile"
cache-from = ["type=registry,ref=${WEB_SERVER_REPOSITORY}:latest"]
cache-from = [
"type=registry,ref=${WEB_SERVER_REPOSITORY}:latest",
"type=registry,ref=${WEB_SERVER_REPOSITORY}:edge",
]
cache-to = ["type=inline"]
tags = ["${WEB_SERVER_REPOSITORY}:${TAG}"]
@@ -51,7 +57,10 @@ target "model-server" {
dockerfile = "Dockerfile.model_server"
cache-from = ["type=registry,ref=${MODEL_SERVER_REPOSITORY}:latest"]
cache-from = [
"type=registry,ref=${MODEL_SERVER_REPOSITORY}:latest",
"type=registry,ref=${MODEL_SERVER_REPOSITORY}:edge",
]
cache-to = ["type=inline"]
tags = ["${MODEL_SERVER_REPOSITORY}:${TAG}"]
@@ -73,7 +82,10 @@ target "cli" {
context = "cli"
dockerfile = "Dockerfile"
cache-from = ["type=registry,ref=${CLI_REPOSITORY}:latest"]
cache-from = [
"type=registry,ref=${CLI_REPOSITORY}:latest",
"type=registry,ref=${CLI_REPOSITORY}:edge",
]
cache-to = ["type=inline"]
tags = ["${CLI_REPOSITORY}:${TAG}"]

View File

@@ -6,11 +6,11 @@ All Prometheus metrics live in the `backend/onyx/server/metrics/` package. Follo
### 1. Choose the right file (or create a new one)
| File | Purpose |
|------|---------|
| `metrics/slow_requests.py` | Slow request counter + callback |
| `metrics/postgres_connection_pool.py` | SQLAlchemy connection pool metrics |
| `metrics/prometheus_setup.py` | FastAPI instrumentator config (orchestrator) |
| File | Purpose |
| ------------------------------------- | -------------------------------------------- |
| `metrics/slow_requests.py` | Slow request counter + callback |
| `metrics/postgres_connection_pool.py` | SQLAlchemy connection pool metrics |
| `metrics/prometheus_setup.py` | FastAPI instrumentator config (orchestrator) |
If your metric is a standalone concern (e.g. cache hit rates, queue depths), create a new file under `metrics/` and keep one metric concept per file.
@@ -30,6 +30,7 @@ _my_counter = Counter(
```
**Naming conventions:**
- Prefix all metric names with `onyx_`
- Counters: `_total` suffix (e.g. `onyx_api_slow_requests_total`)
- Histograms: `_seconds` or `_bytes` suffix for durations/sizes
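For instance, a sketch that follows these conventions (the metric names here are hypothetical):
```python
from prometheus_client import Counter, Histogram

cache_hits = Counter(
    "onyx_cache_hits_total",  # onyx_ prefix, _total suffix for counters
    "Cache hits by cache name",
    ["cache"],  # bounded label values only
)

render_duration = Histogram(
    "onyx_widget_render_seconds",  # _seconds suffix for durations
    "Widget render duration",
)
```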
@@ -107,26 +108,26 @@ These metrics are exposed at `GET /metrics` on the API server.
### Built-in (via `prometheus-fastapi-instrumentator`)
| Metric | Type | Labels | Description |
|--------|------|--------|-------------|
| `http_requests_total` | Counter | `method`, `status`, `handler` | Total request count |
| `http_request_duration_highr_seconds` | Histogram | _(none)_ | High-resolution latency (many buckets, no labels) |
| `http_request_duration_seconds` | Histogram | `method`, `handler` | Latency by handler (custom buckets for P95/P99) |
| `http_request_size_bytes` | Summary | `handler` | Incoming request content length |
| `http_response_size_bytes` | Summary | `handler` | Outgoing response content length |
| `http_requests_inprogress` | Gauge | `method`, `handler` | Currently in-flight requests |
| Metric | Type | Labels | Description |
| ------------------------------------- | --------- | ----------------------------- | ------------------------------------------------- |
| `http_requests_total` | Counter | `method`, `status`, `handler` | Total request count |
| `http_request_duration_highr_seconds` | Histogram | _(none)_ | High-resolution latency (many buckets, no labels) |
| `http_request_duration_seconds` | Histogram | `method`, `handler` | Latency by handler (custom buckets for P95/P99) |
| `http_request_size_bytes` | Summary | `handler` | Incoming request content length |
| `http_response_size_bytes` | Summary | `handler` | Outgoing response content length |
| `http_requests_inprogress` | Gauge | `method`, `handler` | Currently in-flight requests |
### Custom (via `onyx.server.metrics`)
| Metric | Type | Labels | Description |
|--------|------|--------|-------------|
| Metric | Type | Labels | Description |
| ------------------------------ | ------- | ----------------------------- | ---------------------------------------------------------------- |
| `onyx_api_slow_requests_total` | Counter | `method`, `handler`, `status` | Requests exceeding `SLOW_REQUEST_THRESHOLD_SECONDS` (default 1s) |
### Configuration
| Env Var | Default | Description |
|---------|---------|-------------|
| `SLOW_REQUEST_THRESHOLD_SECONDS` | `1.0` | Duration threshold for slow request counting |
| Env Var | Default | Description |
| -------------------------------- | ------- | -------------------------------------------- |
| `SLOW_REQUEST_THRESHOLD_SECONDS` | `1.0` | Duration threshold for slow request counting |
### Instrumentator Settings
@@ -141,44 +142,188 @@ These metrics provide visibility into SQLAlchemy connection pool state across al
### Pool State (via custom Prometheus collector — snapshot on each scrape)
| Metric | Type | Labels | Description |
|--------|------|--------|-------------|
| `onyx_db_pool_checked_out` | Gauge | `engine` | Currently checked-out connections |
| `onyx_db_pool_checked_in` | Gauge | `engine` | Idle connections available in the pool |
| `onyx_db_pool_overflow` | Gauge | `engine` | Current overflow connections beyond `pool_size` |
| `onyx_db_pool_size` | Gauge | `engine` | Configured pool size (constant) |
| Metric | Type | Labels | Description |
| -------------------------- | ----- | -------- | ----------------------------------------------- |
| `onyx_db_pool_checked_out` | Gauge | `engine` | Currently checked-out connections |
| `onyx_db_pool_checked_in` | Gauge | `engine` | Idle connections available in the pool |
| `onyx_db_pool_overflow` | Gauge | `engine` | Current overflow connections beyond `pool_size` |
| `onyx_db_pool_size` | Gauge | `engine` | Configured pool size (constant) |
### Pool Lifecycle (via SQLAlchemy pool event listeners)
| Metric | Type | Labels | Description |
|--------|------|--------|-------------|
| `onyx_db_pool_checkout_total` | Counter | `engine` | Total connection checkouts from the pool |
| `onyx_db_pool_checkin_total` | Counter | `engine` | Total connection checkins to the pool |
| `onyx_db_pool_connections_created_total` | Counter | `engine` | Total new database connections created |
| `onyx_db_pool_invalidations_total` | Counter | `engine` | Total connection invalidations |
| `onyx_db_pool_checkout_timeout_total` | Counter | `engine` | Total connection checkout timeouts |
| Metric | Type | Labels | Description |
| ---------------------------------------- | ------- | -------- | ---------------------------------------- |
| `onyx_db_pool_checkout_total` | Counter | `engine` | Total connection checkouts from the pool |
| `onyx_db_pool_checkin_total` | Counter | `engine` | Total connection checkins to the pool |
| `onyx_db_pool_connections_created_total` | Counter | `engine` | Total new database connections created |
| `onyx_db_pool_invalidations_total` | Counter | `engine` | Total connection invalidations |
| `onyx_db_pool_checkout_timeout_total` | Counter | `engine` | Total connection checkout timeouts |
### Per-Endpoint Attribution (via pool events + endpoint context middleware)
| Metric | Type | Labels | Description |
|--------|------|--------|-------------|
| `onyx_db_connections_held_by_endpoint` | Gauge | `handler`, `engine` | DB connections currently held, by endpoint |
| `onyx_db_connection_hold_seconds` | Histogram | `handler`, `engine` | Duration a DB connection is held by an endpoint |
| Metric | Type | Labels | Description |
| -------------------------------------- | --------- | ------------------- | ----------------------------------------------- |
| `onyx_db_connections_held_by_endpoint` | Gauge | `handler`, `engine` | DB connections currently held, by endpoint |
| `onyx_db_connection_hold_seconds` | Histogram | `handler`, `engine` | Duration a DB connection is held by an endpoint |
Engine label values: `sync` (main read-write), `async` (async sessions), `readonly` (read-only user).
Connections from background tasks (Celery) or boot-time warmup appear as `handler="unknown"`.
## Celery Worker Metrics
Celery workers expose metrics via a standalone Prometheus HTTP server (separate from the API server's `/metrics` endpoint). Each worker type runs its own server on a dedicated port.
### Metrics Server (`onyx.server.metrics.metrics_server`)
| Env Var | Default | Description |
| ---------------------------- | ------------------- | ----------------------------------------------------- |
| `PROMETHEUS_METRICS_PORT` | _(per worker type)_ | Override the default port for this worker |
| `PROMETHEUS_METRICS_ENABLED` | `true` | Set to `false` to disable the metrics server entirely |
Default ports:
| Worker | Port |
| --------------- | ---- |
| `docfetching` | 9092 |
| `docprocessing` | 9093 |
| `monitoring` | 9096 |
Workers that have neither a default port nor a `PROMETHEUS_METRICS_PORT` env var skip starting the metrics server.
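A sketch of that resolution order (the function name is hypothetical; see `metrics_server.py` for the actual logic):
```python
import os

_DEFAULT_PORTS = {"docfetching": 9092, "docprocessing": 9093, "monitoring": 9096}


def resolve_metrics_port(worker_type: str) -> int | None:
    """Return the port to bind, or None to skip starting the server."""
    if os.environ.get("PROMETHEUS_METRICS_ENABLED", "true").lower() == "false":
        return None
    env_port = os.environ.get("PROMETHEUS_METRICS_PORT")
    if env_port:
        return int(env_port)
    return _DEFAULT_PORTS.get(worker_type)
```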
### Generic Task Lifecycle Metrics (`onyx.server.metrics.celery_task_metrics`)
Push-based metrics that fire on Celery signals for all tasks on the worker.
| Metric | Type | Labels | Description |
| ----------------------------------- | --------- | ------------------------------- | ----------------------------------------------------------------------------- |
| `onyx_celery_task_started_total` | Counter | `task_name`, `queue` | Total tasks started |
| `onyx_celery_task_completed_total` | Counter | `task_name`, `queue`, `outcome` | Total tasks completed (`outcome`: `success` or `failure`) |
| `onyx_celery_task_duration_seconds` | Histogram | `task_name`, `queue` | Task execution duration. Buckets: 1, 5, 15, 30, 60, 120, 300, 600, 1800, 3600 |
| `onyx_celery_tasks_active` | Gauge | `task_name`, `queue` | Currently executing tasks |
| `onyx_celery_task_retried_total` | Counter | `task_name`, `queue` | Total task retries |
| `onyx_celery_task_revoked_total` | Counter | `task_name` | Total tasks revoked (cancelled) |
| `onyx_celery_task_rejected_total` | Counter | `task_name` | Total tasks rejected by worker |
Stale start-time entries (tasks killed via SIGTERM/OOM where `task_postrun` never fires) are evicted after 1 hour.
### Per-Connector Indexing Metrics (`onyx.server.metrics.indexing_task_metrics`)
Enriches docfetching and docprocessing tasks with connector-level labels. Silently no-ops for all other tasks.
| Metric | Type | Labels | Description |
| ------------------------------------- | --------- | ----------------------------------------------------------- | ---------------------------------------- |
| `onyx_indexing_task_started_total` | Counter | `task_name`, `source`, `tenant_id`, `cc_pair_id` | Indexing tasks started per connector |
| `onyx_indexing_task_completed_total` | Counter | `task_name`, `source`, `tenant_id`, `cc_pair_id`, `outcome` | Indexing tasks completed per connector |
| `onyx_indexing_task_duration_seconds` | Histogram | `task_name`, `source`, `tenant_id` | Indexing task duration by connector type |
`connector_name` is intentionally excluded from these push-based counters to avoid unbounded cardinality (it's a free-form user string). The pull-based collectors on the monitoring worker include it since they have bounded cardinality (one series per connector).
### Pull-Based Collectors (`onyx.server.metrics.indexing_pipeline`)
Registered only in the **Monitoring** worker. Collectors query Redis/Postgres at scrape time with a 30-second TTL cache.
| Metric | Type | Labels | Description |
| ------------------------------------ | ----- | ------- | ----------------------------------- |
| `onyx_queue_depth` | Gauge | `queue` | Celery queue length |
| `onyx_queue_unacked` | Gauge | `queue` | Unacknowledged messages per queue |
| `onyx_queue_oldest_task_age_seconds` | Gauge | `queue` | Age of the oldest task in the queue |
Plus additional connector health, index attempt, and worker heartbeat metrics — see `indexing_pipeline.py` for the full list.
### Adding Metrics to a Worker
Currently only the docfetching and docprocessing workers have push-based task metrics wired up. To add metrics to another worker (e.g. heavy, light, primary):
**1. Import and call the generic handlers from the worker's signal handlers:**
```python
from onyx.server.metrics.celery_task_metrics import (
on_celery_task_prerun,
on_celery_task_postrun,
on_celery_task_retry,
on_celery_task_revoked,
on_celery_task_rejected,
)
@signals.task_prerun.connect
def on_task_prerun(sender, task_id, task, args, kwargs, **kwds):
app_base.on_task_prerun(sender, task_id, task, args, kwargs, **kwds)
on_celery_task_prerun(task_id, task)
```
Do the same for `task_postrun`, `task_retry`, `task_revoked`, and `task_rejected` — see `apps/docfetching.py` for the complete example.
**2. Start the metrics server on `worker_ready`:**
```python
from onyx.server.metrics.metrics_server import start_metrics_server
@worker_ready.connect
def on_worker_ready(sender, **kwargs):
start_metrics_server("your_worker_type")
app_base.on_worker_ready(sender, **kwargs)
```
Add a default port for your worker type in `metrics_server.py`'s `_DEFAULT_PORTS` dict, or set `PROMETHEUS_METRICS_PORT` in the environment.
**3. (Optional) Add domain-specific enrichment:**
If your tasks need richer labels beyond `task_name`/`queue`, create a new module in `server/metrics/` following `indexing_task_metrics.py`:
- Define Counters/Histograms with your domain labels
- Write `on_<domain>_task_prerun` / `on_<domain>_task_postrun` handlers that filter by task name and no-op for others
- Call them from the worker's signal handlers alongside the generic ones
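A sketch of such an enrichment module (task names, metric names, and label extraction here are hypothetical; follow `indexing_task_metrics.py` for the real pattern):
```python
from prometheus_client import Counter

_MY_DOMAIN_TASKS = {"my_domain_task"}  # hypothetical task-name filter

my_domain_started = Counter(
    "onyx_mydomain_task_started_total",
    "My-domain tasks started",
    ["task_name", "queue"],
)


def on_mydomain_task_prerun(task_id, task) -> None:
    if task.name not in _MY_DOMAIN_TASKS:
        return  # silently no-op for unrelated tasks
    delivery_info = getattr(task.request, "delivery_info", None) or {}
    queue = delivery_info.get("routing_key", "unknown")
    my_domain_started.labels(task_name=task.name, queue=queue).inc()
```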
**Cardinality warning:** Never use user-defined free-form strings as metric labels — they create unbounded cardinality. Use IDs or enum values. If you need free-form labels, use pull-based collectors (monitoring worker) where cardinality is naturally bounded.
### Current Worker Integration Status
| Worker | Generic Task Metrics | Domain Metrics | Metrics Server |
| -------------------- | -------------------- | -------------- | ------------------------------------ |
| Docfetching | ✓ | ✓ (indexing) | ✓ (port 9092) |
| Docprocessing | ✓ | ✓ (indexing) | ✓ (port 9093) |
| Monitoring | — | — | ✓ (port 9096, pull-based collectors) |
| Primary | — | — | — |
| Light | — | — | — |
| Heavy | — | — | — |
| User File Processing | — | — | — |
| KG Processing | — | — | — |
### Example PromQL Queries (Celery)
```promql
# Task completion rate by worker queue
sum by (queue) (rate(onyx_celery_task_completed_total[5m]))
# P95 task duration for pruning tasks
histogram_quantile(0.95,
sum by (le) (rate(onyx_celery_task_duration_seconds_bucket{task_name=~".*pruning.*"}[5m])))
# Task failure rate
sum by (task_name) (rate(onyx_celery_task_completed_total{outcome="failure"}[5m]))
/ sum by (task_name) (rate(onyx_celery_task_completed_total[5m]))
# Active tasks per queue
sum by (queue) (onyx_celery_tasks_active)
# Indexing throughput by source type
sum by (source) (rate(onyx_indexing_task_completed_total{outcome="success"}[5m]))
# Queue depth — are tasks backing up?
onyx_queue_depth > 100
```
## OpenSearch Search Metrics
These metrics track OpenSearch search latency and throughput. Collected via `onyx.server.metrics.opensearch_search`.
| Metric | Type | Labels | Description |
|--------|------|--------|-------------|
| Metric | Type | Labels | Description |
| ------------------------------------------------ | --------- | ------------- | --------------------------------------------------------------------------- |
| `onyx_opensearch_search_client_duration_seconds` | Histogram | `search_type` | Client-side end-to-end latency (network + serialization + server execution) |
| `onyx_opensearch_search_server_duration_seconds` | Histogram | `search_type` | Server-side execution time from OpenSearch `took` field |
| `onyx_opensearch_search_total` | Counter | `search_type` | Total search requests sent to OpenSearch |
| `onyx_opensearch_searches_in_progress` | Gauge | `search_type` | Currently in-flight OpenSearch searches |
| `onyx_opensearch_search_server_duration_seconds` | Histogram | `search_type` | Server-side execution time from OpenSearch `took` field |
| `onyx_opensearch_search_total` | Counter | `search_type` | Total search requests sent to OpenSearch |
| `onyx_opensearch_searches_in_progress` | Gauge | `search_type` | Currently in-flight OpenSearch searches |
Search type label values: see `OpenSearchSearchType`.

View File

@@ -12,7 +12,7 @@ dependencies = [
"cohere==5.6.1",
"fastapi==0.133.1",
"google-genai==1.52.0",
"litellm==1.83.0",
"litellm==1.81.6",
"openai==2.14.0",
"pydantic==2.11.7",
"prometheus_client>=0.21.1",
@@ -70,6 +70,10 @@ backend = [
"lazy_imports==1.0.1",
"lxml==5.3.0",
"Mako==1.2.4",
# NOTE: Do not update without understanding the patching behavior in
# get_markitdown_converter in
# backend/onyx/file_processing/extract_file_text.py and what impacts
# updating might have on this behavior.
"markitdown[pdf, docx, pptx, xlsx, xls]==0.1.2",
"mcp[cli]==1.26.0",
"msal==1.34.0",

uv.lock generated
View File

@@ -3134,7 +3134,7 @@ wheels = [
[[package]]
name = "litellm"
version = "1.83.0"
version = "1.81.6"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "aiohttp" },
@@ -3150,9 +3150,9 @@ dependencies = [
{ name = "tiktoken" },
{ name = "tokenizers" },
]
sdist = { url = "https://files.pythonhosted.org/packages/22/92/6ce9737554994ca8e536e5f4f6a87cc7c4774b656c9eb9add071caf7d54b/litellm-1.83.0.tar.gz", hash = "sha256:860bebc76c4bb27b4cf90b4a77acd66dba25aced37e3db98750de8a1766bfb7a", size = 17333062, upload-time = "2026-03-31T05:08:25.331Z" }
sdist = { url = "https://files.pythonhosted.org/packages/2e/f3/194a2dca6cb3eddb89f4bc2920cf5e27542256af907c23be13c61fe7e021/litellm-1.81.6.tar.gz", hash = "sha256:f02b503dfb7d66d1c939f82e4db21aeec1d6e2ed1fe3f5cd02aaec3f792bc4ae", size = 13878107, upload-time = "2026-02-01T04:02:27.36Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/19/2c/a670cc050fcd6f45c6199eb99e259c73aea92edba8d5c2fc1b3686d36217/litellm-1.83.0-py3-none-any.whl", hash = "sha256:88c536d339248f3987571493015784671ba3f193a328e1ea6780dbebaa2094a8", size = 15610306, upload-time = "2026-03-31T05:08:21.987Z" },
{ url = "https://files.pythonhosted.org/packages/e6/05/3516cc7386b220d388aa0bd833308c677e94eceb82b2756dd95e06f6a13f/litellm-1.81.6-py3-none-any.whl", hash = "sha256:573206ba194d49a1691370ba33f781671609ac77c35347f8a0411d852cf6341a", size = 12224343, upload-time = "2026-02-01T04:02:23.704Z" },
]
[[package]]
@@ -4443,7 +4443,7 @@ requires-dist = [
{ name = "langchain-core", marker = "extra == 'backend'", specifier = "==1.2.22" },
{ name = "langfuse", marker = "extra == 'backend'", specifier = "==3.10.0" },
{ name = "lazy-imports", marker = "extra == 'backend'", specifier = "==1.0.1" },
{ name = "litellm", specifier = "==1.83.0" },
{ name = "litellm", specifier = "==1.81.6" },
{ name = "lxml", marker = "extra == 'backend'", specifier = "==5.3.0" },
{ name = "mako", marker = "extra == 'backend'", specifier = "==1.2.4" },
{ name = "manygo", marker = "extra == 'dev'", specifier = "==0.2.0" },

View File

@@ -1,18 +1,22 @@
import { Card } from "@opal/components/cards/card/components";
import { Content } from "@opal/layouts";
import { Content, SizePreset } from "@opal/layouts";
import { SvgEmpty } from "@opal/icons";
import type { IconFunctionComponent, PaddingVariants } from "@opal/types";
import type {
IconFunctionComponent,
PaddingVariants,
RichStr,
} from "@opal/types";
// ---------------------------------------------------------------------------
// Types
// ---------------------------------------------------------------------------
type EmptyMessageCardProps = {
type EmptyMessageCardBaseProps = {
/** Icon displayed alongside the title. */
icon?: IconFunctionComponent;
/** Primary message text. */
title: string;
title: string | RichStr;
/** Padding preset for the card. @default "md" */
padding?: PaddingVariants;
@@ -21,16 +25,30 @@ type EmptyMessageCardProps = {
ref?: React.Ref<HTMLDivElement>;
};
type EmptyMessageCardProps =
| (EmptyMessageCardBaseProps & {
/** @default "secondary" */
sizePreset?: "secondary";
})
| (EmptyMessageCardBaseProps & {
sizePreset: "main-ui";
/** Description text. Only supported when `sizePreset` is `"main-ui"`. */
description?: string | RichStr;
});
// ---------------------------------------------------------------------------
// EmptyMessageCard
// ---------------------------------------------------------------------------
function EmptyMessageCard({
icon = SvgEmpty,
title,
padding = "md",
ref,
}: EmptyMessageCardProps) {
function EmptyMessageCard(props: EmptyMessageCardProps) {
const {
sizePreset = "secondary",
icon = SvgEmpty,
title,
padding = "md",
ref,
} = props;
return (
<Card
ref={ref}
@@ -39,13 +57,23 @@ function EmptyMessageCard({
padding={padding}
rounding="md"
>
<Content
icon={icon}
title={title}
sizePreset="secondary"
variant="body"
prominence="muted"
/>
{sizePreset === "secondary" ? (
<Content
icon={icon}
title={title}
sizePreset="secondary"
variant="body"
prominence="muted"
/>
) : (
<Content
icon={icon}
title={title}
description={"description" in props ? props.description : undefined}
sizePreset={sizePreset}
variant="section"
/>
)}
</Card>
);
}

View File

@@ -1,48 +1,34 @@
"use client";
import "@opal/core/animations/styles.css";
import React, { createContext, useContext, useState, useCallback } from "react";
import React from "react";
import { cn } from "@opal/utils";
import type { WithoutStyles, ExtremaSizeVariants } from "@opal/types";
import { widthVariants } from "@opal/shared";
// ---------------------------------------------------------------------------
// Context-per-group registry
// ---------------------------------------------------------------------------
/**
* Lazily-created map of group names to React contexts.
*
* Each group gets its own `React.Context<boolean | null>` so that a
* `Hoverable.Item` only re-renders when its *own* group's hover state
* changes — not when any unrelated group changes.
*
* The default value is `null` (no provider found), which lets
* `Hoverable.Item` distinguish "no Root ancestor" from "Root says
* not hovered" and throw when `group` was explicitly specified.
*/
const contextMap = new Map<string, React.Context<boolean | null>>();
function getOrCreateContext(group: string): React.Context<boolean | null> {
let ctx = contextMap.get(group);
if (!ctx) {
ctx = createContext<boolean | null>(null);
ctx.displayName = `HoverableContext(${group})`;
contextMap.set(group, ctx);
}
return ctx;
}
// ---------------------------------------------------------------------------
// Types
// ---------------------------------------------------------------------------
type HoverableInteraction = "rest" | "hover";
interface HoverableRootProps
extends WithoutStyles<React.HTMLAttributes<HTMLDivElement>> {
children: React.ReactNode;
group: string;
/** Width preset. @default "auto" */
widthVariant?: ExtremaSizeVariants;
/**
* JS-controllable interaction state override.
*
* - `"rest"` (default): items are shown/hidden by CSS `:hover`.
* - `"hover"`: forces items visible regardless of hover state. Useful when
* a hoverable action opens a modal — set `interaction="hover"` while the
* modal is open so the user can see which element they're interacting with.
*
* @default "rest"
*/
interaction?: HoverableInteraction;
/** Ref forwarded to the root `<div>`. */
ref?: React.Ref<HTMLDivElement>;
}
@@ -65,12 +51,10 @@ interface HoverableItemProps
/**
* Hover-tracking container for a named group.
*
* Wraps children in a `<div>` that tracks mouse-enter / mouse-leave and
* provides the hover state via a per-group React context.
*
* Nesting works because each `Hoverable.Root` creates a **new** context
* provider that shadows the parent — so an inner `Hoverable.Item group="b"`
* reads from the inner provider, not the outer `group="a"` provider.
* Uses a `data-hover-group` attribute and CSS `:hover` to control
* descendant `Hoverable.Item` visibility. No React state or context
* is involved; the browser natively removes `:hover` when modals/portals
* steal pointer events, preventing stale hover state.
*
* @example
* ```tsx
@@ -87,70 +71,20 @@ function HoverableRoot({
group,
children,
widthVariant = "full",
interaction = "rest",
ref,
onMouseEnter: consumerMouseEnter,
onMouseLeave: consumerMouseLeave,
onFocusCapture: consumerFocusCapture,
onBlurCapture: consumerBlurCapture,
...props
}: HoverableRootProps) {
const [hovered, setHovered] = useState(false);
const [focused, setFocused] = useState(false);
const onMouseEnter = useCallback(
(e: React.MouseEvent<HTMLDivElement>) => {
setHovered(true);
consumerMouseEnter?.(e);
},
[consumerMouseEnter]
);
const onMouseLeave = useCallback(
(e: React.MouseEvent<HTMLDivElement>) => {
setHovered(false);
consumerMouseLeave?.(e);
},
[consumerMouseLeave]
);
const onFocusCapture = useCallback(
(e: React.FocusEvent<HTMLDivElement>) => {
setFocused(true);
consumerFocusCapture?.(e);
},
[consumerFocusCapture]
);
const onBlurCapture = useCallback(
(e: React.FocusEvent<HTMLDivElement>) => {
if (
!(e.relatedTarget instanceof Node) ||
!e.currentTarget.contains(e.relatedTarget)
) {
setFocused(false);
}
consumerBlurCapture?.(e);
},
[consumerBlurCapture]
);
const active = hovered || focused;
const GroupContext = getOrCreateContext(group);
return (
<GroupContext.Provider value={active}>
<div
{...props}
ref={ref}
className={cn(widthVariants[widthVariant])}
onMouseEnter={onMouseEnter}
onMouseLeave={onMouseLeave}
onFocusCapture={onFocusCapture}
onBlurCapture={onBlurCapture}
>
{children}
</div>
</GroupContext.Provider>
<div
{...props}
ref={ref}
className={cn(widthVariants[widthVariant])}
data-hover-group={group}
data-interaction={interaction !== "rest" ? interaction : undefined}
>
{children}
</div>
);
}
@@ -162,13 +96,10 @@ function HoverableRoot({
* An element whose visibility is controlled by hover state.
*
* **Local mode** (`group` omitted): the item handles hover on its own
* element via CSS `:hover`. This is the core abstraction.
* element via CSS `:hover`.
*
* **Group mode** (`group` provided): visibility is driven by a matching
* `Hoverable.Root` ancestor's hover state via React context. If no
* matching Root is found, an error is thrown.
*
* Uses data-attributes for variant styling (see `styles.css`).
* **Group mode** (`group` provided): visibility is driven by CSS `:hover`
* on the nearest `Hoverable.Root` ancestor via `[data-hover-group]:hover`.
*
* @example
* ```tsx
@@ -184,8 +115,6 @@ function HoverableRoot({
* </Hoverable.Item>
* </Hoverable.Root>
* ```
*
* @throws If `group` is specified but no matching `Hoverable.Root` ancestor exists.
*/
function HoverableItem({
group,
@@ -194,17 +123,6 @@ function HoverableItem({
ref,
...props
}: HoverableItemProps) {
const contextValue = useContext(
group ? getOrCreateContext(group) : NOOP_CONTEXT
);
if (group && contextValue === null) {
throw new Error(
`Hoverable.Item group="${group}" has no matching Hoverable.Root ancestor. ` +
`Either wrap it in <Hoverable.Root group="${group}"> or remove the group prop for local hover.`
);
}
const isLocal = group === undefined;
return (
@@ -213,9 +131,6 @@ function HoverableItem({
ref={ref}
className={cn("hoverable-item")}
data-hoverable-variant={variant}
data-hoverable-active={
isLocal ? undefined : contextValue ? "true" : undefined
}
data-hoverable-local={isLocal ? "true" : undefined}
>
{children}
@@ -223,9 +138,6 @@ function HoverableItem({
);
}
/** Stable context used when no group is specified (local mode). */
const NOOP_CONTEXT = createContext<boolean | null>(null);
// ---------------------------------------------------------------------------
// Compound export
// ---------------------------------------------------------------------------
@@ -233,18 +145,16 @@ const NOOP_CONTEXT = createContext<boolean | null>(null);
/**
* Hoverable compound component for hover-to-reveal patterns.
*
* Provides two sub-components:
* Entirely CSS-driven — no React state or context. The browser's native
* `:hover` pseudo-class handles all state, which means hover is
* automatically cleared when modals/portals steal pointer events.
*
* - `Hoverable.Root` — A container that tracks hover state for a named group
* and provides it via React context.
* - `Hoverable.Root` — Container with `data-hover-group`. CSS `:hover`
* on this element reveals descendant `Hoverable.Item` elements.
*
* - `Hoverable.Item` — The core abstraction. On its own (no `group`), it
* applies local CSS `:hover` for the variant effect. When `group` is
* specified, it reads hover state from the nearest matching
* `Hoverable.Root` — and throws if no matching Root is found.
*
* Supports nesting: a child `Hoverable.Root` shadows the parent's context,
* so each group's items only respond to their own root's hover.
* - `Hoverable.Item` — Hidden by default. In group mode, revealed when
* the ancestor Root is hovered. In local mode (no `group`), revealed
* when the item itself is hovered.
*
* @example
* ```tsx
@@ -276,4 +186,5 @@ export {
type HoverableRootProps,
type HoverableItemProps,
type HoverableItemVariant,
type HoverableInteraction,
};

View File

@@ -7,8 +7,20 @@
opacity: 0;
}
/* Group mode — Root controls visibility via React context */
.hoverable-item[data-hoverable-variant="opacity-on-hover"][data-hoverable-active="true"] {
/* Group mode — Root :hover controls descendant item visibility via CSS.
Exclude local-mode items so they aren't revealed by an ancestor root. */
[data-hover-group]:hover
.hoverable-item[data-hoverable-variant="opacity-on-hover"]:not(
[data-hoverable-local]
) {
opacity: 1;
}
/* Interaction override — force items visible via JS */
[data-hover-group][data-interaction="hover"]
.hoverable-item[data-hoverable-variant="opacity-on-hover"]:not(
[data-hoverable-local]
) {
opacity: 1;
}
@@ -17,7 +29,16 @@
opacity: 1;
}
/* Focus — item (or a focusable descendant) receives keyboard focus */
/* Group focus — any focusable descendant of the Root receives keyboard focus,
revealing all group items (same behavior as hover). */
[data-hover-group]:focus-within
.hoverable-item[data-hoverable-variant="opacity-on-hover"]:not(
[data-hoverable-local]
) {
opacity: 1;
}
/* Local focus — item (or a focusable descendant) receives keyboard focus */
.hoverable-item[data-hoverable-variant="opacity-on-hover"]:has(:focus-visible) {
opacity: 1;
}

View File

@@ -1,5 +1,5 @@
import type { Meta, StoryObj } from "@storybook/react";
import { CardHeaderLayout } from "@opal/layouts";
import { Card } from "@opal/layouts";
import { Button } from "@opal/components";
import {
SvgArrowExchange,
@@ -18,14 +18,14 @@ const withTooltipProvider: Decorator = (Story) => (
);
const meta = {
title: "Layouts/CardHeaderLayout",
component: CardHeaderLayout,
title: "Layouts/Card.Header",
component: Card.Header,
tags: ["autodocs"],
decorators: [withTooltipProvider],
parameters: {
layout: "centered",
},
} satisfies Meta<typeof CardHeaderLayout>;
} satisfies Meta<typeof Card.Header>;
export default meta;
@@ -38,7 +38,7 @@ type Story = StoryObj<typeof meta>;
export const Default: Story = {
render: () => (
<div className="w-[28rem] border rounded-16">
<CardHeaderLayout
<Card.Header
sizePreset="main-ui"
variant="section"
icon={SvgGlobe}
@@ -57,7 +57,7 @@ export const Default: Story = {
export const WithBothSlots: Story = {
render: () => (
<div className="w-[28rem] border rounded-16">
<CardHeaderLayout
<Card.Header
sizePreset="main-ui"
variant="section"
icon={SvgGlobe}
@@ -92,7 +92,7 @@ export const WithBothSlots: Story = {
export const RightChildrenOnly: Story = {
render: () => (
<div className="w-[28rem] border rounded-16">
<CardHeaderLayout
<Card.Header
sizePreset="main-ui"
variant="section"
icon={SvgGlobe}
@@ -111,7 +111,7 @@ export const RightChildrenOnly: Story = {
export const NoRightChildren: Story = {
render: () => (
<div className="w-[28rem] border rounded-16">
<CardHeaderLayout
<Card.Header
sizePreset="main-ui"
variant="section"
icon={SvgGlobe}
@@ -125,7 +125,7 @@ export const NoRightChildren: Story = {
export const LongContent: Story = {
render: () => (
<div className="w-[28rem] border rounded-16">
<CardHeaderLayout
<Card.Header
sizePreset="main-ui"
variant="section"
icon={SvgGlobe}

View File

@@ -0,0 +1,116 @@
# Card
**Import:** `import { Card } from "@opal/layouts";`
A namespace of card layout primitives. Each sub-component handles a specific region of a card.
## Card.Header
A card header layout that pairs a [`Content`](../content/README.md) block with a right-side column and an optional full-width children slot.
### Why Card.Header?
[`ContentAction`](../content-action/README.md) provides a single `rightChildren` slot. Card headers typically need two distinct right-side regions — a primary action on top and secondary actions on the bottom. `Card.Header` provides this with `rightChildren` and `bottomRightChildren` slots, plus a `children` slot for full-width content below the header row (e.g., search bars, expandable tool lists).
### Props
Inherits **all** props from [`Content`](../content/README.md) (icon, title, description, sizePreset, variant, editable, onTitleChange, suffix, etc.) plus:
| Prop | Type | Default | Description |
|---|---|---|---|
| `rightChildren` | `ReactNode` | `undefined` | Content rendered to the right of the Content block (top of right column). |
| `bottomRightChildren` | `ReactNode` | `undefined` | Content rendered below `rightChildren` in the same column. Laid out as `flex flex-row`. |
| `children` | `ReactNode` | `undefined` | Content rendered below the full header row, spanning the entire width. |
### Layout Structure
```
+---------------------------------------------------------+
| [Content (p-2, self-start)] [rightChildren] |
| icon + title + description [bottomRightChildren] |
+---------------------------------------------------------+
| [children — full width] |
+---------------------------------------------------------+
```
- Outer wrapper: `flex flex-col w-full`
- Header row: `flex flex-row items-stretch w-full`
- Content area: `flex-1 min-w-0 self-start p-2` — top-aligned with fixed padding
- Right column: `flex flex-col items-end shrink-0` — no padding, no gap
- `bottomRightChildren` wrapper: `flex flex-row` — lays children out horizontally
- `children` wrapper: `w-full` — only rendered when children are provided
### Usage
#### Card with primary and secondary actions
```tsx
import { Card } from "@opal/layouts";
import { Button } from "@opal/components";
import { SvgGlobe, SvgSettings, SvgUnplug, SvgCheckSquare } from "@opal/icons";
<Card.Header
icon={SvgGlobe}
title="Google Search"
description="Web search provider"
sizePreset="main-ui"
variant="section"
rightChildren={
<Button icon={SvgCheckSquare} variant="action" prominence="tertiary">
Current Default
</Button>
}
bottomRightChildren={
<>
<Button icon={SvgUnplug} size="sm" prominence="tertiary" tooltip="Disconnect" />
<Button icon={SvgSettings} size="sm" prominence="tertiary" tooltip="Edit" />
</>
}
/>
```
#### Card with only a connect action
```tsx
<Card.Header
icon={SvgCloud}
title="OpenAI"
description="Not configured"
sizePreset="main-ui"
variant="section"
rightChildren={
<Button rightIcon={SvgArrowExchange} prominence="tertiary">
Connect
</Button>
}
/>
```
#### Card with expandable children
```tsx
<Card.Header
icon={SvgServer}
title="MCP Server"
description="12 tools available"
sizePreset="main-ui"
variant="section"
rightChildren={<Button icon={SvgSettings} prominence="tertiary" />}
>
<SearchBar placeholder="Search tools..." />
</Card.Header>
```
#### No right children
```tsx
<Card.Header
icon={SvgInfo}
title="Section Header"
description="Description text"
sizePreset="main-content"
variant="section"
/>
```
When both `rightChildren` and `bottomRightChildren` are omitted and no `children` are provided, the component renders only the padded `Content`.

View File

@@ -4,16 +4,23 @@ import { Content, type ContentProps } from "@opal/layouts/content/components";
// Types
// ---------------------------------------------------------------------------
type CardHeaderLayoutProps = ContentProps & {
type CardHeaderProps = ContentProps & {
/** Content rendered to the right of the Content block. */
rightChildren?: React.ReactNode;
/** Content rendered below `rightChildren` in the same column. */
bottomRightChildren?: React.ReactNode;
/**
* Content rendered below the header row, full-width.
* Use for expandable sections, search bars, or any content
* that should appear beneath the icon/title/actions row.
*/
children?: React.ReactNode;
};
// ---------------------------------------------------------------------------
// CardHeaderLayout
// Card.Header
// ---------------------------------------------------------------------------
/**
@@ -24,9 +31,12 @@ type CardHeaderLayoutProps = ContentProps & {
* `rightChildren` on top, `bottomRightChildren` below with no
* padding or gap between them.
*
* The optional `children` slot renders below the full header row,
* spanning the entire width.
*
* @example
* ```tsx
* <CardHeaderLayout
* <Card.Header
* icon={SvgGlobe}
* title="Google"
* description="Search engine"
@@ -42,32 +52,42 @@ type CardHeaderLayoutProps = ContentProps & {
* />
* ```
*/
function CardHeaderLayout({
function Header({
rightChildren,
bottomRightChildren,
children,
...contentProps
}: CardHeaderLayoutProps) {
}: CardHeaderProps) {
const hasRight = rightChildren || bottomRightChildren;
return (
<div className="flex flex-row items-stretch w-full">
<div className="flex-1 min-w-0 self-start p-2">
<Content {...contentProps} />
</div>
{hasRight && (
<div className="flex flex-col items-end shrink-0">
{rightChildren && <div className="flex-1">{rightChildren}</div>}
{bottomRightChildren && (
<div className="flex flex-row">{bottomRightChildren}</div>
)}
<div className="flex flex-col w-full">
<div className="flex flex-row items-stretch w-full">
<div className="flex-1 min-w-0 self-start p-2">
<Content {...contentProps} />
</div>
)}
{hasRight && (
<div className="flex flex-col items-end shrink-0">
{rightChildren && <div className="flex-1">{rightChildren}</div>}
{bottomRightChildren && (
<div className="flex flex-row">{bottomRightChildren}</div>
)}
</div>
)}
</div>
{children && <div className="w-full">{children}</div>}
</div>
);
}
// ---------------------------------------------------------------------------
// Card namespace
// ---------------------------------------------------------------------------
const Card = { Header };
// ---------------------------------------------------------------------------
// Exports
// ---------------------------------------------------------------------------
export { CardHeaderLayout, type CardHeaderLayoutProps };
export { Card, type CardHeaderProps };

View File

@@ -1,94 +0,0 @@
# CardHeaderLayout
**Import:** `import { CardHeaderLayout, type CardHeaderLayoutProps } from "@opal/layouts";`
A card header layout that pairs a [`Content`](../../content/README.md) block with a right-side column of vertically stacked children.
## Why CardHeaderLayout?
[`ContentAction`](../../content-action/README.md) provides a single `rightChildren` slot. Card headers typically need two distinct right-side regions — a primary action on top and secondary actions on the bottom. `CardHeaderLayout` provides this with `rightChildren` and `bottomRightChildren` slots, with no padding or gap between them so the caller has full control over spacing.
## Props
Inherits **all** props from [`Content`](../../content/README.md) (icon, title, description, sizePreset, variant, etc.) plus:
| Prop | Type | Default | Description |
|---|---|---|---|
| `rightChildren` | `ReactNode` | `undefined` | Content rendered to the right of the Content block (top of right column). |
| `bottomRightChildren` | `ReactNode` | `undefined` | Content rendered below `rightChildren` in the same column. Laid out as `flex flex-row`. |
## Layout Structure
```
┌──────────────────────────────────────────────────────┐
│ [Content (p-2, self-start)] [rightChildren] │
│ icon + title + description [bottomRightChildren] │
└──────────────────────────────────────────────────────┘
```
- Outer wrapper: `flex flex-row items-stretch w-full`
- Content area: `flex-1 min-w-0 self-start p-2` — top-aligned with fixed padding
- Right column: `flex flex-col items-end justify-between shrink-0` — no padding, no gap
- `bottomRightChildren` wrapper: `flex flex-row` — lays children out horizontally
The right column uses `justify-between` so when both slots are present, `rightChildren` sits at the top and `bottomRightChildren` at the bottom.
## Usage
### Card with primary and secondary actions
```tsx
import { CardHeaderLayout } from "@opal/layouts";
import { Button } from "@opal/components";
import { SvgGlobe, SvgSettings, SvgUnplug, SvgCheckSquare } from "@opal/icons";
<CardHeaderLayout
icon={SvgGlobe}
title="Google Search"
description="Web search provider"
sizePreset="main-ui"
variant="section"
rightChildren={
<Button icon={SvgCheckSquare} variant="action" prominence="tertiary">
Current Default
</Button>
}
bottomRightChildren={
<>
<Button icon={SvgUnplug} size="sm" prominence="tertiary" tooltip="Disconnect" />
<Button icon={SvgSettings} size="sm" prominence="tertiary" tooltip="Edit" />
</>
}
/>
```
### Card with only a connect action
```tsx
<CardHeaderLayout
icon={SvgCloud}
title="OpenAI"
description="Not configured"
sizePreset="main-ui"
variant="section"
rightChildren={
<Button rightIcon={SvgArrowExchange} prominence="tertiary">
Connect
</Button>
}
/>
```
### No right children
```tsx
<CardHeaderLayout
icon={SvgInfo}
title="Section Header"
description="Description text"
sizePreset="main-content"
variant="section"
/>
```
When both `rightChildren` and `bottomRightChildren` are omitted, the component renders only the padded `Content`.

View File

@@ -12,11 +12,8 @@ export {
type ContentActionProps,
} from "@opal/layouts/content-action/components";
/* CardHeaderLayout */
export {
CardHeaderLayout,
type CardHeaderLayoutProps,
} from "@opal/layouts/cards/header-layout/components";
/* Card */
export { Card, type CardHeaderProps } from "@opal/layouts/cards/components";
/* IllustrationContent */
export {

web/package-lock.json generated
View File

@@ -18122,9 +18122,9 @@
}
},
"node_modules/vite": {
"version": "6.4.1",
"resolved": "https://registry.npmjs.org/vite/-/vite-6.4.1.tgz",
"integrity": "sha512-+Oxm7q9hDoLMyJOYfUYBuHQo+dkAloi33apOPP56pzj+vsdJDzr+j1NISE5pyaAuKL4A3UD34qd0lx5+kfKp2g==",
"version": "6.4.2",
"resolved": "https://registry.npmjs.org/vite/-/vite-6.4.2.tgz",
"integrity": "sha512-2N/55r4JDJ4gdrCvGgINMy+HH3iRpNIz8K6SFwVsA+JbQScLiC+clmAxBgwiSPgcG9U15QmvqCGWzMbqda5zGQ==",
"dev": true,
"license": "MIT",
"peer": true,

View File

@@ -1 +1 @@
export { default } from "@/refresh-pages/admin/LLMConfigurationPage";
export { default } from "@/refresh-pages/admin/LLMProviderConfigurationPage";

View File

@@ -32,8 +32,10 @@ import {
OpenRouterFetchParams,
LiteLLMProxyFetchParams,
BifrostFetchParams,
OpenAICompatibleFetchParams,
OpenAICompatibleModelResponse,
} from "@/interfaces/llm";
import { SvgAws, SvgBifrost, SvgOpenrouter } from "@opal/icons";
import { SvgAws, SvgBifrost, SvgOpenrouter, SvgPlug } from "@opal/icons";
// Aggregator providers that host models from multiple vendors
export const AGGREGATOR_PROVIDERS = new Set([
@@ -44,6 +46,7 @@ export const AGGREGATOR_PROVIDERS = new Set([
"lm_studio",
"litellm_proxy",
"bifrost",
"openai_compatible",
"vertex_ai",
]);
@@ -82,6 +85,7 @@ export const getProviderIcon = (
openrouter: SvgOpenrouter,
litellm_proxy: LiteLLMIcon,
bifrost: SvgBifrost,
openai_compatible: SvgPlug,
vertex_ai: GeminiIcon,
};
@@ -411,6 +415,64 @@ export const fetchBifrostModels = async (
}
};
/**
* Fetches models from a generic OpenAI-compatible server.
* Uses snake_case params to match API structure.
*/
export const fetchOpenAICompatibleModels = async (
params: OpenAICompatibleFetchParams
): Promise<{ models: ModelConfiguration[]; error?: string }> => {
const apiBase = params.api_base;
if (!apiBase) {
return { models: [], error: "API Base is required" };
}
try {
const response = await fetch(
"/api/admin/llm/openai-compatible/available-models",
{
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify({
api_base: apiBase,
api_key: params.api_key,
provider_name: params.provider_name,
}),
signal: params.signal,
}
);
if (!response.ok) {
let errorMessage = "Failed to fetch models";
try {
const errorData = await response.json();
errorMessage = errorData.detail || errorData.message || errorMessage;
} catch {
// ignore JSON parsing errors
}
return { models: [], error: errorMessage };
}
const data: OpenAICompatibleModelResponse[] = await response.json();
const models: ModelConfiguration[] = data.map((modelData) => ({
name: modelData.name,
display_name: modelData.display_name,
is_visible: true,
max_input_tokens: modelData.max_input_tokens,
supports_image_input: modelData.supports_image_input,
supports_reasoning: modelData.supports_reasoning,
}));
return { models };
} catch (error) {
const errorMessage =
error instanceof Error ? error.message : "Unknown error";
return { models: [], error: errorMessage };
}
};
/**
* Fetches LiteLLM Proxy models directly without any form state dependencies.
* Uses snake_case params to match API structure.
@@ -531,6 +593,13 @@ export const fetchModels = async (
provider_name: formValues.name,
signal,
});
case LLMProviderName.OPENAI_COMPATIBLE:
return fetchOpenAICompatibleModels({
api_base: formValues.api_base,
api_key: formValues.api_key,
provider_name: formValues.name,
signal,
});
default:
return { models: [], error: `Unknown provider: ${providerName}` };
}
@@ -545,6 +614,7 @@ export function canProviderFetchModels(providerName?: string) {
case LLMProviderName.OPENROUTER:
case LLMProviderName.LITELLM_PROXY:
case LLMProviderName.BIFROST:
case LLMProviderName.OPENAI_COMPATIBLE:
return true;
default:
return false;

View File

@@ -64,50 +64,50 @@ export default function CreateRateLimitModal({
title="Create a Token Rate Limit"
onClose={() => setIsOpen(false)}
/>
<Modal.Body>
<Formik
initialValues={{
enabled: true,
period_hours: "",
token_budget: "",
target_scope: forSpecificScope || Scope.GLOBAL,
user_group_id: forSpecificUserGroup,
}}
validationSchema={Yup.object().shape({
period_hours: Yup.number()
.required("Time Window is a required field")
.min(1, "Time Window must be at least 1 hour"),
token_budget: Yup.number()
.required("Token Budget is a required field")
.min(1, "Token Budget must be at least 1"),
target_scope: Yup.string().required(
"Target Scope is a required field"
),
user_group_id: Yup.string().test(
"user_group_id",
"User Group is a required field",
(value, context) => {
return (
context.parent.target_scope !== "user_group" ||
(context.parent.target_scope === "user_group" &&
value !== undefined)
);
}
),
})}
onSubmit={async (values, formikHelpers) => {
formikHelpers.setSubmitting(true);
onSubmit(
values.target_scope,
Number(values.period_hours),
Number(values.token_budget),
Number(values.user_group_id)
);
return formikHelpers.setSubmitting(false);
}}
>
{({ isSubmitting, values, setFieldValue }) => (
<Form className="overflow-visible px-2">
<Formik
initialValues={{
enabled: true,
period_hours: "",
token_budget: "",
target_scope: forSpecificScope || Scope.GLOBAL,
user_group_id: forSpecificUserGroup,
}}
validationSchema={Yup.object().shape({
period_hours: Yup.number()
.required("Time Window is a required field")
.min(1, "Time Window must be at least 1 hour"),
token_budget: Yup.number()
.required("Token Budget is a required field")
.min(1, "Token Budget must be at least 1"),
target_scope: Yup.string().required(
"Target Scope is a required field"
),
user_group_id: Yup.string().test(
"user_group_id",
"User Group is a required field",
(value, context) => {
return (
context.parent.target_scope !== "user_group" ||
(context.parent.target_scope === "user_group" &&
value !== undefined)
);
}
),
})}
onSubmit={async (values, formikHelpers) => {
formikHelpers.setSubmitting(true);
onSubmit(
values.target_scope,
Number(values.period_hours),
Number(values.token_budget),
Number(values.user_group_id)
);
return formikHelpers.setSubmitting(false);
}}
>
{({ isSubmitting, values, setFieldValue }) => (
<Form className="flex flex-col h-full min-h-0 overflow-visible">
<Modal.Body>
{!forSpecificScope && (
<SelectorFormField
name="target_scope"
@@ -147,13 +147,15 @@ export default function CreateRateLimitModal({
type="number"
placeholder=""
/>
</Modal.Body>
<Modal.Footer>
<Button disabled={isSubmitting} type="submit">
Create
</Button>
</Form>
)}
</Formik>
</Modal.Body>
</Modal.Footer>
</Form>
)}
</Formik>
</Modal.Content>
</Modal>
);

View File

@@ -0,0 +1,126 @@
"use client";
import { useCallback } from "react";
import { Button } from "@opal/components";
import { Text } from "@opal/components";
import { ContentAction } from "@opal/layouts";
import { SvgEyeOff, SvgX } from "@opal/icons";
import { getProviderIcon } from "@/app/admin/configuration/llm/utils";
import AgentMessage, {
AgentMessageProps,
} from "@/app/app/message/messageComponents/AgentMessage";
import { cn } from "@/lib/utils";
import { markdown } from "@opal/utils";
export interface MultiModelPanelProps {
/** Provider name for icon lookup */
provider: string;
/** Model name for icon lookup and display */
modelName: string;
/** Display-friendly model name */
displayName: string;
/** Whether this panel is the preferred/selected response */
isPreferred: boolean;
/** Whether this panel is currently hidden */
isHidden: boolean;
/** Whether this is a non-preferred panel in selection mode (pushed off-screen) */
isNonPreferredInSelection: boolean;
/** Callback when user clicks this panel to select as preferred */
onSelect: () => void;
/** Callback to hide/show this panel */
onToggleVisibility: () => void;
/** Props to pass through to AgentMessage */
agentMessageProps: AgentMessageProps;
}
/**
* A single model's response panel within the multi-model view.
*
* Renders in two states:
* - **Hidden** — compact header strip only (provider icon + strikethrough name + show button).
* - **Visible** — full header plus `AgentMessage` body. Clicking anywhere on a
* visible non-preferred panel marks it as preferred.
*
* The `isNonPreferredInSelection` flag disables pointer events on the body and
* hides the footer so the panel acts as a passive comparison surface.
*/
export default function MultiModelPanel({
provider,
modelName,
displayName,
isPreferred,
isHidden,
isNonPreferredInSelection,
onSelect,
onToggleVisibility,
agentMessageProps,
}: MultiModelPanelProps) {
const ProviderIcon = getProviderIcon(provider, modelName);
const handlePanelClick = useCallback(() => {
if (!isHidden && !isPreferred) onSelect();
}, [isHidden, isPreferred, onSelect]);
const header = (
<div
className={cn(
"rounded-12",
isPreferred ? "bg-background-tint-02" : "bg-background-tint-00"
)}
>
<ContentAction
sizePreset="main-ui"
variant="body"
paddingVariant="lg"
icon={ProviderIcon}
title={isHidden ? markdown(`~~${displayName}~~`) : displayName}
rightChildren={
<div className="flex items-center gap-1 px-2">
{isPreferred && (
<span className="text-action-link-05 shrink-0">
<Text font="secondary-body" color="inherit" nowrap>
Preferred Response
</Text>
</span>
)}
{!isPreferred && (
<Button
prominence="tertiary"
icon={isHidden ? SvgEyeOff : SvgX}
size="md"
onClick={(e) => {
e.stopPropagation();
onToggleVisibility();
}}
tooltip={isHidden ? "Show response" : "Hide response"}
/>
)}
</div>
}
/>
</div>
);
// Hidden/collapsed panel — just the header row
if (isHidden) {
return header;
}
return (
<div
className={cn(
"flex flex-col gap-3 min-w-0 rounded-16 transition-colors",
!isPreferred && "cursor-pointer hover:bg-background-tint-02"
)}
onClick={handlePanelClick}
>
{header}
<div className={cn(isNonPreferredInSelection && "pointer-events-none")}>
<AgentMessage
{...agentMessageProps}
hideFooter={isNonPreferredInSelection}
/>
</div>
</div>
);
}
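A minimal usage sketch for the panel above, assuming the parent owns the preferred/hidden state; `responses`, `preferredIndex`, `hiddenPanels`, `toggleHidden`, and `agentProps` are illustrative names, not part of this diff:

{responses.map((r) => (
  <MultiModelPanel
    key={r.modelIndex}
    provider={r.provider}
    modelName={r.modelName}
    displayName={r.displayName}
    isPreferred={preferredIndex === r.modelIndex}
    isHidden={hiddenPanels.has(r.modelIndex)}
    isNonPreferredInSelection={
      preferredIndex !== null && preferredIndex !== r.modelIndex
    }
    onSelect={() => setPreferredIndex(r.modelIndex)}
    onToggleVisibility={() => toggleHidden(r.modelIndex)}
    agentMessageProps={agentProps(r)}
  />
))}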

View File

@@ -0,0 +1,372 @@
"use client";
import { useState, useCallback, useMemo, useEffect, useRef } from "react";
import { FullChatState } from "@/app/app/message/messageComponents/interfaces";
import { Message } from "@/app/app/interfaces";
import { LlmManager } from "@/lib/hooks";
import { RegenerationFactory } from "@/app/app/message/messageComponents/AgentMessage";
import MultiModelPanel from "@/app/app/message/MultiModelPanel";
import { MultiModelResponse } from "@/app/app/message/interfaces";
import { cn } from "@/lib/utils";
export interface MultiModelResponseViewProps {
responses: MultiModelResponse[];
chatState: FullChatState;
llmManager: LlmManager | null;
onRegenerate?: RegenerationFactory;
parentMessage?: Message | null;
otherMessagesCanSwitchTo?: number[];
onMessageSelection?: (nodeId: number) => void;
/** Called whenever the set of hidden panel indices changes */
onHiddenPanelsChange?: (hidden: Set<number>) => void;
}
// How many pixels of a non-preferred panel are visible at the viewport edge
const PEEK_W = 64;
// Uniform panel width used in the selection-mode carousel
const SELECTION_PANEL_W = 400;
// Compact width for hidden panels in the carousel track
const HIDDEN_PANEL_W = 220;
// Generation-mode panel widths (from Figma)
const GEN_PANEL_W_2 = 640; // 2 panels side-by-side
const GEN_PANEL_W_3 = 436; // 3 panels side-by-side
// Gap between panels — matches CSS gap-6 (24px)
const PANEL_GAP = 24;
// Minimum panel width before horizontal scroll kicks in
const MIN_PANEL_W = 300;
/**
* Renders N model responses side-by-side with two layout modes:
*
* **Generation mode** — equal-width panels in a horizontally-scrollable row.
* Panel width is determined by the number of visible (non-hidden) panels.
*
* **Selection mode** — activated when the user clicks a panel to mark it as
* preferred. All panels (including hidden ones) sit in a fixed-width carousel
* track. A CSS `translateX` transform slides the track so the preferred panel
* is centered in the viewport; the other panels peek in from the edges through
* a mask gradient. Non-preferred visible panels are height-capped to the
* preferred panel's measured height, dimmed at 50% opacity, and receive a
* bottom fade-out overlay.
*
* Hidden panels render as a compact header-only strip at `HIDDEN_PANEL_W` in
* both modes and are excluded from layout width calculations.
*/
export default function MultiModelResponseView({
responses,
chatState,
llmManager,
onRegenerate,
parentMessage,
otherMessagesCanSwitchTo,
onMessageSelection,
onHiddenPanelsChange,
}: MultiModelResponseViewProps) {
const [preferredIndex, setPreferredIndex] = useState<number | null>(null);
const [hiddenPanels, setHiddenPanels] = useState<Set<number>>(new Set());
// Controls animation: false = panels at start position, true = panels at peek position
const [selectionEntered, setSelectionEntered] = useState(false);
// Measures the overflow-hidden carousel container for responsive preferred-panel sizing.
const [trackContainerW, setTrackContainerW] = useState(0);
const roRef = useRef<ResizeObserver | null>(null);
const trackContainerRef = useCallback((el: HTMLDivElement | null) => {
if (roRef.current) {
roRef.current.disconnect();
roRef.current = null;
}
if (!el) return;
const ro = new ResizeObserver(([entry]) => {
setTrackContainerW(entry?.contentRect.width ?? 0);
});
ro.observe(el);
setTrackContainerW(el.offsetWidth);
roRef.current = ro;
}, []);
// Measures the preferred panel's height to cap non-preferred panels in selection mode.
const [preferredPanelHeight, setPreferredPanelHeight] = useState<
number | null
>(null);
const preferredRoRef = useRef<ResizeObserver | null>(null);
// Tracks which non-preferred panels overflow the preferred height cap
const [overflowingPanels, setOverflowingPanels] = useState<Set<number>>(
new Set()
);
const preferredPanelRef = useCallback((el: HTMLDivElement | null) => {
if (preferredRoRef.current) {
preferredRoRef.current.disconnect();
preferredRoRef.current = null;
}
if (!el) {
setPreferredPanelHeight(null);
return;
}
const ro = new ResizeObserver(([entry]) => {
setPreferredPanelHeight(entry?.contentRect.height ?? 0);
});
ro.observe(el);
setPreferredPanelHeight(el.offsetHeight);
preferredRoRef.current = ro;
}, []);
const isGenerating = useMemo(
() => responses.some((r) => r.isGenerating),
[responses]
);
// Non-hidden responses — used for layout width decisions and selection-mode gating
const visibleResponses = useMemo(
() => responses.filter((r) => !hiddenPanels.has(r.modelIndex)),
[responses, hiddenPanels]
);
const toggleVisibility = useCallback(
(modelIndex: number) => {
setHiddenPanels((prev) => {
const next = new Set(prev);
if (next.has(modelIndex)) {
next.delete(modelIndex);
} else {
// Don't hide the last visible panel
const visibleCount = responses.length - next.size;
if (visibleCount <= 1) return prev;
next.add(modelIndex);
}
onHiddenPanelsChange?.(next);
return next;
});
},
[responses.length, onHiddenPanelsChange]
);
const handleSelectPreferred = useCallback(
(modelIndex: number) => {
if (isGenerating) return;
setPreferredIndex(modelIndex);
const response = responses.find((r) => r.modelIndex === modelIndex);
if (!response) return;
if (onMessageSelection) {
onMessageSelection(response.nodeId);
}
},
[isGenerating, responses, onMessageSelection]
);
// Clear preferred selection when generation starts
useEffect(() => {
if (isGenerating) {
setPreferredIndex(null);
}
}, [isGenerating]);
// Find preferred panel position — used for both the selection guard and carousel layout
const preferredIdx = responses.findIndex(
(r) => r.modelIndex === preferredIndex
);
// Selection mode is active when a preferred panel is set, still present in responses, generation has finished, and at least 2 panels are visible
const showSelectionMode =
preferredIndex !== null &&
preferredIdx !== -1 &&
!isGenerating &&
visibleResponses.length > 1;
// Trigger the slide-out animation one frame after entering selection mode
useEffect(() => {
if (!showSelectionMode) {
setSelectionEntered(false);
return;
}
const raf = requestAnimationFrame(() => setSelectionEntered(true));
return () => cancelAnimationFrame(raf);
}, [showSelectionMode]);
// Build panel props — isHidden reflects actual hidden state
const buildPanelProps = useCallback(
(response: MultiModelResponse, isNonPreferred: boolean) => ({
provider: response.provider,
modelName: response.modelName,
displayName: response.displayName,
isPreferred: preferredIndex === response.modelIndex,
isHidden: hiddenPanels.has(response.modelIndex),
isNonPreferredInSelection: isNonPreferred,
onSelect: () => handleSelectPreferred(response.modelIndex),
onToggleVisibility: () => toggleVisibility(response.modelIndex),
agentMessageProps: {
rawPackets: response.packets,
packetCount: response.packetCount,
chatState,
nodeId: response.nodeId,
messageId: response.messageId,
currentFeedback: response.currentFeedback,
llmManager,
otherMessagesCanSwitchTo,
onMessageSelection,
onRegenerate,
parentMessage,
},
}),
[
preferredIndex,
hiddenPanels,
handleSelectPreferred,
toggleVisibility,
chatState,
llmManager,
otherMessagesCanSwitchTo,
onMessageSelection,
onRegenerate,
parentMessage,
]
);
if (showSelectionMode) {
// ── Selection Layout (transform-based carousel) ──
//
// All panels (including hidden) sit in the track at their original A/B/C positions.
// Hidden panels use HIDDEN_PANEL_W; non-preferred use SELECTION_PANEL_W;
// preferred uses dynamicPrefW (up to GEN_PANEL_W_2).
const n = responses.length;
const dynamicPrefW =
trackContainerW > 0
? Math.min(trackContainerW - 2 * (PEEK_W + PANEL_GAP), GEN_PANEL_W_2)
: GEN_PANEL_W_2;
const selectionWidths = responses.map((r, i) => {
if (hiddenPanels.has(r.modelIndex)) return HIDDEN_PANEL_W;
if (i === preferredIdx) return dynamicPrefW;
return SELECTION_PANEL_W;
});
const panelLeftEdges = selectionWidths.reduce<number[]>((acc, w, i) => {
acc.push(i === 0 ? 0 : acc[i - 1]! + selectionWidths[i - 1]! + PANEL_GAP);
return acc;
}, []);
const preferredCenterInTrack =
panelLeftEdges[preferredIdx]! + selectionWidths[preferredIdx]! / 2;
// Start position: hidden panels at HIDDEN_PANEL_W, visible at SELECTION_PANEL_W
const uniformTrackW =
responses.reduce(
(sum, r) =>
sum +
(hiddenPanels.has(r.modelIndex) ? HIDDEN_PANEL_W : SELECTION_PANEL_W),
0
) +
(n - 1) * PANEL_GAP;
const trackTransform = selectionEntered
? `translateX(${trackContainerW / 2 - preferredCenterInTrack}px)`
: `translateX(${(trackContainerW - uniformTrackW) / 2}px)`;
return (
<div
ref={trackContainerRef}
className="w-full overflow-hidden"
style={{
maskImage: `linear-gradient(to right, transparent 0px, black ${PEEK_W}px, black calc(100% - ${PEEK_W}px), transparent 100%)`,
WebkitMaskImage: `linear-gradient(to right, transparent 0px, black ${PEEK_W}px, black calc(100% - ${PEEK_W}px), transparent 100%)`,
}}
>
<div
className="flex items-start"
style={{
gap: `${PANEL_GAP}px`,
transition: selectionEntered
? "transform 0.45s cubic-bezier(0.2, 0, 0, 1)"
: "none",
transform: trackTransform,
}}
>
{responses.map((r, i) => {
const isHidden = hiddenPanels.has(r.modelIndex);
const isPref = r.modelIndex === preferredIndex;
const isNonPref = !isHidden && !isPref;
const finalW = selectionWidths[i]!;
const startW = isHidden ? HIDDEN_PANEL_W : SELECTION_PANEL_W;
const capped = isNonPref && preferredPanelHeight != null;
const overflows = capped && overflowingPanels.has(r.modelIndex);
return (
<div
key={r.modelIndex}
ref={(el) => {
if (isPref) preferredPanelRef(el);
if (capped && el) {
const doesOverflow = el.scrollHeight > el.clientHeight;
setOverflowingPanels((prev) => {
const had = prev.has(r.modelIndex);
if (doesOverflow === had) return prev;
const next = new Set(prev);
if (doesOverflow) next.add(r.modelIndex);
else next.delete(r.modelIndex);
return next;
});
}
}}
style={{
width: `${selectionEntered ? finalW : startW}px`,
flexShrink: 0,
transition: selectionEntered
? "width 0.45s cubic-bezier(0.2, 0, 0, 1)"
: "none",
maxHeight: capped ? preferredPanelHeight : undefined,
overflow: capped ? "hidden" : undefined,
position: capped ? "relative" : undefined,
}}
>
<div className={cn(isNonPref && "opacity-50")}>
<MultiModelPanel {...buildPanelProps(r, isNonPref)} />
</div>
{overflows && (
<div
className="absolute inset-x-0 bottom-0 h-24 pointer-events-none"
style={{
background:
"linear-gradient(to top, var(--background-tint-01) 0%, transparent 100%)",
}}
/>
)}
</div>
);
})}
</div>
</div>
);
}
// ── Generation Layout (equal panels side-by-side) ──
// Panel width based on number of visible (non-hidden) panels.
const panelWidth =
visibleResponses.length <= 2 ? GEN_PANEL_W_2 : GEN_PANEL_W_3;
return (
<div className="overflow-x-auto">
<div className="flex gap-6 items-start w-fit mx-auto">
{responses.map((r) => {
const isHidden = hiddenPanels.has(r.modelIndex);
return (
<div
key={r.modelIndex}
style={
isHidden
? {
width: HIDDEN_PANEL_W,
minWidth: HIDDEN_PANEL_W,
maxWidth: HIDDEN_PANEL_W,
flexShrink: 0,
overflow: "hidden" as const,
}
: { width: panelWidth, minWidth: MIN_PANEL_W }
}
>
<MultiModelPanel {...buildPanelProps(r, false)} />
</div>
);
})}
</div>
</div>
);
}
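To sanity-check the transform arithmetic above, a worked sketch with hypothetical numbers: a 1000px container, three visible panels A/B/C, panel B preferred. All constants come from the top of this file.

// Hypothetical: trackContainerW = 1000, panels A/B/C with B (index 1) preferred.
const dynamicPrefW = Math.min(1000 - 2 * (64 + 24), 640); // min(824, 640) = 640
const widths = [400, 640, 400];        // [SELECTION_PANEL_W, dynamicPrefW, SELECTION_PANEL_W]
const leftEdges = [0, 424, 1088];      // cumulative widths plus a 24px gap each
const preferredCenter = 424 + 640 / 2; // 744
const entered = 1000 / 2 - preferredCenter;   // translateX(-244px): B centered
const uniformTrackW = 3 * 400 + 2 * 24;       // 1248 (start position, pre-animation)
const start = (1000 - uniformTrackW) / 2;     // translateX(-124px): track centered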

View File

@@ -0,0 +1,16 @@
import { Packet } from "@/app/app/services/streamingModels";
import { FeedbackType } from "@/app/app/interfaces";
export interface MultiModelResponse {
modelIndex: number;
provider: string;
modelName: string;
displayName: string;
packets: Packet[];
packetCount: number;
nodeId: number;
messageId?: number;
currentFeedback?: FeedbackType | null;
isGenerating?: boolean;
}
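An illustrative value for this interface; the model identifiers are placeholders, not values this diff prescribes:

const response: MultiModelResponse = {
  modelIndex: 0,
  provider: "openai",    // placeholder provider key
  modelName: "gpt-4o",   // raw model id, used for icon lookup
  displayName: "GPT-4o", // friendly name shown in the panel header
  packets: [],           // streamed packets accumulate here
  packetCount: 0,
  nodeId: 17,            // message-tree node backing this response
  isGenerating: true,    // keeps selection mode disabled while streaming
};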

View File

@@ -49,6 +49,8 @@ export interface AgentMessageProps {
parentMessage?: Message | null;
// Duration in seconds for processing this message (agent messages only)
processingDurationSeconds?: number;
/** Hide the feedback/toolbar footer (used in multi-model non-preferred panels) */
hideFooter?: boolean;
}
// TODO: Consider more robust comparisons:
@@ -76,7 +78,8 @@ function arePropsEqual(
prev.parentMessage?.messageId === next.parentMessage?.messageId &&
prev.llmManager?.isLoadingProviders ===
next.llmManager?.isLoadingProviders &&
prev.processingDurationSeconds === next.processingDurationSeconds
prev.processingDurationSeconds === next.processingDurationSeconds &&
prev.hideFooter === next.hideFooter
// Skip: chatState.regenerate, chatState.setPresentingDocument,
// most of llmManager, onMessageSelection (function/object props)
);
@@ -95,6 +98,7 @@ const AgentMessage = React.memo(function AgentMessage({
onRegenerate,
parentMessage,
processingDurationSeconds,
hideFooter,
}: AgentMessageProps) {
const markdownRef = useRef<HTMLDivElement>(null);
const finalAnswerRef = useRef<HTMLDivElement>(null);
@@ -326,7 +330,7 @@ const AgentMessage = React.memo(function AgentMessage({
</div>
{/* Feedback buttons - only show once streaming and rendering are complete */}
{isComplete && (
{isComplete && !hideFooter && (
<MessageToolbar
nodeId={nodeId}
messageId={messageId}

View File

@@ -16,7 +16,7 @@ import Text from "@/refresh-components/texts/Text";
import SidebarWrapper from "@/sections/sidebar/SidebarWrapper";
import SidebarBody from "@/sections/sidebar/SidebarBody";
import SidebarSection from "@/sections/sidebar/SidebarSection";
import UserAvatarPopover from "@/sections/sidebar/UserAvatarPopover";
import AccountPopover from "@/sections/sidebar/AccountPopover";
import Popover, { PopoverMenu } from "@/refresh-components/Popover";
import IconButton from "@/refresh-components/buttons/IconButton";
import ButtonRenaming from "@/refresh-components/buttons/ButtonRenaming";
@@ -398,7 +398,7 @@ const MemoizedBuildSidebarInner = memo(
() => (
<div>
{backToChatButton}
<UserAvatarPopover folded={folded} />
<AccountPopover folded={folded} />
</div>
),
[folded, backToChatButton]

View File

@@ -24,7 +24,6 @@ import {
} from "@/app/craft/onboarding/constants";
import { LLMProviderDescriptor } from "@/interfaces/llm";
import { LLM_PROVIDERS_ADMIN_URL } from "@/lib/llmConfig/constants";
import { buildOnboardingInitialValues as buildInitialValues } from "@/sections/modals/llmConfig/utils";
import { testApiKeyHelper } from "@/sections/modals/llmConfig/svc";
import OnboardingInfoPages from "@/app/craft/onboarding/components/OnboardingInfoPages";
import OnboardingUserInfo from "@/app/craft/onboarding/components/OnboardingUserInfo";
@@ -221,10 +220,8 @@ export default function BuildOnboardingModal({
setConnectionStatus("testing");
setErrorMessage("");
const baseValues = buildInitialValues();
const providerName = `build-mode-${currentProviderConfig.providerName}`;
const payload = {
...baseValues,
name: providerName,
provider: currentProviderConfig.providerName,
api_key: apiKey,

View File

@@ -133,7 +133,7 @@ async function createFederatedConnector(
async function updateFederatedConnector(
id: number,
credentials: CredentialForm,
credentials: CredentialForm | null,
config?: ConfigForm
): Promise<{ success: boolean; message: string }> {
try {
@@ -143,7 +143,7 @@ async function updateFederatedConnector(
"Content-Type": "application/json",
},
body: JSON.stringify({
credentials,
credentials: credentials ?? undefined,
config: config || {},
}),
});
@@ -201,7 +201,9 @@ export function FederatedConnectorForm({
const isEditMode = connectorId !== undefined;
const [formState, setFormState] = useState<FormState>({
credentials: preloadedConnectorData?.credentials || {},
// In edit mode, don't populate credentials with masked values from the API.
// Masked values (e.g. "••••••••••••") would be saved back and corrupt the real credentials.
credentials: isEditMode ? {} : preloadedConnectorData?.credentials || {},
config: preloadedConnectorData?.config || {},
schema: preloadedCredentialSchema?.credentials || null,
configurationSchema: null,
@@ -209,6 +211,7 @@ export function FederatedConnectorForm({
configurationSchemaError: null,
connectorError: null,
});
const [credentialsModified, setCredentialsModified] = useState(false);
const [isSubmitting, setIsSubmitting] = useState(false);
const [submitMessage, setSubmitMessage] = useState<string | null>(null);
const [submitSuccess, setSubmitSuccess] = useState<boolean | null>(null);
@@ -333,6 +336,7 @@ export function FederatedConnectorForm({
}
const handleCredentialChange = (key: string, value: string) => {
setCredentialsModified(true);
setFormState((prev) => ({
...prev,
credentials: {
@@ -354,6 +358,11 @@ export function FederatedConnectorForm({
const handleValidateCredentials = async () => {
if (!formState.schema) return;
if (isEditMode && !credentialsModified) {
setSubmitMessage("Enter new credential values before validating.");
setSubmitSuccess(false);
return;
}
setIsValidating(true);
setSubmitMessage(null);
@@ -411,8 +420,10 @@ export function FederatedConnectorForm({
setSubmitSuccess(null);
try {
// Validate required fields
if (formState.schema) {
const shouldValidateCredentials = !isEditMode || credentialsModified;
// Validate required fields (skip for credentials in edit mode when unchanged)
if (formState.schema && shouldValidateCredentials) {
const missingRequired = Object.entries(formState.schema)
.filter(
([key, field]) => field.required && !formState.credentials[key]
@@ -442,16 +453,20 @@ export function FederatedConnectorForm({
}
setConfigValidationErrors({});
// Validate credentials before creating/updating
const validation = await validateCredentials(
connector,
formState.credentials
);
if (!validation.success) {
setSubmitMessage(`Credential validation failed: ${validation.message}`);
setSubmitSuccess(false);
setIsSubmitting(false);
return;
// Validate credentials before creating/updating (skip in edit mode when unchanged)
if (shouldValidateCredentials) {
const validation = await validateCredentials(
connector,
formState.credentials
);
if (!validation.success) {
setSubmitMessage(
`Credential validation failed: ${validation.message}`
);
setSubmitSuccess(false);
setIsSubmitting(false);
return;
}
}
// Create or update the connector
@@ -459,7 +474,7 @@ export function FederatedConnectorForm({
isEditMode && connectorId
? await updateFederatedConnector(
connectorId,
formState.credentials,
credentialsModified ? formState.credentials : null,
formState.config
)
: await createFederatedConnector(
@@ -538,14 +553,16 @@ export function FederatedConnectorForm({
id={fieldKey}
type={fieldSpec.secret ? "password" : "text"}
placeholder={
fieldSpec.example
? String(fieldSpec.example)
: fieldSpec.description
isEditMode && !credentialsModified
? "•••••••• (leave blank to keep current value)"
: fieldSpec.example
? String(fieldSpec.example)
: fieldSpec.description
}
value={formState.credentials[fieldKey] || ""}
onChange={(e) => handleCredentialChange(fieldKey, e.target.value)}
className="w-96"
required={fieldSpec.required}
required={!isEditMode && fieldSpec.required}
/>
</div>
))}
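The edit-mode behavior above boils down to one rule: validate and send credentials only when the user actually typed new values. A condensed sketch of that decision (the flag names follow the component; the helper itself is illustrative):

// Returns the credentials payload for updateFederatedConnector, or null to
// tell the backend to keep the stored credentials untouched.
function credentialsPayload(
  isEditMode: boolean,
  credentialsModified: boolean,
  credentials: Record<string, string>
): Record<string, string> | null {
  if (!isEditMode) return credentials;             // create mode: always send
  return credentialsModified ? credentials : null; // edit mode: only if retyped
}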

View File

@@ -1,25 +1,10 @@
"use client";
import {
WellKnownLLMProviderDescriptor,
LLMProviderDescriptor,
} from "@/interfaces/llm";
import React, {
createContext,
useContext,
useState,
useEffect,
useCallback,
} from "react";
import { useUser } from "@/providers/UserProvider";
import { LLMProviderDescriptor } from "@/interfaces/llm";
import React, { createContext, useContext, useCallback } from "react";
import { useLLMProviders } from "@/hooks/useLLMProviders";
import { useLLMProviderOptions } from "@/lib/hooks/useLLMProviderOptions";
import { testDefaultProvider as testDefaultProviderSvc } from "@/lib/llmConfig/svc";
interface ProviderContextType {
shouldShowConfigurationNeeded: boolean;
providerOptions: WellKnownLLMProviderDescriptor[];
refreshProviderInfo: () => Promise<void>;
// Expose configured provider instances for components that need it (e.g., onboarding)
llmProviders: LLMProviderDescriptor[] | undefined;
isLoadingProviders: boolean;
hasProviders: boolean;
@@ -29,79 +14,26 @@ const ProviderContext = createContext<ProviderContextType | undefined>(
undefined
);
const DEFAULT_LLM_PROVIDER_TEST_COMPLETE_KEY = "defaultLlmProviderTestComplete";
function checkDefaultLLMProviderTestComplete() {
if (typeof window === "undefined") return true;
return (
localStorage.getItem(DEFAULT_LLM_PROVIDER_TEST_COMPLETE_KEY) === "true"
);
}
function setDefaultLLMProviderTestComplete() {
if (typeof window === "undefined") return;
localStorage.setItem(DEFAULT_LLM_PROVIDER_TEST_COMPLETE_KEY, "true");
}
export function ProviderContextProvider({
children,
}: {
children: React.ReactNode;
}) {
const { user } = useUser();
// Use SWR hooks instead of raw fetch
const {
llmProviders,
isLoading: isLoadingProviders,
refetch: refetchProviders,
} = useLLMProviders();
const { llmProviderOptions: providerOptions, refetch: refetchOptions } =
useLLMProviderOptions();
const [defaultCheckSuccessful, setDefaultCheckSuccessful] =
useState<boolean>(true);
// Test the default provider - only runs if test hasn't passed yet
const testDefaultProvider = useCallback(async () => {
const shouldCheck =
!checkDefaultLLMProviderTestComplete() &&
(!user || user.role === "admin");
if (shouldCheck) {
const success = await testDefaultProviderSvc();
setDefaultCheckSuccessful(success);
if (success) {
setDefaultLLMProviderTestComplete();
}
}
}, [user]);
// Test default provider on mount
useEffect(() => {
testDefaultProvider();
}, [testDefaultProvider]);
const hasProviders = (llmProviders?.length ?? 0) > 0;
const validProviderExists = hasProviders && defaultCheckSuccessful;
const shouldShowConfigurationNeeded =
!validProviderExists && (providerOptions?.length ?? 0) > 0;
const refreshProviderInfo = useCallback(async () => {
// Refetch provider lists and re-test default provider if needed
await Promise.all([
refetchProviders(),
refetchOptions(),
testDefaultProvider(),
]);
}, [refetchProviders, refetchOptions, testDefaultProvider]);
await refetchProviders();
}, [refetchProviders]);
return (
<ProviderContext.Provider
value={{
shouldShowConfigurationNeeded,
providerOptions: providerOptions ?? [],
refreshProviderInfo,
llmProviders,
isLoadingProviders,

View File

@@ -17,7 +17,6 @@ const mockProviderStatus = {
llmProviders: [] as unknown[],
isLoadingProviders: false,
hasProviders: false,
providerOptions: [],
refreshProviderInfo: jest.fn(),
};
@@ -71,7 +70,6 @@ describe("useShowOnboarding", () => {
mockProviderStatus.llmProviders = [];
mockProviderStatus.isLoadingProviders = false;
mockProviderStatus.hasProviders = false;
mockProviderStatus.providerOptions = [];
});
it("returns showOnboarding=false while providers are loading", () => {
@@ -198,7 +196,6 @@ describe("useShowOnboarding", () => {
OnboardingStep.Welcome
);
expect(result.current.onboardingActions).toBeDefined();
expect(result.current.llmDescriptors).toEqual([]);
});
describe("localStorage persistence", () => {

View File

@@ -5,6 +5,7 @@ import { errorHandlingFetcher } from "@/lib/fetcher";
import { SWR_KEYS } from "@/lib/swr-keys";
import {
LLMProviderDescriptor,
LLMProviderName,
LLMProviderResponse,
LLMProviderView,
WellKnownLLMProviderDescriptor,
@@ -138,12 +139,12 @@ export function useAdminLLMProviders() {
* Used inside individual provider modals to pre-populate model lists
* before the user has entered credentials.
*
* @param providerEndpoint - The provider's API endpoint name (e.g. "openai", "anthropic").
* @param providerName - The provider's API endpoint name (e.g. "openai", "anthropic").
* Pass `null` to suppress the request.
*/
export function useWellKnownLLMProvider(providerEndpoint: string | null) {
export function useWellKnownLLMProvider(providerName: LLMProviderName) {
const { data, error, isLoading } = useSWR<WellKnownLLMProviderDescriptor>(
providerEndpoint ? SWR_KEYS.wellKnownLlmProvider(providerEndpoint) : null,
providerName ? SWR_KEYS.wellKnownLlmProvider(providerName) : null,
errorHandlingFetcher,
{
revalidateOnFocus: false,
@@ -159,6 +160,20 @@ export function useWellKnownLLMProvider(providerEndpoint: string | null) {
};
}
export function useTestingModelFromLLMProvider(
providerName: LLMProviderName,
llmProvider?: LLMProviderView
): string | undefined {
const { wellKnownLLMProvider } = useWellKnownLLMProvider(providerName);
const firstVisibleModelToTest = llmProvider?.model_configurations.find(
(modelConfiguration) => modelConfiguration.is_visible
)?.name;
return (
firstVisibleModelToTest ??
wellKnownLLMProvider?.recommended_default_model?.name
);
}
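// Usage sketch (illustrative, not part of this file): inside a provider
// modal, resolve the model name used for connection tests. Prefers the first
// visible configured model, else the well-known recommended default.
//
//   const testModelName = useTestingModelFromLLMProvider(
//     LLMProviderName.ANTHROPIC, // hypothetical provider choice
//     existingLlmProvider        // optional LLMProviderView, may be undefined
//   );
//
// testModelName stays undefined until either source resolves.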
export function useWellKnownLLMProviders() {
const {
data: wellKnownLLMProviders,

View File

@@ -0,0 +1,192 @@
"use client";
import { useState, useCallback, useEffect, useMemo } from "react";
import {
MAX_MODELS,
SelectedModel,
} from "@/refresh-components/popovers/ModelSelector";
import { LLMOverride } from "@/app/app/services/lib";
import { LlmManager } from "@/lib/hooks";
import { buildLlmOptions } from "@/refresh-components/popovers/LLMPopover";
export interface UseMultiModelChatReturn {
/** Currently selected models for multi-model comparison. */
selectedModels: SelectedModel[];
/** Whether multi-model mode is active (>1 model selected). */
isMultiModelActive: boolean;
/** Add a model to the selection. */
addModel: (model: SelectedModel) => void;
/** Remove a model by index. */
removeModel: (index: number) => void;
/** Replace a model at a specific index with a new one. */
replaceModel: (index: number, model: SelectedModel) => void;
/** Clear all selected models. */
clearModels: () => void;
/** Build the LLMOverride[] array from selectedModels. */
buildLlmOverrides: () => LLMOverride[];
/**
* Restore multi-model selection from model version strings (e.g. from chat history).
* Matches against available llmOptions to reconstruct full SelectedModel objects.
*/
restoreFromModelNames: (modelNames: string[]) => void;
/**
* Switch to a single model by name (after user picks a preferred response).
* Matches against llmOptions to find the full SelectedModel.
*/
selectSingleModel: (modelName: string) => void;
}
export default function useMultiModelChat(
llmManager: LlmManager
): UseMultiModelChatReturn {
const [selectedModels, setSelectedModels] = useState<SelectedModel[]>([]);
const [defaultInitialized, setDefaultInitialized] = useState(false);
// Initialize with the default model from llmManager once providers load
const llmOptions = useMemo(
() =>
llmManager.llmProviders ? buildLlmOptions(llmManager.llmProviders) : [],
[llmManager.llmProviders]
);
useEffect(() => {
if (defaultInitialized) return;
if (llmOptions.length === 0) return;
const { currentLlm } = llmManager;
// Don't initialize if currentLlm hasn't loaded yet
if (!currentLlm.modelName) return;
const match = llmOptions.find(
(opt) =>
opt.provider === currentLlm.provider &&
opt.modelName === currentLlm.modelName
);
if (match) {
setSelectedModels([
{
name: match.name,
provider: match.provider,
modelName: match.modelName,
displayName: match.displayName,
},
]);
setDefaultInitialized(true);
}
}, [llmOptions, llmManager.currentLlm, defaultInitialized]);
const isMultiModelActive = selectedModels.length > 1;
const addModel = useCallback((model: SelectedModel) => {
setSelectedModels((prev) => {
if (prev.length >= MAX_MODELS) return prev;
if (
prev.some(
(m) =>
m.provider === model.provider && m.modelName === model.modelName
)
) {
return prev;
}
return [...prev, model];
});
}, []);
const removeModel = useCallback((index: number) => {
setSelectedModels((prev) => prev.filter((_, i) => i !== index));
}, []);
const replaceModel = useCallback((index: number, model: SelectedModel) => {
setSelectedModels((prev) => {
// Don't replace with a model that's already selected elsewhere
if (
prev.some(
(m, i) =>
i !== index &&
m.provider === model.provider &&
m.modelName === model.modelName
)
) {
return prev;
}
const next = [...prev];
next[index] = model;
return next;
});
}, []);
const clearModels = useCallback(() => {
setSelectedModels([]);
}, []);
const restoreFromModelNames = useCallback(
(modelNames: string[]) => {
if (modelNames.length < 2 || llmOptions.length === 0) return;
const restored: SelectedModel[] = [];
for (const name of modelNames) {
// Try matching by modelName (raw version string like "claude-opus-4-6")
// or by displayName (friendly name like "Claude Opus 4.6")
const match = llmOptions.find(
(opt) =>
opt.modelName === name ||
opt.displayName === name ||
opt.name === name
);
if (match) {
restored.push({
name: match.name,
provider: match.provider,
modelName: match.modelName,
displayName: match.displayName,
});
}
}
if (restored.length >= 2) {
setSelectedModels(restored.slice(0, MAX_MODELS));
setDefaultInitialized(true);
}
},
[llmOptions]
);
const selectSingleModel = useCallback(
(modelName: string) => {
if (llmOptions.length === 0) return;
const match = llmOptions.find(
(opt) =>
opt.modelName === modelName ||
opt.displayName === modelName ||
opt.name === modelName
);
if (match) {
setSelectedModels([
{
name: match.name,
provider: match.provider,
modelName: match.modelName,
displayName: match.displayName,
},
]);
}
},
[llmOptions]
);
const buildLlmOverrides = useCallback((): LLMOverride[] => {
return selectedModels.map((m) => ({
model_provider: m.provider,
model_version: m.modelName,
display_name: m.displayName,
}));
}, [selectedModels]);
return {
selectedModels,
isMultiModelActive,
addModel,
removeModel,
replaceModel,
clearModels,
buildLlmOverrides,
restoreFromModelNames,
selectSingleModel,
};
}
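A wiring sketch for a chat surface; `sendMessage` and its options shape are illustrative, and only the hook's API comes from this file:

const multiModel = useMultiModelChat(llmManager);

async function handleSend(text: string) {
  // One LLMOverride per selected model when comparing; undefined otherwise.
  const llmOverrides = multiModel.isMultiModelActive
    ? multiModel.buildLlmOverrides()
    : undefined;
  await sendMessage(text, { llmOverrides }); // hypothetical send helper
}

// After the user marks a preferred response, collapse back to one model:
// multiModel.selectSingleModel("gpt-4o"); // placeholder model name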

View File

@@ -9,7 +9,6 @@ import {
OnboardingState,
OnboardingStep,
} from "@/interfaces/onboarding";
import { WellKnownLLMProviderDescriptor } from "@/interfaces/llm";
import { updateUserPersonalization } from "@/lib/userSettings";
import { useUser } from "@/providers/UserProvider";
import { MinimalPersonaSnapshot } from "@/app/admin/agents/interfaces";
@@ -22,7 +21,6 @@ function getOnboardingCompletedKey(userId: string): string {
function useOnboardingState(liveAgent?: MinimalPersonaSnapshot): {
state: OnboardingState;
llmDescriptors: WellKnownLLMProviderDescriptor[];
actions: OnboardingActions;
isLoading: boolean;
hasProviders: boolean;
@@ -35,7 +33,6 @@ function useOnboardingState(liveAgent?: MinimalPersonaSnapshot): {
llmProviders,
isLoadingProviders,
hasProviders: hasLlmProviders,
providerOptions,
refreshProviderInfo,
} = useProviderStatus();
@@ -43,7 +40,6 @@ function useOnboardingState(liveAgent?: MinimalPersonaSnapshot): {
const { refetch: refreshPersonaProviders } = useLLMProviders(liveAgent?.id);
const userName = user?.personalization?.name;
const llmDescriptors = providerOptions;
const nameUpdateTimeoutRef = useRef<ReturnType<typeof setTimeout> | null>(
null
@@ -235,7 +231,6 @@ function useOnboardingState(liveAgent?: MinimalPersonaSnapshot): {
return {
state,
llmDescriptors,
actions: {
nextStep,
prevStep,
@@ -280,7 +275,6 @@ export function useShowOnboarding({
const {
state: onboardingState,
actions: onboardingActions,
llmDescriptors,
isLoading: isLoadingOnboarding,
hasProviders: hasAnyProvider,
} = useOnboardingState(liveAgent);
@@ -350,7 +344,6 @@ export function useShowOnboarding({
onboardingDismissed,
onboardingState,
onboardingActions,
llmDescriptors,
isLoadingOnboarding,
hideOnboarding,
finishOnboarding,

View File

@@ -15,6 +15,7 @@ export enum LLMProviderName {
LITELLM = "litellm",
LITELLM_PROXY = "litellm_proxy",
BIFROST = "bifrost",
OPENAI_COMPATIBLE = "openai_compatible",
CUSTOM = "custom",
}
@@ -122,7 +123,6 @@ export interface LLMProviderFormProps {
variant?: LLMModalVariant;
existingLlmProvider?: LLMProviderView;
shouldMarkAsDefault?: boolean;
open?: boolean;
onOpenChange?: (open: boolean) => void;
/** The current default model name for this provider (from the global default). */
@@ -182,6 +182,21 @@ export interface BifrostModelResponse {
supports_reasoning: boolean;
}
export interface OpenAICompatibleFetchParams {
api_base?: string;
api_key?: string;
provider_name?: string;
signal?: AbortSignal;
}
export interface OpenAICompatibleModelResponse {
name: string;
display_name: string;
max_input_tokens: number | null;
supports_image_input: boolean;
supports_reasoning: boolean;
}
export interface VertexAIFetchParams {
model_configurations?: ModelConfiguration[];
}
@@ -200,5 +215,6 @@ export type FetchModelsParams =
| OpenRouterFetchParams
| LiteLLMProxyFetchParams
| BifrostFetchParams
| OpenAICompatibleFetchParams
| VertexAIFetchParams
| LMStudioFetchParams;
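An illustrative params value for the new openai_compatible fetch path; the URL and names are placeholders:

const params: OpenAICompatibleFetchParams = {
  api_base: "http://localhost:8000/v1", // placeholder self-hosted endpoint
  api_key: "sk-placeholder",            // optional for unauthenticated servers
  provider_name: "local-vllm",          // illustrative display name
  signal: new AbortController().signal, // lets the modal cancel a stale fetch
};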

View File

@@ -1,8 +1,9 @@
"use client";
import type { RichStr } from "@opal/types";
import type { RichStr, WithoutStyles } from "@opal/types";
import { resolveStr } from "@opal/components/text/InlineMarkdown";
import Text from "@/refresh-components/texts/Text";
import Separator from "@/refresh-components/Separator";
import { SvgXOctagon, SvgAlertCircle } from "@opal/icons";
import { useField, useFormikContext } from "formik";
import { Section } from "@/layouts/general-layouts";
@@ -229,9 +230,27 @@ function ErrorTextLayout({ children, type = "error" }: ErrorTextLayoutProps) {
);
}
/**
* FieldSeparator - A horizontal rule with inline padding, used to visually separate field groups.
*/
function FieldSeparator() {
return <Separator noPadding className="p-2" />;
}
/**
* FieldPadder - Wraps a field in standard horizontal + vertical padding (`p-2 w-full`).
*/
type FieldPadderProps = WithoutStyles<React.HTMLAttributes<HTMLDivElement>>;
function FieldPadder(props: FieldPadderProps) {
return <div {...props} className="p-2 w-full" />;
}
export {
VerticalInputLayout as Vertical,
HorizontalInputLayout as Horizontal,
ErrorLayout as Error,
ErrorTextLayout,
FieldSeparator,
FieldPadder,
type FieldPadderProps,
};
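A composition sketch for the two new helpers; the field components are placeholders for whatever the modal renders:

<>
  <FieldPadder>
    <ApiKeyField /> {/* placeholder field */}
  </FieldPadder>
  <FieldSeparator />
  <FieldPadder>
    <ApiBaseField /> {/* placeholder field */}
  </FieldPadder>
</>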

View File

@@ -7,6 +7,7 @@ import {
SvgOllama,
SvgAws,
SvgOpenrouter,
SvgPlug,
SvgServer,
SvgAzure,
SvgGemini,
@@ -27,6 +28,7 @@ const PROVIDER_ICONS: Record<string, IconFunctionComponent> = {
[LLMProviderName.OPENROUTER]: SvgOpenrouter,
[LLMProviderName.LM_STUDIO]: SvgLmStudio,
[LLMProviderName.BIFROST]: SvgBifrost,
[LLMProviderName.OPENAI_COMPATIBLE]: SvgPlug,
// fallback
[LLMProviderName.CUSTOM]: SvgServer,
@@ -44,6 +46,7 @@ const PROVIDER_PRODUCT_NAMES: Record<string, string> = {
[LLMProviderName.OPENROUTER]: "OpenRouter",
[LLMProviderName.LM_STUDIO]: "LM Studio",
[LLMProviderName.BIFROST]: "Bifrost",
[LLMProviderName.OPENAI_COMPATIBLE]: "OpenAI Compatible",
// fallback
[LLMProviderName.CUSTOM]: "Custom Models",
@@ -61,6 +64,7 @@ const PROVIDER_DISPLAY_NAMES: Record<string, string> = {
[LLMProviderName.OPENROUTER]: "OpenRouter",
[LLMProviderName.LM_STUDIO]: "LM Studio",
[LLMProviderName.BIFROST]: "Bifrost",
[LLMProviderName.OPENAI_COMPATIBLE]: "OpenAI Compatible",
// fallback
[LLMProviderName.CUSTOM]: "Other providers or self-hosted",

View File

@@ -80,6 +80,11 @@ export const SWR_KEYS = {
// ── Users ─────────────────────────────────────────────────────────────────
acceptedUsers: "/api/manage/users/accepted/all",
invitedUsers: "/api/manage/users/invited",
// Curator-accessible listing of all users (and optionally service-account
// entries when `?include_api_keys=true`). Used by group create/edit pages so
// global curators — who cannot hit the admin-only `/accepted/all` and
// `/invited` endpoints — can still load the member picker.
groupMemberCandidates: "/api/manage/users?include_api_keys=true",
pendingTenantUsers: "/api/tenants/users/pending",
userCounts: "/api/manage/users/counts",
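// Usage sketch (illustrative): consumed with SWR on the group pages, e.g.
//   const { data: candidates } = useSWR(
//     SWR_KEYS.groupMemberCandidates,
//     errorHandlingFetcher // fetcher seen elsewhere in this diff
//   );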

View File

@@ -89,18 +89,6 @@ export const KeyWideLayout: Story = {
},
};
export const Disabled: Story = {
render: () => (
<KeyValueInput
keyTitle="Key"
valueTitle="Value"
items={[{ key: "LOCKED", value: "cannot-edit" }]}
onChange={() => {}}
disabled
/>
),
};
export const EmptyLineMode: Story = {
render: function EmptyStory() {
const [items, setItems] = React.useState<KeyValue[]>([]);

View File

@@ -68,21 +68,13 @@
* ```
*/
import React, {
useCallback,
useContext,
useEffect,
useMemo,
useId,
useRef,
} from "react";
import React, { useCallback, useEffect, useMemo, useRef } from "react";
import { cn } from "@/lib/utils";
import InputTypeIn from "./InputTypeIn";
import { Button, EmptyMessageCard } from "@opal/components";
import type { WithoutStyles } from "@opal/types";
import Text from "@/refresh-components/texts/Text";
import { FieldContext } from "../form/FieldContext";
import { FieldMessage } from "../messages/FieldMessage";
import { ErrorTextLayout } from "@/layouts/input-layouts";
import { SvgMinusCircle, SvgPlusCircle } from "@opal/icons";
export type KeyValue = { key: string; value: string };
@@ -107,82 +99,50 @@ const GRID_COLS = {
interface KeyValueInputItemProps {
item: KeyValue;
onChange: (next: KeyValue) => void;
disabled?: boolean;
onRemove: () => void;
keyPlaceholder?: string;
valuePlaceholder?: string;
error?: KeyValueError;
canRemove: boolean;
index: number;
fieldId: string;
}
function KeyValueInputItem({
item,
onChange,
disabled,
onRemove,
keyPlaceholder,
valuePlaceholder,
error,
canRemove,
index,
fieldId,
}: KeyValueInputItemProps) {
return (
<>
<div className="flex flex-col gap-y-0.5">
<InputTypeIn
placeholder={keyPlaceholder || "Key"}
placeholder={keyPlaceholder}
value={item.key}
onChange={(e) => onChange({ ...item, key: e.target.value })}
aria-label={`${keyPlaceholder || "Key"} ${index + 1}`}
aria-invalid={!!error?.key}
aria-describedby={
error?.key ? `${fieldId}-key-error-${index}` : undefined
}
variant={disabled ? "disabled" : undefined}
showClearButton={false}
/>
{error?.key && (
<FieldMessage variant="error" className="ml-0.5">
<FieldMessage.Content
id={`${fieldId}-key-error-${index}`}
role="alert"
className="ml-0.5"
>
{error.key}
</FieldMessage.Content>
</FieldMessage>
)}
{error?.key && <ErrorTextLayout>{error.key}</ErrorTextLayout>}
</div>
<div className="flex flex-col gap-y-0.5">
<InputTypeIn
placeholder={valuePlaceholder || "Value"}
placeholder={valuePlaceholder}
value={item.value}
onChange={(e) => onChange({ ...item, value: e.target.value })}
aria-label={`${valuePlaceholder || "Value"} ${index + 1}`}
aria-invalid={!!error?.value}
aria-describedby={
error?.value ? `${fieldId}-value-error-${index}` : undefined
}
variant={disabled ? "disabled" : undefined}
showClearButton={false}
/>
{error?.value && (
<FieldMessage variant="error" className="ml-0.5">
<FieldMessage.Content
id={`${fieldId}-value-error-${index}`}
role="alert"
className="ml-0.5"
>
{error.value}
</FieldMessage.Content>
</FieldMessage>
)}
{error?.value && <ErrorTextLayout>{error.value}</ErrorTextLayout>}
</div>
<Button
disabled={disabled || !canRemove}
disabled={!canRemove}
prominence="tertiary"
icon={SvgMinusCircle}
onClick={onRemove}
@@ -198,46 +158,31 @@ export interface KeyValueInputProps
> {
/** Title for the key column */
keyTitle?: string;
/** Title for the value column */
valueTitle?: string;
/** Placeholder for the key input */
keyPlaceholder?: string;
/** Placeholder for the value input */
valuePlaceholder?: string;
/** Array of key-value pairs */
items: KeyValue[];
/** Callback when items change */
onChange: (nextItems: KeyValue[]) => void;
/** Custom add handler */
onAdd?: () => void;
/** Custom remove handler */
onRemove?: (index: number) => void;
/** Disabled state */
disabled?: boolean;
/** Mode: 'line' allows removing all items, 'fixed-line' requires at least one item */
mode?: "line" | "fixed-line";
/** Layout: 'equal' - both inputs same width, 'key-wide' - key input is wider (60/40 split) */
layout?: "equal" | "key-wide";
/** Callback when validation state changes */
onValidationChange?: (isValid: boolean, errors: KeyValueError[]) => void;
/** Callback to handle validation errors - integrates with Formik or custom error handling. Called with error message when invalid, null when valid */
onValidationError?: (errorMessage: string | null) => void;
/** Optional custom validator for the key field. Return { isValid, message } */
onKeyValidate?: (
key: string,
index: number,
item: KeyValue,
items: KeyValue[]
) => { isValid: boolean; message?: string };
/** Optional custom validator for the value field. Return { isValid, message } */
onValueValidate?: (
value: string,
index: number,
item: KeyValue,
items: KeyValue[]
) => { isValid: boolean; message?: string };
/** Whether to validate for duplicate keys */
validateDuplicateKeys?: boolean;
/** Whether to validate for empty keys */
validateEmptyKeys?: boolean;
/** Optional name for the field (for accessibility) */
name?: string;
/** Custom label for the add button (defaults to "Add Line") */
addButtonLabel?: string;
}
@@ -245,26 +190,16 @@ export interface KeyValueInputProps
export default function KeyValueInput({
keyTitle = "Key",
valueTitle = "Value",
keyPlaceholder,
valuePlaceholder,
items = [],
onChange,
onAdd,
onRemove,
disabled = false,
mode = "line",
layout = "equal",
onValidationChange,
onValidationError,
onKeyValidate,
onValueValidate,
validateDuplicateKeys = true,
validateEmptyKeys = true,
name,
addButtonLabel = "Add Line",
...rest
}: KeyValueInputProps) {
// Try to get field context if used within FormField (safe access)
const fieldContext = useContext(FieldContext);
// Validation logic
const errors = useMemo((): KeyValueError[] => {
if (!items || items.length === 0) return [];
@@ -273,12 +208,8 @@ export default function KeyValueInput({
const keyCount = new Map<string, number[]>();
items.forEach((item, index) => {
// Validate empty keys - only if value is filled (user is actively working on this row)
if (
validateEmptyKeys &&
item.key.trim() === "" &&
item.value.trim() !== ""
) {
// Validate empty keys
if (item.key.trim() === "" && item.value.trim() !== "") {
const error = errorsList[index];
if (error) {
error.key = "Key cannot be empty";
@@ -291,56 +222,22 @@ export default function KeyValueInput({
existing.push(index);
keyCount.set(item.key, existing);
}
// Custom key validation
if (onKeyValidate) {
const result = onKeyValidate(item.key, index, item, items);
if (result && result.isValid === false) {
const error = errorsList[index];
if (error) {
error.key = result.message || "Invalid key";
}
}
}
// Custom value validation
if (onValueValidate) {
const result = onValueValidate(item.value, index, item, items);
if (result && result.isValid === false) {
const error = errorsList[index];
if (error) {
error.value = result.message || "Invalid value";
}
}
}
});
// Validate duplicate keys
if (validateDuplicateKeys) {
keyCount.forEach((indices, key) => {
if (indices.length > 1) {
indices.forEach((index) => {
const error = errorsList[index];
if (error) {
error.key = "Duplicate key";
}
});
}
});
}
keyCount.forEach((indices, key) => {
if (indices.length > 1) {
indices.forEach((index) => {
const error = errorsList[index];
if (error) {
error.key = "Duplicate key";
}
});
}
});
return errorsList;
}, [
items,
validateDuplicateKeys,
validateEmptyKeys,
onKeyValidate,
onValueValidate,
]);
const isValid = useMemo(() => {
return errors.every((error) => !error.key && !error.value);
}, [errors]);
}, [items]);
const hasAnyError = useMemo(() => {
return errors.some((error) => error.key || error.value);
@@ -371,21 +268,12 @@ export default function KeyValueInput({
}, [hasAnyError, errors]);
// Notify parent of validation changes
const onValidationChangeRef = useRef(onValidationChange);
const onValidationErrorRef = useRef(onValidationError);
useEffect(() => {
onValidationChangeRef.current = onValidationChange;
}, [onValidationChange]);
useEffect(() => {
onValidationErrorRef.current = onValidationError;
}, [onValidationError]);
useEffect(() => {
onValidationChangeRef.current?.(isValid, errors);
}, [isValid, errors]);
// Notify parent of error state for form library integration
useEffect(() => {
onValidationErrorRef.current?.(errorMessage);
@@ -394,25 +282,17 @@ export default function KeyValueInput({
const canRemoveItems = mode === "line" || items.length > 1;
const handleAdd = useCallback(() => {
if (onAdd) {
onAdd();
return;
}
onChange([...(items || []), { key: "", value: "" }]);
}, [onAdd, onChange, items]);
}, [onChange, items]);
const handleRemove = useCallback(
(index: number) => {
if (!canRemoveItems && items.length === 1) return;
if (onRemove) {
onRemove(index);
return;
}
const next = (items || []).filter((_, i) => i !== index);
onChange(next);
},
[canRemoveItems, items, onRemove, onChange]
[canRemoveItems, items, onChange]
);
const handleItemChange = useCallback(
@@ -431,8 +311,6 @@ export default function KeyValueInput({
}
}, [mode]); // Only run on mode change
const autoId = useId();
const fieldId = fieldContext?.baseId || name || `key-value-input-${autoId}`;
const gridCols = GRID_COLS[layout];
return (
@@ -460,23 +338,24 @@ export default function KeyValueInput({
key={index}
item={item}
onChange={(next) => handleItemChange(index, next)}
disabled={disabled}
onRemove={() => handleRemove(index)}
keyPlaceholder={keyTitle}
valuePlaceholder={valueTitle}
keyPlaceholder={keyPlaceholder}
valuePlaceholder={valuePlaceholder}
error={errors[index]}
canRemove={canRemoveItems}
index={index}
fieldId={fieldId}
/>
))}
</div>
) : (
<EmptyMessageCard title="No items added yet." />
<EmptyMessageCard
title="No items added yet."
padding="sm"
sizePreset="secondary"
/>
)}
<Button
disabled={disabled}
prominence="secondary"
onClick={handleAdd}
icon={SvgPlusCircle}
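A usage sketch against the slimmed-down prop surface; the Formik `setFieldError` wiring is illustrative:

<KeyValueInput
  keyTitle="Header"
  valueTitle="Value"
  keyPlaceholder="X-Api-Version" // illustrative placeholder
  items={headers}
  onChange={setHeaders}
  mode="fixed-line" // keep at least one row
  onValidationError={(msg) => setFieldError("custom_headers", msg ?? undefined)}
/>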

View File

@@ -1,7 +1,7 @@
"use client";
import { useState, useEffect, useCallback, useMemo, useRef } from "react";
import Popover, { PopoverMenu } from "@/refresh-components/Popover";
import Popover from "@/refresh-components/Popover";
import { LlmDescriptor, LlmManager } from "@/lib/hooks";
import { structureValue } from "@/lib/llmConfig/utils";
import {
@@ -11,25 +11,11 @@ import {
import { LLMProviderDescriptor } from "@/interfaces/llm";
import { Slider } from "@/components/ui/slider";
import { useUser } from "@/providers/UserProvider";
import LineItem from "@/refresh-components/buttons/LineItem";
import InputTypeIn from "@/refresh-components/inputs/InputTypeIn";
import Text from "@/refresh-components/texts/Text";
import SimpleLoader from "@/refresh-components/loaders/SimpleLoader";
import {
Accordion,
AccordionContent,
AccordionItem,
AccordionTrigger,
} from "@/components/ui/accordion";
import {
SvgCheck,
SvgChevronDown,
SvgChevronRight,
SvgRefreshCw,
} from "@opal/icons";
import { Section } from "@/layouts/general-layouts";
import { SvgRefreshCw } from "@opal/icons";
import { OpenButton } from "@opal/components";
import { LLMOption, LLMOptionGroup } from "./interfaces";
import ModelListContent from "./ModelListContent";
export interface LLMPopoverProps {
llmManager: LlmManager;
@@ -150,7 +136,6 @@ export default function LLMPopover({
const isLoadingProviders = llmManager.isLoadingProviders;
const [open, setOpen] = useState(false);
const [searchQuery, setSearchQuery] = useState("");
const { user } = useUser();
const [localTemperature, setLocalTemperature] = useState(
@@ -161,9 +146,7 @@ export default function LLMPopover({
setLocalTemperature(llmManager.temperature ?? 0.5);
}, [llmManager.temperature]);
const searchInputRef = useRef<HTMLInputElement>(null);
const scrollContainerRef = useRef<HTMLDivElement>(null);
const selectedItemRef = useRef<HTMLDivElement>(null);
const handleGlobalTemperatureChange = useCallback((value: number[]) => {
const value_0 = value[0];
@@ -182,39 +165,28 @@ export default function LLMPopover({
[llmManager]
);
const llmOptions = useMemo(
() => buildLlmOptions(llmProviders, currentModelName),
[llmProviders, currentModelName]
const isSelected = useCallback(
(option: LLMOption) =>
option.modelName === llmManager.currentLlm.modelName &&
option.provider === llmManager.currentLlm.provider,
[llmManager.currentLlm.modelName, llmManager.currentLlm.provider]
);
// Filter options by vision capability (when images are uploaded) and search query
const filteredOptions = useMemo(() => {
let result = llmOptions;
if (requiresImageInput) {
result = result.filter((opt) => opt.supportsImageInput);
}
if (searchQuery.trim()) {
const query = searchQuery.toLowerCase();
result = result.filter(
(opt) =>
opt.displayName.toLowerCase().includes(query) ||
opt.modelName.toLowerCase().includes(query) ||
(opt.vendor && opt.vendor.toLowerCase().includes(query))
const handleSelectModel = useCallback(
(option: LLMOption) => {
llmManager.updateCurrentLlm({
modelName: option.modelName,
provider: option.provider,
name: option.name,
} as LlmDescriptor);
onSelect?.(
structureValue(option.name, option.provider, option.modelName)
);
}
return result;
}, [llmOptions, searchQuery, requiresImageInput]);
// Group options by provider using backend-provided display names and ordering
// For aggregator providers (bedrock, openrouter, vertex_ai), flatten to "Provider/Vendor" format
const groupedOptions = useMemo(
() => groupLlmOptions(filteredOptions),
[filteredOptions]
setOpen(false);
},
[llmManager, onSelect]
);
// Get display name for the model to show in the button
// Use currentModelName prop if provided (e.g., for regenerate showing the model used),
// otherwise fall back to the globally selected model
const currentLlmDisplayName = useMemo(() => {
// Only use currentModelName if it's a non-empty string
const currentModel =
@@ -234,122 +206,30 @@ export default function LLMPopover({
return currentModel;
}, [llmProviders, currentModelName, llmManager.currentLlm.modelName]);
// Determine which group the current model belongs to (for auto-expand)
const currentGroupKey = useMemo(() => {
const currentModel = llmManager.currentLlm.modelName;
const currentProvider = llmManager.currentLlm.provider;
// Match by both modelName AND provider to handle same model name across providers
const option = llmOptions.find(
(o) => o.modelName === currentModel && o.provider === currentProvider
);
if (!option) return "openai";
const provider = option.provider.toLowerCase();
const isAggregator = AGGREGATOR_PROVIDERS.has(provider);
if (isAggregator && option.vendor) {
return `${provider}/${option.vendor.toLowerCase()}`;
}
return provider;
}, [
llmOptions,
llmManager.currentLlm.modelName,
llmManager.currentLlm.provider,
]);
// Track expanded groups - initialize with current model's group
const [expandedGroups, setExpandedGroups] = useState<string[]>([
currentGroupKey,
]);
// Reset state when popover closes/opens
useEffect(() => {
if (!open) {
setSearchQuery("");
} else {
// Reset expanded groups to only show the selected model's group
setExpandedGroups([currentGroupKey]);
}
}, [open, currentGroupKey]);
// Auto-scroll to selected model when popover opens
useEffect(() => {
if (open) {
// Small delay to let accordion content render
const timer = setTimeout(() => {
selectedItemRef.current?.scrollIntoView({
behavior: "instant",
block: "center",
});
}, 50);
return () => clearTimeout(timer);
}
}, [open]);
const isSearching = searchQuery.trim().length > 0;
// Compute final expanded groups
const effectiveExpandedGroups = useMemo(() => {
if (isSearching) {
// Force expand all when searching
return groupedOptions.map((g) => g.key);
}
return expandedGroups;
}, [isSearching, groupedOptions, expandedGroups]);
// Handler for accordion changes
const handleAccordionChange = (value: string[]) => {
// Only update state when not searching (force-expanding)
if (!isSearching) {
setExpandedGroups(value);
}
};
const handleSelectModel = (option: LLMOption) => {
llmManager.updateCurrentLlm({
modelName: option.modelName,
provider: option.provider,
name: option.name,
} as LlmDescriptor);
onSelect?.(structureValue(option.name, option.provider, option.modelName));
setOpen(false);
};
const renderModelItem = (option: LLMOption) => {
const isSelected =
option.modelName === llmManager.currentLlm.modelName &&
option.provider === llmManager.currentLlm.provider;
const capabilities: string[] = [];
if (option.supportsReasoning) {
capabilities.push("Reasoning");
}
if (option.supportsImageInput) {
capabilities.push("Vision");
}
const description =
capabilities.length > 0 ? capabilities.join(", ") : undefined;
return (
<div
key={`${option.name}-${option.modelName}`}
ref={isSelected ? selectedItemRef : undefined}
>
<LineItem
selected={isSelected}
description={description}
onClick={() => handleSelectModel(option)}
rightChildren={
isSelected ? (
<SvgCheck className="h-4 w-4 stroke-action-link-05 shrink-0" />
) : null
}
>
{option.displayName}
</LineItem>
const temperatureFooter = user?.preferences?.temperature_override_enabled ? (
<>
<div className="border-t border-border-02 mx-2" />
<div className="flex flex-col w-full py-2 gap-2">
<Slider
value={[localTemperature]}
max={llmManager.maxTemperature}
min={0}
step={0.01}
onValueChange={handleGlobalTemperatureChange}
onValueCommit={handleGlobalTemperatureCommit}
className="w-full"
/>
<div className="flex flex-row items-center justify-between">
<Text secondaryBody text03>
Temperature (creativity)
</Text>
<Text secondaryBody text03>
{localTemperature.toFixed(1)}
</Text>
</div>
</div>
);
};
</>
) : undefined;
return (
<Popover open={open} onOpenChange={setOpen}>
@@ -373,129 +253,16 @@ export default function LLMPopover({
</div>
<Popover.Content side="top" align="end" width="xl">
<Section gap={0.5}>
{/* Search Input */}
<InputTypeIn
ref={searchInputRef}
leftSearchIcon
variant="internal"
value={searchQuery}
onChange={(e) => setSearchQuery(e.target.value)}
placeholder="Search models..."
/>
{/* Model List with Vendor Groups */}
<PopoverMenu scrollContainerRef={scrollContainerRef}>
{isLoadingProviders
? [
<div key="loading" className="flex items-center gap-2 py-3">
<SimpleLoader />
<Text secondaryBody text03>
Loading models...
</Text>
</div>,
]
: groupedOptions.length === 0
? [
<div key="empty" className="py-3">
<Text secondaryBody text03>
No models found
</Text>
</div>,
]
: groupedOptions.length === 1
? // Single provider - show models directly without accordion
[
<div
key="single-provider"
className="flex flex-col gap-1"
>
{groupedOptions[0]!.options.map(renderModelItem)}
</div>,
]
: // Multiple providers - show accordion with groups
[
<Accordion
key="accordion"
type="multiple"
value={effectiveExpandedGroups}
onValueChange={handleAccordionChange}
className="w-full flex flex-col"
>
{groupedOptions.map((group) => {
const isExpanded = effectiveExpandedGroups.includes(
group.key
);
return (
<AccordionItem
key={group.key}
value={group.key}
className="border-none pt-1"
>
{/* Group Header */}
<AccordionTrigger className="flex items-center rounded-08 hover:no-underline hover:bg-background-tint-02 group [&>svg]:hidden w-full py-1">
<div className="flex items-center gap-1 shrink-0">
<div className="flex items-center justify-center size-5 shrink-0">
<group.Icon size={16} />
</div>
<Text
secondaryBody
text03
nowrap
className="px-0.5"
>
{group.displayName}
</Text>
</div>
<div className="flex-1" />
<div className="flex items-center justify-center size-6 shrink-0">
{isExpanded ? (
<SvgChevronDown className="h-4 w-4 stroke-text-04 shrink-0" />
) : (
<SvgChevronRight className="h-4 w-4 stroke-text-04 shrink-0" />
)}
</div>
</AccordionTrigger>
{/* Model Items - full width highlight */}
<AccordionContent className="pb-0 pt-0">
<div className="flex flex-col gap-1">
{group.options.map(renderModelItem)}
</div>
</AccordionContent>
</AccordionItem>
);
})}
</Accordion>,
]}
</PopoverMenu>
{/* Global Temperature Slider (shown if enabled in user prefs) */}
{user?.preferences?.temperature_override_enabled && (
<>
<div className="border-t border-border-02 mx-2" />
<div className="flex flex-col w-full py-2 gap-2">
<Slider
value={[localTemperature]}
max={llmManager.maxTemperature}
min={0}
step={0.01}
onValueChange={handleGlobalTemperatureChange}
onValueCommit={handleGlobalTemperatureCommit}
className="w-full"
/>
<div className="flex flex-row items-center justify-between">
<Text secondaryBody text03>
Temperature (creativity)
</Text>
<Text secondaryBody text03>
{localTemperature.toFixed(1)}
</Text>
</div>
</div>
</>
)}
</Section>
<ModelListContent
llmProviders={llmProviders}
currentModelName={currentModelName}
requiresImageInput={requiresImageInput}
isLoading={isLoadingProviders}
onSelect={handleSelectModel}
isSelected={isSelected}
scrollContainerRef={scrollContainerRef}
footer={temperatureFooter}
/>
</Popover.Content>
</Popover>
);

View File

@@ -0,0 +1,200 @@
"use client";
import { useState, useMemo, useRef, useEffect } from "react";
import { PopoverMenu } from "@/refresh-components/Popover";
import InputTypeIn from "@/refresh-components/inputs/InputTypeIn";
import { Text } from "@opal/components";
import { SvgCheck, SvgChevronDown, SvgChevronRight } from "@opal/icons";
import { Section } from "@/layouts/general-layouts";
import { LLMOption } from "./interfaces";
import { buildLlmOptions, groupLlmOptions } from "./LLMPopover";
import LineItem from "@/refresh-components/buttons/LineItem";
import { LLMProviderDescriptor } from "@/interfaces/llm";
import {
Collapsible,
CollapsibleContent,
CollapsibleTrigger,
} from "@/refresh-components/Collapsible";
export interface ModelListContentProps {
llmProviders: LLMProviderDescriptor[] | undefined;
currentModelName?: string;
requiresImageInput?: boolean;
onSelect: (option: LLMOption) => void;
isSelected: (option: LLMOption) => boolean;
isDisabled?: (option: LLMOption) => boolean;
scrollContainerRef?: React.RefObject<HTMLDivElement | null>;
isLoading?: boolean;
footer?: React.ReactNode;
}
export default function ModelListContent({
llmProviders,
currentModelName,
requiresImageInput,
onSelect,
isSelected,
isDisabled,
scrollContainerRef: externalScrollRef,
isLoading,
footer,
}: ModelListContentProps) {
const [searchQuery, setSearchQuery] = useState("");
const internalScrollRef = useRef<HTMLDivElement>(null);
const scrollContainerRef = externalScrollRef ?? internalScrollRef;
const llmOptions = useMemo(
() => buildLlmOptions(llmProviders, currentModelName),
[llmProviders, currentModelName]
);
const filteredOptions = useMemo(() => {
let result = llmOptions;
if (requiresImageInput) {
result = result.filter((opt) => opt.supportsImageInput);
}
if (searchQuery.trim()) {
const query = searchQuery.toLowerCase();
result = result.filter(
(opt) =>
opt.displayName.toLowerCase().includes(query) ||
opt.modelName.toLowerCase().includes(query) ||
(opt.vendor && opt.vendor.toLowerCase().includes(query))
);
}
return result;
}, [llmOptions, searchQuery, requiresImageInput]);
const groupedOptions = useMemo(
() => groupLlmOptions(filteredOptions),
[filteredOptions]
);
// Find which group contains a currently-selected model (for auto-expand)
const defaultGroupKey = useMemo(() => {
for (const group of groupedOptions) {
if (group.options.some((opt) => isSelected(opt))) {
return group.key;
}
}
return groupedOptions[0]?.key ?? "";
}, [groupedOptions, isSelected]);
const [expandedGroups, setExpandedGroups] = useState<Set<string>>(
new Set([defaultGroupKey])
);
// Reset expanded groups when default changes (e.g. popover re-opens)
useEffect(() => {
setExpandedGroups(new Set([defaultGroupKey]));
}, [defaultGroupKey]);
const isSearching = searchQuery.trim().length > 0;
const toggleGroup = (key: string) => {
if (isSearching) return;
setExpandedGroups((prev) => {
const next = new Set(prev);
if (next.has(key)) next.delete(key);
else next.add(key);
return next;
});
};
const isGroupOpen = (key: string) => isSearching || expandedGroups.has(key);
const renderModelItem = (option: LLMOption) => {
const selected = isSelected(option);
const disabled = isDisabled?.(option) ?? false;
const capabilities: string[] = [];
if (option.supportsReasoning) capabilities.push("Reasoning");
if (option.supportsImageInput) capabilities.push("Vision");
const description =
capabilities.length > 0 ? capabilities.join(", ") : undefined;
return (
<LineItem
key={`${option.provider}:${option.modelName}`}
selected={selected}
disabled={disabled}
description={description}
onClick={() => onSelect(option)}
rightChildren={
selected ? (
<SvgCheck className="h-4 w-4 stroke-action-link-05 shrink-0" />
) : null
}
>
{option.displayName}
</LineItem>
);
};
return (
<Section gap={0.5}>
<InputTypeIn
leftSearchIcon
variant="internal"
value={searchQuery}
onChange={(e) => setSearchQuery(e.target.value)}
placeholder="Search models..."
/>
<PopoverMenu scrollContainerRef={scrollContainerRef}>
{isLoading
? [
<Text key="loading" font="secondary-body" color="text-03">
Loading models...
</Text>,
]
: groupedOptions.length === 0
? [
<Text key="empty" font="secondary-body" color="text-03">
No models found
</Text>,
]
: groupedOptions.length === 1
? [
<Section key="single-provider" gap={0.25}>
{groupedOptions[0]!.options.map(renderModelItem)}
</Section>,
]
: groupedOptions.map((group) => {
const open = isGroupOpen(group.key);
return (
<Collapsible
key={group.key}
open={open}
onOpenChange={() => toggleGroup(group.key)}
>
<CollapsibleTrigger asChild>
<LineItem
muted
icon={group.Icon}
rightChildren={
open ? (
<SvgChevronDown className="h-4 w-4 stroke-text-04 shrink-0" />
) : (
<SvgChevronRight className="h-4 w-4 stroke-text-04 shrink-0" />
)
}
>
{group.displayName}
</LineItem>
</CollapsibleTrigger>
<CollapsibleContent>
<Section gap={0.25}>
{group.options.map(renderModelItem)}
</Section>
</CollapsibleContent>
</Collapsible>
);
})}
</PopoverMenu>
{footer}
</Section>
);
}
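
A minimal usage sketch of the extracted component (a hypothetical host component; `providers` and the selection state are illustrative, and imports mirror the file above). The parent owns the current selection and passes predicate callbacks, so ModelListContent stays stateless apart from its search query and expand state:

function ModelPickerExample({ providers }: { providers: LLMProviderDescriptor[] }) {
  const [current, setCurrent] = useState<LLMOption | null>(null);
  return (
    <ModelListContent
      llmProviders={providers}
      currentModelName={current?.modelName}
      onSelect={setCurrent}
      isSelected={(opt) =>
        current !== null &&
        opt.provider === current.provider &&
        opt.modelName === current.modelName
      }
    />
  );
}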

View File

@@ -0,0 +1,230 @@
"use client";
import { useState, useMemo, useRef } from "react";
import Popover from "@/refresh-components/Popover";
import { LlmManager } from "@/lib/hooks";
import { getProviderIcon } from "@/app/admin/configuration/llm/utils";
import { Button, SelectButton, OpenButton } from "@opal/components";
import { SvgPlusCircle, SvgX } from "@opal/icons";
import { LLMOption } from "@/refresh-components/popovers/interfaces";
import ModelListContent from "@/refresh-components/popovers/ModelListContent";
import Separator from "@/refresh-components/Separator";
export const MAX_MODELS = 3;
export interface SelectedModel {
name: string;
provider: string;
modelName: string;
displayName: string;
}
export interface ModelSelectorProps {
llmManager: LlmManager;
selectedModels: SelectedModel[];
onAdd: (model: SelectedModel) => void;
onRemove: (index: number) => void;
onReplace: (index: number, model: SelectedModel) => void;
}
function modelKey(provider: string, modelName: string): string {
return `${provider}:${modelName}`;
}
export default function ModelSelector({
llmManager,
selectedModels,
onAdd,
onRemove,
onReplace,
}: ModelSelectorProps) {
const [open, setOpen] = useState(false);
// null = add mode (via + button), number = replace mode (via pill click)
const [replacingIndex, setReplacingIndex] = useState<number | null>(null);
// Virtual anchor ref — points to the clicked pill so the popover positions above it
const anchorRef = useRef<HTMLElement | null>(null);
const isMultiModel = selectedModels.length > 1;
const atMax = selectedModels.length >= MAX_MODELS;
const selectedKeys = useMemo(
() => new Set(selectedModels.map((m) => modelKey(m.provider, m.modelName))),
[selectedModels]
);
const otherSelectedKeys = useMemo(() => {
if (replacingIndex === null) return new Set<string>();
return new Set(
selectedModels
.filter((_, i) => i !== replacingIndex)
.map((m) => modelKey(m.provider, m.modelName))
);
}, [selectedModels, replacingIndex]);
const replacingKey =
replacingIndex !== null
? (() => {
const m = selectedModels[replacingIndex];
return m ? modelKey(m.provider, m.modelName) : null;
})()
: null;
const isSelected = (option: LLMOption) => {
const key = modelKey(option.provider, option.modelName);
if (replacingIndex !== null) return key === replacingKey;
return selectedKeys.has(key);
};
const isDisabled = (option: LLMOption) => {
const key = modelKey(option.provider, option.modelName);
if (replacingIndex !== null) return otherSelectedKeys.has(key);
return !selectedKeys.has(key) && atMax;
};
const handleSelect = (option: LLMOption) => {
const model: SelectedModel = {
name: option.name,
provider: option.provider,
modelName: option.modelName,
displayName: option.displayName,
};
if (replacingIndex !== null) {
onReplace(replacingIndex, model);
setOpen(false);
setReplacingIndex(null);
return;
}
const key = modelKey(option.provider, option.modelName);
const existingIndex = selectedModels.findIndex(
(m) => modelKey(m.provider, m.modelName) === key
);
if (existingIndex >= 0) {
onRemove(existingIndex);
} else if (!atMax) {
onAdd(model);
}
};
const handleOpenChange = (nextOpen: boolean) => {
setOpen(nextOpen);
if (!nextOpen) setReplacingIndex(null);
};
const handlePillClick = (index: number, element: HTMLElement) => {
anchorRef.current = element;
setReplacingIndex(index);
setOpen(true);
};
return (
<Popover open={open} onOpenChange={handleOpenChange}>
<div className="flex items-center justify-end gap-1 p-1">
{!atMax && (
<Button
prominence="tertiary"
icon={SvgPlusCircle}
size="sm"
tooltip="Add Model"
onClick={(e: React.MouseEvent) => {
anchorRef.current = e.currentTarget as HTMLElement;
setReplacingIndex(null);
setOpen(true);
}}
/>
)}
<Popover.Anchor
virtualRef={anchorRef as React.RefObject<HTMLElement>}
/>
{selectedModels.length > 0 && (
<>
{!atMax && (
<Separator
orientation="vertical"
paddingXRem={0.5}
paddingYRem={0.5}
/>
)}
<div className="flex items-center">
{selectedModels.map((model, index) => {
const ProviderIcon = getProviderIcon(
model.provider,
model.modelName
);
if (!isMultiModel) {
return (
<OpenButton
key={modelKey(model.provider, model.modelName)}
icon={ProviderIcon}
onClick={(e: React.MouseEvent) =>
handlePillClick(index, e.currentTarget as HTMLElement)
}
>
{model.displayName}
</OpenButton>
);
}
return (
<div
key={modelKey(model.provider, model.modelName)}
className="flex items-center"
>
{index > 0 && (
<Separator
orientation="vertical"
paddingXRem={0.5}
className="h-5"
/>
)}
<SelectButton
icon={ProviderIcon}
rightIcon={SvgX}
state="empty"
variant="select-tinted"
interaction="hover"
size="lg"
onClick={(e: React.MouseEvent) => {
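                          // The removal X is rendered as the last
                          // ".interactive-foreground-icon" inside the button,
                          // so we route the click by DOM containment: clicks
                          // on that icon remove this model; clicks anywhere
                          // else open the replace popover.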
const target = e.target as HTMLElement;
const btn = e.currentTarget as HTMLElement;
const icons = btn.querySelectorAll(
".interactive-foreground-icon"
);
const lastIcon = icons[icons.length - 1];
if (lastIcon && lastIcon.contains(target)) {
onRemove(index);
} else {
handlePillClick(index, btn);
}
}}
>
{model.displayName}
</SelectButton>
</div>
);
})}
</div>
</>
)}
</div>
<Popover.Content
side="top"
align="start"
width="lg"
avoidCollisions={false}
>
<ModelListContent
llmProviders={llmManager.llmProviders}
isLoading={llmManager.isLoadingProviders}
onSelect={handleSelect}
isSelected={isSelected}
isDisabled={isDisabled}
/>
</Popover.Content>
</Popover>
);
}
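
A sketch of the parent-side wiring (handler bodies are illustrative; MAX_MODELS enforcement already happens inside the component via the `atMax` guard, so these handlers can stay simple):

const [models, setModels] = useState<SelectedModel[]>([]);

<ModelSelector
  llmManager={llmManager}
  selectedModels={models}
  onAdd={(m) => setModels((prev) => [...prev, m])}
  onRemove={(i) => setModels((prev) => prev.filter((_, idx) => idx !== i))}
  onReplace={(i, m) =>
    setModels((prev) => prev.map((cur, idx) => (idx === i ? m : cur)))
  }
/>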

View File

@@ -232,7 +232,6 @@ export default function AppPage({ firstMessage }: ChatPageProps) {
onboardingDismissed,
onboardingState,
onboardingActions,
llmDescriptors,
isLoadingOnboarding,
finishOnboarding,
hideOnboarding,
@@ -812,7 +811,6 @@ export default function AppPage({ firstMessage }: ChatPageProps) {
handleFinishOnboarding={finishOnboarding}
state={onboardingState}
actions={onboardingActions}
llmDescriptors={llmDescriptors}
/>
)}

View File

@@ -13,7 +13,7 @@ import {
import { ADMIN_ROUTES } from "@/lib/admin-routes";
import { Section } from "@/layouts/general-layouts";
import { Button, SelectCard } from "@opal/components";
import { CardHeaderLayout } from "@opal/layouts";
import { Card } from "@opal/layouts";
import { Disabled, Hoverable } from "@opal/core";
import Text from "@/refresh-components/texts/Text";
import SimpleLoader from "@/refresh-components/loaders/SimpleLoader";
@@ -113,7 +113,7 @@ export default function CodeInterpreterPage() {
{isEnabled || isLoading ? (
<Hoverable.Root group="code-interpreter/Card">
<SelectCard state="filled" padding="sm" rounding="lg">
<CardHeaderLayout
<Card.Header
sizePreset="main-ui"
variant="section"
icon={SvgTerminal}
@@ -161,7 +161,7 @@ export default function CodeInterpreterPage() {
rounding="lg"
onClick={() => handleToggle(true)}
>
<CardHeaderLayout
<Card.Header
sizePreset="main-ui"
variant="section"
icon={SvgTerminal}

View File

@@ -1,8 +1,7 @@
"use client";
import { useMemo, useState } from "react";
import { useState } from "react";
import { useRouter } from "next/navigation";
import useSWR from "swr";
import { Table, Button } from "@opal/components";
import { IllustrationContent } from "@opal/layouts";
import { SvgUsers } from "@opal/icons";
@@ -14,17 +13,14 @@ import Text from "@/refresh-components/texts/Text";
import SimpleLoader from "@/refresh-components/loaders/SimpleLoader";
import Separator from "@/refresh-components/Separator";
import { toast } from "@/hooks/useToast";
import { errorHandlingFetcher } from "@/lib/fetcher";
import useAdminUsers from "@/hooks/useAdminUsers";
import { SWR_KEYS } from "@/lib/swr-keys";
import type { ApiKeyDescriptor, MemberRow } from "./interfaces";
import useGroupMemberCandidates from "./useGroupMemberCandidates";
import {
createGroup,
updateAgentGroupSharing,
updateDocSetGroupSharing,
saveTokenLimits,
} from "./svc";
import { apiKeyToMemberRow, memberTableColumns, PAGE_SIZE } from "./shared";
import { memberTableColumns, PAGE_SIZE } from "./shared";
import SharedGroupResources from "@/refresh-pages/admin/GroupsPage/SharedGroupResources";
import TokenLimitSection from "./TokenLimitSection";
import type { TokenLimit } from "./TokenLimitSection";
@@ -42,22 +38,7 @@ function CreateGroupPage() {
{ tokenBudget: null, periodHours: null },
]);
const { users, isLoading: usersLoading, error: usersError } = useAdminUsers();
const {
data: apiKeys,
isLoading: apiKeysLoading,
error: apiKeysError,
} = useSWR<ApiKeyDescriptor[]>(SWR_KEYS.adminApiKeys, errorHandlingFetcher);
const isLoading = usersLoading || apiKeysLoading;
const error = usersError ?? apiKeysError;
const allRows: MemberRow[] = useMemo(() => {
const activeUsers = users.filter((u) => u.is_active);
const serviceAccountRows = (apiKeys ?? []).map(apiKeyToMemberRow);
return [...activeUsers, ...serviceAccountRows];
}, [users, apiKeys]);
const { rows: allRows, isLoading, error } = useGroupMemberCandidates();
async function handleCreate() {
const trimmed = groupName.trim();
@@ -134,11 +115,11 @@ function CreateGroupPage() {
{/* Members table */}
{isLoading && <SimpleLoader />}
{error && (
{error ? (
<Text as="p" secondaryBody text03>
Failed to load users.
</Text>
)}
) : null}
{!isLoading && !error && (
<Section

View File

@@ -3,6 +3,7 @@
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import { useRouter } from "next/navigation";
import useSWR, { useSWRConfig } from "swr";
import useGroupMemberCandidates from "./useGroupMemberCandidates";
import { Table, Button } from "@opal/components";
import { IllustrationContent } from "@opal/layouts";
import { SvgUsers, SvgTrash, SvgMinusCircle, SvgPlusCircle } from "@opal/icons";
@@ -19,20 +20,9 @@ import ConfirmationModalLayout from "@/refresh-components/layouts/ConfirmationMo
import Separator from "@/refresh-components/Separator";
import { toast } from "@/hooks/useToast";
import { errorHandlingFetcher } from "@/lib/fetcher";
import useAdminUsers from "@/hooks/useAdminUsers";
import type { UserGroup } from "@/lib/types";
import type {
ApiKeyDescriptor,
MemberRow,
TokenRateLimitDisplay,
} from "./interfaces";
import {
apiKeyToMemberRow,
baseColumns,
memberTableColumns,
tc,
PAGE_SIZE,
} from "./shared";
import type { MemberRow, TokenRateLimitDisplay } from "./interfaces";
import { baseColumns, memberTableColumns, tc, PAGE_SIZE } from "./shared";
import {
renameGroup,
updateGroup,
@@ -104,18 +94,15 @@ function EditGroupPage({ groupId }: EditGroupPageProps) {
const initialAgentIdsRef = useRef<number[]>([]);
const initialDocSetIdsRef = useRef<number[]>([]);
// Users and API keys
const { users, isLoading: usersLoading, error: usersError } = useAdminUsers();
// Users + service accounts (curator-accessible — see hook docs).
const {
data: apiKeys,
isLoading: apiKeysLoading,
error: apiKeysError,
} = useSWR<ApiKeyDescriptor[]>(SWR_KEYS.adminApiKeys, errorHandlingFetcher);
rows: allRows,
isLoading: candidatesLoading,
error: candidatesError,
} = useGroupMemberCandidates();
const isLoading =
groupLoading || usersLoading || apiKeysLoading || tokenLimitsLoading;
const error = groupError ?? usersError ?? apiKeysError;
const isLoading = groupLoading || candidatesLoading || tokenLimitsLoading;
const error = groupError ?? candidatesError;
// Pre-populate form when group data loads
useEffect(() => {
@@ -145,12 +132,6 @@ function EditGroupPage({ groupId }: EditGroupPageProps) {
}
}, [tokenRateLimits]);
const allRows = useMemo(() => {
const activeUsers = users.filter((u) => u.is_active);
const serviceAccountRows = (apiKeys ?? []).map(apiKeyToMemberRow);
return [...activeUsers, ...serviceAccountRows];
}, [users, apiKeys]);
const memberRows = useMemo(() => {
const selected = new Set(selectedUserIds);
return allRows.filter((r) => selected.has(r.id ?? r.email));

View File

@@ -0,0 +1,147 @@
"use client";
import { useMemo } from "react";
import useSWR from "swr";
import { errorHandlingFetcher } from "@/lib/fetcher";
import { SWR_KEYS } from "@/lib/swr-keys";
import { useUser } from "@/providers/UserProvider";
import { AccountType, UserStatus, type UserRole } from "@/lib/types";
import type {
UserGroupInfo,
UserRow,
} from "@/refresh-pages/admin/UsersPage/interfaces";
import type { ApiKeyDescriptor, MemberRow } from "./interfaces";
// Backend response shape for `/api/manage/users?include_api_keys=true`. The
// existing `AllUsersResponse` in `lib/types.ts` types `accepted` as `User[]`,
// which is missing fields the table needs (`personal_name`, `account_type`,
// `groups`, etc.), so we declare an accurate local type here.
interface FullUserSnapshot {
id: string;
email: string;
role: UserRole;
account_type: AccountType;
is_active: boolean;
password_configured: boolean;
personal_name: string | null;
created_at: string;
updated_at: string;
groups: UserGroupInfo[];
is_scim_synced: boolean;
}
interface ManageUsersResponse {
accepted: FullUserSnapshot[];
invited: { email: string }[];
slack_users: FullUserSnapshot[];
accepted_pages: number;
invited_pages: number;
slack_users_pages: number;
}
function snapshotToMemberRow(snapshot: FullUserSnapshot): MemberRow {
return {
id: snapshot.id,
email: snapshot.email,
role: snapshot.role,
status: snapshot.is_active ? UserStatus.ACTIVE : UserStatus.INACTIVE,
is_active: snapshot.is_active,
is_scim_synced: snapshot.is_scim_synced,
personal_name: snapshot.personal_name,
created_at: snapshot.created_at,
updated_at: snapshot.updated_at,
groups: snapshot.groups,
};
}
function serviceAccountToMemberRow(
snapshot: FullUserSnapshot,
apiKey: ApiKeyDescriptor | undefined
): MemberRow {
return {
id: snapshot.id,
email: "Service Account",
role: apiKey?.api_key_role ?? snapshot.role,
status: UserStatus.ACTIVE,
is_active: true,
is_scim_synced: false,
personal_name:
apiKey?.api_key_name ?? snapshot.personal_name ?? "Unnamed Key",
created_at: null,
updated_at: null,
groups: [],
api_key_display: apiKey?.api_key_display,
};
}
interface UseGroupMemberCandidatesResult {
/** Active users + service-account rows, in the order the table expects. */
rows: MemberRow[];
/** Subset of `rows` representing real (non-service-account) users. */
userRows: MemberRow[];
isLoading: boolean;
error: unknown;
}
/**
* Returns the candidate list for the group create/edit member pickers.
*
 * Hits `/api/manage/users?include_api_keys=true`, which is gated by
 * `current_curator_or_admin_user` on the backend, so it works for both
 * admins and global curators. (This hook previously called the admin-only
 * `/accepted/all` and `/invited` endpoints, which 403'd for global curators
 * and broke the Edit Group page entirely.)
*
* For admins, we additionally fetch `/admin/api-key` to enrich service-account
* rows with the masked api-key display string. That call is admin-only and is
* skipped for curators; its failure is non-fatal.
*/
export default function useGroupMemberCandidates(): UseGroupMemberCandidatesResult {
const { isAdmin } = useUser();
const {
data: usersData,
isLoading: usersLoading,
error: usersError,
} = useSWR<ManageUsersResponse>(
SWR_KEYS.groupMemberCandidates,
errorHandlingFetcher
);
const { data: apiKeys, isLoading: apiKeysLoading } = useSWR<
ApiKeyDescriptor[]
>(isAdmin ? SWR_KEYS.adminApiKeys : null, errorHandlingFetcher);
const apiKeysByUserId = useMemo(() => {
const map = new Map<string, ApiKeyDescriptor>();
for (const key of apiKeys ?? []) map.set(key.user_id, key);
return map;
}, [apiKeys]);
const { rows, userRows } = useMemo(() => {
const accepted = usersData?.accepted ?? [];
const userRowsLocal: MemberRow[] = [];
const serviceAccountRows: MemberRow[] = [];
for (const snapshot of accepted) {
if (!snapshot.is_active) continue;
if (snapshot.account_type === AccountType.SERVICE_ACCOUNT) {
serviceAccountRows.push(
serviceAccountToMemberRow(snapshot, apiKeysByUserId.get(snapshot.id))
);
} else {
userRowsLocal.push(snapshotToMemberRow(snapshot));
}
}
return {
rows: [...userRowsLocal, ...serviceAccountRows],
userRows: userRowsLocal,
};
}, [usersData, apiKeysByUserId]);
return {
rows,
userRows,
isLoading: usersLoading || (isAdmin && apiKeysLoading),
error: usersError,
};
}
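
Consuming the hook then collapses to the same three-state pattern the create/edit pages above use (a sketch, with the table rendering elided):

const { rows: allRows, isLoading, error } = useGroupMemberCandidates();
if (isLoading) return <SimpleLoader />;
if (error) return <Text>Failed to load users.</Text>;
// ...render the member table from allRows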

View File

@@ -23,7 +23,7 @@ import Message from "@/refresh-components/messages/Message";
import ConfirmationModalLayout from "@/refresh-components/layouts/ConfirmationModalLayout";
import InputSelect from "@/refresh-components/inputs/InputSelect";
import { Button, SelectCard, Text } from "@opal/components";
import { Content, CardHeaderLayout } from "@opal/layouts";
import { Content, Card } from "@opal/layouts";
import { Hoverable } from "@opal/core";
import {
SvgArrowExchange,
@@ -260,7 +260,7 @@ export default function ImageGenerationContent() {
: undefined
}
>
<CardHeaderLayout
<Card.Header
sizePreset="main-ui"
variant="section"
icon={() => (

View File

@@ -8,7 +8,7 @@ import {
useWellKnownLLMProviders,
} from "@/hooks/useLLMProviders";
import { ThreeDotsLoader } from "@/components/Loading";
import { Content, CardHeaderLayout } from "@opal/layouts";
import { Content, Card } from "@opal/layouts";
import { Button, SelectCard } from "@opal/components";
import { Hoverable } from "@opal/core";
import { SvgArrowExchange, SvgSettings, SvgTrash } from "@opal/icons";
@@ -24,13 +24,14 @@ import { refreshLlmProviderCaches } from "@/lib/llmConfig/cache";
import { deleteLlmProvider, setDefaultLlmModel } from "@/lib/llmConfig/svc";
import Text from "@/refresh-components/texts/Text";
import { Horizontal as HorizontalInput } from "@/layouts/input-layouts";
import Card from "@/refresh-components/cards/Card";
import LegacyCard from "@/refresh-components/cards/Card";
import InputSelect from "@/refresh-components/inputs/InputSelect";
import Message from "@/refresh-components/messages/Message";
import ConfirmationModalLayout from "@/refresh-components/layouts/ConfirmationModalLayout";
import { useCreateModal } from "@/refresh-components/contexts/ModalContext";
import Separator from "@/refresh-components/Separator";
import {
LLMProviderName,
LLMProviderView,
WellKnownLLMProviderDescriptor,
} from "@/interfaces/llm";
@@ -46,6 +47,7 @@ import CustomModal from "@/sections/modals/llmConfig/CustomModal";
import LMStudioForm from "@/sections/modals/llmConfig/LMStudioForm";
import LiteLLMProxyModal from "@/sections/modals/llmConfig/LiteLLMProxyModal";
import BifrostModal from "@/sections/modals/llmConfig/BifrostModal";
import OpenAICompatibleModal from "@/sections/modals/llmConfig/OpenAICompatibleModal";
import { Section } from "@/layouts/general-layouts";
const route = ADMIN_ROUTES.LLM_MODELS;
@@ -57,93 +59,60 @@ const route = ADMIN_ROUTES.LLM_MODELS;
// Client-side ordering for the "Add Provider" cards. The backend may return
// wellKnownLLMProviders in an arbitrary order, so we sort explicitly here.
const PROVIDER_DISPLAY_ORDER: string[] = [
"openai",
"anthropic",
"vertex_ai",
"bedrock",
"azure",
"litellm_proxy",
"ollama_chat",
"openrouter",
"lm_studio",
"bifrost",
LLMProviderName.OPENAI,
LLMProviderName.ANTHROPIC,
LLMProviderName.VERTEX_AI,
LLMProviderName.BEDROCK,
LLMProviderName.AZURE,
LLMProviderName.LITELLM,
LLMProviderName.LITELLM_PROXY,
LLMProviderName.OLLAMA_CHAT,
LLMProviderName.OPENROUTER,
LLMProviderName.LM_STUDIO,
LLMProviderName.BIFROST,
LLMProviderName.OPENAI_COMPATIBLE,
];
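
Given the comment above, the explicit ordering plausibly reduces to an indexOf sort (a sketch; the actual sort call sits outside this hunk, and unknown providers would need a guard since indexOf returns -1 for them):

const orderedProviders = [...wellKnownLLMProviders].sort(
  (a, b) =>
    PROVIDER_DISPLAY_ORDER.indexOf(a.name) -
    PROVIDER_DISPLAY_ORDER.indexOf(b.name)
);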
const PROVIDER_MODAL_MAP: Record<
string,
(
shouldMarkAsDefault: boolean,
open: boolean,
onOpenChange: (open: boolean) => void
) => React.ReactNode
> = {
openai: (d, open, onOpenChange) => (
<OpenAIModal
shouldMarkAsDefault={d}
open={open}
onOpenChange={onOpenChange}
/>
openai: (d, onOpenChange) => (
<OpenAIModal shouldMarkAsDefault={d} onOpenChange={onOpenChange} />
),
anthropic: (d, open, onOpenChange) => (
<AnthropicModal
shouldMarkAsDefault={d}
open={open}
onOpenChange={onOpenChange}
/>
anthropic: (d, onOpenChange) => (
<AnthropicModal shouldMarkAsDefault={d} onOpenChange={onOpenChange} />
),
ollama_chat: (d, open, onOpenChange) => (
<OllamaModal
shouldMarkAsDefault={d}
open={open}
onOpenChange={onOpenChange}
/>
ollama_chat: (d, onOpenChange) => (
<OllamaModal shouldMarkAsDefault={d} onOpenChange={onOpenChange} />
),
azure: (d, open, onOpenChange) => (
<AzureModal
shouldMarkAsDefault={d}
open={open}
onOpenChange={onOpenChange}
/>
azure: (d, onOpenChange) => (
<AzureModal shouldMarkAsDefault={d} onOpenChange={onOpenChange} />
),
bedrock: (d, open, onOpenChange) => (
<BedrockModal
shouldMarkAsDefault={d}
open={open}
onOpenChange={onOpenChange}
/>
bedrock: (d, onOpenChange) => (
<BedrockModal shouldMarkAsDefault={d} onOpenChange={onOpenChange} />
),
vertex_ai: (d, open, onOpenChange) => (
<VertexAIModal
shouldMarkAsDefault={d}
open={open}
onOpenChange={onOpenChange}
/>
vertex_ai: (d, onOpenChange) => (
<VertexAIModal shouldMarkAsDefault={d} onOpenChange={onOpenChange} />
),
openrouter: (d, open, onOpenChange) => (
<OpenRouterModal
shouldMarkAsDefault={d}
open={open}
onOpenChange={onOpenChange}
/>
openrouter: (d, onOpenChange) => (
<OpenRouterModal shouldMarkAsDefault={d} onOpenChange={onOpenChange} />
),
lm_studio: (d, open, onOpenChange) => (
<LMStudioForm
shouldMarkAsDefault={d}
open={open}
onOpenChange={onOpenChange}
/>
lm_studio: (d, onOpenChange) => (
<LMStudioForm shouldMarkAsDefault={d} onOpenChange={onOpenChange} />
),
litellm_proxy: (d, open, onOpenChange) => (
<LiteLLMProxyModal
shouldMarkAsDefault={d}
open={open}
onOpenChange={onOpenChange}
/>
litellm_proxy: (d, onOpenChange) => (
<LiteLLMProxyModal shouldMarkAsDefault={d} onOpenChange={onOpenChange} />
),
bifrost: (d, open, onOpenChange) => (
<BifrostModal
bifrost: (d, onOpenChange) => (
<BifrostModal shouldMarkAsDefault={d} onOpenChange={onOpenChange} />
),
openai_compatible: (d, onOpenChange) => (
<OpenAICompatibleModal
shouldMarkAsDefault={d}
open={open}
onOpenChange={onOpenChange}
/>
),
@@ -210,14 +179,17 @@ function ExistingProviderCard({
</ConfirmationModalLayout>
)}
<Hoverable.Root group="ExistingProviderCard">
<Hoverable.Root
group="ExistingProviderCard"
interaction={deleteModal.isOpen ? "hover" : "rest"}
>
<SelectCard
state="filled"
padding="sm"
rounding="lg"
onClick={() => setIsOpen(true)}
>
<CardHeaderLayout
<Card.Header
icon={getProviderIcon(provider.provider)}
title={provider.name}
description={getProviderDisplayName(provider.provider)}
@@ -252,12 +224,8 @@ function ExistingProviderCard({
</div>
}
/>
{getModalForExistingProvider(
provider,
isOpen,
setIsOpen,
defaultModelName
)}
{isOpen &&
getModalForExistingProvider(provider, setIsOpen, defaultModelName)}
</SelectCard>
</Hoverable.Root>
</>
@@ -273,7 +241,6 @@ interface NewProviderCardProps {
isFirstProvider: boolean;
formFn: (
shouldMarkAsDefault: boolean,
open: boolean,
onOpenChange: (open: boolean) => void
) => React.ReactNode;
}
@@ -292,7 +259,7 @@ function NewProviderCard({
rounding="lg"
onClick={() => setIsOpen(true)}
>
<CardHeaderLayout
<Card.Header
icon={getProviderIcon(provider.name)}
title={getProviderProductName(provider.name)}
description={getProviderDisplayName(provider.name)}
@@ -311,7 +278,7 @@ function NewProviderCard({
</Button>
}
/>
{formFn(isFirstProvider, isOpen, setIsOpen)}
{isOpen && formFn(isFirstProvider, setIsOpen)}
</SelectCard>
);
}
@@ -336,7 +303,7 @@ function NewCustomProviderCard({
rounding="lg"
onClick={() => setIsOpen(true)}
>
<CardHeaderLayout
<Card.Header
icon={getProviderIcon("custom")}
title={getProviderProductName("custom")}
description={getProviderDisplayName("custom")}
@@ -355,11 +322,12 @@ function NewCustomProviderCard({
</Button>
}
/>
<CustomModal
shouldMarkAsDefault={isFirstProvider}
open={isOpen}
onOpenChange={setIsOpen}
/>
{isOpen && (
<CustomModal
shouldMarkAsDefault={isFirstProvider}
onOpenChange={setIsOpen}
/>
)}
</SelectCard>
);
}
@@ -368,7 +336,7 @@ function NewCustomProviderCard({
// LLMConfigurationPage — main page component
// ============================================================================
export default function LLMConfigurationPage() {
export default function LLMProviderConfigurationPage() {
const { mutate } = useSWRConfig();
const { llmProviders: existingLlmProviders, defaultText } =
useAdminLLMProviders();
@@ -424,7 +392,7 @@ export default function LLMConfigurationPage() {
<SettingsLayouts.Body>
{hasProviders ? (
<Card>
<LegacyCard>
<HorizontalInput
title="Default Model"
description="This model will be used by Onyx by default in your chats."
@@ -455,7 +423,7 @@ export default function LLMConfigurationPage() {
</InputSelect.Content>
</InputSelect>
</HorizontalInput>
</Card>
</LegacyCard>
) : (
<Message
info

View File

@@ -6,7 +6,7 @@ import { InfoIcon } from "@/components/icons/icons";
import Text from "@/refresh-components/texts/Text";
import { Section } from "@/layouts/general-layouts";
import * as SettingsLayouts from "@/layouts/settings-layouts";
import { Content, CardHeaderLayout } from "@opal/layouts";
import { Content, Card } from "@opal/layouts";
import useSWR from "swr";
import { errorHandlingFetcher, FetchError } from "@/lib/fetcher";
import { SWR_KEYS } from "@/lib/swr-keys";
@@ -275,7 +275,7 @@ function ProviderCard({
: undefined
}
>
<CardHeaderLayout
<Card.Header
sizePreset="main-ui"
variant="section"
icon={icon}

View File

@@ -2,7 +2,7 @@
import type { IconFunctionComponent } from "@opal/types";
import { Button, SelectCard } from "@opal/components";
import { Content, CardHeaderLayout } from "@opal/layouts";
import { Content, Card } from "@opal/layouts";
import {
SvgArrowExchange,
SvgArrowRightCircle,
@@ -15,7 +15,7 @@ import {
* ProviderCard — a stateful card for selecting / connecting / disconnecting
* an external service provider (LLM, search engine, voice model, etc.).
*
* Built on opal `SelectCard` + `CardHeaderLayout`. Maps a three-state
* Built on opal `SelectCard` + `Card.Header`. Maps a three-state
* status model to the `SelectCard` state system:
*
* | Status | SelectCard state | Right action |
@@ -92,7 +92,7 @@ export default function ProviderCard({
aria-label={ariaLabel}
onClick={isDisconnected && onConnect ? onConnect : undefined}
>
<CardHeaderLayout
<Card.Header
sizePreset="main-ui"
variant="section"
icon={icon}

View File

@@ -1,16 +1,12 @@
"use client";
import { useState } from "react";
import { useSWRConfig } from "swr";
import { Formik } from "formik";
import { LLMProviderFormProps } from "@/interfaces/llm";
import * as Yup from "yup";
import { LLMProviderFormProps, LLMProviderName } from "@/interfaces/llm";
import { useWellKnownLLMProvider } from "@/hooks/useLLMProviders";
import {
buildDefaultInitialValues,
buildDefaultValidationSchema,
useInitialValues,
buildValidationSchema,
buildAvailableModelConfigurations,
buildOnboardingInitialValues,
} from "@/sections/modals/llmConfig/utils";
import {
submitLLMProvider,
@@ -18,22 +14,17 @@ import {
} from "@/sections/modals/llmConfig/svc";
import {
APIKeyField,
ModelsField,
ModelSelectionField,
DisplayNameField,
ModelsAccessField,
FieldSeparator,
SingleDefaultModelField,
LLMConfigurationModalWrapper,
ModelAccessField,
ModalWrapper,
} from "@/sections/modals/llmConfig/shared";
const ANTHROPIC_PROVIDER_NAME = "anthropic";
const DEFAULT_DEFAULT_MODEL_NAME = "claude-sonnet-4-5";
import * as InputLayouts from "@/layouts/input-layouts";
export default function AnthropicModal({
variant = "llm-configuration",
existingLlmProvider,
shouldMarkAsDefault,
open,
onOpenChange,
defaultModelName,
onboardingState,
@@ -41,14 +32,11 @@ export default function AnthropicModal({
llmDescriptor,
}: LLMProviderFormProps) {
const isOnboarding = variant === "onboarding";
const [isTesting, setIsTesting] = useState(false);
const { mutate } = useSWRConfig();
const { wellKnownLLMProvider } = useWellKnownLLMProvider(
ANTHROPIC_PROVIDER_NAME
LLMProviderName.ANTHROPIC
);
if (open === false) return null;
const onClose = () => onOpenChange?.(false);
const modelConfigurations = buildAvailableModelConfigurations(
@@ -56,58 +44,34 @@ export default function AnthropicModal({
wellKnownLLMProvider ?? llmDescriptor
);
const initialValues = isOnboarding
? {
...buildOnboardingInitialValues(),
name: ANTHROPIC_PROVIDER_NAME,
provider: ANTHROPIC_PROVIDER_NAME,
api_key: "",
default_model_name: DEFAULT_DEFAULT_MODEL_NAME,
}
: {
...buildDefaultInitialValues(
existingLlmProvider,
modelConfigurations,
defaultModelName
),
api_key: existingLlmProvider?.api_key ?? "",
api_base: existingLlmProvider?.api_base ?? undefined,
default_model_name:
(defaultModelName &&
modelConfigurations.some((m) => m.name === defaultModelName)
? defaultModelName
: undefined) ??
wellKnownLLMProvider?.recommended_default_model?.name ??
DEFAULT_DEFAULT_MODEL_NAME,
is_auto_mode: existingLlmProvider?.is_auto_mode ?? true,
};
const initialValues = useInitialValues(
isOnboarding,
LLMProviderName.ANTHROPIC,
existingLlmProvider
);
const validationSchema = isOnboarding
? Yup.object().shape({
api_key: Yup.string().required("API Key is required"),
default_model_name: Yup.string().required("Model name is required"),
})
: buildDefaultValidationSchema().shape({
api_key: Yup.string().required("API Key is required"),
});
const validationSchema = buildValidationSchema(isOnboarding, {
apiKey: true,
});
return (
<Formik
<ModalWrapper
providerName={LLMProviderName.ANTHROPIC}
llmProvider={existingLlmProvider}
onClose={onClose}
initialValues={initialValues}
validationSchema={validationSchema}
validateOnMount={true}
onSubmit={async (values, { setSubmitting }) => {
onSubmit={async (values, { setSubmitting, setStatus }) => {
if (isOnboarding && onboardingState && onboardingActions) {
const modelConfigsToUse =
(wellKnownLLMProvider ?? llmDescriptor)?.known_models ?? [];
await submitOnboardingProvider({
providerName: ANTHROPIC_PROVIDER_NAME,
providerName: LLMProviderName.ANTHROPIC,
payload: {
...values,
model_configurations: modelConfigsToUse,
is_auto_mode:
values.default_model_name === DEFAULT_DEFAULT_MODEL_NAME,
is_auto_mode: values.is_auto_mode,
},
onboardingState,
onboardingActions,
@@ -117,13 +81,13 @@ export default function AnthropicModal({
});
} else {
await submitLLMProvider({
providerName: ANTHROPIC_PROVIDER_NAME,
providerName: LLMProviderName.ANTHROPIC,
values,
initialValues,
modelConfigurations,
existingLlmProvider,
shouldMarkAsDefault,
setIsTesting,
setStatus,
mutate,
onClose,
setSubmitting,
@@ -131,47 +95,30 @@ export default function AnthropicModal({
}
}}
>
{(formikProps) => (
<LLMConfigurationModalWrapper
providerEndpoint={ANTHROPIC_PROVIDER_NAME}
existingProviderName={existingLlmProvider?.name}
onClose={onClose}
isFormValid={formikProps.isValid}
isDirty={formikProps.dirty}
isTesting={isTesting}
isSubmitting={formikProps.isSubmitting}
>
<APIKeyField providerName="Anthropic" />
<APIKeyField providerName="Anthropic" />
{!isOnboarding && (
<>
<FieldSeparator />
<DisplayNameField disabled={!!existingLlmProvider} />
</>
)}
<FieldSeparator />
{isOnboarding ? (
<SingleDefaultModelField placeholder="E.g. claude-sonnet-4-5" />
) : (
<ModelsField
modelConfigurations={modelConfigurations}
formikProps={formikProps}
recommendedDefaultModel={
wellKnownLLMProvider?.recommended_default_model ?? null
}
shouldShowAutoUpdateToggle={true}
/>
)}
{!isOnboarding && (
<>
<FieldSeparator />
<ModelsAccessField formikProps={formikProps} />
</>
)}
</LLMConfigurationModalWrapper>
{!isOnboarding && (
<>
<InputLayouts.FieldSeparator />
<DisplayNameField disabled={!!existingLlmProvider} />
</>
)}
</Formik>
<InputLayouts.FieldSeparator />
<ModelSelectionField
modelConfigurations={modelConfigurations}
recommendedDefaultModel={
wellKnownLLMProvider?.recommended_default_model ?? null
}
shouldShowAutoUpdateToggle={true}
/>
{!isOnboarding && (
<>
<InputLayouts.FieldSeparator />
<ModelAccessField />
</>
)}
</ModalWrapper>
);
}
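
For context, the consolidated helper's shape can be inferred from its call sites in this diff: `{ apiKey: true }` here, `{ apiBase: true }` in BifrostModal, and an `extra` shape in AzureModal and BedrockModal. A hypothetical reconstruction follows; the real implementation lives in `utils.ts` and is not shown in this diff, including how the onboarding/admin branching is folded in:

// Hypothetical reconstruction, not the actual utils.ts source.
function buildValidationSchema(
  isOnboarding: boolean,
  opts: {
    apiKey?: boolean;
    apiBase?: boolean;
    extra?: Record<string, Yup.AnySchema>;
  } = {}
) {
  const shape: Record<string, Yup.AnySchema> = { ...(opts.extra ?? {}) };
  if (opts.apiKey) {
    shape.api_key = Yup.string().required("API Key is required");
  }
  if (opts.apiBase) {
    shape.api_base = Yup.string().required("API Base URL is required");
  }
  // The admin variant previously layered these requirements onto
  // buildDefaultValidationSchema(); that branching is elided here.
  return Yup.object().shape(shape);
}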

View File

@@ -1,22 +1,22 @@
"use client";
import { useState } from "react";
import React, { useState } from "react";
import { useSWRConfig } from "swr";
import { Formik } from "formik";
import { useFormikContext } from "formik";
import InputTypeInField from "@/refresh-components/form/InputTypeInField";
import * as InputLayouts from "@/layouts/input-layouts";
import {
LLMProviderFormProps,
LLMProviderName,
LLMProviderView,
ModelConfiguration,
} from "@/interfaces/llm";
import * as Yup from "yup";
import { useWellKnownLLMProvider } from "@/hooks/useLLMProviders";
import {
buildDefaultInitialValues,
buildDefaultValidationSchema,
useInitialValues,
buildValidationSchema,
buildAvailableModelConfigurations,
buildOnboardingInitialValues,
BaseLLMFormValues,
} from "@/sections/modals/llmConfig/utils";
import {
@@ -26,12 +26,9 @@ import {
import {
APIKeyField,
DisplayNameField,
FieldSeparator,
FieldWrapper,
ModelsAccessField,
ModelsField,
SingleDefaultModelField,
LLMConfigurationModalWrapper,
ModelAccessField,
ModelSelectionField,
ModalWrapper,
} from "@/sections/modals/llmConfig/shared";
import {
isValidAzureTargetUri,
@@ -39,8 +36,6 @@ import {
} from "@/lib/azureTargetUri";
import { toast } from "@/hooks/useToast";
const AZURE_PROVIDER_NAME = "azure";
interface AzureModalValues extends BaseLLMFormValues {
api_key: string;
target_uri: string;
@@ -49,6 +44,43 @@ interface AzureModalValues extends BaseLLMFormValues {
deployment_name?: string;
}
interface AzureModelSelectionProps {
modelConfigurations: ModelConfiguration[];
setAddedModels: React.Dispatch<React.SetStateAction<ModelConfiguration[]>>;
}
function AzureModelSelection({
modelConfigurations,
setAddedModels,
}: AzureModelSelectionProps) {
const formikProps = useFormikContext<AzureModalValues>();
return (
<ModelSelectionField
modelConfigurations={modelConfigurations}
recommendedDefaultModel={null}
shouldShowAutoUpdateToggle={false}
onAddModel={(modelName) => {
const newModel: ModelConfiguration = {
name: modelName,
is_visible: true,
max_input_tokens: null,
supports_image_input: false,
supports_reasoning: false,
};
setAddedModels((prev) => [...prev, newModel]);
const currentSelected = formikProps.values.visible_model_names ?? [];
formikProps.setFieldValue("visible_model_names", [
...currentSelected,
modelName,
]);
if (!formikProps.values.test_model_name) {
formikProps.setFieldValue("test_model_name", modelName);
}
}}
/>
);
}
function buildTargetUri(existingLlmProvider?: LLMProviderView): string {
if (!existingLlmProvider?.api_base || !existingLlmProvider?.api_version) {
return "";
@@ -83,7 +115,6 @@ export default function AzureModal({
variant = "llm-configuration",
existingLlmProvider,
shouldMarkAsDefault,
open,
onOpenChange,
defaultModelName,
onboardingState,
@@ -91,14 +122,13 @@ export default function AzureModal({
llmDescriptor,
}: LLMProviderFormProps) {
const isOnboarding = variant === "onboarding";
const [isTesting, setIsTesting] = useState(false);
const { mutate } = useSWRConfig();
const { wellKnownLLMProvider } = useWellKnownLLMProvider(AZURE_PROVIDER_NAME);
const { wellKnownLLMProvider } = useWellKnownLLMProvider(
LLMProviderName.AZURE
);
const [addedModels, setAddedModels] = useState<ModelConfiguration[]>([]);
if (open === false) return null;
const onClose = () => {
setAddedModels([]);
onOpenChange?.(false);
@@ -116,54 +146,36 @@ export default function AzureModal({
...addedModels.filter((m) => !existingNames.has(m.name)),
];
const initialValues: AzureModalValues = isOnboarding
? ({
...buildOnboardingInitialValues(),
name: AZURE_PROVIDER_NAME,
provider: AZURE_PROVIDER_NAME,
api_key: "",
target_uri: "",
default_model_name: "",
} as AzureModalValues)
: {
...buildDefaultInitialValues(
existingLlmProvider,
modelConfigurations,
defaultModelName
),
api_key: existingLlmProvider?.api_key ?? "",
target_uri: buildTargetUri(existingLlmProvider),
};
const initialValues: AzureModalValues = {
...useInitialValues(
isOnboarding,
LLMProviderName.AZURE,
existingLlmProvider
),
target_uri: buildTargetUri(existingLlmProvider),
} as AzureModalValues;
const validationSchema = isOnboarding
? Yup.object().shape({
api_key: Yup.string().required("API Key is required"),
target_uri: Yup.string()
.required("Target URI is required")
.test(
"valid-target-uri",
"Target URI must be a valid URL with api-version query parameter and either a deployment name in the path or /openai/responses",
(value) => (value ? isValidAzureTargetUri(value) : false)
),
default_model_name: Yup.string().required("Model name is required"),
})
: buildDefaultValidationSchema().shape({
api_key: Yup.string().required("API Key is required"),
target_uri: Yup.string()
.required("Target URI is required")
.test(
"valid-target-uri",
"Target URI must be a valid URL with api-version query parameter and either a deployment name in the path or /openai/responses",
(value) => (value ? isValidAzureTargetUri(value) : false)
),
});
const validationSchema = buildValidationSchema(isOnboarding, {
apiKey: true,
extra: {
target_uri: Yup.string()
.required("Target URI is required")
.test(
"valid-target-uri",
"Target URI must be a valid URL with api-version query parameter and either a deployment name in the path or /openai/responses",
(value) => (value ? isValidAzureTargetUri(value) : false)
),
},
});
return (
<Formik
<ModalWrapper
providerName={LLMProviderName.AZURE}
llmProvider={existingLlmProvider}
onClose={onClose}
initialValues={initialValues}
validationSchema={validationSchema}
validateOnMount={true}
onSubmit={async (values, { setSubmitting }) => {
onSubmit={async (values, { setSubmitting, setStatus }) => {
const processedValues = processValues(values);
if (isOnboarding && onboardingState && onboardingActions) {
@@ -171,7 +183,7 @@ export default function AzureModal({
(wellKnownLLMProvider ?? llmDescriptor)?.known_models ?? [];
await submitOnboardingProvider({
providerName: AZURE_PROVIDER_NAME,
providerName: LLMProviderName.AZURE,
payload: {
...processedValues,
model_configurations: modelConfigsToUse,
@@ -184,13 +196,13 @@ export default function AzureModal({
});
} else {
await submitLLMProvider({
providerName: AZURE_PROVIDER_NAME,
providerName: LLMProviderName.AZURE,
values: processedValues,
initialValues,
modelConfigurations,
existingLlmProvider,
shouldMarkAsDefault,
setIsTesting,
setStatus,
mutate,
onClose,
setSubmitting,
@@ -198,78 +210,40 @@ export default function AzureModal({
}
}}
>
{(formikProps) => (
<LLMConfigurationModalWrapper
providerEndpoint={AZURE_PROVIDER_NAME}
existingProviderName={existingLlmProvider?.name}
onClose={onClose}
isFormValid={formikProps.isValid}
isDirty={formikProps.dirty}
isTesting={isTesting}
isSubmitting={formikProps.isSubmitting}
<InputLayouts.FieldPadder>
<InputLayouts.Vertical
name="target_uri"
title="Target URI"
subDescription="Paste your endpoint target URI from Azure OpenAI (including API endpoint base, deployment name, and API version)."
>
<FieldWrapper>
<InputLayouts.Vertical
name="target_uri"
title="Target URI"
subDescription="Paste your endpoint target URI from Azure OpenAI (including API endpoint base, deployment name, and API version)."
>
<InputTypeInField
name="target_uri"
placeholder="https://your-resource.cognitiveservices.azure.com/openai/deployments/deployment-name/chat/completions?api-version=2025-01-01-preview"
/>
</InputLayouts.Vertical>
</FieldWrapper>
<InputTypeInField
name="target_uri"
placeholder="https://your-resource.cognitiveservices.azure.com/openai/deployments/deployment-name/chat/completions?api-version=2025-01-01-preview"
/>
</InputLayouts.Vertical>
</InputLayouts.FieldPadder>
<APIKeyField providerName="Azure" />
<APIKeyField providerName="Azure" />
{!isOnboarding && (
<>
<FieldSeparator />
<DisplayNameField disabled={!!existingLlmProvider} />
</>
)}
<FieldSeparator />
{isOnboarding ? (
<SingleDefaultModelField placeholder="E.g. gpt-4o" />
) : (
<ModelsField
modelConfigurations={modelConfigurations}
formikProps={formikProps}
recommendedDefaultModel={null}
shouldShowAutoUpdateToggle={false}
onAddModel={(modelName) => {
const newModel: ModelConfiguration = {
name: modelName,
is_visible: true,
max_input_tokens: null,
supports_image_input: false,
supports_reasoning: false,
};
setAddedModels((prev) => [...prev, newModel]);
const currentSelected =
formikProps.values.selected_model_names ?? [];
formikProps.setFieldValue("selected_model_names", [
...currentSelected,
modelName,
]);
if (!formikProps.values.default_model_name) {
formikProps.setFieldValue("default_model_name", modelName);
}
}}
/>
)}
{!isOnboarding && (
<>
<FieldSeparator />
<ModelsAccessField formikProps={formikProps} />
</>
)}
</LLMConfigurationModalWrapper>
{!isOnboarding && (
<>
<InputLayouts.FieldSeparator />
<DisplayNameField disabled={!!existingLlmProvider} />
</>
)}
</Formik>
<InputLayouts.FieldSeparator />
<AzureModelSelection
modelConfigurations={modelConfigurations}
setAddedModels={setAddedModels}
/>
{!isOnboarding && (
<>
<InputLayouts.FieldSeparator />
<ModelAccessField />
</>
)}
</ModalWrapper>
);
}
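
From its call sites in this diff (Anthropic above, Azure here, and Bedrock and Bifrost below), `useInitialValues` appears to return the shared base form values, which modals spread and then extend with provider-specific fields such as Azure's `target_uri`. A signature sketch, assuming the return type is the `BaseLLMFormValues` the casts suggest:

// Signature sketch only; the implementation is in utils.ts and not shown here.
declare function useInitialValues(
  isOnboarding: boolean,
  providerName: LLMProviderName,
  existingLlmProvider?: LLMProviderView
): BaseLLMFormValues;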

View File

@@ -2,7 +2,7 @@
import { useState, useEffect } from "react";
import { useSWRConfig } from "swr";
import { Formik, FormikProps } from "formik";
import { useFormikContext } from "formik";
import InputTypeInField from "@/refresh-components/form/InputTypeInField";
import InputSelectField from "@/refresh-components/form/InputSelectField";
import InputSelect from "@/refresh-components/inputs/InputSelect";
@@ -10,16 +10,16 @@ import * as InputLayouts from "@/layouts/input-layouts";
import PasswordInputTypeInField from "@/refresh-components/form/PasswordInputTypeInField";
import {
LLMProviderFormProps,
LLMProviderName,
LLMProviderView,
ModelConfiguration,
} from "@/interfaces/llm";
import * as Yup from "yup";
import { useWellKnownLLMProvider } from "@/hooks/useLLMProviders";
import {
buildDefaultInitialValues,
buildDefaultValidationSchema,
useInitialValues,
buildValidationSchema,
buildAvailableModelConfigurations,
buildOnboardingInitialValues,
BaseLLMFormValues,
} from "@/sections/modals/llmConfig/utils";
import {
@@ -27,13 +27,10 @@ import {
submitOnboardingProvider,
} from "@/sections/modals/llmConfig/svc";
import {
ModelsField,
ModelSelectionField,
DisplayNameField,
FieldSeparator,
FieldWrapper,
ModelsAccessField,
SingleDefaultModelField,
LLMConfigurationModalWrapper,
ModelAccessField,
ModalWrapper,
} from "@/sections/modals/llmConfig/shared";
import { fetchBedrockModels } from "@/app/admin/configuration/llm/utils";
import { Card } from "@opal/components";
@@ -43,7 +40,6 @@ import { Content } from "@opal/layouts";
import { toast } from "@/hooks/useToast";
import useOnMount from "@/hooks/useOnMount";
const BEDROCK_PROVIDER_NAME = "bedrock";
const AWS_REGION_OPTIONS = [
{ name: "us-east-1", value: "us-east-1" },
{ name: "us-east-2", value: "us-east-2" },
@@ -79,26 +75,21 @@ interface BedrockModalValues extends BaseLLMFormValues {
}
interface BedrockModalInternalsProps {
formikProps: FormikProps<BedrockModalValues>;
existingLlmProvider: LLMProviderView | undefined;
fetchedModels: ModelConfiguration[];
setFetchedModels: (models: ModelConfiguration[]) => void;
modelConfigurations: ModelConfiguration[];
isTesting: boolean;
onClose: () => void;
isOnboarding: boolean;
}
function BedrockModalInternals({
formikProps,
existingLlmProvider,
fetchedModels,
setFetchedModels,
modelConfigurations,
isTesting,
onClose,
isOnboarding,
}: BedrockModalInternalsProps) {
const formikProps = useFormikContext<BedrockModalValues>();
const authMethod = formikProps.values.custom_config?.BEDROCK_AUTH_METHOD;
useEffect(() => {
@@ -159,16 +150,8 @@ function BedrockModalInternals({
});
return (
<LLMConfigurationModalWrapper
providerEndpoint={BEDROCK_PROVIDER_NAME}
existingProviderName={existingLlmProvider?.name}
onClose={onClose}
isFormValid={formikProps.isValid}
isDirty={formikProps.dirty}
isTesting={isTesting}
isSubmitting={formikProps.isSubmitting}
>
<FieldWrapper>
<>
<InputLayouts.FieldPadder>
<Section gap={1}>
<InputLayouts.Vertical
name={FIELD_AWS_REGION_NAME}
@@ -222,7 +205,7 @@ function BedrockModalInternals({
</InputSelect>
</InputLayouts.Vertical>
</Section>
</FieldWrapper>
</InputLayouts.FieldPadder>
{authMethod === AUTH_METHOD_ACCESS_KEY && (
<Card background="light" border="none" padding="sm">
@@ -250,7 +233,7 @@ function BedrockModalInternals({
)}
{authMethod === AUTH_METHOD_IAM && (
<FieldWrapper>
<InputLayouts.FieldPadder>
<Card background="none" border="solid" padding="sm">
<Content
icon={SvgAlertCircle}
@@ -259,7 +242,7 @@ function BedrockModalInternals({
sizePreset="main-ui"
/>
</Card>
</FieldWrapper>
</InputLayouts.FieldPadder>
)}
{authMethod === AUTH_METHOD_LONG_TERM_API_KEY && (
@@ -280,32 +263,26 @@ function BedrockModalInternals({
{!isOnboarding && (
<>
<FieldSeparator />
<InputLayouts.FieldSeparator />
<DisplayNameField disabled={!!existingLlmProvider} />
</>
)}
<FieldSeparator />
{isOnboarding ? (
<SingleDefaultModelField placeholder="E.g. us.anthropic.claude-sonnet-4-5-v1" />
) : (
<ModelsField
modelConfigurations={currentModels}
formikProps={formikProps}
recommendedDefaultModel={null}
shouldShowAutoUpdateToggle={false}
onRefetch={isFetchDisabled ? undefined : handleFetchModels}
/>
)}
<InputLayouts.FieldSeparator />
<ModelSelectionField
modelConfigurations={currentModels}
recommendedDefaultModel={null}
shouldShowAutoUpdateToggle={false}
onRefetch={isFetchDisabled ? undefined : handleFetchModels}
/>
{!isOnboarding && (
<>
<FieldSeparator />
<ModelsAccessField formikProps={formikProps} />
<InputLayouts.FieldSeparator />
<ModelAccessField />
</>
)}
</LLMConfigurationModalWrapper>
</>
);
}
@@ -313,7 +290,6 @@ export default function BedrockModal({
variant = "llm-configuration",
existingLlmProvider,
shouldMarkAsDefault,
open,
onOpenChange,
defaultModelName,
onboardingState,
@@ -321,15 +297,12 @@ export default function BedrockModal({
llmDescriptor,
}: LLMProviderFormProps) {
const [fetchedModels, setFetchedModels] = useState<ModelConfiguration[]>([]);
const [isTesting, setIsTesting] = useState(false);
const isOnboarding = variant === "onboarding";
const { mutate } = useSWRConfig();
const { wellKnownLLMProvider } = useWellKnownLLMProvider(
BEDROCK_PROVIDER_NAME
LLMProviderName.BEDROCK
);
if (open === false) return null;
const onClose = () => onOpenChange?.(false);
const modelConfigurations = buildAvailableModelConfigurations(
@@ -337,64 +310,45 @@ export default function BedrockModal({
wellKnownLLMProvider ?? llmDescriptor
);
const initialValues: BedrockModalValues = isOnboarding
? ({
...buildOnboardingInitialValues(),
name: BEDROCK_PROVIDER_NAME,
provider: BEDROCK_PROVIDER_NAME,
default_model_name: "",
custom_config: {
AWS_REGION_NAME: "",
BEDROCK_AUTH_METHOD: "access_key",
AWS_ACCESS_KEY_ID: "",
AWS_SECRET_ACCESS_KEY: "",
AWS_BEARER_TOKEN_BEDROCK: "",
},
} as BedrockModalValues)
: {
...buildDefaultInitialValues(
existingLlmProvider,
modelConfigurations,
defaultModelName
),
custom_config: {
AWS_REGION_NAME:
(existingLlmProvider?.custom_config?.AWS_REGION_NAME as string) ??
"",
BEDROCK_AUTH_METHOD:
(existingLlmProvider?.custom_config
?.BEDROCK_AUTH_METHOD as string) ?? "access_key",
AWS_ACCESS_KEY_ID:
(existingLlmProvider?.custom_config?.AWS_ACCESS_KEY_ID as string) ??
"",
AWS_SECRET_ACCESS_KEY:
(existingLlmProvider?.custom_config
?.AWS_SECRET_ACCESS_KEY as string) ?? "",
AWS_BEARER_TOKEN_BEDROCK:
(existingLlmProvider?.custom_config
?.AWS_BEARER_TOKEN_BEDROCK as string) ?? "",
},
};
const initialValues: BedrockModalValues = {
...useInitialValues(
isOnboarding,
LLMProviderName.BEDROCK,
existingLlmProvider
),
custom_config: {
AWS_REGION_NAME:
(existingLlmProvider?.custom_config?.AWS_REGION_NAME as string) ?? "",
BEDROCK_AUTH_METHOD:
(existingLlmProvider?.custom_config?.BEDROCK_AUTH_METHOD as string) ??
"access_key",
AWS_ACCESS_KEY_ID:
(existingLlmProvider?.custom_config?.AWS_ACCESS_KEY_ID as string) ?? "",
AWS_SECRET_ACCESS_KEY:
(existingLlmProvider?.custom_config?.AWS_SECRET_ACCESS_KEY as string) ??
"",
AWS_BEARER_TOKEN_BEDROCK:
(existingLlmProvider?.custom_config
?.AWS_BEARER_TOKEN_BEDROCK as string) ?? "",
},
} as BedrockModalValues;
const validationSchema = isOnboarding
? Yup.object().shape({
default_model_name: Yup.string().required("Model name is required"),
custom_config: Yup.object({
AWS_REGION_NAME: Yup.string().required("AWS Region is required"),
}),
})
: buildDefaultValidationSchema().shape({
custom_config: Yup.object({
AWS_REGION_NAME: Yup.string().required("AWS Region is required"),
}),
});
const validationSchema = buildValidationSchema(isOnboarding, {
extra: {
custom_config: Yup.object({
AWS_REGION_NAME: Yup.string().required("AWS Region is required"),
}),
},
});
return (
<Formik
<ModalWrapper
providerName={LLMProviderName.BEDROCK}
llmProvider={existingLlmProvider}
onClose={onClose}
initialValues={initialValues}
validationSchema={validationSchema}
validateOnMount={true}
onSubmit={async (values, { setSubmitting }) => {
onSubmit={async (values, { setSubmitting, setStatus }) => {
const filteredCustomConfig = Object.fromEntries(
Object.entries(values.custom_config || {}).filter(([, v]) => v !== "")
);
@@ -412,7 +366,7 @@ export default function BedrockModal({
fetchedModels.length > 0 ? fetchedModels : [];
await submitOnboardingProvider({
providerName: BEDROCK_PROVIDER_NAME,
providerName: LLMProviderName.BEDROCK,
payload: {
...submitValues,
model_configurations: modelConfigsToUse,
@@ -425,14 +379,14 @@ export default function BedrockModal({
});
} else {
await submitLLMProvider({
providerName: BEDROCK_PROVIDER_NAME,
providerName: LLMProviderName.BEDROCK,
values: submitValues,
initialValues,
modelConfigurations:
fetchedModels.length > 0 ? fetchedModels : modelConfigurations,
existingLlmProvider,
shouldMarkAsDefault,
setIsTesting,
setStatus,
mutate,
onClose,
setSubmitting,
@@ -440,18 +394,13 @@ export default function BedrockModal({
}
}}
>
{(formikProps) => (
<BedrockModalInternals
formikProps={formikProps}
existingLlmProvider={existingLlmProvider}
fetchedModels={fetchedModels}
setFetchedModels={setFetchedModels}
modelConfigurations={modelConfigurations}
isTesting={isTesting}
onClose={onClose}
isOnboarding={isOnboarding}
/>
)}
</Formik>
<BedrockModalInternals
existingLlmProvider={existingLlmProvider}
fetchedModels={fetchedModels}
setFetchedModels={setFetchedModels}
modelConfigurations={modelConfigurations}
isOnboarding={isOnboarding}
/>
</ModalWrapper>
);
}

View File

@@ -3,9 +3,7 @@
import { useState, useEffect } from "react";
import { markdown } from "@opal/utils";
import { useSWRConfig } from "swr";
import { Formik, FormikProps } from "formik";
import InputTypeInField from "@/refresh-components/form/InputTypeInField";
import PasswordInputTypeInField from "@/refresh-components/form/PasswordInputTypeInField";
import { useFormikContext } from "formik";
import * as InputLayouts from "@/layouts/input-layouts";
import {
LLMProviderFormProps,
@@ -14,13 +12,11 @@ import {
ModelConfiguration,
} from "@/interfaces/llm";
import { fetchBifrostModels } from "@/app/admin/configuration/llm/utils";
import * as Yup from "yup";
import { useWellKnownLLMProvider } from "@/hooks/useLLMProviders";
import {
buildDefaultInitialValues,
buildDefaultValidationSchema,
useInitialValues,
buildValidationSchema,
buildAvailableModelConfigurations,
buildOnboardingInitialValues,
BaseLLMFormValues,
} from "@/sections/modals/llmConfig/utils";
import {
@@ -28,45 +24,36 @@ import {
submitOnboardingProvider,
} from "@/sections/modals/llmConfig/svc";
import {
ModelsField,
APIBaseField,
APIKeyField,
ModelSelectionField,
DisplayNameField,
ModelsAccessField,
FieldSeparator,
FieldWrapper,
SingleDefaultModelField,
LLMConfigurationModalWrapper,
ModelAccessField,
ModalWrapper,
} from "@/sections/modals/llmConfig/shared";
import { toast } from "@/hooks/useToast";
const BIFROST_PROVIDER_NAME = LLMProviderName.BIFROST;
const DEFAULT_API_BASE = "";
interface BifrostModalValues extends BaseLLMFormValues {
api_key: string;
api_base: string;
}
interface BifrostModalInternalsProps {
formikProps: FormikProps<BifrostModalValues>;
existingLlmProvider: LLMProviderView | undefined;
fetchedModels: ModelConfiguration[];
setFetchedModels: (models: ModelConfiguration[]) => void;
modelConfigurations: ModelConfiguration[];
isTesting: boolean;
onClose: () => void;
isOnboarding: boolean;
}
function BifrostModalInternals({
formikProps,
existingLlmProvider,
fetchedModels,
setFetchedModels,
modelConfigurations,
isTesting,
onClose,
isOnboarding,
}: BifrostModalInternalsProps) {
const formikProps = useFormikContext<BifrostModalValues>();
const currentModels =
fetchedModels.length > 0
? fetchedModels
@@ -100,69 +87,41 @@ function BifrostModalInternals({
}, []);
return (
<LLMConfigurationModalWrapper
providerEndpoint={LLMProviderName.BIFROST}
existingProviderName={existingLlmProvider?.name}
onClose={onClose}
isFormValid={formikProps.isValid}
isDirty={formikProps.dirty}
isTesting={isTesting}
isSubmitting={formikProps.isSubmitting}
>
<FieldWrapper>
<InputLayouts.Vertical
name="api_base"
title="API Base URL"
subDescription="Paste your Bifrost gateway endpoint URL (including API version)."
>
<InputTypeInField
name="api_base"
placeholder="https://your-bifrost-gateway.com/v1"
/>
</InputLayouts.Vertical>
</FieldWrapper>
<>
<APIBaseField
subDescription="Paste your Bifrost gateway endpoint URL (including API version)."
placeholder="https://your-bifrost-gateway.com/v1"
/>
<FieldWrapper>
<InputLayouts.Vertical
name="api_key"
title="API Key"
suffix="optional"
subDescription={markdown(
"Paste your API key from [Bifrost](https://docs.getbifrost.ai/overview) to access your models."
)}
>
<PasswordInputTypeInField name="api_key" placeholder="API Key" />
</InputLayouts.Vertical>
</FieldWrapper>
<APIKeyField
optional
subDescription={markdown(
"Paste your API key from [Bifrost](https://docs.getbifrost.ai/overview) to access your models."
)}
/>
{!isOnboarding && (
<>
<FieldSeparator />
<InputLayouts.FieldSeparator />
<DisplayNameField disabled={!!existingLlmProvider} />
</>
)}
<FieldSeparator />
{isOnboarding ? (
<SingleDefaultModelField placeholder="E.g. anthropic/claude-sonnet-4-6" />
) : (
<ModelsField
modelConfigurations={currentModels}
formikProps={formikProps}
recommendedDefaultModel={null}
shouldShowAutoUpdateToggle={false}
onRefetch={isFetchDisabled ? undefined : handleFetchModels}
/>
)}
<InputLayouts.FieldSeparator />
<ModelSelectionField
modelConfigurations={currentModels}
recommendedDefaultModel={null}
shouldShowAutoUpdateToggle={false}
onRefetch={isFetchDisabled ? undefined : handleFetchModels}
/>
{!isOnboarding && (
<>
<FieldSeparator />
<ModelsAccessField formikProps={formikProps} />
<InputLayouts.FieldSeparator />
<ModelAccessField />
</>
)}
</LLMConfigurationModalWrapper>
</>
);
}
@@ -170,7 +129,6 @@ export default function BifrostModal({
variant = "llm-configuration",
existingLlmProvider,
shouldMarkAsDefault,
open,
onOpenChange,
defaultModelName,
onboardingState,
@@ -178,15 +136,12 @@ export default function BifrostModal({
llmDescriptor,
}: LLMProviderFormProps) {
const [fetchedModels, setFetchedModels] = useState<ModelConfiguration[]>([]);
const [isTesting, setIsTesting] = useState(false);
const isOnboarding = variant === "onboarding";
const { mutate } = useSWRConfig();
const { wellKnownLLMProvider } = useWellKnownLLMProvider(
BIFROST_PROVIDER_NAME
LLMProviderName.BIFROST
);
if (open === false) return null;
const onClose = () => onOpenChange?.(false);
const modelConfigurations = buildAvailableModelConfigurations(
@@ -194,46 +149,30 @@ export default function BifrostModal({
wellKnownLLMProvider ?? llmDescriptor
);
const initialValues: BifrostModalValues = isOnboarding
? ({
...buildOnboardingInitialValues(),
name: BIFROST_PROVIDER_NAME,
provider: BIFROST_PROVIDER_NAME,
api_key: "",
api_base: DEFAULT_API_BASE,
default_model_name: "",
} as BifrostModalValues)
: {
...buildDefaultInitialValues(
existingLlmProvider,
modelConfigurations,
defaultModelName
),
api_key: existingLlmProvider?.api_key ?? "",
api_base: existingLlmProvider?.api_base ?? DEFAULT_API_BASE,
};
const initialValues: BifrostModalValues = useInitialValues(
isOnboarding,
LLMProviderName.BIFROST,
existingLlmProvider
) as BifrostModalValues;
const validationSchema = isOnboarding
? Yup.object().shape({
api_base: Yup.string().required("API Base URL is required"),
default_model_name: Yup.string().required("Model name is required"),
})
: buildDefaultValidationSchema().shape({
api_base: Yup.string().required("API Base URL is required"),
});
const validationSchema = buildValidationSchema(isOnboarding, {
apiBase: true,
});
return (
<Formik
<ModalWrapper
providerName={LLMProviderName.BIFROST}
llmProvider={existingLlmProvider}
onClose={onClose}
initialValues={initialValues}
validationSchema={validationSchema}
validateOnMount={true}
onSubmit={async (values, { setSubmitting }) => {
onSubmit={async (values, { setSubmitting, setStatus }) => {
if (isOnboarding && onboardingState && onboardingActions) {
const modelConfigsToUse =
fetchedModels.length > 0 ? fetchedModels : [];
await submitOnboardingProvider({
providerName: BIFROST_PROVIDER_NAME,
providerName: LLMProviderName.BIFROST,
payload: {
...values,
model_configurations: modelConfigsToUse,
@@ -246,14 +185,14 @@ export default function BifrostModal({
});
} else {
await submitLLMProvider({
providerName: BIFROST_PROVIDER_NAME,
providerName: LLMProviderName.BIFROST,
values,
initialValues,
modelConfigurations:
fetchedModels.length > 0 ? fetchedModels : modelConfigurations,
existingLlmProvider,
shouldMarkAsDefault,
setIsTesting,
setStatus,
mutate,
onClose,
setSubmitting,
@@ -261,18 +200,13 @@ export default function BifrostModal({
}
}}
>
{(formikProps) => (
<BifrostModalInternals
formikProps={formikProps}
existingLlmProvider={existingLlmProvider}
fetchedModels={fetchedModels}
setFetchedModels={setFetchedModels}
modelConfigurations={modelConfigurations}
isTesting={isTesting}
onClose={onClose}
isOnboarding={isOnboarding}
/>
)}
</Formik>
<BifrostModalInternals
existingLlmProvider={existingLlmProvider}
fetchedModels={fetchedModels}
setFetchedModels={setFetchedModels}
modelConfigurations={modelConfigurations}
isOnboarding={isOnboarding}
/>
</ModalWrapper>
);
}
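
buildValidationSchema replaces the per-modal isOnboarding ternaries visible in the old halves of these hunks. Its shape can be inferred from the call sites; the real helper lives in llmConfig/utils and may accept further options, so this is a sketch under those assumptions:

import * as Yup from "yup";

// Stand-in for the shared admin base schema that the pre-refactor
// buildDefaultValidationSchema() produced; the exact fields are an assumption.
const baseAdminSchema = Yup.object().shape({
  name: Yup.string().required("Display Name is required"),
});

function buildValidationSchema(
  isOnboarding: boolean,
  opts: { apiKey?: boolean; apiBase?: boolean } = {}
) {
  const shape: Record<string, Yup.StringSchema> = {};
  if (opts.apiKey) shape.api_key = Yup.string().required("API Key is required");
  if (opts.apiBase)
    shape.api_base = Yup.string().required("API Base URL is required");
  if (isOnboarding) {
    // Onboarding additionally requires a single model name, nothing else.
    shape.default_model_name = Yup.string().required("Model name is required");
    return Yup.object().shape(shape);
  }
  return baseAdminSchema.shape(shape);
}

// The Bifrost call above then reads:
const schema = buildValidationSchema(false, { apiBase: true });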

@@ -70,7 +70,9 @@ describe("Custom LLM Provider Configuration Workflow", () => {
}
) {
const nameInput = screen.getByPlaceholderText("Display Name");
const providerInput = screen.getByPlaceholderText("Provider Name");
const providerInput = screen.getByPlaceholderText(
"Provider Name as shown on LiteLLM"
);
await user.type(nameInput, options.name);
await user.type(providerInput, options.provider);
@@ -99,7 +101,7 @@ describe("Custom LLM Provider Configuration Workflow", () => {
}),
} as Response);
render(<CustomModal open={true} onOpenChange={() => {}} />);
render(<CustomModal onOpenChange={() => {}} />);
await fillBasicFields(user, {
name: "My Custom Provider",
@@ -166,7 +168,7 @@ describe("Custom LLM Provider Configuration Workflow", () => {
json: async () => ({ detail: "Invalid API key" }),
} as Response);
render(<CustomModal open={true} onOpenChange={() => {}} />);
render(<CustomModal onOpenChange={() => {}} />);
await fillBasicFields(user, {
name: "Bad Provider",
@@ -244,7 +246,6 @@ describe("Custom LLM Provider Configuration Workflow", () => {
render(
<CustomModal
existingLlmProvider={existingProvider}
open={true}
onOpenChange={() => {}}
/>
);
@@ -339,7 +340,6 @@ describe("Custom LLM Provider Configuration Workflow", () => {
render(
<CustomModal
existingLlmProvider={existingProvider}
open={true}
onOpenChange={() => {}}
/>
);
@@ -406,13 +406,7 @@ describe("Custom LLM Provider Configuration Workflow", () => {
json: async () => ({}),
} as Response);
render(
<CustomModal
shouldMarkAsDefault={true}
open={true}
onOpenChange={() => {}}
/>
);
render(<CustomModal shouldMarkAsDefault={true} onOpenChange={() => {}} />);
await fillBasicFields(user, {
name: "New Default Provider",
@@ -457,7 +451,7 @@ describe("Custom LLM Provider Configuration Workflow", () => {
json: async () => ({ detail: "Database error" }),
} as Response);
render(<CustomModal open={true} onOpenChange={() => {}} />);
render(<CustomModal onOpenChange={() => {}} />);
await fillBasicFields(user, {
name: "Test Provider",
@@ -492,13 +486,15 @@ describe("Custom LLM Provider Configuration Workflow", () => {
json: async () => ({ id: 1, name: "Provider with Custom Config" }),
} as Response);
render(<CustomModal open={true} onOpenChange={() => {}} />);
render(<CustomModal onOpenChange={() => {}} />);
// Fill basic fields
const nameInput = screen.getByPlaceholderText("Display Name");
await user.type(nameInput, "Cloudflare Provider");
const providerInput = screen.getByPlaceholderText("Provider Name");
const providerInput = screen.getByPlaceholderText(
"Provider Name as shown on LiteLLM"
);
await user.type(providerInput, "cloudflare");
// Click "Add Line" button for custom config (aria-label from KeyValueInput)
@@ -508,8 +504,8 @@ describe("Custom LLM Provider Configuration Workflow", () => {
await user.click(addLineButton);
// Fill in custom config key-value pair
const keyInputs = screen.getAllByPlaceholderText("Key");
const valueInputs = screen.getAllByPlaceholderText("Value");
const keyInputs = screen.getAllByRole("textbox", { name: /Key \d+/ });
const valueInputs = screen.getAllByRole("textbox", { name: /Value \d+/ });
await user.type(keyInputs[0]!, "CLOUDFLARE_ACCOUNT_ID");
await user.type(valueInputs[0]!, "my-account-id-123");
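
The query change in this test is worth noting: getAllByRole("textbox", { name: /Key \d+/ }) matches the accessible name (here, the aria-label that KeyValueInput sets), not the placeholder, so the test keeps passing if placeholder copy changes and fails if the inputs lose their labels. A self-contained illustration of the pattern, assuming a Jest/Vitest + jsdom setup (the component below is hypothetical):

import React from "react";
import { render, screen } from "@testing-library/react";
import userEvent from "@testing-library/user-event";

// Hypothetical stand-in for a KeyValueInput row, labelled via aria-label.
function KeyValueRows() {
  return (
    <form>
      <input aria-label="Key 1" placeholder="Key" />
      <input aria-label="Value 1" placeholder="Value" />
    </form>
  );
}

test("targets inputs by accessible name, not placeholder", async () => {
  const user = userEvent.setup();
  render(<KeyValueRows />);
  const keyInputs = screen.getAllByRole("textbox", { name: /Key \d+/ });
  const valueInputs = screen.getAllByRole("textbox", { name: /Value \d+/ });
  await user.type(keyInputs[0]!, "CLOUDFLARE_ACCOUNT_ID");
  await user.type(valueInputs[0]!, "my-account-id-123");
  expect((keyInputs[0] as HTMLInputElement).value).toBe("CLOUDFLARE_ACCOUNT_ID");
});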

@@ -1,24 +1,24 @@
"use client";
import { useState } from "react";
import { useSWRConfig } from "swr";
import { Formik, FormikProps } from "formik";
import { LLMProviderFormProps, ModelConfiguration } from "@/interfaces/llm";
import * as Yup from "yup";
import { useFormikContext } from "formik";
import {
buildDefaultInitialValues,
buildOnboardingInitialValues,
} from "@/sections/modals/llmConfig/utils";
LLMProviderFormProps,
LLMProviderName,
ModelConfiguration,
} from "@/interfaces/llm";
import * as Yup from "yup";
import { useInitialValues } from "@/sections/modals/llmConfig/utils";
import {
submitLLMProvider,
submitOnboardingProvider,
} from "@/sections/modals/llmConfig/svc";
import {
APIKeyField,
APIBaseField,
DisplayNameField,
FieldSeparator,
ModelsAccessField,
LLMConfigurationModalWrapper,
FieldWrapper,
ModelAccessField,
ModalWrapper,
} from "@/sections/modals/llmConfig/shared";
import InputTypeInField from "@/refresh-components/form/InputTypeInField";
import * as InputLayouts from "@/layouts/input-layouts";
@@ -30,6 +30,7 @@ import InputSelect from "@/refresh-components/inputs/InputSelect";
import Text from "@/refresh-components/texts/Text";
import { Button, Card, EmptyMessageCard } from "@opal/components";
import { SvgMinusCircle, SvgPlusCircle } from "@opal/icons";
import { markdown } from "@opal/utils";
import { toast } from "@/hooks/useToast";
import { Content } from "@opal/layouts";
import { Section } from "@/layouts/general-layouts";
@@ -107,13 +108,10 @@ function ModelConfigurationItem({
);
}
interface ModelConfigurationListProps {
formikProps: FormikProps<{
function ModelConfigurationList() {
const formikProps = useFormikContext<{
model_configurations: CustomModelConfiguration[];
}>;
}
function ModelConfigurationList({ formikProps }: ModelConfigurationListProps) {
}>();
const models = formikProps.values.model_configurations;
function handleChange(index: number, next: CustomModelConfiguration) {
@@ -179,42 +177,53 @@ function ModelConfigurationList({ formikProps }: ModelConfigurationListProps) {
);
}
function CustomConfigKeyValue() {
const formikProps = useFormikContext<{ custom_config_list: KeyValue[] }>();
return (
<KeyValueInput
items={formikProps.values.custom_config_list}
onChange={(items) =>
formikProps.setFieldValue("custom_config_list", items)
}
addButtonLabel="Add Line"
/>
);
}
// ─── Custom Config Processing ─────────────────────────────────────────────────
function customConfigProcessing(items: KeyValue[]) {
const customConfig: { [key: string]: string } = {};
items.forEach(({ key, value }) => {
customConfig[key] = value;
});
return customConfig;
function keyValueListToDict(items: KeyValue[]): Record<string, string> {
const result: Record<string, string> = {};
for (const { key, value } of items) {
if (key.trim() !== "") {
result[key] = value;
}
}
return result;
}
export default function CustomModal({
variant = "llm-configuration",
existingLlmProvider,
shouldMarkAsDefault,
open,
onOpenChange,
defaultModelName,
onboardingState,
onboardingActions,
}: LLMProviderFormProps) {
const isOnboarding = variant === "onboarding";
const [isTesting, setIsTesting] = useState(false);
const { mutate } = useSWRConfig();
if (open === false) return null;
const onClose = () => onOpenChange?.(false);
const initialValues = {
...buildDefaultInitialValues(
existingLlmProvider,
undefined,
defaultModelName
...useInitialValues(
isOnboarding,
LLMProviderName.CUSTOM,
existingLlmProvider
),
...(isOnboarding ? buildOnboardingInitialValues() : {}),
provider: existingLlmProvider?.provider ?? "",
api_version: existingLlmProvider?.api_version ?? "",
model_configurations: existingLlmProvider?.model_configurations.map(
(mc) => ({
name: mc.name,
@@ -259,11 +268,13 @@ export default function CustomModal({
});
return (
<Formik
<ModalWrapper
providerName={LLMProviderName.CUSTOM}
llmProvider={existingLlmProvider}
onClose={onClose}
initialValues={initialValues}
validationSchema={validationSchema}
validateOnMount={true}
onSubmit={async (values, { setSubmitting }) => {
onSubmit={async (values, { setSubmitting, setStatus }) => {
setSubmitting(true);
const modelConfigurations = values.model_configurations
@@ -283,13 +294,18 @@ export default function CustomModal({
return;
}
// Always send custom_config as a dict (even empty) so the backend
// preserves it as non-null — this is the signal that the provider was
// created via CustomModal.
const customConfig = keyValueListToDict(values.custom_config_list);
if (isOnboarding && onboardingState && onboardingActions) {
await submitOnboardingProvider({
providerName: values.provider,
payload: {
...values,
model_configurations: modelConfigurations,
custom_config: customConfigProcessing(values.custom_config_list),
custom_config: customConfig,
},
onboardingState,
onboardingActions,
@@ -306,19 +322,19 @@ export default function CustomModal({
providerName: values.provider,
values: {
...values,
selected_model_names: selectedModelNames,
custom_config: customConfigProcessing(values.custom_config_list),
visible_model_names: selectedModelNames,
custom_config: customConfig,
},
initialValues: {
...initialValues,
custom_config: customConfigProcessing(
custom_config: keyValueListToDict(
initialValues.custom_config_list
),
},
modelConfigurations,
existingLlmProvider,
shouldMarkAsDefault,
setIsTesting,
setStatus,
mutate,
onClose,
setSubmitting,
@@ -326,84 +342,87 @@ export default function CustomModal({
}
}}
>
{(formikProps) => (
<LLMConfigurationModalWrapper
providerEndpoint="custom"
existingProviderName={existingLlmProvider?.name}
onClose={onClose}
isFormValid={formikProps.isValid}
isDirty={formikProps.dirty}
isTesting={isTesting}
isSubmitting={formikProps.isSubmitting}
>
{!isOnboarding && (
<Section gap={0}>
<DisplayNameField disabled={!!existingLlmProvider} />
<FieldWrapper>
<InputLayouts.Vertical
name="provider"
title="Provider Name"
subDescription="Should be one of the providers listed at https://docs.litellm.ai/docs/providers."
>
<InputTypeInField
name="provider"
placeholder="Provider Name"
variant={existingLlmProvider ? "disabled" : undefined}
/>
</InputLayouts.Vertical>
</FieldWrapper>
</Section>
)}
<FieldSeparator />
<FieldWrapper>
<Section gap={0.75}>
<Content
title="Provider Configs"
description="Add properties as needed by the model provider. This is passed to LiteLLM completion() call as arguments in the environment variable. See LiteLLM documentation for more instructions."
widthVariant="full"
variant="section"
sizePreset="main-content"
/>
<KeyValueInput
items={formikProps.values.custom_config_list}
onChange={(items) =>
formikProps.setFieldValue("custom_config_list", items)
}
addButtonLabel="Add Line"
/>
</Section>
</FieldWrapper>
<FieldSeparator />
<Section gap={0.5}>
<FieldWrapper>
<Content
title="Models"
description="List LLM models you wish to use and their configurations for this provider. See full list of models at LiteLLM."
variant="section"
sizePreset="main-content"
widthVariant="full"
/>
</FieldWrapper>
<Card padding="sm">
<ModelConfigurationList formikProps={formikProps as any} />
</Card>
</Section>
{!isOnboarding && (
<>
<FieldSeparator />
<ModelsAccessField formikProps={formikProps} />
</>
)}
</LLMConfigurationModalWrapper>
{!isOnboarding && (
<InputLayouts.FieldPadder>
<InputLayouts.Vertical
name="provider"
title="Provider Name"
subDescription={markdown(
"Should be one of the providers listed at [LiteLLM](https://docs.litellm.ai/docs/providers)."
)}
>
<InputTypeInField
name="provider"
placeholder="Provider Name as shown on LiteLLM"
variant={existingLlmProvider ? "disabled" : undefined}
/>
</InputLayouts.Vertical>
</InputLayouts.FieldPadder>
)}
</Formik>
<APIBaseField optional />
<InputLayouts.FieldPadder>
<InputLayouts.Vertical
name="api_version"
title="API Version"
suffix="optional"
>
<InputTypeInField name="api_version" />
</InputLayouts.Vertical>
</InputLayouts.FieldPadder>
<APIKeyField
optional
subDescription="Paste your API key if your model provider requires authentication."
/>
<InputLayouts.FieldPadder>
<Section gap={0.75}>
<Content
title="Additional Configs"
description={markdown(
"Add extra properties as needed by the model provider. These are passed to LiteLLM's `completion()` call as [environment variables](https://docs.litellm.ai/docs/set_keys#environment-variables). See [documentation](https://docs.onyx.app/admins/ai_models/custom_inference_provider) for more instructions."
)}
widthVariant="full"
variant="section"
sizePreset="main-content"
/>
<CustomConfigKeyValue />
</Section>
</InputLayouts.FieldPadder>
{!isOnboarding && (
<>
<InputLayouts.FieldSeparator />
<DisplayNameField disabled={!!existingLlmProvider} />
</>
)}
<InputLayouts.FieldSeparator />
<Section gap={0.5}>
<InputLayouts.FieldPadder>
<Content
title="Models"
description="List LLM models you wish to use and their configurations for this provider. See full list of models at LiteLLM."
variant="section"
sizePreset="main-content"
widthVariant="full"
/>
</InputLayouts.FieldPadder>
<Card padding="sm">
<ModelConfigurationList />
</Card>
</Section>
{!isOnboarding && (
<>
<InputLayouts.FieldSeparator />
<ModelAccessField />
</>
)}
</ModalWrapper>
);
}
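
keyValueListToDict (replacing customConfigProcessing above) also changes behavior slightly: blank keys are skipped, so an empty "Add Line" row no longer produces an empty-string entry. For reference, its behavior on a typical custom-config list:

interface KeyValue {
  key: string;
  value: string;
}

// Copied from the diff above: blank keys are dropped; later duplicates win.
function keyValueListToDict(items: KeyValue[]): Record<string, string> {
  const result: Record<string, string> = {};
  for (const { key, value } of items) {
    if (key.trim() !== "") {
      result[key] = value;
    }
  }
  return result;
}

keyValueListToDict([
  { key: "CLOUDFLARE_ACCOUNT_ID", value: "my-account-id-123" },
  { key: "", value: "left over from an empty Add Line row" },
]);
// => { CLOUDFLARE_ACCOUNT_ID: "my-account-id-123" }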

@@ -2,23 +2,20 @@
import { useCallback, useEffect, useMemo, useState } from "react";
import { useSWRConfig } from "swr";
import { Formik, FormikProps } from "formik";
import { useFormikContext } from "formik";
import InputTypeInField from "@/refresh-components/form/InputTypeInField";
import * as InputLayouts from "@/layouts/input-layouts";
import PasswordInputTypeInField from "@/refresh-components/form/PasswordInputTypeInField";
import {
LLMProviderFormProps,
LLMProviderName,
LLMProviderView,
ModelConfiguration,
} from "@/interfaces/llm";
import * as Yup from "yup";
import { useWellKnownLLMProvider } from "@/hooks/useLLMProviders";
import {
buildDefaultInitialValues,
buildDefaultValidationSchema,
useInitialValues,
buildValidationSchema,
buildAvailableModelConfigurations,
buildOnboardingInitialValues,
BaseLLMFormValues,
} from "@/sections/modals/llmConfig/utils";
import {
@@ -26,13 +23,12 @@ import {
submitOnboardingProvider,
} from "@/sections/modals/llmConfig/svc";
import {
ModelsField,
APIKeyField,
APIBaseField,
ModelSelectionField,
DisplayNameField,
ModelsAccessField,
FieldSeparator,
FieldWrapper,
SingleDefaultModelField,
LLMConfigurationModalWrapper,
ModelAccessField,
ModalWrapper,
} from "@/sections/modals/llmConfig/shared";
import { fetchModels } from "@/app/admin/configuration/llm/utils";
import debounce from "lodash/debounce";
@@ -48,24 +44,19 @@ interface LMStudioFormValues extends BaseLLMFormValues {
}
interface LMStudioFormInternalsProps {
formikProps: FormikProps<LMStudioFormValues>;
existingLlmProvider: LLMProviderView | undefined;
fetchedModels: ModelConfiguration[];
setFetchedModels: (models: ModelConfiguration[]) => void;
isTesting: boolean;
onClose: () => void;
isOnboarding: boolean;
}
function LMStudioFormInternals({
formikProps,
existingLlmProvider,
fetchedModels,
setFetchedModels,
isTesting,
onClose,
isOnboarding,
}: LMStudioFormInternalsProps) {
const formikProps = useFormikContext<LMStudioFormValues>();
const initialApiKey =
(existingLlmProvider?.custom_config?.LM_STUDIO_API_KEY as string) ?? "";
@@ -120,69 +111,39 @@ function LMStudioFormInternals({
: existingLlmProvider?.model_configurations || [];
return (
<LLMConfigurationModalWrapper
providerEndpoint={LLMProviderName.LM_STUDIO}
existingProviderName={existingLlmProvider?.name}
onClose={onClose}
isFormValid={formikProps.isValid}
isDirty={formikProps.dirty}
isTesting={isTesting}
isSubmitting={formikProps.isSubmitting}
>
<FieldWrapper>
<InputLayouts.Vertical
name="api_base"
title="API Base URL"
subDescription="The base URL for your LM Studio server."
>
<InputTypeInField
name="api_base"
placeholder="Your LM Studio API base URL"
/>
</InputLayouts.Vertical>
</FieldWrapper>
<>
<APIBaseField
subDescription="The base URL for your LM Studio server."
placeholder="Your LM Studio API base URL"
/>
<FieldWrapper>
<InputLayouts.Vertical
name="custom_config.LM_STUDIO_API_KEY"
title="API Key"
subDescription="Optional API key if your LM Studio server requires authentication."
suffix="optional"
>
<PasswordInputTypeInField
name="custom_config.LM_STUDIO_API_KEY"
placeholder="API Key"
/>
</InputLayouts.Vertical>
</FieldWrapper>
<APIKeyField
name="custom_config.LM_STUDIO_API_KEY"
optional
subDescription="Optional API key if your LM Studio server requires authentication."
/>
{!isOnboarding && (
<>
<FieldSeparator />
<InputLayouts.FieldSeparator />
<DisplayNameField disabled={!!existingLlmProvider} />
</>
)}
<FieldSeparator />
{isOnboarding ? (
<SingleDefaultModelField placeholder="E.g. llama3.1" />
) : (
<ModelsField
modelConfigurations={currentModels}
formikProps={formikProps}
recommendedDefaultModel={null}
shouldShowAutoUpdateToggle={false}
/>
)}
<InputLayouts.FieldSeparator />
<ModelSelectionField
modelConfigurations={currentModels}
recommendedDefaultModel={null}
shouldShowAutoUpdateToggle={false}
/>
{!isOnboarding && (
<>
<FieldSeparator />
<ModelsAccessField formikProps={formikProps} />
<InputLayouts.FieldSeparator />
<ModelAccessField />
</>
)}
</LLMConfigurationModalWrapper>
</>
);
}
@@ -190,7 +151,6 @@ export default function LMStudioForm({
variant = "llm-configuration",
existingLlmProvider,
shouldMarkAsDefault,
open,
onOpenChange,
defaultModelName,
onboardingState,
@@ -198,15 +158,12 @@ export default function LMStudioForm({
llmDescriptor,
}: LLMProviderFormProps) {
const [fetchedModels, setFetchedModels] = useState<ModelConfiguration[]>([]);
const [isTesting, setIsTesting] = useState(false);
const isOnboarding = variant === "onboarding";
const { mutate } = useSWRConfig();
const { wellKnownLLMProvider } = useWellKnownLLMProvider(
LLMProviderName.LM_STUDIO
);
if (open === false) return null;
const onClose = () => onOpenChange?.(false);
const modelConfigurations = buildAvailableModelConfigurations(
@@ -214,46 +171,31 @@ export default function LMStudioForm({
wellKnownLLMProvider ?? llmDescriptor
);
const initialValues: LMStudioFormValues = isOnboarding
? ({
...buildOnboardingInitialValues(),
name: LLMProviderName.LM_STUDIO,
provider: LLMProviderName.LM_STUDIO,
api_base: DEFAULT_API_BASE,
default_model_name: "",
custom_config: {
LM_STUDIO_API_KEY: "",
},
} as LMStudioFormValues)
: {
...buildDefaultInitialValues(
existingLlmProvider,
modelConfigurations,
defaultModelName
),
api_base: existingLlmProvider?.api_base ?? DEFAULT_API_BASE,
custom_config: {
LM_STUDIO_API_KEY:
(existingLlmProvider?.custom_config?.LM_STUDIO_API_KEY as string) ??
"",
},
};
const initialValues: LMStudioFormValues = {
...useInitialValues(
isOnboarding,
LLMProviderName.LM_STUDIO,
existingLlmProvider
),
api_base: existingLlmProvider?.api_base ?? DEFAULT_API_BASE,
custom_config: {
LM_STUDIO_API_KEY:
(existingLlmProvider?.custom_config?.LM_STUDIO_API_KEY as string) ?? "",
},
} as LMStudioFormValues;
const validationSchema = isOnboarding
? Yup.object().shape({
api_base: Yup.string().required("API Base URL is required"),
default_model_name: Yup.string().required("Model name is required"),
})
: buildDefaultValidationSchema().shape({
api_base: Yup.string().required("API Base URL is required"),
});
const validationSchema = buildValidationSchema(isOnboarding, {
apiBase: true,
});
return (
<Formik
<ModalWrapper
providerName={LLMProviderName.LM_STUDIO}
llmProvider={existingLlmProvider}
onClose={onClose}
initialValues={initialValues}
validationSchema={validationSchema}
validateOnMount={true}
onSubmit={async (values, { setSubmitting }) => {
onSubmit={async (values, { setSubmitting, setStatus }) => {
const filteredCustomConfig = Object.fromEntries(
Object.entries(values.custom_config || {}).filter(([, v]) => v !== "")
);
@@ -291,7 +233,7 @@ export default function LMStudioForm({
fetchedModels.length > 0 ? fetchedModels : modelConfigurations,
existingLlmProvider,
shouldMarkAsDefault,
setIsTesting,
setStatus,
mutate,
onClose,
setSubmitting,
@@ -299,17 +241,12 @@ export default function LMStudioForm({
}
}}
>
{(formikProps) => (
<LMStudioFormInternals
formikProps={formikProps}
existingLlmProvider={existingLlmProvider}
fetchedModels={fetchedModels}
setFetchedModels={setFetchedModels}
isTesting={isTesting}
onClose={onClose}
isOnboarding={isOnboarding}
/>
)}
</Formik>
<LMStudioFormInternals
existingLlmProvider={existingLlmProvider}
fetchedModels={fetchedModels}
setFetchedModels={setFetchedModels}
isOnboarding={isOnboarding}
/>
</ModalWrapper>
);
}
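
LM Studio's onSubmit (like Ollama's below) strips empty-string entries out of custom_config before submitting. A plausible reading, though the diff doesn't state it, is that this keeps an untouched optional key field from overwriting a stored secret with "". The filter itself, pulled out for clarity:

function filterEmptyConfig(
  config: Record<string, string> | undefined
): Record<string, string> {
  // Same expression as the onSubmit handlers above.
  return Object.fromEntries(
    Object.entries(config ?? {}).filter(([, v]) => v !== "")
  );
}

filterEmptyConfig({ LM_STUDIO_API_KEY: "" }); // => {}
filterEmptyConfig({ LM_STUDIO_API_KEY: "lm-local-key" });
// => { LM_STUDIO_API_KEY: "lm-local-key" }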

@@ -2,8 +2,7 @@
import { useState, useEffect } from "react";
import { useSWRConfig } from "swr";
import { Formik, FormikProps } from "formik";
import InputTypeInField from "@/refresh-components/form/InputTypeInField";
import { useFormikContext } from "formik";
import * as InputLayouts from "@/layouts/input-layouts";
import {
LLMProviderFormProps,
@@ -12,13 +11,11 @@ import {
ModelConfiguration,
} from "@/interfaces/llm";
import { fetchLiteLLMProxyModels } from "@/app/admin/configuration/llm/utils";
import * as Yup from "yup";
import { useWellKnownLLMProvider } from "@/hooks/useLLMProviders";
import {
buildDefaultInitialValues,
buildDefaultValidationSchema,
useInitialValues,
buildValidationSchema,
buildAvailableModelConfigurations,
buildOnboardingInitialValues,
BaseLLMFormValues,
} from "@/sections/modals/llmConfig/utils";
import {
@@ -27,13 +24,11 @@ import {
} from "@/sections/modals/llmConfig/svc";
import {
APIKeyField,
ModelsField,
APIBaseField,
ModelSelectionField,
DisplayNameField,
ModelsAccessField,
FieldSeparator,
FieldWrapper,
SingleDefaultModelField,
LLMConfigurationModalWrapper,
ModelAccessField,
ModalWrapper,
} from "@/sections/modals/llmConfig/shared";
import { toast } from "@/hooks/useToast";
@@ -45,26 +40,21 @@ interface LiteLLMProxyModalValues extends BaseLLMFormValues {
}
interface LiteLLMProxyModalInternalsProps {
formikProps: FormikProps<LiteLLMProxyModalValues>;
existingLlmProvider: LLMProviderView | undefined;
fetchedModels: ModelConfiguration[];
setFetchedModels: (models: ModelConfiguration[]) => void;
modelConfigurations: ModelConfiguration[];
isTesting: boolean;
onClose: () => void;
isOnboarding: boolean;
}
function LiteLLMProxyModalInternals({
formikProps,
existingLlmProvider,
fetchedModels,
setFetchedModels,
modelConfigurations,
isTesting,
onClose,
isOnboarding,
}: LiteLLMProxyModalInternalsProps) {
const formikProps = useFormikContext<LiteLLMProxyModalValues>();
const currentModels =
fetchedModels.length > 0
? fetchedModels
@@ -98,58 +88,36 @@ function LiteLLMProxyModalInternals({
}, []);
return (
<LLMConfigurationModalWrapper
providerEndpoint={LLMProviderName.LITELLM_PROXY}
existingProviderName={existingLlmProvider?.name}
onClose={onClose}
isFormValid={formikProps.isValid}
isDirty={formikProps.dirty}
isTesting={isTesting}
isSubmitting={formikProps.isSubmitting}
>
<FieldWrapper>
<InputLayouts.Vertical
name="api_base"
title="API Base URL"
subDescription="The base URL for your LiteLLM Proxy server."
>
<InputTypeInField
name="api_base"
placeholder="https://your-litellm-proxy.com"
/>
</InputLayouts.Vertical>
</FieldWrapper>
<>
<APIBaseField
subDescription="The base URL for your LiteLLM Proxy server."
placeholder="https://your-litellm-proxy.com"
/>
<APIKeyField providerName="LiteLLM Proxy" />
{!isOnboarding && (
<>
<FieldSeparator />
<InputLayouts.FieldSeparator />
<DisplayNameField disabled={!!existingLlmProvider} />
</>
)}
<FieldSeparator />
{isOnboarding ? (
<SingleDefaultModelField placeholder="E.g. gpt-4o" />
) : (
<ModelsField
modelConfigurations={currentModels}
formikProps={formikProps}
recommendedDefaultModel={null}
shouldShowAutoUpdateToggle={false}
onRefetch={isFetchDisabled ? undefined : handleFetchModels}
/>
)}
<InputLayouts.FieldSeparator />
<ModelSelectionField
modelConfigurations={currentModels}
recommendedDefaultModel={null}
shouldShowAutoUpdateToggle={false}
onRefetch={isFetchDisabled ? undefined : handleFetchModels}
/>
{!isOnboarding && (
<>
<FieldSeparator />
<ModelsAccessField formikProps={formikProps} />
<InputLayouts.FieldSeparator />
<ModelAccessField />
</>
)}
</LLMConfigurationModalWrapper>
</>
);
}
@@ -157,7 +125,6 @@ export default function LiteLLMProxyModal({
variant = "llm-configuration",
existingLlmProvider,
shouldMarkAsDefault,
open,
onOpenChange,
defaultModelName,
onboardingState,
@@ -165,15 +132,12 @@ export default function LiteLLMProxyModal({
llmDescriptor,
}: LLMProviderFormProps) {
const [fetchedModels, setFetchedModels] = useState<ModelConfiguration[]>([]);
const [isTesting, setIsTesting] = useState(false);
const isOnboarding = variant === "onboarding";
const { mutate } = useSWRConfig();
const { wellKnownLLMProvider } = useWellKnownLLMProvider(
LLMProviderName.LITELLM_PROXY
);
if (open === false) return null;
const onClose = () => onOpenChange?.(false);
const modelConfigurations = buildAvailableModelConfigurations(
@@ -181,42 +145,28 @@ export default function LiteLLMProxyModal({
wellKnownLLMProvider ?? llmDescriptor
);
const initialValues: LiteLLMProxyModalValues = isOnboarding
? ({
...buildOnboardingInitialValues(),
name: LLMProviderName.LITELLM_PROXY,
provider: LLMProviderName.LITELLM_PROXY,
api_key: "",
api_base: DEFAULT_API_BASE,
default_model_name: "",
} as LiteLLMProxyModalValues)
: {
...buildDefaultInitialValues(
existingLlmProvider,
modelConfigurations,
defaultModelName
),
api_key: existingLlmProvider?.api_key ?? "",
api_base: existingLlmProvider?.api_base ?? DEFAULT_API_BASE,
};
const initialValues: LiteLLMProxyModalValues = {
...useInitialValues(
isOnboarding,
LLMProviderName.LITELLM_PROXY,
existingLlmProvider
),
api_base: existingLlmProvider?.api_base ?? DEFAULT_API_BASE,
} as LiteLLMProxyModalValues;
const validationSchema = isOnboarding
? Yup.object().shape({
api_key: Yup.string().required("API Key is required"),
api_base: Yup.string().required("API Base URL is required"),
default_model_name: Yup.string().required("Model name is required"),
})
: buildDefaultValidationSchema().shape({
api_key: Yup.string().required("API Key is required"),
api_base: Yup.string().required("API Base URL is required"),
});
const validationSchema = buildValidationSchema(isOnboarding, {
apiKey: true,
apiBase: true,
});
return (
<Formik
<ModalWrapper
providerName={LLMProviderName.LITELLM_PROXY}
llmProvider={existingLlmProvider}
onClose={onClose}
initialValues={initialValues}
validationSchema={validationSchema}
validateOnMount={true}
onSubmit={async (values, { setSubmitting }) => {
onSubmit={async (values, { setSubmitting, setStatus }) => {
if (isOnboarding && onboardingState && onboardingActions) {
const modelConfigsToUse =
fetchedModels.length > 0 ? fetchedModels : [];
@@ -242,7 +192,7 @@ export default function LiteLLMProxyModal({
fetchedModels.length > 0 ? fetchedModels : modelConfigurations,
existingLlmProvider,
shouldMarkAsDefault,
setIsTesting,
setStatus,
mutate,
onClose,
setSubmitting,
@@ -250,18 +200,13 @@ export default function LiteLLMProxyModal({
}
}}
>
{(formikProps) => (
<LiteLLMProxyModalInternals
formikProps={formikProps}
existingLlmProvider={existingLlmProvider}
fetchedModels={fetchedModels}
setFetchedModels={setFetchedModels}
modelConfigurations={modelConfigurations}
isTesting={isTesting}
onClose={onClose}
isOnboarding={isOnboarding}
/>
)}
</Formik>
<LiteLLMProxyModalInternals
existingLlmProvider={existingLlmProvider}
fetchedModels={fetchedModels}
setFetchedModels={setFetchedModels}
modelConfigurations={modelConfigurations}
isOnboarding={isOnboarding}
/>
</ModalWrapper>
);
}
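
Across these files, every Internals component drops its formikProps prop in favor of useFormikContext. That only works because ModalWrapper now renders its children inside <Formik>; formik's useFormikContext throws an invariant when no context is present. The consuming side, reduced to essentials:

import React from "react";
import { useFormikContext } from "formik";

interface ProxyFormValues {
  api_base: string;
  api_key: string;
}

// Must render as a descendant of <Formik> (here: inside ModalWrapper),
// otherwise useFormikContext throws at render time.
function ApiBaseSummary() {
  const { values, isValid, dirty } = useFormikContext<ProxyFormValues>();
  return (
    <span>
      {values.api_base || "(no base URL)"} {isValid && dirty ? "ready" : "incomplete"}
    </span>
  );
}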

@@ -1,23 +1,21 @@
"use client";
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import { useEffect, useState } from "react";
import { useSWRConfig } from "swr";
import { Formik, FormikProps } from "formik";
import InputTypeInField from "@/refresh-components/form/InputTypeInField";
import { useFormikContext } from "formik";
import * as InputLayouts from "@/layouts/input-layouts";
import PasswordInputTypeInField from "@/refresh-components/form/PasswordInputTypeInField";
import {
LLMProviderFormProps,
LLMProviderName,
LLMProviderView,
ModelConfiguration,
} from "@/interfaces/llm";
import * as Yup from "yup";
import { useWellKnownLLMProvider } from "@/hooks/useLLMProviders";
import {
buildDefaultInitialValues,
buildDefaultValidationSchema,
useInitialValues,
buildValidationSchema,
buildAvailableModelConfigurations,
buildOnboardingInitialValues,
BaseLLMFormValues,
} from "@/sections/modals/llmConfig/utils";
import {
@@ -25,21 +23,20 @@ import {
submitOnboardingProvider,
} from "@/sections/modals/llmConfig/svc";
import {
ModelsField,
APIBaseField,
ModelSelectionField,
DisplayNameField,
ModelsAccessField,
FieldSeparator,
SingleDefaultModelField,
LLMConfigurationModalWrapper,
ModelAccessField,
ModalWrapper,
} from "@/sections/modals/llmConfig/shared";
import { fetchOllamaModels } from "@/app/admin/configuration/llm/utils";
import debounce from "lodash/debounce";
import Tabs from "@/refresh-components/Tabs";
import { Card } from "@opal/components";
import { toast } from "@/hooks/useToast";
import InputTypeInField from "@/refresh-components/form/InputTypeInField";
const OLLAMA_PROVIDER_NAME = "ollama_chat";
const DEFAULT_API_BASE = "http://127.0.0.1:11434";
const CLOUD_API_BASE = "https://ollama.com";
const TAB_SELF_HOSTED = "self-hosted";
const TAB_CLOUD = "cloud";
@@ -51,75 +48,48 @@ interface OllamaModalValues extends BaseLLMFormValues {
}
interface OllamaModalInternalsProps {
formikProps: FormikProps<OllamaModalValues>;
existingLlmProvider: LLMProviderView | undefined;
fetchedModels: ModelConfiguration[];
setFetchedModels: (models: ModelConfiguration[]) => void;
isTesting: boolean;
onClose: () => void;
isOnboarding: boolean;
}
function OllamaModalInternals({
formikProps,
existingLlmProvider,
fetchedModels,
setFetchedModels,
isTesting,
onClose,
isOnboarding,
}: OllamaModalInternalsProps) {
const isInitialMount = useRef(true);
const formikProps = useFormikContext<OllamaModalValues>();
const doFetchModels = useCallback(
(apiBase: string, signal: AbortSignal) => {
fetchOllamaModels({
api_base: apiBase,
provider_name: existingLlmProvider?.name,
signal,
}).then((data) => {
if (signal.aborted) return;
if (data.error) {
toast.error(data.error);
setFetchedModels([]);
return;
}
setFetchedModels(data.models);
});
},
[existingLlmProvider?.name, setFetchedModels]
);
const handleFetchModels = async (signal?: AbortSignal) => {
// Only Ollama cloud accepts API key
const apiBase = formikProps.values.custom_config?.OLLAMA_API_KEY
? CLOUD_API_BASE
: formikProps.values.api_base;
const { models, error } = await fetchOllamaModels({
api_base: apiBase,
provider_name: existingLlmProvider?.name,
signal,
});
if (signal?.aborted) return;
if (error) {
throw new Error(error);
}
setFetchedModels(models);
};
const debouncedFetchModels = useMemo(
() => debounce(doFetchModels, 500),
[doFetchModels]
);
// Skip the initial fetch for new providers — api_base starts with a default
// value, which would otherwise trigger a fetch before the user has done
// anything. Existing providers should still auto-fetch on mount.
// Auto-fetch models on initial load when editing an existing provider
useEffect(() => {
if (isInitialMount.current) {
isInitialMount.current = false;
if (!existingLlmProvider) return;
if (existingLlmProvider) {
handleFetchModels().catch((err) => {
toast.error(
err instanceof Error ? err.message : "Failed to fetch models"
);
});
}
if (formikProps.values.api_base) {
const controller = new AbortController();
debouncedFetchModels(formikProps.values.api_base, controller.signal);
return () => {
debouncedFetchModels.cancel();
controller.abort();
};
} else {
setFetchedModels([]);
}
}, [
formikProps.values.api_base,
debouncedFetchModels,
setFetchedModels,
existingLlmProvider,
]);
// eslint-disable-next-line react-hooks/exhaustive-deps
}, []);
const currentModels =
fetchedModels.length > 0
@@ -131,15 +101,7 @@ function OllamaModalInternals({
existingLlmProvider && hasApiKey ? TAB_CLOUD : TAB_SELF_HOSTED;
return (
<LLMConfigurationModalWrapper
providerEndpoint={OLLAMA_PROVIDER_NAME}
existingProviderName={existingLlmProvider?.name}
onClose={onClose}
isFormValid={formikProps.isValid}
isDirty={formikProps.dirty}
isTesting={isTesting}
isSubmitting={formikProps.isSubmitting}
>
<>
<Card background="light" border="none" padding="sm">
<Tabs defaultValue={defaultTab}>
<Tabs.List>
@@ -148,7 +110,7 @@ function OllamaModalInternals({
</Tabs.Trigger>
<Tabs.Trigger value={TAB_CLOUD}>Ollama Cloud</Tabs.Trigger>
</Tabs.List>
<Tabs.Content value={TAB_SELF_HOSTED}>
<Tabs.Content value={TAB_SELF_HOSTED} padding={0}>
<InputLayouts.Vertical
name="api_base"
title="API Base URL"
@@ -178,31 +140,26 @@ function OllamaModalInternals({
{!isOnboarding && (
<>
<FieldSeparator />
<InputLayouts.FieldSeparator />
<DisplayNameField disabled={!!existingLlmProvider} />
</>
)}
<FieldSeparator />
{isOnboarding ? (
<SingleDefaultModelField placeholder="E.g. llama3.1" />
) : (
<ModelsField
modelConfigurations={currentModels}
formikProps={formikProps}
recommendedDefaultModel={null}
shouldShowAutoUpdateToggle={false}
/>
)}
<InputLayouts.FieldSeparator />
<ModelSelectionField
modelConfigurations={currentModels}
recommendedDefaultModel={null}
shouldShowAutoUpdateToggle={false}
onRefetch={handleFetchModels}
/>
{!isOnboarding && (
<>
<FieldSeparator />
<ModelsAccessField formikProps={formikProps} />
<InputLayouts.FieldSeparator />
<ModelAccessField />
</>
)}
</LLMConfigurationModalWrapper>
</>
);
}
@@ -210,7 +167,6 @@ export default function OllamaModal({
variant = "llm-configuration",
existingLlmProvider,
shouldMarkAsDefault,
open,
onOpenChange,
defaultModelName,
onboardingState,
@@ -218,13 +174,11 @@ export default function OllamaModal({
llmDescriptor,
}: LLMProviderFormProps) {
const [fetchedModels, setFetchedModels] = useState<ModelConfiguration[]>([]);
const [isTesting, setIsTesting] = useState(false);
const isOnboarding = variant === "onboarding";
const { mutate } = useSWRConfig();
const { wellKnownLLMProvider } =
useWellKnownLLMProvider(OLLAMA_PROVIDER_NAME);
if (open === false) return null;
const { wellKnownLLMProvider } = useWellKnownLLMProvider(
LLMProviderName.OLLAMA_CHAT
);
const onClose = () => onOpenChange?.(false);
@@ -233,52 +187,40 @@ export default function OllamaModal({
wellKnownLLMProvider ?? llmDescriptor
);
const initialValues: OllamaModalValues = isOnboarding
? ({
...buildOnboardingInitialValues(),
name: OLLAMA_PROVIDER_NAME,
provider: OLLAMA_PROVIDER_NAME,
api_base: DEFAULT_API_BASE,
default_model_name: "",
custom_config: {
OLLAMA_API_KEY: "",
},
} as OllamaModalValues)
: {
...buildDefaultInitialValues(
existingLlmProvider,
modelConfigurations,
defaultModelName
),
api_base: existingLlmProvider?.api_base ?? DEFAULT_API_BASE,
custom_config: {
OLLAMA_API_KEY:
(existingLlmProvider?.custom_config?.OLLAMA_API_KEY as string) ??
"",
},
};
const initialValues: OllamaModalValues = {
...useInitialValues(
isOnboarding,
LLMProviderName.OLLAMA_CHAT,
existingLlmProvider
),
api_base: existingLlmProvider?.api_base ?? DEFAULT_API_BASE,
custom_config: {
OLLAMA_API_KEY:
(existingLlmProvider?.custom_config?.OLLAMA_API_KEY as string) ?? "",
},
} as OllamaModalValues;
const validationSchema = isOnboarding
? Yup.object().shape({
api_base: Yup.string().required("API Base URL is required"),
default_model_name: Yup.string().required("Model name is required"),
})
: buildDefaultValidationSchema().shape({
api_base: Yup.string().required("API Base URL is required"),
});
const validationSchema = buildValidationSchema(isOnboarding, {
apiBase: true,
});
return (
<Formik
<ModalWrapper
providerName={LLMProviderName.OLLAMA_CHAT}
llmProvider={existingLlmProvider}
onClose={onClose}
initialValues={initialValues}
validationSchema={validationSchema}
validateOnMount={true}
onSubmit={async (values, { setSubmitting }) => {
onSubmit={async (values, { setSubmitting, setStatus }) => {
const filteredCustomConfig = Object.fromEntries(
Object.entries(values.custom_config || {}).filter(([, v]) => v !== "")
);
const submitValues = {
...values,
api_base: filteredCustomConfig.OLLAMA_API_KEY
? CLOUD_API_BASE
: values.api_base,
custom_config:
Object.keys(filteredCustomConfig).length > 0
? filteredCustomConfig
@@ -290,7 +232,7 @@ export default function OllamaModal({
fetchedModels.length > 0 ? fetchedModels : [];
await submitOnboardingProvider({
providerName: OLLAMA_PROVIDER_NAME,
providerName: LLMProviderName.OLLAMA_CHAT,
payload: {
...submitValues,
model_configurations: modelConfigsToUse,
@@ -303,14 +245,14 @@ export default function OllamaModal({
});
} else {
await submitLLMProvider({
providerName: OLLAMA_PROVIDER_NAME,
providerName: LLMProviderName.OLLAMA_CHAT,
values: submitValues,
initialValues,
modelConfigurations:
fetchedModels.length > 0 ? fetchedModels : modelConfigurations,
existingLlmProvider,
shouldMarkAsDefault,
setIsTesting,
setStatus,
mutate,
onClose,
setSubmitting,
@@ -318,17 +260,12 @@ export default function OllamaModal({
}
}}
>
{(formikProps) => (
<OllamaModalInternals
formikProps={formikProps}
existingLlmProvider={existingLlmProvider}
fetchedModels={fetchedModels}
setFetchedModels={setFetchedModels}
isTesting={isTesting}
onClose={onClose}
isOnboarding={isOnboarding}
/>
)}
</Formik>
<OllamaModalInternals
existingLlmProvider={existingLlmProvider}
fetchedModels={fetchedModels}
setFetchedModels={setFetchedModels}
isOnboarding={isOnboarding}
/>
</ModalWrapper>
);
}
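
The Ollama base-URL resolution appears twice above, once in handleFetchModels and once in onSubmit: when OLLAMA_API_KEY is set, the cloud URL wins over whatever is in api_base. Extracted for clarity (the helper name is illustrative, not from the code):

const DEFAULT_API_BASE = "http://127.0.0.1:11434";
const CLOUD_API_BASE = "https://ollama.com";

// Only Ollama Cloud accepts an API key, so its presence selects the base URL.
function resolveOllamaApiBase(values: {
  api_base: string;
  custom_config?: { OLLAMA_API_KEY?: string };
}): string {
  return values.custom_config?.OLLAMA_API_KEY ? CLOUD_API_BASE : values.api_base;
}

resolveOllamaApiBase({ api_base: DEFAULT_API_BASE });
// => "http://127.0.0.1:11434" (self-hosted)
resolveOllamaApiBase({
  api_base: DEFAULT_API_BASE,
  custom_config: { OLLAMA_API_KEY: "ollama-cloud-key" },
});
// => "https://ollama.com"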

@@ -0,0 +1,211 @@
"use client";
import { useState, useEffect } from "react";
import { markdown } from "@opal/utils";
import { useSWRConfig } from "swr";
import { useFormikContext } from "formik";
import * as InputLayouts from "@/layouts/input-layouts";
import {
LLMProviderFormProps,
LLMProviderName,
LLMProviderView,
ModelConfiguration,
} from "@/interfaces/llm";
import { fetchOpenAICompatibleModels } from "@/app/admin/configuration/llm/utils";
import { useWellKnownLLMProvider } from "@/hooks/useLLMProviders";
import {
useInitialValues,
buildValidationSchema,
buildAvailableModelConfigurations,
BaseLLMFormValues,
} from "@/sections/modals/llmConfig/utils";
import {
submitLLMProvider,
submitOnboardingProvider,
} from "@/sections/modals/llmConfig/svc";
import {
APIBaseField,
APIKeyField,
ModelSelectionField,
DisplayNameField,
ModelAccessField,
ModalWrapper,
} from "@/sections/modals/llmConfig/shared";
import { toast } from "@/hooks/useToast";
interface OpenAICompatibleModalValues extends BaseLLMFormValues {
api_key: string;
api_base: string;
}
interface OpenAICompatibleModalInternalsProps {
existingLlmProvider: LLMProviderView | undefined;
fetchedModels: ModelConfiguration[];
setFetchedModels: (models: ModelConfiguration[]) => void;
modelConfigurations: ModelConfiguration[];
isOnboarding: boolean;
}
function OpenAICompatibleModalInternals({
existingLlmProvider,
fetchedModels,
setFetchedModels,
modelConfigurations,
isOnboarding,
}: OpenAICompatibleModalInternalsProps) {
const formikProps = useFormikContext<OpenAICompatibleModalValues>();
const currentModels =
fetchedModels.length > 0
? fetchedModels
: existingLlmProvider?.model_configurations || modelConfigurations;
const isFetchDisabled = !formikProps.values.api_base;
const handleFetchModels = async () => {
const { models, error } = await fetchOpenAICompatibleModels({
api_base: formikProps.values.api_base,
api_key: formikProps.values.api_key || undefined,
provider_name: existingLlmProvider?.name,
});
if (error) {
throw new Error(error);
}
setFetchedModels(models);
};
// Auto-fetch models on initial load when editing an existing provider
useEffect(() => {
if (existingLlmProvider && !isFetchDisabled) {
handleFetchModels().catch((err) => {
toast.error(
err instanceof Error ? err.message : "Failed to fetch models"
);
});
}
// eslint-disable-next-line react-hooks/exhaustive-deps
}, []);
return (
<>
<APIBaseField
subDescription="The base URL of your OpenAI-compatible server."
placeholder="http://localhost:8000/v1"
/>
<APIKeyField
optional
subDescription={markdown(
"Provide an API key if your server requires authentication."
)}
/>
{!isOnboarding && (
<>
<InputLayouts.FieldSeparator />
<DisplayNameField disabled={!!existingLlmProvider} />
</>
)}
<InputLayouts.FieldSeparator />
<ModelSelectionField
modelConfigurations={currentModels}
recommendedDefaultModel={null}
shouldShowAutoUpdateToggle={false}
onRefetch={isFetchDisabled ? undefined : handleFetchModels}
/>
{!isOnboarding && (
<>
<InputLayouts.FieldSeparator />
<ModelAccessField />
</>
)}
</>
);
}
export default function OpenAICompatibleModal({
variant = "llm-configuration",
existingLlmProvider,
shouldMarkAsDefault,
onOpenChange,
defaultModelName,
onboardingState,
onboardingActions,
llmDescriptor,
}: LLMProviderFormProps) {
const [fetchedModels, setFetchedModels] = useState<ModelConfiguration[]>([]);
const isOnboarding = variant === "onboarding";
const { mutate } = useSWRConfig();
const { wellKnownLLMProvider } = useWellKnownLLMProvider(
LLMProviderName.OPENAI_COMPATIBLE
);
const onClose = () => onOpenChange?.(false);
const modelConfigurations = buildAvailableModelConfigurations(
existingLlmProvider,
wellKnownLLMProvider ?? llmDescriptor
);
const initialValues = useInitialValues(
isOnboarding,
LLMProviderName.OPENAI_COMPATIBLE,
existingLlmProvider
) as OpenAICompatibleModalValues;
const validationSchema = buildValidationSchema(isOnboarding, {
apiBase: true,
});
return (
<ModalWrapper
providerName={LLMProviderName.OPENAI_COMPATIBLE}
llmProvider={existingLlmProvider}
onClose={onClose}
initialValues={initialValues}
validationSchema={validationSchema}
onSubmit={async (values, { setSubmitting, setStatus }) => {
if (isOnboarding && onboardingState && onboardingActions) {
const modelConfigsToUse =
fetchedModels.length > 0 ? fetchedModels : [];
await submitOnboardingProvider({
providerName: LLMProviderName.OPENAI_COMPATIBLE,
payload: {
...values,
model_configurations: modelConfigsToUse,
},
onboardingState,
onboardingActions,
isCustomProvider: false,
onClose,
setIsSubmitting: setSubmitting,
});
} else {
await submitLLMProvider({
providerName: LLMProviderName.OPENAI_COMPATIBLE,
values,
initialValues,
modelConfigurations:
fetchedModels.length > 0 ? fetchedModels : modelConfigurations,
existingLlmProvider,
shouldMarkAsDefault,
setStatus,
mutate,
onClose,
setSubmitting,
});
}
}}
>
<OpenAICompatibleModalInternals
existingLlmProvider={existingLlmProvider}
fetchedModels={fetchedModels}
setFetchedModels={setFetchedModels}
modelConfigurations={modelConfigurations}
isOnboarding={isOnboarding}
/>
</ModalWrapper>
);
}
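
The new OpenAICompatibleModal also shows the fetch pattern that replaces the old debounced fetch-on-keystroke: fetchers throw on error, a mount-only effect catches and toasts when editing an existing provider, and later refetches are user-driven via ModelSelectionField's onRefetch. The effect, as a reusable sketch (the hook name is illustrative):

import { useEffect } from "react";

function useFetchOnEditMount(
  isEditing: boolean,
  fetchModels: () => Promise<void>,
  onError: (message: string) => void
) {
  useEffect(() => {
    if (!isEditing) return;
    // Fetch once on mount; subsequent refetches go through onRefetch.
    fetchModels().catch((err) =>
      onError(err instanceof Error ? err.message : "Failed to fetch models")
    );
    // Deliberately mount-only, matching the eslint-disable in the diffs above.
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, []);
}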

@@ -1,16 +1,12 @@
"use client";
import { useState } from "react";
import { useSWRConfig } from "swr";
import { Formik } from "formik";
import { LLMProviderFormProps } from "@/interfaces/llm";
import * as Yup from "yup";
import { LLMProviderFormProps, LLMProviderName } from "@/interfaces/llm";
import { useWellKnownLLMProvider } from "@/hooks/useLLMProviders";
import {
buildDefaultInitialValues,
buildDefaultValidationSchema,
useInitialValues,
buildValidationSchema,
buildAvailableModelConfigurations,
buildOnboardingInitialValues,
} from "@/sections/modals/llmConfig/utils";
import {
submitLLMProvider,
@@ -18,22 +14,17 @@ import {
} from "@/sections/modals/llmConfig/svc";
import {
APIKeyField,
ModelsField,
ModelSelectionField,
DisplayNameField,
FieldSeparator,
ModelsAccessField,
SingleDefaultModelField,
LLMConfigurationModalWrapper,
ModelAccessField,
ModalWrapper,
} from "@/sections/modals/llmConfig/shared";
const OPENAI_PROVIDER_NAME = "openai";
const DEFAULT_DEFAULT_MODEL_NAME = "gpt-5.2";
import * as InputLayouts from "@/layouts/input-layouts";
export default function OpenAIModal({
variant = "llm-configuration",
existingLlmProvider,
shouldMarkAsDefault,
open,
onOpenChange,
defaultModelName,
onboardingState,
@@ -41,12 +32,10 @@ export default function OpenAIModal({
llmDescriptor,
}: LLMProviderFormProps) {
const isOnboarding = variant === "onboarding";
const [isTesting, setIsTesting] = useState(false);
const { mutate } = useSWRConfig();
const { wellKnownLLMProvider } =
useWellKnownLLMProvider(OPENAI_PROVIDER_NAME);
if (open === false) return null;
const { wellKnownLLMProvider } = useWellKnownLLMProvider(
LLMProviderName.OPENAI
);
const onClose = () => onOpenChange?.(false);
@@ -55,57 +44,34 @@ export default function OpenAIModal({
wellKnownLLMProvider ?? llmDescriptor
);
const initialValues = isOnboarding
? {
...buildOnboardingInitialValues(),
name: OPENAI_PROVIDER_NAME,
provider: OPENAI_PROVIDER_NAME,
api_key: "",
default_model_name: DEFAULT_DEFAULT_MODEL_NAME,
}
: {
...buildDefaultInitialValues(
existingLlmProvider,
modelConfigurations,
defaultModelName
),
api_key: existingLlmProvider?.api_key ?? "",
default_model_name:
(defaultModelName &&
modelConfigurations.some((m) => m.name === defaultModelName)
? defaultModelName
: undefined) ??
wellKnownLLMProvider?.recommended_default_model?.name ??
DEFAULT_DEFAULT_MODEL_NAME,
is_auto_mode: existingLlmProvider?.is_auto_mode ?? true,
};
const initialValues = useInitialValues(
isOnboarding,
LLMProviderName.OPENAI,
existingLlmProvider
);
const validationSchema = isOnboarding
? Yup.object().shape({
api_key: Yup.string().required("API Key is required"),
default_model_name: Yup.string().required("Model name is required"),
})
: buildDefaultValidationSchema().shape({
api_key: Yup.string().required("API Key is required"),
});
const validationSchema = buildValidationSchema(isOnboarding, {
apiKey: true,
});
return (
<Formik
<ModalWrapper
providerName={LLMProviderName.OPENAI}
llmProvider={existingLlmProvider}
onClose={onClose}
initialValues={initialValues}
validationSchema={validationSchema}
validateOnMount={true}
onSubmit={async (values, { setSubmitting }) => {
onSubmit={async (values, { setSubmitting, setStatus }) => {
if (isOnboarding && onboardingState && onboardingActions) {
const modelConfigsToUse =
(wellKnownLLMProvider ?? llmDescriptor)?.known_models ?? [];
await submitOnboardingProvider({
providerName: OPENAI_PROVIDER_NAME,
providerName: LLMProviderName.OPENAI,
payload: {
...values,
model_configurations: modelConfigsToUse,
is_auto_mode:
values.default_model_name === DEFAULT_DEFAULT_MODEL_NAME,
is_auto_mode: values.is_auto_mode,
},
onboardingState,
onboardingActions,
@@ -115,13 +81,13 @@ export default function OpenAIModal({
});
} else {
await submitLLMProvider({
providerName: OPENAI_PROVIDER_NAME,
providerName: LLMProviderName.OPENAI,
values,
initialValues,
modelConfigurations,
existingLlmProvider,
shouldMarkAsDefault,
setIsTesting,
setStatus,
mutate,
onClose,
setSubmitting,
@@ -129,47 +95,30 @@ export default function OpenAIModal({
}
}}
>
{(formikProps) => (
<LLMConfigurationModalWrapper
providerEndpoint={OPENAI_PROVIDER_NAME}
existingProviderName={existingLlmProvider?.name}
onClose={onClose}
isFormValid={formikProps.isValid}
isDirty={formikProps.dirty}
isTesting={isTesting}
isSubmitting={formikProps.isSubmitting}
>
<APIKeyField providerName="OpenAI" />
<APIKeyField providerName="OpenAI" />
{!isOnboarding && (
<>
<FieldSeparator />
<DisplayNameField disabled={!!existingLlmProvider} />
</>
)}
<FieldSeparator />
{isOnboarding ? (
<SingleDefaultModelField placeholder="E.g. gpt-5.2" />
) : (
<ModelsField
modelConfigurations={modelConfigurations}
formikProps={formikProps}
recommendedDefaultModel={
wellKnownLLMProvider?.recommended_default_model ?? null
}
shouldShowAutoUpdateToggle={true}
/>
)}
{!isOnboarding && (
<>
<FieldSeparator />
<ModelsAccessField formikProps={formikProps} />
</>
)}
</LLMConfigurationModalWrapper>
{!isOnboarding && (
<>
<InputLayouts.FieldSeparator />
<DisplayNameField disabled={!!existingLlmProvider} />
</>
)}
</Formik>
<InputLayouts.FieldSeparator />
<ModelSelectionField
modelConfigurations={modelConfigurations}
recommendedDefaultModel={
wellKnownLLMProvider?.recommended_default_model ?? null
}
shouldShowAutoUpdateToggle={true}
/>
{!isOnboarding && (
<>
<InputLayouts.FieldSeparator />
<ModelAccessField />
</>
)}
</ModalWrapper>
);
}
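
useInitialValues does for initial values what buildValidationSchema does for schemas. Its implementation isn't in this excerpt, but the call sites and the deleted branches above pin down most of it: a hook (so it can call other hooks internally) keyed on isOnboarding, the provider name, and the existing provider. A sketch under those assumptions:

interface BaseLLMFormValues {
  name: string;
  provider: string;
  api_key: string;
  api_base: string;
  default_model_name: string;
  is_auto_mode: boolean;
}

interface ExistingProviderLike {
  name?: string;
  provider?: string;
  api_key?: string;
  api_base?: string;
  default_model_name?: string;
  is_auto_mode?: boolean;
}

// Field list and fallbacks inferred from the deleted code in these hunks;
// onboarding-specific tweaks and test-model resolution are elided.
function useInitialValues(
  isOnboarding: boolean,
  providerName: string,
  existing?: ExistingProviderLike
): BaseLLMFormValues {
  return {
    name: existing?.name ?? providerName,
    provider: existing?.provider ?? providerName,
    api_key: existing?.api_key ?? "",
    api_base: existing?.api_base ?? "",
    default_model_name: existing?.default_model_name ?? "",
    // The deleted OpenAI branch defaulted is_auto_mode the same way.
    is_auto_mode: existing?.is_auto_mode ?? true,
  };
}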

@@ -2,22 +2,20 @@
import { useState, useEffect } from "react";
import { useSWRConfig } from "swr";
import { Formik, FormikProps } from "formik";
import InputTypeInField from "@/refresh-components/form/InputTypeInField";
import { useFormikContext } from "formik";
import * as InputLayouts from "@/layouts/input-layouts";
import {
LLMProviderFormProps,
LLMProviderName,
LLMProviderView,
ModelConfiguration,
} from "@/interfaces/llm";
import { fetchOpenRouterModels } from "@/app/admin/configuration/llm/utils";
import * as Yup from "yup";
import { useWellKnownLLMProvider } from "@/hooks/useLLMProviders";
import {
buildDefaultInitialValues,
buildDefaultValidationSchema,
useInitialValues,
buildValidationSchema,
buildAvailableModelConfigurations,
buildOnboardingInitialValues,
BaseLLMFormValues,
} from "@/sections/modals/llmConfig/utils";
import {
@@ -26,44 +24,37 @@ import {
} from "@/sections/modals/llmConfig/svc";
import {
APIKeyField,
ModelsField,
APIBaseField,
ModelSelectionField,
DisplayNameField,
ModelsAccessField,
FieldSeparator,
FieldWrapper,
SingleDefaultModelField,
LLMConfigurationModalWrapper,
ModelAccessField,
ModalWrapper,
} from "@/sections/modals/llmConfig/shared";
import { toast } from "@/hooks/useToast";
const OPENROUTER_PROVIDER_NAME = "openrouter";
const DEFAULT_API_BASE = "https://openrouter.ai/api/v1";
interface OpenRouterModalValues extends BaseLLMFormValues {
api_key: string;
api_base: string;
}
interface OpenRouterModalInternalsProps {
formikProps: FormikProps<OpenRouterModalValues>;
existingLlmProvider: LLMProviderView | undefined;
fetchedModels: ModelConfiguration[];
setFetchedModels: (models: ModelConfiguration[]) => void;
modelConfigurations: ModelConfiguration[];
isTesting: boolean;
onClose: () => void;
isOnboarding: boolean;
}
function OpenRouterModalInternals({
formikProps,
existingLlmProvider,
fetchedModels,
setFetchedModels,
modelConfigurations,
isTesting,
onClose,
isOnboarding,
}: OpenRouterModalInternalsProps) {
const formikProps = useFormikContext<OpenRouterModalValues>();
const currentModels =
fetchedModels.length > 0
? fetchedModels
@@ -97,58 +88,36 @@ function OpenRouterModalInternals({
}, []);
return (
<LLMConfigurationModalWrapper
providerEndpoint={OPENROUTER_PROVIDER_NAME}
existingProviderName={existingLlmProvider?.name}
onClose={onClose}
isFormValid={formikProps.isValid}
isDirty={formikProps.dirty}
isTesting={isTesting}
isSubmitting={formikProps.isSubmitting}
>
<FieldWrapper>
<InputLayouts.Vertical
name="api_base"
title="API Base URL"
subDescription="Paste your OpenRouter-compatible endpoint URL or use OpenRouter API directly."
>
<InputTypeInField
name="api_base"
placeholder="Your OpenRouter base URL"
/>
</InputLayouts.Vertical>
</FieldWrapper>
<>
<APIBaseField
subDescription="Paste your OpenRouter-compatible endpoint URL or use OpenRouter API directly."
placeholder="Your OpenRouter base URL"
/>
<APIKeyField providerName="OpenRouter" />
{!isOnboarding && (
<>
<FieldSeparator />
<InputLayouts.FieldSeparator />
<DisplayNameField disabled={!!existingLlmProvider} />
</>
)}
<FieldSeparator />
{isOnboarding ? (
<SingleDefaultModelField placeholder="E.g. openai/gpt-4o" />
) : (
<ModelsField
modelConfigurations={currentModels}
formikProps={formikProps}
recommendedDefaultModel={null}
shouldShowAutoUpdateToggle={false}
onRefetch={isFetchDisabled ? undefined : handleFetchModels}
/>
)}
<InputLayouts.FieldSeparator />
<ModelSelectionField
modelConfigurations={currentModels}
recommendedDefaultModel={null}
shouldShowAutoUpdateToggle={false}
onRefetch={isFetchDisabled ? undefined : handleFetchModels}
/>
{!isOnboarding && (
<>
<FieldSeparator />
<ModelsAccessField formikProps={formikProps} />
<InputLayouts.FieldSeparator />
<ModelAccessField />
</>
)}
</LLMConfigurationModalWrapper>
</>
);
}
@@ -156,7 +125,6 @@ export default function OpenRouterModal({
variant = "llm-configuration",
existingLlmProvider,
shouldMarkAsDefault,
open,
onOpenChange,
defaultModelName,
onboardingState,
@@ -164,15 +132,12 @@ export default function OpenRouterModal({
llmDescriptor,
}: LLMProviderFormProps) {
const [fetchedModels, setFetchedModels] = useState<ModelConfiguration[]>([]);
const [isTesting, setIsTesting] = useState(false);
const isOnboarding = variant === "onboarding";
const { mutate } = useSWRConfig();
const { wellKnownLLMProvider } = useWellKnownLLMProvider(
OPENROUTER_PROVIDER_NAME
LLMProviderName.OPENROUTER
);
if (open === false) return null;
const onClose = () => onOpenChange?.(false);
const modelConfigurations = buildAvailableModelConfigurations(
@@ -180,48 +145,34 @@ export default function OpenRouterModal({
wellKnownLLMProvider ?? llmDescriptor
);
const initialValues: OpenRouterModalValues = isOnboarding
? ({
...buildOnboardingInitialValues(),
name: OPENROUTER_PROVIDER_NAME,
provider: OPENROUTER_PROVIDER_NAME,
api_key: "",
api_base: DEFAULT_API_BASE,
default_model_name: "",
} as OpenRouterModalValues)
: {
...buildDefaultInitialValues(
existingLlmProvider,
modelConfigurations,
defaultModelName
),
api_key: existingLlmProvider?.api_key ?? "",
api_base: existingLlmProvider?.api_base ?? DEFAULT_API_BASE,
};
const initialValues: OpenRouterModalValues = {
...useInitialValues(
isOnboarding,
LLMProviderName.OPENROUTER,
existingLlmProvider
),
api_base: existingLlmProvider?.api_base ?? DEFAULT_API_BASE,
} as OpenRouterModalValues;
const validationSchema = isOnboarding
? Yup.object().shape({
api_key: Yup.string().required("API Key is required"),
api_base: Yup.string().required("API Base URL is required"),
default_model_name: Yup.string().required("Model name is required"),
})
: buildDefaultValidationSchema().shape({
api_key: Yup.string().required("API Key is required"),
api_base: Yup.string().required("API Base URL is required"),
});
const validationSchema = buildValidationSchema(isOnboarding, {
apiKey: true,
apiBase: true,
});
return (
<Formik
<ModalWrapper
providerName={LLMProviderName.OPENROUTER}
llmProvider={existingLlmProvider}
onClose={onClose}
initialValues={initialValues}
validationSchema={validationSchema}
validateOnMount={true}
onSubmit={async (values, { setSubmitting }) => {
onSubmit={async (values, { setSubmitting, setStatus }) => {
if (isOnboarding && onboardingState && onboardingActions) {
const modelConfigsToUse =
fetchedModels.length > 0 ? fetchedModels : [];
await submitOnboardingProvider({
providerName: OPENROUTER_PROVIDER_NAME,
providerName: LLMProviderName.OPENROUTER,
payload: {
...values,
model_configurations: modelConfigsToUse,
@@ -234,14 +185,14 @@ export default function OpenRouterModal({
});
} else {
await submitLLMProvider({
providerName: OPENROUTER_PROVIDER_NAME,
providerName: LLMProviderName.OPENROUTER,
values,
initialValues,
modelConfigurations:
fetchedModels.length > 0 ? fetchedModels : modelConfigurations,
existingLlmProvider,
shouldMarkAsDefault,
setIsTesting,
setStatus,
mutate,
onClose,
setSubmitting,
@@ -249,18 +200,13 @@ export default function OpenRouterModal({
}
}}
>
{(formikProps) => (
<OpenRouterModalInternals
formikProps={formikProps}
existingLlmProvider={existingLlmProvider}
fetchedModels={fetchedModels}
setFetchedModels={setFetchedModels}
modelConfigurations={modelConfigurations}
isTesting={isTesting}
onClose={onClose}
isOnboarding={isOnboarding}
/>
)}
</Formik>
<OpenRouterModalInternals
existingLlmProvider={existingLlmProvider}
fetchedModels={fetchedModels}
setFetchedModels={setFetchedModels}
modelConfigurations={modelConfigurations}
isOnboarding={isOnboarding}
/>
</ModalWrapper>
);
}
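The buildValidationSchema(isOnboarding, { apiKey, apiBase, extra }) helper used above replaces the hand-written onboarding/admin branching visible in the removed lines. A plausible reconstruction from its two call sites in this diff — the actual utils.ts implementation is not shown, and buildDefaultValidationSchema below is a stand-in for the old shared base schema:

```ts
import * as Yup from "yup";

// Stand-in for the pre-refactor base schema (display name, etc.).
const buildDefaultValidationSchema = () =>
  Yup.object().shape({
    name: Yup.string().required("Display Name is required"),
  });

interface ValidationOptions {
  apiKey?: boolean;
  apiBase?: boolean;
  extra?: Record<string, Yup.AnySchema>;
}

function buildValidationSchema(
  isOnboarding: boolean,
  { apiKey, apiBase, extra = {} }: ValidationOptions = {}
) {
  const shape: Record<string, Yup.AnySchema> = { ...extra };
  if (apiKey) shape.api_key = Yup.string().required("API Key is required");
  if (apiBase)
    shape.api_base = Yup.string().required("API Base URL is required");
  if (isOnboarding) {
    // Onboarding asks for a single model name instead of full model config,
    // and skips the admin-only base fields.
    shape.default_model_name = Yup.string().required("Model name is required");
    return Yup.object().shape(shape);
  }
  return buildDefaultValidationSchema().shape(shape);
}
```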

@@ -1,19 +1,16 @@
"use client";
import { useState } from "react";
import { useSWRConfig } from "swr";
import { Formik } from "formik";
import { FileUploadFormField } from "@/components/Field";
import InputTypeInField from "@/refresh-components/form/InputTypeInField";
import * as InputLayouts from "@/layouts/input-layouts";
import { LLMProviderFormProps } from "@/interfaces/llm";
import { LLMProviderFormProps, LLMProviderName } from "@/interfaces/llm";
import * as Yup from "yup";
import { useWellKnownLLMProvider } from "@/hooks/useLLMProviders";
import {
buildDefaultInitialValues,
buildDefaultValidationSchema,
useInitialValues,
buildValidationSchema,
buildAvailableModelConfigurations,
buildOnboardingInitialValues,
BaseLLMFormValues,
} from "@/sections/modals/llmConfig/utils";
import {
@@ -21,18 +18,12 @@ import {
submitOnboardingProvider,
} from "@/sections/modals/llmConfig/svc";
import {
ModelsField,
ModelSelectionField,
DisplayNameField,
FieldSeparator,
FieldWrapper,
ModelsAccessField,
SingleDefaultModelField,
LLMConfigurationModalWrapper,
ModelAccessField,
ModalWrapper,
} from "@/sections/modals/llmConfig/shared";
const VERTEXAI_PROVIDER_NAME = "vertex_ai";
const VERTEXAI_DISPLAY_NAME = "Google Cloud Vertex AI";
const VERTEXAI_DEFAULT_MODEL = "gemini-2.5-pro";
const VERTEXAI_DEFAULT_LOCATION = "global";
interface VertexAIModalValues extends BaseLLMFormValues {
@@ -46,7 +37,6 @@ export default function VertexAIModal({
variant = "llm-configuration",
existingLlmProvider,
shouldMarkAsDefault,
open,
onOpenChange,
defaultModelName,
onboardingState,
@@ -54,14 +44,11 @@ export default function VertexAIModal({
llmDescriptor,
}: LLMProviderFormProps) {
const isOnboarding = variant === "onboarding";
const [isTesting, setIsTesting] = useState(false);
const { mutate } = useSWRConfig();
const { wellKnownLLMProvider } = useWellKnownLLMProvider(
VERTEXAI_PROVIDER_NAME
LLMProviderName.VERTEX_AI
);
if (open === false) return null;
const onClose = () => onOpenChange?.(false);
const modelConfigurations = buildAvailableModelConfigurations(
@@ -69,66 +56,41 @@ export default function VertexAIModal({
wellKnownLLMProvider ?? llmDescriptor
);
const initialValues: VertexAIModalValues = isOnboarding
? ({
...buildOnboardingInitialValues(),
name: VERTEXAI_PROVIDER_NAME,
provider: VERTEXAI_PROVIDER_NAME,
default_model_name: VERTEXAI_DEFAULT_MODEL,
custom_config: {
vertex_credentials: "",
vertex_location: VERTEXAI_DEFAULT_LOCATION,
},
} as VertexAIModalValues)
: {
...buildDefaultInitialValues(
existingLlmProvider,
modelConfigurations,
defaultModelName
),
default_model_name:
(defaultModelName &&
modelConfigurations.some((m) => m.name === defaultModelName)
? defaultModelName
: undefined) ??
wellKnownLLMProvider?.recommended_default_model?.name ??
VERTEXAI_DEFAULT_MODEL,
is_auto_mode: existingLlmProvider?.is_auto_mode ?? true,
custom_config: {
vertex_credentials:
(existingLlmProvider?.custom_config
?.vertex_credentials as string) ?? "",
vertex_location:
(existingLlmProvider?.custom_config?.vertex_location as string) ??
VERTEXAI_DEFAULT_LOCATION,
},
};
const initialValues: VertexAIModalValues = {
...useInitialValues(
isOnboarding,
LLMProviderName.VERTEX_AI,
existingLlmProvider
),
custom_config: {
vertex_credentials:
(existingLlmProvider?.custom_config?.vertex_credentials as string) ??
"",
vertex_location:
(existingLlmProvider?.custom_config?.vertex_location as string) ??
VERTEXAI_DEFAULT_LOCATION,
},
} as VertexAIModalValues;
const validationSchema = isOnboarding
? Yup.object().shape({
default_model_name: Yup.string().required("Model name is required"),
custom_config: Yup.object({
vertex_credentials: Yup.string().required(
"Credentials file is required"
),
vertex_location: Yup.string(),
}),
})
: buildDefaultValidationSchema().shape({
custom_config: Yup.object({
vertex_credentials: Yup.string().required(
"Credentials file is required"
),
vertex_location: Yup.string(),
}),
});
const validationSchema = buildValidationSchema(isOnboarding, {
extra: {
custom_config: Yup.object({
vertex_credentials: Yup.string().required(
"Credentials file is required"
),
vertex_location: Yup.string(),
}),
},
});
return (
<Formik
<ModalWrapper
providerName={LLMProviderName.VERTEX_AI}
llmProvider={existingLlmProvider}
onClose={onClose}
initialValues={initialValues}
validationSchema={validationSchema}
validateOnMount={true}
onSubmit={async (values, { setSubmitting }) => {
onSubmit={async (values, { setSubmitting, setStatus }) => {
const filteredCustomConfig = Object.fromEntries(
Object.entries(values.custom_config || {}).filter(
([key, v]) => key === "vertex_credentials" || v !== ""
@@ -148,12 +110,11 @@ export default function VertexAIModal({
(wellKnownLLMProvider ?? llmDescriptor)?.known_models ?? [];
await submitOnboardingProvider({
providerName: VERTEXAI_PROVIDER_NAME,
providerName: LLMProviderName.VERTEX_AI,
payload: {
...submitValues,
model_configurations: modelConfigsToUse,
is_auto_mode:
values.default_model_name === VERTEXAI_DEFAULT_MODEL,
is_auto_mode: values.is_auto_mode,
},
onboardingState,
onboardingActions,
@@ -163,13 +124,13 @@ export default function VertexAIModal({
});
} else {
await submitLLMProvider({
providerName: VERTEXAI_PROVIDER_NAME,
providerName: LLMProviderName.VERTEX_AI,
values: submitValues,
initialValues,
modelConfigurations,
existingLlmProvider,
shouldMarkAsDefault,
setIsTesting,
setStatus,
mutate,
onClose,
setSubmitting,
@@ -177,67 +138,54 @@ export default function VertexAIModal({
}
}}
>
{(formikProps) => (
<LLMConfigurationModalWrapper
providerEndpoint={VERTEXAI_PROVIDER_NAME}
providerName={VERTEXAI_DISPLAY_NAME}
existingProviderName={existingLlmProvider?.name}
onClose={onClose}
isFormValid={formikProps.isValid}
isDirty={formikProps.dirty}
isTesting={isTesting}
isSubmitting={formikProps.isSubmitting}
<InputLayouts.FieldPadder>
<InputLayouts.Vertical
name="custom_config.vertex_location"
title="Google Cloud Region Name"
subDescription="Region where your Google Vertex AI models are hosted. See full list of regions supported at Google Cloud."
>
<FieldWrapper>
<InputLayouts.Vertical
name="custom_config.vertex_location"
title="Google Cloud Region Name"
subDescription="Region where your Google Vertex AI models are hosted. See full list of regions supported at Google Cloud."
>
<InputTypeInField
name="custom_config.vertex_location"
placeholder={VERTEXAI_DEFAULT_LOCATION}
/>
</InputLayouts.Vertical>
</FieldWrapper>
<InputTypeInField
name="custom_config.vertex_location"
placeholder={VERTEXAI_DEFAULT_LOCATION}
/>
</InputLayouts.Vertical>
</InputLayouts.FieldPadder>
<FieldWrapper>
<InputLayouts.Vertical
name="custom_config.vertex_credentials"
title="API Key"
subDescription="Attach your API key JSON from Google Cloud to access your models."
>
<FileUploadFormField
name="custom_config.vertex_credentials"
label=""
/>
</InputLayouts.Vertical>
</FieldWrapper>
<InputLayouts.FieldPadder>
<InputLayouts.Vertical
name="custom_config.vertex_credentials"
title="API Key"
subDescription="Attach your API key JSON from Google Cloud to access your models."
>
<FileUploadFormField
name="custom_config.vertex_credentials"
label=""
/>
</InputLayouts.Vertical>
</InputLayouts.FieldPadder>
<FieldSeparator />
{!isOnboarding && (
<DisplayNameField disabled={!!existingLlmProvider} />
)}
<FieldSeparator />
{isOnboarding ? (
<SingleDefaultModelField placeholder="E.g. gemini-2.5-pro" />
) : (
<ModelsField
modelConfigurations={modelConfigurations}
formikProps={formikProps}
recommendedDefaultModel={
wellKnownLLMProvider?.recommended_default_model ?? null
}
shouldShowAutoUpdateToggle={true}
/>
)}
{!isOnboarding && <ModelsAccessField formikProps={formikProps} />}
</LLMConfigurationModalWrapper>
{!isOnboarding && (
<>
<InputLayouts.FieldSeparator />
<DisplayNameField disabled={!!existingLlmProvider} />
</>
)}
</Formik>
<InputLayouts.FieldSeparator />
<ModelSelectionField
modelConfigurations={modelConfigurations}
recommendedDefaultModel={
wellKnownLLMProvider?.recommended_default_model ?? null
}
shouldShowAutoUpdateToggle={true}
/>
{!isOnboarding && (
<>
<InputLayouts.FieldSeparator />
<ModelAccessField />
</>
)}
</ModalWrapper>
);
}
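useInitialValues(isOnboarding, providerName, existingLlmProvider) centralizes the per-provider boilerplate each modal previously built by hand; modals now spread its result and override only provider-specific fields such as api_base or custom_config. A rough sketch inferred from the call sites and the removed per-modal code — the field set and import paths are assumptions, not the real utils.ts implementation:

```ts
import { useWellKnownLLMProvider } from "@/hooks/useLLMProviders";
// Import path for LLMProviderView is assumed for this sketch.
import { LLMProviderName, LLMProviderView } from "@/interfaces/llm";

function useInitialValues(
  isOnboarding: boolean,
  providerName: LLMProviderName,
  existingLlmProvider?: LLMProviderView
) {
  // A hook rather than a plain builder so it can consult other hooks,
  // such as the well-known descriptor for the recommended default model.
  const { wellKnownLLMProvider } = useWellKnownLLMProvider(providerName);
  const recommended = wellKnownLLMProvider?.recommended_default_model?.name;
  return {
    name: existingLlmProvider?.name ?? providerName,
    provider: providerName,
    api_key: existingLlmProvider?.api_key ?? "",
    api_base: existingLlmProvider?.api_base ?? "",
    default_model_name: isOnboarding
      ? recommended ?? "" // pre-fill the recommended model during onboarding
      : existingLlmProvider?.default_model_name ?? recommended ?? "",
    // Auto-update defaults to on; whether the toggle is shown is decided
    // per provider via ModelSelectionField's shouldShowAutoUpdateToggle.
    is_auto_mode: existingLlmProvider?.is_auto_mode ?? true,
  };
}
```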

@@ -10,55 +10,65 @@ import BedrockModal from "@/sections/modals/llmConfig/BedrockModal";
import LMStudioForm from "@/sections/modals/llmConfig/LMStudioForm";
import LiteLLMProxyModal from "@/sections/modals/llmConfig/LiteLLMProxyModal";
import BifrostModal from "@/sections/modals/llmConfig/BifrostModal";
function detectIfRealOpenAIProvider(provider: LLMProviderView) {
return (
provider.provider === LLMProviderName.OPENAI &&
provider.api_key &&
!provider.api_base &&
Object.keys(provider.custom_config || {}).length === 0
);
}
import OpenAICompatibleModal from "@/sections/modals/llmConfig/OpenAICompatibleModal";
export function getModalForExistingProvider(
provider: LLMProviderView,
open?: boolean,
onOpenChange?: (open: boolean) => void,
defaultModelName?: string
) {
const props = {
existingLlmProvider: provider,
open,
onOpenChange,
defaultModelName,
};
const hasCustomConfig = provider.custom_config != null;
switch (provider.provider) {
// These providers don't use custom_config themselves, so a non-null
// custom_config means the provider was created via CustomModal.
case LLMProviderName.OPENAI:
// "openai" as a provider name can be used for litellm proxy / any OpenAI-compatible provider
if (detectIfRealOpenAIProvider(provider)) {
return <OpenAIModal {...props} />;
} else {
return <CustomModal {...props} />;
}
return hasCustomConfig ? (
<CustomModal {...props} />
) : (
<OpenAIModal {...props} />
);
case LLMProviderName.ANTHROPIC:
return <AnthropicModal {...props} />;
return hasCustomConfig ? (
<CustomModal {...props} />
) : (
<AnthropicModal {...props} />
);
case LLMProviderName.AZURE:
return hasCustomConfig ? (
<CustomModal {...props} />
) : (
<AzureModal {...props} />
);
case LLMProviderName.OPENROUTER:
return hasCustomConfig ? (
<CustomModal {...props} />
) : (
<OpenRouterModal {...props} />
);
// These providers legitimately store settings in custom_config,
// so always use their dedicated modals.
case LLMProviderName.OLLAMA_CHAT:
return <OllamaModal {...props} />;
case LLMProviderName.AZURE:
return <AzureModal {...props} />;
case LLMProviderName.VERTEX_AI:
return <VertexAIModal {...props} />;
case LLMProviderName.BEDROCK:
return <BedrockModal {...props} />;
case LLMProviderName.OPENROUTER:
return <OpenRouterModal {...props} />;
case LLMProviderName.LM_STUDIO:
return <LMStudioForm {...props} />;
case LLMProviderName.LITELLM_PROXY:
return <LiteLLMProxyModal {...props} />;
case LLMProviderName.BIFROST:
return <BifrostModal {...props} />;
case LLMProviderName.OPENAI_COMPATIBLE:
return <OpenAICompatibleModal {...props} />;
default:
return <CustomModal {...props} />;
}
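An illustrative call site for getModalForExistingProvider — the wrapper component, import paths, and prop values here are hypothetical:

```tsx
import { useState } from "react";
// Import paths below are assumptions for this sketch.
import { LLMProviderView } from "@/interfaces/llm";
import { getModalForExistingProvider } from "@/sections/modals/llmConfig";

function EditProviderModal({ provider }: { provider: LLMProviderView }) {
  const [open, setOpen] = useState(true);
  // Dispatches on provider.provider, routing providers with a non-null
  // custom_config to CustomModal and unknown providers to the default
  // branch.
  return getModalForExistingProvider(provider, open, setOpen);
}
```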

Some files were not shown because too many files have changed in this diff.