CH-UI

SQL Workspace

Multi-tab SQL editor, schema explorer, query streaming, data upload, and saved queries

CH-UI provides a ClickHouse-first SQL workspace with a multi-tab editor, schema explorer, autocomplete, query streaming, data upload, and saved queries.

Overview

The workspace is organized around a tab system with two split groups (left/right). Each tab can be a query editor, table browser, or singleton page.

Multi-Tab Editor

The SQL editor uses CodeMirror 6 with ClickHouse syntax highlighting and schema-aware autocomplete.

Tab Types

| Type | Behavior |
| --- | --- |
| Query | Multi-instance; supports dirty tracking; SQL persisted in localStorage |
| Table/Database | Context-specific browsing with data preview |
| Singleton | Home, SavedQueries, Dashboards, Brain, Admin, etc. (one instance each) |

Persistence

Tabs are saved to localStorage (ch-ui-tabs) with auto-save via microtask debounce. Tab state survives browser refreshes and includes:

  • Tab ID, type, and name
  • SQL content and dirty flag
  • Saved query reference (if linked)
  • Split group assignment (left/right)
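The persistence scheme above can be sketched as follows. This is an illustrative model, not CH-UI's actual internals: `Store` stands in for `window.localStorage`, and the tab fields mirror the list above (ID, type, name, SQL, dirty flag, saved-query link, split group).

```typescript
// Sketch of tab persistence with a microtask debounce: many state
// changes in one tick collapse into a single localStorage write.

type SplitGroup = "left" | "right";

interface Tab {
  id: string;
  type: "query" | "table" | "singleton";
  name: string;
  sql?: string;
  dirty?: boolean;
  savedQueryId?: string;
  group: SplitGroup;
}

// Stand-in for window.localStorage so the sketch is self-contained.
interface Store {
  setItem(key: string, value: string): void;
}

const TABS_KEY = "ch-ui-tabs";

class TabPersistence {
  private pending = false;

  constructor(private store: Store, private tabs: Tab[]) {}

  /** Schedule a save; repeated calls in the same tick coalesce into
   *  one write. Returns true only when a new flush was scheduled. */
  scheduleSave(): boolean {
    if (this.pending) return false;
    this.pending = true;
    queueMicrotask(() => {
      this.pending = false;
      this.store.setItem(TABS_KEY, JSON.stringify(this.tabs));
    });
    return true;
  }
}
```

Because the flush runs as a microtask, a burst of edits in one event-loop turn still produces exactly one write.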

Split View

Open up to two editor groups side by side. Tabs can be moved between groups. Duplicate detection prevents opening the same saved query twice.
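The split-group model can be sketched with two small helpers (names here are illustrative): each tab carries a group assignment, moving a tab just reassigns it, and the duplicate check looks for an existing tab linked to the same saved-query ID before opening a new one.

```typescript
// Minimal sketch of the left/right split-group model.

type Group = "left" | "right";

interface WorkspaceTab {
  id: string;
  savedQueryId?: string;
  group: Group;
}

// Move a tab between the two editor groups.
function moveTab(tabs: WorkspaceTab[], id: string, group: Group): WorkspaceTab[] {
  return tabs.map((t) => (t.id === id ? { ...t, group } : t));
}

// Duplicate detection: return the tab already linked to a saved
// query, if any, so the UI can focus it instead of re-opening.
function findSavedQueryTab(tabs: WorkspaceTab[], savedQueryId: string): WorkspaceTab | undefined {
  return tabs.find((t) => t.savedQueryId === savedQueryId);
}
```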

Query Execution

Standard Execution

POST /api/query/run
{
  "query": "SELECT * FROM system.tables LIMIT 20",
  "timeout": 30,
  "maxResultRows": 1000
}
| Parameter | Description | Default | Max |
| --- | --- | --- | --- |
| query | SQL to execute | Required | |
| timeout | Seconds before timeout | 30 | 300 (5 min) |
| maxResultRows | Server-side row limit sent to ClickHouse | 1000 | |
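A hypothetical client-side helper could apply these defaults and limits before the request goes out; the field names match the table above, but the clamping behavior shown is an assumption about how a client might mirror the server's limits.

```typescript
// Build a POST /api/query/run payload with the documented
// defaults: timeout 30 s (capped at 300 s), maxResultRows 1000.

interface RunQueryOptions {
  timeout?: number; // seconds
  maxResultRows?: number;
}

interface RunQueryRequest {
  query: string;
  timeout: number;
  maxResultRows: number;
}

function buildRunRequest(query: string, opts: RunQueryOptions = {}): RunQueryRequest {
  if (!query.trim()) throw new Error("query is required");
  const timeout = Math.min(opts.timeout ?? 30, 300); // server max: 5 minutes
  return { query, timeout, maxResultRows: opts.maxResultRows ?? 1000 };
}
```

The resulting object would be sent as the JSON body of `POST /api/query/run`.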

Query Streaming

For large result sets, use SSE-based streaming:

POST /api/query/stream

Returns chunked NDJSON with message types:

| Type | Content |
| --- | --- |
| meta | Column names and types |
| chunk | Data rows (with sequence number) |
| done | Statistics and total row count |
| error | Error message |

Stream limit: 1,000,000 rows.
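A stream consumer can be sketched around those four message types. The exact message shapes (field names like `columns`, `seq`, `rows`, `totalRows`) are assumptions based on this page, not a verified wire format.

```typescript
// Minimal consumer for the NDJSON stream: collect columns from
// `meta`, accumulate rows from `chunk`, finish on `done`, and
// surface `error` as an exception.

type StreamMessage =
  | { type: "meta"; columns: { name: string; type: string }[] }
  | { type: "chunk"; seq: number; rows: unknown[][] }
  | { type: "done"; totalRows: number }
  | { type: "error"; message: string };

interface StreamResult {
  columns: string[];
  rows: unknown[][];
  totalRows: number;
}

function consumeStream(ndjson: string): StreamResult {
  const result: StreamResult = { columns: [], rows: [], totalRows: 0 };
  for (const line of ndjson.split("\n")) {
    if (!line.trim()) continue;
    const msg = JSON.parse(line) as StreamMessage;
    switch (msg.type) {
      case "meta":
        result.columns = msg.columns.map((c) => c.name);
        break;
      case "chunk":
        result.rows.push(...msg.rows); // seq could be used to detect gaps
        break;
      case "done":
        result.totalRows = msg.totalRows;
        break;
      case "error":
        throw new Error(msg.message);
    }
  }
  return result;
}
```

In a real client the input would arrive incrementally from the SSE response body rather than as one string.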

EXPLAIN & Profiling

| Endpoint | Purpose |
| --- | --- |
| POST /api/query/explain | Raw EXPLAIN output |
| POST /api/query/plan | Parsed query plan as a tree (tries EXPLAIN PLAN, falls back to EXPLAIN AST, then generic) |
| POST /api/query/profile | Latest system.query_log metrics for the exact query (duration, read rows/bytes, memory) |
| POST /api/query/sample | First N rows per shard (default 25, max 500); falls back to a global LIMIT |

Schema Explorer

The left sidebar provides a navigable schema tree.

Explorer Endpoints

| Endpoint | Purpose |
| --- | --- |
| GET /api/query/databases | List all databases |
| GET /api/query/tables?database=db | List tables in a database |
| GET /api/query/columns?database=db&table=tbl | List columns with types |
| GET /api/query/data-types | List all ClickHouse data types |
| GET /api/query/clusters | List cluster names |

Table Browser

Click a table in the explorer to open a table tab showing:

  • Column list with types
  • Data preview (paginated, default 100 rows, max 1000)
  • Host info and table metadata

Autocomplete

The editor provides schema-aware completions using database, table, and column names from the explorer endpoints.
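One way to turn explorer metadata into completions is a simple prefix match over the collected names. CH-UI's actual editor integration goes through CodeMirror 6; this editor-agnostic sketch only illustrates the schema-to-suggestions step, and the `SchemaInfo` shape is an assumption.

```typescript
// Prefix-matched completions built from explorer metadata.

interface SchemaInfo {
  databases: string[];
  tables: Record<string, string[]>;  // database -> table names
  columns: Record<string, string[]>; // "db.table" -> column names
}

interface Completion {
  label: string;
  kind: "database" | "table" | "column";
}

function completionsFor(schema: SchemaInfo, prefix: string): Completion[] {
  const p = prefix.toLowerCase();
  const out: Completion[] = [];
  for (const db of schema.databases)
    if (db.toLowerCase().startsWith(p)) out.push({ label: db, kind: "database" });
  for (const tables of Object.values(schema.tables))
    for (const t of tables)
      if (t.toLowerCase().startsWith(p)) out.push({ label: t, kind: "table" });
  for (const cols of Object.values(schema.columns))
    for (const c of cols)
      if (c.toLowerCase().startsWith(p)) out.push({ label: c, kind: "column" });
  return out;
}
```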

Schema Operations

Admin-only operations for managing databases and tables.

Create Database

POST /api/query/schema/database
{
  "name": "analytics",
  "engine": "Atomic",
  "on_cluster": "default",
  "if_not_exists": true
}

Drop Database

POST /api/query/schema/database/drop
{
  "name": "analytics",
  "if_exists": true,
  "sync": true
}

Create Table

POST /api/query/schema/table
{
  "database": "analytics",
  "name": "events",
  "engine": "MergeTree",
  "columns": [
    { "name": "id", "type": "UInt64" },
    { "name": "timestamp", "type": "DateTime" },
    { "name": "event_type", "type": "String" }
  ],
  "order_by": "(timestamp)",
  "partition_by": "toYYYYMM(timestamp)",
  "if_not_exists": true
}
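A request like the one above maps naturally onto a CREATE TABLE statement. The sketch below shows one plausible mapping; the server's actual SQL generation (quoting, ON CLUSTER handling, etc.) may differ.

```typescript
// Illustrative mapping from the create-table payload to DDL.

interface ColumnDef {
  name: string;
  type: string;
}

interface CreateTableRequest {
  database: string;
  name: string;
  engine: string;
  columns: ColumnDef[];
  order_by?: string;     // already parenthesized, e.g. "(timestamp)"
  partition_by?: string; // e.g. "toYYYYMM(timestamp)"
  if_not_exists?: boolean;
}

function createTableDDL(req: CreateTableRequest): string {
  const cols = req.columns.map((c) => `${c.name} ${c.type}`).join(", ");
  const parts = [
    `CREATE TABLE ${req.if_not_exists ? "IF NOT EXISTS " : ""}${req.database}.${req.name}`,
    `(${cols})`,
    `ENGINE = ${req.engine}`,
  ];
  if (req.partition_by) parts.push(`PARTITION BY ${req.partition_by}`);
  if (req.order_by) parts.push(`ORDER BY ${req.order_by}`);
  return parts.join(" ");
}
```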

Drop Table

POST /api/query/schema/table/drop
{
  "database": "analytics",
  "name": "events",
  "if_exists": true
}

Data Upload

Upload CSV, JSON, JSONL, or Parquet files into ClickHouse tables. Max file size: 25 MB.

Two-Step Process

Step 1: Discover schema

POST /api/query/upload/discover
Content-Type: multipart/form-data

Upload the file. The backend detects the format, parses all rows, infers column types, and returns a preview (20 rows) with the inferred schema.

Type inference tries candidates in order: Bool > Int64 > Float64 > DateTime > Date > String. A Nullable wrapper is added if any null values are found.
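The inference order can be illustrated with a small function. The parsing rules below (boolean literals, integer/float regexes, ISO-style date formats, treating empty strings as nulls) are assumptions; the backend's exact rules are not specified on this page.

```typescript
// Walk the candidate types in order and take the first one that
// matches every non-null value; wrap in Nullable when nulls exist.

function inferColumnType(values: (string | null)[]): string {
  const present = values.filter((v): v is string => v !== null && v !== "");
  const hasNulls = present.length < values.length;
  const all = (pred: (v: string) => boolean) =>
    present.length > 0 && present.every(pred);

  let base = "String"; // final fallback
  if (all((v) => v === "true" || v === "false")) base = "Bool";
  else if (all((v) => /^-?\d+$/.test(v))) base = "Int64";
  else if (all((v) => /^-?\d+(\.\d+)?([eE][+-]?\d+)?$/.test(v))) base = "Float64";
  else if (all((v) => /^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}$/.test(v))) base = "DateTime";
  else if (all((v) => /^\d{4}-\d{2}-\d{2}$/.test(v))) base = "Date";

  return hasNulls ? `Nullable(${base})` : base;
}
```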

Step 2: Ingest data

POST /api/query/upload/ingest
Content-Type: multipart/form-data

Confirm the target database, table, and column mapping. Optionally create the table with a custom engine, ORDER BY, PARTITION BY, TTL, etc.

Data is inserted via JSONEachRow in 500-row batches with a 90-second timeout per batch.
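The batching side of that ingest loop is straightforward to sketch (the per-batch 90-second timeout and the actual INSERT call are omitted here):

```typescript
// Serialize rows as JSONEachRow (one JSON object per line).
function toJSONEachRow(rows: Record<string, unknown>[]): string {
  return rows.map((r) => JSON.stringify(r)).join("\n");
}

// Split rows into insert batches; 500 matches the documented default.
function batch<T>(rows: T[], size = 500): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < rows.length; i += size) {
    out.push(rows.slice(i, i + size));
  }
  return out;
}
```

Each batch would then be sent as the body of an `INSERT INTO ... FORMAT JSONEachRow` statement.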

Response

{
  "success": true,
  "database": "analytics",
  "table": "events",
  "rows_inserted": 15000,
  "created_table": true,
  "commands": {
    "create_table": "CREATE TABLE ...",
    "insert": "INSERT INTO ... FORMAT JSONEachRow"
  }
}

Saved Queries

Save, duplicate, update, and delete SQL snippets for reuse.

# List
GET /api/saved-queries

# Create
POST /api/saved-queries
{ "name": "Daily active users", "query": "SELECT count(DISTINCT user_id) ..." }

# Update
PUT /api/saved-queries/{id}
{ "name": "Updated name", "query": "..." }

# Duplicate
POST /api/saved-queries/{id}/duplicate
# Creates a copy with " (copy)" suffix

# Delete
DELETE /api/saved-queries/{id}

Opening a saved query in the editor links the tab to the saved query ID. Changes are tracked via a dirty flag.

Display Controls

Max Result Rows

Controls how many rows the UI requests from the server. Stored in localStorage (ch-ui-max-result-rows).

  • Default: 1000
  • Minimum: 1
  • Sent as maxResultRows parameter on query execution

Number Formatting

Toggle locale-aware number formatting (thousands separators). Stored in localStorage (ch-ui-format-numbers), enabled by default.
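A formatting toggle like this can be a thin wrapper over `toLocaleString`. The `en-US` locale is pinned here only to make the example deterministic; the real UI would presumably follow the browser locale.

```typescript
// Format a result-grid cell, adding thousands separators for
// numbers when the toggle is on.
function formatCell(value: unknown, formatNumbers: boolean): string {
  if (typeof value === "number" && formatNumbers) {
    return value.toLocaleString("en-US");
  }
  return String(value);
}
```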

Guardrails Integration

All query endpoints run governance guardrails before execution. If a policy with block enforcement mode matches, the query is rejected with a 403. See Governance for details.

Safety Defaults

| Control | Value |
| --- | --- |
| Query timeout | 30 s default, 5 min max |
| Stream row limit | 1,000,000 |
| Upload file size | 25 MB |
| Upload batch size | 500 rows |
| Schema operations | Admin-only |
| Brain-generated SQL | LIMIT 100 default |
