Alation AI API Overview

The Alation AI API is a comprehensive HTTP API that provides access to Alation's AI-powered resources and features. It enables developers and data teams to programmatically interact with AI capabilities built on top of Alation's data catalog and intelligence platform.

What can you do with it?

The API allows you to:

  • Manage conversational AI interactions through chat sessions with specialized AI agents
  • Configure and customize AI agents, tools, and LLM settings to tailor AI behavior to your organization's needs
  • Evaluate AI performance using both generic evaluation frameworks and SQL-specific evaluation tools
  • Execute SQL queries safely through AI-powered interfaces with built-in validation
  • Generate charts and visualizations from data using AI assistance
  • Search and retrieve catalog objects intelligently using AI-enhanced search capabilities

Customer use cases

The Alation AI API supports several key customer scenarios:

  • Self-Service Analytics: Enable business users to ask natural language questions about data and get SQL queries, charts, or insights automatically
  • Data Discovery: Help users find relevant tables, columns, and data sources through conversational search
  • Data Product Development: Build and evaluate SQL queries for data products with AI assistance
  • Custom AI Workflows: Create custom integrations that leverage Alation's AI agents and tools in external applications
  • AI Performance Monitoring: Evaluate and track the quality of AI-generated SQL and responses over time
  • Catalog Navigation: Guide users through complex data catalogs using intelligent contextual search

When should you use this API over another API?

Use the Alation AI API when:

  • You need AI-powered natural language interaction with your data catalog
  • You want to generate SQL queries from natural language questions
  • You need intelligent search capabilities beyond basic text matching
  • You want to create or customize AI agents and tools for your organization

Use other Alation APIs (like the Catalog API or Open Connector API) when:

  • You need direct CRUD operations on catalog objects without AI involvement
  • You're managing data source connections and configurations
  • You're performing bulk metadata operations
  • You need low-level access to catalog objects and relationships

Throttling

The Alation AI API is not (yet) compatible with Alation throttling settings. Instead, it implements its own two-tier token bucket rate-limiting system. AI operations such as the chat and evaluation endpoints are computationally expensive and are held to a lower rate limit; all other endpoints default to a higher rate limit.

Response headers

When a rate limit is enforced, the HTTP 429 response includes these headers (a retry sketch that uses them follows this list):

  • Retry-After: Number of seconds to wait before retrying
  • Ai-RateLimit-Limit: The limit value for the tier
  • Ai-RateLimit-Remaining: Number of remaining tokens
  • Ai-RateLimit-Tier: Either "high" or "low"
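
The following Python sketch shows one way to honor these headers when a request is throttled. It is illustrative only: the tenant URL and bearer token are placeholders, the retry policy is arbitrary, and the /ai/api/v1/chats path is the one used in the pagination example later in this document. It relies on the third-party requests library.

import time
import requests

BASE_URL = "https://your-alation-domain"  # placeholder: your tenant's domain
TOKEN = "<access-token>"                  # placeholder: OAuth bearer token (see the Swagger section)

def get_with_retry(path, params=None, max_attempts=5):
    """Call an AI API endpoint, backing off whenever HTTP 429 is returned."""
    headers = {"Authorization": f"Bearer {TOKEN}"}
    for _ in range(max_attempts):
        response = requests.get(BASE_URL + path, headers=headers, params=params)
        if response.status_code != 429:
            response.raise_for_status()
            return response.json()
        # Rate limited: wait for the number of seconds the API asks for.
        wait = int(response.headers.get("Retry-After", "1"))
        tier = response.headers.get("Ai-RateLimit-Tier", "unknown")
        print(f"Rate limited on the {tier} tier; retrying in {wait}s")
        time.sleep(wait)
    raise RuntimeError("Exceeded retry attempts due to rate limiting")

chats = get_with_retry("/ai/api/v1/chats", params={"limit": 10})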

Max objects

Pagination limits

For paginated endpoints (chats, messages, evaluation sets, configs):

  • Default limit: 100 objects per request
  • Maximum limit: 1000 objects per request
  • Minimum limit: 1 object per request

SQL query execution limits

The API enforces limits on SQL query execution to prevent resource exhaustion:

  • Query rows: 10,000 rows maximum per query execution
    • The API automatically appends a LIMIT clause to prevent out-of-memory errors

Pagination

Pattern

The API uses offset/limit pagination with a standardized DataPage response wrapper.

Parameters

All paginated endpoints accept:

  • limit (query parameter): Maximum number of objects to return
    • Type: integer
    • Default: 100
    • Range: 1-1000
  • offset (query parameter): Number of objects to skip
    • Type: integer
    • Default: 0
    • Range: 0 to total count

Response format

All paginated responses use the DataPage schema:

{
  "data": [...],     // Array of objects
  "total": 150       // Total count of objects available
}

Example

To get the second page of 50 chats:

GET /ai/api/v1/chats?limit=50&offset=50
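
A minimal Python sketch of walking through every page follows, assuming the same placeholder tenant URL and bearer token as the throttling example above and the third-party requests library:

import requests

BASE_URL = "https://your-alation-domain"  # placeholder: your tenant's domain
TOKEN = "<access-token>"                  # placeholder: OAuth bearer token

def list_all_chats(page_size=100):
    """Collect every chat by following the offset/limit pages of the DataPage wrapper."""
    headers = {"Authorization": f"Bearer {TOKEN}"}
    results, offset = [], 0
    while True:
        page = requests.get(
            f"{BASE_URL}/ai/api/v1/chats",
            headers=headers,
            params={"limit": page_size, "offset": offset},
        )
        page.raise_for_status()
        body = page.json()
        results.extend(body["data"])
        offset += page_size
        if offset >= body["total"]:  # all available objects have been fetched
            return results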

Limitations

Please be aware that this is a new API under highly active development. Standard versioning and deprecation processes still apply, but expect it to change and evolve quickly.

SQL execution restrictions

For security, the API enforces strict SQL validation: only read operations are allowed, and data modification through SQL execution is not permitted.

We also recommend applying permissions at the data source level for an additional layer of enforcement.

Known feature limitations

  • Structured Output Evaluation: Not currently supported; falls back to LLM-as-a-Judge evaluation
  • Percentage-based Sampling: Not supported in the database engine for evaluation
  • Multimodal Content: Not supported in evaluation tool scorers (images, audio, etc.)
  • AWS Bedrock Regions: Some LLM models are not available in all AWS regions

Swagger

The Alation AI API conforms to the OpenAPI 3.1.0 specification.
The Swagger documentation for your tenant is available at

https://{alation_domain}/ai/docs

ReDoc-formatted documentation is also available at

https://{alation_domain}/ai/redoc

Authentication in Swagger UI

To authenticate in the Swagger UI, use one of the following options:

Option 1: session cookie

  • Log in to the Alation UI first.
  • The sessionid cookie will be automatically included in requests.

Option 2: OAuth 2.0 authorization code

  1. Create an OAuth application for your tenant with https://{alation_domain}/ai/docs/oauth2-redirect as one of the redirect URIs.
    Save the client_id and client_secret.
  2. Click "Authorize" in Swagger UI.
  3. Enter your client_id and client_secret.
  4. Log in to the Alation UI when redirected.
  5. An authorization bearer token will now be automatically included in requests.
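
Outside the Swagger UI, the same credentials work with any HTTP client. A minimal Python sketch using the third-party requests library is shown below; the tenant domain, sessionid value, and access token are placeholders:

import requests

BASE_URL = "https://your-alation-domain"  # placeholder: your tenant's domain

# Option 1: reuse the sessionid cookie from a logged-in browser session.
cookie_response = requests.get(
    f"{BASE_URL}/ai/api/v1/chats",
    cookies={"sessionid": "<sessionid-cookie-value>"},
    params={"limit": 1},
)

# Option 2: send the OAuth 2.0 bearer token in the Authorization header.
token_response = requests.get(
    f"{BASE_URL}/ai/api/v1/chats",
    headers={"Authorization": "Bearer <access-token>"},
    params={"limit": 1},
)

print(cookie_response.status_code, token_response.status_code)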