“Our new AI assistant can answer any finance question—except when our own API names get in its way.” That was the hard‑earned lesson of developing an LLM‑powered chatbot for our platform.

AI Agents as a New Interface Layer

AI agents powered by large language models (LLMs) are becoming an increasingly important component in modern tech stacks. Think of an AI agent as a new kind of middle layer. Instead of a human manually clicking through a UI, calling an API, or writing SQL, an AI agent interprets a natural language request and determines how to execute it using tools such as function calling or MCP (Model Context Protocol). To fully leverage AI, we must rethink how we design the systems that AI agents interact with. This isn’t just about adding AI as a feature; it’s about designing our systems for intelligence.
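
To make this concrete, here is a minimal sketch of a tool definition in the OpenAI-style function-calling format; the tool name, parameters, and finance domain are hypothetical examples rather than a reference to any particular product.

```python
# A minimal, hypothetical tool definition in the OpenAI function-calling style.
# The agent reads the name, description, and JSON Schema parameters to decide
# when and how to call the underlying API.
get_account_balance_tool = {
    "type": "function",
    "function": {
        "name": "get_account_balance",
        "description": (
            "Return the current balance of a customer's account. "
            "Use this when the user asks how much money an account holds."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "account_id": {
                    "type": "string",
                    "description": "Unique account identifier, e.g. 'ACC-1042'.",
                },
                "currency": {
                    "type": "string",
                    "enum": ["USD", "EUR", "JPY"],
                    "description": "Currency to report the balance in.",
                },
            },
            "required": ["account_id"],
        },
    },
}
# This dict would be passed in the `tools` list of a chat completion request,
# alongside the user's natural language question.
```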

In this paradigm, the AI agent effectively becomes an API consumer or even a quasi-“user” of your system. This requires a shift in perspective. Just as we design intuitive UIs for humans, we must now design effective interfaces for AI. The crucial difference? AI agents or LLMs operate probabilistically, relying on patterns and context rather than strict logic. To ensure they act reliably and accurately, we need to provide them with as much structured, unambiguous context as possible about what our APIs do and what our data means.

Making APIs Understandable to AI Agents

APIs are the gateways to our software systems. Traditionally, they are documented for humans, but now they need to be interpretable by AI agents as well. Below are some key considerations for designing AI agent-friendly APIs (a combined sketch follows the list):

  1. Robust Schemas and Specifications: Embrace machine-readable formats like OpenAPI (Swagger) or GraphQL schemas to provide a formal contract for the AI agent.
  2. Clear, Natural Language Documentation: Embed clear, plain-language descriptions for endpoints, parameters, and responses directly within your API specification. Well-written, human-readable descriptions significantly improve an AI agent’s comprehension.
  3. Descriptive Naming: Use clear, self-explanatory names for endpoints, routes, and parameters.
  4. Structured and Semantic Responses: Design API outputs so they convey meaning beyond raw values. Use informative field names (e.g., "currency": "USD", "amount": 19.99) and consider metadata that adds context (e.g., indicating units or data types).
  5. Provide Examples: Include example requests and responses in your documentation or specification. AI agents learn patterns from examples, guiding them to generate valid calls.
  6. Consistent and Explanatory Errors: Implement standardized, informative error responses. A clear error message (e.g., {"error": "Invalid date format, use YYYY-MM-DD"}) allows the AI agent to potentially self-correct or report the issue intelligibly.
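
Putting several of these points together, here is a minimal sketch of an OpenAPI fragment, written as a Python dict for brevity. The /invoices/{invoice_id} endpoint and its fields are hypothetical; the point is how descriptive naming, plain-language descriptions, semantic response fields, an embedded example, and a structured error combine into a contract an AI agent can follow.

```python
# A hypothetical OpenAPI fragment (as a Python dict) combining the practices
# above: clear naming, natural language descriptions, semantic fields, an
# embedded example, and a self-explanatory error response.
openapi_fragment = {
    "paths": {
        "/invoices/{invoice_id}": {
            "get": {
                "summary": "Retrieve a single invoice",
                "description": (
                    "Returns the invoice identified by invoice_id, "
                    "including its amount, currency, and payment status."
                ),
                "parameters": [
                    {
                        "name": "invoice_id",
                        "in": "path",
                        "required": True,
                        "description": "Unique invoice identifier, e.g. 'INV-2024-0001'.",
                        "schema": {"type": "string"},
                    }
                ],
                "responses": {
                    "200": {
                        "description": "The requested invoice.",
                        "content": {
                            "application/json": {
                                # Example responses guide the agent's parsing.
                                "example": {
                                    "invoice_id": "INV-2024-0001",
                                    "amount": 19.99,
                                    "currency": "USD",  # semantic field, not a bare number
                                    "status": "paid",
                                },
                            }
                        },
                    },
                    "400": {
                        "description": "Malformed request; the message tells the agent how to self-correct.",
                        "content": {
                            "application/json": {
                                "example": {"error": "Invalid date format, use YYYY-MM-DD"},
                            }
                        },
                    },
                },
            }
        }
    }
}
```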

Making Database Schemas Intelligible to AI Agents

When AI agents need to interact with databases directly, the schema itself becomes the interface. Raw table and column names, especially in legacy systems with terse or coded names (like a STATUS column with values P, C, and E), are often very hard for AI agents to understand. AI agents cannot magically decode business jargon or internal conventions. To make databases AI agent-accessible, we can consider the following (a metadata sketch follows the list):

  1. Expose Rich Schema Metadata: Provide a machine-readable dictionary describing tables, columns (their purpose and data types), and relationships (foreign keys). Utilize database comment features or maintain a separate metadata file (e.g., JSON/YAML). Knowing ORDERS.customer_id links to CUSTOMERS.id is vital for correct joins.
  2. Define Semantic Meaning and Enumerate Values: Explicitly document the meaning of columns and list allowed values for fields with finite possibilities (e.g., “STATUS column: P=Pending, C=Completed, E=Error”). This prevents hallucination and provides valuable domain knowledge.
  3. Use Clear Naming: For new designs, opt for descriptive names (CustomerOrders vs. CUS_ORDERS). For legacy schemas where renaming is impractical, we can introduce a metadata layer that provides a comprehensive mapping.
  4. Provide Example Queries: Show examples of natural language questions translated into corresponding SQL queries for your schema. This acts as few-shot learning, guiding the AI agent’s query generation.
  5. Consider Dynamic Schema Filtering: For very large schemas, employ techniques or tools that provide the AI agent with only the relevant subset of tables and columns based on the user’s query context (see the second sketch below). This improves focus and reduces token consumption.
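
As a sketch of points 1, 2, and 4 above, here is a hypothetical machine-readable schema dictionary for the ORDERS and CUSTOMERS tables mentioned earlier, including enumerated STATUS values and natural-language-to-SQL examples for few-shot prompting. The exact format (Python, JSON, or YAML) matters less than making this information available to the agent.

```python
# A hypothetical machine-readable schema dictionary. Feeding this (or the
# relevant subset) to the agent lets it decode coded values and join correctly.
schema_metadata = {
    "tables": {
        "CUSTOMERS": {
            "description": "One row per registered customer.",
            "columns": {
                "id": {"type": "integer", "description": "Primary key."},
                "name": {"type": "text", "description": "Customer's full name."},
            },
        },
        "ORDERS": {
            "description": "One row per customer order.",
            "columns": {
                "id": {"type": "integer", "description": "Primary key."},
                "customer_id": {
                    "type": "integer",
                    "description": "Foreign key referencing CUSTOMERS.id.",
                },
                "STATUS": {
                    "type": "char(1)",
                    "description": "Current state of the order.",
                    "values": {"P": "Pending", "C": "Completed", "E": "Error"},
                },
            },
        },
    },
    # Few-shot examples guide the agent's SQL generation for this schema.
    "example_queries": [
        {
            "question": "How many orders are still pending?",
            "sql": "SELECT COUNT(*) FROM ORDERS WHERE STATUS = 'P'",
        },
        {
            "question": "List customers with at least one failed order.",
            "sql": (
                "SELECT DISTINCT c.name FROM CUSTOMERS c "
                "JOIN ORDERS o ON o.customer_id = c.id WHERE o.STATUS = 'E'"
            ),
        },
    ],
}
```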
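
Dynamic schema filtering (point 5) can be as simple as selecting the tables whose names or descriptions overlap with the user's question. Production systems often use embedding similarity instead, but a naive keyword-based sketch, reusing the schema_metadata dict from the previous sketch, conveys the idea.

```python
def relevant_tables(question: str, metadata: dict, limit: int = 5) -> dict:
    """Naively score tables by keyword overlap with the question, keep the top few."""
    words = set(question.lower().split())

    def score(name: str, table: dict) -> int:
        # Match question words against the table name, description, and columns.
        text = " ".join([name, table["description"], *table["columns"]]).lower()
        return sum(1 for w in words if w in text)

    ranked = sorted(metadata["tables"].items(), key=lambda kv: score(*kv), reverse=True)
    return {name: table for name, table in ranked[:limit]}

# Only the relevant subset is sent to the agent, reducing token consumption.
subset = relevant_tables("How many orders are still pending?", schema_metadata)
```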

Enhancing Existing Systems for AI Agents

Making systems AI agent-friendly doesn’t always require a full rewrite. Here are some practical strategies to enhance existing systems:

  • Revisit API Contracts: Enhance existing APIs by adding rich descriptions to an OpenAPI spec or GraphQL schema. Consider adding clearer aliases for poorly named fields or endpoints while maintaining backward compatibility. Refactor the interface, not necessarily the core logic.
  • Enhance Schema Metadata: For existing databases, add comments or maintain a metadata mapping file (e.g., YAML or JSON) describing tables, columns, relationships, and value meanings in a centralized location. This not only aids AI agents but also benefits human developers.
  • Create Semantic Proxies: If modifying a legacy API or database is infeasible, build a proxy service. This layer exposes a clean, well-documented, AI agent-friendly interface (e.g., a modern REST API) and translates calls to the underlying legacy system (see the first sketch after this list). It can also add validation, ensuring only well-formed requests reach the core system.
  • Simplify Data Exposure: Use patterns like Backend-for-Frontend (BFF) tailored for AI. Expose only the necessary data fields required by the AI agent for its task, reducing complexity, cost (tokens), and potential data leakage.
  • Tailor Authentication and Authorization: The access patterns of an AI agent may differ from those of human users. Consider implementing dedicated authentication and authorization mechanisms for AI agents to grant them efficient access to necessary data while carefully controlling exposure to sensitive information.
  • Adopt Specialized Evaluation Frameworks: AI agents require different testing and evaluation strategies because their behavior is probabilistic; traditional deterministic testing frameworks are insufficient. Adopt specialized evaluation frameworks, such as OpenAI’s Evals or DeepEval, to rigorously assess whether the AI agent performs as expected.
  • Monitoring and Tracing: Token usage, latency, and rate limits are important metrics to monitor. When existing services and the AI agent share the same tracing context, we gain better observability of the entire system. OpenTelemetry is the de facto standard here, and OpenLLMetry extends it to monitor and trace LLM applications (see the second sketch after this list).
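
To illustrate the semantic proxy idea, here is a minimal sketch assuming FastAPI and httpx. The legacy /cusord endpoint, its host, and its terse field names (oid, st, amt, cur) are hypothetical stand-ins for whatever your legacy system actually exposes; the proxy's job is to translate them into the self-explanatory interface the agent sees.

```python
# A minimal semantic proxy sketch using FastAPI and httpx. The legacy endpoint,
# host, and field names below are hypothetical.
import httpx
from fastapi import FastAPI, HTTPException

app = FastAPI()
LEGACY_BASE_URL = "http://legacy.internal"  # hypothetical legacy host

@app.get("/customers/{customer_id}/orders", summary="List a customer's orders")
async def list_customer_orders(customer_id: int):
    """Return the customer's orders with descriptive, agent-friendly field names."""
    async with httpx.AsyncClient() as client:
        resp = await client.get(
            f"{LEGACY_BASE_URL}/cusord", params={"cid": customer_id}
        )
    if resp.status_code != 200:
        raise HTTPException(status_code=502, detail="Legacy system unavailable")
    # Translate terse legacy fields into self-explanatory names and values.
    status_names = {"P": "pending", "C": "completed", "E": "error"}
    return [
        {
            "order_id": row["oid"],
            "status": status_names.get(row["st"], "unknown"),
            "amount": row["amt"],
            "currency": row["cur"],
        }
        for row in resp.json()
    ]
```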
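
For the tracing point, here is a minimal OpenTelemetry sketch in Python that wraps an LLM call in a span. The attribute names are illustrative (OpenLLMetry defines standardized semantic conventions for LLM spans), and call_llm is a hypothetical stand-in for a real model call.

```python
# A minimal OpenTelemetry tracing sketch, assuming the opentelemetry-sdk
# package is installed. In production you would export spans to a collector
# rather than the console.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("ai-agent")

def call_llm(question: str) -> str:
    return "stub answer"  # hypothetical placeholder for a real model call

def answer_question(question: str) -> str:
    # The span shares the same tracing context as the rest of the system,
    # so agent activity shows up alongside existing service traces.
    with tracer.start_as_current_span("llm.chat") as span:
        span.set_attribute("llm.prompt.length", len(question))
        answer = call_llm(question)
        span.set_attribute("llm.completion.length", len(answer))
        return answer
```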

Integrating AI agents effectively requires more than just wiring up API calls; it demands a shift towards designing for understanding. By treating our API definitions, database schemas, and associated metadata as first-class citizens – effectively creating a “semantic type system” – we provide the crucial context AI agents need to operate accurately, efficiently, and reliably. This investment in clarity not only unlocks the potential of AI within our applications but also results in better-documented, more maintainable systems for human developers. The future of intelligent applications hinges on our ability to build interfaces that both humans and machines can truly comprehend.