Quick Start

Enable AI Gateway

Log in to the Tencent Cloud Console and go to EdgeOne. In the left sidebar, click AI Gateway. If you have not enabled the service yet, you will need to agree to the activation terms; click Activate Now.



Create AI Gateway

After activation, on the AI Gateway list page, click Create and enter a name and description as prompted in the pop-up window.
Name: Required. It cannot be modified after creation, can only contain digits, uppercase and lowercase letters, hyphens, and underscores, and must be unique; a quick check of the character rule is sketched after this list.
Description: Optional. It can contain up to 60 characters.
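As a small aid, the character rule above can be checked locally before creating the gateway. The snippet below is a minimal sketch in Python; the helper name and sample names are illustrative only, and it checks the character set, not the uniqueness requirement.

import re

# Illustrative check of the naming rule: only digits, uppercase and
# lowercase letters, hyphens, and underscores are allowed.
NAME_PATTERN = re.compile(r"[0-9A-Za-z_-]+")

def is_valid_gateway_name(name: str) -> bool:
    return bool(NAME_PATTERN.fullmatch(name))

print(is_valid_gateway_name("my-ai-gateway_01"))  # True
print(is_valid_gateway_name("my gateway"))        # False: spaces are not allowed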




Configure AI Gateway

After the AI Gateway is created, on the AI Gateway list page, click Details or the AI Gateway instance ID to open the gateway's detail page. Cache configuration is currently supported there.
Enable/Disable: Turn the switch on to enable the cache; identical prompt requests can then be answered directly from the gateway's cache without contacting the LLM service provider. Turn the switch off to disable the cache; every request is then answered by the LLM service provider.
Set cache duration: The configurable cache durations are 2 minutes, 5 minutes, 1 hour, 1 day, 1 week, and 1 month. The cache is automatically cleared once the set duration has elapsed.

API Endpoint

The AI Gateway's back-end endpoint is the large language model (LLM) service provider. Currently supported providers include OpenAI, MiniMax, Moonshot AI, Gemini, Tencent Hunyuan, Baidu Qianfan, Alibaba Tongyi Qianwen, and ByteDance Doubao.




Case Demonstrations

Access OpenAI through the AI Gateway
Operation scenario: with the cache disabled on the AI Gateway, access OpenAI through the AI Gateway.
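Since the console screenshots are not reproduced here, the following is a minimal sketch of the same request in Python. The gateway endpoint URL is a placeholder (copy the real one from your gateway's detail page), the request path and body assume OpenAI's Chat Completions format, and the requests library is used as the HTTP client; none of these specifics are mandated by this guide.

import requests

# Placeholder: replace with the OpenAI endpoint shown on your AI Gateway detail page.
GATEWAY_ENDPOINT = "https://<your-ai-gateway-endpoint>/v1/chat/completions"
OPENAI_API_KEY = "sk-..."  # your own OpenAI API key

payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello! Who are you?"}],
}

response = requests.post(
    GATEWAY_ENDPOINT,
    headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
    json=payload,
    timeout=60,
)

# With the cache disabled, every request is forwarded to OpenAI.
print(response.status_code)
print(response.json())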









Enable the cache on the AI Gateway and access it again.






If the response header OE-Cache-Status returns HIT, it indicates that the cache has been hit.
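Continuing the sketch above (reusing GATEWAY_ENDPOINT, OPENAI_API_KEY, and payload), the header can be inspected like this after the cache has been enabled and the same prompt is sent again:

# Send the identical prompt again after enabling the cache on the gateway.
cached_response = requests.post(
    GATEWAY_ENDPOINT,
    headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
    json=payload,
    timeout=60,
)

# "HIT" means the response was served from the gateway cache
# instead of being forwarded to the LLM service provider.
print(cached_response.headers.get("OE-Cache-Status"))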
To access other LLM service providers through the AI Gateway, refer to the operations above.
Note:
To further improve the development experience and simplify API integration, AI Gateway has released API V2. This version brings significant improvements and optimizations in several key areas, particularly the unification of the request body and response data formats. You are welcome to try it; see the AI Gateway V2 Overview.
Major Updates:
Unified Request Endpoint: a single request address, so developers no longer need to switch between different endpoint addresses.
Unified Request Body Format: All API requests now use a consistent JSON format, simplifying the data construction process for clients.
Standardized Response Data Structure: A unified response format helps developers quickly understand and process returned information. It includes a status code, message, and data body, making error handling and success feedback more intuitive and clear.
Enhanced Error Handling Mechanism: Clear error code definitions facilitate problem localization and resolution. Detailed error descriptions help developers quickly take corrective actions.
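For illustration only, a unified response envelope along these lines (status code, message, and data body) might be handled as follows. The field names, values, and error handling here are hypothetical, not the documented V2 schema; refer to the V2 Overview for the actual format.

# Hypothetical sketch of handling a unified V2-style response envelope.
# All field names and values below are invented for illustration.
result = {
    "code": 0,             # status code: 0 for success, non-zero for errors
    "message": "success",  # human-readable message
    "data": {"choices": [{"message": {"content": "Hello!"}}]},
}

if result["code"] == 0:
    # Success: the model output lives in the data body.
    print(result["data"]["choices"][0]["message"]["content"])
else:
    # Error: the code and message make problem localization straightforward.
    print(f"request failed: code={result['code']}, message={result['message']}")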