Edge AI
Overview
EdgeOne deploys AI services to global edge nodes, giving developers low-latency, high-performance, zero-ops AI inference. This feature addresses the high latency and high cost of traditional cloud-based AI services, letting Pages users integrate AI features into their applications more easily, improving user experience while reducing development and operations costs.
Currently, the DeepSeek R1 model is deployed at global edge nodes, so Pages projects can quickly access and use AI capabilities. All users can try this service for free, and you can integrate intelligent dialogue features into your website through simple API calls.
Core Advantages
Ready-To-Use Model Service
Preset, optimized DeepSeek-R1 model
Directly call the AI model from Pages Functions
No operations work such as model deployment or version management
Assurance Of Low-Latency Response
Automatic routing of requests to the nearest edge node
Streaming support to reduce first-byte latency
Built-in connection reuse and transmission optimization
Seamless Development Experience
Native integration with EdgeOne Pages projects
Automatic inheritance of domain and HTTPS configuration
Standardized API call templates
Integration Process
1. Click "Create Project" on the Pages Console.
2. Select the "DeepSeek-R1 for Edge" template for deployment.
3. Clone the repository locally. In the project's edge function, the following example code is the core module that calls the AI model.
// In the edge function (example path: /functions/v1/chat/completions/index.js)
export async function onRequestPost({ request }) {
  // Resolve user input
  const { content } = await request.json();
  try {
    // Call Edge AI service
    const response = await AI.chatCompletions({
      model: '@tx/deepseek-ai/deepseek-r1-distill-qwen-32b',
      messages: [{ role: 'user', content }],
      stream: true, // Enable streaming output
    });
    // Return streaming response
    return new Response(response, {
      headers: {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive',
        'Access-Control-Allow-Origin': '*',
        'Access-Control-Allow-Methods': 'POST, OPTIONS',
        'Access-Control-Allow-Headers': 'Content-Type, Authorization',
      },
    });
  } catch (error) {
    return new Response(JSON.stringify({
      error: 'AI_SERVICE_ERROR',
      message: error.message
    }), { status: 503 });
  }
}
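On the client side, the streamed response arrives as Server-Sent Events over the deployed route (the example path is /functions/v1/chat/completions). The sketch below assumes an OpenAI-style event payload (choices[0].delta.content); the parseSSEChunk helper is a hypothetical illustration, not part of any EdgeOne SDK, so adapt it to the actual event format your deployment returns.

```javascript
// Hypothetical helper: extract text payloads from a raw SSE chunk.
// Each event line looks like 'data: {...}'; 'data: [DONE]' ends the stream.
function parseSSEChunk(chunk) {
  const payloads = [];
  for (const line of chunk.split('\n')) {
    if (!line.startsWith('data:')) continue;
    const data = line.slice(5).trim();
    if (data === '' || data === '[DONE]') continue;
    try {
      // Assumed OpenAI-style payload shape; verify against the real stream.
      const text = JSON.parse(data).choices?.[0]?.delta?.content;
      if (text) payloads.push(text);
    } catch {
      // This simple sketch ignores JSON split across chunk boundaries.
    }
  }
  return payloads;
}

// Usage sketch: POST a prompt and log tokens as they stream in.
async function streamChat(content) {
  const res = await fetch('/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ content }),
  });
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    for (const text of parseSSEChunk(decoder.decode(value))) {
      console.log(text);
    }
  }
}
```

Rendering tokens as they arrive is what makes the streaming mode feel responsive; buffering the whole response before display would forfeit the first-byte latency advantage.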
Precautions
API calls are currently rate limited; please pace your requests accordingly.
Implementing an error-handling mechanism is recommended to improve application stability.
Generating illegal content, sending high-frequency automated requests, and similar abuse are prohibited.
The service is currently in a time-limited free beta; the start of commercial availability will be announced separately.
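To stay within the rate limit and handle transient failures (the first two precautions above), callers can retry failed requests with exponential backoff. This is a minimal sketch: the retry count, delay schedule, and the choice to retry only on 429 and 503 are illustrative assumptions, not EdgeOne requirements.

```javascript
// Exponential backoff schedule: 500 ms, 1 s, 2 s, ... capped at 8 s.
function backoffDelay(attempt, baseMs = 500, capMs = 8000) {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

const MAX_RETRIES = 3; // arbitrary choice for this sketch

// Retry on 429 (rate limited) or 503 (the AI_SERVICE_ERROR status above).
async function chatWithRetry(content) {
  for (let attempt = 0; attempt <= MAX_RETRIES; attempt++) {
    const res = await fetch('/v1/chat/completions', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ content }),
    });
    if (res.ok) return res;
    if (res.status !== 429 && res.status !== 503) return res; // not retryable
    await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt)));
  }
  throw new Error('AI service unavailable after retries');
}
```

Backing off exponentially rather than retrying immediately keeps a burst of failures from amplifying the very rate-limit pressure that caused them.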
For best practices, see the companion document Implementing Edge AI on EdgeOne Pages: Operation Guide for DeepSeek R1 Template.