Edge AI
Overview
EdgeOne deploys AI services to global edge nodes, providing developers with low-latency, high-performance, zero-maintenance AI inference capabilities. This feature addresses the high latency and high cost of traditional cloud-based AI services, enabling Pages users to integrate AI features into their applications more conveniently, improve user experience, and reduce development and operations costs.
Currently, the DeepSeek R1 model is deployed on global edge nodes, so Pages projects can quickly access and use AI capabilities. All users can try it for free: simply invoke the API to integrate an intelligent dialogue feature into your website.
Core Strengths
Out-of-the-Box Model Service
Preset, optimized DeepSeek-R1 model
Call AI models directly from Pages Functions
No model deployment, version management, or Ops work required
Low-Latency Response Guarantee
Requests are automatically routed to the nearest edge node
Streaming output is supported to reduce first-token latency
Built-in connection reuse and transfer optimization
Seamless Integration and Development Experience
Seamless integration with EdgeOne Pages projects
Automatically inherits domain name and HTTPS configuration
Standardized API call templates
Access Process
1. In the Pages console, click "Create project".
2. Select the "DeepSeek-R1 for Edge" template to deploy.
3. Clone the repository locally. In the project's edge function, the following example code is the core module for calling the AI model.
```javascript
// In the edge function (example path: /functions/v1/chat/completions/index.js)
export async function onRequestPost({ request }) {
  // Parse user input
  const { content } = await request.json();
  try {
    // Call the Edge AI service
    const response = await AI.chatCompletions({
      model: '@tx/deepseek-ai/deepseek-v3-0324',
      messages: [{ role: 'user', content }],
      stream: true, // Enable streaming output
    });
    // Return the streaming response
    return new Response(response, {
      headers: {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive',
        'Access-Control-Allow-Origin': '*',
        'Access-Control-Allow-Methods': 'POST, OPTIONS',
        'Access-Control-Allow-Headers': 'Content-Type, Authorization',
      },
    });
  } catch (error) {
    return new Response(JSON.stringify({
      error: 'AI_SERVICE_ERROR',
      message: error.message,
    }), { status: 503 });
  }
}
```
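On the browser side, the streaming response above can be consumed incrementally. A minimal sketch, assuming the endpoint returns OpenAI-style SSE lines (`data: {...}` chunks, ending with `data: [DONE]`) — the exact chunk schema is an assumption and should be adjusted to the actual service output:

```javascript
// Parse one SSE line of an OpenAI-style chat stream.
// Returns the text delta, or null for non-data lines and the [DONE] marker.
// NOTE: the chunk schema here is an assumption, not confirmed by the service docs.
function extractDelta(line) {
  if (!line.startsWith('data: ')) return null;
  const payload = line.slice('data: '.length).trim();
  if (payload === '[DONE]') return null;
  try {
    const chunk = JSON.parse(payload);
    return chunk.choices?.[0]?.delta?.content ?? null;
  } catch {
    return null; // ignore malformed chunks
  }
}

// Sketch of a browser client reading the stream and appending deltas as they arrive.
async function streamChat(content, onDelta) {
  const res = await fetch('/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ content }),
  });
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop(); // keep any incomplete trailing line for the next chunk
    for (const line of lines) {
      const delta = extractDelta(line);
      if (delta !== null) onDelta(delta);
    }
  }
}
```

Buffering partial lines matters because a network chunk can end mid-line; only complete lines are parsed.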
Selectable models are as follows:
| Model ID | Daily call limit |
| --- | --- |
| @tx/deepseek-ai/deepseek-v3-0324 | 50 |
| @tx/deepseek-ai/deepseek-r1-0528 | 20 |
| @tx/deepseek-ai/deepseek-r1-distill-qwen-32b | 1000 |
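Since each model has a daily call limit, it can help to track usage locally and fail fast before the service rejects a request. A minimal in-memory sketch (the limits map mirrors the table above; real enforcement happens on the service side, and a production app would persist counters rather than keep them in memory):

```javascript
// Daily call limits per model, mirroring the table above.
const DAILY_LIMITS = {
  '@tx/deepseek-ai/deepseek-v3-0324': 50,
  '@tx/deepseek-ai/deepseek-r1-0528': 20,
  '@tx/deepseek-ai/deepseek-r1-distill-qwen-32b': 1000,
};

// In-memory usage counters keyed by model and UTC date.
// This is only a local guard; the service enforces the real quota.
const usage = new Map();

// Returns true and records the call if quota remains for `model` today, else false.
function tryConsume(model, today = new Date().toISOString().slice(0, 10)) {
  const limit = DAILY_LIMITS[model];
  if (limit === undefined) throw new Error(`Unknown model: ${model}`);
  const key = `${model}:${today}`;
  const used = usage.get(key) ?? 0;
  if (used >= limit) return false; // quota exhausted for today
  usage.set(key, used + 1);
  return true;
}
```

Calling `tryConsume(model)` before each request lets the app show a friendly "quota exhausted" message instead of surfacing a raw service error.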
Must-Knows
API calls are currently rate-limited; please control your request rate accordingly.
Implement an error handling mechanism to improve application stability.
Generating illegal content, sending high-frequency automated requests, and similar abuse scenarios are forbidden.
This is currently a limited-time free beta service; the commercial launch date is to be announced.
For best practices, see the document Implement Edge AI on EdgeOne Pages: DeepSeek R1 Template Operation Guide.