
Edge AI

Overview

EdgeOne deploys AI services to global edge nodes, providing developers with low-latency, high-performance, zero-ops AI inference capabilities. This feature addresses the high latency and high cost of traditional cloud-based AI services, letting Pages users integrate AI features into their applications more conveniently, improving user experience while reducing development and operations costs.

We have currently deployed the DeepSeek R1 model on global edge nodes, so Pages projects can quickly access and use AI capabilities. All users can try this service for free, and you can integrate intelligent dialogue features into your website through simple API calls.
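As a sketch of what such an API call can look like from the browser, the helper below builds a POST request for a chat endpoint. The endpoint path comes from the template described in the Integration Process below; the request body shape is an assumption for illustration only.

```javascript
// Build fetch options for a chat request (body shape is illustrative, not an official schema).
function buildChatRequest(content) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ content }),
  };
}

// Usage, e.g. in the browser on a deployed Pages site:
// const res = await fetch('/v1/chat/completions', buildChatRequest('Hello!'));
```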


Core Advantages

Ready-to-Use Model Service
• Preset, optimized DeepSeek-R1 model
• Call the AI model directly from Pages Functions
• No Ops work such as model deployment or version management required
Low-Latency Response
• Requests are automatically routed to the nearest edge node
• Streaming output is supported to reduce first-byte latency
• Built-in connection reuse and transmission optimization
Seamless Development Experience
• Integrates seamlessly with EdgeOne Pages projects
• Automatically inherits the domain name and HTTPS configuration
• Standardized API call templates are provided


Integration Process

1. Click "Create Project" in the Pages console.
2. Select the "DeepSeek-R1 for Edge" template and deploy it.
3. Clone the repository locally. In the project's edge function, the following example code is the core module that calls the AI model.
// In the edge function (example path: /functions/v1/chat/completions/index.js)
export async function onRequestPost({ request }) {
  // Parse the user input
  const { content } = await request.json();

  try {
    // Call the Edge AI service
    const response = await AI.chatCompletions({
      model: '@tx/deepseek-ai/deepseek-r1-distill-qwen-32b',
      messages: [{ role: 'user', content }],
      stream: true, // Enable streaming output
    });

    // Return the streaming response
    return new Response(response, {
      headers: {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive',
        'Access-Control-Allow-Origin': '*',
        'Access-Control-Allow-Methods': 'POST, OPTIONS',
        'Access-Control-Allow-Headers': 'Content-Type, Authorization',
      },
    });
  } catch (error) {
    return new Response(JSON.stringify({
      error: 'AI_SERVICE_ERROR',
      message: error.message,
    }), { status: 503 });
  }
}
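On the client side, the streamed response can be consumed incrementally. The sketch below assumes the endpoint path from the example above and an OpenAI-style SSE payload (data: lines carrying choices[0].delta.content) — verify the actual chunk format your deployment returns before relying on it.

```javascript
// Extract text deltas from one SSE chunk. Assumed line shape (not confirmed by the docs):
//   data: {"choices":[{"delta":{"content":"..."}}]}
function extractDeltas(sseChunk) {
  const out = [];
  for (const line of sseChunk.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed.startsWith('data:')) continue;
    const payload = trimmed.slice(5).trim();
    if (payload === '[DONE]') continue;
    try {
      const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
      if (delta) out.push(delta);
    } catch {
      // Ignore JSON fragments split across chunk boundaries
    }
  }
  return out;
}

// Stream a chat completion and feed each text delta to a callback.
async function streamChat(content, onText) {
  const res = await fetch('/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ content }),
  });
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    for (const text of extractDeltas(decoder.decode(value, { stream: true }))) {
      onText(text);
    }
  }
}

// Usage: streamChat('Hello!', (text) => document.body.append(text));
```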


Precautions

API calls are currently rate-limited; please throttle your request rate accordingly.
Implementing an error-handling mechanism is recommended to improve application stability.
Using the service to generate illegal content, send high-frequency automated requests, or similar abuse is prohibited.
The service is currently in a time-limited free beta; the start of commercial availability will be announced separately.
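The rate-limit and error-handling advice above can be combined into a small retry wrapper on the client. This is a minimal sketch: the delay values and the set of retried status codes are assumptions for illustration, not documented limits of the service.

```javascript
// Exponential backoff schedule: baseMs, 2*baseMs, 4*baseMs, ... (values are illustrative).
function backoffDelays(attempts, baseMs = 500) {
  return Array.from({ length: attempts }, (_, i) => baseMs * 2 ** i);
}

// Retry a fetch on rate-limit (429) or service-error (503) responses.
async function fetchWithRetry(url, init, attempts = 3) {
  let lastErr;
  for (const delay of [0, ...backoffDelays(attempts - 1)]) {
    if (delay) await new Promise((resolve) => setTimeout(resolve, delay));
    try {
      const res = await fetch(url, init);
      if (res.status !== 429 && res.status !== 503) return res; // success or non-retryable
      lastErr = new Error(`HTTP ${res.status}`);
    } catch (err) {
      lastErr = err; // network failure: retry as well
    }
  }
  throw lastErr;
}
```

Pairing this with a user-visible error message keeps the application stable when the beta rate limit is hit.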