Use the DeepSeek-R1 model to quickly build a conversational AI site
This document introduces the "AI Chat Template" of EdgeOne Pages, which lets developers build a conversational AI website in just one minute. You can also explore further uses: add an AI assistant to a tech blog, build a knowledge base retrieval prototype for your team, and experience the millisecond-level response of Edge AI at zero cost.
Overview
DeepSeek R1, a popular model that has garnered 70,000 stars on GitHub and over 4 million downloads on Hugging Face, has attracted significant attention for its powerful language understanding and generation capabilities. Against this backdrop, we are offering free edge deployment of DeepSeek R1 and have launched the "AI Chat Template" on EdgeOne Pages. The aim is to give developers a low-cost, high-efficiency way to build conversational AI and to promote the adoption and innovative application of edge AI across diverse scenarios.
Template Features and Advantages:
Comes preset with the DeepSeek-R1 model: no complex configuration needed, just launch and go
Pushes AI computing power down to edge nodes, achieving lower latency for a faster, smoother experience
The AI model can be called directly from Pages Functions
Native support for the OpenAI API standard, so existing toolchains are plug-and-play
Visit the demo site https://deepseek-r1-edge.edgeone.app to experience the speed of Edge AI, or log in to the EdgeOne Pages console to deploy your own AI application with one click.
Scenarios
With this template, developers can run proofs of concept in a variety of scenarios. For example, you can rapidly deploy an intelligent customer service system that integrates seamlessly into an enterprise website, accurately interprets user inquiries, and automatically pushes operation guides, reducing the volume of basic questions that need manual handling. You can also use it to build automation assistants in SaaS products, supporting high-frequency scenarios such as ticket categorization, data report interpretation, and business process configuration, helping users save time on repetitive operations and integrating AI capabilities deeply into business workflows.
Directions
1. Create a Project
Click the link https://console.tencentcloud.com/edgeone/pages/new?template=deepseek-r1-edge to go directly to the creation page, configure the project, and then click "Create Now".

2. Deploy
Once the project is created, it enters the deployment process. You can follow the deployment progress on the build deployment details page.

3. Generate a Preview URL
After the deployment succeeds, click the preview button in the project overview to generate a preview URL and quickly verify the deployment result.

4. Send a Question
Open the preview URL and send any question to DeepSeek R1; you will see a streaming response with very low latency.

This is only a simple demo. You can also clone the code locally and modify the project as you like, for example by adding new modules to enrich its capabilities.
In addition, requests go directly from edge nodes to the large model: a fully serverless AI interface that can be called for free, with no API key required.
// In the edge function (example path: /functions/v1/chat/completions/index.js)
export async function onRequestPost({ request }) {
  // Parse user input
  const { content } = await request.json();
  try {
    // Call the Edge AI service
    const response = await AI.chatCompletions({
      model: '@tx/deepseek-ai/deepseek-r1-distill-qwen-32b',
      messages: [{ role: 'user', content }],
      stream: true, // Enable streaming output
    });
    // Return a streaming response
    // ......
  } catch (error) {
    // ......
  }
}
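A front-end page can then call this function directly. The snippet below is only a minimal sketch: it assumes the function is served at /v1/chat/completions (the example path above) and that, with streaming enabled, the response comes back as OpenAI-style server-sent events; adjust the parsing if your template returns a different stream format.

// Minimal client-side sketch: call the edge function and read the streamed reply.
// Assumptions: the function is served at /v1/chat/completions and the response
// is an OpenAI-style SSE stream ("data: {...}" lines ending with "data: [DONE]").
async function askDeepSeek(content: string): Promise<string> {
  const response = await fetch('/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ content }),
  });
  if (!response.ok || !response.body) {
    throw new Error(`Request failed: ${response.status}`);
  }

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let answer = '';

  // Read the stream chunk by chunk and collect the delta content.
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    for (const line of decoder.decode(value, { stream: true }).split('\n')) {
      const data = line.replace(/^data:\s*/, '').trim();
      if (!data || data === '[DONE]') continue;
      try {
        const chunk = JSON.parse(data);
        answer += chunk.choices?.[0]?.delta?.content ?? '';
      } catch {
        // Ignore JSON fragments split across chunks (simplified handling).
      }
    }
  }
  return answer;
}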
Since the default project domain is only valid for a limited period, we recommend adding a custom domain (e.g., www.example.com) to serve as a persistently callable API address.

API calls also comply with the OpenAI API standard, which means you can easily switch between AI providers and reuse existing code and tools.
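For example, code written against the official OpenAI Node SDK can be pointed at your deployment simply by changing the baseURL. The sketch below is illustrative only: www.example.com stands in for your custom domain, the API key is a placeholder because the edge endpoint does not require one, and it assumes the deployed function accepts a standard chat/completions request body.

import OpenAI from 'openai';

// Point the standard OpenAI SDK at the EdgeOne Pages deployment.
// www.example.com stands in for your custom domain; the API key is a
// placeholder because the edge endpoint does not require one.
const client = new OpenAI({
  baseURL: 'https://www.example.com/v1',
  apiKey: 'not-needed',
});

async function main() {
  const stream = await client.chat.completions.create({
    model: '@tx/deepseek-ai/deepseek-r1-distill-qwen-32b',
    messages: [{ role: 'user', content: 'What is edge computing?' }],
    stream: true, // Stream tokens as they are generated
  });

  // Print each streamed delta as it arrives.
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
  }
}

main();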
The following code shows how to integrate the service into your AI business logic, implementing an automatic failover mechanism across multiple AI providers for fault tolerance and giving your application an extra layer of resilience.
export class AIService {
  // ......
  constructor(
    deepseekApiKey: string,
    siliconFlowApiKey: string,
    groqApiKey: string,
    tencentApiKey: string
  ) {
    // Initialize AI service configuration
    this.serviceConfigs = [
      // Add configuration item
      {
        name: 'edgeonepages',
        baseURL: 'https://www.example.com/v1', // Replace www.example.com with your domain name
        enabled: true,
        priority: 1 // highest priority
      },
      {
        name: 'tencent',
        baseURL: 'https://api.lkeap.cloud.tencent.com/v1',
        model: 'deepseek-v3',
        enabled: true,
        priority: 2 // second priority
      },
      {
        name: 'deepseek',
        baseURL: 'https://api.deepseek.com/v1',
        model: 'deepseek-chat',
        enabled: true,
        priority: 3 // third priority
      }
    ];
    // ......
  }
}
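The failover logic itself can then be as simple as trying each enabled provider in priority order and falling back to the next on error. The sketch below only illustrates that idea; the ServiceConfig shape and the callProvider helper are hypothetical and not part of the template.

// Hedged sketch of the failover loop: try providers in priority order and
// fall back to the next one when a call fails. ServiceConfig and callProvider
// are hypothetical helpers, not part of the template.
interface ServiceConfig {
  name: string;
  baseURL: string;
  model?: string;
  enabled: boolean;
  priority: number;
}

async function chatWithFailover(
  configs: ServiceConfig[],
  messages: { role: string; content: string }[]
): Promise<string> {
  const candidates = configs
    .filter((c) => c.enabled)
    .sort((a, b) => a.priority - b.priority);

  let lastError: unknown;
  for (const config of candidates) {
    try {
      // callProvider issues an OpenAI-compatible request to config.baseURL.
      return await callProvider(config, messages);
    } catch (error) {
      lastError = error; // Remember the failure and try the next provider.
    }
  }
  throw new Error(`All AI providers failed: ${String(lastError)}`);
}

async function callProvider(
  config: ServiceConfig,
  messages: { role: string; content: string }[]
): Promise<string> {
  const response = await fetch(`${config.baseURL}/chat/completions`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: config.model, messages }),
  });
  if (!response.ok) throw new Error(`${config.name} returned ${response.status}`);
  const data = await response.json();
  return data.choices?.[0]?.message?.content ?? '';
}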
Conclusion
The DeepSeek R1 AI Chat template for EdgeOne Pages gives developers a way to deploy AI applications on edge nodes. Although it is still at an early stage, it has already shown great potential. We sincerely invite developers to try the template, experience DeepSeek R1 on edge nodes, and feel the convenience and efficiency of Pages. We also look forward to your valuable feedback to help us keep improving the Pages experience.