To create an AI chatbot and integrate it with another platform, you need to communicate with a large language model through an API. The API receives prompts from the client and sends them to the model, which generates the answers.
In this tutorial, you will learn how to build such an API around the DeepSeek R1 large language model so that external applications can call it. We will use the DeepSeek R1 model available on HuggingFace and the Plumber R package to deploy it as an API.
HuggingFace is an open-source platform for building, training, and deploying machine learning models, while Plumber is an R package that exposes R code as RESTful APIs accessible to other applications through HTTP requests.
With this API, you can:

- Build AI applications
- Connect to external data and extract meaningful insights
- Integrate it into existing applications to provide customer support, create documentation, and so on
What is the DeepSeek R1 Model?
DeepSeek R1 is the latest large language model from the Chinese company DeepSeek. It was designed to enhance the problem-solving and analytic capabilities of AI systems.
DeepSeek R1 uses reinforcement learning and supervised fine-tuning to handle complex reasoning tasks. Unlike proprietary models, it is open source and free to use.
Prerequisites
- Sign up for a HuggingFace account if you don't already have one
- Install R and RStudio
- Install the `plumber` R package to build the API endpoint
- Install the `httr2` R package to work with HTTP requests and interact with the Hugging Face API (a quick install snippet follows this list)
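If any of these packages are not yet installed, you can add them from CRAN. The snippet below is a minimal sketch; it also installs `dotenv`, which this tutorial uses in Step 2 to load the access token from the `.env` file.

```r
# Install the R packages used in this tutorial:
# - plumber: exposes R functions as API endpoints
# - httr2:   builds and performs HTTP requests
# - dotenv:  loads variables from a .env file (used in Step 2)
install.packages(c("plumber", "httr2", "dotenv"))
```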
Step 1: Create Your Project Repository
You need an R project to build an API application in R. This keeps all the files your API depends on together in the same directory. RStudio already provides a template for Plumber API projects, so you can follow the steps below to create yours.
In the RStudio IDE, click the File menu and go to New Project to open the New Project Wizard. Once in the wizard, select New Directory, then click New Plumber API Project. In the directory name field, give the project a name (for example `DeepSeek-R1 API`), and then click Create Project.
You will see a file called `plumber.R` with a sample API template. This is where you'll write the code to connect to the DeepSeek R1 model on HuggingFace. Make sure that you clear this template before proceeding.
Next, go to your terminal and create a `.env` file. This is where you will store the Hugging Face API key:

```bash
touch .env
```

Then create a `.gitignore` file and add the `.env` file to it. This ensures that sensitive information like access tokens and API keys is not pushed to your Git repository.
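If you prefer to stay inside R, the `usethis` package (not otherwise used in this tutorial, so treat this as an optional alternative) can append the entry to `.gitignore` for you:

```r
# Optional: add .env to .gitignore from the R console.
# Requires the usethis package: install.packages("usethis")
usethis::use_git_ignore(".env")
```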
Step 2: Create a Hugging Face Access Token
We need to create an access token to connect to Hugging Face models. Go to your profile, click Settings, and click Create New Token to generate an access token for your Hugging Face account.
Copy the access token and paste it into your `.env` file, giving it the name `HUGGINGFACE_ACCESS_TOKEN`:

```
HUGGINGFACE_ACCESS_TOKEN="<your-access-token>"
```
Next, install the `dotenv` package and paste the following code at the top of your `plumber.R` file:

```r
dotenv::load_dot_env()
```

`dotenv::load_dot_env()` loads all the environment variables in the `.env` file, making them available to the `plumber.R` script.
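To confirm the token was picked up before moving on, you can run a quick optional check in the R console (a sketch, not part of the API code itself):

```r
# Load the .env file and confirm the access token is not empty.
dotenv::load_dot_env()
nzchar(Sys.getenv("HUGGINGFACE_ACCESS_TOKEN"))  # should print TRUE
```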
Step 3: Build the DeepSeek API Endpoint
Now that we have our project environment set up and API token ready, we’ll write the code to build the API application by connecting to the DeepSeek R1 model on HuggingFace.
Go to the `plumber.R` file and load the following libraries:

```r
library(plumber)
library(httr2)
```
Copy and paste the following code into `plumber.R`:

```r
api_key <- Sys.getenv("HUGGINGFACE_ACCESS_TOKEN")

#* Send a prompt to the DeepSeek R1 model and return its reply
#* @param prompt The text or question to send to the model
#* @post /deepseek_chat
function(prompt) {
  url <- "https://huggingface.co/api/inference-proxy/together/v1/chat/completions"

  # Build the request: authenticate with the access token and send the
  # model name, message role, prompt, and max_tokens in the JSON body
  req <- request(url) |>
    req_auth_bearer_token(api_key) |>
    req_body_json(list(
      model = "deepseek-ai/DeepSeek-R1",
      messages = list(
        list(role = "user", content = prompt)
      ),
      max_tokens = 500  # maximum length of the reply; adjust as needed
    ))

  # Perform the request and parse the JSON response into an R list
  res <- req_perform(req)
  parsed_data <- resp_body_json(res)

  # Return the model's reply text from the chat-completions response
  content <- parsed_data$choices[[1]]$message$content
  content
}
```
Here's what's going on in the above code:

- `Sys.getenv` gets the HuggingFace access token and stores it in the variable `api_key`.
- The `url` variable contains the API link used to access the DeepSeek model on HuggingFace. You can get this by searching for the model name `deepseek-ai/DeepSeek-R1` on HuggingFace, clicking the View Code button, and copying the API URL under the cURL tab.
- `#* @post /deepseek_chat` means that the endpoint accepts a POST request at the path `/deepseek_chat`.
- The endpoint takes an argument `prompt`, the text or question a user is expected to provide.
- The `req` object is a chain of operations: it creates a `request()` to the `url` and passes the `api_key` to the `req_auth_bearer_token()` function, which adds the authorization header required to make a request to the Hugging Face API. Model properties such as the `model` name, `role`, `prompt`, and `max_tokens` are passed to the `req` object through the `req_body_json()` function.
- The request is performed and the response is captured in the `res` object using the `req_perform()` function.
- The `res` object contains a JSON response, which is parsed into R using the `resp_body_json()` function and stored in `parsed_data`.
- The `content` of `parsed_data` is then returned, so the application calling the API can extract the information it needs.
Step 4: Test the API Endpoint
Let's run the API endpoint to see how the application performs. Click Run API. This will automatically open the API documentation in your browser at a URL such as http://127.0.0.1:8634/docs/ (the port number may differ on your machine).
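If you prefer to start the API from the R console instead of the Run API button, a minimal sketch looks like this (the port number 8634 is only an example and can be any free port):

```r
library(plumber)

# Serve the endpoints defined in plumber.R on a local port.
pr("plumber.R") |>
  pr_run(port = 8634)
```

Either way, the interactive documentation page works the same.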
Click on the API endpoint dropdown, provide a prompt, and click the Execute button. You should receive a reply in a few minutes.
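You can also test the endpoint from outside the documentation page. For example, calling it from a separate R session with `httr2` might look like the sketch below (the port 8634 matches the URL shown above; adjust it if yours differs):

```r
library(httr2)

# Send a prompt to the locally running /deepseek_chat endpoint.
# req_body_json() turns the request into a POST with a JSON body.
response <- request("http://127.0.0.1:8634/deepseek_chat") |>
  req_body_json(list(prompt = "Explain what a REST API is in one sentence.")) |>
  req_perform()

# Inspect the reply returned by the DeepSeek R1 model.
resp_body_json(response)
```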
Conclusion
With your API, you can make inferences against the Hugging Face model and build AI applications in R or other programming languages. To make the API accessible to clients online, you need to host it. There are various ways of hosting an R Plumber application: you can use Docker or host it on DigitalOcean using the plumberDeploy R package. However, the simplest way is to use Posit Connect.
You can use the same approach from this tutorial to try out other HuggingFace models and build APIs that generate images or translate between languages. R Plumber is easy to use, and its documentation provides many resources.
If you are interested in model deployment using R Plumber, you can check out this article on how to deploy a Time Series model built with Prophet.
If you find this article interesting, please check my other articles on learndata.xyz.