How to Use Vultr Cloud Inference in Node.js with Langchain

Updated on April 22, 2024

Introduction

Vultr Cloud Inference allows you to run inference workloads for large language models such as Mixtral 8x7B, Mistral 7B, Meta Llama 2 70B, and more. Using Vultr Cloud Inference, you can run inference workloads without having to worry about the infrastructure, and you only pay for the input and output tokens.

This article demonstrates the step-by-step process of using Vultr Cloud Inference in Node.js with Langchain.

Prerequisites

Before you begin, you must:

Have access to a Vultr Cloud Inference subscription and its API key.
Install Node.js and NPM on your workstation.

Set Up the Environment

Create a new project directory and navigate to the project directory.

console
$ mkdir vultr-cloud-inference-nodejs-langchain
$ cd vultr-cloud-inference-nodejs-langchain

Create a new Node.js project.

console
$ npm init -y

Install the required Node.js packages. The @langchain/openai package provides the chat client, and @langchain/core provides the message classes used later in this guide.

console
$ npm install @langchain/openai @langchain/core

Inference via Langchain

Vultr Cloud Inference exposes an OpenAI-compatible API, so you can run inference workloads from Node.js with Langchain's @langchain/openai package by pointing the chat client at the Vultr Cloud Inference endpoint.

Create a new JavaScript file named inference-langchain.js.

console
$ nano inference-langchain.js

Add the following code to inference-langchain.js.

javascript
const { ChatOpenAI } = require('@langchain/openai');
const { HumanMessage, SystemMessage } = require('@langchain/core/messages');

const apiKey = process.env.VULTR_CLOUD_INFERENCE_API_KEY;

// Set the model
// List of available models: https://api.vultrinference.com/v1/chat/models
const model = '';
const messages = [
  new HumanMessage('What is the capital of India?'),
];

async function main() {
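    // Create a chat client that sends requests to the Vultr Cloud Inference endpoint instead of OpenAI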
    const client = new ChatOpenAI({
        openAIApiKey: apiKey,
        modelName: model,
        configuration: {
            baseURL: 'https://api.vultrinference.com/v1',
        }
    });
    
    const llmResponse = await client.invoke(messages);
    console.log(llmResponse.content);
}

main();

Set your API key as an environment variable, then run the inference-langchain.js file.

console
$ export VULTR_CLOUD_INFERENCE_API_KEY=<your_api_key>
$ node inference-langchain.js

Here, the inference-langchain.js file uses the @langchain/openai package to send chat completion requests to Vultr Cloud Inference through its OpenAI-compatible endpoint. Langchain represents conversation turns with message classes from @langchain/core, such as HumanMessage for the user's prompt and SystemMessage for instructions that steer the model's behavior. For more information, refer to the Langchain documentation.
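For example, you can prepend a SystemMessage before the user's prompt to control how the model responds. The following minimal sketch (the system prompt wording is illustrative) builds such a message list and passes it to the same ChatOpenAI client created in inference-langchain.js.

javascript
const { HumanMessage, SystemMessage } = require('@langchain/core/messages');

const messages = [
    // SystemMessage sets the assistant's overall behavior for the conversation
    new SystemMessage('You are a helpful assistant that answers in one short sentence.'),
    // HumanMessage carries the user's prompt
    new HumanMessage('What is the capital of India?'),
];

// Reuse the ChatOpenAI client from inference-langchain.js
// const llmResponse = await client.invoke(messages);
// console.log(llmResponse.content);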

Conclusion

In this article, you learned how to use Vultr Cloud Inference in Node.js with Langchain. You can now integrate Vultr Cloud Inference into your Node.js applications that use Langchain to generate completions from large language models.