Options to Integrate ChatGPT with Salesforce


In this section, we will go over the various options for integrating GPT with Salesforce. There are three ways to bring ChatGPT into Salesforce:

  1. Build a custom integration using Salesforce Apex to call the GPT model API
  2. Use a pre-built integration package or solution available on the Salesforce AppExchange
  3. Create and host a GPT model on a public cloud or a local machine

I don’t believe any ISV partner (https://appexchange.salesforce.com/) has developed a package to integrate with the GPT model yet. Also, creating and hosting a GPT model is a complex task that requires a good understanding of machine learning and artificial intelligence.

GPT-3 is a pre-trained model provided by OpenAI; it is not publicly available to download and host on your own server or in a public cloud. So options #2 and #3 are not viable at the moment.

Integrate GPT-3 Model API with Salesforce

To get predicted text from a ChatGPT model, we need to use the API provided by the service that hosts the GPT model: send an HTTP request from Apex to a specific endpoint, putting the authentication information in the request headers and the prompt and other parameters in the request body.

High-level steps to integrate ChatGPT with Salesforce:

  1. Sign up for an API key from OpenAI
    • Go to the OpenAI website (https://openai.com/) and click on the “Sign In” button in the top right corner of the page.
    • After signing in, click on the “API” button at the top of the page.
    • Click on the “Create an API Key” button on the API page.
    • Provide a name for your API Key and select the permissions you want to grant to the key.
    • Click on the “Create” button to create the key.
  2. Once you have an API key, you can use it to make requests to the GPT-3 API.
  3. The endpoint for generating text from a GPT-3 model is: https://api.openai.com/v1/engines/davinci-codex/completions
  4. You can make a POST request to this endpoint, passing your API key in the Authorization header and a JSON payload that includes the prompt text you want completions for, plus any other parameters to customize the generated text.
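
Here is a minimal Python sketch of that request (replace YOUR_API_KEY with your real key; it assumes the requests library is installed):
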
import requests

api_key = "YOUR_API_KEY"
model = "davinci-codex"
prompt = "Write something interesting about Salesforce integration"

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}"
}

# The engine name is part of the URL for this endpoint, so it is not
# repeated in the request body.
data = {
    "prompt": prompt,
    "max_tokens": 50,
    # Optional stop sequence: generation halts if this string appears
    "stop": ["Salesforce integration"]
}

response = requests.post(
    f"https://api.openai.com/v1/engines/{model}/completions",
    headers=headers,
    json=data
)

print(response.json())
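
The JSON response from the completions endpoint contains a choices list; the generated text of the first completion is under choices[0].text.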

Make a GPT-3 API HTTP request in Apex

// API key issued by OpenAI. In a real org, store it in a protected custom
// setting, custom metadata, or a Named Credential rather than hard-coding it.
String apiKey = 'YOUR_API_KEY';

// Request body: the prompt plus any optional parameters such as max_tokens
String jsonPayload = JSON.serialize(new Map<String, Object>{
    'prompt' => 'Write something interesting about Salesforce integration',
    'max_tokens' => 50
});

Http http = new Http();
HttpRequest request = new HttpRequest();
request.setEndpoint('https://api.openai.com/v1/engines/davinci-codex/completions');
request.setMethod('POST');
request.setHeader('Content-Type', 'application/json');
request.setHeader('Authorization', 'Bearer ' + apiKey);
request.setBody(jsonPayload);
HttpResponse response = http.send(request);

if (response.getStatusCode() == 200) {
    // Success: parse the JSON response
    Map<String, Object> jsonResponse = (Map<String, Object>) JSON.deserializeUntyped(response.getBody());
    // Process jsonResponse (see the sketch below for reading the generated text)
} else {
    // Handle the error, e.g. log response.getStatus() and response.getBody()
}
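
The generated text comes back inside a choices array. A minimal sketch of pulling it out of jsonResponse inside the success branch above, assuming the standard GPT-3 completions response shape, could look like this:

List<Object> choices = (List<Object>) jsonResponse.get('choices');
if (choices != null && !choices.isEmpty()) {
    Map<String, Object> firstChoice = (Map<String, Object>) choices.get(0);
    // The completion text itself lives under the "text" key
    String generatedText = (String) firstChoice.get('text');
    System.debug(generatedText);
}

Also remember that Apex callouts to external services are blocked by default: add https://api.openai.com as a Remote Site Setting (or use a Named Credential) before making the request.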

This is just an example of how you might call the GPT-3 API provided by OpenAI; different providers might have different endpoints, authentication, and parameters.

Browse the OpenAI documentation about the endpoints

https://beta.openai.com/docs/models/gpt-3

It’s worth noting that OpenAI provides the GPT models, not a chatbot service; however, a GPT model can be used as the core of a chatbot. You will need to build the chatbot architecture, prompt handling, and context management yourself.
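
For example, a very simple, hypothetical sketch of context management in Apex could keep the conversation so far and prepend it to each new prompt before the callout (the class and method names here are illustrative, not part of any OpenAI or Salesforce API):

public class SimpleChatContext {
    // Running conversation, so each new prompt carries the prior turns
    private List<String> turns = new List<String>();

    public void addUserTurn(String message) {
        turns.add('User: ' + message);
    }

    public void addBotTurn(String message) {
        turns.add('Bot: ' + message);
    }

    // Builds the full prompt to send in the "prompt" field of the request body
    public String buildPrompt(String newUserMessage) {
        String history = String.join(turns, '\n');
        return history + '\nUser: ' + newUserMessage + '\nBot:';
    }
}

In a real chatbot you would also trim or summarize older turns so that the prompt stays within the model’s token limit.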

Browse more at https://twirltech.in/architect-blogs/
