Access any website using OpenAI GPT API, Airtable & Make.com: A Step-By-Step Guide

Greg Vonf @ Business Automated
9 min read · Sep 3, 2023

The OpenAI GPT models have been trained on a significant amount of information, yet once training is finished, these models no longer have access to up-to-date data. That is why you often get inaccurate answers, or a direct reminder about the model's knowledge cut-off date.

If you are using ChatGPT, there are plugins like WebPilot or Web Requests that allow you to query websites and search the internet for recent information. How would you do it with the API, though?

The solution that enables you to supply the latest information to GPT models via the API, and so make use of current knowledge, is the function-calling feature combined with Make.com to execute those functions. In this in-depth guide, we will dive into how you can configure the OpenAI GPT API to access and interact directly with any website.

Before we move ahead, be sure to check out the video for this guide available on our YouTube channel here.

Also, don’t forget to have a look at our Gumroad page to download the scenario blueprint for Airtable and Make.com, available here. You can also get a free Make account here.

This guide will take you through the whole process step by step.

Required software:

  • Airtable account — a Pro plan is needed to benefit from automations and scripts
  • Make.com — possible to start on the free plan
  • OpenAI account — provides some credits to start with

Step 1: Setting up the Airtable base

The first step is to set up an Airtable base to create prompts and receive answers. It will contain 2 tables:

  • Questions
  • Prompts

The Questions table contains the name of the question, the relevant URL to research or a free-text query, as well as a linked field to select a prompt.

We also have a Trigger button to start the API request, plus two more fields: one for the raw data returned from the accessed website or search query, and one for the final GPT output.

The Questions table, where we will ask questions and receive responses from OpenAI

The Prompts table contains prompts that indicate the action you would like to take with the result of your internet search, such as summarizing or rewriting content, or even translating it into another language. Using the Prompts table lets you easily reuse longer, frequently used prompts. You can also easily experiment with different questions using different types of prompts and compare the results.

The Prompts table in Airtable.
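For orientation, the field layout could look roughly like this (the field names are only suggestions — adjust them to your own base):

Questions table: Name | URL or query | Prompt (link to Prompts) | Trigger (button) | Raw content | GPT output
Prompts table: Name | Prompt text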

You can also download the base starter for free from here.

Step 2: Creating webhook and starting Make scenario with Airtable button

Our objective is to create a complete Make.com scenario that will conduct the necessary internet search, perform the OpenAI GPT API request, and return the data to Airtable.

In case you want to save time and do not want to go through all the steps yourself, you can also download the completed blueprint for a small fee here.

Complete Make.com scenario

If you have not used Make.com before — it is a visual automation builder that greatly simplifies connecting to APIs. You can register for a free account here.

We need to start the scenario with a new webhook.

Create a new webhook in Make to use as the trigger for the scenario in Airtable

We will use this webhook (the URL) inside Airtable in the button settings. Note that we are dynamically adding the recordId as a parameter at the end.

Using Webhook in Airtable to start an automation
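As a sketch, the button's URL formula could look something like this — the webhook URL is a placeholder, paste the one Make generated for you:

"https://hook.eu1.make.com/your-webhook-id?recordId=" & RECORD_ID()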

Since the Button field in Airtable is nothing but a glorified hyperlink, it will open a new window on every click. I personally find this quite annoying, so I always add a small trick to force the window to close automatically once the scenario has succeeded.

To do that, make sure you add a Webhook Response module in your Make scenario.

This small chunk of code will automatically cause the window to close.

The code returned in the body is quite simple:

<!DOCTYPE html>
<html>
  <head>
    <title>Close Window</title>
  </head>
  <body>
    <script>
      window.onload = function() {
        // Re-opening the current page in the same tab ("_self") gives the
        // script a window reference it is allowed to close.
        var openedWindow = window.open("", "_self");
        openedWindow.close();
      };
    </script>
  </body>
</html>

Step 3: Getting data from the internet

In the next step we will connect to the internet to retrieve information from the page specified as the URL inside our Airtable base. In case we decide to use a text query instead of an actual URL, we will use that text to make a basic request to Google search, which will return the first page of search results.

Note: Google search results are often country specific. In the case of Make.com, the request is made by a server located either in the US or the EU, depending on the preference you indicated when creating your Make account.

Retrieving the content of a website or search result from the internet.
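When a free-text query is used, the fallback is simply a plain HTTP GET against Google's public search URL with the query appended; a minimal sketch (the q value must be URL-encoded, and the exact module mapping in the blueprint may differ):

GET https://www.google.com/search?q=best+standing+desk+2023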

The result returned from this request will be a full page of HTML, which contains far too many characters for the OpenAI GPT model to handle. This is why we need to get rid of the HTML by converting the page to text, using the Text Parser: HTML to text module built into Make.com. Afterwards, we use one more Text Parser: Replace module to remove any links remaining in the text, except the /url? search-result links, using the following regular expression:

\[(?!\/url\?.*).*?\] 
Transforming the website content from HTML to text and removing unnecessary links (e.g. links to images)
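To illustrate the effect on a made-up fragment of the converted text — bracketed links that do not start with /url? are removed, while Google's /url? result links survive:

Before: Example Domain [https://www.iana.org/domains/example] Next result [/url?q=https://example.com/page]
After: Example Domain  Next result [/url?q=https://example.com/page]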

Step 4: Inserting website content as context to GPT model

Before we can make a request to OpenAI with the content of our website or query added as context, we will need to do a few transformations. We will also be executing a custom request to the OpenAI API using the new function feature, which means we need to prepare the content of the request ahead of time.

To make sure that the text from the website does not break the JSON payload used in the HTTP request, we need to escape its content by converting it into JSON. We use the native Make JSON module Transform to JSON for this.

Escaping long form text to use with JSON requests in Make.
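As a made-up illustration of what that step produces — newlines and quotes in the scraped text become escape sequences, so the value can be dropped straight into the request body:

Raw text:
Breaking news:
"Markets" rallied today

After Transform to JSON:
"Breaking news:\n\"Markets\" rallied today"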

We will also need to prepare all the individual elements of the GPT chat API request:

  • system prompt — in our case we simply explain: “You are a skillful web researcher”
  • user prompt — this is where we provide our prompt combined with the URL or query; we also encode it as JSON later so that newlines do not break the JSON structure
  • search function — this is where we add a function definition so that the GPT model understands it is making and receiving function calls

I am using the Set multiple variables module to make adjusting those variables easier.
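To give a feel for it, the assembled user prompt for one row could read roughly like this (the wording is illustrative and comes from the Prompts and Questions fields):

Summarize the key points of the following page in five bullet points.
URL or query: https://example.com/some-article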

Step 5: Understanding the OpenAI GPT functions

You can read more about how to implement function calling in GPT API requests on OpenAI's function calling documentation page. In this scenario we are using functions, albeit implemented with a shortcut. One could argue that you could also feed the content of the website directly into the GPT model's context without function calling at all. That would be true, but since it is a relatively new feature I wanted to demonstrate this option, so that you can make this determination for yourself.

Here is an example of the value we provide in this scenario as the search function:

{
  "name": "Get_information_from_internet",
  "description": "Get necessary information from the internet",
  "parameters": {
    "type": "object",
    "properties": {
      "q": {
        "type": "string",
        "description": "Search query, that will be passed to the search engine"
      }
    },
    "required": [
      "q"
    ]
  }
}

As you can see, the function JSON contains the following parameters:

  • name — will be used by the GPT model to indicate which function to call
  • description — this is essential for the GPT model to understand in which context this function should be called
  • parameters — described using JSON Schema (for more, visit https://json-schema.org/)

The shortcut

In our particular use case, since we know that all prompts ask GPT to do something with the content of the website, it will always reply to our HTTP request with the following response (showing the core part, shortened for brevity):

{
  "role": "assistant",
  "content": "",
  "function_call": {
    "name": "Get_information_from_internet",
    "arguments": "{\n \"q\": \"{{18.`URL or query`}}\"\n}"
  }
}

This is basically a request from the OpenAI GPT model to execute a function call. GPT is aware of the functions, what they can do, and can use their data, but it is not able to make the function request on its own. That is why whenever it decides that a function should be called to provide more data, it will respond with:

  • an empty content property
  • a function_call property — which contains the name of the function and the arguments to be passed to that function

It is up to us to execute the function requested by GPT and provide the function's response value back to GPT.

Making a direct HTTP request to the GPT API
As part of the set of completion messages, we add the expected response of the GPT assistant, followed by the result of the function
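If you are building the HTTP module yourself, the call goes to the standard OpenAI chat completions endpoint; a minimal sketch of the settings (replace the placeholder with your own API key):

POST https://api.openai.com/v1/chat/completions
Content-Type: application/json
Authorization: Bearer YOUR_OPENAI_API_KEY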

You can investigate the complete request body below.

{
  "model": "gpt-4-0613",
  "messages": [
    {
      "role": "system",
      "content": "{{11.system}}"
    },
    {
      "role": "user",
      "content": {{14.json}}
    },
    {
      "role": "assistant",
      "content": "",
      "function_call": {
        "name": "Get_information_from_internet",
        "arguments": "{\n \"q\": \"{{18.`URL or query`}}\"\n}"
      }
    },
    {
      "role": "function",
      "name": "Get_information_from_internet",
      "content": {{10.json}}
    }
  ],
  "functions": [
    {{11.`search function`}}
  ],
  "temperature": 1
}

You can see that we are making the equivalent of three steps of a chat conversation in a single call (the user request, the assistant making the function request, and the function result returned to the assistant).

The GPT API (as a proper REST API) is stateless; it does not know or care whether you have made requests to it previously. The above request is the same request you would have to execute anyway if you had first made a separate call containing only the user prompt.
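For reference, a successful call comes back as an ordinary chat completion; a trimmed illustration of its shape (all values are placeholders) — the text we will save to Airtable sits in choices[0].message.content:

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "gpt-4-0613",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Here is a summary of the requested page: ..."
      },
      "finish_reason": "stop"
    }
  ]
}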

The above shortcut basically removes one HTTP request in Make. It works well for this specific use case — getting information from the internet. However, if you are interested in making requests to multiple functions, or would like to give GPT the freedom to decide which function to call, you might want to explore the video below about making custom GPT functions with Make and Airtable.

Step 6: Saving the output of GPT API response to Airtable

Finally, it is time to save the response from the OpenAI GPT API to Airtable. In our case we are using a simple Update a record Airtable module. The record ID was passed to the scenario during the initial button click, and this is what we use to identify the record to update.
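Because the recordId arrived as a query-string parameter on the webhook, it is available directly in the webhook module's output. As a sketch, the Record ID field of the Update a record module is mapped to something like this (the module number will differ in your scenario):

{{1.recordId}}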

We are saving both the raw website content — in order to have visibility into what we fed into the request — as well as the final content of the API response.

Saving responses from the OpenAI GPT API to Airtable

The scenario starts with a webhook, so it is of the instant type, meaning that whenever we press the trigger button in Airtable, Make will automatically pick up the signal and start the execution. All you need to do is turn the scenario ON in Make and start asking your questions in Airtable!

Some of the examples where our clients have found it useful are:

  • creating website summaries at scale
  • researching the internet for new facts (not known to GPT from its training)
  • rewriting content for podcasts, social media and newsletters
  • finding and summarizing DIY recipes
  • finding social media handles
  • helping evaluate products and services

In what other ways do you think you could use internet search together with the power of the GPT API? Do let us know in the comments below, as well as what other use cases you might need.

Visit our Gumroad store for ready-made blueprints, and as always, do not forget to check out our YouTube channel for all things related to automating your business!

Business Automated is an independent automation consultancy. If you would like to request custom automation for your business, visit us at https://www.business-automated.com

If you like our tutorials — buy us a coffee☕: https://www.buymeacoffee.com/business

Follow us on Twitter🐦: https://twitter.com/BAutomated

Watch more on YouTube ️📺: https://www.youtube.com/c/BusinessAutomatedTutorials


Greg Vonf @ Business Automated

Greg is the founder of Business Automated, an agency helping small businesses streamline and simplify their processes. For more visit www.business-automated.com