By Oscar Frith-Macdonald, 5 November 2025
If you attended Reconnect 2025, you will have seen our presentation on integrating natural language queries with FileMaker. If you missed it, consider saving the date for next year. These companion articles build on that session, offering more detail and letting you explore the concepts at your own pace.
But first, the original presentation:
The session explored how natural language queries can let users interact with their solution and data, using the “Perform SQL Query by Natural Language” script step to create a chatbot. But it's not as simple as plugging in the script step and giving it a chatbot interface. For example, here the answer is technically correct: there are 5504 records in our contacts table. However, our system uses soft deletes, so 1500 of those contacts are not visible in the system and should not have been counted.
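To make the point concrete, here is the difference in plain ExecuteSQL terms. The deleted_flag field is a hypothetical stand-in for however your soft deletes are modelled; the point is that a syntactically correct query is not automatically the query your business rules need.

```
# A first attempt usually looks like this: valid SQL, but it counts every row
Set Variable [ $allContacts ; Value: ExecuteSQL ( "SELECT COUNT(*) FROM Contacts" ; "" ; "" ) ]

# What the solution actually needs: exclude soft-deleted records
# (deleted_flag is a hypothetical stand-in for our soft-delete marker)
Set Variable [ $activeContacts ; Value: ExecuteSQL ( "SELECT COUNT(*) FROM Contacts WHERE deleted_flag = 0" ; "" ; "" ) ]
```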

We walked through:
In this article, we cover the first two points: the script step itself, then how it works and interacts with the selected LLM. In the second article, we cover the different ways you can improve the results.
This script step lets you query your database using plain language instead of writing SQL by hand. It uses an AI model to interpret your prompt, understand the database structure, and generate the right SQL queries. Once those queries are run, the model can return the results in natural language.
The process typically works like this:

This is a large script step with a lot going on: there are at least 11 different options you can set. Below we break it down option by option.

Before any of the new AI script steps can be used, you first need to call the “Configure AI Account” script step. It really only needs to be called once, typically at file open, and the account name used in that call can then be referenced by all the other AI script steps.
This script step lets you set an account name (plain text), select your model provider (a drop-down list of OpenAI, Anthropic, Cohere, or Custom), and enter an API key (plain text).
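As a minimal sketch, a startup script might configure the account like this. The account name, the choice of provider, and the Settings::OpenAI_APIKey field are our own choices for illustration; store the key however suits your security model.

```
# Run once at file open, for example from an OnFirstWindowOpen startup script
# The key is read from a settings field here purely for illustration
Configure AI Account [ Account Name: "ai_main" ; Model Provider: OpenAI ; API key: Settings::OpenAI_APIKey ]
```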
The model is the Large Language Model (LLM) that you want this script step to use. It is entered as plain text and must be a model offered by the provider specified in the “Configure AI Account” script step.

This can be a variable or a field, and is simply the question you want answered. From our example, this would be: “How many contacts in the system?”

There are a couple of different ways to set the options. First, there is the “From List” option, which gives you a dropdown list of Actions, a few different ways to select the tables the script step will have access to, and the “Stream” checkbox (which we will cover later in this article). The second way is via a JSON structure, which allows a more dynamic way to set up the script step: you can generate the JSON in the script and pass it to the script step as a variable, which means you don't need to edit the script step when you want to do things like change the tables you have access to.
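As a sketch of that dynamic approach, you could assemble the options JSON with JSONSetElement and hand the variable to the script step. The key names below are placeholders of our own, not the documented structure; check Claris's documentation for the real keys.

```
# Illustrative only; these key names are placeholders, not the documented JSON structure
Set Variable [ $options ; Value: JSONSetElement ( "{}" ;
    [ "action" ; "query" ; JSONString ] ;
    [ "tableNames[0]" ; "Contacts" ; JSONString ] ;
    [ "tableNames[1]" ; "Invoices" ; JSONString ] ;
    [ "stream" ; True ; JSONBoolean ]
) ]
```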
There are five different actions available. Most of them let you see what the LLM sends back when the script step is run, while one exposes some of the information that you are sending to the LLM.


To select the tables that you want to be able to query, there are three different options: From list, By name, and By DDL.

From list: Select specific tables from a list.
This lets you select any table occurrence in your current file. That is important to note if you use the data separation model (a user interface file and a data file): you will need to make sure the interface file has a table occurrence for any table from your data file that you want the step to access.

By name: Provide a list of table names as a text expression.
Separate multiple table names with carriage returns; this is a simple return-delimited list of table occurrence names. Again, these table occurrences have to be in the file you are currently in.
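For example, built with List() (the table occurrence names here are just our own):

```
# A return-delimited list of table occurrence names in the current file
Set Variable [ $tableNames ; Value: List ( "Contacts" ; "Invoices" ; "InvoiceLines" ) ]
```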
By DDL: Provide a custom DDL text expression.
This allows you to control exactly which schema information is sent to the model.
We will discuss DDL further on in this article, but in short, the other options for selecting tables automatically send the DDL for the tables you select to the LLM so that it can understand the data structure. If you choose the “By DDL” option, you construct the DDL for the tables yourself, and that is what is sent through.
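As a rough illustration of what hand-built DDL text might look like (the tables, fields, and types here are entirely our own example, not output from FileMaker):

```
# Hypothetical DDL describing only the tables and fields we want the model to see
Set Variable [ $ddl ; Value:
    "CREATE TABLE Contacts ( id INT, nameFirst VARCHAR, nameLast VARCHAR, deleted_flag INT )" & ¶ &
    "CREATE TABLE Invoices ( id INT, id_contact INT, invoiceDate DATE, total DECIMAL )" ]
```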

The stream option determines how the response is delivered when the Action is a Query.

We will look at prompt templates in more detail later in this article; for now, you just need to know that the “Configure Prompt Template” script step lets you expose FileMaker's default prompt templates so that you can name and edit them, or create prompt templates of your own. Once the “Configure Prompt Template” script step has been called, you simply enter the template name you chose into this option of the “Perform SQL Query by Natural Language” script step.
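As a very rough sketch of that flow (the option labels and template name below are paraphrased placeholders, and the template text is our own rather than FileMaker's default):

```
# Sketch only; option labels are paraphrased, not copied from the dialog
Configure Prompt Template [ Name: "contacts_assistant" ; Template: $templateText ]
# "contacts_assistant" is then the name you enter in the Prompt Template option of
# the "Perform SQL Query by Natural Language" script step
```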

This is a JSON object that consists of key-value pairs for additional parameters that are supported by the model provider. For example: `{"temperature": 0.7, "seed": 42}`. You should refer to the model provider's documentation for key names of supported parameters and their valid ranges.
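If you would rather build this JSON in the script than type it by hand, JSONSetElement keeps it tidy; the parameter names still have to match what your provider supports.

```
# Builds {"temperature":0.7,"seed":42}; parameter names and ranges come from the provider's docs
Set Variable [ $modelParameters ; Value: JSONSetElement ( "{}" ;
    [ "temperature" ; 0.7 ; JSONNumber ] ;
    [ "seed" ; 42 ; JSONNumber ]
) ]
```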

This is simply the field or variable that you would like the response saved into.

This option allows you to specify a Web Viewer object and a function to call in that Web Viewer. This is how we displayed the conversation in a messaging window.
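Putting the pieces together, a single call might look roughly like the sketch below. The option labels are paraphrased rather than copied from the step's dialog, the account name, model, variables, and template name are carried over from the earlier sketches, and the Web Viewer display settings are left out for brevity.

```
# Rough sketch only; option labels are paraphrased, not the exact dialog text
Set Variable [ $prompt ; Value: "How many contacts in the system?" ]
Perform SQL Query by Natural Language [
    Account Name: "ai_main" ;
    Model: "gpt-4o" ;
    Prompt: $prompt ;
    Options (JSON): $options ;
    Prompt Template: "contacts_assistant" ;
    Model Parameters: $modelParameters ;
    Target: $$response
]
# $$response now holds the natural language answer (check Get ( LastError ) before using it)
```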
The “Perform SQL Query by Natural Language” script step works a little differently from most script steps we are used to. Usually, we call a script step and expect it to do just one thing, for example setting a variable or going to a layout. The “Perform SQL Query by Natural Language” script step, by contrast, has a bit of back and forth with the LLM before it arrives at a result.

You can think of the LLM as an intern or apprentice: it will do its best to produce the correct SQL statement the first time, but it may not always get it right. This is why there is a feedback loop here. The first statement may simply be wrong, or it may contain valid SQL that FileMaker cannot process, because FileMaker's ExecuteSQL function only supports a limited subset of SQL syntax. So the result is fed back to the LLM to allow another attempt at answering the prompt.
In this first part, we’ve taken a closer look at how the “Perform SQL Query by Natural Language” script step actually works. We have broken down each of the available options, explored how FileMaker communicates with the LLM, and walked through the sequence of steps that happen behind the scenes when a query is run. What looks simple on the surface is in reality a layered process involving schema definitions, prompt handling, and multiple rounds of interaction between FileMaker and the model.
In Part 2, coming soon, we’ll move on to the practical side — looking at how to improve the results you get from this script step using LLM comments and prompt templates to get more consistent and reliable answers.