By Oscar Frith-Macdonald, 6 November 2025
If you haven’t yet read Part 1 of the Reconnect article, we suggest you read that first. Part 1 covers the Perform SQL Query by Natural Language script step, the different available variables and how to set them. We also cover how the script step works and interacts with the selected LLM.
In this article, we are diving into the different methods you can use to improve the results of the script step. We will look at how you can set up LLM comments in your database and the kind of information that needs to be in these LLM comments. We will also be covering how you can set up different prompt templates and the sort of information that you can include in them.
Now I am sure there is some confusion around the DDL at this point, so let's try and clear that up. In my opinion, the DDL is misnamed. What DDL actually stands for is Data Definition Language, which makes the “Get DDL” action the equivalent of a “Get English” action. What you are actually getting with the “Get DDL” action is the database schema, expressed in the Data Definition Language, for the tables you’ve allowed the model to access. So really, a “Get Table Schema” action would make more sense.
Even Claris’ own documentation describes it this way:
“Returns the database schema (in Data Definition Language) that this script step generates and sends to the model.”
(Claris Help – Perform SQL Query by Natural Language)
When you run the Perform SQL Query by Natural Language script step, FileMaker builds a DDL structure for each table you’ve selected.
The DDL includes:

- the table name
- each field’s name and data type
- any field comments attached to those fields
For example:
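Given a hypothetical Invoices table, the generated schema might look roughly like this; the table, fields and comments are invented for illustration, and FileMaker’s actual output format may differ in its details:

```sql
-- Illustrative only: the kind of DDL the script step builds for one table.
CREATE TABLE "Invoices" (
    "id"           INT,       -- primary key
    "customer_id"  INT,       -- links to the Customers table
    "invoice_date" DATE,
    "total"        DECIMAL,
    "status"       VARCHAR    -- e.g. 'Draft', 'Sent', 'Paid'
);
```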

If you haven’t set up any LLM comments, the entire schema for every selected table is sent through. This is rarely what you want. In larger systems with wide tables, this can mean hundreds of fields and long comment strings being transmitted — adding unnecessary tokens and irrelevant detail for the model to process.
FileMaker provides a way to control what is actually sent to the model by marking specific comments as LLM comments. These are just ordinary field comments in the Manage Database dialog, but when you prefix them with `[LLM]`, FileMaker recognises them as AI-specific instructions.
Example:
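On a hypothetical Invoices::status field, the field comment might read:

```
[LLM] Current state of the invoice: "Draft", "Sent", "Paid" or "Cancelled".
```

Only the schema and comment text are ever sent; no record data accompanies them.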
Once you add even one LLM comment to a table, FileMaker changes its behaviour: only fields with LLM comments are included in the DDL, and all other fields in that table are excluded.
This has three key benefits:

- Fewer tokens are sent with each request, keeping calls smaller and cheaper.
- Sensitive or irrelevant fields never leave your solution.
- The model gets a focused view of the schema, which makes the generated SQL more accurate.
From Claris’ own best-practice guidance:
“Add the [LLM] tag to limit fields included”
(Schema Best Practices for SQL Generation)
Because the model only sees schema and comments (not live data), a well-written LLM comment needs to include all the context a human developer could infer from the data itself.
That means describing:

- what the field actually represents in the business
- the values it can contain and what each one means (status codes, flags, abbreviations)
- any formats or units the data is stored in
- how the field relates to other fields or tables when that isn’t obvious from the name
Good example:
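Sticking with the hypothetical status field, a comment along these lines covers the meaning, the allowed values and how they should be used:

```
[LLM] Invoice status. Contains exactly one of "Draft", "Sent", "Paid" or
"Cancelled". An invoice is outstanding when status is "Sent". Use this field,
not the payment date, to decide whether an invoice has been paid.
```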
Poor example:
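The same field, with a comment that adds nothing beyond what the field name already says:

```
[LLM] The status of the invoice.
```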
The first one gives the model usable context to generate the right SQL, even when multiple fields or tables could be relevant.
Using LLM comments turns that schema into a controlled vocabulary for the model, ensuring it understands what each field means while keeping your solution efficient and private.
If you’re building anything more than a quick demo, start by adding LLM comments to your most important fields. It’s the easiest way to shape how the model sees your data — and just as importantly, what it doesn’t see.
A few habits make a big difference when setting up LLM comments and managing what goes into the DDL:

- Only add LLM comments to the fields the model genuinely needs; everything else stays out of the DDL.
- Keep each comment short and factual, but include the values and meanings the model can’t infer from the field name alone.
- Never tag fields holding sensitive data that shouldn’t be described to an external model.
- Select only the tables that are relevant to the questions users will actually ask.
Being deliberate with your LLM comments keeps the schema lightweight, protects private data, and gives the model just enough context to produce accurate SQL without guesswork.
Now that we’ve covered the DDL and how the model understands your schema, the next question is: how does it know what to do with it?
That’s where Prompt Templates come in.
Prompt Templates act as predefined instruction sets that tell the LLM how to behave when interpreting a query. They sit between the user’s natural language prompt and the database schema, giving you a consistent and controllable framework for how SQL is generated and how responses are phrased.
You configure them using the Configure Prompt Template script step, and then reference the template name inside Perform SQL Query by Natural Language. Each template can hold two optional components — an SQL Prompt and a Natural Language Prompt — which are sent along with every request.
Each template has two prompt options you can enable:

- the SQL Prompt
- the Natural Language Prompt
They do very different things.
SQL Prompt
The SQL Prompt gives the LLM a set of rules for how to structure its SQL. This is essentially the system prompt for the query-generation part. It defines things like:

- which SQL syntax and functions the model may, and may not, use
- how particular fields should be interpreted or filtered when building a query
- any general constraints you want applied to every query it writes
Here’s an example based on one of my templates:
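The exact wording will vary from solution to solution, so treat the rules below as a sketch rather than a complete template:

```
Generate a single SQL SELECT statement for a FileMaker database.
- Use only functions that FileMaker SQL supports; do not use DATE_TRUNC.
- Enclose table and field names in double quotes.
- When filtering by date, compare against explicit date ranges.
- If the question is ambiguous, prefer the simplest query that answers it.
```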

During testing, I found the model kept returning queries that used the DATE_TRUNC function, which isn’t recognised in FileMaker. So I added a rule to tell it not to use that function. These prompts will probably evolve as you discover little quirks or unsupported syntax that cause errors.
You can also use the SQL prompt as a bit of a safety net. For example, you might add information about a field like “record_status” so that if a full LLM comment is missed, the model still knows how that field should be used.
Natural Language Prompt
The Natural Language Prompt is much simpler. It just tells the model how to return the final answer — the format and style of the response, not the SQL itself.
For example:
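The exact instructions are up to you; something along these lines is typical:

```
Answer in one or two plain-English sentences.
Format dates as DD/MM/YYYY and round currency values to two decimal places.
If the query returns no records, say so clearly rather than guessing.
```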
You can think of this as the “presentation layer” for the model’s response. In most cases, you won’t need to edit it much — once you’ve got a format you’re happy with, you’ll likely leave it as-is.
Using Prompt Templates in Practice
Prompt templates are stored in memory once configured, so I normally call Configure Prompt Template early in the process — usually during file open or before the first query is made. After that, I just pass the template name into any Perform SQL Query by Natural Language script steps that need it.
Because the templates are separate from the scripts, you can update and refine them without touching any user-facing code. This is especially handy when you’re trying to fix small issues — you can adjust your SQL rules, add or remove restrictions, or fine-tune the tone of the model’s answers without needing to edit multiple scripts.
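As a rough outline, the flow looks something like this. The two script step names are FileMaker’s own; the parameter labels, the $sqlRules and $answerStyle variables, and the template name are placeholders rather than the exact options you’ll see in the dialogs:

```
# Startup script, run when the file opens:
# register the template once; it stays in memory for the session.
Configure Prompt Template [ Name: "Sales_Assistant" ; SQL Prompt: $sqlRules ; Natural Language Prompt: $answerStyle ]

# Any later script that answers a user's question
# just references the template by name alongside its other options.
Perform SQL Query by Natural Language [ Prompt Template: "Sales_Assistant" ; ... ]
```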
Here are a few things that have worked well in practice:

- Configure your templates once, early in the session, and reference them by name everywhere else.
- Keep your SQL rules in the template rather than scattered through scripts, so they can be refined in one place.
- Add a new rule whenever you hit unsupported syntax (like DATE_TRUNC) rather than trying to predict every quirk up front.
- Treat LLM comments as the primary source of field context, and the SQL Prompt as a safety net for anything critical.
- Once the Natural Language Prompt produces a format you’re happy with, leave it alone.
The new Perform SQL Query by Natural Language script step opens up a completely different way to interact with FileMaker data — but it’s not magic out of the box. To get meaningful, accurate results, you need to teach the model how your system works.
The DDL defines what the model can see. LLM comments let you refine that definition, focusing only on the fields that matter and stripping away the noise. Prompt templates then shape how the model interprets that schema and writes SQL, giving you consistent, predictable behaviour across different contexts.
When you combine clear [LLM] comments with well-tuned prompt templates, you turn FileMaker’s “natural language” query from a novelty into a genuinely useful interface for exploring data — one that feels simple for the user but is carefully engineered behind the scenes.