Prompt Engineering – Part 4: Template pattern

When working with an LLM, we sometimes need the model to respond in a specific structure, template, or format so that the result is easier to read or to pass along to another system — for example, JSON output consumed by an application. In this article, I will show how to guide an LLM to respond in exactly the format we want.

Back to the original behavior of an LLM: it continues writing whatever we provide it. For example, if we provide "one two three," the LLM will respond with "four five six…". GPT and many other models add a special conversational scaffold to turn a plain completion model into a Q&A model:

System: You are a helpful assistant...
User: <user's questions>
Assistant:
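
If you call the model through an API rather than ChatGPT, the same scaffold is expressed as role-tagged messages. Below is a minimal sketch using the OpenAI Python SDK; the model name is an assumption — swap in whichever chat model you use.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat model works here
    messages=[
        # The "system" message is the hidden guide: "You are a helpful assistant..."
        {"role": "system", "content": "You are a helpful assistant."},
        # The "user" message carries the actual question
        {"role": "user", "content": "Generate a list of 5 users' information"},
    ],
)
print(response.choices[0].message.content)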

If we don’t provide any further guidance, the LLM will answer in whatever way it thinks best matches your question.

For example, I asked ChatGPT to generate a list of 5 users' information. Here is what I got:

User: Generate a list of 5 users information
Assistant:

Certainly! Here is a list of 5 users with their information:

User 1:

Name: John Smith
Email: john.smith@example.com
Phone Number: +1 123-456-7890
Address: 123 Main St, Springfield, IL 62701, USA
Occupation: Software Engineer

User 2:
.....

Let’s add guidance that specifies an output template:

User: Generate a list of 5 users information, use this template for output. CAPITALIZED WORDS are my placeholder. Here is the template:

Name: NAME
Email: EMAIL
Phone: PHONE
Address: ADDRESS

Here is the result

Sure, here is the information for 5 users using your provided template:

User 1:

Name: John Smith
Email: john.smith@example.com
Phone: +1 123-456-7890
Address: 123 Main St, Springfield, IL 62701, USA

....

Or you can guide it to return in JSON format. Give it a try!

To use this pattern, your prompt should make the following fundamental contextual statements (see the sketch after this list for how to assemble them in code):

  • I am going to provide a template for your output
  • X is my placeholder for content
  • Try to fit the output into one or more of the placeholders that I list
  • Please preserve the formatting and overall template that I provide
  • This is the template: PATTERN with PLACEHOLDERS
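
Putting those statements together in code is straightforward. Here is a minimal sketch in Python; it only builds the prompt string, and assumes you send it as the "user" message in the same chat call shown earlier.

template = """Name: NAME
Email: EMAIL
Phone: PHONE
Address: ADDRESS"""

prompt = (
    "Generate a list of 5 users' information. "
    "I am going to provide a template for your output. "
    "CAPITALIZED WORDS are my placeholders for content. "
    "Try to fit the output into the placeholders I list, "
    "and preserve the formatting and overall template I provide.\n"
    "This is the template:\n"
    f"{template}"
)
# Send `prompt` as the "user" message content in the chat call shown earlier.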

In conclusion, the template pattern is extremely common when you build internal agents, where outputs must be in a specific format or follow a set standard.

A quick tip: OpenAI has a JSON mode that forces the model to respond only with valid JSON. It’s helpful when we want to use the output as input to another application.
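
Here is a minimal sketch of JSON mode with the OpenAI Python SDK; the model name is again an assumption. Note that JSON mode requires the word "JSON" to appear somewhere in your messages.

import json
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; must be a JSON-mode-capable model
    response_format={"type": "json_object"},  # forces the reply to be valid JSON
    messages=[
        {"role": "system", "content": "You are a helpful assistant. Respond in JSON."},
        {"role": "user", "content": "Generate a list of 5 users with name, email, phone, and address."},
    ],
)
users = json.loads(response.choices[0].message.content)  # safe to parse: the reply is valid JSON
print(users)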

