An Iterative Cycle For GPT Prompt Engineering

Daniel Rosehill - Jul 24 - Dev Community


The "Iterative Cycle of GPT Prompt Engineering" describes a workflow I have developed for achieving iteratively better GPT outputs.

It allows the user to move from naive prompting to more mature prompting and obtain progressively better outputs from Generative Pre-trained Transformer (GPT) models such as ChatGPT.

Here's how it works:

Develop A Prompt Engineering Custom GPT

Warning: things are going to get a bit meta here (no pun intended).

The first step in this process is to use the GPT you're working with to:

  • Draft a configuration text for a custom GPT dedicated to prompt engineering on that GPT
  • Create the custom GPT from that configuration text

Step 1

Use the GPT to create a configuration for the custom GPT that we're going to use for prompt engineering.

In ChatGPT, I might prompt as follows:

Please provide a configuration script for a custom GPT built on the ChatGPT platform. 

The purpose of this custom GPT should be to guide the user in prompt engineering for ChatGPT. 

This GPT should ask the user to copy and paste their draft prompt into the chat. 

Next, the GPT should analyse the prompt. 

The GPT should then provide specific suggestions for improving the prompt. 

In this context, improving means yielding a more detailed and correct output. 

Finally, the GPT should ask the user if they would like to receive an updated draft of the prompt.

If the user responds affirmatively, the GPT should output an updated version of the prompt.


The JSON configuration it returned for ChatGPT (or another GPT):

{
    "name": "PromptOptimizerGPT",
    "description": "A custom GPT designed to help users improve their prompts for the most useful output. The GPT will guide users through uploading a prompt document or copying and pasting a prompt, analyzing it, offering suggestions for improvement, and providing an option to download the optimized prompt.",
    "prompts": [
      {
        "prompt": "Welcome to PromptOptimizerGPT! I am here to help you optimize your prompts for the best possible results. Please upload a prompt document or copy and paste your prompt below:",
        "responses": [
          {
            "type": "file_upload",
            "description": "Upload your prompt document here."
          },
          {
            "type": "text_input",
            "description": "Or copy and paste your prompt here."{
                "name": "PromptOptimizerGPT",
                "description": "A custom GPT designed to help users improve their prompts for the most useful output. The GPT will guide users through uploading a prompt document or copying and pasting a prompt, analyzing it, offering suggestions for improvement, and providing an option to download the optimized prompt.",
                "prompts": [
                  {
                    "prompt": "Welcome to PromptOptimizerGPT! I am here to help you optimize your prompts for the best possible results. Please upload a prompt document or copy and paste your prompt below:",
                    "responses": [
                      {
                        "type": "file_upload",
                        "description": "Upload your prompt document here."
                      },
                      {
                        "type": "text_input",
                        "description": "Or copy and paste your prompt here."
                      }
                    ]
                  },
                  {
                    "prompt": "Thank you for providing your prompt. I will now analyze it and offer suggestions for improvement. Please wait a moment...",
                    "responses": [
                      {
                        "type": "processing",
                        "description": "Analyzing the prompt..."
                      }
                    ]
                  },
                  {
                    "prompt": "Here are my suggestions for improving your prompt:\n\n{{suggestions}}\n\nWould you like to download the optimized prompt?",
                    "responses": [
                      {
                        "type": "button",
                        "description": "Yes, I would like to download the optimized prompt.",
                        "value": "yes"
                      },
                      {
                        "type": "button",
                        "description": "No, I do not want to download the optimized prompt.",
                        "value": "no"
                      }
                    ]
                  },
                  {
                    "prompt": "Great! Here is the updated prompt:\n\n{{optimized_prompt}}\n\nPlease copy and paste this text into ChatGPT to get the most useful output.",
                    "responses": [
                      {
                        "type": "text_output",
                        "description": "Copy and paste the optimized prompt text."
                      }
                    ],
                    "conditions": [
                      {
                        "if": "user_response == 'yes'",
                        "then": [
                          {
                            "action": "output_text",
                            "text": "{{optimized_prompt}}"
                          }
                        ]
                      },
                      {
                        "if": "user_response == 'no'",
                        "then": [
                          {
                            "action": "end_conversation",
                            "text": "No problem! If you need any further assistance with your prompts, feel free to reach out."
                          }
                        ]
                      }
                    ]
                  }
                ]
              }

          }
        ]
      },
      {
        "prompt": "Thank you for providing your prompt. I will now analyze it and offer suggestions for improvement. Please wait a moment...",
        "responses": [
          {
            "type": "processing",
            "description": "Analyzing the prompt..."
          }
        ]
      },
      {
        "prompt": "Here are my suggestions for improving your prompt:\n\n{{suggestions}}\n\nWould you like to download the optimized prompt?",
        "responses": [
          {
            "type": "button",
            "description": "Yes, I would like to download the optimized prompt.",
            "value": "yes"
          },
          {
            "type": "button",
            "description": "No, I do not want to download the optimized prompt.",
            "value": "no"
          }
        ]
      },
      {
        "prompt": "Great! Here is the updated prompt:\n\n{{optimized_prompt}}\n\nPlease copy and paste this text into ChatGPT to get the most useful output.",
        "responses": [
          {
            "type": "text_output",
            "description": "Copy and paste the optimized prompt text."
          }
        ],
        "conditions": [
          {
            "if": "user_response == 'yes'",
            "then": [
              {
                "action": "output_text",
                "text": "{{optimized_prompt}}"
              }
            ]
          },
          {
            "if": "user_response == 'no'",
            "then": [
              {
                "action": "end_conversation",
                "text": "No problem! If you need any further assistance with your prompts, feel free to reach out."
              }
            ]
          }
        ]
      }
    ]
  }


Now we've gotten GPT to draft a configuration script for a custom GPT built specifically to help us with prompt engineering on that platform.

Our next step is to create the custom GPT from the configuration text supplied.

Assuming that the process went as expected, you've now produced your prompt engineering GPT. Yours should look something like this:

[Screenshot: the finished prompt engineering custom GPT]

Developing A Workflow And Folder Structure To Support Iterative Prompt Engineering

I call this workflow iterative prompt engineering because its value lies in enabling the user to iterate through progressively better prompts for the GPT.

Users who engage in "prompt engineering" tend to take an approach we might simply call trial and error. The process is retrospective:

They try a prompt, identify deficiencies, and run it again until they have something like a polished, production-ready version.

The problem with this method is that it's wasteful if you're accessing the GPT programmatically, such as via an API. By frontloading prompt engineering into its own workflow, we can eliminate costly, unnecessary generations.
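
To make the cost point concrete, here's a minimal sketch of what trial-and-error looks like over the API, assuming the openai Python client; the model name and draft prompts are purely illustrative. Every discarded draft is still a billed generation:

# Each trial-and-error iteration is a separate, billed API call.
# Assumes the openai Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

draft_prompts = [
    "v1: Summarise this week's GPT news.",
    "v2: Summarise this week's GPT news as a titled brief with sources and links.",
]

for draft in draft_prompts:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": draft}],
    )
    # Every generation here costs tokens, including the ones you throw away.
    print(response.choices[0].message.content[:200])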

If you're working on prompt engineering in a team, you can build out a very simple folder structure housing the prompts you're working on as markdown or .txt files:

prompt1

  • v1.md
  • v2.md

Etc.
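
Scaled across a couple of prompts, that structure might look something like this (the names are just illustrative):

prompts/
├── prompt1/
│   ├── v1.md
│   └── v2.md
└── prompt2/
    ├── v1.md
    └── v2.md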

Now that you have your custom GPT built, you simply run each version through it.

A prompt might be:

Please review the following prompt for ChatGPT and process it, optimising it according to your instructions: [prompt]

Although you can also be less elaborate and just do something like this:

Fix this prompt [prompttext]

As we've created a custom GPT with a saved configuration, we don't need to repeat the detailed instructions on every run.
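
If you'd rather run this step from a script than from the ChatGPT UI, the same idea carries over to the API: store the prompt-engineering instructions once as a system message (playing the role of the custom GPT's saved configuration) and feed each draft version through it. A minimal sketch, again assuming the openai Python client; the folder layout, instructions, and model name are illustrative:

# Replay each saved prompt version through a fixed set of prompt-engineering
# instructions, mirroring the custom GPT's saved configuration.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# The instructions you would otherwise save in the custom GPT's configuration.
OPTIMIZER_INSTRUCTIONS = (
    "You review draft prompts for ChatGPT and return an improved version "
    "that yields a more detailed and correct output."
)

for version_file in sorted(Path("prompts/prompt1").glob("v*.md")):
    draft = version_file.read_text()
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": OPTIMIZER_INSTRUCTIONS},
            {"role": "user", "content": f"Fix this prompt: {draft}"},
        ],
    )
    improved = response.choices[0].message.content
    # Save the suggestion alongside the original so the next version can build on it.
    version_file.with_suffix(".optimized.md").write_text(improved)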

An Example In Practice

Prompt V1:

Please create a briefing document for Daniel Rosehill who is an online commentator keen to share knowledge about the evolution of GPTs, especially ChatGPT. Daniel is excited about the potential of GPTs to bring about positive change in the world and sees huge potential in custom GPTs as assistants to streamline workflows.

The output should be entitled Daniel’s Weekly GPT News Brief. Please only include items from the past 7 days in the output. 

Please search for items discussing the following subjects:

GPTs
The growth of GPTs
Emerging business use-cases for GPTs
Emerging personal use-cases for GPTs
GPT regulation
Rates of GPT adoption
Developments in GPT technology 
Large language learning models - development and technology

An “item” may be any of the following:

An article
A blog
A YouTube video
A podcast

For every item that you include in the output please include the following details:

Publication date
Author
Where the article was published. Please also include a summary of the publication.
A summary of the item.
A link to the item

Please include at least 10 items in every brief.

After generating the brief, please ask the user whether he would like to download the brief. If the user responds affirmatively please generate a downloadable link to the brief.

Prompt V2:

Please create a briefing document for Daniel Rosehill, an online commentator eager to share knowledge about the evolution of GPTs, especially ChatGPT. Daniel is enthusiastic about the potential of GPTs to bring about positive change in the world and sees immense potential in custom GPTs as assistants to streamline workflows.

The output should be titled "Daniel’s Weekly GPT News Brief" and should only include items from the past 7 days.

Search for items discussing the following subjects:

GPTs
The growth of GPTs
Emerging business use-cases for GPTs
Emerging personal use-cases for GPTs
GPT regulation
Rates of GPT adoption
Developments in GPT technology
Large language learning models - development and technology

An “item” may be any of the following:

An article
A blog post
A YouTube video
A podcast

For every item included in the brief, please provide the following details:

Publication date
Author
Source (including a brief description of the publication or platform)
Summary of the item
Link to the item

Please include at least 10 items in each brief.

After generating the brief, ask the user if they would like to download the document. If the user responds affirmatively, generate a downloadable link to the brief.

And so on and so forth.

Recommended Reading

Prompt Engineering for Generative AI: Future-Proof Inputs for Reliable AI Outputs

Happy prompting!
