A large language model (LLM) is a computer program that takes in text and outputs predictions (completions) about the next word in the text. For example, if you give the language model the text “The quick brown fox”, it might output the prediction “jumped”.
What is zero-shot prompt engineering?
Zero-shot prompt engineering is the practice of writing a prompt that contains no examples of the task: the model is asked to perform a task it has not been shown how to do within the prompt itself. This is opposed to k-shot prompt engineering, where the prompt includes k worked examples that demonstrate the task before asking the model to perform it.
Zero-shot prompts are different from k-shot prompts because they do not require example data to be effective. In a k-shot prompt, you provide several completed examples, and the LLM performs the task by continuing the pattern.
Zero-shot prompting also gives the model more freedom. With k-shot prompting, your outputs will be limited to the style of the examples you provided in the prompt. This is similar to overfitting, where better outputs are missed because the model sticks too closely to the examples or training data.
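To make the contrast concrete, here is a minimal sketch of the same task phrased both ways. The sentiment task and the exact prompt wording are my own illustration, not taken from any particular system:

```python
# Zero-shot: the task is described in plain language, with no examples.
zero_shot = (
    "Classify the sentiment of the following review as positive or negative.\n"
    "Review: The battery died after two days.\n"
    "Sentiment:"
)

# k-shot (here k = 2): the task is demonstrated by completed examples,
# and the model is expected to continue the pattern for the final,
# unanswered item.
k_shot = (
    "Review: I love this phone. Sentiment: positive\n"
    "Review: The screen cracked immediately. Sentiment: negative\n"
    "Review: The battery died after two days. Sentiment:"
)
```

Both prompts end mid-pattern, inviting the model to complete them; only the k-shot version constrains the answer's format through examples.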
Example zero-shot prompt
Zero-shot prompt engineering is
Output 1: Zero-shot prompt engineering is a field that studies how to generate natural language prompts (e.g., questions or descriptions) for a task such that a model trained on those prompts can be used zero-shot on other tasks.
Output 2: Zero-shot prompt engineering is the process of designing a prompt that does not require any specific prior knowledge in order to be answered.
Example k-shot prompt
Building language models is hard. Designing generative models is hard. Inference is easy. Zero-shot prompt writing is
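A k-shot prompt like the one above is just the completed examples concatenated ahead of the unfinished item, so it can be assembled programmatically. The helper name below is my own, for illustration:

```python
def build_k_shot_prompt(examples: list[str], stem: str) -> str:
    """Join k completed examples, then the unfinished stem the model
    is expected to complete in the same style."""
    return " ".join(examples + [stem])

prompt = build_k_shot_prompt(
    [
        "Building language models is hard.",
        "Designing generative models is hard.",
        "Inference is easy.",
    ],
    "Zero-shot prompt writing is",
)
```

Because the prompt ends with the unfinished stem, the model's most natural continuation is an answer in the same "X is hard/easy" format as the examples.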
When are zero-shot prompts best?
For most people and most everyday tasks, zero-shot prompting is the better starting point than k-shot prompting.
K-shot prompts give the LLM a set of examples as instructions, and it follows the format of those examples to complete a task. This can be useful for highly repetitive tasks that don't require much creativity.
Zero-shot prompts do not require previous examples. Instead, zero-shot prompts can be written much as you would communicate with another human. This can be useful for people who want to integrate LLMs into any workflow, since they don't have to keep providing a k-shot's worth of examples whenever they want the LLM to do something new.
So, while k-shot prompts are best for highly repetitive tasks, zero-shot prompts allow people to integrate LLMs into any workflow.
How do you write zero-shot prompts?
So how do you do zero-shot prompt engineering? First, you develop a task or question you want the language model to answer. For example, let’s say we want the language model to generate ideas for new businesses. So our task might be: “Generate a list of ideas for new businesses.”
Designing a zero-shot prompt requires careful consideration of the structure and wording of the prompt itself. The goal is to make the prompt as clear and concise as possible, while still providing enough information for the model to respond accurately. Additionally, zero-shot prompts should be flexible, allowing them to be easily adapted to different contexts and settings.
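Putting those guidelines into practice, a zero-shot prompt for the business-idea task might state the task, the output format, and any constraints directly. The wording and the `send_to_llm` stand-in below are illustrative assumptions, not a specific API:

```python
# A clear, concise zero-shot prompt: the task first, then the output
# format, then any context the model needs to respond accurately.
prompt = (
    "Generate a list of ideas for new businesses.\n"
    "List 5 ideas, one per line, each with a one-sentence description.\n"
    "Focus on businesses a single founder could start remotely."
)

def send_to_llm(prompt: str) -> str:
    # Stand-in for whichever completion API or client you use;
    # the real call depends on your provider.
    raise NotImplementedError("plug in your LLM client here")
```

Note that no examples appear anywhere in the prompt: the format is specified in plain language instead, which is what keeps it zero-shot and easy to adapt to other tasks.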
How can I learn more about writing zero-shot prompts?
Prompt engineering, or "prompt writing" as I call it, is in its very early stages. Just as with its name, experts disagree about how to do it and whether it's even necessary.
While some people suggest sitting back and relaxing until AI can predict your every desire, I prefer to adventure out and try to use AI by prompt writing right now.
To my surprise, very little has been written about how non-researchers can use written prompts to utilize AI like GPT-3.
The most useful content seems to be examples of other people’s prompts. However, these are hard to come by.
I’ve found one person on the bleeding edge of prompt writing, Riley Goodside, who has posted examples of prompts that stretch the imagination of what is possible.
Besides Mr. Goodside, an OK source for prompt examples has been scanning research papers that mention the topic of prompts. Many people are already familiar with the "Let's think step by step" sub-prompt, which this research paper showed can improve the accuracy of LLMs.
Besides these two sources, I suspect that the best prompt examples are currently hidden behind software-as-a-service (SaaS) applications like Jasper and paywalled content specific to those SaaS apps.