OpenAI’s GPT-3 and GPT-4 are powerful tools that can generate human-like text, answer questions, and provide insights. However, the quality of these outputs depends heavily on how you frame the input, or prompt. Efficient prompt engineering ensures you get the right answers by designing inputs that guide the AI towards relevant, clear, and useful responses. Let’s find out how to craft effective prompts with examples.
Prompt engineering is the art of creating prompts that guide AI models towards specific and desired outcomes. The way a question is framed can significantly affect the AI’s response. By carefully structuring and refining your prompt, you can minimise ambiguity and maximise relevance. Here’s an example.
- Simple prompt: “Explain climate change.”
- Engineered prompt: “Explain how human activities contribute to climate change, specifically focusing on carbon emissions and deforestation.”
In the engineered prompt, we’ve added more details and specific topics for the AI to cover, resulting in a more focused answer.
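In code, a prompt is simply the text you send to the model. Here is a minimal sketch of passing the engineered prompt above to OpenAI's chat completions API; it assumes the official openai Python package (v1+) is installed, that an API key is set in the OPENAI_API_KEY environment variable, and that the model name shown is available to you.

```python
from openai import OpenAI

# The client reads the OPENAI_API_KEY environment variable by default.
client = OpenAI()

engineered_prompt = (
    "Explain how human activities contribute to climate change, "
    "specifically focusing on carbon emissions and deforestation."
)

# Send the engineered prompt as a single user message.
response = client.chat.completions.create(
    model="gpt-4",  # assumed model name; any chat-capable model works
    messages=[{"role": "user", "content": engineered_prompt}],
)

print(response.choices[0].message.content)
```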
How normal prompts work and their limitations
When people first interact with AI models, they tend to use simple, broad prompts. While these can produce useful results, they often lead to overly general or vague answers. Let’s take an example.
- Normal prompt: “Tell me about renewable energy.”
The response to this prompt may include a lot of unnecessary information, or it may not focus on the aspect of renewable energy you’re interested in.
The key challenge here is that broad prompts lead to responses that may be too generic or unfocused. By adding specificity and structure, you can avoid this.
- Engineered prompt: “Explain the environmental benefits of renewable energy, with examples of how wind and solar power reduce carbon emissions.”
Prepare your input
To create an effective prompt, the first step is to prepare your input. This involves clearly defining what you want the AI to focus on and adding any necessary context to narrow down the response. Here’s an example.
- Before prompt engineering: “What is artificial intelligence?”
- After engineering: “Provide a brief explanation of artificial intelligence, focusing on its use in healthcare for diagnostic purposes.”
Notice how the engineered prompt includes the specific application of AI (healthcare) and the context (diagnostic purposes), which will guide the model towards a more targeted answer.
The important tips are:
- Be specific about the subject.
- Provide relevant background information.
- Define the focus area or scope of the response.
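One way to apply these tips consistently is to assemble the prompt from its parts. The helper below is purely illustrative — its name and parameters are our own, not part of any SDK — and simply combines subject, background, and scope into one focused prompt string.

```python
def build_prompt(subject: str, background: str, scope: str) -> str:
    """Combine subject, background and scope into a single focused prompt.

    Illustrative helper only; not part of the OpenAI SDK.
    """
    return (
        f"Provide a brief explanation of {subject}. "
        f"Background: {background}. "
        f"Focus specifically on {scope}."
    )


# Reproduces the spirit of the engineered healthcare prompt above.
prompt = build_prompt(
    subject="artificial intelligence",
    background="its growing use in hospitals",
    scope="diagnostic applications in healthcare",
)
print(prompt)
```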
Plan the output
Planning the output involves thinking about what you expect in return from the AI, including the length, structure, or format of the response. An example is given below.
- Without planning: “Summarise the history of computers.”
- With planning: “Summarise the history of computers in three bullet points, focusing on key technological milestones from the 20th century.”
By planning the output, you can guide the AI towards generating a response that matches your needs—whether you want a list, a comparison, or a summary.
The key considerations here are:
- Length: Specify if you want a brief or detailed answer.
- Format: Ask for the output in bullet points, paragraphs, or a numbered list.
- Tone/style: Indicate if the response should be technical, formal, or conversational.
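These constraints can go straight into the request itself. The sketch below, again assuming the openai Python package and an API key in the environment, uses a system message to pin down format, length, and tone before asking the question from the example above.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name
    messages=[
        {
            # The system message fixes format, length and tone up front.
            "role": "system",
            "content": "Answer in exactly three bullet points, "
                       "in a formal tone, and in under 120 words.",
        },
        {
            "role": "user",
            "content": "Summarise the history of computers, focusing on "
                       "key technological milestones from the 20th century.",
        },
    ],
)
print(response.choices[0].message.content)
```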
Play around with the output by testing different inputs
Experimenting with various prompts can help you understand how different phrasing or constraints affect the AI’s response. Slight changes in how you ask a question can lead to drastically different answers. Let’s look at two examples.
Example 1
- First prompt: “What are the benefits of exercise?”
- Response: A general answer listing a variety of benefits such as weight loss, better mental health, and improved heart health.
- Tweaked prompt: “What are the mental health benefits of regular exercise, specifically for people with anxiety?”
- Response: A more focused answer that centres on the connection between exercise and anxiety management.
Example 2
- First prompt: “Explain machine learning.”
- Tweaked prompt: “Explain machine learning in simple terms for a high school student.”
By changing the input, you can guide the AI towards different levels of complexity or different kinds of answers.
Here are a few useful tips for experimenting.
- Rephrase questions to see how different wording affects the output.
- Try adding constraints, such as asking for specific examples or a particular tone.
- Use multiple versions of a question to test the range of responses.
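A quick way to run such experiments is to send several phrasings of the same question in a loop and compare the replies side by side. The sketch below makes the same assumptions as the earlier examples (openai package, API key in the environment, assumed model name); the list of variants is only an illustration.

```python
from openai import OpenAI

client = OpenAI()

# Different phrasings of the same underlying question.
variants = [
    "What are the benefits of exercise?",
    "What are the mental health benefits of regular exercise, "
    "specifically for people with anxiety?",
    "List three benefits of exercise for office workers, one sentence each.",
]

for prompt in variants:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"PROMPT:   {prompt}")
    print(f"RESPONSE: {response.choices[0].message.content}\n")
```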
Efficient prompt engineering
Efficient prompt engineering saves time and helps you get the right answer without unnecessary back-and-forth. To do this, you can apply best practices that enhance clarity, specificity, and context. Here’s an example.
- Inefficient prompt: “How does the internet work?”
- Efficient prompt: “Explain how the internet works, focusing on the role of routers and IP addresses, and describe the process in 200 words.”
The efficient prompt is specific, sets clear expectations about the topic focus, and even defines the desired length of the response.
Here are the best practices.
- Be explicit: Clearly define the scope, length, and format of the response.
- Iterate: If the first response isn’t what you’re looking for, refine and test new prompts.
- Test variations: Explore different ways of asking the same question to find the most effective phrasing.
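When a requirement is mechanically checkable, such as a word limit, iteration can even be scripted. The following is our own illustrative sketch rather than a prescribed workflow: it re-asks with a tighter instruction if the first response overshoots the requested length.

```python
from openai import OpenAI

client = OpenAI()


def ask(prompt: str) -> str:
    """Send a single-prompt request and return the text of the reply."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


prompt = ("Explain how the internet works, focusing on the role of routers "
          "and IP addresses, and describe the process in 200 words.")
answer = ask(prompt)

# If the reply clearly overshoots the requested length, refine and retry once.
if len(answer.split()) > 220:
    answer = ask(prompt + " Keep the answer strictly under 200 words.")

print(answer)
```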
Getting structured output: JSON format responses
In some cases, you may want OpenAI to provide a structured response, especially if you’re building applications that require data to be formatted in a specific way. OpenAI can generate responses in formats like JSON, which is useful for assessments, forms, or APIs.
When requesting answers in JSON format, it’s essential to specify both the structure and the content requirements clearly within the prompt. This ensures the AI understands the desired format and the type of data it needs to include.
Here’s an example prompt for the JSON format.
- Prompt: “Generate three multiple-choice AI questions for an assessment, along with four answer options for each question, and provide the correct answer in JSON format. The JSON format should follow this structure:
{
  "questions": [
    {
      "question": "Question text",
      "options": ["Option A", "Option B", "Option C", "Option D"],
      "correct_answer": "Correct option"
    }
  ]
}
Provide the data in this format for each question.”
The example response is:
{
  "questions": [
    {
      "question": "What is a neural network?",
      "options": [
        "A system designed to mimic the human brain",
        "A type of network used for data transmission",
        "A social media platform",
        "An algorithm for decision trees"
      ],
      "correct_answer": "A system designed to mimic the human brain"
    },
    {
      "question": "Which of the following is a type of machine learning?",
      "options": [
        "Supervised learning",
        "Quantum computing",
        "Database management",
        "Firewall configuration"
      ],
      "correct_answer": "Supervised learning"
    },
    {
      "question": "Which company developed GPT-3?",
      "options": [
        "Google",
        "Microsoft",
        "OpenAI",
        "Amazon"
      ],
      "correct_answer": "OpenAI"
    }
  ]
}
Why does this work? By specifying the exact structure of the JSON in the prompt, you ensure that OpenAI generates a response that can be easily parsed by a system or used in automated workflows. This approach is particularly helpful when working with AI in application development or form-based assessments.
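When you call the API from code, a structured reply like this can be parsed straight into native data structures. The sketch below reuses the prompt above and parses the reply with json.loads; the model name and variable names are our own, and some newer chat models also accept a response_format option for guaranteed JSON output, which this sketch does not rely on.

```python
import json

from openai import OpenAI

client = OpenAI()

prompt = (
    "Generate three multiple-choice AI questions for an assessment, along "
    "with four answer options for each question, and provide the correct "
    'answer in JSON format, using the keys "questions", "question", '
    '"options" and "correct_answer". Return only the JSON.'
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

# Parse the reply into a Python dict; this raises an error if the
# model returned anything other than valid JSON.
quiz = json.loads(response.choices[0].message.content)

for item in quiz["questions"]:
    print(item["question"], "->", item["correct_answer"])
```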
The best practices are:
- Specify the format: Be explicit about the data structure (in this case, JSON) and the keys you want in the response.
- Test the output: Ensure the AI’s response aligns with the format by tweaking the prompt if necessary.
- Use with applications: This is ideal for integrating AI-generated data directly into APIs or databases that require structured input.
Getting the right answers from OpenAI’s models comes down to how you frame the questions. With efficient prompt engineering, you can guide the AI to provide clear, relevant, and targeted responses. By preparing thoughtful inputs, planning the desired output, and experimenting with various prompts, you can unlock the full potential of OpenAI’s language models. Remember, the more specific and structured your prompts, the better the answers you’ll receive.
Disclaimer: This article made heavy use of OpenAI's services to generate content and improve its quality, because OpenAI understands the context of prompt engineering better than we do when it comes to getting the right answers. LOL!