Welcome back to our prompt engineering for software developers series!
In the previous article, we delved into the fundamentals of prompt engineering. We highlighted its significance in enhancing the performance and utility of large language models (LLMs).
We discussed how clear and well-structured prompts can significantly improve AI-driven applications’ accuracy, relevance, and efficiency.
This second article in our LLM prompt engineering for developers series focuses on the first principle of prompt engineering: writing clear and specific instructions.
This principle is crucial as it minimizes ambiguous and irrelevant outputs from AI models.
In this part of our course, we will:
- Explore how and why clarity and specificity in prompt engineering are paramount
- Provide practical implementation steps
- Demonstrate these concepts through real-world examples.
Additionally, we will introduce a key tactic for effective LLM prompt engineering for developers: using delimiters to enhance the readability and structural clarity of your prompts.

Principle 1 of LLM Prompt Engineering for Developers: Write Clear and Specific Instructions
Importance:
Clarity and specificity in prompts minimize the likelihood of ambiguous or irrelevant outputs from AI models. When prompts are vague, the model may generate a wide range of responses, often including undesirable or off-topic content.
Specific instructions help guide the model in focusing its generative capabilities precisely where needed.
How to Implement:
- Define the Task Clearly: Specify exactly what you need from the model. For example, instead of saying, “Write code for a tennis for two app,” specify, “Write clean HTML, CSS, and JavaScript code for a 2D game similar to the renowned Tennis for Two.”
- Use Concrete Details: Include pertinent details to help the model generate more accurate responses. If the task involves analyzing text, provide context or specify which aspects to focus on.
- Avoid Ambiguity: Use language that leaves little room for interpretation. This ensures that the model does not make unwarranted assumptions about the task.
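These three guidelines can be folded into a small prompt template. The sketch below is illustrative only — the `build_prompt` helper and its `task`, `context`, and `constraints` fields are our own names, not part of any API:

```python
def build_prompt(task, context="", constraints=""):
    """Assemble a prompt from an explicit task, optional context, and constraints."""
    parts = [f"Task: {task}"]
    if context:
        parts.append(f"Context: {context}")
    if constraints:
        parts.append(f"Constraints: {constraints}")
    return "\n".join(parts)

prompt = build_prompt(
    task="Write CSS for a 2D game similar to Tennis for Two.",
    context="The game runs in a browser on a single HTML canvas.",
    constraints="Use plain CSS only, no preprocessors.",
)
print(prompt)
```

Keeping the task, context, and constraints as separate named fields makes it hard to accidentally omit one, which is exactly the kind of vagueness this principle warns against.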
The best way to learn is through real practice!
Let’s learn the first method of LLM prompt engineering for developers by setting up our environment.
Setting Up Your Development Environment for OpenAI API Usage
This is a crucial step for any AI developer looking to leverage Large Language Models for various applications.
Step 1: Installing the OpenAI Python Package
First, ensure that your Python environment is ready and capable of interacting with OpenAI’s API. This involves installing the OpenAI Python package, which provides a convenient way to make API requests to OpenAI services.
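The package is published on PyPI; a typical install, assuming a working Python environment, looks like:

```shell
pip install openai
```

If you work with virtual environments, activate one first so the dependency stays project-local.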
Step 2: Acquiring Your API Key
To authenticate your requests to OpenAI’s services, you’ll need an API key. This key is a unique identifier that grants you access to OpenAI’s API under your account.
Here’s how to obtain it:
- Go to the OpenAI platform: OpenAI API Keys
- Log in with your credentials or sign up if you haven’t already.
- Navigate to the API keys section and generate a new key or copy an existing one if you’ve previously set it up.
Remember, your API key is sensitive information. It grants access to OpenAI’s API and will incur costs if misused. Keep it secure and do not share it publicly.
Step 3: Configuring Your API Key in Your Python Script
Once you have your API key, you must configure it in your Python script to authenticate your API requests.
By setting openai.api_key, you authenticate your Python session to send requests to OpenAI’s API. This setup is essential before you send any API requests. (Note: the example below uses the interface of the openai Python package prior to version 1.0.)
# Import the openai library to interact with the OpenAI API
import openai

# Set your OpenAI API key here. This key is used to authenticate your requests to the API.
openai.api_key = "YOUR_API_KEY"

# Define a function to get a completion from a model. This function will be used
# to generate responses based on a given prompt.
def get_completion(prompt, model="gpt-3.5-turbo"):
    # Create a list containing the prompt wrapped in a dictionary that specifies the role as "user"
    messages = [{"role": "user", "content": prompt}]
    # Call the OpenAI API to generate a chat completion. The parameters are:
    # model: which model to use for generating the response
    # messages: the conversation history, which here is just the prompt
    # temperature: controls the randomness of the response; 0 gives consistent, deterministic responses
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0
    )
    # Extract and return the content of the response from the first choice in the list
    return response.choices[0].message["content"]

Tactic 1: Using Delimiters in Prompt Engineering
Understanding the strategic use of delimiters is essential for crafting effective prompts for AI models. Delimiters help clearly define the boundaries of different segments within a prompt. This assists the model in parsing and interpreting the input accurately.
Let’s explore how to use delimiters effectively through a detailed example.
Delimiters for Clarity and Structure
Purpose of Delimiters:
Delimiters are symbols or sets of characters used to separate distinct parts of the input in a prompt. They can take various forms, such as triple backticks (```), triple quotes ("""), angle brackets (< >), or custom XML-like tags (<tag></tag>).
Their main purpose is to enhance the readability and structural clarity of the prompt. In turn, this directs the AI to process the text in a manner aligned with the developer’s intentions.
Example of Using Delimiters:
Consider you’re tasked with creating a prompt asking an AI model to summarize a lengthy and complex text.
To ensure that the AI focuses only on the specific text you intend without confusion, you can use triple backticks to enclose the text.
How will you structure such a prompt?
text = """
Clear instructions are crucial when directing an AI model, as they \
ensure the responses are directly relevant and highly accurate. Detailed \
directives not only guide the model precisely but also enhance the quality \
of the outputs. It is vital not to equate the simplicity of a prompt with \
efficacy. Often, more elaborate prompts that include comprehensive details \
and context can significantly improve the model's performance by providing \
clear directions and expectations.
"""
prompt = f"""
Summarize the text delimited by triple backticks \\
```{text}```
"""
response = get_completion(prompt)
print(response)

The output response is:

"Clear and detailed instructions are essential for directing an AI model effectively, improving the accuracy and quality of its outputs by providing comprehensive details and context."

The choice of delimiter (``` in this case) is crucial. It clearly segments the text that needs processing from the instructional or contextual parts of the prompt.
This prevents the model from confusing the parts of the input that are mere instructions versus those that require action.
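The same pattern can be wrapped in a small helper so the delimiter choice lives in one place. This is a sketch, not part of the OpenAI library; the function name is our own:

```python
def wrap_with_delimiters(instruction, text, delimiter="```"):
    """Combine an instruction with the text it applies to,
    fencing the text so the model cannot mistake it for instructions."""
    return f"{instruction}\n{delimiter}{text}{delimiter}"

prompt = wrap_with_delimiters(
    "Summarize the text delimited by triple backticks.",
    "Clear instructions are crucial when directing an AI model.",
)
print(prompt)
```

The resulting string can then be passed straight to a completion helper such as `get_completion` above.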
Tactic 2: Asking for Structured Output in AI Development
In LLM prompt engineering for developers, it is highly beneficial to specify the expected output format. This practice ensures consistency, readability, and ease of integration of the AI’s output into other systems or workflows.
Here, we will explore how to structure prompts to request outputs in specific formats such as JSON, which is widely used for data exchange.
Imagine you are developing a feature for a streaming service that suggests new movies to users based on generated descriptions. You want the AI model to generate a list of made-up movie titles along with their directors, genres, and a brief synopsis.
The output should be in JSON format for easy integration into your service’s database.
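Requesting JSON pays off because the reply can be parsed directly into native data structures. As a quick sketch of the consuming side — using a hard-coded sample string in place of a live API response:

```python
import json

# A hypothetical response, shaped like the output we expect from the model.
raw_response = (
    '{"movies": [{"movie_id": 1, "title": "The Midnight Masquerade", '
    '"director": "Elena Blackwood", "genre": "Mystery/Thriller", '
    '"synopsis": "A masquerade ball turns deadly."}]}'
)

data = json.loads(raw_response)
for movie in data["movies"]:
    print(movie["title"], "-", movie["director"])
```

In a real pipeline you would pass the string returned by `get_completion` to `json.loads` instead of the sample; malformed JSON raises `json.JSONDecodeError`, which is worth catching.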
prompt = f"""
Generate a list of three made-up movies along \
with their directors, genres, and a brief synopsis. \
Provide them in JSON format with the following keys: \
movie_id, title, director, genre, synopsis.
"""
response = get_completion(prompt)
print(response)
```json
{
  "movies": [
    {
      "movie_id": 1,
      "title": "The Midnight Masquerade",
      "director": "Elena Blackwood",
      "genre": "Mystery/Thriller",
      "synopsis": "A series of mysterious murders occur during a lavish masquerade ball, throwing everyone into a maze of suspense and deception."
    }
  ]
}
```

Tactic 3: Conditional Instruction Verification in AI Development
In AI development, especially with instructional content, it’s essential to guide the model to verify whether certain conditions are met in the input before generating an output.
This tactic involves crafting prompts that direct the AI to check whether the input fits a given criterion — for example, whether it contains a sequence of steps or instructions.
Let’s discuss how to implement this tactic using an example where we transform a descriptive process into a structured, step-by-step format.
Example: Structuring Text About Setting Up a Home Office
Imagine you have a text that describes the process of setting up a home office. The goal is for the AI to analyze the text and, if it identifies a sequence of actionable steps, reformat them into a numbered list.
The AI should return a specific message if no clear instructions are identified.
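On the application side, that fixed message acts as a sentinel your calling code can branch on. A minimal sketch, assuming the model follows the requested "Step N - ..." format exactly; the helper name is our own:

```python
NO_STEPS = "No steps provided."

def extract_steps(response):
    """Return the list of numbered steps found in a model response,
    or None when the model reported that no steps were present."""
    if response.strip() == NO_STEPS:
        return None
    return [line for line in response.splitlines() if line.startswith("Step")]
```

If the model drifts from the format, `extract_steps` degrades gracefully to an empty list rather than crashing.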
Here’s how you could set up the prompt for this task:
text_1 = """
To set up a basic home office, start by choosing a quiet corner of your house.
"""
prompt = f"""
You will be provided with text delimited by triple quotes.
If it contains a sequence of instructions, \
re-write those instructions in the following format:
Step 1 - ...
Step 2 - ...
...
Step N - ...
If the text does not contain a sequence of instructions, \
then simply write \"No steps provided.\"
\"\"\"{text_1}\"\"\"
"""
response = get_completion(prompt)
print("Completion for Text 1:")
print(response)

The output response:

Completion for Text 1:
Step 1 - Choose a quiet corner of your home with minimal distractions.
Step 2 - Ensure there is ample lighting, preferably natural light.
Step 3 - Select a desk that fits the space and can accommodate your essentials.
Step 4 - Invest in a comfortable chair that supports good posture.
Step 5 - Organize your supplies and technology to keep your workspace clean and efficient.

Tactic 4: Few-Shot Prompting in AI Development
Few-shot prompting is another essential tool in LLM prompt engineering for developers. It’s especially useful when preparing a language model to handle requests in a specific format or style based on just a few guiding examples.
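Few-shot prompts can also be assembled programmatically from example pairs, which keeps the format consistent as you add examples. A sketch — the helper name is our own, and the mentee/mentor tags mirror the example that follows:

```python
def few_shot_prompt(instruction, examples, question):
    """Build a few-shot prompt from (question, answer) example pairs."""
    lines = [instruction]
    for q, a in examples:
        lines.append(f"<mentee>: {q}")
        lines.append(f"<mentor>: {a}")
    # End with the new question so the model answers in the mentor's voice.
    lines.append(f"<mentee>: {question}")
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Your task is to answer in a consistent, advisory style.",
    [("How can I stay organized?",
      "Staying organized is all about prioritizing your tasks.")],
    "How can I manage my time better?",
)
print(prompt)
```

Adding more example pairs to the list is all it takes to give the model a stronger sense of the desired style.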
Let’s explore another example where we use few-shot prompting to teach an AI how to provide advice in a conversational format, focusing on practical life skills.
Example: Giving Advice on Time Management
Suppose we want the AI model to give advice on managing time effectively, akin to how a mentor might advise a mentee.
We’ll use the few-shot technique to first show the model how to respond to a question about staying organized. Then, we’ll ask it to generate advice on managing time.
prompt = f"""
Your task is to answer in a consistent, advisory style.
<mentee>: How can I stay organized?
<mentor>: Staying organized is all about prioritizing your tasks and knowing what to tackle first.
<mentee>: How can I manage my time better?
"""
response = get_completion(prompt)
print(response)

The generated response is:

<mentor>: To manage your time better, it's important to set specific goals and create a daily schedule. Break down your tasks into smaller, manageable chunks and allocate time for each one. Avoid multitasking and eliminate distractions to stay focused. Remember to take breaks to avoid burnout and maintain productivity throughout the day.

Principle 2: Give the Model Time to Think
This principle is metaphorical in the AI and computer science context.
It refers to structuring prompts and interactions with the model in a way that allows it to “think” or process the information more thoroughly.
For computational models, this might involve crafting prompts that encourage deeper analysis. It could also mean using techniques that push the model to consider multiple aspects of a problem before responding.
How to Implement:
- Iterative Prompting: Instead of expecting the model to deliver a perfect output in one go, consider using an iterative approach. Start with a basic prompt, evaluate the output, and then refine the prompt based on the insights gained.
- Layered Prompts: For complex tasks, break down the prompt into smaller, manageable parts. This approach allows the model to handle one aspect of the problem at a time. It can be particularly useful in tasks requiring synthesizing diverse data points.
- Encourage Detailed Responses: When appropriate, prompt the model to provide explanations or reasoning for its answers. This not only gives the model “time to think” but also forces the model to process information at a deeper level, potentially leading to higher-quality and more insightful responses.
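The layered-prompt idea in particular lends itself to a small helper that numbers the subtasks, in the spirit of the environmental-data example later in this section. A sketch with hypothetical names:

```python
def layered_prompt(subtasks, data):
    """Compose a multi-step prompt that walks the model through one subtask at a time."""
    steps = "\n".join(f"{i} - {task}" for i, task in enumerate(subtasks, start=1))
    # Triple quotes fence the data off from the instructions, as in Tactic 1.
    return f'Perform the following actions:\n{steps}\nData:\n"""{data}"""'

prompt = layered_prompt(
    ["Summarize the trends.", "Flag anomalies.", "Report in JSON."],
    "temperature readings for the past month",
)
print(prompt)
```

Because the steps are numbered explicitly, the model's answer tends to mirror that structure, which makes the output easier to parse downstream.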
Tactic 1: Specifying Steps for Task Completion with a New Example
Breaking down complex tasks into clear, structured steps can significantly improve the effectiveness of AI prompts. By clearly outlining each step, software developers guide the model more effectively.
This tactic is about breaking down the task into manageable parts and specifying what the model should do at each step.
Let’s explore an example to demonstrate this method of LLM prompt engineering for developers. We’ll use a scenario where the goal is to process and summarize environmental data.
Example: Summarizing and Analyzing Environmental Data
Suppose you want an AI model to analyze data about recent temperature trends in various cities and then generate a report. The data includes temperature readings over the last month, and your goal is to get insights into temperature trends and any anomalies.
# Define the data text
text = """
The temperature data for the past month across various cities is as follows:
New York: daily averages around 15°C, with a peak of 25°C on the 15th.
Los Angeles: steady daily averages around 19°C.
Denver: typically around 10°C, with a spike to 20°C mid-month.
"""
# Create a structured, step-by-step prompt
prompt_1 = f"""
Perform the following actions:
1 - Summarize the temperature trends for each city using the data provided.
2 - Identify any unusual temperature fluctuations and note which city they occurred in.
3 - Generate a brief report in JSON format that includes average temperatures.
Separate your responses with line breaks.
Data:
```{text}```
"""
response = get_completion(prompt_1)
print("Completion for prompt 1:")
print(response)

Completion for prompt 1:
1 - Temperature trends:
- New York: Averaged 15°C with a peak at 25°C on the 15th.
- Los Angeles: Averaged 19°C with no significant fluctuations.
- Denver: Saw an unusual spike to 20°C mid-month, typically averaging 10°C.
2 - Unusual temperature fluctuations occurred in Denver with a spike to 20°C mid-month.
3 - Report in JSON format:
{
  "New York": {
    "average_temperature": 15,
    "highest_temperature": 25,
    "anomalies": []
  },
  "Los Angeles": {
    "average_temperature": 19,
    "highest_temperature": null,
    "anomalies": []
  },
  "Denver": {
    "average_temperature": 10,
    "highest_temperature": 20,
    "anomalies": ["Spike to 20°C mid-month"]
  }
}

Reducing Hallucinations in Large Language Models (LLMs)
Hallucinations are instances where the model generates plausible but incorrect or nonsensical information.
AI hallucinations compromise the reliability and trustworthiness of machine learning tools. Their impact is even greater in scenarios requiring high accuracy, such as the medical, legal, or reporting sectors.
Here are several prompt engineering techniques to reduce hallucinations in LLMs:
1. Rigorous Pre-training and Fine-Tuning
Pre-training:
Enhance the initial training phase of your LLM by using a diverse and extensive dataset. A well-rounded dataset helps the model develop a robust understanding of language and facts. This can reduce the likelihood of generating incorrect information.
Fine-Tuning:
Fine-tune the model for your specific needs by training it on datasets tailored to the tasks it will perform. This customization ensures better alignment with your application’s requirements.
This targeted training can help the model learn the nuances of the domain and improve its accuracy in generating relevant responses.
2. Incorporating External Verification Systems
Implement systems that cross-verify the model’s outputs with trusted external databases or APIs.
For instance, integrating real-time data checks from verified sources can help ensure the information the model uses is current and accurate.
This is particularly useful in dynamic fields like finance or news where facts can frequently change.
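The shape of such a check can be sketched in a few lines. Everything here is hypothetical — a real system would query a verified database or API rather than an in-memory dict:

```python
# A hypothetical trusted lookup standing in for a verified database or API.
TRUSTED_FACTS = {"capital of France": "Paris"}

def verify_claim(topic, model_answer):
    """Cross-check a model answer against the trusted source.
    Returns True/False, or None when the topic cannot be verified."""
    expected = TRUSTED_FACTS.get(topic)
    if expected is None:
        return None
    return expected.lower() in model_answer.lower()
```

The three-valued result matters: an unverifiable claim (None) should be flagged for review, not silently treated as correct.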
3. Using Prompt Engineering to Guide Model Outputs
Crafting prompts effectively can significantly reduce the occurrence of hallucinations.
Design prompts to be clear and direct, specifying the kind of information needed and the context in which it should be framed. This guidance can help the model generate responses that are more aligned with factual accuracy.
- Example Prompt: Instead of asking, “Tell me about climate change,” ask, “What are the scientifically verified impacts of climate change according to the latest IPCC report?”
4. Implementing Feedback Loops
User feedback helps improve the model continually. You can adjust the model’s training data and response generation mechanisms by analyzing cases where the model hallucinates.
Regularly updating the model based on feedback helps it learn from its mistakes, progressively reducing the frequency of incorrect outputs.
5. Ensemble Methods
Use ensemble techniques where outputs are generated by multiple models or runs of the same model, and the most common output is chosen. This method can average out unusual errors or hallucinations that occur in a single run, leading to more accurate and reliable outputs.
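The simplest form of ensembling — majority voting over repeated runs — takes only a few lines. A sketch, assuming the outputs are directly comparable strings:

```python
from collections import Counter

def majority_vote(outputs):
    """Return the most common output across several runs of the model."""
    return Counter(outputs).most_common(1)[0][0]

# Three of four runs agree, so the outlier is discarded.
runs = ["Paris", "Paris", "Lyon", "Paris"]
print(majority_vote(runs))  # -> Paris
```

For free-form text, outputs rarely match exactly, so real systems typically vote on a normalized or extracted answer rather than the raw completion.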
6. Human-in-the-Loop (HITL) Systems
HITL, or Human-in-the-loop, refers to the integration of human expertise in the prompt engineering process. This approach is particularly important in applications where the outputs are critical or seemingly inconsistent.
This not only catches hallucinations but also adds an extra layer of reliability to the system.
HITL is commonly used by software engineers and developers to ensure the delivered code is clean and accurate.
Final Thoughts
This article focused on the importance of writing clear and specific instructions as a fundamental principle of LLM prompt engineering for developers.
We discussed how clarity and specificity in prompts can significantly improve the accuracy and relevance of AI-generated outputs.
By defining tasks clearly, using concrete details, and avoiding ambiguity, developers can guide AI models more effectively.
We also introduced the use of delimiters to enhance the readability and structural clarity of prompts, providing practical examples to illustrate these concepts.
In the next article, we will introduce iterative prompt refinement and how it can be used to continually improve the performance of your AI models.