Welcome back to our prompt engineering for software developers series!
The previous article delved into the iterative process of prompt development. We highlighted the importance of refining and enhancing prompts to improve the effectiveness of generative AI models.
This article will explore the basics of LLM inference.
Inference involves taking a text input and performing various kinds of analysis on it, such as extracting labels and names or understanding sentiment.
Traditional machine-learning workflows for these tasks can be complex and resource-intensive. However, LLMs simplify the process with prompt-based inference.
They provide immediate results and streamline development.
We will explore how LLMs can efficiently handle multiple tasks, making your development process faster and more effective.
The Basics of LLM Inference
LLM inference is about taking a text input and performing various types of analysis on it, such as extracting labels and names or understanding sentiment.
Traditional machine-learning workflows for these tasks usually require significant time and effort: collecting labeled data, training models, and deploying them. The process can be complex and resource-intensive.
LLMs, on the other hand, simplify this with prompt-based inference, which provides immediate results and streamlines development.
In other words, you can use one model and one API for multiple tasks, easily speeding up your development process and increasing efficiency.
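The code examples in this article all call a get_completion helper that the article itself doesn't define. A minimal sketch of such a helper, assuming the OpenAI Python SDK (any chat-completion API would work equally well), might look like this:

```python
def get_completion(prompt, model="gpt-3.5-turbo"):
    """Send a single-turn prompt and return the model's reply as text."""
    from openai import OpenAI  # assumes the OpenAI Python SDK is installed

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic answers suit inference tasks
    )
    return response.choices[0].message.content
```

The temperature of 0 is a deliberate choice: for inference tasks like classification and extraction you usually want the most likely, repeatable answer rather than creative variation.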
Simplifying Sentiment Analysis with LLM Inference

LLMs make sentiment analysis much easier with simple prompts. Instead of training and deploying separate models for sentiment analysis, you can just ask the model directly.
For example, if you want to know the sentiment of a product review, you can prompt: “What is the sentiment of the following product review?”
The model quickly responds with something like, “The sentiment of the product review is positive,” making it straightforward to gauge customer feedback.
# Product review text
coffee_maker_review = """
I recently bought this coffee maker and it has been a great addition to my kitchen.
The design is sleek and modern, and it brews coffee very quickly.
However, the water reservoir is a bit small and needs frequent refilling.
The machine arrived with a small scratch, but the customer service team was
very responsive and sent a replacement part immediately. Overall, I am happy
with this purchase and would recommend it to others.
"""
# Sentiment (positive/negative)
prompt = f"""
What is the sentiment of the following product review,
which is delimited with triple backticks?
Review text: '''{coffee_maker_review}'''
"""
response = get_completion(prompt)
print(response)
prompt = f"""
What is the sentiment of the following product review,
which is delimited with triple backticks?
Give your answer as a single word, either "positive"
or "negative".
Review text: '''{coffee_maker_review}'''
"""
response = get_completion(prompt)
print(response)
response:
The sentiment of the product review is generally positive. The reviewer mentions that the coffee maker is a great addition to their kitchen, with sleek and modern design, and brews coffee quickly. They do mention a couple of minor issues such as the small water reservoir and a scratch on the machine, but overall they are happy with the purchase and would recommend it to others.
Positive

Extracting Emotions and Specific Information
LLMs excel at extracting specific information from text, like identifying emotions in a review.
For instance, you can prompt the model with: “Identify a list of emotions that the writer of the following review is expressing. Include no more than five items in this list.”
This is particularly useful for customer support. It helps you quickly determine if a customer is angry, delighted, or experiencing other emotions. Accordingly, it enables you to respond appropriately and efficiently.
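Once the model answers, acting on that answer is plain Python. A support pipeline might, for instance, branch on the model's one-word anger answer. Here is a minimal sketch (route_ticket is a hypothetical helper, not part of the article's examples):

```python
def route_ticket(anger_answer, review):
    """Route a review to a support queue based on the model's yes/no answer."""
    if anger_answer.strip().lower().startswith("yes"):
        return ("priority", review)  # angry customers get a human follow-up
    return ("standard", review)

queue, _ = route_ticket("No", "Overall, I am satisfied with this purchase.")
print(queue)  # standard
```

Normalizing the answer with strip() and lower() guards against minor formatting drift in the model's output, such as trailing whitespace or capitalization.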
# Product review text
vacuum_review = """
I bought this vacuum cleaner a month ago and it has exceeded my expectations.
It has powerful suction and is very easy to maneuver around the house.
The only downside is that it is a bit noisy.
I contacted customer support about a minor issue with the attachment,
and they were extremely helpful and resolved my problem quickly.
Overall, I am satisfied with this purchase and would recommend it to others.
"""
# Identify types of emotions
prompt = f"""
Identify a list of emotions that the writer of the \
following review is expressing. Include no more than \
five items in the list. Format your answer as a list of \
lower-case words separated by commas.
Review text: '''{vacuum_review}'''
"""
response = get_completion(prompt)
print(response)
# Identify anger
prompt = f"""
Is the writer of the following review expressing anger?\
The review is delimited with triple backticks. \
Give your answer as either yes or no.
Review text: '''{vacuum_review}'''
"""
response = get_completion(prompt)
print(response)
response:
satisfaction, excitement, disappointment, gratitude, recommendation
No

Information Extraction Techniques
Information extraction is another strength of LLMs: they can pinpoint details such as product names and manufacturers.
You might prompt the model with, “Identify the item purchased and the name of the company that made the item.”
This technique is handy for e-commerce sites looking to summarize reviews, track product sentiment, and monitor manufacturer reputation.
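A structured output format pays off downstream. When the prompt requests a JSON object, as in the example that follows, the response string can be loaded straight into a Python dict. A minimal sketch, assuming a well-formed reply:

```python
import json

# Hypothetical model reply, in the JSON format the prompt requests
response = '{"Item": "BrewMaster 5000 coffee maker", "Brand": "CoffeeCo"}'

review_data = json.loads(response)  # raises ValueError on malformed JSON
print(review_data["Brand"])  # CoffeeCo
```

In production you would wrap json.loads in a try/except, since models occasionally wrap JSON in extra prose despite instructions.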
# Product review text
coffee_maker_review = """
I recently purchased the BrewMaster 5000 coffee maker, and it has been a game-changer for my mornings.
The coffee tastes fantastic and the machine is incredibly easy to use.
I had a small issue with the water reservoir, but the customer service from CoffeeCo was top-notch and they replaced the part immediately.
"""
# Extract product and company name from customer reviews
prompt = f"""
Identify the following items from the review text:
- Item purchased by reviewer
- Company that made the item
The review is delimited with triple backticks. \
Format your response as a JSON object with \
"Item" and "Brand" as the keys.
If the information isn't present, use "unknown" \
as the value.
Make your response as short as possible.
Review text: '''{coffee_maker_review}'''
"""
response = get_completion(prompt)
print(response)
response:
{ "Item": "BrewMaster 5000 coffee maker", "Brand": "CoffeeCo" }

Advanced LLM Inference with Combined Prompts
LLMs can handle more complex prompts to extract multiple pieces of information simultaneously.
For example, a single prompt such as “Identify the following items: the sentiment, whether the reviewer is expressing anger, the item purchased, and the company that made it” allows for comprehensive data extraction in one go.
This enhances efficiency and data richness, enabling you to gather detailed insights with minimal effort.
# Product review text
smartwatch_review = """
I recently bought the TechTime X200 smartwatch and I am very impressed.
It has a sleek design and a variety of useful features like heart rate monitoring and GPS.
Although I had a small issue with the strap, TechTime's customer support was excellent and resolved my problem quickly.
"""
# Extract sentiment, anger, product, and company name in a single call
prompt = f"""
Identify the following items from the review text:
- Sentiment (positive or negative)
- Is the reviewer expressing anger? (true or false)
- Item purchased by reviewer
- Company that made the item
The review is delimited with triple backticks. \
Format your response as a JSON object with \
"Sentiment", "Anger", "Item" and "Brand" as the keys.
If the information isn't present, use "unknown" \
as the value.
Make your response as short as possible.
Format the Anger value as a boolean.
Review text: '''{smartwatch_review}'''
"""
response = get_completion(prompt)
print(response)
response:
{ "Sentiment": "positive", "Anger": false, "Item": "TechTime X200 smartwatch", "Brand": "TechTime" }
Topic Inference from Text
LLMs can also infer topics from longer texts, such as articles. This capability is valuable for summarizing and categorizing content.
For example, you can use a prompt like: “Determine five topics being discussed in the following text.”
This helps manage and index large datasets of textual information. This makes searching and analyzing content based on the inferred topics easier.
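Because the prompt asks for a comma-separated list, turning the reply into a Python list for indexing is a one-liner. A sketch with a hypothetical response string:

```python
# Hypothetical reply from the topic-inference prompt
response = "quarterly earnings, smartphone sales, cloud services, supply chain, stock price"

# Split on commas and trim whitespace to get an indexable topic list
topics = [topic.strip() for topic in response.split(",")]
print(topics)
```

These topic strings can then serve as tags or index keys in a content database.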
# Story text
story = """
TechInnovate, a leading technology company, recently announced its quarterly earnings report.
The company reported a record revenue of $10 billion, driven by strong sales of its latest smartphone, the TechPhone 12.
CEO Jane Doe stated, "We are thrilled with the performance of our new product line and the positive reception from customers worldwide."
The earnings report also highlighted a 20% increase in software sales, attributed to the growing demand for cloud services.
However, the company faced challenges in its supply chain due to global semiconductor shortages, which impacted the production schedule.
Despite these challenges, TechInnovate's stock price rose by 5% following the announcement, reflecting investor confidence in the company's growth prospects.
"""
# Infer 5 topics
prompt = f"""
Determine five topics that are being discussed in the \
following text, which is delimited by triple backticks.
Make each item one or two words long.
Format your response as a list of items separated by commas.
Text sample: '''{story}'''
"""
response = get_completion(prompt)
print(response)

Practical Applications of Topic Inference
LLMs can automate content categorization and set up alert systems by leveraging inferred topics.
Prompting the model to determine whether an article covers specific topics can be useful for businesses. This allows them to set up alerts for new content related to those topics.
This automation ensures timely updates and effective content management. For example, if you’re interested in NASA news, the system can notify you whenever a new article on NASA appears.
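One common way to implement such an alert (a general pattern, not shown in the article's own examples) is to ask the model to answer 0 or 1 for each topic of interest, then parse the reply into flags:

```python
# Hypothetical model reply: each candidate topic followed by 0 or 1, as
# requested by a prompt like "For each topic, answer 0 or 1."
model_output = """
nasa: 1
local news: 0
engineering: 0
stock market: 1
"""

def parse_topic_flags(text):
    """Parse 'topic: 0/1' lines into a dict mapping topic -> bool."""
    flags = {}
    for line in text.strip().splitlines():
        topic, _, value = line.partition(":")
        flags[topic.strip()] = value.strip() == "1"
    return flags

topic_flags = parse_topic_flags(model_output)
if topic_flags.get("nasa"):
    print("ALERT: new NASA story!")
```

The zero/one format is easy for models to follow consistently and trivial to parse, which is exactly what an automated alerting pipeline needs.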
Creating reliable inference systems requires robust prompt design and consistent output formats, like JSON.
Experimenting with different prompt structures and formats will help you build more effective and reliable inference systems, ultimately enhancing the performance and accuracy of your AI applications.
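A consistent format also lets you validate responses before using them. Here is a minimal sketch of such a guard, assuming the JSON keys from the extraction examples above (validate_response is a hypothetical helper):

```python
import json

# Keys the extraction prompt asks for; adjust to your own schema
REQUIRED_KEYS = {"Item", "Brand"}

def validate_response(raw):
    """Return the parsed dict if raw is valid JSON with the expected keys, else None."""
    try:
        data = json.loads(raw)
    except ValueError:
        return None
    if not isinstance(data, dict) or not REQUIRED_KEYS.issubset(data):
        return None
    return data

# A chatty, non-JSON reply fails validation, signalling the caller to retry
print(validate_response("Sure! Here is the JSON you asked for..."))  # None
print(validate_response('{"Item": "TechPhone 12", "Brand": "TechInnovate"}'))
```

Returning None on any malformed reply gives the calling code a single, simple signal to re-prompt the model or fall back to a default.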
Final Thoughts
This article explored the basics of inferring with large language models (LLMs). We discussed how LLMs simplify tasks such as sentiment analysis, emotion detection, and extracting specific information from text.
We demonstrated how to perform these tasks efficiently by leveraging prompt-based LLM inference. This approach eliminates the need for complex and resource-intensive traditional machine learning workflows.
Practical examples illustrated how LLM inference can quickly provide valuable insights from text, which is essential to enhancing your AI applications’ performance and accuracy.
The following article will explore advanced prompt engineering techniques for transforming text using LLMs.
We will cover various tasks such as language translation, tone transformation, data format conversion, and grammar and spell checking.