Simplifying Prompt Engineering for Everyday Tasks

A Practical Checklist for Consistently High-Quality AI Responses

TL;DR: Effective communication with AI models, particularly GenAI, relies heavily on prompt engineering: crafting precise instructions that elicit the response you want. Newer models differ only slightly in raw performance, so how we interact with them has an outsized impact on the quality of the output. The key elements of a prompt are assigning a role to guide the AI's persona, providing context to tailor the response, suggesting a response format for clarity, and offering examples to improve accuracy. More advanced techniques exist, but these basics alone can significantly improve the quality of AI responses, making prompt engineering a crucial skill for leveraging AI effectively.

I recently came across an article highlighting that while newer AI models show only minor differences in performance, the way we communicate with them has a profound impact. Prompt engineering is crucial for effective interaction with GenAI, yet it often receives little attention because these models can generate responses to almost any input. However, assuming these responses are always optimal is a misconception.

Personally, I initially struggled to leverage AI effectively due to my lack of skill in crafting good prompts. When I was either too lazy or in a rush, I received superficial answers that required additional time to refine. But, in reality, creating effective prompts isn't overly complex.

To achieve high-quality responses, I focus on including four key elements in my prompts. While there are more advanced techniques available, primarily for developers integrating LLMs into applications, these basic elements can significantly enhance the quality of AI outputs. This post explores the essential components of effective prompts, with a follow-up post planned to delve into advanced techniques like chain-of-thought and tree-of-thought prompting.

Elements of every effective prompt

Imagine you're considering investing in the stock market for the first time. You call a friend for advice on which stocks to purchase and how much to invest. If your friend lacks investment experience or isn't a professional financial advisor, you might end up making costly decisions.

Now, let's apply this scenario to Large Language Models (LLMs). These models are trained on a vast array of texts, covering everything from medicine to finance to poetry. The challenge lies in knowing which aspect of their knowledge base you're interacting with, or essentially, which "persona" of the GenAI you're seeking advice from.

To address this, you can assign a role to the AI. This helps narrow down the specific area of expertise you want the AI to draw upon, ensuring more relevant and accurate advice.

Assign a role

One of the most important techniques in crafting effective prompts is assigning a role to the AI. This simple step can significantly improve the quality of the response from a Large Language Model (LLM).

To illustrate this, consider an example where I asked ChatGPT, "What's the best way to format a document titled 'How to grow houseplants?'" The response I received was informative, but it lacked specificity. By assigning a role, you can tailor the AI's response to fit a particular context or expertise, leading to more targeted and useful advice.

ChatGPT's response with no role assigned

While the initial response was satisfactory, it became even more valuable when I assigned a specific role to ChatGPT. By adding "You're an SEO expert" to the beginning of my question, the response transformed into a highly relevant and specialized piece of advice. This demonstrates how assigning a role can elevate the quality and relevance of the AI's output, making it more suitable for specific tasks or domains.

ChatGPT's response with a role assigned

Notice the difference: with the role assigned, the response is exactly what you'd expect from a well-crafted SEO post, complete with extra optimization tips. Both responses are technically correct, but when it comes to specific tasks like creating an SEO post or writing a research paper, it's crucial to ask the right "expert." For SEO content, you want advice from an SEO specialist, while for academic work, you should seek input from an academic. Therefore, always ensure that you assign a relevant role to your AI prompts to get the most appropriate and effective responses.
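If you ever want to script this rather than type it into the chat window, the role simply becomes the first thing the model sees. Here's a minimal sketch assuming the OpenAI Python SDK; the model name and the exact wording are my own illustrative choices, not a fixed recipe.

# A minimal sketch of role assignment, assuming the OpenAI Python SDK
# (pip install openai); the model name is an illustrative assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "What's the best way to format a document titled 'How to grow houseplants'?"

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model works here
    messages=[
        # The role goes into the system message; in the chat UI you would
        # simply prepend "You're an SEO expert." to your question.
        {"role": "system", "content": "You're an SEO expert."},
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)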

Explain the context

Context is essential for Large Language Models (LLMs) to generate responses that are relevant and tailored to your specific needs. To illustrate this, consider the previous example where we asked an LLM to create an SEO post about houseplants. However, if this post is intended for a blog focused on grow lights, the context changes significantly. Just as a human writer would adapt their content based on this new context, an LLM also needs this information to produce a more appropriate and effective response.

ChatGPT's response when both a role and context are assigned

The updated post now includes a section focused on light requirements, which is perfectly suited for a grow light company's website. Context can vary in scope, ranging from a brief statement to a comprehensive document, webpage, or even a transcript from a YouTube video. It serves as the foundation for the AI to generate accurate and relevant responses. If you're automating the process and prefer not to manually insert this context, using formats like JSON can simplify the task. You can simply request the AI to work with this format, making the integration more efficient.
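As a rough sketch of what that can look like in practice, here's how I might pass context as a small JSON blob inside the prompt; the company details below are made-up placeholders.

# A rough sketch of supplying context programmatically; the company details
# below are made-up placeholders.
import json

context = {
    "publisher": "a company that sells LED grow lights",
    "audience": "indoor gardeners shopping for lighting",
    "goal": "an SEO blog post about growing houseplants",
}

prompt = (
    "You're an SEO expert. Use the following context, given as JSON, "
    "when writing the post:\n"
    + json.dumps(context, indent=2)
    + "\n\nWrite an outline for a post titled 'How to grow houseplants'."
)
# `prompt` is then sent as the user message, exactly as in the earlier sketch.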

Suggest response format

I find it more effective when Large Language Models (LLMs) fill in the blanks of a predefined template rather than generating text from scratch. While this approach may not be suitable for all scenarios, I've consistently observed that suggesting a specific response format leads to better outcomes.

There are two primary methods I use to achieve this. The first involves providing a template for the LLM to complete with its response. This structured approach helps ensure that the output aligns with my expectations and is easier to understand.

In my prompts, I often provide a partially completed answer with placeholders for the specific details I need the LLM to fill in. Because I've already structured the format in my head, the output is easier to digest than free-form text, and the results are consistently well organized, which is why I prefer this method over open-ended responses.
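Here's a sketch of what such a template can look like; the headings and placeholders are my own invention and should be shaped around whatever output you actually need.

# A sketch of the fill-in-the-blanks approach; the template and placeholders
# are invented for illustration, shape them around the output you need.
template = """Complete the template below. Keep the headings exactly as given
and replace each <...> placeholder with your answer.

Title: <catchy, SEO-friendly title>
Meta description: <under 160 characters>
Introduction: <2-3 sentences>
Section 1 - Light: <key advice>
Section 2 - Watering: <key advice>
Closing call to action: <one sentence>"""
# Send `template` as the user message (after the role and context) and the
# model returns the same skeleton with the blanks filled in.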

To date, this prompt engineering technique has proven highly effective and reliable. There are also times when I need the response in a specific format, such as JSON, to make integration or analysis easier, or in a table rather than a lengthy block of text, simply because it's more readable. For instance, let's take our previous blog post example and request it in JSON format this time.
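For the JSON case, a small sketch like the following is usually enough; I'm assuming the OpenAI Python SDK again, and the key names are arbitrary choices you'd replace with whatever your downstream code expects.

# A sketch of asking for JSON so the answer can be parsed directly; the key
# names are assumptions, not a required schema.
import json
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # assumption
    messages=[
        {"role": "system", "content": "You're an SEO expert."},
        {
            "role": "user",
            "content": (
                "Outline a blog post titled 'How to grow houseplants'. "
                "Respond with JSON only, using the keys "
                '"title", "meta_description", and "sections" '
                "(a list of objects with \"heading\" and \"summary\")."
            ),
        },
    ],
)

# In practice you may need to strip stray code fences before parsing.
outline = json.loads(response.choices[0].message.content)
print(outline["title"])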

Provide examples

Another effective way to enhance the performance of Large Language Models (LLMs) is by providing examples in your prompts. This technique is closely related to concepts like zero-shot, one-shot, few-shot, and n-shot learning, which all involve using examples to guide the AI's responses.

Zero-shot learning involves asking the LLM to respond without providing any examples. This approach is suitable for open-ended questions, such as "Why is December colder?" or tasks like translation, where examples are not necessary.

One-shot learning involves providing the LLM with a single example to guide its response. In certain situations, just one example can significantly improve the outcome. A notable application of this technique is in structured data extraction. When dealing with large amounts of unstructured data, instructions alone might not be enough. However, a single example can help the LLM accurately locate and extract the required information.
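Here's a sketch of what a one-shot extraction prompt can look like; the messages and fields are invented for illustration.

# A one-shot extraction sketch: a single worked example shows the model both
# what to pull out and how to format it. The messages are invented.
prompt = """Extract the customer name, product, and issue from the message.

Example
Message: "Hi, this is Dana Lee. The Fern-Mist 2000 I bought leaks water."
Output: {"name": "Dana Lee", "product": "Fern-Mist 2000", "issue": "leaks water"}

Now do the same for this message:
Message: "Hello, Sam Ortiz here. My GrowLamp Mini flickers after an hour."
Output:"""
# Sent as the user message, the model typically mirrors the example's JSON shape.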

Few-shot learning involves providing an LLM with multiple examples to enhance its performance on a specific task. This technique is particularly useful for tasks like classification, where the model might not have prior exposure to your specific dataset or context. For instance, if you're working with a private knowledge base, providing a few examples can significantly improve the model's accuracy in classification tasks. Few-shot learning also helps when tackling complex problems, guiding the LLM toward more accurate and relevant responses. For example, I use few-shot prompting to refine my answers, showing GPT what a good response looks like so the output squares with my expectations.
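And a corresponding few-shot sketch for classification; the labels and tickets below are placeholders standing in for whatever categories your own data uses.

# A few-shot classification sketch; the labels and tickets are placeholders
# standing in for a private knowledge base.
prompt = """Classify each support ticket as "billing", "shipping", or "product".

Ticket: "I was charged twice this month." -> billing
Ticket: "My order still hasn't arrived." -> shipping
Ticket: "The pot cracked on first use." -> product

Ticket: "The invoice lists a plan I never signed up for." ->"""
# With three labelled examples in place, the model usually answers with just
# the label, in the same style as the examples.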

Other elements of a good prompt (when others will read the response)

While assigning a role, defining a context, and suggesting an output format are generally sufficient for me to obtain a good response, there are additional elements that can be beneficial in certain situations. I prefer to keep things straightforward, but these extra components can be valuable when the response will be read by others.

For instance, if you're using GPT to draft an email to a customer, the default response might lack the desired emotional tone. You might need to deliver a message about a defect in a way that is both apologetic and convincing, and the AI's output won't necessarily strike that note on its own. To address this, I always specify the audience, tone, and style when crafting responses intended for others.

Audience specification is crucial as it allows you to tailor the content to match the audience's level of understanding, interests, and expectations. For example, when I ask the same question but target a different audience, the response from GPT is notably different.

Tone setting involves specifying the emotional tone you want the response to convey. For example, you might request a "professional and authoritative" tone when drafting a message to disagree with a colleague, or a "warm and empathetic" tone when trying to win over a client.

Style setting is similar but focuses on the non-emotional aspects of writing. This could involve asking the AI to use "technical terms and industry-specific language" when communicating with a superior, or to employ "short and simple sentences" when writing for a broader audience.
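Putting audience, tone, and style together, a prompt for the defective-product email mentioned above might look roughly like this; the scenario and wording are invented.

# A sketch combining audience, tone, and style in one prompt; the details are
# invented placeholders for the defective-product email scenario.
prompt = (
    "You're a customer support lead. Write an email to a long-time customer "
    "whose grow light arrived with a cracked lens.\n"
    "Audience: a non-technical home gardener.\n"
    "Tone: warm, apologetic, and reassuring.\n"
    "Style: short, simple sentences; no jargon; under 150 words."
)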

Final thoughts

In conclusion, if there's one skill that can significantly enhance your career prospects, it's mastering prompt engineering. As GenAI becomes increasingly integrated into various aspects of work, understanding how to communicate effectively with these models is crucial.

While advanced techniques like Chain of Thought (CoT) can yield high-quality results, they are often unnecessary for routine tasks. For most everyday applications, simpler yet well-crafted prompts can be just as effective.