Unleashing LLM Potential: Prompt Engineering Techniques for BA Tasks

Karthick TVM
Published in Analyst’s corner · 5 min read · Feb 1, 2024

Photo by Mojahid Mottakin on Unsplash

Large Language Model (LLM)

A language model is a machine learning model that aims to predict and generate plausible language. Autocomplete is a language model, for example. (Reference: Google Dev LLM)

Prompt Engineering

Prompt engineering is the art of asking the right question to get the best output from an LLM. It enables direct interaction with the LLM using only plain language prompts. (Reference: Google Dev Prompt Engineering)

LLMs are part of Generative AI (GenAI), which generates plausible text outputs. GPT is a popular LLM today, and GPT-3.5 is the foundation model for ChatGPT’s free version. Similarly, Gemini is the foundation model for Google Bard.

Besides the myriad of applications being explored, LLMs are being considered as a tool for boosting workforce productivity.

Unlock the power of LLMs

An LLM can assist with the following and more, helping business analysts be quicker and more thorough in their analysis:

  1. Requirements analysis: Analyse user needs, analyse existing systems, and identify potential issues, all with the help of an LLM.
  2. Data analysis: Uncover hidden trends, draw insightful conclusions, and generate reports tailored to your specific needs.
  3. Documentation and report generation: Save time and ensure consistency by using LLMs to draft documents and reports based on your input.
  4. Framing questions: Craft effective survey questions, interview prompts, and user experience design inquiries with the help of LLM suggestions.
  5. Market research: Gain valuable insights into your target audience by analyzing competitor data and social media trends using LLMs.

Learning to prompt an LLM is the way to add GenAI to your toolkit.

Prompting Techniques

A prompt is any input you give an LLM application such as ChatGPT or Bard. Prompt engineering elevates this into an art form for engaging with the LLM, a beautiful mathematical product.

While GenAI can generate responses for almost anything you ask, it has no consciousness or ability to judge whether a response is correct. If you give it bad input, you get bad output. This is where prompt engineering helps: framing questions in a way that draws a better response from the LLM. The more input and context you give the LLM, the more relevant and appropriate its response will be.

Some of the prompting techniques are:

  1. Zero-shot prompting
  2. One-shot prompting
  3. Few-shot prompting
  4. Chain of Thought prompting

You can find more techniques in the Prompting Guide.

Zero-shot prompting

In zero-shot prompting, no context or examples are provided; the LLM offers a plausible response to the bare prompt.

Example: “What is prompt engineering?”

ChatGPT: “Prompt engineering involves crafting specific instructions or queries to obtain desired responses from natural language processing models like GPT-3. It focuses on optimizing prompts to achieve accurate and relevant outputs. This iterative process helps users refine and fine-tune interactions with the model for various tasks.”
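Programmatically, a zero-shot prompt is just the bare question with nothing attached. A minimal sketch in Python (the `build_zero_shot` helper and the chat-message shape are illustrative assumptions, not a specific vendor API):

```python
def build_zero_shot(question: str) -> list[dict]:
    """Build a chat-style message list holding only the bare question:
    no examples, no extra context."""
    return [{"role": "user", "content": question}]

messages = build_zero_shot("What is prompt engineering?")
# -> [{"role": "user", "content": "What is prompt engineering?"}]
```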

One-shot prompting

In one-shot prompting, a single piece of context is provided to the LLM. The LLM adapts its response to that context, which can be anything: your background, your need, or the formality, length, or tone of the response.

Example: “What is prompt engineering? Can you explain it like Donald Trump?”

ChatGPT: “Prompt engineering, folks, it’s tremendous! It’s all about asking the perfect questions to get the best results from these language models. You’re like a conductor, orchestrating the perfect symphony of words to make these machines sing your tune. You ask, they deliver — it’s that simple, and let me tell you, it’s fantastic. We’re talking about optimizing your questions, making them great, so you get the most tremendous responses. That’s what prompt engineering is all about — winning with words!”
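In code, one-shot prompting simply attaches that single piece of context to the question. A minimal sketch (the helper name and message shape are assumptions, mirroring the example above):

```python
def build_one_shot(question: str, context: str) -> list[dict]:
    """Attach one piece of context (audience, tone, format, persona)
    to the question before sending it to the model."""
    return [{"role": "user", "content": f"{question} {context}"}]

messages = build_one_shot(
    "What is prompt engineering?",
    "Can you explain it like Donald Trump?",
)
```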

Few-shot prompting

In few-shot prompting, you provide the LLM with more context: several examples or detailed instructions within the prompt itself. LLMs are trained to be generic; providing detailed, example-rich context steers the model toward the response you need without retraining it.

Example: “What is prompt engineering? Give me 2 examples of prompts from the marketing and software development industry. Present the response in a table format with fields: Industry, Sample Prompt, Sample Response. Keep the sample prompt and response to one line.”

ChatGPT: (response shown as a table image in the original article)
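A common way to supply that richer context programmatically is to interleave example prompt/response pairs ahead of the real question, so the model imitates their pattern. A sketch (the helper, the chat-message shape, and the example texts are illustrative assumptions):

```python
def build_few_shot(question: str, examples: list[tuple[str, str]]) -> list[dict]:
    """Interleave example (prompt, response) pairs before the real question,
    so the model can copy their pattern in its answer."""
    messages = []
    for example_prompt, example_response in examples:
        messages.append({"role": "user", "content": example_prompt})
        messages.append({"role": "assistant", "content": example_response})
    messages.append({"role": "user", "content": question})
    return messages

examples = [
    ("Industry: Marketing. Write a one-line slogan for a fitness app.",
     "Move more, live more."),
    ("Industry: Software. Write a one-line commit message for a login fix.",
     "Fix null-check in login validation."),
]
messages = build_few_shot(
    "Industry: Retail. Write a one-line return policy.", examples
)
```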

Chain of Thought Prompt

Set an example context to guide the LLM, and then ask your question. The LLM can be guided with a chain of thought, a worked example that shows its intermediate reasoning steps, to solve the problem.

Image Credit: Prompting Guide — cot
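The same idea can be sketched in code: prepend a worked example whose answer spells out its reasoning, then ask the new question. (The helper and the example text below are invented for illustration.)

```python
# A worked example whose answer shows its reasoning step by step.
COT_EXAMPLE = (
    "Q: A team finishes 4 user stories per sprint. "
    "How many sprints to finish 12 stories?\n"
    "A: 12 stories / 4 stories per sprint = 3 sprints. The answer is 3.\n\n"
)

def build_chain_of_thought(question: str) -> str:
    """Prepend the reasoning-rich example so the model is nudged to
    reason step by step on the new question too."""
    return COT_EXAMPLE + f"Q: {question}\nA:"

prompt = build_chain_of_thought(
    "A team finishes 5 user stories per sprint. How many sprints for 20 stories?"
)
```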

Analyse a feature using ChatGPT

This is an example of how I prompted ChatGPT to analyse a feature and identify the acceptance criteria for a user story. I engaged ChatGPT in a conversational style to set the context before generating the acceptance criteria. Although a single few-shot prompt can work, building the context step by step lets you validate and correct ChatGPT’s understanding as you go, rather than providing all the context in one prompt and wondering where it went wrong.

Click here to access an example in ChatGPT with few shot prompting.

Click here to access the example in ChatGPT in a conversation style using multiple prompting techniques. Here I corrected the context and provided more context during the conversation before identifying the acceptance criteria.
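The step-by-step approach can be represented as a growing message list, where context is added and corrected turn by turn before the final ask. A hypothetical sketch (the feature and the wording of each turn are invented for illustration):

```python
# Build the conversation turn by turn; each turn can add or correct context.
conversation: list[dict] = []

def add_turn(role: str, content: str) -> None:
    """Append one chat turn to the running conversation."""
    conversation.append({"role": role, "content": content})

add_turn("user", "You are helping a business analyst. The feature is a password reset flow.")
add_turn("user", "Correction: the feature is a passwordless login flow using email magic links.")
add_turn("user", "Personas: registered customers on web and mobile.")
add_turn("user", "Now identify the acceptance criteria for the user story.")
```

Each correction turn fixes the model’s context before the final request, which is the point of the conversational style.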

Note:

  1. Do not share sensitive or confidential data with any LLM. Use them prudently.
  2. The generated response changes every time, even for the same prompt.
  3. The prompting techniques listed here work well for GPT. Every foundation model is different, and their effectiveness varies across other models.
  4. BA tools are already available in the premium version of ChatGPT; I haven’t tried them yet. The aim of this article is to help you learn prompt engineering to boost productivity even on the free versions.
