Prompt Engineering Techniques
Hi, Namaste, Hello, Hola, Kon’nichiwa 😄
It’s been a while since I posted an article, and since life is boring right now, I thought of writing something. Well, let me rephrase that: life hasn’t been exactly boring. I am observing every nerd in town getting entangled with Generative AI.
So, I have also been trying to learn about it in bits and pieces as I get the chance. It turns out, Prompt Engineering is a very big part of utilising any of these Generative AI tools. So, in this article, I will try to share some techniques and methods that I have learnt from various sources.
Honouring the age-old tradition of introductions, let’s start with a brief introduction of these tools.
Introduction to Generative AI Based Tools
Artificial Intelligence (AI) is a field of study that focuses on developing computer systems that can learn, reason and act in ways that would normally require human intelligence. This is a definition generated by an AI model, by the way (talk about being self-aware). A subfield of AI is Machine Learning, which contains another subfield called Deep Learning that encapsulates Generative AI. Generative AI tools learn from large amounts of training data, drawn primarily from the internet, and generate output based on it when asked about a particular topic.
Deployment Parameters
Before we dive into the techniques of prompt engineering, let’s consider some factors that come into play when the foundational model itself is deployed. These deployment parameters can affect the way an AI application responds to a great degree.
- System Prompts: These prompts are defined by the developer of an AI application built on top of a foundational model. They give the model a description of the role and the functions that the application needs to perform. An example of a system prompt can be: “You are a customer assistance chatbot for a sports goods ecommerce website. Your job is to recommend the best products to visitors according to their requirements. Also, you have to be persuasive enough to turn them into customers.”
- Temperature: Temperature defines the level of creativity of the model, typically taking a value from 0 to 1. The higher the temperature, the more creative the response.
- Top P: Top P (also called nucleus sampling) restricts the model to the smallest set of probable words whose cumulative probability reaches P while generating a response. It again takes a value from 0 to 1. A lower Top P value means a more conservative, more coherent response, whereas a higher Top P value means a more creative, less coherent response.
- Top K: Top K similarly determines the absolute number of most probable words that the model can choose from while generating its response. It can take a value from 0 to 500+ depending on the model that you are using. As with Top P, a lower Top K means a more conservative and coherent response, and vice versa.
- Length: The Length attribute determines the maximum length of the output that you want the model to generate. The value again depends upon the model and subscription that you are using.
- Stop Sequences: Stop sequences are tokens or phrases that you define to tell the model to stop generating output for a given prompt. For example, if you are asking a model for a program and have asked it to add comments as well, a possible stop sequence can be “Program Ends”.
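To make these parameters concrete, here is a toy Python sketch. It is not a real model, just an illustrative softmax over four made-up words, showing how temperature, Top K, and Top P each reshape the distribution the model samples its next word from:

```python
import math

def sample_distribution(logits, temperature=1.0, top_k=None, top_p=None):
    """Turn raw word scores (logits) into a filtered probability
    distribution, mimicking how temperature, Top K and Top P work."""
    words = list(logits.keys())
    # Temperature rescales the logits: lower -> sharper, higher -> flatter.
    scaled = [s / temperature for s in logits.values()]
    # Numerically stable softmax.
    max_s = max(scaled)
    exps = [math.exp(s - max_s) for s in scaled]
    total = sum(exps)
    probs = sorted(zip(words, (e / total for e in exps)),
                   key=lambda wp: wp[1], reverse=True)
    # Top K keeps only the K most likely words.
    if top_k is not None:
        probs = probs[:top_k]
    # Top P keeps the smallest set of words whose cumulative
    # probability reaches P.
    if top_p is not None:
        kept, cum = [], 0.0
        for w, p in probs:
            kept.append((w, p))
            cum += p
            if cum >= top_p:
                break
        probs = kept
    total = sum(p for _, p in probs)  # renormalise what survived
    return {w: p / total for w, p in probs}

logits = {"cat": 4.0, "mouse": 3.0, "cheese": 1.0, "piano": 0.1}
print(sample_distribution(logits, temperature=0.5, top_k=2))
```

Playing with the arguments makes the trade-offs visible: a low temperature concentrates almost all probability on the top word, while a small Top K or Top P simply removes the unlikely words from consideration.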
Prompt Engineering
Prompt engineering refers to the technique of writing refined prompts to ensure that you get the most accurate and best possible output from a GenAI tool.
It involves developing, designing, and optimising prompt instructions to include as many details and instructions as possible. There are various techniques that can be used to enhance your prompts, but there are some baseline measures that you should include in all your prompts.
Prompt Enhancement
There are some factors that you can include in all your prompts to ensure that the prompt delivers the most desirable output. The key factors are:
- Objective: This factor refers to telling the model the end goal that you want to achieve with the help of the prompt.
- Instructions: This includes describing the task that you want the model to do in as much detail as possible. Suppose there are several methods to achieve an objective. In that case, it will be helpful to spell out the preferred method that you would want the model to take in that scenario.
- Context: Adding context involves detailing out all the circumstances around the objective which need to be considered while completing the task.
- Input Data: This includes any specific data that you would want to be referenced while completing the task.
- Example: Any example related to the objective that you would want the model to reference while generating the output.
- Output Indicator: This involves sharing details about the attributes of the output that you want the model to consider. For example, you can share details about the format (paragraphs or bullet points), the length of the output, and any conditions to take care of while generating it.
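To see how these factors fit together, here is a small helper (a hypothetical sketch, not from any library) that assembles them into a single prompt string and skips whichever optional sections you leave out:

```python
def build_prompt(objective, instructions, context=None,
                 input_data=None, example=None, output_indicator=None):
    """Assemble the six enhancement factors into one prompt,
    omitting any optional section that was not provided."""
    sections = [
        ("Objective", objective),
        ("Instructions", instructions),
        ("Context", context),
        ("Input data", input_data),
        ("Example", example),
        ("Output format", output_indicator),
    ]
    return "\n\n".join(f"{title}: {body}" for title, body in sections if body)

prompt = build_prompt(
    objective="Recommend a badminton racket.",
    instructions="Compare weight and balance before recommending.",
    context="The buyer is a beginner on a tight budget.",
    output_indicator="Answer in three bullet points.",
)
print(prompt)
```

Even this simple structure forces you to think about each factor explicitly instead of burying everything in one run-on sentence.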
We will further consider these factors in detail while walking through an example at the end.
Some Refined Prompting Techniques
We will take the example of a scenario where we want to write stories about Tom and Jerry using an AI chatbot. Some common techniques that you can use to write better prompts, and how they differ for this task, are as follows:
Zero Shot Prompting
This involves giving the least amount of detail to the chatbot. We don’t provide any additional examples or context to the underlying model. An example in this scenario can be:
“Write a story about Tom and Jerry.”
Few Shots Prompting
Now let’s assume that the model generates a story where Tom and Jerry are high school friends: they meet, share a slice of pizza, chat, and then happily go their separate ways. However, that’s not the kind of story we were expecting. In this scenario, we can share some examples and additional context to get a more refined story, which makes this few-shot prompting. An example prompt can be:
“Write a story about Tom and Jerry.
Tom is a cat. Jerry is a mouse.
Tom always chases Jerry.”
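Few-shot prompts are commonly formatted as worked input/output pairs followed by the real query, so the model can imitate the pattern. A small illustrative helper (hypothetical, not from any library) that builds such a prompt:

```python
def few_shot_prompt(task, examples, query):
    """Prefix the real query with worked input/output examples
    so the model can infer the expected pattern."""
    lines = [task, ""]
    for example_input, example_output in examples:
        lines.append(f"Input: {example_input}")
        lines.append(f"Output: {example_output}")
        lines.append("")
    # The real query comes last, with the output left for the model.
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

print(few_shot_prompt(
    "Describe the character in one line.",
    [("Tom", "Tom is a cat who always chases Jerry."),
     ("Jerry", "Jerry is a clever mouse who always escapes.")],
    "Spike",
))
```

The ending `Output:` line is a small but useful trick: it signals to the model exactly where its completion should begin.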
Negative Prompting
Few-shot prompting gave examples to the model, but the chatbot got creative, let Tom win the chase, and ended the storyline there. Now, let’s say you are not happy about that. In that scenario, you can define restrictions in the form of negative conditions, telling the model what not to include. This technique is called Negative Prompting. An example prompt in this scenario can be:
“Write a story about Tom and Jerry.
Tom is a cat. Jerry is a mouse.
Tom always chases Jerry.
Don’t let Tom win.”
Chain of Thought Prompting
You can get a greatly refined story with the above techniques. Let’s imagine that the model generates a story where Tom chases Jerry because Jerry disturbed Tom’s sleep. However, you don’t want that to be the reason for their cat-and-mouse chase. In this scenario, you can include a sequence of steps in your prompt. An example prompt can be as follows:
“Write a story about Tom and Jerry.
Tom is a cat. Jerry is a mouse.
Tom was eating biscuits.
In the meantime, Jerry sneaked and stole cheese.
Tom noticed.
Tom chases Jerry.
Jerry escapes cleverly.”
Role Based Prompting
Suppose you have generated a story, but you don’t like the narration style. Maybe the model describes the story as if it can see everything, but you want it to narrate as an observer watching from a window. In that scenario, you can tell the model the role that you want it to play. An example in this scenario can be:
“You are a stranger observing Tom and Jerry from a bedroom window and describing their chase around the room.
Tom is a cat. Jerry is a mouse.
Tom was eating biscuits.
In the meantime, Jerry sneaked and stole cheese.
Tom noticed.
Tom chases Jerry.
Jerry escapes cleverly.”
Interactive Prompting
Now let’s say you have the storyline, and you want to present this story in front of an audience. In this scenario, you may want to know which questions could be asked about it. You can ask a follow-up prompt and have a dialogue about the story with the model, which behaves as a listener. At that point, you are using Interactive Prompting. An example is as follows:
… Story generated. …
“You are a member of the audience to whom I am narrating the story from the last prompt. Ask me questions based on the story.”
Template Prompting
Now suppose the model generates the questions, but they are very open-ended and you are not able to reply consistently. You want the questions to be posted on a blog along with answers in a particular format. In that scenario, you can share a template for the answers and ask the model to generate relevant answers as a follow-up to the questions it generated. This technique is called Template Prompting. An example can be:
… Questions Generated …
“Please share the answers for the above questions and create a table describing the following components for the answer:
Why? Reason for this Particular Answer | Relevance from Storyline | Alternative Explanations”
Multi Turn Prompting
You have the story, but you want to know more about why it was generated that way. Let’s say the model generates a storyline in which Jerry hides under a cup while Tom keeps looking for him. You can ask follow-up questions about it. This technique is called Multi-Turn Prompting. For example,
… Story Generated …
Prompt 1: “Why did Jerry hide only under the cup for escaping?”
Prompt 2: “Why didn’t Tom look under the cup?”
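Multi-turn prompting relies on the full conversation history being carried along with each follow-up. A sketch in the role-tagged message format most chat-style APIs use, with the model's replies stubbed out rather than fetched from a real service:

```python
# A minimal conversation log: each entry records who spoke ("user" or
# "assistant") and what was said, so later turns can reference earlier ones.
history = [
    {"role": "user", "content": "Write a story about Tom and Jerry."},
    {"role": "assistant", "content": "...story where Jerry hides under a cup..."},
]

def ask_follow_up(history, question, reply):
    """Append a follow-up question and the model's (stubbed) reply,
    keeping the whole history so context is never lost."""
    history.append({"role": "user", "content": question})
    history.append({"role": "assistant", "content": reply})
    return history

ask_follow_up(history, "Why did Jerry hide only under the cup for escaping?",
              "...stubbed model answer...")
ask_follow_up(history, "Why didn't Tom look under the cup?",
              "...stubbed model answer...")
print(len(history))  # 6 messages: the story turn plus two follow-up turns
```

In a real application you would send the entire `history` list to the model on every turn; the model itself is stateless, and the growing message list is what makes the second question understandable without restating the story.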
Self Consistency Prompting
Let’s say you have the story and the answers to your follow-ups, but you want to be a bit more creative and explore options. In that scenario, you can ask the model to generate the story in multiple ways (like different ways for Jerry’s escape) while staying consistent with itself. This technique is called Self-Consistency Prompting. An example of this technique is as follows:
… Story Generated …
“List down five alternate ways that Jerry can use to escape from Tom, then explain which one will be the best considering the creativity of the story.”
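For comparison, in the research literature self-consistency usually means sampling the model several times on the same prompt and keeping the most common answer. A toy sketch of that voting idea, with a hypothetical stubbed model standing in for real sampled responses:

```python
from collections import Counter

def stub_model(prompt, sample_id):
    """Hypothetical stand-in for a model sampled with high temperature:
    different sample_ids pretend to be different random completions."""
    canned = ["hide under a cup", "run out the door", "hide under a cup",
              "hide under a cup", "climb the curtains"]
    return canned[sample_id % len(canned)]

def self_consistent_answer(prompt, n_samples=5):
    """Sample the model several times and keep the most frequent answer."""
    answers = [stub_model(prompt, sample_id=i) for i in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistent_answer("How does Jerry escape?"))
```

The intuition is that a single creative sample may be an outlier, but the answer that recurs across several independent samples is the one the model is most consistent about.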
Hypothetical Prompting
Now, you have the story, you have answers, and you have also evaluated possible alternatives. However, you still want to test your creativity. In that case, you can ask the model to generate the story based on hypothetical situations, using “What if” or other speculative phrases. This is called Hypothetical Prompting:
… Story Generated …
“What if Jerry was ill, unable to run, and couldn’t find a good place to hide? Write a story where a clever Jerry escapes after being caught by Tom.”
So, these are some of the common prompting techniques that you can use to generate a much more refined output and create a Tom and Jerry story that will be loved by all.
Now, let’s move on to a much more technical example.
Shift Left Security - Prompting Example
Remember when, while describing Prompt Enhancement, I mentioned an example to come later? Let’s consider that example now.
The scenario: I want to pitch the ten most important benefits of shift-left security processes to a group of early-career developers. We need to consider the factors for writing enhanced prompts (Objective, Instructions, Context, Input Data, Example, and Output Indicator) and write the best prompt using any of the above-mentioned prompting techniques.
An example prompt can be as follows:
I want to pitch the benefits of Shift Left Security practices to early career developers who have one or two years of experience. A number of them have been onboarded to our team recently and they have reached out to learn about the benefits of shift left security practices. They already have a basic understanding of the Secure Development Lifecycle and related practices. For example, they have recently developed a customer service request ticket tracking system. Their day-to-day development practices also involve using third party components and their integration. Generate as many benefits as possible and list down the ten most important ones in the form of a bulleted list. Each benefit should follow the template of a title followed by a brief description of about three or four lines. Write the benefits from the point of view of an engineering manager whom they report to. At the end, also write a small paragraph (not more than 400 words) about what happens if Shift Left Security practices are not implemented and followed in an organization. Additionally, keep the language formal and technical and avoid any informal phrases or slang.
While the above prompt may not be perfect in itself, it contains far more information than a simple prompt like “List down the benefits of shift left security.” You are free to improve it further. Also, try to analyse the various factors I took care of while writing the prompt, the techniques I used, and what I missed.
At the end, for me, ChatGPT’s response to the above prompt was as follows (read up on Shift Left Security as well):
Certainly. Below is a formal and technically oriented summary that includes ten key benefits of implementing Shift Left Security practices, written from the perspective of an Engineering Manager addressing early-career developers. The section is followed by a concise analysis of the risks associated with not adopting such practices.
🔐 Top 10 Benefits of Shift Left Security Practices
- Early Detection of Vulnerabilities By integrating security checks in the initial phases of development—such as during code writing and unit testing—security flaws are identified sooner. This reduces the risk of major vulnerabilities propagating into production environments.
- Reduction in Remediation Costs Fixing security defects early in the SDLC is significantly less expensive than post-deployment remediation. Integrating static analysis, dependency scanning, and policy enforcement tools during coding can avoid expensive downstream fixes.
- Enhanced Code Quality Embedding security concerns in the development phase enforces better coding standards. Secure coding practices often overlap with robust software engineering practices, leading to overall higher code quality and maintainability.
- Faster Development Cycles Proactively addressing security concerns minimizes the number of rework cycles needed later. This results in more predictable and streamlined development timelines without last-minute delays due to security bottlenecks.
- Improved Developer Autonomy Providing developers with tools and training to address security issues early empowers them to resolve common vulnerabilities without always relying on specialized security teams, fostering independence and ownership.
- Secure Integration of Third-party Components When security scanning is built into dependency management and CI/CD pipelines, vulnerabilities introduced via third-party libraries or APIs can be detected and mitigated at the point of integration.
- Reduced Compliance and Audit Risk Automated enforcement of secure development policies aids in meeting industry and regulatory compliance standards (e.g., ISO 27001, SOC 2, GDPR). This reduces exposure during audits and simplifies reporting.
- Continuous Feedback and Learning Developers receive real-time security feedback through tools such as static application security testing (SAST), which facilitates continuous learning and adaptation of secure coding practices.
- Stronger DevSecOps Alignment Shift Left principles inherently align development, security, and operations teams. This fosters better collaboration, shared responsibility, and unified security ownership across the SDLC.
- Competitive Advantage and Customer Trust Delivering secure products builds customer trust and enhances brand reputation. Early integration of security reduces the risk of breaches, downtime, and data loss—key factors in retaining client confidence.
📉 Risks of Not Implementing Shift Left Security Practices
Failing to adopt Shift Left Security practices exposes the organization to several operational, financial, and reputational risks. Security vulnerabilities, when detected late in the development cycle or post-deployment, require considerable resources to remediate and often cause delays in release timelines. The absence of early security gates can lead to insecure third-party integrations, unmonitored code injection points, and non-compliance with data protection standards. These risks are especially critical in systems like customer service platforms, where sensitive user data is processed. Furthermore, without a Shift Left approach, developers become heavily reliant on downstream security reviews and external interventions, resulting in silos, knowledge gaps, and missed learning opportunities. This also weakens DevSecOps alignment, as security becomes a reactive rather than proactive function. Ultimately, this fragmented model increases the likelihood of security breaches, financial penalties due to non-compliance, and erosion of customer trust. Implementing Shift Left Security is not merely a technical choice, but a strategic imperative for secure, scalable, and agile software delivery.
Let me know if you’d like to include a slide deck summary or visuals to assist in presenting this to your team.
For now, this wraps up this article. Will keep updating this blog with more tips.
Suggestions are welcome.
Happy & Secure Development.