What is an AI prompt and what are its components?

AI prompts

AI has spawned several buzzwords, the most important being the AI prompt. Everyone wants to know what an AI prompt is, and how it should be framed to get the desired result.

Let’s start with the definition first. An AI prompt is the instruction given by a user to an AI application like ChatGPT or Gemini to generate a response or perform a particular task.

The AI applications, which are built on large language models (LLMs), parse the prompt to extract the task or question asked by the user. They also note any relevant context or constraints the user has placed in the prompt.

Depending on the task, the LLM may either generate a response from scratch or retrieve relevant information from its training data or external sources. This data is converted into natural language and presented as the output to the user.

It is possible that the user is not satisfied with the response. In such cases, the user can modify the prompt by adding fresh context and making it more specific. The AI application will then refine its response to meet the information needs of the user.
This process of repeatedly updating or refining the prompt is called prompt engineering.
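The refinement loop above can be sketched in a few lines of Python. Here `generate` is a hypothetical stand-in for a call to an LLM application such as ChatGPT or Gemini; a real program would call that service's API instead.

```python
def generate(prompt: str) -> str:
    """Hypothetical LLM call; a real app would query ChatGPT, Gemini, etc."""
    return f"[model response to: {prompt}]"

# First attempt: a vague prompt.
prompt = "Summarize the article."
response = generate(prompt)

# Not satisfied? Refine the prompt with fresh context and constraints.
prompt = (
    "Summarize the article below in three sentences "
    "for a non-technical reader.\n\n"
    "Article: ..."
)
response = generate(prompt)
```

Each pass through this loop, judging the response and tightening the prompt, is prompt engineering in miniature.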

Depending on the LLM, the application can generate text, code, images, videos, music, illustrations, etc.

Structure of a prompt

It is important to understand the structure of a prompt if you want to generate the best results for your query.

The structure can vary depending on the specific task and the LLM you are using, but there are some common elements that you will see frequently.

1. Task / Instruction: This is the most important part of the prompt and explains what the model is supposed to do. It can be a question, a command, or a statement outlining the task at hand.

Function: Tells the LLM what specific action you want it to perform. Examples:

  • “Translate the following text into Hindi.” (This instructs the model to translate the text into Hindi.)
  • “Complete the code to sort the given list in ascending order.” (This directs the model to write code that sorts the list in ascending order.)
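For the second example, the model's completed answer might look like the following Python sketch (the function name is illustrative):

```python
def sort_ascending(items):
    """Return a new list sorted in ascending order."""
    return sorted(items)

print(sort_ascending([3, 1, 2]))  # → [1, 2, 3]
```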

2. Context or Constraints (Optional): This sets the scene for the LLM by providing background information or relevant details. It can include things like a story snippet, character descriptions, or a specific situation.

Function: Sets the stage and provides background information for the LLM. Examples:

  • “In a world where robots coexist with humans, a young programmer named Bard discovers a hidden code…” (This sets the context for a science fiction story).
  • “Given the following passage about climate change…” (This sets the context for an article on the environment.)
  • “Given the following conversation between two characters, continue with what the third character might say” (This gives the LLM a hint as to what to write.)

3. Few-Shot Learning (Optional): Sometimes providing a few examples can help guide the LLM towards the desired output format or style. This is particularly useful for creative tasks like writing different kinds of content. These examples can be in the form of input-output pairs or just sample outputs.

Function: Provides samples to guide the LLM towards the desired format or style. Examples:

  • “Here are some greetings a customer service agent might use: ‘Hello, how can I help you today?’ or ‘Welcome to our store!’” (This gives the LLM examples of customer service greetings.)
  • “Here are some examples of correct translations: [Example 1], [Example 2], [Example 3].” (This shows the LLM the pattern its output should follow.)
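Few-shot prompts are often assembled programmatically from input-output pairs. A minimal sketch, with made-up translation pairs for illustration:

```python
# Example input-output pairs to show the model the desired pattern.
examples = [
    ("Good morning", "Bonjour"),
    ("Thank you", "Merci"),
]

lines = ["Translate English to French."]
for english, french in examples:
    lines.append(f"English: {english}\nFrench: {french}")

# End with the new input, leaving the output for the model to fill in.
lines.append("English: See you soon\nFrench:")

prompt = "\n\n".join(lines)
print(prompt)
```

The trailing, unanswered pair cues the model to continue the pattern.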

4. Question (Optional): This is a popular form of prompt and is used to obtain direct answers. It is important to phrase the question clearly and provide context, if needed, for understanding.

Function: The prompt directs the LLM to find the answer. Examples:

  • “What is the capital of France?” (This is a straightforward question for the LLM to answer).
  • “When did India attain independence?” (The model will respond with the correct year.)

5. Role (Optional): You can specify a role for the LLM to take on, like a teacher explaining a concept, a customer service representative answering questions, or a creative writer composing a story.

Function: Specifies a persona for the LLM to adopt during its response. Example:

  • “As a doctor, explain the symptoms of a common cold.” (This instructs the LLM to answer from the perspective of a medical professional).

6. Output Format (Optional): In some cases, prompts specify the desired format or structure of the output. This ensures that the model’s response meets certain criteria.

Function: The prompt defines the style of the response. Examples:

  • “Provide the answer in bullet points.” (This tells the model how the output should be presented.)
  • “Make sure the output is written in formal language.” (This tells the model what the tone of the output should be.)

These elements can be mixed and matched depending on the specific task and the desired outcome. Note that you don’t need to include every element in every prompt. Focus on clarity and provide the information most relevant to the task. As you gain experience, try different structures to see what works best for the specific LLM and task.
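The mix-and-match idea can be sketched as a small prompt builder. The function name, parameters, and wording below are illustrative, not a standard:

```python
def build_prompt(task, context=None, role=None, output_format=None):
    """Assemble a prompt from optional elements; only the task is required."""
    parts = []
    if role:
        parts.append(f"You are {role}.")
    if context:
        parts.append(f"Context: {context}")
    parts.append(task)
    if output_format:
        parts.append(f"Format: {output_format}")
    return "\n\n".join(parts)

prompt = build_prompt(
    task="Explain the symptoms of a common cold.",
    role="a doctor",
    output_format="bullet points",
)
print(prompt)
```

Omitting an argument simply drops that element, mirroring the advice that most elements are optional.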

Read also:
What is Generative AI, how it works and how you can use it
11 reasons to switch to AI tools to generate content

About Sunil Saxena
Sunil Saxena is an award winning media professional with over four decades of experience in New Media, Social Media, Mobile Journalism, Print Journalism, Media Education and Research.
