How to write conversational prompts

Understanding conversational prompting

Conversational prompting is a powerful technique for content writing. Unlike instructional or analytical prompting, it relies on an ongoing back-and-forth between user messages and the AI’s own responses, which helps refine tone, style, and accuracy over multiple turns.

Recommended writing models

To get started with conversational prompts, use one of these models:

  • Anthropic’s Claude Sonnet or Claude Opus
  • Google’s Gemini 2.5 Pro
  • OpenAI’s GPT-4

Core components of a conversational prompt

Every conversational prompt is built from three types of messages:

  • System message: Sets global instructions or guidelines.
  • User message: The human’s request or input.
  • Assistant message: The AI’s response, which can then be reused as context.

Leveraging the assistant message enables models to build on their own previous replies and improve subsequent outputs.
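
To make the three roles concrete, here is a minimal Python sketch using the OpenAI chat completions client; the client setup, model name, and message text are illustrative assumptions rather than AirOps-specific code:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A conversational prompt is an ordered list of messages, one entry per role.
messages = [
    # System message: global instructions or guidelines
    {"role": "system", "content": "You are a social media copywriter for Ramp."},
    # User message: the human's request
    {"role": "user", "content": "Write a short LinkedIn post announcing a new partner."},
    # Assistant message: a previous AI reply, reused here as context
    {"role": "assistant", "content": "Big news: Ramp is teaming up with a new partner..."},
    # A follow-up user message that builds on that earlier reply
    {"role": "user", "content": "Good start - tighten it to two sentences."},
]

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name; use whichever model you prefer
    messages=messages,
)
print(response.choices[0].message.content)
```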

Extending the conversation

Unlike a single-response prompt, conversational prompting continues beyond the model’s first answer. This extension can:

  • Automate an entire dialogue that would otherwise happen interactively
  • Guide the AI toward a more polished final result

Technique 1: User-Assistant Pairs (Few-Shot Prompting)

This method provides one or more examples of desired inputs and outputs before requesting the final content.
Steps:

  • User message: Original request (e.g., “Please write a social post for Ramp about CoreTrust as a new partner.”)
  • Assistant message: Example response illustrating tone and style
  • User message: Actual request (e.g., “Now please give me another post in the same tone, but write about Ramp’s new $16 billion valuation.”)

Providing that example reduces hallucinations and ensures consistent voice. Without it, the model may invent details or shift style dramatically.
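
A rough Python sketch of those steps, reusing the client from the earlier example; the CoreTrust post shown as the example assistant message is invented purely to illustrate tone:

```python
# Few-shot prompting: show an example request/response pair, then ask for the real content.
few_shot_messages = [
    {"role": "system", "content": "You write social posts in Ramp's brand voice."},
    # Example pair demonstrating the desired tone and style
    {"role": "user", "content": "Please write a social post for Ramp about CoreTrust as a new partner."},
    {"role": "assistant", "content": "We're thrilled to welcome CoreTrust as a Ramp partner, "
                                     "giving finance teams even more ways to save time and money."},
    # The actual request, which should come back in the same voice
    {"role": "user", "content": "Now please give me another post in the same tone, "
                                "but write about Ramp's new $16 billion valuation."},
]

response = client.chat.completions.create(model="gpt-4o", messages=few_shot_messages)
print(response.choices[0].message.content)
```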

Technique 2: Chain-of-Thought Prompting

Chain-of-thought prompting asks the model to reason or outline its thinking before producing final content. It can also be treated as an ongoing conversation:

  • Copy the assistant’s previous response into a new assistant message.
  • Add a follow-up user message requesting modifications (for example, “That was great, but it doesn’t reflect Ramp’s tone and style enough. Please try again.”).

This lets the AI refine its work in successive iterations.
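
Continuing the sketch from Technique 1, one hypothetical way to express that follow-up turn in code (the feedback wording comes straight from the example above):

```python
# Feed the model's own draft back in as an assistant message,
# then ask for a revision in a new user message.
previous_draft = response.choices[0].message.content

revision_messages = few_shot_messages + [
    {"role": "assistant", "content": previous_draft},
    {"role": "user", "content": "That was great, but it doesn't reflect Ramp's tone and style enough. Please try again."},
]

revised = client.chat.completions.create(model="gpt-4o", messages=revision_messages)
print(revised.choices[0].message.content)
```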

Modifying and controlling output

By alternating user messages (feedback or new instructions) with assistant messages (previous AI responses), it’s possible to:

  • Correct mistakes
  • Adjust tone and style
  • Guide the model toward a higher-quality result

Manipulating both user and assistant messages effectively “puts words into the model’s mouth” and yields better content than a single, one-and-done prompt would.
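
As a loose sketch, the same alternation can be wrapped in a small loop; the refine helper and the feedback strings below are hypothetical placeholders for real editorial notes, and the code reuses the client and messages from the earlier examples:

```python
def refine(messages, feedback_rounds, model="gpt-4o"):
    """Alternate model drafts with user feedback to steer the final output."""
    for feedback in feedback_rounds:
        draft = client.chat.completions.create(model=model, messages=messages)
        content = draft.choices[0].message.content
        # Append the draft as an assistant message, then the next round of feedback.
        messages = messages + [
            {"role": "assistant", "content": content},
            {"role": "user", "content": feedback},
        ]
    # One final generation after all feedback has been applied
    final = client.chat.completions.create(model=model, messages=messages)
    return final.choices[0].message.content

final_post = refine(
    few_shot_messages,
    ["Shorten it to two sentences.", "Mention one concrete customer benefit."],
)
print(final_post)
```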
