How to write analytical prompts
Understanding reasoning models
Reasoning models differ from non-reasoning models by using reasoning tokens to break prompts into multiple internal steps. They can:
- Work through a prompt as a multi-step internal conversation
- Reflect on and verify their own responses
- Carry out additional reasoning under the hood
Because they process more tokens, they tend to be more expensive than standard models.
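Reasoning tokens are billed like output tokens even though you never see them, so it's worth checking how many a given call actually consumed. Here's a minimal sketch, assuming the official openai Python SDK; the model name, reasoning_effort setting, and example question are placeholders:

```python
# Minimal sketch: call a reasoning model and inspect how many reasoning
# tokens it spent. Assumes the official openai Python SDK and an o-series
# model; swap in whichever model and prompt you actually use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3",                 # placeholder reasoning model
    reasoning_effort="medium",  # low / medium / high trades depth for cost
    messages=[{"role": "user", "content": "Why might our churn rate rise next quarter?"}],
)

usage = response.usage
print("visible answer:", response.choices[0].message.content[:200])
print("completion tokens:", usage.completion_tokens)
# Reasoning tokens are hidden from the response but still billed as output.
print("of which reasoning:", usage.completion_tokens_details.reasoning_tokens)
```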
Selecting the right reasoning model
Choose based on task complexity and cost considerations:
- o1 pro: highest performance, ~$150 per million input tokens
- o1: top-tier reasoning, ~$15 per million input tokens
- o3: powerful yet cost-efficient, ~$2 per million input tokens
- o4-mini: balanced option for medium-to-hard tasks
- Claude 4 with extended thinking: strong reasoning capabilities, still experimental
Stay cost-conscious, especially with the newest models: reasoning consumes extra tokens, so bills climb quickly on large workloads.
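To keep that trade-off concrete, a rough back-of-the-envelope estimate helps before committing to a model. The sketch below assumes a single flat per-million-token rate (real pricing usually splits input and output tokens) and uses the approximate figures listed above:

```python
# Rough cost comparison using the approximate per-million-token prices above.
# Assumes one flat rate per model; real pricing separates input and output
# tokens, so treat this as an order-of-magnitude check only.
PRICE_PER_MILLION = {
    "o1 pro": 150.0,
    "o1": 15.0,
    "o3": 2.0,
}

def estimated_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Approximate dollar cost of one call at a flat per-token rate."""
    total = prompt_tokens + completion_tokens
    return total / 1_000_000 * PRICE_PER_MILLION[model]

# Same hypothetical task (2k prompt tokens, 8k reasoning + output tokens):
for model in PRICE_PER_MILLION:
    print(f"{model}: ${estimated_cost(model, 2_000, 8_000):.4f}")
```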
Structuring your analytical prompt
Build all prompt components into the user message:
- SOP (Statement of Purpose)
- Context or background information
- Examples to illustrate desired outputs
- Rules or constraints to guide the model
- Output format or schema
This framework provides a solid starting point for most analytical tasks.
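One way to keep those components consistent across prompts is to assemble them programmatically. Below is a minimal sketch; the function name, section labels, and sample text are illustrative, not a fixed convention:

```python
# Minimal sketch: pack all five prompt components into a single user message.
# The labels and sample content are placeholders; substitute your own.
def build_analytical_prompt(sop: str, context: str, examples: str,
                            rules: str, output_format: str) -> str:
    """Concatenate the prompt components in a fixed, clearly labeled order."""
    return "\n\n".join([
        f"## Statement of Purpose\n{sop}",
        f"## Context\n{context}",
        f"## Examples\n{examples}",
        f"## Rules\n{rules}",
        f"## Output format\n{output_format}",
    ])

user_message = build_analytical_prompt(
    sop="Analyze search intent for the keyword 'crm for freelancers'.",
    context="We publish comparison articles for a SaaS review site.",
    examples="Good output: a table of personas with their top questions.",
    rules="Base every claim on the provided SERP data; do not speculate.",
    output_format="Markdown table with columns: persona, intent, key questions.",
)
print(user_message)
```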
Guiding the model with question sequences
For deep, nuanced analysis:
- Lay out a series of questions or logical tasks that force the model to reason step by step.
- Order those questions to mirror your own thought process.
- After the reasoning steps, request the final output or conclusion you need.
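A small helper makes this pattern repeatable. The sketch below numbers the questions in your chosen order and appends the final request; the names are illustrative only:

```python
# Minimal sketch: turn an ordered list of questions into a prompt that asks
# the model to reason through them before producing the final deliverable.
def question_sequence_prompt(questions: list[str], final_request: str) -> str:
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions, start=1))
    return (
        "Work through the following questions in order, answering each "
        "before moving on:\n"
        f"{numbered}\n\n"
        f"Then, using those answers, {final_request}"
    )
```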
Sample: Persona search intent analysis
When performing SERP (search engine results page) research, consider questions like:
- Who are the types of people searching for this keyword?
- What are the most important follow-up questions they expect answered?
- Which pain points are driving their search behavior?
- What gaps or unanswered questions remain in top-ranking articles?
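Put together as one prompt, that sequence might look like the standalone sketch below; the keyword and the final deliverable are placeholders:

```python
# Standalone sketch of a persona search intent prompt built from the
# questions above. The keyword and final deliverable are placeholders.
KEYWORD = "best project management software"

questions = [
    f"Who are the types of people searching for '{KEYWORD}'?",
    "What are the most important follow-up questions they expect answered?",
    "Which pain points are driving their search behavior?",
    "What gaps or unanswered questions remain in top-ranking articles?",
]

numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions, start=1))
user_message = (
    "You are analyzing search intent for a SERP research project.\n\n"
    "Answer each question in order:\n"
    f"{numbered}\n\n"
    "Finally, summarize the dominant search intent and list three content "
    "angles the top-ranking articles are missing."
)
print(user_message)
```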
Through experimentation, you’ll discover the logical sequence you naturally follow and can replicate it in prompts.