
4 New Insights About Anthropic's Claude 3 Model Family

Claude 3 models continue to open up new growth use cases

May 22, 2024
AirOps Team

Claude Opus is now a top 3 model

The world of large language models (LLMs) is moving fast, with the top leaderboard positions changing month to month. One model that has quickly become a fan favorite is Claude Opus from Anthropic. It currently ranks in the top 3 on the LMSys leaderboard with an impressive Elo score of 1246. As we've been helping growth teams build with LLMs at AirOps, we were excited to learn more about Opus and the Claude 3 model family in a recent episode of The Cognitive Revolution podcast.

Source: LMSys Chatbot Arena Leaderboard

4 New Insights about Claude 3 Models

Here are some of the key takeaways from the insightful discussion with Alex Albert, developer relations lead at Anthropic:

1. Extraction with Haiku + Reasoning with Opus > RAG?

Using Haiku (Anthropic's smallest, cheapest model) to extract key information from large datasets, then feeding those extractions into Opus to reason about them, can work better than traditional retrieval-augmented generation (RAG) or embeddings approaches. This could be a powerful technique for use cases like summarizing and featuring relevant customer reviews.
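A minimal sketch of this two-stage pattern using the Anthropic Python SDK. The prompts and the review-featuring task are illustrative, not an official recipe; the model snapshot names are the Claude 3 releases current at the time of writing.

```python
def build_extraction_prompt(review: str) -> str:
    # Stage 1 (Haiku): pull only the fields we care about out of each raw review.
    return (
        "Extract the product mentioned, the overall sentiment "
        "(positive/negative/mixed), and one key quote from this review:\n\n"
        f"{review}"
    )

def build_reasoning_prompt(extractions: list[str]) -> str:
    # Stage 2 (Opus): reason over the compact extractions instead of the raw corpus.
    joined = "\n---\n".join(extractions)
    return (
        "From these review extractions, pick the three most compelling "
        f"reviews to feature on a landing page and explain why:\n\n{joined}"
    )

def extract_then_reason(reviews: list[str]) -> str:
    # Requires the third-party `anthropic` SDK (pip install anthropic)
    # and an ANTHROPIC_API_KEY in the environment.
    import anthropic

    client = anthropic.Anthropic()
    extractions = []
    for review in reviews:
        resp = client.messages.create(
            model="claude-3-haiku-20240307",  # small and cheap: extraction
            max_tokens=200,
            messages=[{"role": "user", "content": build_extraction_prompt(review)}],
        )
        extractions.append(resp.content[0].text)

    resp = client.messages.create(
        model="claude-3-opus-20240229",  # large: reasoning over the extractions
        max_tokens=500,
        messages=[{"role": "user", "content": build_reasoning_prompt(extractions)}],
    )
    return resp.content[0].text
```

The key cost lever is that Opus only ever sees the short extractions, so the expensive model's input stays small no matter how large the review set grows.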

2. Claude 3 Opus shines in creative writing

Opus excels at creative writing and content generation use cases, and is exceptionally good at following instructions for these tasks. Interestingly, Albert emphasized that including writing samples in your prompts makes a bigger difference than many people realize.
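One way to put that into practice is to paste a few real writing samples directly into the prompt rather than describing the style in the abstract. A hedged sketch of such a prompt builder (the tag names and wording are our own, not an official Anthropic template):

```python
def build_style_prompt(samples: list[str], brief: str) -> str:
    """Assemble a prompt that shows the model concrete writing samples
    before asking it to write new content in the same voice."""
    sample_block = "\n\n".join(f"<example>\n{s}\n</example>" for s in samples)
    return (
        "Here are samples of our brand's writing voice:\n\n"
        f"{sample_block}\n\n"
        "Match the tone, vocabulary, and sentence rhythm of those samples "
        f"while completing this brief: {brief}"
    )

# Usage: pass the result as the user message to Opus.
prompt = build_style_prompt(
    samples=["We keep it simple. No jargon, no fluff."],
    brief="Write a 2-sentence product announcement for our new analytics dashboard.",
)
```

Two or three short samples that genuinely sound like your brand usually beat a long abstract style guide.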

3. How to think about the next model

A potential rule of thumb is that the average performance of the next version of a model will be similar to the peak performance you occasionally see from the current best model. An output you only get 1 time in 10 now might show up 5 to 7 times in 10 in the next release. Exciting stuff!

4. 1M Context Length + Fine-Tuning soon

Anthropic is working on expanding Claude's context length to 1 million tokens, a length currently offered by some of Google's Gemini models, and plans to offer custom fine-tuning in the near future. Both could open up a wide range of marketing use cases.

Source: Anthropic

Looking forward

While it's easy to get caught up in the hype around LLMs, the most important thing for marketing teams is to focus on practical applications that leverage their business's unique perspective. At AirOps, we've seen firsthand how models like Claude Opus can raise the bar when it comes to the quality of content creation. If you're not sure where to start, consider booking an intro call with our team. We can help you identify high-impact use cases. With the right strategy and approach, LLMs are a powerful addition to your growth toolkit.
