
ChatGPT was the tip of the spear: Our 2023 generative AI predictions

2022 saw generative AI become mainstream helped by ChatGPT, Jasper AI and DALL·E 2. What comes next?

Published on Jan 17, 2024 by Alex Halliday

In late 2022, we saw the quality and availability of AI models skyrocket as OpenAI made their latest class of APIs available and launched ChatGPT.

Google Trends shows this inflection point well, just look at search interest in the term “OpenAI” between December 2017 and December 2022.

So if 2022 saw generative AI start to enter the mainstream, what can we expect from 2023? 

The AirOps team and I have a few thoughts on that – here are our 11 predictions for generative AI in 2023.

1. New models arrive and everything changes, again

While the Twitter rumor mill pontificates on how big a leap the GPT-4 class of models will be, some things are all but locked in.

First, the models will be trained on a materially larger amount of data (perhaps as much as 10x), which will improve accuracy and knowledge depth, unlocking a long tail of niche applications and higher-ROI use cases. Increased prompt sizes are likely coming too, allowing richer context to be passed in and larger outputs to be produced.

Any one of these vectors of improvement would be significant for the generative AI ecosystem, but together they are seismic and will spawn a new generation of startups tackling increasingly valuable use cases.

These are the outcomes I anticipate in each area:

Greater training context 

Outcome: AI tools can handle more niche prompt challenges; specialist industries can suddenly start to build applications; whole new categories of product emerge.

Greater reliability 

Outcome: The current generation of AI tooling is productized as “assistive,” meaning it can get you 40-70% of the way through your task.

When reliability improves beyond a certain point, there is a shift to truly magic experiences that can handle certain tasks through to completion. 

Larger token capacity 

Outcome: Each generation of GPT model has a token limit that caps the combined length of the input and output (measured in tokens, not characters).

Increased input and output size will allow models to receive dramatically more context and output dramatically more content. This will increase both the complexity and scope of tasks that completions can be applied to. 
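To make the shared budget concrete, here is a tiny sketch. The 4,097-token figure is the commonly cited limit for text-davinci-003; treat the specific numbers as illustrative:

```python
# Sketch of the shared input/output token budget described above.
# 4,097 is the commonly cited limit for text-davinci-003; the prompt
# size below is an illustrative assumption.

def max_output_tokens(model_limit: int, input_tokens: int) -> int:
    """Tokens left for the completion after the prompt is counted."""
    remaining = model_limit - input_tokens
    return max(remaining, 0)

# A 3,000-token prompt against a 4,097-token model leaves 1,097
# tokens for the output -- input and output always share one budget.
print(max_output_tokens(4097, 3000))  # 1097
```

The key consequence: every extra token of context you pass in is a token of output you give up, which is why larger limits expand both sides at once.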

One additional effect of larger prompts is that prompt construction becomes an area of value creation. Constructing a context-rich 20,000 character prompt is non-trivial, and playbooks and marketplaces for buying and selling prompts will grow in popularity.
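As a rough illustration of what programmatic prompt construction might look like, here is a hypothetical helper that packs instructions, few-shot examples, and context into a fixed character budget. All names are made up for illustration, not a real API:

```python
# Hypothetical sketch of programmatic prompt construction: assembling a
# context-rich prompt from reusable pieces while staying under a
# character budget. Names and structure are illustrative assumptions.

def build_prompt(instructions: str, examples: list[str], context: str,
                 question: str, max_chars: int = 20_000) -> str:
    """Concatenate prompt sections, trimming context to fit the budget."""
    # Everything except the context is non-negotiable, so measure it first.
    fixed = "\n\n".join([instructions, *examples, question])
    room = max_chars - len(fixed) - 2  # 2 chars for the extra separator
    trimmed = context[:max(room, 0)]
    parts = [instructions, *examples, trimmed, question]
    return "\n\n".join(p for p in parts if p)
```

The design choice worth noting is that context is the flexible section: instructions and examples are kept whole, and the long tail of background material is truncated to whatever room remains.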

2. The “Jasper effect” plays out in more industries 

Copywriting has fundamentally changed since the emergence of tools like Jasper and Copy.ai.

The fear that “the machines are taking our jobs” has been replaced by the reality that “quality really matters”. Good copywriters are becoming higher leverage and great copywriters are in higher demand than ever. 

Google’s ‘Helpful Content Update’ underscores this, prioritizing quality content written by humans over bot-generated content. Brands are responding by investing serious money in real writers creating original content.

As more industries (law, graphic design, paid marketing, video production) become 2-5x more efficient thanks to generative AI models, the top 10% will become more sought after, the middle 50% will become wildly higher leverage, and the bottom 20% will need to rethink their business models.

3. Basic implementations of generative models will be the new normal 

Assistive AI will appear across the startup ecosystem first, and later in larger tech products that have more sunk cost and more sensitivity to risk. Value will be delivered by incumbents integrating directly with NLP (natural language processing) models, and in other cases startups will deliver value via plugins, APIs, and Chrome extensions.

As consumers adjust to a “new normal,” enthusiasm for basic prompt wrappers will fade even as competition increases. The price for basic use cases will tend toward the cost of tokens over time.

A good example is Notion AI: a very simple implementation of basic GPT-3 prompts that nonetheless earned Product Hunt Product of the Day and of the Week.

4. Generative AI becomes a bigger thorn in the side of established platforms 

The out-of-the-box versatility of GPT-3 is nothing short of astounding.

Much like electric car tech will take the value of combustion engine IP to zero over time, generative AI will decimate hundreds of millions of dollars of R&D invested by established players, leaving them with a predicament: Double down on proprietary tech or start from scratch?

This is an amazing opportunity for startups. They’ll unlock the ability to create competing experiences with 1/1000 of the investment, possibly even offering them for 1/10th the cost, too.

Additionally, expect a rust-belt backlash against “shared models” on the grounds of security/privacy.

5. The execution bar goes up significantly

Wrapping a public model with a simple UI (Jasper, etc.) will no longer be sufficient to gain market share. Competition will be too high and consumers will expect more.

The levers of value creation in this new reality will emerge and start to be better understood. 

Productization, failure handling, brand, thoughtful inline workflows, realtime prompt construction, and proprietary fine-tuning sets will become the talk track of the venture world as startups pitch ideas that lean into the generative AI gold rush.

6. AI management infra tools are tried, few gain traction 

A large number of middleware platforms aimed at helping startups manage their generative API usage will emerge. They will help with prompt versioning, quality tracking, A/B testing, usage, and optimization. 

Basically a “LaunchDarkly” style featureset but for Generative AI APIs. 

A small subset will gain traction; most will not, as teams look to bring this tech in-house and tailor implementations to their unique needs.

With each new generation of AI model, a large chunk of the value of this layer gets wiped away, making it hard for these middleware platforms to create sustained value. The core strategic question for new entrants will be, “How do we create enduring, sustainable value?”

7. Model usage consolidates to market leaders

The top 2-3 model providers will take the vast majority of the market as they enhance their lead and set themselves on a clear path to becoming 10x better than their peers. OpenAI is going to be a clear winner and could possibly be joined by new entrants, presumably including Google in Q2.

OpenAI has a rare mix of exceptional talent, a head start, and a much higher tolerance for imperfection. Google, in contrast, can move incredibly fast.

8. Reliability continues to create enterprise friction 

Generative AI is nothing if not confident in its delivery, even when wrong. This is humorous at times, but it’s a non-starter in fields (such as medical/legal) where the stakes are high.

As enterprise applications look to capture value from this technology, reliability issues, hallucinations, and overconfidence create a lot of caution. This will keep enterprise deployments limited to assistive use cases for longer than the rest of the market. Governance and privacy concerns that are currently vague will crystallize, and addressing them will become critical to enterprise adoption.

9. Business demands BYO API Keys

Platforms will emerge that let businesses bring their own API keys to the major models.

As enterprises start to see fine-tuned data sets as an asset, we’ll see improved understanding of how to capture and retain that value.

10. Proprietary data grows in importance

In a world with massively capable models, data and context will become key differentiators. As token support grows this becomes even more important. 

Companies will increasingly want to safeguard proprietary data. 
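As a toy illustration of why proprietary data becomes a differentiator, here is a naive keyword-overlap retriever that decides which internal documents to pack into a prompt. Production systems would use embeddings; every name here is an assumption:

```python
# Rough sketch: rank proprietary documents by word overlap with a query
# to decide what fills the (growing) context window. A naive stand-in
# for embedding-based retrieval, for illustration only.

def rank_context(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k documents sharing the most words with the query."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]
```

Whoever owns the documents being ranked owns the differentiation; the model itself is the same one everyone else can call.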

11. Google’s “Code Red” is just the start

Big G is worried. ChatGPT’s ability to answer questions materially better than Google’s search experience poses the first credible threat to the Alphabet golden goose. 

But this is just the start. 

The underlying models powering ChatGPT are capable of doing so much more than just generating well-tuned facts and answers to questions. As ChatGPT gets more capable and more customized interfaces show up, more companies will go through a similar fire drill.

When these models are able to execute code, interact with APIs, browse the web, perform recurring tasks, and have longer memory, the number of companies that become threatened grows materially.

Toward the end of 2023, it will start to look like a total reset in many industries.

Final thoughts

At AirOps, we’re all in on Generative AI and see it as a key technology to help users create amazing things with their data. We believe that lowering the barrier to understanding and building with data is a powerful force multiplier for high growth teams and that AI is a key tool in achieving this. 

We’ll closely follow all the great work happening in this space. We’d love to connect with other teams working to unlock amazing value with the current and future generations of generative AI models. Don’t hesitate to reach out to me directly on LinkedIn to start the conversation.
