Why Incumbents LOVE AI
The Reverse Innovator's Dilemma
The current instantiation of AI (built on LLMs) is frankly mind-blowing. ChatGPT opened our eyes to the form factor that Transformer architectures could power. Since then, every board and team in every industry has been thinking about how to incorporate LLMs into their product to improve the user experience.
With all this happening, the natural conclusion is that we're in a massive phase shift where the Innovator's Dilemma will catch up to existing companies. This could not be further from the truth.
As a reminder, ChatGPT was released in December 2022. That's 6 months ago. In that time we've seen an explosion of startups like Jasper, Writer AI, Stability AI, Langchain, Pinecone, AutoGPT, and more growing rapidly and raising at large valuations. While all this has been going on, have enterprises been asleep at the wheel?
So why are incumbents and enterprises able to move so quickly? I want to lay out some brief thoughts below, mainly because I strongly believe in them, but also because I want to pressure-test them against the smart readers I have!
LLMs are Not A New Platform
A lot of folks are saying we're entering the AI era. Sure, but is this actually a platform shift? Think back to Mobile or Cloud. Those were massive tech AND org shifts. Not only did you literally have to develop on a different platform, but you also had to organize teams differently and adopt different processes for building product and for QA. New knowledge had to be learned, and knowledge-transfer processes had to be put in place. This is why it took enterprises so much time to make the shift: the tooling needed to be built out, the org structure had to be changed, and the ROI on the spend had to be determined.
Now contrast that with AI. In less than 6 months, multiple enterprises across industries have shipped products or utilized open-source and proprietary models to create content. In a recent podcast episode with Brian of Chick-fil-A (granted, they're a very forward-thinking enterprise in lots of things, but especially tech), you can hear how Brian and his team are thinking about deploying and using LLMs. There is not a massive platform shift occurring, but instead an enablement shift. It allows enterprises to look at the data they already have and figure out how to best utilize it. In many cases, folks are simply paying whatever OpenAI asks to get self-hosted model access so they can train models on their own data without exposing it to others. When it's as simple as utilizing an API and putting down enough money to buy your own instance, that actually benefits incumbents a whole lot — they have more data, money, and resources!
Talent Retention is Hard…Except When AI is Involved
Everyone loves to work on exciting new stuff. And what could be more exciting than drawing up, building, and implementing a new AI-enabled or AI-native product? Literally, that's the dream. At many incumbents, you can imagine employees have been there for a while, stock prices have gone down, and folks may start to wonder what's next and whether their current job is the best use of their time. Now incumbents get to tell those engineers, "Hey, go explore LLMs and figure out how we can use them to augment our product."
AI is a retention tool
If there's one thing you take away from this post, it should be the line above. For incumbents, the best thing to happen is being able to tell their best engineers, the ones who have been around for a while, that they get to work on something new. Ask your friends who are engineers at companies why they're sticking around. Chances are they're working on something cool with AI.
What is the Opportunity Ahead?
So given all of that, should founders just say, "Well, Shomik said AI is not a moat, and now he's saying incumbents love AI, so why should we build anything in this space?" Well, 1) I'm not that important, so that will never happen, but 2) no, there is still tons of opportunity!
What I want to get across to founders reading and thinking about building in the AI space: don't think about what's hot now. Look ahead to how the world will look in two years, make an opinionated bet on that, and then build for that world.
Let's look at the AI space and when some companies and research important (or potentially important) to the progress of LLMs got started:
Hugging Face (2016)
Google Transformer paper (2017)
Langchain (October 2022, 1 month before ChatGPT was released)
It takes time to build for the future, and yes, there is a risk that you will miss on timing, but the largest outcomes start by building for the futures that others don't see. AI security is a hot topic right now, but in the boldstart portfolio, for example, Protect AI was founded in early 2022, about 1-1.5 years before ChatGPT was released and gained mainstream traction.
If founders are going to build for what's hot now, then focus on how utilizing LLMs can create a different architecture to attack incumbents. Is there something that incumbents have fundamentally built around that you can change? If so, you then have a product that truly takes advantage of the Innovator's Dilemma!
Some people have asked me, "Shomik, do you even write anymore?!" I wanted to explain why my last written post was 2 months ago.
I've been busy working on Software Snack Bites, the podcast! Since launching in February, we're at 8 episodes and over 10k listens. Thanks to all of you for the support. Episode 15 has just been recorded, and on top of the amazing guests (below), we also have some future episodes planned, including deep dives into Databricks, supply chain security, and AI security.
Episodes To Date: