AI Musings #5 – Opportunists vs Believers

Sharing some observations and working hypotheses on Opportunist vs Believer founding teams in AI.

My biggest challenge as a venture investor in AI right now is figuring out which of the following 2 camps a particular founding team belongs to:

Opportunists – who are trying to leverage this moment in time when the market has massive curiosity about AI.

vs

Believers – who have high conviction, and are truly mission-driven about AI.

This is a critical evaluation point for these early AI deals. As previous super-cycles have shown us, a bubble-bursting trough in the space is inevitable in a few years (perhaps as soon as 3-5 years?). It will be brutal like previous resets – capital will get reallocated to the winners and dry up for the rest, exits will happen on punishing terms, customers will tighten their belts, early-stage talent will flee, and the general sentiment will turn from greed to fear.

In my experience, Opportunist founding teams are less likely to survive this trough. Surviving it will require grinding it out on fumes and focusing on real customer problems instead of vanity metrics and perpetual fundraising. It will demand gut-wrenching decisions that sacrifice short-term gratification so that the long-term upside can be captured. It may even require resurrecting the company from the dead, possibly many times over.

Being able to do all this requires extremely high conviction deep down in the gut. Founders who are Believers will have this conviction in their DNA, and when the cycle turns negative, this will become their competitive advantage.

Given this is turning out to be a key evaluation point for AI deals, I have been thinking through which leading signals can be used to spot Believers with higher probability. Here are some working hypotheses:

[Disclaimer: I am just thinking out loud here, so please take this with a pinch of salt. This is nowhere near gospel truth, nor do I have significant experiential validation for these points, given we are literally in the first wave of AI deals.]

1/ Pre-ChatGPT AI builders – likely to have been working in AI well before ChatGPT was launched. They were most likely building with ML, NLP, and neural networks in a Big Tech team, a lab, a university, or some sort of R&D/ academic environment.

2/ Pre-AI domain experts – likely to have been working deeply in a specific domain/ industry/ sector/ function from pre-AI days and are now adopting LLMs to carry forward their domain work and solve customer problems that were previously unsolvable or unviable.

3/ Young tinkerers – likely to be fresh grads who started building AI-native products as a hobby during university, maybe as part of a side hustle, or even just out of intellectual curiosity. They would have likely built products and hacked a few early users even without “doing a startup”.

These are only some of the personas I have been thinking through. As I meet more teams, I will keep adding to this list.

If one looks at how the early days of Web 1.0 played out (eg. in eCommerce and Search), most first-movers ended up dying. The 2nd generation companies leveraged both the market that was created by the 1st gen, as well as learnings from their failures, to create new categories and emerge as viable businesses.

History doesn’t repeat exactly, but it often rhymes – which calls for being even more thoughtful about which companies to back in this 1st generation of AI. In my case, as a US-India corridor investor, there is an additional complexity to think through – how will AI companies being built out of India compete with those in Silicon Valley? Who is most likely to be stronger in which part of the AI stack?

With domestic data being of strategic importance to each country and the rise of country-specific models, is AI going to be an extension of the globally decentralized software product/ SaaS story of recent years? Or will there be opportunities in ring-fenced, domestic AI in each major geography?

These questions and unknowns are what make the present times in AI investing both interesting and challenging. To navigate this context, I am trying to be open-minded, learn fast, and think from first principles as much as possible – while balancing this default-optimism stance with being deliberate rather than trigger-happy, and taking the time to build personal conviction on each opportunity.

PS: check out the previous post #4 in the AI Musings series – How To Differentiate As An AI Applications Startup?

Subscribe to my weekly newsletter, where in addition to my long-form posts, I also share a weekly recap of all my social posts & writings, what I loved to read & watch that week, plus other useful insights & analysis exclusively for my subscribers.

AI Musings #2 – OpenAI DevDay

Sharing some quick observations from the landmark, first-ever OpenAI DevDay’23.

Quick observations from Sam Altman’s opening keynote of OpenAI DevDay:

1/ Amazing GPT-4/ GPT-4 Turbo upgrades and new features announced. In particular, loved the ability to upload docs into ChatGPT, as well as the ability to choose pre-programmed voice modalities that sound significantly more realistic than any current digital alternatives.

Was also awesome to see Coke’s campaign that lets its customers programmatically create Diwali cards using DALL·E 3.

2/ The icing on the cake was the introduction of ‘GPTs’ or agents. Users can now build AI agents within ChatGPT that absorb a set of instructions and then take specific actions while leveraging the GPT-4 expanded knowledge base.

3/ Building GPT agents in natural language is the democratizing aspect of Generative AI and something that was missing in the earlier voice-to-action apps/ personal assistants in the mobile paradigm.

Sam’s natural language demo reminds me of all the bottlenecks we faced while building first-generation mobile search/ deep-linking at Quixey, and of all the work my friend, the late Rajat Mukherjee, did on voice-to-actions at Aiqudo. AI is on track to solve all those engineering/ product challenges.

4/ OpenAI also showcased the GPT Store, which will feature the best GPTs built by developers on a revenue share model. This AI app store is a natural extension of the democratized-agent strategy.

5/ The developer Playground demo was really interesting, demonstrating capabilities like threading, function calling etc.

Essentially, any developer can now build agents within their app for their customers. These agents can have all advanced GPT-4 capabilities that power specific use cases like trip planning, navigation, splitting expenses etc., each of which is presently done by separate siloed apps.

Was awesome to see the demo agent communicate in a Jarvis-like voice modality.

6/ Finally, stoked to see the love Satya Nadella showed OpenAI and Sam during a friendly on-stage banter.

It looks like the OpenAI partnership has given a new lease of life to Azure and maybe even a game-changing competitive advantage against other cloud providers. In parallel to all the work that OpenAI is doing on the model side, Azure is building a new end-to-end, AI-native cloud infrastructure and compute stack to support the development and GTM of these efforts.

It was also heartening to see Satya underline security as one of the core focus areas for the partnership:

We are grounded in the fact that safety matters. Safety is not something you care about later but it’s something we do shift-left on.

Satya Nadella at OpenAI DevDay

My TLDR take:

The rollouts in this first-ever DevDay by OpenAI are clearly important milestones in this rapidly evolving space. AI is becoming easier to use, more powerful, and more accessible at an exponential pace. Personally, this is the first time I am seeing a potential v0.1 of what has been a larger-than-life but fuzzy vision of AGI.

Kudos to Satya and Microsoft for what’s turning out to be a generational business bet on OpenAI that frankly, seems to have caught the other Big Techs a bit flat-footed. However, expect strong responses from Google, Meta, and AWS in the coming months.

Finally, I have met many founders over the last few months who have been building nifty micro-products on top of OpenAI. A few of them have been touting these as large, venture-returns opportunities. This DevDay has shown how many of those startup ideas have already become point features within the OpenAI ecosystem.

This aggressive feature rollout by OpenAI once again brings to the fore strategic questions around moats, right-to-win, feature vs product vs platform, and access to 1st party training data. All this is significant food for thought both for founders and VCs.

As Big Tech, OpenAI, and other hyperscalers like Anthropic continue to dominate the infra and model layers, things like sharp domain expertise, deep understanding of specific customer problems, access to proprietary 1st party data, and industry- or audience-specific distribution channels can become important sources of sustainable competitive advantage for new startups – and drive a valid case for why a startup should exist.

Note: for more analysis on AI incumbents vs startups, check out ‘AI Musings #1 – How The Odds Are Stacking up?‘.


AI Musings #1 – How The Odds Are Stacking Up?

From OpenAI getting close to $100Bn valuation and Anthropic partnering with Amazon, to Google and Meta doubling-down on their LLMs faster than ever before, the AI chess game is getting more intriguing by the day.

In this post #1 of the ‘AI Musings’ series, I share a few running thoughts on the odds for each category of players.

This is the first post in a series called ‘AI Musings’ that I hope to write regularly over the next few months. The idea is to periodically analyze major developments and milestones in AI, both from a startup and BigTech perspective.

Frantic activity around AI continues in the US. Just in the last week, OpenAI is looking at an $80-90Bn valuation for a secondary sale of existing employee shares. Even as Anthropic announced a strategic collaboration with Amazon last week, which includes up to a $4Bn investment, there is news today of the company raising another $2Bn from Google and others at a $20-30Bn valuation – a 5x jump from its last-round valuation in March.

Greylock has gone AI-first with its newest early-stage fund. The Nvidia stock continues to rip (read my post on how it illustrates The Bunches Principle). Dharmesh Shah (Co-founder and CTO of Hubspot) is back to coding and selling, building ChatSpot over a weekend of hacking as a first step towards making his CRM AI-powered.

Amidst all this action, I have been meeting academics, founders, investors, and BigTech operators working on the frontiers of AI, trying to refine my hypotheses on the space. Here’s a working version of some of my thoughts:

1/ High confidence that AI is real and here to stay

Though the space is definitely in a financing hype cycle, to me, it’s now beyond doubt that AI as a platform shift will be transformative for the world. Unlike Web3, progress around AI has been driven by large tech companies since the very beginning. These companies are far too shrewd and closely scrutinized to spend significant resources on something that is merely a low-probability moonshot. Therefore, they have been focused on driving real commercial value from LLMs from Day 0.

OpenAI first launched ChatGPT on Nov 30, 2022. The fact that Generative AI capabilities are already integrated into mainstream products like the MS Office suite, Google Search, LinkedIn, Notion etc. in less than a year just goes to show that this particular platform shift is happening significantly faster than the Internet, Mobile, and Cloud.

Another confidence booster for me personally has been the commercial revenue traction of AI-native hyperscalers. Here are some numbers based on my research:

| Company | Started | Latest Valuation | Current Revenue Traction (Est.) | Source |
| --- | --- | --- | --- | --- |
| OpenAI | 2015 | ~$80-90Bn, reported as of Sep’23 | $80Mn est. MRR (~$1Bn annualized), reported as of Aug’23 | Reuters |
| Anthropic | 2021 | ~$20-30Bn, reported as of Oct’23 | $200Mn proj. revenue in 2023, reported as of Sep’23 | The Information |
| Cohere | 2019 | ~$2.1Bn, reported as of Jun’23 | Sub-$50Mn proj. revenue in 2023, reported as of Aug’23 | Industry sources |
| Hugging Face | 2016 | ~$4.5Bn, reported as of Aug’23 | $30-50Mn est. annualized revenue, reported as of Aug’23 | Axios |

These are tangible business revenues generated from enterprises, SMBs, and individual developers as customers. And the ramp-up over the last 12 months is astonishing. Honestly, looking at the depth of commercial traction these hyperscalers are showing, the valuation numbers don’t look entirely out of whack.

2/ Large incumbents are highly likely to capture disproportionate value from AI

About 9 months back, when Google’s stock was tanking as a reaction to ChatGPT’s growth and OpenAI’s partnership with Microsoft (a botched Bard demo made things worse!), I asked this simple question:

In hindsight, this was a very pertinent question to ask. As various BigTech-AI hyperscaler partnerships are playing out, it’s becoming clearer that large incumbents are strongly positioned to capture a significant portion of market value created from AI. They have a unique combination of the following:

  • Chips and cloud computing infrastructure to train and deploy foundational models, as well as build custom applications that are reliable, safe, and secure.
  • Distribution reach to get Generative AI in the hands of exponentially more customers.
  • Capital to place bets on AI hyperscalers and align with them to leverage their core strengths around faster and more disruptive innovation.

Bill Ackman, who runs Pershing Square and is one of the top-performing hedge fund managers, has been doubling down on Google since its price hit the $80-90 range post-ChatGPT. Here’s his rationale on why Google is strongly positioned in an AI world:

Bill Ackman’s (Pershing Square) pitch on Google’s positioning in AI

Based on my conversations with senior AI operators at the likes of Google and AWS, I believe the AI manifestations we are currently seeing in their mainstream products are not even the tip of the iceberg. Think of them as small experiments or POCs. The depth and range of their pipeline of AI capabilities are beyond regular imagination.

Btw, I am a believer in Bill Miller’s thought – “The economy doesn’t predict the market. The market predicts the economy.” Going by how BigTech stocks are ripping amidst a rather cool economic and market environment, the wisdom of public markets also suggests that these incumbents are poised to reap huge dividends from AI.

So, amidst all the noise and hype, if you are trying to figure out a simple, risk-adjusted way to benefit from this AI platform shift, here’s a thought to consider:

3/ Early-stage startup plays are still fuzzy

After spending significant bandwidth meeting AI founders, I am seeing that, as opposed to the BigTech and AI Hyperscaler plays, there is significantly more fuzziness in the early-stage ecosystem (and rightfully so!).

Inspired by the recent SaaStr session between David Sacks (Craft Ventures) and Jason Lemkin, here are my running thoughts on 3 categories of AI startups:

(I) Infrastructure

These include LLMs and other aspects of foundational AI infra. This bucket is really challenging to invest in simply because:

  • Building AI infra requires deep technical chops and/ or very specific prior experience, ideally in a particular set of companies. These teams are rare, extremely hard to source, and often get spotted very early by the likes of Sequoia and A16Z.
  • AI infra startups require large amounts of capital and therefore, need major VCs to be in them from very early on. In other words, these companies are hard to bootstrap, and funding them requires playing a very different kind of game that’s hard for a small check writer to play.

(II) Classic vertical SaaS with AI capabilities

The hypothesis here: given AI is a massive platform shift, does it create new gaps in existing verticals like healthcare, education, sales, customer support etc. that a fresh generation of AI-first startups can exploit?

The hurdle I face while evaluating these startups is – why wouldn’t an existing growth or late-stage company just leverage AI as a new capability in their existing product suite? Incorporating AI features into an existing installed base (eg. what Microsoft is doing with OpenAI) seems like a superior ROI proposition compared to taking a brand-new product to market.

If this generalization is indeed true, it definitely raises the bar for this bucket. However, again to think out loud, there are some contexts where there could be a real commercial case for new AI-powered vertical software. For eg.:

  • Legacy verticals where fewer growth-stage startups of the prior generation have entered – say transportation? Or construction? The argument here is that it’s easier to beat old incumbents by using AI as tech leverage, compared to other late-stage startups who might be equally good at incorporating it.
  • Verticals where brand new paradigms are opening up, which will change the game itself – given winner-takes-all dynamics in tech, most incumbents are hard to beat at their own game. But, if the game itself changes (often due to a tech inflection), then David has a better chance against Goliath (read my post “David (Microsoft) vs Goliath (Google)“). Eg. using AI in genomics, drones, automotive etc. to solve problems and deliver work in totally new ways.

(III) Job co-pilots

The hypothesis here is that AI will spawn a generation of job-specific assistants called co-pilots that will make a specific job more efficient and effective. So everyone from doctors and lawyers to CFOs and marketers will have a co-pilot that does everything from workflow automation to insights generation, all in a conversational UX.

This seems to be an extension of the productivity-software thesis that many VCs followed over the last 5 years. Sounds interesting and plausible, though I am still not able to build conviction on what a winning company in this space could potentially look like, how it would need to be capitalized and built, and whether it can generate venture returns.

I am learning new theses, approaches, and frameworks every week, especially related to early-stage startup plays in AI. More to follow in AI Musings #2…
