A few weeks ago I attended a VC summit in San Francisco. Upon arrival I took a stroll with an investor I respect immensely.
It’s relevant to know that while I'm the CEO of Joyous, my preferred work-nouns are product person (she-pp) / strategist (her-s).
Apparently the term "AI-native" has been in use since around 2022. However, it’s an understatement to say it’s become more popular recently. So popular that many tech shops are doing a quick and dirty re-brand to ride the hype.
So, this is typical of me. Until that walk a few weeks ago I had not heard the term. Midway through our stroll the investor looks at me and casually says: “Ruby, with Joyous being an AI-native product…” He then goes on to make comments about value-based pricing and other relevant topics.
Wait, what’s that now? Oh well. Guess I was too busy solving a series of critical problems by building a software product with a killer combination of AI and UX to pay attention to the latest buzzwords.
Anyway, for the 1% who might be lagging behind me on this, I’ll clarify with my own interpretation.
"AI-native" companies are those who demonstrate an ability to productionise AI continuously, quickly and effectively to solve specific customer problems.”
Guess Joyous is AI-native then. Better lean in.
We are an AI-native solution that helps execs ask pointed questions directly to a large workforce at scale. Basically, it's for the impatient exec who doesn't want to be buried under five layers of PowerPoint and bureaucracy to understand what's happening on the ground in real time.
We use AI to run action-focused one-on-one chats at large scale, initiated via any channel, and the output is a detailed and prioritised action plan within days – completely automated with AI. Simply put, we orchestrate many discrete and complex applications of AI in a way that feels a lot like magic when strung together.
Our dashboard is super easy to understand and explore, and execs are loving it – some also use it to share our insights directly back to their entire workforce in near real time. That’s pretty much unheard of in companies that have achieved meaningful scale. Most are phasing out traditional surveys because they are too rigid, too time-consuming and not that useful.
A top-ranking Fortune 500 company, for example, has been a client for four years, and they are aggressively expanding their use cases of Joyous org-wide after seeing first-hand how fast our insights come together and accelerate adoption and outcomes.
None of this would be possible if we hadn’t been bashing our heads against a wall for the last several years working out, alongside our customers, (1) which problems actually needed solving, (2) which applications of various types of AI and UX to prioritise for each discrete problem, and (3) how to do it in a simple way that’s easy for humans to both trust and understand.
Another interesting observation is the huge shift in demand for AI products in the last six months. Many global companies, across industries, have started creating senior roles specifically to accelerate the adoption of AI. Finally, the competitive advantage of adopting a product like Joyous has become obvious. Yay for us. And thank f**k we’ve finally arrived here!
Lately, we’ve been helping the more progressive ones by breaking away from the traditional enterprise SaaS model and slow-moving vendor procurement processes, so they can get going faster.
What a wonderful phase we’re approaching! But value-based pricing and the imminent death of traditional enterprise procurement are a blog for another day.
Back to the topic at hand.
You know what surprises me? How naive many people still are about Product Market Fit (PMF). Specifically about how to define, measure and ultimately achieve it. So, before I go on to talk about Artificial Intelligence Market Fit (AIMF), let me first riff on the former.
At that same VC Summit one speaker asked the audience what PMF meant to them. A few spoke up – they probably shouldn’t have.
At the start of the session the speaker also asked everyone to stand. Then he asked those who had not achieved PMF to sit down. I sat down. Half of the audience remained standing – I was shocked.
I speculate that those who stayed standing simply don’t have a good mental model for PMF to reference. Considering most are unlikely to be product people, I guess this is understandable.
But it highlights the importance of sharing frameworks with other founders and CEOs so that we can all build better products, faster.
Now, I’m not going to pretend that I have invented a good framework for this. But what I have done is find a couple of good frameworks and apply them.
My favourite is really a composition of four frameworks pulled together in a workshop by Rob Snyder.
This is more than just a definition and a way to measure PMF – it’s a how-to guide on the path to product market fit. So, if you’re early on your learning journey I encourage you to start here.
The most valuable mindset shift you will gain from this – assuming you’re not already there – is adopting the ‘demand vs supply’ mindset: proving what people actually want to buy, and creating a case study to get there.
This alone can shave years off your journey to both PMF and a successful GTM. The post above shares the full 91-slide workshop Rob used to present it at Harvard Innovation Labs.
In terms of assessing where you are, Rob suggests: if you are pre-revenue you are at L1. If you have a few happy customers you are at L1-L2. If you consistently add customers each month (using a number that makes sense for your business model) in the same way, with the same pitch and a high conversion rate, you are at L2-L3. If your customers consistently renew, upgrade and refer, you are at L4. And as for L5, you’ll know when you’re there. Few ever make it.
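If it helps to see those heuristics written down as a single rule of thumb, here’s a quick sketch. To be clear: this is my paraphrase of the levels above, not Rob’s official rubric, and the variable names are made up purely to make the idea concrete.

```python
# My paraphrase of the levels above, not Rob's official rubric.
# Variable names are illustrative assumptions, not metrics from his workshop.

def rough_pmf_level(pre_revenue: bool,
                    happy_customers: int,
                    repeatable_monthly_adds: bool,   # same way, same pitch, high conversion
                    customers_renew_upgrade_refer: bool) -> str:
    if pre_revenue:
        return "L1"
    if customers_renew_upgrade_refer:
        return "L4"  # and L5... you'll know when you're there
    if repeatable_monthly_adds:
        return "L2-L3"
    if happy_customers > 0:
        return "L1-L2"
    return "L1"
```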
You can see why I was surprised when all those founders remained standing: PMF is not a binary state.
Using this basic mental model, Joyous is realistically between L3 and L4. So, not quite holding on for dear life just yet, but also further than most ever get.
Another framework that I like is the one by First Round Capital – it’s well described in this podcast with Todd Jackson. Actually, the same investor I took that walk with introduced me to it a while ago. I think it’s wonderfully complementary to Rob’s.
Like Rob’s, it also has levels of PMF – in their case only four. What I enjoyed about Todd’s take on the podcast was his way of assessing which level you are on. His is more specific.
Important sidebar: if you’ve been at the same level for more than 18 months, or your regretted churn is more than 20%, or you’re struggling with long sales cycles and losing deals in the late stages, it’s not a great sign – you might be stuck.
If so, the main takeaway is that you may need a 200% pivot, not a 10% one. So the framework gives you not only a measure of PMF, but also a wake-up call.
Todd mentions on the podcast that as many as 70% of companies get stuck at L1 or L2. That means only roughly 30% of companies get to L3. And once you get to L3, you have a real shot.
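If you like your wake-up calls blunt, here’s that sidebar turned into a simple check. Again, this is just my illustration – the parameter names are mine, not First Round’s – with the 18-month and 20% thresholds Todd mentions.

```python
# Todd's warning signs turned into a simple check. Parameter names are mine,
# not First Round's; the thresholds are the ones mentioned above.

def might_be_stuck(months_at_same_level: int,
                   regretted_churn_rate: float,
                   long_sales_cycles: bool,
                   losing_late_stage_deals: bool) -> bool:
    return (months_at_same_level > 18
            or regretted_churn_rate > 0.20
            or (long_sales_cycles and losing_late_stage_deals))
```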
This framework also has a further three dimensions to consider and four levers to help you get unstuck. This is super helpful to tickle your thinking if you’re part of the majority stuck on L1 or L2.
If I were to assess Joyous against this framework, I’d say we are on L2 – but on the very cusp of L3. So, that’s both encouraging and disheartening depending on how you look at it. I predict it’s our rapid acceleration with AI that will catapult us into L3 and beyond within the next few months.
Okay, we’re back to the VC summit. It’s a different session now, and a panel has been pulled together to talk about adopting AI – specifically Generative AI (LLMs) – into productionised commercial products.
About four guys are sitting on stage. One warns the audience against making calls based on engineers building quick prototypes of AI features. Why? Because Gen AI is geared for an easy and impressive demo, and is extremely hard to productionise thereafter. He cautions that months of precious engineering time can be wasted on something that was never viable to begin with. They all agree.
Another talks at length about how important it is for models to have context, and how – without context – you will never achieve sufficient quality for a feature or product to succeed. So training the models becomes a big topic, with a constant and exhaustive supply of training data treated as a critical success factor.
Again, I found this super interesting.
Firstly, we have taken a product-led mindset to adopting AI. Engineers are not making up gimmicky features for the sake of it. Product people are working with customer people to manually perform tasks that solve real and immediate problems.
For each task – once we understand the manual effort well enough – a data scientist builds a prototype that we swap in for the manual work. It’s worth noting this is all done outside of the product. We’ve not used any engineering time yet.
Once that prototype consistently gets us at least a 50% return on efficiency, we productionise it. We put it behind a toggle and we also ensure a human can moderate it. And I don't mean moderate it in a hacky way. I mean in a production-quality, long-term feature way – directly in the core product.
After that we shift to real-time quality monitoring, manually at first, as we level up the quality metrics to 80-90%. These metrics are different for each task depending on what it’s trying to achieve.
And then once we’re there we switch to automated monitoring and double down on getting to the real deep value. That extra layer of elusive product magic that has the potential to take people’s breath away.
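For the engineers reading, here’s a deliberately simplified sketch of that path from prototype to automated monitoring. To be clear, this is not our actual code – the names, structure and data model are illustrative assumptions – but it captures the shape of the gating logic: a 50% efficiency bar, a toggle plus human moderation, and quality metrics that decide when monitoring goes from manual to automated.

```python
# A deliberately simplified sketch of the approach described above – not our
# actual implementation. Names, structure and fields are illustrative only.

from dataclasses import dataclass

@dataclass
class AITask:
    name: str
    efficiency_gain: float   # measured against doing the task manually
    in_production: bool      # behind a toggle in the core product
    human_moderated: bool    # a human can review/override the output
    quality_score: float     # task-specific metric; what "good" means varies per task

def next_step(task: AITask) -> str:
    """Decide where a task sits on the prototype -> production -> automation path."""
    if task.efficiency_gain < 0.5:
        return "keep prototyping outside the product (no engineering time yet)"
    if not task.in_production or not task.human_moderated:
        return "productionise behind a toggle, with proper human moderation"
    if task.quality_score < 0.8:
        return "monitor quality manually and keep levelling it up"
    return "switch to automated monitoring and go after the deeper product magic"
```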
This has been an outstanding and enjoyable approach to solving real problems with AI quickly.
Secondly – as alluded to above – we have been able to build prototypes and successfully productionise them. To be fair, getting our first significant feature to production was very hard – in fact it took us years. That’s because we were trying to solve a hard problem in a unique way.
By far the majority of the time was spent defining the problem clearly.
Once we had that definition – along with a series of verified datasets that represented what good looked like – the rest of that first feature was relatively straightforward.
It was also helpful that Gen AI really hit its straps around the same time. After that one hard-earned – and terrific – success, my mind exploded. The possibilities of what else we might achieve became instantly tangible and obvious. And so, we just got on with it. Over and over again.
Thirdly, context. The assumption that models need context (as in model training on customer-specific data and use cases) turns out to be false. At least for our use case. I'm most proud of this.
From the very beginning of our journey in 2021 I decided that I wanted to build something that leveraged AI – without requiring training on customer data. I saw it as a masterstroke in avoiding risk.
I wanted it to work out of the box. On any data. For any customer. In any industry. Anywhere in the world. Instantly.
And, eventually we did it. Lost a few fingers and toes along the way. Nearly gave up once or twice. But we did it.
So, that’s why I found that session interesting. It made me feel - perhaps falsely - ahead of the game.
As we scaled our adoption of AI to production I documented our emerging approach. I wanted to codify it to help us stick with what felt like something truly special. No doubt this approach will increase the chances of us looking back on what we create and being immensely proud.
There’s plenty more we do in this regard. Currently, we are working towards being fully model agnostic. In other words, allowing our customers to have the exact same Joyous experience using their preferred cloud and AI infrastructure. Want Joyous on AWS with Claude, on Azure with OpenAI, or on GCP with Gemini? Or some other combo? No problem.
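Conceptually, being model agnostic just means putting a thin seam between the product and whichever provider a customer prefers. Here’s a minimal sketch of that idea – the class and method names are illustrative, not our real interfaces, and each adapter would wrap the relevant vendor SDK behind the same contract.

```python
# Illustrative only – these are not our real interfaces. The idea is a thin
# abstraction so the same product behaviour runs on any cloud/model combination.

from abc import ABC, abstractmethod

class ChatModel(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the model's response to a prompt."""

class ClaudeOnAWS(ChatModel):
    def complete(self, prompt: str) -> str:
        ...  # call Claude via the customer's AWS infrastructure here

class OpenAIOnAzure(ChatModel):
    def complete(self, prompt: str) -> str:
        ...  # call an OpenAI model via Azure here

class GeminiOnGCP(ChatModel):
    def complete(self, prompt: str) -> str:
        ...  # call Gemini via GCP here

def run_chat(model: ChatModel, prompt: str) -> str:
    # The rest of the product only ever depends on the ChatModel contract.
    return model.complete(prompt)
```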
So, considering all that, let’s move on to AIMF. What does it actually mean?
Great question! First I need to put a big disclaimer here. I’m now entering the murky territory of creating my own variant of a PMF framework. There isn’t much out there on this topic. At least not that I’m aware of. Also what works for me might not work for you.
But a complementary take on PMF in an AI context is more than warranted.
I’m of the view that companies that master AIMF will dominate in 2025 – by delivering not just AI features, but AI-native transformations.
AIMF, in short, represents the shift from building AI products for novelty to creating indispensable tools – ones that seamlessly fit, improve and grow within the markets they serve.
Just like the PMF levels, even at L4 you must still work hard to maintain your competitive advantage. Once again, if I were to assess Joyous against my own quick and dirty AIMF framework, we are currently on L3, stretching into L4.
One interesting thought as I reflect on where we stand across the two complementary models: our AIMF looks like it might be a leading indicator of PMF. Our AIMF is one level ahead of our PMF.
Another interesting thought: there is no simple, popularised frame of reference for a product's AIMF to be assessed against. Why not?
So, maybe this could be a start for that model to emerge. It could well play a helpful role in tracking progress.
Not only your own AI-native product's progress. Perhaps it could also be a meaningful yardstick for companies to assess prospective vendors' AI maturity and progress against.
I’ll end on one last (hopefully) helpful note based on our learnings at Joyous.
If you found this blog helpful, please let me know! I’m considering writing my third book and might make it about a new way of working (basically a V2 of Joyfully) based on building AI-native products.
P.S. ChatGPT did not write or edit this article for me. So, please excuse my typos and instead view them as proof that this was written by a real human. Even if that is kind of ironic considering the topic.
P.P.S. I did not use AI to generate the blog header image either. That glorious photo was taken with my iPhone on Great Barrier Island just two days before I went to San Francisco. It's not even edited, aside from a vignette!
Ruby is a comedian-turned-engineer who previously led product at two global tech companies and has been CEO at Joyous for four years. Her passion for making a positive impact on people’s lives is perfectly matched with the mission of Joyous: to make life better for people at work.
She enjoys working across all parts of the organization and is passionate about product direction and data science.
She is the co-author of Joyfully, a book about shared leadership, modern organizational structures, and a new way of working. Her second book, Pathways, is a guide to help women and other under-represented people get a job in technology in six months or less.
She was the Winner of the Product Category for Women Leading Tech Australia 2022 and a finalist in the Inspiring Individual Category of the HiTech awards in 2023.