If you work in SaaS, you’ve likely already been part of a conversation at your company about how your customers can get more value from your products once they’re infused with generative AI, large language models (LLMs) or custom AI/ML models.
As you hash out your approach and draw up the product roadmap, I wanted to call out an important aspect, one I can’t help comparing to the good ol’ California Gold Rush: don’t show up to the gold rush without a shovel!
Similarly, don’t overlook the monetization of your SaaS + AI offering. Factor it in at the outset and build the right plumbing from the start, not as an afterthought or a post-launch fix.
Two years ago, I wrote about the inevitable shift to metered pricing for SaaS. The catalyst that would propel the shift was unknown at the time, but the foundational thesis held. No one could have predicted back in 2021 that a particular form of AI would turn out to be that catalyst.
SaaS + AI — what got you here won’t get you there!
The first thing to realize is that what’s required is not merely a “pricing” change; it’s a business model change. Traditionally, SaaS pricing has been a relatively lightweight exercise: a simple per-seat model with a price point set sufficiently far above underlying costs to hit the desired margins.
A pricing change is a change in what you charge; for example, going from $79 per user/month to $99 per user/month. A monetization model change is a fundamental shift in how you charge, and with AI as a consumption vector, it inevitably requires accurate metering and usage-based pricing models.
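To make the distinction concrete, here’s a minimal sketch of how the two approaches compute an invoice differently. The seat price, per-token rate and usage figures below are hypothetical placeholders, not numbers from any real product.

```python
# Minimal sketch: the price points and per-1K-token rate are hypothetical.

SEAT_PRICE_PER_MONTH = 99.00   # flat per-seat pricing
RATE_PER_1K_TOKENS = 0.02      # assumed usage-based rate


def per_seat_invoice(num_seats: int) -> float:
    """Pricing change: same model, just a different number."""
    return num_seats * SEAT_PRICE_PER_MONTH


def usage_based_invoice(tokens_consumed: int) -> float:
    """Monetization model change: the charge scales with metered consumption."""
    return (tokens_consumed / 1000) * RATE_PER_1K_TOKENS


if __name__ == "__main__":
    print(per_seat_invoice(num_seats=25))                  # 2475.0
    print(usage_based_invoice(tokens_consumed=4_200_000))  # 84.0
```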
There are already a handful of great examples of companies leveraging usage-based pricing to monetize AI, including OpenAI and other providers of foundational AI models and services, as well as the likes of Twilio, Snap, Quizlet, Instacart and Shopify, which integrate those services into customer-facing tooling.
Why usage-based pricing is a natural fit for generative AI
One challenge of monetizing generative AI is that prompts and outputs vary in length, and their size is directly tied to resource consumption: a larger prompt requires greater resources to process, and a smaller one requires less.
Adding to the complexity, one customer may use the tool sparingly while another could be generating new text multiple times daily for weeks on end, resulting in a much larger cost footprint. Any viable pricing model must account for this variability and scale accordingly.
On top of this, services like ChatGPT are themselves priced on a usage-based model. Any tool leveraging ChatGPT or other models will be billed based on usage; since the back-end cost of providing the service is inherently variable, the customer-facing billing should be usage-based as well.
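As a rough illustration of how those variable back-end costs can flow through to a customer’s bill, here’s a short sketch. The per-token prices and the margin multiplier are assumed placeholders, not any provider’s actual rates.

```python
# Illustrative only: token prices and the margin multiplier are assumptions.

ASSUMED_INPUT_PRICE_PER_1K = 0.0015   # what the model provider charges you
ASSUMED_OUTPUT_PRICE_PER_1K = 0.0020
MARGIN_MULTIPLIER = 1.4               # markup applied when passing costs through


def backend_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Variable back-end cost: grows with prompt and completion size."""
    return ((prompt_tokens / 1000) * ASSUMED_INPUT_PRICE_PER_1K
            + (completion_tokens / 1000) * ASSUMED_OUTPUT_PRICE_PER_1K)


def customer_charge(prompt_tokens: int, completion_tokens: int) -> float:
    """Customer-facing billing mirrors the variable cost, plus margin."""
    return backend_cost(prompt_tokens, completion_tokens) * MARGIN_MULTIPLIER


# A short prompt and a long one produce very different charges:
print(customer_charge(prompt_tokens=200, completion_tokens=300))    # ~0.00126
print(customer_charge(prompt_tokens=6000, completion_tokens=2000))  # ~0.0182
```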
To deliver fair, transparent pricing and enable frictionless adoption and user growth, companies should look to usage-based pricing. With elastic usage on the front end and elastic costs on the back end, generative AI products are an ideal fit for it. Here’s how to get started.
Meter front-end usage and back-end resource consumption
Companies take prebuilt or pretrained models from a plethora of providers, sometimes fine-tune them on their own datasets, and then incorporate them into their technology stack as features. To gain complete visibility into usage, costs and margins, every call to the AI infrastructure (whether via API or direct) should be metered so you understand the usage and its underlying cost footprint.
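One way to do that is to wrap every model call and emit a usage event carrying the customer, the feature, the token counts and an estimated cost, so usage and margins can be aggregated downstream. The sketch below uses hypothetical names (UsageEvent, record_usage, fake_model_call), and the event “sink” is a stand-in for whatever metering or billing pipeline you actually run.

```python
import time
import uuid
from dataclasses import dataclass, field, asdict


@dataclass
class UsageEvent:
    customer_id: str
    feature: str                  # which AI-backed feature was used
    prompt_tokens: int
    completion_tokens: int
    est_backend_cost: float       # estimated cost of the underlying AI call
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: float = field(default_factory=time.time)


EVENT_SINK: list[dict] = []       # stand-in for a real event pipeline or metering service


def record_usage(event: UsageEvent) -> None:
    """Emit one usage event per AI call so usage, cost and margin can be
    aggregated per customer and per feature downstream."""
    EVENT_SINK.append(asdict(event))


def fake_model_call(prompt: str) -> tuple[str, int, int]:
    """Placeholder for the real model/API call; returns text and token counts."""
    return "generated text", len(prompt.split()), 42


def call_model_and_meter(customer_id: str, feature: str, prompt: str) -> str:
    """Wrap every call to the AI infrastructure so nothing goes unmetered."""
    completion, prompt_toks, completion_toks = fake_model_call(prompt)
    record_usage(UsageEvent(
        customer_id=customer_id,
        feature=feature,
        prompt_tokens=prompt_toks,
        completion_tokens=completion_toks,
        est_backend_cost=(prompt_toks + completion_toks) / 1000 * 0.002,  # assumed rate
    ))
    return completion
```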