Agentic Systems’ Sales Cycles
As software startups begin to sell agentic systems, the procurement process will change. Unlike classical software, where the application either meets the criteria (price, integration with other software, particular features) or doesn't, agentic systems operate on a performance continuum.
Here's a recent evaluation table for Codestral, Mistral's open-weights code generation AI. All of these benchmarks are machine-generated: HumanEval & HumanEvalFIM are not human testers but open-source test suites that evaluate AI-generated code.¹
This type of evaluation gives a broad sense of relative performance. But what if a business writes code in a particular language? Or with particular performance characteristics in mind?
What if an AI-powered customer support agent needs to be able to manage very technical telecom queries? Or a marketing AI needs to be culturally sensitive to a particular region?
The generic tests probably won't suffice, which translates to slower sales cycles as prospective buyers work to understand the system's performance in their own context.
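To make that concrete, here's a minimal sketch of what a buyer-side evaluation harness might look like, in Python. Everything in it is hypothetical: `generate_code` stands in for whatever API a vendor exposes, and the telecom case is an invented example of the kind of task no generic benchmark covers.

```python
# A minimal sketch of a buyer-side evaluation harness. Nothing here is
# a real vendor API: generate_code stands in for whatever interface the
# vendor exposes, and the telecom case is an invented example.
from dataclasses import dataclass
from typing import Callable


@dataclass
class EvalCase:
    prompt: str                    # a task drawn from the buyer's real workload
    passes: Callable[[str], bool]  # buyer-written check of the model's output


def generate_code(prompt: str) -> str:
    """Hypothetical stand-in for the vendor's model API; returns a canned
    answer here so the sketch runs end to end."""
    return "def parse_imsi(imsi): ...  # split into MCC, MNC, MSIN"


def pass_rate(cases: list[EvalCase]) -> float:
    """Fraction of the buyer's own tasks the system completes correctly."""
    return sum(c.passes(generate_code(c.prompt)) for c in cases) / len(cases)


cases = [
    EvalCase(
        prompt="Parse an E.212 IMSI into its MCC, MNC, and MSIN fields.",
        passes=lambda output: "MCC" in output and "MNC" in output,
    ),
]
print(f"pass rate on buyer-specific tasks: {pass_rate(cases):.0%}")
```

The point isn't the specifics; it's that the prompts and the pass/fail checks come from the buyer, not the vendor's benchmark table.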
In addition, future agentic systems will operate for longer periods without human intervention. The greater the autonomy, the greater the potential for errors, because small per-step error rates compound across a long run. Point-in-time benchmarks may not be enough; buyers may want to see how the system performs in their own environment over time.
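The arithmetic behind that worry is simple. This is back-of-the-envelope math with illustrative numbers, not a benchmark:

```python
# Back-of-the-envelope arithmetic, with illustrative numbers: if each
# autonomous action succeeds with probability p, an n-step run finishes
# without error with probability p ** n.
p, n = 0.99, 100
print(f"{p ** n:.2f}")  # ~0.37: a 99%-reliable agent completes 100 steps cleanly about a third of the time
```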
Startups, as they always do, will find ways to accelerate the evaluation. They might develop their own standards, much the way OpenAI has, or partner with third parties to offer independent evaluations for particular use cases.
Imagine a modern-day Gartner for Agentic Systems: a company that maintains a diverse pool of human evaluators & computer scientists skilled in evaluating agentic products across domains.
Alternatively, the most sophisticated organizations could create standards that then become broadly adopted. Banks could publish open-source standards for regulator-compliant customer support chatbots.
This purchasing behavior does exist elsewhere. Backtesting is the norm in trading algorithms & marketing optimization. The most sophisticated security organizations run security labs that test the performance of machine learning-based security products before deploying them.
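For readers unfamiliar with the pattern, here's a toy backtest in Python. The moving-average strategy and the price series are purely illustrative; the shape is the point: replay history through the candidate system and score it before it ever touches production.

```python
# Minimal sketch of backtesting: replay historical data through a
# candidate system and score it before deployment. The moving-average
# strategy and the prices here are purely illustrative.
prices = [100, 102, 101, 105, 107, 104, 110, 108, 112, 115]


def signal(history: list[float], window: int = 3) -> bool:
    """Toy strategy: hold when price is above its recent average."""
    return len(history) >= window and history[-1] > sum(history[-window:]) / window


# Walk forward through history, only ever using past data to decide.
position, pnl = False, 0.0
for i in range(1, len(prices)):
    if position:
        pnl += prices[i] - prices[i - 1]  # mark-to-market while holding
    position = signal(prices[: i + 1])    # decide the position for the next step

print(f"backtest P&L: {pnl:+.1f}")
```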
In certain cases, the business need will overwhelm the procurement process. This happens with classical software & it will happen with AI, but it's the exception.
However the problem is solved, agentic systems will evolve the procurement process & startups will need to navigate it.
¹ OpenAI created both of these tests to measure the accuracy of its code generation models, & they are now standards for evaluating AI code generation.