Field Notes

AI Sales Paradox: When Knowing Less Means Asking Fewer Questions

Most customers lack the knowledge to evaluate the risk that the AI in your product poses to their organization. This gap slows the sales cycle and places deals at risk of legal, data protection and security delays based on uncertainty rather than actual risk.

It’s a difficult issue for builders to address - many in risk management are afraid of revealing their lack of knowledge, so they won't ask the questions that would inform better decisions.

Others stride in confidently, not realizing that their knowledge is full of gaps and misconceptions. Either way, it’s a barrier to the kind of open dialog that best addresses concerns, and the time spent explaining keeps you from focusing on the value your product delivers.

Generative AI is a new field for most people, and the technology is different enough from traditional systems that many risk management practitioners don't even know what questions to ask.

“Will you be training on our data?” is probably the most common AI-related question from prospects, but without the appropriate context the answer says almost nothing about actual risk.

There are myriad potential follow-ups that would add value but are rarely asked:

  • What data do you use for training?
  • Are you training shared models?
  • If so, how do you ensure that you don’t leak confidential data across tenants?
  • How do you secure the data retained for training outside of the production environment?
  • Who has access to it and under what conditions?
  • How are deletion requests handled for data trained into a model?
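To make the cross-tenant and deletion questions concrete, here is a minimal sketch of how a vendor might gate which tenant data is even eligible for a training run. Everything here is illustrative: `CONSENT`, `DELETION_REQUESTS`, and `select_training_data` are hypothetical names, not any real product's API.

```python
from dataclasses import dataclass

@dataclass
class Record:
    tenant_id: str
    text: str

# Hypothetical consent registry: which tenants have opted in to training.
CONSENT = {"tenant_a": True, "tenant_b": False}

# Tenants with pending deletion requests; their data must be excluded
# from the next training run (and retained copies purged separately).
DELETION_REQUESTS = {"tenant_c"}

def select_training_data(records):
    """Keep only records from tenants that opted in and have no
    outstanding deletion request."""
    return [
        r for r in records
        if CONSENT.get(r.tenant_id, False)
        and r.tenant_id not in DELETION_REQUESTS
    ]

records = [
    Record("tenant_a", "support ticket 1"),
    Record("tenant_b", "support ticket 2"),
    Record("tenant_c", "support ticket 3"),
]
print([r.tenant_id for r in select_training_data(records)])  # only tenant_a survives
```

A filter like this is the easy part; the harder answers your collateral needs to cover are how the retained data is secured, who can touch it, and how a deletion request propagates to a model the data was already trained into.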

The fact that so few customers ask these questions speaks to the core problem: most people don’t understand the technology well enough to know what they should be worried about, why they should be worried, and what can be done to mitigate those risks.

A potential customer, worrying about the wrong things.

Teach people to worry... about the right things

This lack of understanding makes sales enablement and educational white papers more important than they have ever been.

Remember how long people were afraid of having their data in the cloud, far past the point where it became clear that AWS runs infrastructure better than most in-house teams?

Now, rather than the conceptually simple change of infrastructure running in someone else’s data center, there’s an entirely unfamiliar technology with whole new classes of vulnerabilities and risks to be concerned about. This drives much greater uncertainty than the move to the cloud ever could.

The fear of Jeff Bezos personally checking out the pricing data stored on your EC2 instances has been replaced by a nebulous fear of what AI could do, driven heavily by a lack of understanding in how these systems work.

Young Jeff Bezos, looking for Target's low prices on inflatable unicorns to illegally undercut

How can you best address these fears and the underlying knowledge gap that causes them?

Start by asking and answering some basic questions yourself. They should be questions that drive your internal design and operational principles and provide a foundation for your customer-facing collateral.

It could be questions like:

  • What data will you collect?
  • How will it be used?
  • How do you handle personal and sensitive data that may interact with models?
  • How will you use data to improve system outputs?
  • How will you test the system?
  • What will you be testing for?
  • What will you build to help you understand what’s happening in the system?
  • What will you do when things go wrong?
  • Do you understand the trust boundaries?
  • Have you thought through how your system could be affected by the OWASP LLM Top 10 threats?

Now do some threat modeling - what could go wrong with your particular system? How could it go wrong? What are you doing to prevent it?
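One lightweight way to make the threat modeling concrete is to keep a structured register of what could go wrong, its impact, and your mitigation, which you can then render straight into customer-facing collateral. A minimal sketch follows; the threats are examples drawn from the OWASP LLM Top 10, and the mitigations are placeholders to be replaced with your system's actual controls.

```python
# Illustrative threat-model register; entries are examples, not a
# complete model of any real system.
THREAT_MODEL = [
    {
        "threat": "Prompt injection via user-supplied documents",
        "impact": "Model follows attacker instructions, leaks data",
        "mitigation": "Treat retrieved content as untrusted; sandbox it",
    },
    {
        "threat": "Sensitive information disclosure",
        "impact": "PII or tenant data surfaces in model output",
        "mitigation": "Redact sensitive fields before inference; filter outputs",
    },
    {
        "threat": "Excessive agency",
        "impact": "Model-triggered actions exceed user authorization",
        "mitigation": "Require explicit approval for side-effecting tool calls",
    },
]

def to_collateral(entries):
    """Render the register as plain text suitable for seeding a
    customer-facing FAQ or white paper section."""
    lines = []
    for e in entries:
        lines.append(f"Threat: {e['threat']}")
        lines.append(f"  Impact: {e['impact']}")
        lines.append(f"  Mitigation: {e['mitigation']}")
    return "\n".join(lines)

print(to_collateral(THREAT_MODEL))
```

The point of the structure is discipline: every threat you identify must carry an impact and a named mitigation before it goes in the document, which is exactly the clarity your customers can't generate on their own.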

Think hard about this. Your customers don’t know what questions to ask so you need to go above and beyond to ask difficult questions and provide clear answers to them. This is how you are going to inform customers and prospects, providing clarity as to the real risks that exist and how you’re addressing them.

Make it nice

All this information should then be synthesized into customer-facing language. Make it readable. Don’t hand the writing to your engineers without support.

Prospect, reading an engineer authored whitepaper

It should be understandable by non-engineers, so collaborate with a technical writer to make it digestible. Bring in the team that handles external collateral to make it look good and add clear diagrams to illustrate what you’re describing. Have some non-technical people read it and point out what doesn’t make sense to them, then clarify based on that feedback.

Humans have context windows too, so rather than folding this content into an existing security white paper, make it a highly readable, purpose-built document focused on a single topic with a single goal: raising customers' trust in your platform’s ability to implement and operate AI safely and securely.

Got all that? Great! Now run training sessions for the sales teams, the solutions architects and your support teams. They'll be happy to learn more about the topic, and it will improve their ability to speak confidently with customers. This frees your teams from answering repetitive, redundant and often irrelevant questions so you can focus on building a better product.