AI and authenticity
Many of us are reflecting on the disruptive potential of AI and ChatGPT across various realms. What about B2B sales?
One way to think about this is through the prism of trust. Salespeople spend a lot of time attempting to earn the trust of prospects and clients. Trust is their primary professional currency, because without it they lack the credibility to make a commercial recommendation. But trust is arguably in shorter supply than ever, owing to shrinking attention spans and a deeply rooted wariness of sales professionals.
What is the basis of trust, and how might AI support it? According to Harvard professors Frances Frei and Anne Morriss, trust has three foundations: capability, empathy, and authenticity. People trust you if they respect your knowledge and skills, if you’ve proven you understand them, and if they feel they’re getting the real you. Could AI help to build and strengthen these foundations?
For capability, it seems the obvious answer is “yes.” AI could amplify the competence of a seller by synthesizing information at a speed and scale that are beyond unassisted human capacity. Sellers could weave original and surprising insights into their written communications, increasing the impact of their messaging. Until now, no sales professional could justify the time required to review and distill every public pronouncement from an executive prospect over, say, a four-year period. With the help of AI, this would be a straightforward task.
In terms of empathy, the potential benefits seem equally obvious: effective empathy could be dramatically enhanced. I’m adding the word “effective” to acknowledge the algorithmic dimension here. After all, a computer, not a person, would notice unique individual patterns. But this would probably still feel like empathy. We don’t particularly care that Amazon used a machine to send us the perfect product suggestion. We simply feel well understood if the suggestion is sound. It’s the outcome, not the process, that checks the empathy box. AI could help sellers speak to much deeper levels of personal interest, drawing from a plethora of publicly available data sets.
On the level of authenticity, things start to get a little freaky. Initially, this seems like the wobbliest leg of our trust triad. If we imagine a world in which AI-assisted communication has become the norm, won’t most of us be on guard against “inauthentic” messaging? Won’t there be a higher burden of proof to satisfy our need to be dealing with a real person? Won’t a greater premium be placed on evidence establishing the seller’s humanity?
Quite possibly. But what if “humanity” and “authenticity” become fundamentally redefined? Clearly, this is happening already. When we receive a “Happy Birthday” wish, for example, we know it usually springs from an electronic reminder, not from the well-wisher having internalized the significance of our date of birth. We still appreciate the message, and we don’t discount the sender’s sincerity. In this case, and in many others, the fusion of person and machine is well underway.
Descartes’ “I think, therefore I am” was the axiom that launched the Scientific Revolution, establishing the individual as the primary protagonist in our relationship with reality. It’s now clear that unaided reason is no longer our primary navigation system. Increasingly, what we believe and how we relate to others is shaped by a dynamic partnership between humans and technology.