Hi, I'm Scott Barton and this is what I wish
I knew about AI. What is AI and how can it help you forecast
revenue and be more effective with managing complex incentive compensation programs? I want to introduce our guest today, Joel
Shapiro. Joel is Chief Analytics Officer at Varicent
and Professor of Data Analytics at Northwestern University Kellogg School of Management. Joel's work has been featured globally as
a contributor to CNBC, Harvard Business Review, Forbes and other top journals and publications. Joel, welcome. Scott, good to be with you. Joel, you know we hear all the time about
AI, I certainly do, and all the wonderful things that it's bringing to our lives. In real simple terms, what is artificial intelligence? What is AI? Yeah, you're right. It's ubiquitous, certainly both its use and
sort of the terminology. So the way that I think about AI is that it
essentially is a capability of, let's call it computational sophistication. So it's using really sophisticated computing
techniques that allow us to essentially tease out trends. That's really sort of the foundation. I think, as we'll talk about it today, that's
going to become pretty important. It teases out trends in data and phenomena
and so forth, so that we can understand what's likely to happen next. That's kind of the key to all of this: AI
allows us to take what's happened in the past and make a really good guess about what's likely to happen in the future. And when we know what's likely to happen in
the future,
we can typically do something to leverage that knowledge. And that's not terribly different from the
way that human intelligence actually works, right? It's all about trend identification and processing
phenomena so that we can actually do something meaningful with that information. Artificial Intelligence is the same thing. We just use computers instead of human brains. Right. Makes sense. So I'm hearing sophisticated, trends, you
know, making sure that we do things that are meaningful. So what are good applications for AI? Where does it make sense to use AI? You know, how can companies make good decisions
about, investing in this technology? So, you know, I see a lot of companies who
decide that they want to be, let's call it data driven, or use advanced analytics. So they want to get into machine learning
or AI or whatever you want to call it. A lot of these things are sort of variations
on the theme but oftentimes, we see companies that jump right in before they really understand
how the analytics, or the AI, or the machine learning, or whatever it is, how it's going
to actually help them. And this is sort of hugely problematic, right? I mean, if you're going to invest even a nickel in AI (and it typically doesn't have to be super expensive), it's going to cost you at least something. You really want to know what problems you're
trying to solve, or what questions you're trying to answer, or what opportunities you're
trying to address. And so I get really nervous when I see
companies
say we want to transform, we want to you know, do our digital transformation because we want
to embrace AI. We know that this is important and cutting
edge and so forth. And when they sort of lead with the transformation
or they lead with the technology, I get a little nervous. And so, you know, my sort of take, or typical spiel, when I talk to a company is like, well, tell me about what the problem is that you're trying to solve. And if they can do that at a real level of specificity: we want to grow market share, we want to, you know, retain more customers, or whatever it might be. If they can get specific on it, then I feel
pretty good. Okay, good. You've got a direction that you're going. But when it's just sort of this vague notion of we want to be more competitive, or vague notions of personalization and customization without more than that, I get nervous because
it all starts with what's your business problem or your business question or your business
opportunity?
And from there a data strategy, whether it
includes AI or not, can flow pretty nicely. Yeah, yeah. Makes sense. What problem are we trying to solve? You know, having worked in sales operations,
revenue operations for years, the problem often was around just surprises, you know,
and part of the solution was mitigating those surprises. So these are things like, you know, deals
that were in the pipeline that, you know, a few days before the end of the quarter, like, grew three or 5x in value, thereby retiring somebody's annual quota, or, you know, sales campaigns for cross sell that, you know, never really got traction. And you know, people are surprised by this,
right. So, how can AI be used to mitigate or limit these types of surprises that seem to come up all the time in a revenue operations
environment? Yeah, so look, I mean, AI can be helpful, sort of foundational, to what you're talking about. Avoiding surprises is about being able to better predict the future. But I don't want to sort of overstate it, because
the fact is that sometimes surprises happen. And there are certain things in the world
that are unpredictable. There are chance occurrences, there are unfortunate
or fortunate accidents, and things change. What AI is really good at, typically, is finding some of that. So if we know that there are certain trends in things that have happened in the past, certain salespeople working on certain kinds of deals with certain products in certain geographies, and in the past we've seen, disproportionately frequently, a big deal, you know, come out of that when we didn't expect it, AI can be pretty good at teasing that out. Say, hey, you know, in this particular set
of circumstances, we're more than likely to see that kind of thing happening, or we're
really unlikely to see a big deal come in whatever it might be. So because AI is sort of foundationally about
prediction, that is almost definitionally helping you to sort of mitigate those unforeseen
consequences
or to mitigate those surprises. You know, the thing that sometimes people get challenged with is, as I started with, they get a little overly confident with AI. You know, AI can move the needle. It is never perfect, especially when you are
talking about social systems. And sales is kind of like the ultimate social
system, it's people and it's people doing things trying to bring about some well-defined
outcome. Doesn't mean that we ever get things like
perfect predictions, but what we want with AI is to make things a little bit more predictable than without that computational sophistication. Right? So it's not going to eliminate the surprises
altogether, but it can kind of augment, right, certain pieces of information. I mean, with, you know, the example I gave of the huge deal seemingly coming out of nowhere, we could perhaps see that that is a function of a certain type of customer. And a single rep isn't going to know that, right, because they don't have access to, you know, all the customers out there necessarily. Yeah, exactly. And it's this confluence of events, and that's
where computational sophistication just kicks human beings' butts, right. I mean, we're two pretty smart guys; we can sit there and we can look at data for a pretty long time. Lots of salespeople, lots of different kinds
of customers, different points of the year, in different contexts, whatever it is, and
we just can't process information in the same
way. So figuring out those trends that say, in this unique context, we're likely to see an uptick in the frequency of this big deal coming in, you know, we'd have to get pretty lucky to be able to see that, and even if we could see it, we probably wouldn't be all that confident in what we're seeing. That's why we use machines, because they're really, really good at exactly that. At two in the morning, I'm probably going to miss a few details in terms of the sales pipeline,
and, you know, I just think of all the responsibility that typically falls on the part of a rep and the information that he or she has to put into that pipeline. And it seems like a lot of the kind of critique around the quality of the data in a pipeline ultimately falls on the rep. It's just, you know, one person and they're
going to miss things, right? Sure, yeah. So how can sales managers just better manage
the pipeline using AI? It's getting really specific in terms of that pipeline piece of it, and it's, you know, it's a rolling forecast. Yeah, look, I mean, so the way that I think
about it, you know, when somebody says to me, Where does something like AI really shine? Broadly speaking, even if we go beyond sales
pipelines and so forth, the way that I tend to think about it is it makes us really efficient. And what I mean by that is we can be really good at customizing what we do: the right things, at the right time, to the right people, to bring about the desired outcomes. And I think that's very true of the pipeline; that's how I think about it. And so, the example that I often use goes
kind of like this. Imagine that we had an AI model, a tool that allowed us to take any given person anywhere within a sales pipeline and come up with, let's call it, a predicted likelihood that they would actually buy (that conversion, if you will), and let's just imagine... let's grab three of them at random. And let's imagine that one person, according to our AI tool, has a 95% likelihood of buying, so pretty likely to buy. Another person has, let's say, a 50% likelihood
of buying and then another has a 5% likelihood of buying, right, so our AI tells us that
with these three people, we've got one very likely, one very unlikely, and one in the
middle. And I like to ask salespeople and sales leaders,
I'll say, look, you've got limited resources, of course. Where do you want to spend your most time? Where do you want to spend your valuable time and resources if you could only choose one of those three prospects to engage with? Which one would it be? Just out of curiosity, Scott, which one do you think people mostly tell me they want to spend their time with: the 95, the 50, or the five? Curious what your take is. Probably the five. Well, I don't know about that. So it's funny, I'm glad I asked that question, because actually what people will do is they'll say to me that they want to spend time with the 95% person. And the reason for that, I think, is kind of straightforward. They say,
look, if I've got somebody who really wants to buy, I want to make sure that I'm doing whatever I can to get them across the
finish line. Now the way that I think about it is, look, if you've got somebody who's really likely to buy, you've got to make sure you complete it, but you want a pretty light touch. And the reason for that, in my opinion, is that if I've got somebody very likely to do what I already want, I'm not going to spend, or at least I shouldn't spend, a whole lot of time trying to get them to do what they're already going to do anyway. And typically what I'll tell people is, look,
the 5% person might be just too unlikely to buy; it might not be worth your time spending there. Oftentimes we find it's those people in the middle, that are sort of on the fence, where your time is most well spent. But of course, that risks not closing that deal: you spend time, and if you can only move someone, let's say, from 50% likely to close to 60% likely to close, people get frustrated and say, oh man, I could have done a little bit of good with that 95% person. However you look at it, the key is if you start with this foundation
of knowing what's likely to happen, and to be able to differentiate, you know, if I have
1000 prospects in my pipeline, AI is going to give me 1000 different pieces of information, one about each of those, and whether you buy into what I said or you have a different philosophy of how you would intervene, the point is, AI lets you intervene differentially, or differently, with different people. Even if you don't want to intervene with the people who are really unlikely to buy, at least you know who those are, empirically, as opposed to guessing. And so, sort of circling back to what I said
before, I think of AI as being about how you allocate your resources efficiently. It allows you to target individuals, go in and say, this person is on the fence; I can even test out some initiatives to figure out what's more likely to make them buy, or move them along in the pipeline, or whatever it is. And that's pretty cool. That's nothing that we've ever been able to do before, until we got these very sophisticated computational abilities. Yeah, yeah. And that's why, you know, I find AI just fascinating
in the space of, you know, sales performance management because we're dealing with, you
know, the unpredictable behaviors of customers, and also of reps, and just different styles. And, you know, I picked the 5% based on what I see from a lot of sales professionals who really like a good challenge. And you know, those things that seem like they're going to happen anyway, maybe they'll let those go, but it might not be the best use of their time. And, you know, just knowing, right, from the perspective of a sales manager, where to best focus your resources so that they can be productive and successful is really key. Well, that's what I was going to say. That's actually... it's a super important point that you make. If you said to me, Joel, you have these three
prospects and your goal is to close at least one deal. I go to the 95% person of course, because
I've got to make sure I have that one. But when you start talking about sales management, sales leadership, you're not just looking at that one deal. You want to spread your resources so that, you know, a rising tide lifts all boats, whatever it might be. You want to make sure that in the aggregate you are seeing improvement, putting your resources where they do the most good, and that's where I'd sort of recommend you maybe take it off the 95%, you know, put it to something else. But, you know, the fewer deals someone works
on, potentially, the more likely they might be to think, I've got to close just this one. And I think, maybe it's a little bit of desperation, maybe it's a little bit of nervousness, maybe it's just a little bit of, you know, self protection, that they go a little too heavy on the deals that are already likely to close, and that's hard. By the way, it's hard. In my experience, it is hard for leadership and management to tell people how to sort of spread out risk in a way that benefits the organization as a whole, because typically people are incentivized on an individual basis. Yes, that's right. Yeah. And you know, they can be so vested in a particular
opportunity that, even though it's 5% now, they just have a personal mission to make something of it. But with that low of a probability, there's a good chance that they, you know, finish the month or quarter and don't really have anything objective to show for it, which is really disappointing. You know, one of the mechanisms that I see used frequently is a source of frustration, and that is SPIFs. Are you familiar with that? A little bit. Why don't you define it for the audience, and then I'll pretend that I know what you're talking about. Yeah, so these are essentially campaigns, right. So these are programs set up outside of the normal sales incentive plan that are designed to focus salespeople's attention on, typically,
a cross-sell campaign. And the thought is that, you know, if we didn't put the SPIF plans in place, we're not going to get the focus, because, you know, it could be they're not going to generate enough revenue to warrant separate focus. So there's a lot of these SPIFs that pop up, and a lot of organizations might have 15 or 20 of these in a year. And for the people that administer these things, it's painful, because you've got to set up, essentially, mini, you know, sales compensation plans. I think for a lot of business stakeholders,
you know, their confidence in the efficacy of these things is generally waning. So if we just step back to what the business problem is here, we're trying to get focus on cross-sells. How can we use AI for something like that, so that we're allocating our resources in a smarter way? Not just the people and where they should focus to sell these things, but also, you know, think of all the money that goes into these SPIF campaigns; it can be a lot of money. And again, if you're not sure where that money's going, it's not an effective use of the resource. Well, yeah, so like, there are a lot of different
elements of this that I can talk about, but let me just sort of point out a couple of
things. So number one, if the idea is to do a better job of cross selling, I think at this point, because we're measuring so many different things, people would be pretty foolish not to take advantage of all of those metrics and our ability to be smart about what we measure and how we process it, to shed some insight on what cross-selling opportunities there are. So that's one of the things that AI is great at; it's actually not that hard for AI to be good at it. It's kind of, you know, not terribly dissimilar from, you know, Netflix, that sees that I'm watching a TV show and then they try and cross sell me something else, or they have something recommended for me. And while it doesn't work exactly like that with cross selling products or services, or whatever it might be, there's a similar kind of mechanism by which that works. And so, I would say first of all, before you
go ahead and craft something like this, make sure it's data informed and AI can be really
good at helping you come up with strategies that are more likely to be successful when
it comes to cross selling. But once you've developed this initiative,
once you've developed your SPIF, you know, I hear a lot of people who express frustration
that, oh, we've got to do this one off, and we don't even know that it's effective and
so forth. But I gotta tell you, that one of the ways
that people succeed with data and with analytics and AI, is they run lots and lots of tests
regularly. It is one of the keys to being good with analytics
and it becomes fuel for AI. And you know, when I say that they run tests, the danger always with using that word is that people are like, oh yeah, we run tests all the time. Sometimes when people say that they run tests, what they really mean is that they try new stuff, and they're not particularly deliberate or strategic about how they try it to get information that gives them something meaningful. So that's great that you try new things, but if you're not really deliberate and strategic about how you're measuring things, it's not going to give you a whole lot of information. Maybe you see, oh, this went up, this went down,
whatever it is, but the beauty of regular testing of lots of different ideas isn't just, you know, knowing if it works or not, doing a little bit of a pilot test before we invest too many resources. What you're doing, if you do it right, is constantly building data. You're going out to customers, you're trying new things, you're measuring whether it works, and that allows you to bring in those data even if something doesn't work.
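Joel's distinction between deliberately testing and merely trying new stuff can be made concrete. Treating a SPIF pilot as a controlled experiment means holding out a comparison group and checking whether the observed lift is bigger than chance. Here is a minimal sketch in Python; the function and the conversion numbers are invented for illustration, not taken from the conversation:

```python
import math

def two_proportion_test(conv_a, n_a, conv_b, n_b):
    """Compare conversion rates between a control group (no SPIF)
    and a test group (SPIF) with a two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal approximation
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return p_b - p_a, z, p_value

# Hypothetical pilot: 40/400 cross-sells without the SPIF, 62/400 with it
lift, z, p = two_proportion_test(40, 400, 62, 400)
print(f"lift={lift:.3f}, z={z:.2f}, p={p:.4f}")
```

The statistics matter less than the habit: every pilot run this way produces labeled before-and-after data that, as Joel says, becomes fuel for AI even when the campaign itself flops.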
There's value in bringing in data that fuels
AI to tell you how to do things better in other contexts or the next time you go into
that same context. And so, you know, I hear you talking about lots of little one-offs; they're a little bit painful and so forth. And I would hope that a lot of people see that as an opportunity, not to stop doing it because it's painful, but to make regular testing, and capturing measures that matter, a little bit easier. Because that is just one of the biggest differentiators when you take places that are really sophisticated, with great data, great analytics, great AI, and places that aren't. It's often in their ability to regularly test
stuff, learn from it and implement those learnings. Right? Yeah, so what I'm hearing is, you know, don't necessarily just stop doing SPIFs, but do it in a way that is more intelligent, where we're, you know, collecting and using the data so we get a better idea of what works and what doesn't. And I see a lot of that happen in marketing, you know, the marketing side of the house. Yes, for sure. A lot of it in sales, in a way that, you know, it's really treated more like an experiment or a test, versus it coming off as just kind of a bunch of random campaigns. Well, and right. And so the problem with the random campaign
syndrome, if we call that a syndrome, the problem with that is that when it tells you something that you don't want to hear, it's very easy to dismiss it, because it wasn't particularly rigorous. When it tells you something you want to hear, it's easy to embrace it, and, you know, nobody can really, you know, chop it down, because there's no rigor behind it, right. It's sort of too easy to let it confirm what you wanted, or, you know, sort of make the point that you hope it makes. And the whole point of being more rigorous and more analytically driven and using AI is to get some of the politics and the bias out. You can't always do it; it's not possible to get it all out of there. But the whole idea is to let the best ideas win, and some good testing does that, if you're just really deliberate about it. It's important because, like, that could be
a good candidate for AI, and just trying to stick with a theme here that we started with around being real specific, as opposed to trying to boil the ocean. Yeah. The other thing that comes to mind, and the last thing I'll ask you about in terms of whether it's a good, you know, a good proposition for AI, is, you know, from a company I've worked at previously. We went through this issue of responding to situations where a rep had basically resigned. He or she told their sales manager, look, I've got a great offer, I'm going to leave. But they're kind of leaving the door open, right, to see if the manager is going to respond with more money, you know, or stock or something. And so this set off, right, a series of events, you know, that were called saves, trying to save the individual. And there are a lot of skeptics in this process, myself being one. You know, I'm not really convinced that whatever we do, whether it's a big bonus or we give them a bunch of RSUs, is really going to address the issue of them being disengaged and considering another job. Maybe they stick around for a while, but probably after a while, you know, 15 months or so, they're going to leave. And so my point is, they've reached the point of disengagement, you know, and all we're doing is just responding.
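The proactive approach Scott goes on to describe, predicting disengagement before the resignation conversation ever happens, is at heart a scoring problem: turn each rep's recent signals into a churn probability and flag the high-risk ones. A minimal sketch, with invented feature names and hand-set weights purely for illustration (a real model would learn its weights from historical rep data):

```python
import math

def churn_risk(months_since_raise, pct_of_quota, deals_last_90d):
    """Toy churn-risk score: a logistic model with hand-set weights.
    A real model would learn these weights from historical rep data."""
    z = (-1.0
         + 0.15 * months_since_raise   # longer without a raise -> riskier
         - 2.5 * pct_of_quota          # hitting quota -> less risky
         - 0.10 * deals_last_90d)      # active pipeline -> less risky
    return 1 / (1 + math.exp(-z))      # squash to a 0..1 probability

reps = {
    "rep_a": churn_risk(months_since_raise=18, pct_of_quota=0.4, deals_last_90d=2),
    "rep_b": churn_risk(months_since_raise=3, pct_of_quota=1.1, deals_last_90d=9),
}
# Flag anyone above a chosen threshold for proactive outreach
at_risk = [name for name, p in reps.items() if p > 0.5]
print(reps, at_risk)
```

Even when no save is possible, a ranked list like this gives managers the heads-up to make alternate plans rather than react to resignations one at a time.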
So what we attempted to do with data to address this problem is to be more proactive and try to predict, based on a bunch of data, when reps were likely to disengage. And I'll tell you, just, you know, having gone through this, it was a painful exercise. You know, we were using spreadsheets, we were trying to make sense of a bunch of data, I was staying up till two in the morning trying to make sense of this stuff. And then, you know, what the managers got as a tool, so they could engage in this process, wasn't particularly user friendly. There was no dashboard really; it's just a bunch of spreadsheets. So for a situation like trying to limit churn, being proactive about very expensive and valuable, you know, sales resources disengaging, getting ahead of that problem, how can AI help? Yeah, so predicting churn is hard, whether
it's customer churn or salesperson churn or other employee churn or whatever it is, it's
hard to do. But there are some pretty substantial upsides if you can do it. Again, remember, we're not relying on these models to be perfect. We just want them to be at least a little bit better than we could do without those models. And so, you know, if we're talking about identifying, whether you've got 10 people on a sales team or 1,000 people, whatever it is, AI can be used to generate, let's call it a flag, or risk flag, right? We can think of it, again, kind of like my prospect example: prospects converting, a set of probabilities from 0% to 100%. Someone's 90% likely to churn, 10% likely to churn, whatever it might be. And so, AI is pretty good at looking at all
the things that happen to all the different salespeople and all the different contexts
that a big deal comes in or falls through, or whatever it is, and being able to say, at any given point in time, the likelihood that someone is going to quit within, I don't know, a three month period or whatever it is, is x percent. The beauty of that is, when you know that across your entire team, you can actually do some pretty important things with it. Like, one of them is, even if you couldn't save anybody, the fact is, if you have a better sense, if you have a heads up, that certain people are going to leave, it's pretty nice to be able to know that, right, to make alternate plans. It's also really nice to be able to know which ones you're going to try and save. If you've got somebody who's super unlikely to quit, and you've got someone who's more likely to quit, it's kind of the same thing as what we talked about before with allocating your resources to the right prospects. You know, hey, leave this person alone. They love it. They're doing great. Let's focus our time and effort on this one person who seems likely to be at risk in the near future. And, you know, when you start talking about
how you actually intervene with them, AI can be pretty good: if you can sort of empirically tease out what the factors are that are related to someone quitting, not only can you have advance notice of it and intervene, sometimes it can give you some good ideas of what you could do, if anything. Again, for some people there's probably not a whole lot you can do, but for other people it might be like, oh, if we paid them more, if we, you know, gave them more time off, or whatever it is that we might do, you know, sent them a nice bottle of wine, whatever it is. That can all go into sort of our big suite of intelligence to figure out how we keep more people. Or, as I said before, if we can't keep them, at least we know that they're going to leave, which can be valuable
in and of itself. Yeah. Wow. I mean, super cool. You know, very important, I know, for a lot of companies, in terms of just, you know, getting ahead of those things that tend to disengage people. Yeah, I mean, this is good stuff. I've learned a lot just from this conversation. And you know, I wish I had known a lot of these things as I was going about, you know, these tasks that are really data intensive and require a lot of mental horsepower. Sounds like they don't necessarily have to. So this has been great, Joel. You know, the spirit of the series is the growth mindset, you know, growing and learning. So, for you, you know, what is something you wish you knew as you go about, you know, your growth journey? You know, there's a thing that, I mean, it's
funny because it's something that I actually know a decent amount about, but when I talk about it, teach about it, think about it, whatever it is, I still feel like I don't have a great grasp of it, and I wish I knew more. And it basically goes to this. Yeah, I can build a pretty darn good model. It's accurate. All the measures of sort of accuracy of an AI model are met and all that kind of stuff, and I know it's a good model. But very often, when we're trying to create change in an organization, I see a lot of people who specialize in data who are like, oh, I built a great model, and the results and the model speak for itself. Well, it never does. Right. That is not the way people make change in
an organization. The fact is that they need to be convinced and influenced and, you know, cajoled sometimes. Models don't speak for themselves. And so I continue to observe a lot of times where good analytics and good insights don't live up to their promise, because the people who actually have to act on them just aren't convinced. And so what I wish I knew more about is how people actually make those decisions. And I've talked to people, and I've done some observation,
and I feel like I have some good insight into it. But it's like, it's super interesting. It's maybe a combination of behavioral science, or maybe behavioral economics; it's a little bit of armchair psychology, I suppose, a lot of things that I don't have any formal training in but try to read up on. But it's hard, because, you know, the way that I see it, AI is not meant to replace people. It's meant to supplement their judgment, supplement the business and make the business decisions better and easier. But at some point people have to buy into it. And why don't they? We don't necessarily follow the same logic in making those decisions, right. Yeah, I'm still fascinated by that and still
think a lot about that. I wish I knew more about it. Very aspirational. Thanks for sharing. And again, this has been super, super informative. Just any, you know, last words of wisdom, or, you know, piece of advice that you want our listeners to know about? I guess, you know, the one thing that I would say is, I still, especially in the sales domain, see a lot of people in organizations who are adhering to rules of thumb, long-standing industry knowledge, that they sort of swear by. And those are really good opportunities to actually empirically test whether those things are true. You know, whether it be, you know, someone who says, oh, it's best to really simplify the number of comp plans that we have. Maybe that's true. AI gives you the opportunity to perhaps even
create a customized one for each individual. Now, maybe that's overwhelming and ridiculous; nobody wants to go in that direction. But every time I hear someone say, well, we just know it's true, or that's been true forever, or that's just a good rule of thumb, there's that part of my brain that always goes, oh, I want to test that empirically and see if it's really true. And I guess I'd like to impart that same level of, maybe, skepticism or curiosity in people, because there's a lot of information out there that can be, and is being, measured, and some pretty darn good analytics tools that allow us to not take those things at face value and actually test to see if they're really right. So that's kind of a fun way to think about it; I like it, and it's sort of always where my head is. Yeah, there you go. And you know, also to the audience, let us
know what you wish you knew, and we'll dive in, hopefully. Subscribe, you know, like what you heard and watched, and if you didn't, dislike it, and we'll catch you next time. Thanks a lot, Joel. Thanks, Scott. It's been great.