Podcast Episode 383

Adam Braff: Forecasting

HOW TO THRIVE AS AN INDEPENDENT PROFESSIONAL

Show Notes

 

Adam Braff is a former McKinsey partner; he runs a data and analytics consulting firm called Braff & Co., where he advises investors, boards, and senior leadership teams on extracting maximum value from their investments in data and analytics. Today, we talk about forecasting.

Key points include:

  • 02:17: What Adam learned from running a forecasting contest
  • 07:04: The problem with overconfidence
  • 10:42: What differentiates a good from a bad forecaster
  • 16:23: Researching topics to make a prediction
  • 20:42: How forecasting has manifested in Adam’s advisory work
  • 30:16: Addressing how to get better at forecasting

 

You can learn more about forecasting and check out Adam’s masterclass at https://braff.co/advice.

 


Will Bachman 00:01
Hello, and welcome to Unleashed, the show that explores how to thrive as an independent professional. I’m your host, Will Bachman, and I’m here today with Adam Braff, who is returning to the show. Adam is a former McKinsey partner, and he runs a data and analytics consulting firm called Braff & Co. Adam, welcome to the show.

Adam Braff 00:22
Thanks, Will, it’s great to be back.

Will Bachman 00:23
So, Adam, I think we were planning to talk about forecasting today. And I’m really interested: you were telling me before we started recording that you’ve been running a forecasting contest with your friends for six years. Tell me about that.

Adam Braff 00:36
Yeah, it started with Philip Tetlock’s book Superforecasting, which came out about six years ago, and which perhaps you read as well; I and a bunch of my friends certainly did. We were all really interested in what Tetlock had to say about forecasting contests and about how to become a better forecaster. So I launched the initial contest in 2016, where I set out a bunch of propositions for people to predict yes or no, would they happen or not. I got a few dozen entries that year, scored them, and the rest is history.

Will Bachman 01:15
What sort of questions did you ask people to forecast?

Adam Braff 01:21
They are binary propositions on a number of different topics. So we will have questions ranging from entertainment, like will specific artists win the Grammy Awards or the Oscars, to sports, like will a certain team or conference win the Super Bowl, to celebrity deaths and business and everything in between. The goal in picking the topics is to select topics where no one forecaster would have an inherent advantage because they are a deep expert in politics or business, and to make sure that the propositions play out over the course of the year, so that we have the entertainment value of keeping track of the leaderboard as we go.

Will Bachman 02:11
And what have you learned over the past six years of running this experiment?

Adam Braff 02:17
Well, for one thing, it’s really hard to make these forecasts. I should explain in a little more detail how the contest plays out. At the beginning of the year, in January, I crowdsource these propositions: I ask my Facebook and LinkedIn network to give me a bunch of topics for the year to come. Most of them are typically about Donald Trump or about the environment or about certain other celebrities. I curate those and put them together into a set of 25 binary propositions that are going to play out over the year. So for example, last year, because everything was happening in January, there were no propositions directly about the COVID-19 pandemic; nobody in the US even knew what that was in January. So we had propositions like: by March 15, Lori Loughlin will be sentenced to, or will reach a plea agreement entailing, at least a year in prison for her role in the Varsity Blues scandal. Or there would be a prop that says Japan will finish within the top three nations in the 2020 Olympics total medal count. There was a proposition that by September 1, Scarlett Johansson and Colin Jost would be married to each other.

So the process is to set those props out, and then I make my own forecasts, from zero to 100% likely that the thing will happen, and all the other contestants make their own forecasts from zero to 100%. We all do this separately. They submit the forecasts through a form, and I put it all together and maintain a somewhat automatically running leaderboard of who’s ahead and who’s behind, based on something called the Brier score, which is the square of the error in anyone’s forecast. I can explain that in a little more detail if you like.

Will Bachman 04:22
Yeah, I’d like to understand that. It’s one thing if it were just yes or no, will it happen. But how do you factor in all these percentages? If someone says something is 60% likely to happen, and it does or it doesn’t, how does that affect their score?

Adam Braff 04:37
Yeah, so if you bet 60% on the ScarJo proposition, of Scarlett Johansson being married by September 1, and if it had turned out that they did get married before that date, then that would resolve as a yes. The true answer then is one, and your forecast, as you said, is 0.6. So your distance from the correct answer, the error in your forecast, is 0.4, because that’s one minus 0.6. And the Brier score is simply that 0.4 squared, so you would get a score of 0.16 for that forecast. If they didn’t get married before September 1 (which is in fact what happened; they got married after that date), then the prop resolves as a no, or a zero, and your error would be 0.6. So 0.6 squared is 0.36, and that would be your Brier score. So the lower the better, and your goal is to try to get a Brier score below, let’s say, 0.1 across all of the propositions in order to win the contest.
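(For readers who want the arithmetic spelled out, here is a minimal Python sketch of the Brier scoring Adam describes. The function name and the averaging across props are illustrative assumptions, not his actual scoring sheet.)

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast (0 to 1) and the
    realized outcome (1 = the proposition happened, 0 = it did not)."""
    return (outcome - forecast) ** 2

# The ScarJo example from the conversation: a 60% forecast.
print(brier_score(0.6, 1))  # resolves yes -> 0.16
print(brier_score(0.6, 0))  # resolves no  -> 0.36 (what actually happened)

# A contestant's overall score can then be averaged across all props
# (hypothetical entries and resolutions, just to show the mechanics).
forecasts = [0.6, 0.9, 0.1]
outcomes = [0, 1, 0]
print(sum(brier_score(f, o) for f, o in zip(forecasts, outcomes)) / len(forecasts))
```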

Will Bachman 05:48
So you really are incentivized, if you think it’s going to happen, to guess as high as possible, 80% or 90%. Because if you just hedge and guess 50% on everything, then by definition you get a 0.25 score on everything, and you would lose. So you’re really trying to go as high as possible if you think it’s going to happen, or as low as possible if you think it’s not going to happen.

Adam Braff 07:04
That’s right. It’s kind of a tax on overconfidence. The fact that we square the error makes it what’s called a strictly proper scoring rule, where you have an incentive to make a truly accurate forecast based on what you really think the probability is. Trying to game the system by cheating toward 50%, or cheating toward the extremes, only punishes you in the long run. But you’re absolutely right: a person who guesses 50% down the line will guarantee a Brier score of 0.25. And interestingly, every year more than half of the participants in the contest get a worse Brier score than 0.25; they do worse than if they had simply predicted 50% down the line. That’s a fascinating point, and it leads to one of the lessons learned, which is about overconfidence.
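(A small sketch of that "tax on overconfidence": with the squared-error rule, your expected score is minimized by reporting the probability you actually believe. The 0.7 true probability below is just an assumed example.)

```python
import numpy as np

true_p = 0.7                        # assumed true probability of the event
forecasts = np.linspace(0, 1, 101)  # candidate forecasts in 1% steps

# Expected Brier score of forecast q when the event occurs with probability p:
# E[score] = p * (1 - q)**2 + (1 - p) * q**2, which is minimized at q = p.
expected = true_p * (1 - forecasts) ** 2 + (1 - true_p) * forecasts ** 2

print(round(float(forecasts[np.argmin(expected)]), 2))  # 0.7: honesty wins
print(round(float(expected[50]), 3))   # 0.25: the always-50% hedge
print(round(float(expected[100]), 3))  # 0.3: the overconfident extremist does worse
```

Cheating toward 50% or toward the 0/100 extremes both raise the expected penalty, which is why the scoring rewards reporting what you really think.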

Will Bachman 07:51
So it doesn’t pay to be overconfident either. It does incentivize you, if you really have no idea, to just guess 50%.

Adam Braff 07:59
Exactly. Now, you can make that choice at an individual level. Say there’s a question that comes up in the contest; every year there’s some question about cryptocurrencies. So this year there is an ongoing proposition in the contest which says one bitcoin will be worth more than 20 troy ounces of gold at the end of the first quarter. We’re having this conversation in March of 2021, but the value of Bitcoin at post time, when the contest was taking people’s predictions in January, was basically 20 ounces of gold to a bitcoin, whatever that was at the time, about $30,000. It has since skyrocketed to about 30 ounces of gold; Bitcoin has gone up much faster than gold has. But that’s the kind of proposition where a reasonable person back in January would say: I don’t know if Bitcoin is going to go up or down. I also don’t know if gold is going to go up or down. I’m not a currency trader, and I certainly am not an expert in the relative value of these two financial instruments. So perhaps the right epistemic position to have on that is 50%; I’m going to cut my losses, move on, and not waste any more time predicting that one. I don’t know if I’m a trend follower or a mean reverter on this thing, so 50% is a good guess. You can do that kind of thinking for each individual proposition, until you come to a proposition where you feel like you have either some information advantage or some way of making a sharper forecast.

Will Bachman 09:37
So this is a really interesting exercise, because often in consulting we are asked to make some kind of forecast, and my feeling often has been that it’s just not useful for the client. So often, on maybe a commercial due diligence, you’re asked, okay, what’s the market growth rate forecast for the next five years, something like that. I mean, look, I can pull something from some report, but honestly, it’s just not really worth the paper it’s written on. Who knows, right? You could say, well, it’s been 5%, 6%, 8% for the last six years, but maybe it’s run its course; who knows. But they always want to see that, so you put something in. This, though, is actually testing and determining whether you were correct or not. So what have you learned about forecasting from this whole process? Are people able to get better over time? Are some people just consistently better than others? What differentiates those who are good from those who are lousy?

Adam Braff 10:42
I would say there are probably five things that differentiate the good from the lousy, and then you can have the debate about which way the causal arrow runs. But there are certainly some factors that repeatedly show up as correlating with good performance, which is to say a persistently low Brier score.

The first is the question we talked about earlier, making extreme forecasts. Every year there are a number of people, we call them extremists, who make more than half of their forecasts as literally zero or 100%. Just think about that for a second. Take the proposition that said last year, Billie Eilish is going to win Best Album and Best New Artist at the 2020 Grammy Awards, which she ended up doing. That’s not a 0% or 100% proposition. Whatever you may think about her music, there is an actual betting market on those things; you can look at the independent probabilities of that happening, and at the time it was, I think, roughly 50/50 that she would win both. And so she did. But if you’re an extremist, and you’re committed to the idea that you’re going to put in zeros and hundreds because you think that’s the only way to win a forecasting contest, you’re going to lose. It’s simply the math of how many participants there are and how many questions there are; it just doesn’t pay to enter extreme forecasts that deviate from what you think is the right answer. The people who win the contest will ultimately have no more than two or three forecasts at the zero or 100 level out of 25.

The next factor is teamwork. It’s really good to work in a team: people who report that they collaborated on their forecasts around the family table tend to perform better than people who work by themselves. They’re sharing different viewpoints, they’re arguing about stuff, they’re correcting each other’s mistakes to some extent and working out their errors that way. In fact, when you make an ensemble of all the forecasts, you get the wisdom of the crowds, which routinely does very well, a kind of top-five performance across all the forecasters. Teamwork really makes a difference.

The third thing I would say is making an effort and actually doing a bit of research into the base rate of things. In your example of a commercial due diligence, looking at the historical growth rate of sales is better than nothing. Starting with "it used to grow at seven or 8%" puts you ahead of somebody who is going to look at, let’s say, GDP growing at 2% and use that as their forecast. So putting in some work to do at least a bit of research into what your forecast should be is better than not doing it. And I say this because one of the questions I ask on the contest submission form is: did you make your picks at random, like one of my kids does every year, or did you put some work into it? I found that I personally spend between two and four hours on one weekend day in January making my forecasts, and that has served me well.

The fourth thing that correlates with doing well is wagering. Being in the habit of making bets in casinos or with your friends, or entering previous forecasting contests, definitely makes you better over time. We make a lot of bets at our house; my kids are constantly betting with each other over everything, over what’s the next thing that mom is going to say when she walks into the room, or whether narwhals exist. So we generally are getting better at Bayesian thinking by constantly wagering, and that correlates with good performance.

And the last thing I would say is that overconfidence is the killer. We talked about extremism, but also, subject matter experts do not do particularly better than anyone else in the domain of their expertise. I ask people when they submit their entry, are you an expert in any of the following topics: world politics, business, crime (we always have a lot of questions about crime). And typically the Brier scores on people’s questions in their area of expertise are not better than average. But what is true is that there is a generalized superforecasting skill set, and there’s a group of people I call superforecasters locally within this contest who routinely do very well in making their predictions. In fact, an ensemble of their predictions is even better than the wisdom of the overall crowd.
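(Adam’s point about the ensemble, the wisdom of the crowds, can be sketched in a few lines: average everyone’s forecast per proposition, then score that average like any other entry. The simple-average ensembling and all the numbers below are illustrative assumptions, not real contest entries.)

```python
import numpy as np

# Rows are contestants, columns are propositions (toy numbers, not real entries).
entries = np.array([
    [0.90, 0.20, 0.60, 0.75],
    [0.70, 0.40, 0.55, 0.80],
    [0.95, 0.05, 0.30, 0.60],
])
outcomes = np.array([1, 0, 0, 1])  # hypothetical resolutions of the four props

individual = ((entries - outcomes) ** 2).mean(axis=1)  # each contestant's Brier score
crowd = entries.mean(axis=0)                           # ensemble: average forecast per prop
crowd_score = ((crowd - outcomes) ** 2).mean()

print(individual.round(3))    # per-contestant scores
print(round(crowd_score, 3))  # the crowd average tends to land near the top of the leaderboard
```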

Will Bachman 15:53
Tell me a little bit more about the teamwork and the research that people do. In terms of research, I can imagine some of the things, like, okay, how the price of gold has trended over time, how much variability or volatility there is. But something like when two people will get married? How would you even think about researching those kinds of topics to make a prediction?

Adam Braff 16:23
The ones about marriages and births are very tricky. There’s a difference between using what Tetlock calls the outside view, which is the base rate of similar people in similar circumstances and how often they’ve acted in certain ways, versus the inside view, which is to study this particular person or this particular couple and try to predict what’s happening to them. So often there will be a proposition about the Kanye West and Kim Kardashian marriage; I think that’s come up twice. We have a pending proposition on that now, which is that they will be living together on the seventh anniversary of their wedding coming up in May; that’s looking like a no right now. The outside view on those celebrity marriages, that one and the ScarJo marriage, is really tough, right? Because you can look at the base rate of divorces and marriages in California and say, well, any two people who have been dating for a certain amount of time are likely to get engaged or married within x months; you can certainly hunt that kind of data down. If you only have a limited amount of time to spend on this research, you probably want to spend a lot of it trying to find the right reference class of celebrities in California and what their marriage rate is relative to when they first met. You could certainly do worse than that.

There are other propositions, though, where research makes a really big difference, because you’ll find out that there is an active betting market on the topic. So one of the propositions last year was: will Greta Thunberg win the Nobel Peace Prize? There’s a betting market on the Nobel Peace Prize; it’s a pretty active market, and there are multiple betting markets on it. So it was certainly possible at post time in January to look at the odds and say, yes, she’s very likely to win. That was, of course, before the pandemic, which changed the odds, and the winner turned out to be related to neither Greta Thunberg nor that. But certainly there was an amount of research you could do to hunt down relevant betting markets.

There is also probably a wider range of cases where you can take the outside view than you might think. For example, one of the props last year was: by June 1, will the New York Times publish at least three investigative pieces on distinct topics concerning McKinsey and Company? That prop was proposed and seconded by a couple of my friends who are also McKinsey alums, and it’s an interesting question. There’s a state of the world in which the New York Times is super excited about publishing a lot of investigations about McKinsey. At the time, in January of 2020, Pete Buttigieg was still in the presidential race, and that gave the Times some reason to publish more on that topic. But if you were to do research on the base rate of how often the Times publishes these pieces, you would find, as I did with a quick search, that there was about one article on a distinct topic every other month. So figure it was a rate of something like 0.48 articles per month at which the Times was publishing pieces about McKinsey. You can extrapolate from that to the amount of time between post time and June 1, make a normal distribution, and figure out the odds that the number is three or more. I came up with a forecast of, whatever it was, 30%. It turned out not to hit; in that case the count was zero. But the point is, there is research you can do on base rates, and it applies more often than you might think.
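(Adam mentions fitting a normal distribution to that roughly 0.48-articles-per-month base rate and asking for the chance of three or more pieces by June 1. A Poisson model is a natural alternative for article counts; the sketch below uses that, with an assumed window length, so the numbers are illustrative rather than his exact calculation.)

```python
from math import exp, factorial

rate_per_month = 0.48   # historical base rate quoted above
months = 4.5            # assumed window from mid-January post time to June 1
lam = rate_per_month * months

# Poisson probability of three or more articles: 1 - P(0) - P(1) - P(2)
p_under_three = sum(exp(-lam) * lam**k / factorial(k) for k in range(3))
print(round(1 - p_under_three, 2))  # ~0.37, in the same ballpark as the ~30% forecast
```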

Will Bachman 20:16
Talk to me about some of the implications of this in terms of how you advise clients. It seems some would naturally arise, like the part around teamwork and getting different points of view on a topic, and something around avoiding overconfidence and being concerned about that. But tell me how some of this has manifested in your advisory work.

Adam Braff 20:42
So my advisory work is divided in half between investors and corporates. Investors, especially long-short public equity hedge funds, are basically doing this kind of forecasting all day long. They are making repeated small bets on specific stocks and the drivers of those stocks, and trying to come up with an out-of-consensus view on what will drive those stocks higher or lower. That is an exercise in superforecasting. When I was working at a hedge fund full time most recently, everybody very enthusiastically joined this contest and enjoyed simply sharpening their superforecasting toolkit. As it happens, they did not perform better than the average forecaster in the contest, but they took it philosophically and probably built their skills along the way. Continuous improvement really is the goal of the contest; I try not to focus too much on the overall winner. I should point out that the stakes of this contest are a bowl of pho, the Vietnamese noodle soup, and a book that I’ve selected for people to read, so I don’t have pitched battles of people fighting with the judges about whether I called the outcome correctly on a proposition. Maybe the pho is not enough to motivate the hedge fund folks to make even better forecasts, but they certainly can benefit from having better superforecasting skills.

Then if you think about the corporate clients, the ones closer to the example you gave about a budgeting or strategic planning function or a diligence function, these skills do come in handy, if only because they force you to think about the problem from different angles, to not be overconfident, to create more scenarios, to identify the drivers that really matter. There is certainly an advantage to being a better forecaster when you’re doing the financial planning and analysis function within an existing client: thinking about all the different techniques you might use, like base rate forecasting, deciding between trend following and mean reversion, triaging the problem so that you are not spending too much time on drivers that are never going to matter. All of that matters, as well as the point you made about teamwork.

I would also say there is something to the idea of meta-rationality: knowing what you know and knowing what you don’t know. The winner of the contest last year was a team; some teams go by code names, and this one goes by the name Obsequious Fog, which was the code name randomly generated for them in the first contest, where I gave everybody code names because I thought that’s what they wanted. In fact, people are very happy to have their names out in the open here. But Obsequious Fog had a deliberately low-confidence strategy: they took the 50/50 on about six of the propositions, and then for the other 19 they basically made them all at the 25% or 75% level. That level of low confidence paid off. I think they got almost everything directionally right, and if you’re wrong on a 75/25 prop, it’s not as bad as being wrong on a 90/10. And even though some of the props were kind of obvious in advance, there was a proposition that Amazon will spin off AWS as a separate company by August 1. Now, sitting there in January, if you have some familiarity with corporations, as you do, you would know that a spinoff of that size, which would be absolutely gargantuan, spinning off a $100 billion-plus company, would take much more than seven months to pull off. You could conceive of it, you could vote on it, you could imagine people making that plan, but to actually execute on that spinoff in that amount of time? It was almost a 0%.

Will Bachman 24:57
I mean, even if the government had mandated it, right? Exactly. So, not going to happen.

Adam Braff 25:03
Exactly. So a number of forecasters who are perhaps not business types might look at that and say, yeah, I bet the government is going to crack down on this company for a reason, I’m going to put a high number. Whereas a lot of people, including superforecasters, including experts in business, gave that almost a 0% chance of happening, which is correct. But Obsequious Fog stuck to their meta-rational heuristic and put that one at, I believe, 25%. Not because that’s what they thought it truly was, but they said, you know what, if we’re going to be underconfident, we’re going to be underconfident everywhere, and it’s going to serve us well. So that’s what they did, and it worked out for them. And I think the goal is not to be so underconfident in everything you do in business that you put every proposition at a 50/50. It’s to know when you should dial back your certainty and put a range around things, and perhaps put in place some measures to mitigate the damage that comes if you turn out to be wrong.

Will Bachman 26:04
We had an exercise in business school which I still remember. I think it was a test, it might have been in a classroom setting, so you didn’t have time to do a lot of research. But it was a series of maybe 10 questions, and for each one you had to put a range that would encapsulate the answer. The trick was that you could make the range whatever you wanted, but you had to try to get nine out of 10 correct, in terms of the answer falling within the range. So it was really about playing with the range, sort of your confidence range, on those.

Adam Braff 26:46
So you’re supposed to play zero to a billion, right? Is that the winning move?

Will Bachman 26:49
Well, you could, right, but if you did that for all of them, you would get 10 out of 10 and you would lose. The goal was to get nine out of 10 correct, nine out of 10 within range. So if you just said everything is from negative infinity to positive infinity, you’d get 10 out of 10, right? But the goal was to get, I think, nine out of 10, or eight out of 10, to show the right confidence range. And I think a lot of people ended up just being overconfident in their guesses. So if you ask, okay, what’s the GDP of the United States, their range might have been: I think it’s, I don’t know, 30 trillion, let’s say, and I’m just guessing, I have no idea what it is, 22 trillion? So they say, oh, it’s like 28 to 32 trillion, instead of saying, oh, I’m not sure, it’s like 10 to 50 trillion, casting a wide net. So it’s along similar lines of trying to teach you to estimate how confident you are in something, and to avoid that overconfidence without just completely saying it could be anything from negative infinity to positive infinity.
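(A minimal sketch of scoring that classroom exercise: count how many of your ranges actually contain the true value and compare with the target hit rate. All figures below are hypothetical.)

```python
# Each entry is (lower bound, upper bound, true value); all figures are hypothetical.
guesses = [
    (28e12, 32e12, 23e12),  # an overconfident range for US GDP misses the truth
    (10e12, 50e12, 23e12),  # a wider, better-calibrated range contains it
    (5, 15, 12),
    (100, 400, 250),
]

hits = sum(lo <= truth <= hi for lo, hi, truth in guesses)
print(f"{hits}/{len(guesses)} within range ({hits / len(guesses):.0%})")
# For 90% intervals you want roughly 9 of 10 to land inside; getting 10 of 10
# with infinitely wide ranges means you were not really forecasting at all.
```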

Adam Braff 27:54
That’s right. That’s right. You learn not only the perils of overconfidence, but also how truly gettable some of these numbers are; it will take you 10 seconds to Google the answer to that question. Whereas other questions, like a proposition in the contest last year that New York City will report more homicides in the summer of 2020 than in the summer of 2019, require a certain amount of research. You might have an instinct about the answer. Certainly, sitting there in January of 2020, you had no reason to believe there was going to be such a strong spike in homicides that summer. But you have the ability to do some amount of research to at least inform yourself with the historical numbers and to bound that uncertainty, to put, call it, an 80% confidence interval around what you think the numbers will be. And when you’re making this forecast, your time is scarce, and you’re competing with other forecasts you’re trying to make, both in the contest and in business. So you learn how to get just the right amount of information to make quick decisions. That may be the more important lesson.

Will Bachman 29:01

Now, if I recall correctly, I haven’t read Superforecasting by Philip Tetlock, but I’ve certainly heard a lot about it. Tyler Cowen talks about it a lot, and I think he interviewed Philip Tetlock on his podcast recently; that was a good episode. If I recall, in his research it’s often the case that non-experts can do quite well at this. And, his term, not mine, but apparently there have been some housewives participating in Philip Tetlock’s research who do better than CIA analysts at forecasting even some world events, possibly because they’re less biased, or more open to taking a clean-slate approach and looking at open-source research, and not being biased by something confidential they might know. What would you say companies or investors that are trying to get better at forecasting should do in terms of the way they recruit, train, or promote talent for roles like strategic planning, investing, or making bets on new markets or new products?

Adam Braff 30:16
Well, here’s where I’m going to express my lack of true confidence in the answer to that question. I will say, I can observe which people are superforecasters and who does well in the contest over and over again; how much of that is based on them deliberately sharpening their skills versus having some inherent set of attributes, and therefore surviving through a selection effect, is very hard for me to say. So if an organization wants to benefit from the lessons of superforecasting, at a minimum it can encourage people to make a lot of predictions about the business and have internal prediction markets. It can certainly study the people who do well and do poorly in these contests. And it can try the training approach, right? How much of this is a selection effect, and how much is a treatment effect? You can certainly try giving people this training and see if they perform better in their forecasts over time. I don’t know of a company that has deliberately done that. I believe Tetlock gives lectures at companies, there are certainly lots and lots of these contests for people to enter, and there’s a lot of academic work that’s been written about corporate prediction markets. But I would say, like everything else in my analytics practice, a lot of this is about experimentation, test and learn. Try it, see what works for your company, and see if superforecasting actually leads you to make better decisions.

Will Bachman 31:43
It sounds like one idea could be for companies to run a process like yours internally, and get people to participate and volunteer in making predictions in advance across a wide range of topics, not just purely related to the business, to raise more awareness of that variability and to identify people internally who are quite good at it.

Adam Braff 32:07
Well, it’s funny, right? Because the type of forecasting that goes on inside of companies is the NCAA March Madness pool, which is a different set of skills entirely, and where you actually have an artificial incentive to be an extremist, to pick upsets relative to what you believe is actually going to happen. The true Bayesian entering a March Madness contest, if they want the lowest error rate possible, is going to pick the favorite in each game. But that’s not the way to win a winner-take-all tournament, because of the number of participants, typically. So that’s almost teaching the wrong thing. If you did want to usefully sharpen your skills within a corporate setting, the more classic examples are around having people predict how long it will take to do a certain IT implementation, or even predict what sales are going to be in some far-out period where there isn’t any moral hazard, where there isn’t any particular way for these employees to really affect the outcome or shade the results in a certain way, and where the wisdom of the crowd may turn out to be more accurate than the answer senior leadership is telling other stakeholders it believes is going to happen.

Will Bachman 33:22
It seems, or my guess is, that culturally many companies would be reluctant to do that, because it’s almost a way of saying that the executives aren’t all-knowing and omniscient, and that there may be people in the company who have better insight than the CEO on things.

Adam Braff 33:46
I think it is for brave companies that are hyper-rational, and it’s probably the sort of thing that could be adapted to a context where the information is contained, and where perhaps a smaller group of participants are aware of it themselves and can study it. I would welcome this kind of transparency in any company that I’m running, but not every company is culturally ready for that, and I totally appreciate that.

Will Bachman 34:12
Well, this sounds like a really fun way of getting a group of friends together and learning together, and thank you so much for sharing. This is an idea I’m thinking about now; it sounds like something that would be fun to do with a group of people. So thanks a lot for coming on the show and sharing what you’ve been doing.

Adam Braff 34:35
I appreciate it. If people want to learn more about it, they can go to my blog at braff.co. There is an entire series of blog posts at braff.co/advice, which I posted last year under the topic Forecasting Masterclass, in which I tracked that contest in real time as it progressed. That is where you’ll find a lot of tips and tricks about how to be a superforecaster.

Will Bachman 35:01
That’s awesome. And do you publish the list of questions? It sounds like you do a lot of research and work to come up with the list, and someone might want to piggyback on that next January when you start your contest and maybe use your questions. Do you publish those?

Adam Braff 35:19
I do. Everyone should feel free to copy the format, the questions, whatever they like. I publish the questions and the leaderboard as well, so it’s all out there for anyone who’s interested in this topic.

Will Bachman 35:31
Fantastic. We will include a link in the show notes. Adam, thanks so much for joining today.

Adam Braff 35:37
My pleasure. Thanks, Will.
