A MaxDiff Approach to the NBA Draft

By bikeride from Canton, CT, USA (Flickr) [CC BY 2.0 (https://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons

A couple of weeks ago, before the start of the NBA playoffs, I stumbled upon a Reddit post soliciting responses from /r/NBA users to rank the 16 playoff teams based on each team’s chances of winning the NBA Championship. What really caught my eye was that the poster (/u/The_SecretSauce) decided to use a MaxDiff methodology to compile this ranking. This is a methodology I actually use as part of my day job as a Market Research Analyst. That post inspired me to think about other ways I could apply techniques from the market research industry to the basketball industry. My first thought was another use case for the MaxDiff methodology: compiling a “big board” of prospects for the NBA draft.

What is MaxDiff?

MaxDiff is short for Maximum Difference Scaling, a technique developed by Dr. Jordan Louviere in 1987 as a type of best-worst scaling model. In market research, one of the most common business questions is “how important is each attribute of my product to consumers?” For example, a client from the airline industry may provide a list of attributes for a consumer airline flight and task a market research company with finding out what people care about most when booking a flight. A standard method to collect this data would use a 7-point Likert scale.

Example of a typical attribute importance question

While this is a perfectly valid method to determine importance, the question design presents several issues:

  • Different people can interpret the same scale differently – think of your 7th grade English teacher who would refuse to give out an A+ because “no one is perfect”
  • People are good at determining extremes (the best and the worst), but not as good at determining everything in between. I bet you can list off the top of your head the best and the worst movies you’ve ever seen, but what about the third best or fourth worst movie?

Example of an uninformative attribute importance response

The MaxDiff methodology takes advantage of our ability to determine extremes by asking only about extremes. Instead of rating each attribute individually, respondents are shown a subset of attributes and asked to select the best and worst among that subset before moving on to another subset. This process continues until enough best-worst selections have been made to understand the relationships between the full set of attributes.
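To make the mechanics concrete, here is a minimal Python sketch of how the question sets might be assembled. Everything in it is hypothetical: the function name, the simple random rotation, and the prospect names beyond the five discussed below. Real survey platforms (such as Sawtooth) use formal balanced incomplete block designs rather than this shuffle-and-deal approach.

```python
import random

def build_maxdiff_tasks(items, n_tasks=6, items_per_task=5, seed=42):
    """Assemble MaxDiff screens so each item appears a similar number of times.

    A production survey would use a balanced incomplete block design;
    this random rotation is only an illustration.
    """
    rng = random.Random(seed)
    pool = []
    tasks = []
    for _ in range(n_tasks):
        task = []
        while len(task) < items_per_task:
            # Refill and reshuffle the pool whenever it runs out.
            if not pool:
                pool = items[:]
                rng.shuffle(pool)
            candidate = pool.pop()
            # Skip duplicates within a single screen.
            if candidate not in task:
                task.append(candidate)
        tasks.append(task)
    return tasks

prospects = ["Doncic", "Ayton", "Bagley", "Young", "Bamba",
             "Porter", "Jackson", "Carter", "Sexton", "Knox"]
for i, screen in enumerate(build_maxdiff_tasks(prospects), 1):
    print(f"Screen {i}: {screen}")
```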

Example of a MaxDiff importance question

The NBA draft version of this would look similar to the above with respondents asked to select the best and the worst prospect.

Example of a MaxDiff question for the NBA draft

The power behind this technique is that with just two clicks, we learn 7 of the 10 possible pairwise relationships among the 5 prospects shown. In the above example, we can see that this particular scout rates the prospects as follows (a short sketch after this list shows how these relationships are derived):

  • Doncic > Ayton
  • Doncic > Bagley
  • Doncic > Young
  • Doncic > Bamba
  • Ayton > Bamba
  • Bagley > Bamba
  • Young > Bamba
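Those implied relationships follow mechanically from the two clicks. A small Python sketch (the function name and data are hypothetical) reproduces the list above:

```python
def implied_comparisons(shown, best, worst):
    """Return the pairwise preferences implied by one best/worst pick.

    Picking a best and a worst from n items reveals 2n - 3 of the
    n*(n-1)/2 possible pairwise relationships on that screen.
    """
    # The best item beats every other item shown.
    pairs = [(best, other) for other in shown if other != best]
    # Every remaining item beats the worst item.
    pairs += [(other, worst) for other in shown if other not in (best, worst)]
    return pairs  # each tuple (a, b) means "a is preferred over b"

screen = ["Doncic", "Ayton", "Bagley", "Young", "Bamba"]
for winner, loser in implied_comparisons(screen, best="Doncic", worst="Bamba"):
    print(f"{winner} > {loser}")
```

Running this prints the seven “a > b” lines above. The three comparisons it cannot recover (Ayton vs. Bagley, Ayton vs. Young, Bagley vs. Young) are exactly what later screens fill in.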

The Big Board

This MaxDiff survey could be fielded to NBA analysts, fans, or even members of an NBA front office/scouting department to put together a consolidated ranking of all the available prospects from best to worst (AKA the Big Board). There are undoubtedly differences in people’s evaluations of prospects (even among scouts working for the same team), and the MaxDiff methodology would offer a way to consolidate those opinions. In his book The Signal and the Noise, Nate Silver asserts that “quite a lot of evidence suggests that aggregate or group forecasts are more accurate than individual ones, often somewhere between 15 and 20 percent more accurate”.


I believe the MaxDiff methodology is a perfect fit for putting together the Big Board because it forces respondents to differentiate between prospects. No two players are truly equal (even identical twins Marcus and Markieff Morris have different strengths and weaknesses), and each team can only choose a single prospect at each draft slot. MaxDiff results can even provide a sense of the degree of difference between evaluations of prospects: the output from a MaxDiff exercise is a score on a ratio scale for each prospect, and the scores across all the prospects tested typically sum to 100.
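As a rough illustration of how such scores could be produced, here is a simple count-based sketch in Python: it tallies how often each prospect was picked best versus worst and rescales so the scores sum to 100. Everything here (the function name, the toy responses) is hypothetical, and production MaxDiff analyses estimate utilities with a multinomial logit or hierarchical Bayes model rather than raw counts.

```python
from collections import Counter

def count_scores(responses, items):
    """Score items from best/worst counts, rescaled to sum to 100."""
    best = Counter(r[1] for r in responses)
    worst = Counter(r[2] for r in responses)
    shown = Counter(item for r in responses for item in r[0])
    # Best-minus-worst rate per appearance; a crude stand-in for a
    # logit-estimated utility.
    raw = {i: (best[i] - worst[i]) / max(shown[i], 1) for i in items}
    # Shift so the lowest score is zero, then rescale to sum to 100.
    low = min(raw.values())
    shifted = {i: v - low for i, v in raw.items()}
    total = sum(shifted.values()) or 1
    return {i: round(100 * v / total, 1) for i, v in shifted.items()}

# Two hypothetical responses: (items shown, best pick, worst pick).
responses = [
    (["Doncic", "Ayton", "Bagley", "Young", "Bamba"], "Doncic", "Bamba"),
    (["Doncic", "Ayton", "Bagley", "Young", "Bamba"], "Ayton", "Bamba"),
]
print(count_scores(responses, ["Doncic", "Ayton", "Bagley", "Young", "Bamba"]))
```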

Example of a MaxDiff output

In the example above, we can clearly see that Luka Doncic and Deandre Ayton have very similar MaxDiff scores of 12.7 and 12.5, while the next closest player (Marvin Bagley III) has a score of 8.1. This would mean that according to the evaluations of all the people taking the hypothetical survey, there is a large drop-off in prospect quality after the top 2. This type of information could prove invaluable to a savvy team making a draft day trade. A team with the #3 pick in a draft with only 2 “Tier 1” prospects might be able to trade down to pick up an extra asset, while still drafting the same caliber of player. Obviously, teams would not want to make draft day trades based solely on this MaxDiff score, but I imagine it could serve as a useful reference during trade negotiations.
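A front office could even flag those tier breaks programmatically. The sketch below (the gap threshold is an arbitrary, hypothetical choice) splits a ranked board wherever the drop between consecutive scores is large, using the illustrative scores from the chart:

```python
def tier_prospects(scores, gap=2.0):
    """Split a ranked big board into tiers wherever the score drop
    between consecutive prospects exceeds `gap` (a hypothetical threshold).
    """
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    tiers, current = [], [ranked[0]]
    for prev, cur in zip(ranked, ranked[1:]):
        if prev[1] - cur[1] > gap:
            # The drop-off is big enough to start a new tier.
            tiers.append(current)
            current = []
        current.append(cur)
    tiers.append(current)
    return tiers

# Scores from the illustrative chart in this post.
scores = {"Doncic": 12.7, "Ayton": 12.5, "Bagley": 8.1}
for n, tier in enumerate(tier_prospects(scores), 1):
    print(f"Tier {n}: {[name for name, _ in tier]}")
```

With these example scores, the sketch places Doncic and Ayton in Tier 1 and Bagley in Tier 2, matching the drop-off described above.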

Notes

No actual data was shown in this post. The charts and questions shown were all samples used only for illustrative purposes.

Useful Links

Sawtooth Knowledgebase – Lots of information on the methodology and applications of MaxDiff
Fanjuicer – The site behind the MaxDiff post by /u/The_SecretSauce