## NBA Team Ratings, a Bayesian Approach

Friday, October 29th, 2021

*Using NBA team ratings as an excuse to dive headfirst into Bayesian Hierarchical Modeling. Gory math in LaTeX & Jupyter notebook form can be found here, here, & here. Daily updating team ratings for the NBA & WNBA regular seasons going back to 2001 and 2009, respectively, can be found at my new site nba.mattefay.com.*

As previously mentioned, I've been curious about Bayesian Hierarchical Modeling, which combines mixed effects models (you can read about those here) with a Bayesian approach. The pandemic brought every armchair statistician onto the blogosphere to model COVID (I managed to resist by keeping busy at work), and one particularly interesting approach, rt.live from Instagram co-founders Mike Krieger and Kevin Systrom, used PyMC3 to fit a hierarchical model to COVID testing data.

Needless to say, I've been looking for an interesting application ever since! Luckily, the start of the NBA season quickly provided one. I was curious to see which teams were playing well or poorly thus far, evaluated on the scale of points per 100 possessions, on both offense and defense. A naive way to calculate this is simply to add up the points scored and conceded, along with the total possessions, and divide! Probably the best place to find this is Ben Falk's site Cleaning the Glass, which conveniently performs a small amount of smart filtering to remove extraneous data (heaves at the end of each period and garbage time).

At the start of the season, these calculations are pretty error-prone, for two reasons. First, the sample size is small, so the evidence for extreme offensive or defensive performance is slim and should be heavily regressed to the mean. Second, the schedules are unbalanced: some teams have played only good opponents, traveling long distances to play on the road with little rest, while others have lined up cupcakes from the comfort of their own arena.
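
To see why small samples should be pulled toward the mean, here's a minimal empirical-Bayes shrinkage sketch (not the model from the notebooks; the league average and prior strength below are illustrative numbers I've chosen for the example):

```python
# Minimal empirical-Bayes shrinkage sketch: regress a team's early-season
# three-point make rate toward the league mean. All numbers are illustrative.
league_mean = 0.36      # assumed league-average 3P%
prior_strength = 300    # pseudo-attempts; controls how hard we shrink

def shrunk_rate(makes: int, attempts: int) -> float:
    """Posterior-mean estimate under a Beta prior / Binomial likelihood."""
    alpha = league_mean * prior_strength        # prior pseudo-makes
    beta = (1 - league_mean) * prior_strength   # prior pseudo-misses
    return (makes + alpha) / (attempts + alpha + beta)

# A team shooting 20-of-40 (50%) after a few games gets pulled well below 50%,
# while a 500-of-1000 team late in the season barely moves.
early = shrunk_rate(20, 40)     # ≈ 0.376
late = shrunk_rate(500, 1000)   # ≈ 0.469
```

As the attempt count grows, the data overwhelms the prior, which is exactly the "regression to the mean fades with sample size" behavior the full model exhibits.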

An improved approach, the best I could find publicly, uses the Simple Rating System. It takes opponent strength into account, finding the offensive and defensive ratings (points per 100 possessions) for each team that best fit the outcome of each game. Still, it accounts for no home-court advantage and, most importantly, applies no regression to the mean. So of course, I resolved to work this out myself using Bayesian Hierarchical Modeling.
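
To make that concrete, here's a tiny SRS-style fit on made-up data (the teams, games, and per-100 numbers below are all hypothetical): solve by least squares for per-team offensive and defensive ratings such that each game's observed scoring rate is approximately the offense's rating plus the opponent defense's rating.

```python
import numpy as np

# (offense_team, defense_team, points per 100 possessions) — made-up numbers
games = [
    (0, 1, 112.0), (1, 0, 105.0),
    (0, 2, 118.0), (2, 0, 99.0),
    (1, 2, 110.0), (2, 1, 103.0),
]
n_teams = 3

# One row per team-game: X @ [o_0..o_2, d_0..d_2] ≈ observed scoring rate
X = np.zeros((len(games), 2 * n_teams))
y = np.zeros(len(games))
for row, (off, deff, pts) in enumerate(games):
    X[row, off] = 1.0             # offensive rating of the scoring team
    X[row, n_teams + deff] = 1.0  # defensive rating of the opponent
    y[row] = pts

# Ratings are only identified up to a constant shift (add c to every offense,
# subtract c from every defense); lstsq returns the minimum-norm solution.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
offense, defense = coef[:n_teams], coef[n_teams:]
```

Here `defense` is points *allowed* per 100 possessions, so lower is better. Note there's no regression to the mean anywhere in this fit, which is exactly the shortcoming the post goes on to address.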

I ended up writing three different models of increasing complexity to illustrate the considerations that went into the approach I'm currently using. For the detailed version, with LaTeX equations and code, you can find them on GitHub.

Here, I'll skip the derivation and get straight to the results and the approach behind it. Instead of modeling each team's offensive and defensive rating directly (points scored per 100 possessions), I derive them from modeled lower-level statistics, inspired by Dean Oliver's Four Factors. But, well, I have seven! Specifically:

- Three-point attempt rate: percentage of shots attempted from three-point range
- Two-point make rate: percentage of two-point shot attempts made
- Three-point make rate: percentage of three-point shot attempts made
- Rebound rate: percentage of available rebounds grabbed
- Turnover rate: percentage of possessions that end in a turnover
- Free throw attempt rate: free throws attempted per possession
- Free throw make rate: percentage of free throw attempts made

With these seven factors, a little bit of regression, and some gory math, I'm able to estimate the scoring rate (points scored per 100 possessions) with ~2% relative error from NBA regular season games since 2001:
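
For a rough feel of how such factors roll up into a scoring rate, here's a deliberately simplified back-of-the-envelope version (it ignores offensive rebounds and second-chance possessions, which the full model handles; all input numbers are roughly league-average and purely illustrative):

```python
# Simplified sketch: combine per-possession factors into points per 100
# possessions. Rebound rate is omitted here because this sketch ignores
# second-chance possessions entirely.
def points_per_100(three_att_rate, two_make, three_make,
                   tov_rate, ft_att_rate, ft_make):
    shots = 1.0 - tov_rate  # fraction of possessions ending in a shot
    pts_per_shot = (three_att_rate * 3 * three_make
                    + (1 - three_att_rate) * 2 * two_make)
    return 100 * (shots * pts_per_shot + ft_att_rate * ft_make)

rating = points_per_100(three_att_rate=0.39, two_make=0.53, three_make=0.36,
                        tov_rate=0.13, ft_att_rate=0.20, ft_make=0.78)
# ≈ 108.5, in the right ballpark for a modern NBA offensive rating
```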

The resulting model provides estimates not only of each team's "seven factors" on offense and defense, but also of the effect of home court on each. Conveniently, this takes care of "luck adjustment" by, e.g., ignoring free throw make rate on defense. Combining those seven factors yields our desired team ratings, factoring in opponent quality and home court, with some "luck adjustment" built in. Here, I've plotted those ratings with each team's logo, along with a white arrow showing the effect of the model relative to "raw" team ratings, i.e. simply the team's points scored and allowed per 100 possessions:
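
The actual model lives in the linked notebooks; as a schematic of how a hierarchical model might assemble a single game-level factor from team effects, here's a sketch on the log-odds scale (the effect names and sizes are hypothetical):

```python
import math

# Schematic: a game-level rate as league mean + offense effect + opponent
# defense effect + home-court effect, all on the log-odds (logit) scale,
# then mapped back to a probability with the inverse logit.
def game_rate(league_logit, off_effect, def_effect, home_effect):
    logit = league_logit + off_effect + def_effect + home_effect
    return 1 / (1 + math.exp(-logit))

league_3p = math.log(0.36 / 0.64)  # league-average 3P% on the logit scale

# A good shooting team, at home, against a weak three-point defense
# (all three effects chosen for illustration):
rate = game_rate(league_3p, off_effect=0.08, def_effect=0.05, home_effect=0.02)
```

Working on the logit scale keeps every predicted rate inside (0, 1) and makes the team and home-court effects additive, which is what lets a hierarchical prior shrink each effect toward zero independently.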

The effects of regression to the mean are obvious here, with most arrows pointing toward the origin (the mean), but the model is also picking up on other effects. Because the model estimates more granular statistics, we're able to interpret where those effects come from: believing in one team's three-point shooting more, for example, while heavily regressing its defensive turnover rate.

Finally, I also model the pace of each team on offense and defense separately, inspired by Michael Beuoy's invaluable site Inpredictable:

Here, the effects of regression are minimal: each team has already played so many possessions that the model quickly "believes" its more-or-less raw pace.

You can find daily updated team ratings for the NBA & WNBA regular seasons going back to 2001 and 2009, respectively, at my new site nba.mattefay.com. That's all for now, but I'll be posting more in the next few weeks on the nitty gritty of building that site, and of course, I've got some ideas on how to improve these team ratings further (taking possession type into account, for example).

## Questions | Comments | Suggestions

If you have any feedback and want to continue the conversation, please get in touch; I'd be happy to hear from you! Feel free to use the form, or just email me directly at matt.e.fay@gmail.com.