Expected Hypothetical Catch Probability – Part 1

What follows is the work Sameer Deshpande and I did for the 2019 NFL Big Data Bowl. We will be presenting this work at the Finals on February 27th.


Consider two passing plays during the game between the Los Angeles Rams and visiting Indianapolis Colts in the first week of the 2017 season.

The first passing play was a short pass in the first quarter from Colts quarterback Scott Tolzien intended for T.Y. Hilton which was intercepted by Trumaine Johnson and returned for a Rams touchdown.

The second passing play was a long pass from Rams quarterback Jared Goff to Cooper Kupp, resulting in a Rams touchdown (time stamp 3:39).

In this work, we consider the question: which play had the better route(s)?

From one perspective, we could argue that Kupp’s route was better than Hilton’s; after all it resulted in the offense scoring while the first play resulted in a turnover and a defensive score. However, evaluating a decision based only on its outcome is not always appropriate or productive. Two recent examples of similar plays come to mind: Pete Carroll’s decision to pass the ball from the 1-yard line in Super Bowl XLIX and the “Philly Special” in Super Bowl LII. Had the results of these two plays been reversed, Pete Carroll might have been celebrated and Doug Pederson criticized.

All this is to say, we shouldn’t condition on the observed outcome alone.

If evaluating plays solely by their outcomes is inadequate, on what basis should we compare routes? Intuitively, we might tend to prefer routes which maximize the receiver’s chance of catching the pass, or completion probability.

If we let y be a binary indicator of whether a pass was caught and let x be a collection of covariates summarizing information about the pass, we can consider a logistic regression model of completion probability:

\log{\left(\frac{P(y = 1 | x)}{P(y = 0 | x)}\right)} = f(x),

or equivalently P(y = 1 | x) = \left[1 + \text{e}^{-f(x)}\right]^{-1}, for some unknown function f.
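In code, the link between the log-odds f(x) and the completion probability is just the logistic (sigmoid) function. A minimal sketch, where the input is a made-up log-odds value, not output from any fitted model:

```python
import math

def completion_prob(log_odds: float) -> float:
    """Invert the logit link: map the log-odds f(x) to P(y = 1 | x)."""
    return 1.0 / (1.0 + math.exp(-log_odds))

# Log-odds of 0 corresponds to a coin-flip completion probability.
print(completion_prob(0.0))  # 0.5
```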

If we knew the function f, a first pass at assessing a route would be to plug in the relevant covariates x and see whether the forecasted completion probability exceeded some threshold, say 50%. If so, regardless of whether the receiver actually caught the pass, we could say that the route was run and the ball was placed in such a way as to give the receiver a better-than-even chance of catching the pass.

Wait a minute, what’s f and what’re the inputs x, you might ask? We’ll go into all of the gory details later, but suffice it to say: x contains what we’ll call “time of delivery” variables, which are recorded the moment the ball is thrown, and “time of arrival” variables, which are recorded when the receiver tries to catch the ball. Intuitively, we might expect catch probability to depend on both of these. And f, well, f is probably some crazy non-linear function of a bunch of variables. See Post 2 for more details.

We could then directly compare the forecasted completion probabilities of the two plays mentioned above; if it turned out that the Tolzien interception had a higher completion probability than the Kupp touchdown, that play would not seem as bad, despite the much worse outcome [spoiler: it wasn’t].

But why stop there? There are usually multiple eligible receivers running routes on a given pass play. What can we say about the non-targeted receivers? In particular, if the quarterback threw to a different location along a possibly different receiver’s route, can we predict the catch probability? It turns out, this is challenging for two fundamental reasons.

First, even if we knew the true function f, we are essentially trying to deduce what might have happened in a counterfactual world where the quarterback had thrown the ball to a different player at a different time, with the defense reacting differently. On such a counterfactual pass, we do not observe any “time of arrival” variables that may be predictive of completion probability. Figure 1 illustrates this issue, showing schematics for an observed pass (left panel) and a hypothetical pass (right panel). In both passes, there are two receivers running routes; we have colored the intended receiver’s route blue and the other receiver’s route gray.

Figure 1: Schematic of what we directly observe on an actual pass (left panel) from our dataset and what we cannot observe for a hypothetical pass (right panel). In both passes, there are two receivers running routes. The targeted receiver is denoted with a circle and the defender closest to the receiver is denoted with an X. Unobservables are colored red while observables are colored blue.

Before proceeding, let’s pause for a moment to distinguish between our use of the term “counterfactual” and its use in causal inference.

Sameer and I are both fairly embedded in the world of causal inference (though he doesn’t have a twitter handle, email and website that prominently displays his love of all things causal. Rejoinder from Sameer: Bayes is bae. I make no apologies.) and it feels weird to use the term “counterfactual” and not elaborate.

The general causal framework of counterfactuals supposes that we change some treatment or exposure variable and asks what happens to downstream outcomes. In contrast, in this work we consider changing a midstream variable, the location of the intended receiver when the ball arrives, and then impute both upstream and downstream variables like the time of the pass and the receiver separation at the time the ball arrives. We use “counterfactual” interchangeably with “hypothetical” because, while an unobserved pass is hypothetical, the intended receiver of that pass is not; we hope our more liberal usage is not a source of further confusion below.

Ok, I’ve said my piece.

The second fundamental challenge: we typically do not know the function f and must therefore estimate it using the observed data. Even if we knew how to overcome the issue of unobserved “time of arrival” inputs for the hypothetical passes, estimation uncertainty about f will propagate to the forecasts of hypothetical completion probabilities. So we’re going to need to estimate f in a way that lets that estimation uncertainty propagate to our uncertainty about the hypothetical completion probabilities.

So to recap: we’re positing there’s some true function f that takes in “time of delivery” variables and “time of arrival” variables and outputs the log-odds of a receiver catching the pass. We don’t know this function, so we need to estimate f. We then want to take this estimate and plug in inputs about hypothetical passes to predict the completion probability for every receiver involved at all times during a play. Unfortunately, we don’t actually know the values of the “time of arrival” variables for the hypothetical passes.

If you’re still with us, you might be thinking: “Wait a second! I can sidestep the fact that we never observe the hypothetical ‘time of arrival’ variables by letting f depend only on ‘time of delivery’ variables.” And you’d technically be right! But it strains credulity to believe, for instance, that how far a receiver is from his closest defender doesn’t affect his chances of catching the ball. So restricting f to not depend on “time of arrival” variables seems like a decidedly arbitrary solution to our first challenge. Technically, we’d need to first establish that models of catch probability that account for “time of arrival” variables predict better than models that do not. But we’re willing to make this intuitive assumption for now.

OK, so we want to evaluate a function that we’re uncertain about at inputs about which we’re also uncertain. We overcome these two challenges in this work. Using tracking, play, and game data from the first 6 weeks of the 2017 NFL season, we developed Expected Hypothetical Completion Probability (EHCP).

At a high-level, our framework consists of two steps:

  1. We estimate the log-odds of a catch as a function of several characteristics of each observed pass in our data.
  2. We simulate the characteristics of the hypothetical pass that we do not directly observe and compute the average completion probability of the hypothetical pass.
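The two steps above can be sketched as a Monte Carlo average. Everything here is hypothetical scaffolding: `f_hat` stands in for whatever estimate of f you have, and `simulate_arrival_vars` stands in for a model (of your choosing) of the unobserved “time of arrival” variables given the observed ones:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ehcp(f_hat, observed_vars, simulate_arrival_vars, n_sims=1000):
    """Monte Carlo estimate of Expected Hypothetical Completion Probability.

    f_hat: estimated log-odds function taking (observed, arrival) covariates.
    simulate_arrival_vars: draws plausible "time of arrival" covariates
    conditioned on the observed ones (a modeling choice, not given by the data).
    """
    total = 0.0
    for _ in range(n_sims):
        arrival_vars = simulate_arrival_vars(observed_vars)
        total += sigmoid(f_hat(observed_vars, arrival_vars))
    return total / n_sims

# Toy stand-ins: log-odds rise with receiver separation (in yards).
f_hat = lambda obs, arr: -1.0 + 0.5 * arr["separation"]
sim = lambda obs: {"separation": random.gauss(3.0, 1.0)}
random.seed(0)
print(round(ehcp(f_hat, {}, sim), 3))
```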

In Part 2 of this blog post series, we will describe our Bayesian procedure for fitting a catch probability model like in the equation above and outline the EHCP framework.

In Part 3, we will discuss the results of our catch probability model and illustrate the EHCP framework on several routes.

Finally in Part 4, we will conclude with a discussion of potential methodological improvements and refinements, as well as potential uses of our EHCP framework.



Expected Hypothetical Completion Probability – Quick Post

What follows is some info on the work Sameer Deshpande and I did for the 2019 NFL Big Data Bowl. We will be presenting this work at the BDB Finals at the NFL Combine in Indianapolis on February 27th.

We are in the process of putting together a series of blog posts that will explain our method in, hopefully, an easily digestible way. Until then, we wanted to share a copy of the paper as it was submitted to the contest.

Expected Hypothetical Completion Probability – link to pdf

We note that there are a few caveats:

1. This is very much proof-of-concept. EHCP is a modular framework that involves lots of pieces. We have put the pieces together, but none are optimized at the moment.

2. There are many technical and conceptual details to discuss. We’re going to dive into many of these details in the coming blog posts. Additionally, we’re happy to discuss the paper with particularly interested parties.

That being said,

3. Please be patient! We’re posting the paper we submitted to the Big Data Bowl contest. We recognize that the write-up is somewhat technical and terse when it comes to the finer details of our methodology. Over the last few weeks, we’ve received some great feedback and questions from some of our friends and colleagues in sports and academia. Our plan in the next several posts is to respond to this feedback and hopefully address a bunch of initial questions. So please be patient with us; if you send us a bunch of burning questions and we don’t respond, it’s not entirely because we’re avoiding you.

That being said, here is a quick FAQ:

Q: Did you think about including other variables, such as QB pressure, time from snap to throw, defensive schemes, player information, etc?
A: We considered many variables, but had to limit scope due to time constraints. Incorporating additional variables is a clear opportunity for further work.

Q: Why BART?
A: Over an ever-growing range of problems, BART has demonstrated really great predictive performance with minimal hyperparameter tuning and without the need to pre-specify a specific functional relationship between inputs and outputs. While we didn’t do it in our analysis, BART can also be adapted to do feature selection. At the same time, it’s totally plausible that another regression technique would be effective for the problem.

Q: Did you consider other outcomes like YAC or expected yards gained?
A: We did. Ultimately, we may want to maximize the expected value of a play, E[value | x], which we can decompose as:
E[value | x] = E[value | catch, x] * P(catch | x) + E[value | no catch, x] * P(no catch | x)
We focused on the P(catch | x) part.
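As a toy numerical illustration of that decomposition (all numbers here are made up, not estimates from our data):

```python
# Hypothetical inputs for one pass: catch probability and conditional values.
p_catch = 0.6            # P(catch | x)
value_if_catch = 12.0    # E[value | catch, x], e.g. expected yards gained
value_if_no_catch = 0.0  # E[value | no catch, x]

# E[value | x] = E[value | catch, x] * P(catch | x)
#              + E[value | no catch, x] * P(no catch | x)
expected_value = value_if_catch * p_catch + value_if_no_catch * (1 - p_catch)
print(round(expected_value, 1))  # 7.2
```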

Q: Wait a minute! You need to do a better job of modeling the conditional distribution of the unobserved variables on the observed ones. There is no way they are independent. Especially since they may change as the route develops.
A: That’s not really a question, but we agree. Handling the missing variables is one of the modular parts of the framework and can be optimized independently of the other parts. It is an interesting missing data question in its own right.

Q: Who is the best QB/WR?
A: We didn’t have enough data to draw any strong conclusions. Jameis Winston looked great in the data we had available to us.

Q: Does this run on the block chain?
A: 😑

Q: Did y’all try deep lear–
A: No.

Cross Validation – Part 1

Recently I’ve seen a lot of misunderstanding about how/why cross validation (CV) is used in model selection/fitting. I’ve seen it misused in a number of ways and in a number of settings. I thought it might be worth it to write up a quick, hopefully accessible guide to CV.

Part 1 will cover general ideas. There won’t be any practical sports examples as I am trying to be very general here. I’ll have part 2 up in December with a practical example using some NBA shot data. I’ll post the code in a colab so people can see CV in action.

(There may be some slight abuse of notation)

Cross Validation: A Quick Primer


General Idea

Cross validation is generally about model selection. It also can be used to get an estimate of error.

Statistics is about quantifying uncertainty. Cross validation is a way to quantify uncertainty for many models and use the uncertainty to select one. Cross validation also can be used to quantify the uncertainty of a single model.

Some Quick Definitions

I think it is important to be clear about what I mean by a model. In simple terms, a statistical model is the way we are going to fit the data. A model encodes the underlying assumptions we have about the data generating process.

Examples of models might be:

  • Logistic with all available variables
  • Logistic with just variables A, B, C, and D
  • Random forest
  • SVM
  • Neural network with some tuning parameter lambda
  • Neural network with a different tuning parameter kappa
  • Neural network where the tuning parameter is selected to maximize accuracy for the data being fit
  • etc.

When I refer to a model, I mean the general framework of the model, such as linear with variables A,B, C, and D. When I refer to a model fit, I mean the version of that model fit to the data, such as the coefficients on A, B, C, and D in a linear model (and intercept if needed).


Generally we use cross validation to pick a model from a number of options. It helps us avoid overfitting. CV also helps us refine our beliefs about the underlying data generating process.

In the case of outcome prediction, we often need to tune the inputs used in the model (or data needed or whatever), explore different types of models, or determine which independent variables are of interest. We use CV to get estimates of error/metrics for various models (score, correlation, MSE, etc whatever) and pick one model from there. We can have CV do feature selection as well, but then we are testing that particular variable selection method, not the variables chosen by the selection method. For example we can use CV on a LASSO model (which incorporates variable selection), in which case we are testing LASSO, not the variables it selected. We could also test some particular set of variables in a model of their own.

The actual method for cross validation is as follows

For each candidate model:

  • Split the dataset into k equal-sized disjoint subsets, or “folds”
  • For each unique fold:
    • Take that fold as a validation set
    • Use the remaining k-1 folds as a training set
    • Fit the model on the training set and use that fitted model on the held-out fold to get predicted outcomes
    • Compare the predicted outcome to the truth, calculate error and other metrics etc.

Fit all the models this way and, generally, take the average of whatever metrics you calculated for each fold. But we might also look at the variance of the metrics. Then look at which model performs “best.” The definition of “best” will depend on what we care about. It could be about maximizing true positives and true negatives. Or minimizing squared error loss. Or getting within X%. Whatever.
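The procedure above can be sketched in plain Python. No particular ML library is assumed; `fit` and `metric` are placeholders for your model-fitting routine and your chosen error metric:

```python
import random

def k_fold_cv(data, k, fit, metric):
    """Estimate a model's out-of-sample metric via k-fold cross validation.

    data: list of (x, y) pairs; fit: returns a predict function from a
    training set; metric: compares predictions to truth on a held-out fold.
    """
    data = data[:]                          # don't mutate the caller's list
    random.shuffle(data)
    folds = [data[i::k] for i in range(k)]  # k roughly equal disjoint folds
    scores = []
    for i in range(k):
        validation = folds[i]
        training = [pt for j, f in enumerate(folds) if j != i for pt in f]
        predict = fit(training)
        scores.append(metric(predict, validation))
    return sum(scores) / k  # average the per-fold metric

# Toy example: a "model" that predicts the training mean, scored by MSE.
random.seed(1)
data = [(None, random.gauss(0, 1)) for _ in range(100)]
fit = lambda train: (lambda x, m=sum(y for _, y in train) / len(train): m)
mse = lambda pred, val: sum((y - pred(x)) ** 2 for x, y in val) / len(val)
print(round(k_fold_cv(data, 5, fit, mse), 2))
```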

Once we have chosen a model based on the training data, we still want to get an estimate of prediction/estimation error for any new data that would be collected independently of the training data. If we have a new set of data, say we get access to a new season for a sport, then we can get an estimate of error using that new data. So if we decided that a simple logistic model with variables A, B, C, and A^2 is the “best”, we fit that model on all the training data to get coefficient estimates. We then predict outcomes for the new data, compare to the truth, and look at the error/correlation/score/whatever etc. This gives us our estimate of the true error/MSE/accuracy/AUROC etc.

We could instead decide that a LASSO model is best. So we fit LASSO on all our training data and use that to get coefficients/variables which we then apply to the validation data.

In the absence of a new data set, we still need to estimate the error. We can use CV to get an estimate of the error. We could do the same thing if we had a priori decided to use a model with certain variables. We take the model we decided on a priori, fit on k-1 folds, apply the fitted model to the held out kth fold, compare to truth, calculate error and other metrics etc. Do this for all folds and get metrics for all k folds. Then we can average those metrics, or look at their distribution.

Remember, statistics is about quantifying uncertainty. Cross validation is a tool for doing that. We do not get the final fitted model during the cross validation step.

Common Mistakes

Common scenarios, mistakes, and how to fix them:

  1. You already have an a priori idea of what model you are going to use to predict a continuous outcome – a linear regression with variables A, B, and C. You want to know the coefficients for A, B, and C.
    • Mistake: Fit that linear model on each of the k fold complement sets to get k sets of coefficients. Average those coefficients to get the final model.
    • Correction: Fit the linear model on all the training data at once to estimate coefficients. Test the model fit on totally new data (if new data is available). Or use the k folds to get estimates of error. Either way, you get the coefficients from fitting the model on all the data.
  2. You already have an a priori idea of what model you are going to use to make binary classifications – a threshold model where if the probability of a positive outcome is above some p%, you classify it as positive. You want to know what p should be.
    • Mistake: Fit that threshold model on each of the k fold complement sets to get the optimal p_k% for each subset. Average p_k across all k folds to get the final threshold p.
    • Correction: Fit the threshold model on all the training data at once to estimate p. Test the model fit on totally new data (if new data is available). Or use the k folds to get estimates of error. Either way, you get p from fitting the model on all the data.
  3. You have many ideas for potential models and want to know which one performs “best.” You fit each model on each set of k-1 folds and compare to the corresponding held-out fold to estimate metrics. You decide a logistic regression model gives the best AUROC (your chosen metric of choice).
    • Mistake: You average the coefficients of all k logistic model fits in order to get the final model.
    • Correction: Fit the logistic model on all the training data at once to estimate coefficients. Test the model fit on totally new data (if new data is available). Or use the k folds to get estimates of error. Either way, you get the coefficients from fitting the model on all the data.
  4. You have many ideas for potential models and want to know which one performs “best.” You fit each model on each set of k-1 folds and compare to the corresponding held-out fold to estimate metrics. You decide a LASSO (penalized logistic regression) model gives the best AUROC (your chosen metric of choice).
    • Mistake: You average the coefficients of all k LASSO model fits in order to get the final model. The model fits don’t always select the same variables, so you just take all of them, but assign a coefficient of zero whenever a variable is not chosen.
    • Correction: Fit the LASSO model on all the training data at once to choose variables and estimate coefficients. Test the model fit on totally new data (if new data is available). Or use the k folds to get estimates of error. Either way, you get the variables and coefficients from fitting the model on all the data.
  5. You have many ideas for potential models and want to know which one performs “best.” You fit each model on each set of k-1 folds and compare to the corresponding held-out fold to estimate metrics. You decide a random forest model gives the best MSE (your chosen metric of choice).
    • Mistake: You fit the random forest model on all the training data and use it to get predicted outcomes for all your training data samples. You then compare those predictions to the true outcomes and calculate an estimate of the error.
    • Correction:  Test the model fit on totally new data (if new data is available). Or use the k folds to get estimates of error. Estimating error from the data you used to select and fit the model will result in an underestimate of error.
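The correct workflow in scenarios 3 through 5 can be sketched as: use CV only to compare candidate models, then fit the winner once on all the training data. The toy “models” below just predict constants, and `cv_score` is a bare-bones k-fold loop:

```python
def select_and_refit(candidates, data, k, cv_score):
    """Choose the best model by cross-validated score, then fit it on ALL data.

    candidates: dict mapping a model name to a fit(data) -> fitted-model
    function. cv_score: runs k-fold CV for one candidate and returns its
    average metric (lower is better here). The final fitted model comes from
    one full-data fit, never from averaging the k per-fold fits.
    """
    scores = {name: cv_score(fit, data, k) for name, fit in candidates.items()}
    best_name = min(scores, key=scores.get)
    final_model = candidates[best_name](data)  # one fit on ALL training data
    return best_name, final_model

# Toy demo: two constant-prediction "models", scored by squared error.
data = [1.0, 2.0, 3.0, 4.0]
candidates = {
    "predict_zero": lambda d: 0.0,
    "predict_mean": lambda d: sum(d) / len(d),
}

def cv_score(fit, data, k):
    score = 0.0
    for i in range(k):
        held_out = data[i::k]
        train = [x for j, x in enumerate(data) if j % k != i]
        pred = fit(train)
        score += sum((y - pred) ** 2 for y in held_out) / len(held_out)
    return score / k

name, model = select_and_refit(candidates, data, 2, cv_score)
print(name)  # predict_mean
```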

Kathy Explains all of Statistics in 30 Seconds and “How to Succeed in Sports Analytics” in 30 Seconds

I spent the weekend of October 19-21 in Pittsburgh at the 2018 CMU Sports Analytics Conference. One of the highlights of the weekend was Sam Ventura asking me to explain causal inference in 15 seconds. I couldn’t quite do it, but it morphed into trying to explain all of statistics in 30 seconds. Which I then had to repeat a few times over the weekend. Figured I’d post it so people can stop asking. I’m expanding slightly.

Kathy Explains all of Statistics in 30 Seconds

Broadly speaking, statistics can be broken up into three categories: description, prediction, and inference.

  • Description
    • Summaries
    • Visualizations
  • Prediction
    • Mapping inputs to outputs
    • Predicting outcomes and distributions
  • Inference/Causal Inference
    • Prediction if the world had been different
    • Counterfactual/potential outcome prediction

I’ll give an example in the sports analytics world, specifically basketball (this part is what I will say if I only have 30 seconds):

  • Description
    • Slicing your data to look at the distribution of points per game (or per 100 possessions or whatever) scored by different lineups
  • Prediction
    • Predicting the number of points your team will score in a game given your planned lineups
  • Inference/Causal Inference
    • Prediction of change in points per game if you ran totally new lineups versus the normal lineups

My day job is working for a tech healthcare company, and the following are the examples I normally use in that world:

  • Description
    • Distributions of patient information for emergency department admissions stratified by length of stay
  • Prediction
    • Predicting length of stay based on patient information present on admission
  • Inference/Causal Inference
    • Prediction of change in length of stay if chest pain patient had stress test vs having cardiac catheterization

So, it’s not *all* of statistics. But I think it’s important to understand the different parts of statistics. They have different uses and different interpretations.

More thoughts from the conference

Any time I am at a sports conference there is always the question of “how does one succeed in/break into the field?” Many others have written about this topic, but I’ve started to see a lot of common themes. So….

How to Succeed in Sports Analytics in 30 Seconds

Success in sports analytics/statistics seems to require these 4 abilities:

  • Domain expertise
  • Communication
  • Statistics
  • Coding/programming/CS type skills

Imagine that each area has a max of 10 points. You gotta have at least 5/10 in every category and then like, at least 30 points overall. Yes I am speaking very vaguely. But the point is, you don’t have to be great at everything, but you do have to be great at something and decent at everything.

I don’t feel like I actually know that much about basketball or baseball, or any sport really. I didn’t play any sport in college, and generally when I watch games, I’m just enjoying the game. While watching the Red Sox in the playoffs I don’t really pay attention to the distribution of David Price’s pitches, I just enjoy watching him pitch. Hell, I spend more time wondering what Fortnite skins Price has. I’ve been guessing Dark Voyager, but he also seems like the kind of guy to buy a new phone just to get the Galaxy skin. Anyway. I’m not an expert, but I do know enough to talk sensibly about sports and to help people with more expertise refine and sharpen their questions.

And I know statistics. And years of teaching during graduate school helped me get pretty damn good at explaining complicated statistical concepts in ways that most people can understand. Plus I can code (though not as well as others). Sports teams are chock full of sports experts, they need experts in other areas too.

These four skills are key to succeeding in any sort of analytical job. I’m not a medical expert, but I work with medical experts in my job and complement their skills with my own.

Concluding thoughts from the conference

Man, no matter what a talk is about, there’s always the questions/comments of “did you think about this other variable” (yes, but it wasn’t available in the data), “could you do it in this other sport…” (is there data available on that sport?), “what about this one example when the opposite happened?” (-_-), “you need to be clearer about how effects are mediated downstream, there’s no way this is a direct effect even if you’ve controlled for all the confounding” (ok that one’s usually me), etc.

Next time, we are going to make bingo cards.


Some Boston Marathon Numbers

I was enjoying the third quarter of the tight Raptors vs Wizards game on Sunday night when my coworker sent me this article and the accompanying comments on the Boston Marathon:

Oh my. This article makes me disappointed. So let’s skip Cavs/Pacers and Westworld and dig in.


On the surface it feels like the article is going to have math to back up the claim that “men quit and women don’t.” It has *some:*

But finishing rates varied significantly by gender. For men, the dropout rate was up almost 80 percent from 2017; for women, it was up only about 12 percent. Overall, 5 percent of men dropped out, versus just 3.8 percent of women. The trend was true at the elite level, too.

And some attempt to examine more than just the 2018 race:

But at the same race in 2012, on an unusually hot 86-degree day, women also finished at higher rates than men, the only other occasion between 2012 and 2018 when they did. So are women somehow better able to withstand extreme conditions?

But that’s it. No more actual math or analyses. Just some anecdotes and attempts to explain biological or psychological reasons for the difference.

Let’s ignore those reasons (controversial as they may be) and just look at the numbers.


The metrics used are ill-defined. There is mention of how the midrace dropout rate was up 50 percent overall from last year, but no split by gender. As quoted above, the finishing rates varied significantly by gender, but no numbers are given. Only the overall dropout rates are reported. What does overall dropout rate mean? I assume it is a combination of runners who dropped before the race began plus those who dropped midrace. And then the overall dropout rates are 3.8% for women and 5% for men. But the splashy number is that men dropped out 80% more than last year whereas women only dropped out 12% more. Is… is that right? I’ve already gone cross-eyed. The whole thing reeks of hacking and obscures the meaning.

There are a lot of numbers here. Some are combined across genders. Some are overall rates, some are midrace. Some are differences across years.

Frustrated with the lack of numbers in the article, I went looking for the actual numbers. I found the data on the official website. I wish it had been linked in the article itself…


2018     entered   started   finished   finished/started
all       29,978    26,948     25,746             95.50%
male      16,587    14,885     14,142             95.00%
female    13,391    12,063     11,604             96.20%

Now we can do some proper statistics.

First, we can perform an actual two sample test and construct confidence intervals to see if there was a difference in finishing rates between genders.

For those who entered the race, the 95% confidence interval for the difference in percent finished between males and females was (-0.022, -0.006).

For those who started the race, the 95% confidence interval for the difference in percent finished between males and females was (-0.017, -0.007).
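For anyone who wants to reproduce these, the intervals are standard two-proportion (Wald) confidence intervals. A sketch using the starters and finishers from the 2018 numbers above:

```python
import math

def two_prop_ci(x1, n1, x2, n2, z=1.96):
    """95% Wald confidence interval for the difference in proportions p1 - p2."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff - z * se, diff + z * se

# 2018 starters: males finished 14,142 of 14,885; females 11,604 of 12,063.
lo, hi = two_prop_ci(14142, 14885, 11604, 12063)
print(round(lo, 3), round(hi, 3))  # -0.017 -0.007
```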

The difference is technically significant, but not at all interesting. And that is ignoring the fact that we shouldn’t really care about p-values to begin with.

But the article mentions dropout rate, not finishing rate, so let’s use that metric:

Of those who started the race, about 5% of males and 3.8% of females dropped out.

For those who started the race, the 95% confidence interval for the difference in percent dropout between males and females was (0.0069, 0.0168).

So yes, there is a significant difference. But with these kinds of sample sizes, it’s not surprising or interesting to see a tiny significant difference.

But what about 2017? What about the big change from 2017 to 2018? After all, the main splashy metric is the 80% increase in dropout for men.

2017 (numbers from here)

2017     entered   started   finished   finished/started
all       30,074    27,222     26,400             97.00%
male      16,376    14,842     14,431             97.20%
female    13,698    12,380     11,969             96.70%

In 2017, for those who entered the race, the 95% confidence interval for the difference in percent finished between males and females was (-0.00006, 0.01497).

And in 2017, for those who started the race, the 95% confidence interval for the difference in percent finished between males and females was (0.0013, 0.0097).

Of those who started the race in 2017, about 2.8% of males and 3.3% of females dropped out.

For those who started the race in 2017, the 95% confidence interval for the difference in percent dropout between males and females was (-0.0097, -0.0013).

So it does look like women dropped out more than men in 2017, the reverse of the 2018 pattern. But the difference is so tiny that… whatever. This isn’t interesting. But at least now there are actual statistics to back up the claim.

But really, there’s not a lot going on here.

And FINALLY, we can look at the differences from 2017 to 2018.

The dropout rate for females increased from ~3.3% to ~3.8% which (using the exact numbers) was an increase of about 14.6% (not the 12% reported in the NYT article). The dropout rate for males increased from ~2.8% to ~5.0% which (using the exact numbers) was an increase of about 80% as reported.
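Those year-over-year increases can be checked directly from the starters and finishers in the two tables:

```python
# Dropout rate = (started - finished) / started, from the 2017 and 2018 tables.
male_2017 = (14842 - 14431) / 14842    # ~2.8%
male_2018 = (14885 - 14142) / 14885    # ~5.0%
female_2017 = (12380 - 11969) / 12380  # ~3.3%
female_2018 = (12063 - 11604) / 12063  # ~3.8%

male_increase = male_2018 / male_2017 - 1
female_increase = female_2018 / female_2017 - 1
print(f"male: +{male_increase:.1%}, female: +{female_increase:.1%}")
# male: +80.3%, female: +14.6%
```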

At least now I understand where these numbers are coming from.

I still don’t buy it. Using dropout numbers instead of finishing numbers makes ratios much larger. An 80% increase in dropout sounds a lot more impressive than a 2% drop in finishing.

And that’s all before we try to compare to other years that might have also had extreme weather. If I had more time or interest I might look at the temperature, humidity, wind speed, wind direction etc for the past 20+ marathons. And then look at differences in dropout/finishing rate for men and women while controlling for weather conditions. That sort of analysis still probably wouldn’t convince me, but it would get closer.


This article is really frustrating. There are just enough scraps of carefully chosen numbers to make differences seem bigger than they really are. Comparing dropout rates to finishing rates is a bit hacky, and then comparing just two years (as opposed to many) gets even hackier. There’s an interesting hypothesis buried in the article and the data. And if we were to pull data on many marathons, we might get closer to actually being able to test if dropout rates vary by gender according to conditions. But the way the data is presented in the article obscures any actual differences and invites controversy. Audiences are eager for guidance with statistics and math. Tossing around a few numbers without explaining them (or giving a link to the source…) is such poor practice.

That Other Site I Work On

This site has been sparse lately and it is because I’ve been busy with two other projects.

The first is my actual day job. I finished my PhD in May of 2017 and began working at Verily Life Sciences in August of 2017. Did I turn down some jobs with pro teams? Yes. Yes I did. Why? That’s a story for another day. I like what I do at Verily. I get to have fun, with people I like, working on cool healthcare projects. Plus we work out of the Google offices in Cambridge which are very nice and full of free food and fun toys.

The second project I’ve been working on is the visualizations section of Udam Saini’s EightThirtyFour.


Udam and I worked together on this site’s NBA foul project, which started as an attempt to quantify how mad DeMarcus Cousins gets in games. We built survival models and visualizations to examine how players accrue fouls. But these models can just as easily be applied to assists, blocks, etc. In fact, I took the ideas and examined how Russell Westbrook accrued assists in his historic triple-double season. Using survival models, we can see that the time between assists increased significantly after he reached 10 assists in a game, which could be seen as evidence in favor of stat padding.

The tool we’ve built on the site linked above allows you to look at survival visualizations and models for pretty much any player in seasons between 2011 and 2017. The stats primer linked in the first line has more explanation and some suggestions for players and stats to look at.

Survival analysis models and visualizations are not always the easiest to explain, but I think there is value in having other ways to analyze and examine data. Survival analysis can help us better understand things like fatigue and stat padding, and it can help add some math to intangible things like “tilt.”

This project was also a lesson in working on a problem with a proper software engineer. I am a statistician and I’m used to a certain amount of data wrangling and cleaning, but I largely prefer to get data in a nice data frame and go from there. And I certainly don’t have the prowess to create a cool interactive tool on a website that blends SQL and R and any number of other engineer-y things. Well. I’d like to think I could, but it would take ages and look much uglier. And be slower. Conversely, my partner in crime Udam probably can’t sort through all the statistics and R code as fast as I can. My background isn’t even in survival analysis, but I still understand it better than a SWE. So this part of his site was a chance for us to combine powers and see what we could come up with. In between our actual Alphabet jobs, of course.

I think in the world of sports analytics, it’s hard to find somebody who has it all: excellent software engineering skills, deep theoretical knowledge of statistics, and deep knowledge of the sport (be it basketball or another sport). People like that exist, to be sure, but they likely already work for teams or are in other fields. I once tried to be an expert in all three areas and it was very stressful and a lot of work. Once I realized that I couldn’t do it all by myself and started looking for collaborations, I found that I was able to really shine in my expert areas and have way more fun with the work I do.

The same is true in any field. I wasn’t hired by Verily to be a baller software engineer *and* an expert statistician *and* have a deep understanding of a specific health care area. I work with awesome healthcare experts and engineers and get to focus just on my area of expertise.

In both my job and my side sports projects my goal is always to have fun working on cool problems with people I like. It’s more fun to be part of a team.

Anyway, have fun playing with the site, and if you have any suggestions, let us know :]

Russell Westbrook and Assists


I was going to flesh this idea out and refine it for a proper paper/poster for NESSIS, but since I have to be in a wedding that weekend (sigh), here are my current raw thoughts on Russell Westbrook. I figured it was best to get these ideas out now, before I become all consumed by The Finals.

I’ve been thinking a lot about Russell Westbrook and his historic triple-double season. Partially I’ve been thinking about how arbitrary the number 10 is, and how setting 10 to be a significant cutoff is similar to setting 0.05 as a p-value cutoff. But also I have been thinking about stat padding. It’s been pretty clear that Westbrook’s teammates would let him get rebounds, but there’s also been a bit of a debate about how he accrues assists. The idea being that once he gets to 10, he stops trying to get assists. Now this could mean that he passes less, or his teammates don’t shoot as much, or whatever. I’m not concerned with the mechanism, just the timing. For now.

I’ll be examining play-by-play data and box-score data from the NBA for the 2016–2017 season. This data is publicly available from http://www.nba.com. The play-by-play contains rich event data for each game. The box score includes which players started the game and which players were on the court at the start of each quarter. Data, in csv format, can be found here.

Let’s look at the time to assist for every assist Westbrook gets and see if it significantly changes for assists 1-10 vs 11+. I thought about looking at every assist by number and doing a survival analysis, but soon ran into problems with sparsity and granularity. Westbrook had games with up to 22 assists, so trying to look at them individually got cumbersome. Instead I decided to group assists as follows: 1-7, 8-10 and 11+. I reasoned that Westbrook’s accrual rate for the first several assists would follow one pattern, which would then increase as he approached 10, and then taper off for assists 11+.

I freely admit that may not be the best strategy and am open to suggestions.

I also split out which games I would examine into 3 groups: all games, games where he got at least 11 assists, and games where he got between 11 and 17 assists. This was to try to account for right censoring from the end of the game. In other words, when we look at all games, we include games where he only got, say, 7 assists, and therefore we cannot hope to observe the difference in time to assist 8 vs assist 12. Choosing to cut at 17 assists was arbitrary and I am open to changing it to fewer or more.

Basic Stats

Our main metric of interest is the time between assists, i.e. how many seconds of player time (so time when Westbrook is on the floor) occur between assists.
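As a sketch of that bookkeeping (with made-up on-floor timestamps, not Westbrook’s actual play-by-play), the gaps and the 1–7 / 8–10 / 11+ grouping might be computed like this:

```python
import statistics

# Hypothetical on-floor clock readings (seconds of player time) at which
# each assist in one game was recorded -- illustrative numbers only.
assist_times = [110, 250, 430, 700, 910, 1150, 1400, 1580, 1800, 2100, 2600]

# Time between consecutive assists; the first gap is measured from tip-off.
gaps = [t1 - t0 for t0, t1 in zip([0] + assist_times, assist_times)]

def assist_group(n):
    """Bucket the n-th assist into the groups used above."""
    if n <= 7:
        return "1-7"
    if n <= 10:
        return "8-10"
    return "11+"

# Collect gaps by group, then summarize each group.
by_group = {}
for i, gap in enumerate(gaps, start=1):
    by_group.setdefault(assist_group(i), []).append(gap)

for group, g in by_group.items():
    print(group, statistics.mean(g), statistics.median(g))
```

In the real data the clock only runs while Westbrook is on the floor, so the timestamps above stand in for accumulated player time, not game time.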

First, let us take a look at some basic statistics, where we examine the mean, median, and standard deviation for the time to assist broken down by group and by the different sets of games. Again, this is in seconds of player time.


We can see that if we look at all games, it appears that the time between assists goes down on average once Westbrook gets past 10 assists. However, this sample includes games where he got upwards of 22 assists, which, given the finite length of games, means assists would tend to happen more frequently. Limiting ourselves to games with at least 11 assists, or games with 11–17 assists, gives a view of a more typical high-assist game. We see in (1b) and (1c) that time to assist increases on average once Westbrook got his 10th assist.

However, these basic statistics only account for assists that Westbrook actually recorded; they do not account for any right censoring. That is, say Westbrook gets 9 assists in the first half alone, and doesn’t record another assist all game despite playing, say, 20 minutes in the second half. If the game were to go on indefinitely, Westbrook would eventually record that 10th assist, say after 22 minutes. But since we never observe that hypothetical 10th assist, that contribution of 22 minutes isn’t included, and neither are the 20 assist-less minutes he actually played. This basic censoring problem is why we use survival models.


Next we can plot Kaplan–Meier survival curves for Westbrook’s assists broken down by group and by the different sets of games. I used similar curves when looking at how players accrue personal fouls – and I’ll borrow my language from there:

A survival curve, in general, is used to map the length of time that elapses before an event occurs. Here, the curves give the probability that a player has “survived” to a certain time without recording an assist (grouped as explained above). These curves are useful for understanding how a player accrues assists while accounting for the total length of time during which the player is followed, and they allow us to compare how different assists are accrued.
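For readers who want the mechanics, here is a minimal from-scratch Kaplan–Meier estimator in Python (toy numbers, not the real assist data; the actual curves were fit with standard survival-analysis software). Each observation is a gap time plus a flag for whether the next assist was actually observed or the observation was censored first:

```python
def kaplan_meier(times, observed):
    """Minimal Kaplan-Meier estimator.

    times    -- gap lengths (seconds until the next assist, or until
                the observation was cut off)
    observed -- 1 if the assist actually happened, 0 if censored
                (e.g. the game ended first)
    Returns a list of (time, survival probability) steps.
    """
    data = sorted(zip(times, observed))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        # Count events and total removals at this time point; ties of
        # events and censorings are handled with events first.
        deaths = sum(1 for tt, d in data[i:] if tt == t and d == 1)
        removed = sum(1 for tt, _ in data[i:] if tt == t)
        if deaths > 0:
            surv *= 1 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= removed
        i += removed
    return curve

# Toy example: five gaps, one of which was censored by the end of the game.
gaps = [120, 200, 200, 350, 500]
seen = [1,   1,   0,   1,   1]
for t, s in kaplan_meier(gaps, seen):
    print(t, round(s, 3))
```

The censored observation at 200 seconds still counts toward the at-risk set up to that point, which is exactly the information the naive means and medians above throw away.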


Here it is very easy to see that the time between assists increases significantly once Westbrook has 10 assists. This difference is apparent regardless of which subset of games we look at, though the increase is more pronounced when we ignore games with fewer than 11 assists. We can also see that the time between assists doesn’t differ significantly between the first 7 assists and assists 8 through 10.

Survival Models

Finally we could put the data into a conditional risk set model for ordered events. I’m not sure this is the best model to use for this data structure, given that I grouped the assists, but it will do for now. I recommend not looking at the actual numbers and just noticing that yes, there is a significant difference between the baseline and the group of 11+ assists.


If interested, we can find the hazard ratios associated with each assist group. To do so, we exponentiate the coefficients, since each coefficient is the log hazard ratio with respect to the baseline of the 1st through 7th assists. For example, looking at the final column, we see that, in games where Westbrook had between 11 and 17 assists, he was 63% less likely to record a given assist beyond his 10th than he was to record one of his first 7 assists (the baseline group). Interpreting coefficients is very annoying at times. The takeaway here is yes, there is a statistically significant difference.
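The back-and-forth between coefficients and hazard ratios is just exponentiation. A quick sketch (the coefficient here is reverse-engineered from the 63% figure, not pulled from the actual model output):

```python
import math

# A hazard ratio of 0.37 ("63% less likely") corresponds to a fitted
# log hazard ratio (the model coefficient) of ln(0.37), about -0.99.
coef = math.log(0.37)

hazard_ratio = math.exp(coef)   # exponentiate to get back to 0.37
reduction = 1 - hazard_ratio    # 0.63 -> "63% less likely than baseline"

print(f"coef {coef:.2f} -> hazard ratio {hazard_ratio:.2f} "
      f"-> {reduction:.0%} reduction vs. assists 1-7")
```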


Based on some simple analysis, it appears that the time between Russell Westbrook’s assists increased once he reached 10 assists. This may contribute to the narrative that he stopped trying to get assists after he reached 10. Perhaps this is because he stopped passing, or perhaps it’s because his teammates just shot less effectively on would-be-assisted shots after 10. Additionally, there are many other factors that could contribute to the slowdown in assists. Perhaps there is general game fatigue, and assist rates drop off for all players. Maybe those games were particularly close in score and therefore Westbrook chose to take jump shots himself or drive to the basket.

What’s great is that a lot of these ideas can be explored using the data. We could look at play by play data and see if Russ was passing at the same rates before and after assist number 10. We could test if assist rates decline overall in the NBA as games progress. I’m not sure which potential confounding explanations are worth running down at the moment. Please, please, please, let me know in the comments, via email, or on Twitter if you have any suggestions or ideas.

REMINDER: The above analysis is something I threw together in the days between my graduation celebrations and The Finals starting and isn’t as robust or detailed as I might like. Take with a handful of salt.