As of September 1st 2019, I will be the Director of Strategic Research for the Toronto Raptors.
As such, this site will be on hiatus.
Recently I’ve seen a lot of misunderstanding about how/why cross validation (CV) is used in model selection/fitting. I’ve seen it misused in a number of ways and in a number of settings. I thought it might be worth it to write up a quick, hopefully accessible guide to CV.
Part 1 will cover general ideas. There won’t be any practical sports examples as I am trying to be very general here. I’ll have part 2 up in December with a practical example using some NBA shot data. I’ll post the code in a colab so people can see CV in action.
(There may be some slight abuse of notation)
Cross validation is generally about model selection. It also can be used to get an estimate of error.
Statistics is about quantifying uncertainty. Cross validation is a way to quantify uncertainty for many models and use the uncertainty to select one. Cross validation also can be used to quantify the uncertainty of a single model.
I think it is important to be clear about what I mean by a model. In simple terms, a statistical model is the way we are going to fit the data. A model encodes the underlying assumptions we have about the data generating process.
Examples of models might be:
- A linear regression with variables A, B, C, and D
- A logistic regression with variables A, B, C, and A^2
- A LASSO model, which performs its own variable selection
When I refer to a model, I mean the general framework of the model, such as linear with variables A, B, C, and D. When I refer to a model fit, I mean the version of that model fit to the data, such as the coefficients on A, B, C, and D in a linear model (and intercept if needed).
Generally we use cross validation to pick a model from a number of options. It helps us avoid overfitting. CV also helps us refine our beliefs about the underlying data generating process.
In the case of outcome prediction, we often need to tune the inputs used in the model, explore different types of models, or determine which independent variables are of interest. We use CV to get estimates of error/metrics for various models (score, correlation, MSE, whatever) and pick one model from there. We can have CV do feature selection as well, but then we are testing that particular variable selection method, not the variables chosen by the selection method. For example, we can use CV on a LASSO model (which incorporates variable selection), in which case we are testing LASSO, not the variables it selected. We could also test some particular set of variables in a model of their own.
For each candidate model:
- Split the training data into k folds.
- Fit the model on k-1 of the folds.
- Use the fitted model to predict outcomes for the held-out kth fold.
- Compare those predictions to the truth and calculate your error metrics.
- Repeat until every fold has been held out once.
Fit all the models this way and, generally, take the average of whatever metrics you calculated for each fold. But we might also look at the variance of the metrics. Then look at which model performs “best.” The definition of “best” will depend on what we care about. It could be about maximizing true positives and true negatives. Or minimizing squared error loss. Or getting within X%. Whatever.
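As a concrete sketch of that selection loop, here is a toy comparison of two deliberately simple candidate models (a simple linear regression versus an intercept-only "mean" model). All data and names here are made up for illustration; this is not from any real sports dataset:

```python
import random
import statistics

def kfold_indices(n, k, seed=0):
    """Shuffle 0..n-1 and split into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def fit_linear(xs, ys):
    """Closed-form simple linear regression: y = a + b*x."""
    xbar, ybar = statistics.fmean(xs), statistics.fmean(ys)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    a = ybar - b * xbar
    return lambda x: a + b * x

def fit_mean(xs, ys):
    """Intercept-only model: always predict the training mean."""
    ybar = statistics.fmean(ys)
    return lambda x: ybar

def cv_mse(xs, ys, fitter, k=5):
    """Per-fold mean squared error for one candidate model."""
    errors = []
    for fold in kfold_indices(len(xs), k):
        held = set(fold)
        train = [i for i in range(len(xs)) if i not in held]
        model = fitter([xs[i] for i in train], [ys[i] for i in train])
        errors.append(statistics.fmean((ys[i] - model(xs[i])) ** 2
                                       for i in fold))
    return errors

# Toy data with a real linear trend plus noise.
rng = random.Random(42)
xs = [i / 10 for i in range(100)]
ys = [2 * x + 1 + rng.gauss(0, 0.5) for x in xs]

scores = {name: cv_mse(xs, ys, fitter)
          for name, fitter in [("linear", fit_linear), ("mean-only", fit_mean)]}
for name, errs in scores.items():
    print(name, round(statistics.fmean(errs), 3), round(statistics.stdev(errs), 3))
```

Since the toy data genuinely has a linear trend, the linear candidate should win on average fold MSE by a wide margin, which is exactly the comparison the procedure above formalizes.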
Once we have chosen a model based on the training data, we still want to get an estimate of prediction/estimation error for any new data that would be collected independently of the training data. If we have a new set of data, say we get access to a new season for a sport, then we can get an estimate of error using that new data. So if we decided that a simple logistic model with variables A, B, C, and A^2 is the “best”, we fit that model on all the training data to get coefficient estimates. We then predict outcomes for the new data, compare to the truth, and look at the error/correlation/score/whatever etc. This gives us our estimate of the true error/MSE/accuracy/AUROC etc.
We could instead decide that a LASSO model is best. So we fit LASSO on all our training data and use that to get coefficients/variables which we then apply to the validation data.
In the absence of a new data set, we still need to estimate the error. We can use CV to get an estimate of the error. We could do the same thing if we had a priori decided to use a model with certain variables. We take the model we decided on a priori, fit on k-1 folds, apply the fitted model to the held out kth fold, compare to truth, calculate error and other metrics etc. Do this for all folds and get metrics for all k folds. Then we can average those metrics, or look at their distribution.
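A minimal sketch of that error-estimation use of CV, with toy data and an a priori chosen intercept-only model (it just predicts the training mean); the point is the per-fold error distribution, not the model:

```python
import random
import statistics

# Toy outcome data centered at 10 with noise sd 2.
rng = random.Random(7)
ys = [10 + rng.gauss(0, 2) for _ in range(200)]

k = 10
idx = list(range(len(ys)))
rng.shuffle(idx)
folds = [idx[i::k] for i in range(k)]  # k roughly equal folds

fold_mses = []
for fold in folds:
    held = set(fold)
    # Fit the a priori model on the other k-1 folds (here: take the mean) ...
    train_mean = statistics.fmean(ys[i] for i in idx if i not in held)
    # ... and score it on the held-out fold.
    fold_mses.append(statistics.fmean((ys[i] - train_mean) ** 2 for i in fold))

print("CV error estimate:", round(statistics.fmean(fold_mses), 2),
      "+/-", round(statistics.stdev(fold_mses), 2))
```

The mean of the fold MSEs is the CV estimate of error; the spread across folds gives a sense of its uncertainty.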
Remember, statistics is about quantifying uncertainty. Cross validation is a tool for doing that. We do not get the final fitted model during the cross validation step.
Common scenarios, mistakes, and how to fix them:
The talk concerns NBA foul rates and is a condensed version of the blog posts found on this very site (Part 1 can be found here). However, the numbers are updated to include more recent seasons.
I spent the weekend of October 19-21 in Pittsburgh at the 2018 CMU Sports Analytics Conference. One of the highlights of the weekend was Sam Ventura asking me to explain causal inference in 15 seconds. I couldn’t quite do it, but it morphed into trying to explain all of statistics in 30 seconds. Which I then had to repeat a few times over the weekend. Figured I’d post it so people can stop asking. I’m expanding slightly.
Broadly speaking, statistics can be broken up into three categories: description, prediction, and inference.
I’ll give an example in the sports analytics world, specifically basketball (this part is what I will say if I only have 30 seconds):
My day job is working for a tech healthcare company, and the following are the examples I normally use in that world:
So, it’s not *all* of statistics. But I think it’s important to understand the different parts of statistics. They have different uses and different interpretations.
Any time I am at a sports conference there is always the question of “how does one succeed in/break into the field?” Many others have written about this topic, but I’ve started to see a lot of common themes. So….
Success in sports analytics/statistics seems to require these 4 abilities:
Imagine that each area has a max of 10 points. You gotta have at least 5/10 in every category and then like, at least 30 points overall. Yes I am speaking very vaguely. But the point is, you don’t have to be great at everything, but you do have to be great at something and decent at everything.
I don’t feel like I actually know that much about basketball or baseball, or any sport really. I didn’t play any sport in college, and generally when I watch games, I’m just enjoying the game. While watching the Red Sox in the playoffs I don’t really pay attention to the distribution of David Price’s pitches, I just enjoy watching him pitch. Hell, I spend more time wondering what Fortnite skins Price has. I’ve been guessing Dark Voyager, but he also seems like the kind of guy to buy a new phone just to get the Galaxy skin. Anyway. I’m not an expert, but I do know enough to talk sensibly about sports and to help people with more expertise refine and sharpen their questions.
And I know statistics. And years of teaching during graduate school helped me get pretty damn good at explaining complicated statistical concepts in ways that most people can understand. Plus I can code (though not as well as others). Sports teams are chock full of sports experts, they need experts in other areas too.
These four skills are key to succeeding in any sort of analytical job. I’m not a medical expert, but I work with medical experts in my job and complement their skills with my own.
Man, no matter what a talk is about, there’s always the questions/comments of “did you think about this other variable” (yes, but it wasn’t available in the data), “could you do it in this other sport…” (is there data available on that sport?), “what about this one example when the opposite happened?” (-_-), “you need to be clearer about how effects are mediated downstream, there’s no way this is a direct effect even if you’ve controlled for all the confounding” (ok that one’s usually me), etc.
Next time, we are going to make bingo cards.
I was enjoying the third quarter of the tight Raptors vs Wizards game on Sunday night when my coworker sent me this article and the accompanying comments on the Boston Marathon:
Oh my. This article makes me disappointed. So let’s skip Cavs/Pacers and Westworld and dig in.
On the surface it feels like the article is going to have math to back up the claim that “men quit and women don’t.” It has *some*:
But finishing rates varied significantly by gender. For men, the dropout rate was up almost 80 percent from 2017; for women, it was up only about 12 percent. Overall, 5 percent of men dropped out, versus just 3.8 percent of women. The trend was true at the elite level, too.
And some attempt to examine more than just the 2018 race:
But at the same race in 2012, on an unusually hot 86-degree day, women also finished at higher rates than men, the only other occasion between 2012 and 2018 when they did. So are women somehow better able to withstand extreme conditions?
But that’s it. No more actual math or analyses. Just some anecdotes and attempts to explain biological or psychological reasons for the difference.
Let’s ignore those reasons (controversial as they may be) and just look at the numbers.
The metrics used are ill-defined. There is mention of how the midrace dropout rate was up 50 percent overall from last year, but no split by gender. As quoted above, the finishing rates varied significantly by gender, but no numbers are given. Only the overall dropout rates are reported. What does overall dropout rate mean? I assume it is a combination of runners who dropped before the race began plus those who dropped midrace. And then the overall dropout rates are 3.8% for women and 5% for men. But the splashy number is that men dropped out 80% more than last year whereas women only dropped out 12% more. Is… is that right? I’ve already gone cross-eyed. The whole thing reeks of hacking and obscures the meaning.
There are a lot of numbers here. Some are combined across genders. Some are overall rates, some are midrace. Some are differences across years.
Frustrated with the lack of numbers in the article, I went looking for the actual numbers. I found the data on the official website. I wish it had been linked in the article itself…
| Category | Number Entered | Number Started | Number Finished |
| --- | --- | --- | --- |
Now we can do some proper statistics.
First, we can perform an actual two sample test and construct confidence intervals to see if there was a difference in finishing rates between genders.
For those who entered the race, the 95% confidence interval for the difference in percent finished between males and females was (-0.022, -0.006).
For those who started the race, the 95% confidence interval for the difference in percent finished between males and females was (-0.017, -0.007).
The difference is technically significant, but not at all interesting. And that is ignoring the fact that we shouldn’t really care about p-values to begin with.
But the article mentions dropout rate, not finishing rate, so let’s use that metric:
Of those who started the race, about 5% of males and 3.8% of females dropped out.
For those who started the race, the 95% confidence interval for the difference in percent dropout between males and females was (0.0069, 0.0168).
So yes, there is a significant difference. But with these kinds of sample sizes, it’s not surprising or interesting to see a tiny significant difference.
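For anyone who wants to reproduce this kind of interval, here is a sketch using the standard normal-approximation (Wald) interval for a difference in two proportions. The starter and dropout counts below are hypothetical, chosen only to match the rounded rates quoted above (the real counts are on the official results page), so the interval comes out near, but not exactly at, the one reported:

```python
import math

# Hypothetical counts matching the rounded rates in the post:
# ~5.0% of male starters and ~3.8% of female starters dropped out.
n_m, drop_m = 14000, 700    # males started / dropped out  -> 5.0%
n_f, drop_f = 12000, 456    # females started / dropped out -> 3.8%

p_m, p_f = drop_m / n_m, drop_f / n_f
diff = p_m - p_f
# Wald standard error for a difference of independent proportions.
se = math.sqrt(p_m * (1 - p_m) / n_m + p_f * (1 - p_f) / n_f)
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"difference: {diff:.4f}, 95% CI: ({lo:.4f}, {hi:.4f})")
```

With sample sizes in the tens of thousands, even a ~1 percentage point gap produces an interval comfortably away from zero, which is why "significant but uninteresting" is the right summary.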
But what about 2017? What about the big change from 2017 to 2018? After all, the main splashy metric is the 80% increase in dropout for men.
2017 (numbers from here)
| Category | Number Entered | Number Started | Number Finished |
| --- | --- | --- | --- |
In 2017, for those who entered the race, the 95% confidence interval for the difference in percent finished between males and females was (-0.00006, 0.01497).
And in 2017, for those who started the race, the 95% confidence interval for the difference in percent finished between males and females was (0.0013, 0.0097).
Of those who started the race in 2017, about 2.8% of males and 3.3% of females dropped out.
For those who started the race in 2017, the 95% confidence interval for the difference in percent dropout between males and females was (-0.0097, -0.0013).
So it does look like women dropped out at a higher rate than men in 2017, the reverse of 2018. But the difference is so tiny that… whatever. This isn’t interesting. But at least now there are actual statistics to back up the claim.
But really, there’s not a lot going on here.
And FINALLY, we can look at the differences from 2017 to 2018.
The dropout rate for females increased from ~3.3% to ~3.8% which (using the exact numbers) was an increase of about 14.6% (not the 12% reported in the NYT article). The dropout rate for males increased from ~2.8% to ~5.0% which (using the exact numbers) was an increase of about 80% as reported.
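That percent-change arithmetic is easy to check. Using the rounded rates here (the exact entrant counts are what give the ~14.6% and ~80% figures above):

```python
# Relative (percent) change in dropout rates, 2017 -> 2018, using the
# rounded rates from the post rather than the exact counts.
def pct_increase(old, new):
    return (new - old) / old * 100

female = pct_increase(3.3, 3.8)   # roughly 15% with rounded rates
male = pct_increase(2.8, 5.0)     # roughly 79% with rounded rates
print(round(female, 1), round(male, 1))
```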
At least now I understand where these numbers are coming from.
I still don’t buy it. Using dropout numbers instead of finishing numbers makes ratios much larger. An 80% increase in dropout sounds a lot more impressive than a 2% drop in finishing.
And that’s all before we try to compare to other years that might have also had extreme weather. If I had more time or interest I might look at the temperature, humidity, wind speed, wind direction etc for the past 20+ marathons. And then look at differences in dropout/finishing rate for men and women while controlling for weather conditions. That sort of analysis still probably wouldn’t convince me, but it would get closer.
This article is really frustrating. There are just enough scraps of carefully chosen numbers to make differences seem bigger than they really are. Comparing dropout rates to finishing rates is a bit hacky, and then comparing just two years (as opposed to many) gets even hackier. There’s an interesting hypothesis buried in the article and the data. And if we were to pull data on many marathons, we might get closer to actually being able to test if dropout rates vary by gender according to conditions. But the way the data is presented in the article obscures any actual differences and invites controversy. Audiences are eager for guidance with statistics and math. Tossing around a few numbers without explaining them (or giving a link to the source…) is such poor practice.
This site has been sparse lately and it is because I’ve been busy with two other projects.
The first is my actual day job. I finished my PhD in May of 2017 and began working at Verily Life Sciences in August of 2017. Did I turn down some jobs with pro teams? Yes. Yes I did. Why? That’s a story for another day. I like what I do at Verily. I get to have fun, with people I like, working on cool healthcare projects. Plus we work out of the Google offices in Cambridge which are very nice and full of free food and fun toys.
The second project I’ve been working on is the visualizations section of Udam Saini’s EightThirtyFour.
Udam and I worked together on this site’s NBA foul project, which started as an attempt to quantify how mad DeMarcus Cousins gets in games. We built survival models and visualizations to examine how players accrue fouls. But these models can just as easily be applied to assists, blocks etc. In fact, I took the ideas and examined how Russell Westbrook accrued assists in his historic triple-double season. By using survival models, we can see how the time between assists increased significantly after he reached 10 assists in a game. This could be seen as evidence in favor of stat padding.
The tool we’ve built on the site linked above allows you to look at survival visualizations and models for pretty much any player in seasons between 2011 and 2017. The stats primer linked in the first line has more explanation and some suggestions for players and stats to look at.
Survival analysis models and visualizations are not always the easiest to explain, but I think there is value in having other ways to analyze and examine data. Survival analysis can help us better understand things like fatigue and stat padding. And can help add some math to intangible things like “tilt.”
This project was also a lesson in working on a problem with a proper software engineer. I am a statistician and I’m used to a certain amount of data wrangling and cleaning, but I largely prefer to get data in a nice data frame and go from there. And I certainly don’t have the prowess to create a cool interactive tool on a website that blends SQL and R and any number of other engineer-y things. Well. I’d like to think I could, but it would take ages and look much uglier. And be slower. Conversely, my partner in crime Udam probably can’t sort through all the statistics and R code as fast as I can. My background isn’t even in survival analysis, but I still understand it better than a SWE. So this part of his site was a chance for us to combine powers and see what we could come up with. In between our actual Alphabet jobs, of course.
I think in the world of sports analytics, it’s hard to find somebody who has it all: excellent software engineering skills, deep theoretical knowledge of statistics, and deep knowledge of the sport (be it basketball or another sport). People like that exist, to be sure, but they likely already work for teams or are in other fields. I once tried to be an expert in all three areas and it was very stressful and a lot of work. Once I realized that I couldn’t do it all by myself and started looking for collaborations, I found that I was able to really shine in my expert areas and have way more fun with the work I do.
The same is true in any field. I wasn’t hired by Verily to be a baller software engineer *and* an expert statistician *and* have a deep understanding of a specific health care area. I work with awesome healthcare experts and engineers and get to focus just on my area of expertise.
In both my job and my side sports projects my goal is always to have fun working on cool problems with people I like. It’s more fun to be part of a team.
Anyway, have fun playing with the site, and if you have any suggestions, let us know :]
I will expand this talk into a larger blog post in the near future.
I was going to flesh this idea out and refine it for a proper paper/poster for NESSIS, but since I have to be in a wedding that weekend (sigh), here are my current raw thoughts on Russell Westbrook. I figured it was best to get these ideas out now … before I become all consumed by The Finals.
I’ve been thinking a lot about Russell Westbrook and his historic triple-double season. Partially I’ve been thinking about how arbitrary the number 10 is, and how setting 10 to be a significant cutoff is similar to setting 0.05 as a p-value cutoff. But also I have been thinking about stat padding. It’s been pretty clear that Westbrook’s teammates would let him get rebounds, but there’s also been a bit of a debate about how he accrues assists. The idea being that once he gets to 10, he stops trying to get assists. Now this could mean that he passes less, or his teammates don’t shoot as much, or whatever. I’m not concerned with the mechanism, just the timing. For now.
I’ll be examining play-by-play data and box-score data from the NBA for the 2016-2017 season. This data is publicly available from http://www.nba.com. The play-by-play contains rich event data for each game. The box-score includes data for which players started the game, and which players were on the court at the start of a quarter. Data, in csv format, can be found here.
Let’s look at the time to assist for every assist Westbrook gets and see if it significantly changes for assists 1-10 vs 11+. I thought about looking at every assist by number and doing a survival analysis, but soon ran into problems with sparsity and granularity. Westbrook had games with up to 22 assists, so trying to look at them individually got cumbersome. Instead I decided to group assists as follows: 1-7, 8-10 and 11+. I reasoned that Westbrook’s accrual rate for the first several assists would follow one pattern, which would then increase as he approached 10, and then taper off for assists 11+.
I freely admit that may not be the best strategy and am open to suggestions.
I also split out which games I would examine into 3 groups: all games, games where he got at least 11 assists, and games where he got between 11 and 17 assists. This was to try to account for right censoring from the end of the game. In other words, when we look at all games, we include games where he only got, say, 7 assists, and therefore we cannot hope to observe the difference in time to assist 8 vs assist 12. Choosing to cut at 17 assists was arbitrary and I am open to changing it to fewer or more.
Our main metric of interest is the time between assists, i.e. how many seconds of player time (so time when Westbrook is on the floor) occur between assists.
First, let us take a look at some basic statistics, where we examine the mean, median, and standard deviation for the time to assist broken down by group and by the different sets of games. Again, this is in seconds of player time.
We can see that if we look at all games, it appears that the time between assists goes down on average once Westbrook gets past 10 assists. However this sample of games includes games where he got upwards of 22 assists, which, given the finite length of games, means assists would tend to happen more frequently. Limiting ourselves to games with at least 11 assists, or games with 11-17 assists gives a view of a more typical game with many assists. We see in (1b) and (1c) that time to assist increases on average once Westbrook got his 10th assist.
However, these basic statistics only account for assists that Westbrook actually achieved; they do not account for any right censoring. That is, say Westbrook gets 9 assists in the first half alone and doesn’t record another assist all game despite playing, say, 20 minutes in the second half. If the game were to go on indefinitely, Westbrook would eventually record that 10th assist, say after 22 minutes. But since we never observe that hypothetical 10th assist, that 22-minute contribution isn’t included. Nor is even the 20 minutes of assist-less play. This basic censoring problem is why we use survival models.
Next we can plot Kaplan-Meier survival curves for Westbrook’s assists broken down by group and by the different sets of games. I used similar curves when looking at how players accrue personal fouls – and I’ll borrow my language from there:
A survival curve, in general, is used to map the length of time that elapses before an event occurs. Here, they give the probability that a player has “survived” to a certain time without recording an assist (grouped as explained above). These curves are useful for understanding how a player accrues assists while accounting for the total length of time during which a player is followed, and they allow us to compare how different assists are accrued.
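For the curious, the Kaplan-Meier product-limit estimate is simple enough to sketch by hand. The inter-assist times below are made up for illustration, not taken from the Westbrook data; a 0 in `observed` marks a censored spell (e.g. the game ended before the next assist):

```python
from collections import Counter

def kaplan_meier(durations, observed):
    """Product-limit estimate of S(t) from right-censored data.

    durations: time to event or censoring (seconds between assists here);
    observed: 1 if the assist happened, 0 if the spell was censored.
    """
    # Count events (not censorings) at each distinct time.
    events = Counter(t for t, obs in zip(durations, observed) if obs)
    curve, surv = [], 1.0
    for t in sorted(events):
        at_risk = sum(1 for d in durations if d >= t)  # still being followed
        surv *= 1 - events[t] / at_risk
        curve.append((t, surv))
    return curve

# Toy inter-assist times in seconds; two spells end by censoring.
durations = [60, 90, 90, 120, 150, 200, 240, 300]
observed  = [1,  1,  0,  1,   1,   0,   1,   1]
for t, s in kaplan_meier(durations, observed):
    print(t, round(s, 3))
```

Note how the censored spells still contribute to the at-risk counts without ever registering an event; that is exactly the information the basic means and medians above throw away.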
Here it is very easy to see that the time between assists increases significantly once Westbrook has 10 assists. This difference is apparent regardless of which subset of games we look at, though the increase is more pronounced when we ignore games with fewer than 11 assists. We can also see that the time between assists doesn’t differ significantly between the first 7 assists and assists 8 through 10.
Finally we could put the data into a conditional risk set model for ordered events. I’m not sure this is the best model to use for this data structure, given that I grouped the assists, but it will do for now. I recommend not looking at the actual numbers and just noticing that yes, there is a significant difference between the baseline and the group of 11+ assists.
If interested, we can find the hazard ratios associated with each assist group. To do so we exponentiate the coefficients, since each coefficient is the log comparison with respect to the baseline of the 1st through 7th assists. For example, looking at the final column, we see that, in games where Westbrook had between 11 and 17 assists, he was 63% less likely at any given moment to record an assist beyond his 10th than he was to record one of his first 7 assists (the baseline group). Interpreting coefficients is very annoying at times. The takeaway here is yes, there is a statistically significant difference.
Based on some simple analysis, it appears that the time between Russell Westbrook’s assists increased once he reached 10 assists. This may contribute to the narrative that he stopped trying to get assists after he reached 10. Perhaps this is because he stopped passing, or perhaps it’s because his teammates just shot less effectively on would-be-assisted shots after 10. Additionally, there are many other factors that could contribute to the increase in time between assists. Perhaps there is general game fatigue, and assist rates drop off for all players. Maybe those games were particularly close in score and therefore Westbrook chose to take jump shots himself or drive to the basket.
What’s great is that a lot of these ideas can be explored using the data. We could look at play by play data and see if Russ was passing at the same rates before and after assist number 10. We could test if assist rates decline overall in the NBA as games progress. I’m not sure which potential confounding explanations are worth running down at the moment. Please, please, please, let me know in the comments, via email, or on Twitter if you have any suggestions or ideas.
REMINDER: The above analysis is something I threw together in the days between my graduation celebrations and The Finals starting and isn’t as robust or detailed as I might like. Take with a handful of salt.
The weekend has come and gone and so has the 2017 Sloan Sports Analytics Conference. This was the third time I attended the conference and easily the most enjoyable experience I have had to date.
Many others have recapped a lot of the conference’s analytics content, so I don’t feel the need to repeat much of that. Moreover, I don’t have the journalistic abilities yet to condense everything I learned into a nice blog entry. AND I have a proper dissertation committee meeting this week, followed by the ENAR biometrics conference next week. Between the two, I haven’t been burdened with an abundance of time. So here are some thoughts on the conference, which will inevitably spiral into larger thoughts on the field as a whole.
My experience at SSAC this year was a weird mix of trying to see famous people speak, trying to hear interesting analytics/statistics talks, and trying to meet as many people as possible. In previous years I didn’t know anyone and wasn’t thinking seriously about a career in this field, so I prioritized panels with famous speakers. This was great for maximizing entertainment value. But now that I am making a proper attempt to pursue sports analytics as a career, it was clear I needed to actually understand where the field is and where it’s going… while still taking time to see big names where possible. Because who can resist Nate Silver and Mark Cuban, or Nate Silver and Adam Silver? It’s clear that experiences at SSAC will vary greatly depending on interests and goals.
It’s also interesting to be at the conference while in a position of actively looking for a job. During almost every conversation I had, I was trying to maintain a balance between a number of potentially conflicting motivations. Mostly, I just wanted to nerd out and talk about sports stats with like-minded people. But I also wanted to make sure the work I am doing is in the right direction and get advice on how to be better. How can I improve my work not just to be better intrinsically, but also to have a bigger impact? And then at a certain point, especially if I was talking to somebody working for a team, I’d think “is this person on a team that is hiring? Would they want to hire me?” I’m better at networking than I used to be, but at the end of the day, I am still a somewhat awkward stats nerd. One big takeaway from the conference for me was that I need to be more aggressive and confident in general. It’s easy to have imposter syndrome. I eventually felt generally okay with the other stats folks, but at a conference with a lot of MBAs, it can be intimidating to talk to new people. Especially since I was in the minority at SSAC.
Yep, I’m going to talk about diversity for a minute. There are a lot of men at Sloan. A lot of white men. And of the women who are there, few are statisticians. I was lucky enough to meet Diana Ma who does analytics for the Indiana Pacers. We hugged out of sheer joy of finding another woman in sports stats. Diana is the first woman I have met in person who works for a team in any sport. I’ve been in STEM for most of my life, and I’m used to being in scenarios that are majority white male, but SSAC takes the cake. Conference attendees are, for the most part, aware of the demographic disparities. Not just about the lack of women, but the lack of any other minorities. And there are always conversations about how to increase the diversity of the conference and the field overall. I don’t have a good answer, but I’m glad people (including Daryl Morey) are talking about it.
Side note to the jerks on twitter, and elsewhere, questioning why diversity is important – this is for you. Even if you want to argue that diversity adds nothing to the end product, equality is important. Not everyone who had interest in the conference had access. And not everyone who might have had interest had access to resources to foster that growing interest.
Moreover, I distinctly remember being at SSAC in 2015 and hearing somebody say that women shouldn’t bother with this field, because it is such a man’s world. I can’t remember if it was on a panel or a conversation I overheard, but it struck a huge chord with me and was a large part of why I eschewed the field for so long. Fortunately, I am lucky enough to have incredibly supportive friends, family, and mentors.
Which brings me to a final, big takeaway from the conference this year. Success in sports analytics has a large component of luck. From the family into which you were born, to the school you attended and the TA you happen to have for a class, to who re-tweets you, to who you randomly happen to be sitting next to at a panel. Don’t get me wrong, you also need skill. You need to be good enough that when you are lucky enough to make a connection or have your blog post re-tweeted, people find value in it and pay attention.
Our entire careers are about quantifying uncertainty and randomness in the data we examine; we should acknowledge the randomness in our lives.
Anyway. I met a lot of really awesome people. I’m going to avoid trying to name everyone, because I’m sure I’ll forget somebody and then feel bad. But needless to say, everyone was friendly, smart, and incredibly welcoming. I wish the conference were a few days longer so things wouldn’t be so rushed, interesting panels wouldn’t overlap, and I’d have more time to chat with everyone. It’s all well and good to talk over email or the phone, but in person conversations are ideal. Maybe next year I just won’t sleep.
I hope everyone makes it out to NESSIS in September.
This is Part 3 of my series on DeMarcus Cousins and how NBA players accrue personal fouls.
Part 2 can be found here.
Part 1 can be found here.
I strongly recommend reading at least Part 2 before continuing as I reference it.
To provide more statistical rigor, we analyze our players using a conditional risk set model for ordered events. This model, first proposed by Prentice, Williams, and Peterson, models the hazard at each foul event time as a function of the current number of fouls accumulated and time since the last foul. The model is flexible and can include other covariates as needed. For this paper, our covariates include the lead or deficit in the score of the player’s team, game time in minutes, and an interaction between the two. We chose these covariates, as we believe that a closer game can have an impact on a player’s fouling rates. We include actual game time in minutes to reflect how close the game is to ending, and to account for potential overtime periods.
Let $T_{ik}$ and $C_{ik}$ be the foul and censoring times for the $k$th foul ($k = 1, 2, \ldots, 6$) in the $i$th game, and let $Z_{ik}$ be the vector of covariates for the $i$th game with respect to the $k$th foul. We assume $T_{ik}$ and $C_{ik}$ are independent given $Z_{ik}$. We then define $X_{ik} = \min(T_{ik}, C_{ik})$ and let $\beta$ be a vector of unknown regression coefficients. Under the proportional hazards assumption, the hazard function for the $i$th game and the $k$th foul is:

$$\lambda_k(t \mid Z_{ik}) = \lambda_{0k}(t)\exp\left(\beta^{\top} Z_{ik}\right)$$
From Table 2, we can see that the difference in score has a minimal impact on player fouling rates for Cousins, Horford, and Lopez, even after adjusting for game time. Closer games do not seem to cause more fouls to be committed. However, the total game time that has been played does have an impact: as time goes on, a player who has not yet fouled becomes less likely to foul, since game time has a negative relationship with the likelihood of fouling. In other words, this analysis shows that, as the game goes on, players are more likely to foul only if they have already fouled. This trend holds true for our three players of interest and for all centers when pooled together, which is surprising given the perception that players foul more later in games. These results are in line with what we saw in Figure 1, and are similarly likely due to the selection bias that precludes us from seeing every foul in every game.
As before, we can limit our analysis to games where the players had at least 5 fouls and examine the first four fouls. Table 3 displays the survival model output for Cousins, Horford, and Lopez when we use the restricted dataset. For all players, fouls 2, 3, and 4 are committed significantly sooner than the prior foul. To find the hazard ratios associated with each foul, we exponentiate the difference in the coefficients, since each coefficient is with respect to the baseline of the 1st foul. For example, when Cousins has 3 fouls he is 405% more likely to commit a foul at any given time than when he only has 2 fouls. Cousins is 303% more likely to commit a foul when he has four fouls compared to when he only has three. Although the hazard ratios increase dramatically with each foul, it is important to keep in mind that the initial probability of fouling at any given moment is low, as the first foul takes nearly 500 seconds (over 8 minutes) to occur on average for DeMarcus Cousins.
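The exponentiation step looks like this with hypothetical coefficients (illustrative values only, not the fitted numbers from Table 3):

```python
import math

# Hypothetical log hazard ratios vs. the 1st-foul baseline, as a
# conditional risk set (PWP) model would report them.
beta = {"foul2": 0.8, "foul3": 1.5, "foul4": 2.1}

# Hazard ratio vs. the baseline: exponentiate the coefficient itself.
hr_vs_baseline = {k: math.exp(b) for k, b in beta.items()}

# Hazard ratio between adjacent fouls: exponentiate the coefficient
# difference, e.g. how much likelier a 4th foul is at any instant than a 3rd.
hr_4_vs_3 = math.exp(beta["foul4"] - beta["foul3"])

print({k: round(v, 2) for k, v in hr_vs_baseline.items()}, round(hr_4_vs_3, 2))
```

The same trick gives the "405% more likely" style statements: a hazard ratio of 5.05 between adjacent fouls means a 405% increase in the instantaneous likelihood of fouling.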
It is interesting to note that the opposite effect happens with game time. As each minute of game time passes, Cousins’s hazard of committing a foul drops to about 90% of what it was the minute before. This trend holds for all players.
From the table, we can see that although all players seem to have this “tilting” behavior, DeMarcus Cousins has a higher likelihood of committing a foul than other players as he accrues fouls. Cousins seems to “tilt” more than other centers in our analysis. Part of this behavior may be explained by teams attacking players who already have many fouls, attempting to get them in foul trouble. However, we believe that no one factor can tell the complete story.