A New Method for In-Season Regression of Hockey Statistics

Whenever hockey analysts attempt to forecast future performance from past data, regression to the mean needs to be taken into account. A great season or a strong run of play often reflects random variation rather than underlying talent or strategy, particularly in a sport like hockey, where so much hinges on fortunate bounces and 50/50 plays. In general, it’s safe to assume that exceptional results (whether good or bad) will regress toward the average. Eric Tulsky describes an analytic technique for regressing statistics to the mean in this article, one that has been used by other hockey statisticians and even the Puck Prediction Playoff Forecast model. Essentially, the idea is to use the year-on-year autocorrelation r as a measure of a statistic’s repeatability, and to use (1 – r) to regress the value of that measure toward the average.
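
To make that concrete: for a statistic x with long-run league average x̄, the regressed estimate is r·x + (1 – r)·x̄, so a perfectly repeatable statistic (r = 1) is left alone, while a purely random one (r = 0) is pulled all the way back to the league average.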


When we’re talking about forecasting full-season performance, I don’t have a problem with this method. Still, as someone who uses it to regress partial-season measures to estimate their full-season values, I find that a number of things about it bother me. The correlation r is generally derived from complete seasons of data, but as I’ve found, most of the critical measures of team performance are extremely volatile in partial-season samples. The implication is that, in small samples, we really know nothing empirical about team performance, and if we’re doing things right, we would regress nearly all of the variability out of each team’s statistics. But, clearly, this becomes less true as the season goes along. As a team gets closer to 82 games played, we should become more confident that its performance is sustainable, for the simple reason that there are fewer remaining games in which it can regress back to the average. If, for example, a team has a 10% even-strength shooting percentage (unlikely, but not impossible, as the 2009-10 Capitals demonstrated) at the 70-game mark, it’s incredibly unlikely to regress back to 8% by season’s end. What we need is a method that allows us to adjust our uncertainty about team performance as the season moves along. Fortunately for all of you, I’ve put one together.

The starting point of my approach is the same as the original method: use the year-on-year autocorrelation in our measures of interest to estimate repeatability. Where my approach differs slightly is that I’m calculating the repeatability of event rates rather than the percentages we usually talk about. Using data from the five most recent 82-game NHL seasons (2007-08 through 2011-12, for n=150 team-seasons), I estimated year-on-year correlations for even-strength GF, GA, SF, and SA (adjusting the numbers for differences in 5-on-5 TOI), close-score even-strength Fenwick For and Fenwick Against (again, adjusting for differences in TOI spent in such situations), PP goals, PP opportunities, penalty kills, and PK chances. I then took 2013 data and, after applying the same TOI adjustments and extrapolating the 48-game numbers to an 82-game season, estimated expected values for these measures, by team, for the 2013-14 season, regressing them to their 2007-12 means using the correlation coefficients. Once these expected season totals are calculated, they can be scaled to the fraction of the 2013-14 schedule a team has yet to play and added to that team’s in-season totals, providing an estimate of what the 82-game numbers might look like.
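
To make the mechanics explicit, here is a minimal sketch of the two steps in Python. The shot rates, the single 2013 value, and the function names are all illustrative assumptions, not the actual 2007-12 data or the model’s code:

```python
import numpy as np

def yoy_autocorrelation(year1, year2):
    """Pearson correlation between the same teams' rates in consecutive seasons."""
    return np.corrcoef(year1, year2)[0, 1]

def regress_to_mean(observed, league_mean, r):
    """Shrink an observed 82-game rate toward the league mean by a factor of (1 - r)."""
    return r * observed + (1.0 - r) * league_mean

# Hypothetical TOI-adjusted even-strength SF rates for the same five teams
# in two consecutive seasons (made-up numbers for illustration only).
prior_season = np.array([28.1, 30.4, 26.9, 31.2, 29.5])
next_season = np.array([27.6, 29.8, 27.5, 30.1, 28.9])

r = yoy_autocorrelation(prior_season, next_season)
league_mean = np.concatenate([prior_season, next_season]).mean()

# A 2013 rate, already TOI-adjusted and extrapolated from 48 to 82 games:
observed_2013 = 31.0
expected_2013_14 = regress_to_mean(observed_2013, league_mean, r)
print(round(expected_2013_14, 1))
```

The more repeatable the rate (r close to 1), the less it gets pulled toward the league average.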

To provide an example, let’s say a team has played 43 games, and we want to gauge how sustainable their 9.9% shooting is likely to be. After 43 games, they’ve scored 104 goals on 1,052 shots, and based on my regressed estimate of their shooting performance using 2013 data, we would have expected them to score 145 goals on 1,778 shots over the full season. If we assume that the team (OK, you might have guessed I’m talking about Anaheim) will shoot and score at the pace we estimated, we simply multiply 145 and 1,778 by (39/82), the fraction of the schedule they have left to play: this tells us to expect the Ducks to score 69 more goals on 845 more shots over the remainder of their schedule. At that point, it’s just a matter of adding the even-strength goals and shots they’ve accumulated through 43 games to these totals. This gives us 173 goals on 1,897 shots, or a shooting percentage of 9.1%. Updating the analysis later in the season (after, say, 70 games), you would multiply our estimated 2013-14 goals and shots for Anaheim by (12/82) and add this to the observed totals. The implication, obviously, is that we rely more heavily on our expectation of regression to the mean early in the season, and trust the observed data more late in the campaign.
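
A quick sketch of that blending arithmetic, plugging in the Anaheim numbers quoted above (the function is mine, written for illustration, not pulled from the model):

```python
def blended_estimate(obs_goals, obs_shots, exp_goals_82, exp_shots_82,
                     games_played, season_length=82):
    """Combine observed in-season totals with the regressed 82-game expectation,
    scaled to the games remaining, and return the blended shooting percentage."""
    remaining = (season_length - games_played) / season_length
    goals = obs_goals + exp_goals_82 * remaining
    shots = obs_shots + exp_shots_82 * remaining
    return goals, shots, goals / shots

# Anaheim through 43 games, plus the regressed 82-game expectations from the text.
goals, shots, sh_pct = blended_estimate(104, 1052, 145, 1778, 43)
print(round(goals), round(shots), f"{sh_pct:.1%}")  # ~173 goals on ~1,898 shots, 9.1%
```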

The tables below depict the estimated 82-game values of team Sh%, Sv%, Fenwick Close %, PP %, and PK%, by division, regressed using this method. The observed data were pulled from Extra Skater and nhl.com on January 4, so they represent between 40 and 44 games played per team.

[Table: Atlantic Division regressed 82-game estimates]

[Table: Metropolitan Division regressed 82-game estimates]

[Table: Central Division regressed 82-game estimates]

[Table: Pacific Division regressed 82-game estimates]

About Nick Emptage

Nicholas Emptage is the blogger behind puckprediction.com. He is an economist by trade and a Sharks fan by choice.

14 Responses to A New Method for In-Season Regression of Hockey Statistics

  1. benjaminwendorf says:

    Nice work here. One thing I like to consider is the possibility of a bit greater retention for save percentage if the goaltender in question has exhibited sufficient ability. I know that’s not your intent here, as you’re building a model, though one thing that could be built in would be a margin of error for goaltenders.

    • Nick Emptage says:

      Thanks very much. You could definitely implement different regression patterns for goaltender Sv%. If a goalie really does sustain a better Sv% from year to year (a subject of some debate), that would be reflected in the estimated year-on-year autocorrelation. So you’d expect their performance to exhibit less regression to the mean.

      More generally, there’s no reason this method couldn’t be applied to individual player performance (assuming sufficient sample sizes).

  2. dan says:

    Excellent work.
    One question about how to handle the crazy 2013 season:
    Isn’t it problematic to extrapolate the 2013 stats from 48 to 82 games?
    Did I misread your post, or did you regress these totals at all? Some teams posted rates that were unsustainable.
    Wouldn’t it be better to regress them back to 2012, then use those regressed numbers as the starting point for 2014?
    Thanks, Dan

    • Nick Emptage says:

      It’s absolutely problematic to extrapolate the 48-game numbers for an 82-game season. It’s a really annoying tradeoff. On one hand, the 2013 numbers are almost certainly more volatile than the expected numbers from this season. On the other, if you’re going to use numbers from 2 seasons ago to estimate the % regression, you need to estimate a 2-year correlation. But there are lots of non-random reasons why a team’s numbers would be different over that length of time. We’d be expecting Detroit to play as though they still had Lidstrom in the lineup, and expecting the Penguins to be mostly Crosby-less. And statistically, you’re estimating that correlation using a much smaller sample, which is also problematic.

  3. dan says:

    Thanks a lot! I have been wrestling with this all year, but I thought I was missing something. Your explanation was terrific! Just got to make the best of it; glad it won’t happen again for at least 8 years. But your method will be great moving forward.

  4. jeff says:

    Good stuff and I was thinking of another approach.

    Mirtle tweeted out a list of players whose SH% over the last few seasons is above 14%:
    http://www.hockey-reference.com/play-index/psl_finder.cgi?request=1&match=combined&year_min=2012&year_max=&season_start=1&season_end=-1&age_min=0&age_max=99&birth_country=&franch_id=&is_active=&is_hof=&pos=S&handed=&c1stat=games_played&c1comp=gt&c1val=100&c2stat=shot_pct&c2comp=gt&c2val=14&c3stat=&c3comp=gt&c3val=&c4stat=&c4comp=gt&c4val=&order_by=shot_pct

    Mirtle points out Bozak is in fine company, but also look at how many Leafs are on that list (Lupul, Bolland, Kadri). The number of high-SH% players on the Leafs appears disproportionate compared to other teams across the league. But for a team like the Leafs that gets mightily outshot, the ability (skill) to convert shots at an above-average rate is critical.

    The Leafs losing Bolland and replacing him with McClement will have an impact on team SH% (and thereby PDO). But I think what is more interesting is that we can create an expected team SH% and an expected team PDO based on the career SH% of the team’s constituent players and the goalies’ SV%. This team metric would need to be weighted by the number of shots each player on the Leafs generates.

    The value is that this would give a much better idea of what team SH% should regress to. That is, we know PDO tends to 1000 and league-wide SH% tends to 8.5% (?), but a specific team’s mean SH% and PDO can be better estimated by looking at the career SH% of its players (and the SV% of its goalie). This would allow us to better approximate how much the team SH% sits above or below the expected mean. Further, by looking at the goalie’s career SV%, we can then calculate an expected PDO. You may want to include this.

    • Nick Emptage says:

      An interesting thing about Sh%: the year-on-year autocorrelation in Sh% is nearly 0, but this isn’t true of its component parts (GF and SF). This is why I focused on event rates rather than percentages in this analysis. YOY rates of GF have almost no correlation to one another, but rates of SF tend to stay pretty steady from year to year.

      • macrojeff says:

        Does player SH% or team SH% have this property? The idea was to take the career SH% of each player (not the team), and then create a weighted team SH% by summing player 1’s career SH% × player 1’s shot attempts this year + player 2’s career SH% × player 2’s shot attempts, and so on.

        This allows an “expected” team SH% to be calculated.

      • Nick Emptage says:

        That’s an empirical question I haven’t seen answered, but I would guess it applies to player SF also.

        Two things you’d need to account for in creating the weighted average you’re talking about: player 5v5 TOI is more volatile than team 5v5 TOI, so the shot and goal rates would be very unstable; and you’d need to account for injuries somehow if you’re estimating team shooting this way, since a team’s shooting is often dependent on a few guys (e.g., PIT with Crosby, Malkin and Neal).
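
        For what it’s worth, a minimal sketch of the shot-weighted expected team Sh% being discussed here might look like the following; the players, career percentages, and shot counts are entirely made up, and this isn’t part of the regression model in the post:

        ```python
        # Expected team Sh% as a shot-weighted average of career Sh% (illustrative data only).
        players = [
            # (player, career Sh%, shots taken this season) -- hypothetical values
            ("Player A", 0.135, 180),
            ("Player B", 0.090, 220),
            ("Player C", 0.075, 150),
        ]

        expected_goals = sum(career_sh * shots for _, career_sh, shots in players)
        total_shots = sum(shots for _, _, shots in players)
        expected_team_sh = expected_goals / total_shots
        print(f"Expected team Sh%: {expected_team_sh:.1%}")
        ```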

