Weekend Hockey Analytics Links: April 25, 2015

The first round of the playoffs is nearing its conclusion. As I write this, the Anaheim Ducks and New York Rangers are through to round two, and three more teams will try to punch their ticket today: the Washington Capitals will try for the fourth time to eliminate a team led by Jaroslav Halak, the Blackhawks will take their second crack at knocking out the Predators, and the Flames will try to keep their unlikely 2014-15 season going. In the meantime, below are links to some interesting hockey analytics reading I came across recently:

  • Following their 5-1 drubbing at home at the hands of the Senators last night, the Canadiens are a bit more nervous about their first-round series, having gone from a 3-0 lead to 3-2 (with Game 6 looming in Ottawa tomorrow). Prior to yesterday’s game, Chris Boyle wrote an interesting (and prescient) analysis suggesting that the Habs may be closer to blowing that 3-0 lead than many realize. As a Sharks fan, let me offer a word of advice to the Montreal faithful: watching this happen is even less fun than it sounds. No word on whether the team is panic-starting Alex Stalock for Game 6.
  • Earlier this week, ESPN’s Grantland released a short documentary on the role of analytics in pushing enforcer players out of the NHL. At about 9 minutes, it’s well worth your time. My thoughts about the film are here.
  • This is, of course, the time of year for statistical predictions of NHL playoff series, including mine. (In a change from last season, I’m not reposting revised series-win probabilities every day to this site; if you want them, follow me on Twitter.) Over at the Contrarian Goaltender blog, Philip Myrland warns about the dangers of overfitting playoff models. Basically, the temptation is to throw every variable you can into a model to increase its fit to the data (the NHL’s own SAP playoff model is an excellent example), but past a certain point, you might end up adding correlates with no causal relationship to playoff success. It’s an excellent piece, and a subject I touched on a year ago. I would extend the same critique to many simple playoff models, such as those that claim incredibly high accuracy using Score-Adjusted Fenwick % alone. Let’s face it, with the Kings and Stars watching the postseason from home, the Penguins and Jets already eliminated, and the Isles, Blues and Predators sitting on the brink, the predictive power of SAF% alone isn’t looking so hot right now.
  • Speaking of playoff series, Arik Parnass wrote an interesting piece recently at Hockey Prospectus on whether score effects exist at the series level as well as the game level. I need to think about this more carefully, but the implications for playoff models (mine and others’) are potentially huge, insofar as we usually assume each game in a series is independent from the others.
  • Ryan Stimson posted a fascinating analysis over at the Devils blog In Lou We Trust on the relationship between passing sequences and shot quality. Ryan is one of the most meticulous analysts I know of, and the article is very rich in detail, so I’ll just encourage you to read it and digest it for yourself rather than trying to summarize.
  • Ben Wendorf is a great source for insights into historical trends in the NHL, and recently he posted a piece on how parity in the league has shifted patterns in player usage. Basically, the data demonstrate an increasing trend toward optimal deployment by CF%. I’d recommend the whole piece.

Knuckles vs. Numbers: Some Thoughts

Yesterday, hockey analytics had yet another “moment”, with the release of the short documentary Knuckles vs. Numbers, produced by the Grantland team at ESPN. The premise of the film, which runs just under nine minutes, is that the increasing influence of number-crunchers is pushing “enforcer”-type players out of the NHL. You can view the film, for free, on YouTube.

All in all, it's an enjoyable movie. The perspective of the enforcers is provided in face-to-face interviews with Brian McGrattan, Paul Bissonette, and Colton Orr, and Barry Melrose drops in to add context. The primary narrator is Grantland's Sean McIndoe, author of the Down Goes Brown column, and the analytics side is represented by two notable friends of this blog: Steve Burtch (who's done great work at Sportsnet and the Maple Leafs blog Pension Plan Puppets) and Ben Wendorf (one of the creators of Hockey-Graphs.com). (If you like the film, Ben provides some behind-the-scenes anecdotes at his site, including descriptions of interesting footage that didn't make the final cut.) In general, the documentary feels slightly slanted toward the enforcers; my obvious bias aside, the players are given repeated chances to explain their value in the game, and the film focuses on their struggles to stay in the NHL, while the value provided by the analytics side isn't discussed in much depth. Nevertheless, the discussion of the metrics and the thinking behind them is solid and easy for a novice to understand. Knuckles is definitely framed in old-school vs. new-school terms, and (perhaps unsurprisingly) it's not very sympathetic to the new school. But a traditionalist fan unfamiliar with analytics can at least understand them better by watching this.

Still, one thing about the documentary bugged me, and I ended up discussing it on Twitter with Steve and David Johnson of Hockey Analysis: pointing to statistical analysts as the reason why the game is passing enforcers by seems overly simplistic. For me, the biggest reason these players are leaving the game is that they don’t have much of a role in the modern NHL. That fighting is on the decline is indisputable: McIndoe chugged through the numbers going back to the early 1990s, and more recent data are available here. Partly, this is due to the league’s decades-long crackdown on fighting (i.e., the instigator rule, the “third man in” ejection, stiff suspensions for leaving the bench or penalty box to fight, automatic ejections and supplemental discipline for line brawling, the “helmet rule”, and encouraging linesmen to break up more fights before they start). In recent years, increased concern for player safety and the long-term effects of head trauma have probably also played a part. The deaths of ex-enforcers like Derek Boogaard, Rick Rypien and Wade Belak, in particular, have shone a harsh light on the challenges many fighters face after their playing days end, and have led many to question the romantic notion of enforcers as providers of toughness and courage.

But the biggest changes have probably been to the game itself. Between the many post-lockout rule changes intended to increase the speed and flow of the game, the heightened parity introduced by the salary cap, and the scarcity of goals in today’s game, the modern NHL places a very high premium on hockey ability. And despite the old-school insistence on the value of enforcers (articulated very well by Melrose in the film), the role they actually play in today’s game is very telling: whether their teams are analytics believers or skeptics, enforcers these days are essentially specialists who play less than 10 minutes a game and only see the opponent’s worst players. This usage suggests that teams don’t view the value of enforcers in the same way that someone like Melrose does; more to the point, it suggests that they see what the numbers guys do, that these players are liabilities on the ice. On this point, it’s probably fair to credit the analytics movement to some extent; by breaking down the data on enforcers’ detrimental on-ice impact, and writing publicly about it, analysts have likely pushed the conversation within hockey in the direction that we’re seeing. Nevertheless, it’s probably fair to say that the NHL would have phased out the enforcer without our input, and that (at most) we’ve just helped the process along. I’m not sure that fighting will ever entirely vanish from the game, but I tend to agree that fighting specialists will be out of the NHL fairly soon. And insofar as the forces driving them out are unlikely to change, I don’t expect them to return.


Why Did the 2014-15 Los Angeles Kings Miss the Playoffs?

The 2015 NHL playoffs are underway now, and (predictably) interest in the regular season has passed quickly into the rearview. Still, the 2014-15 season remains notable for the team most surprisingly absent from the postseason: the defending champion Los Angeles Kings. Following a 3-1 loss in Calgary just before the season’s final weekend, the Kings were officially eliminated. (I’m a Sharks fan, and chose to note the occasion this way.) Yet even to seasoned hockey analysts, this was a difficult result to explain.

Photo credit: Flickr user rtype03. Use of this image does not imply endorsement.

Aside from their pedigree as winners of two of the last three Stanley Cups, the Kings have been a favorite of the hockey analytics community due to their elite puck possession numbers over the past four seasons: LA has been one of the league's top-five teams in Score-Adjusted Fenwick in all four years, and has exceeded 55% SAF over the last three. Yet, interestingly, the typical explanations for good possession teams missing the playoffs don't really apply to this season's Kings. When the New Jersey Devils missed the playoffs in 2012-13 and 2013-14 despite superlative Fenwick numbers, it was straightforward to point to their terrible team shooting and even-worse goaltending as the culprits; those Devils stand alongside other solid possession clubs that have been undone by subpar play in net (the 2010-11 Flames come to mind). Yet this season's Kings had a PDO a hair above 1.000, on the strength of solid 0.926 goaltending from Jonathan Quick and Martin Jones. So, what the hell happened in Los Angeles?

Unsurprisingly, analysts have had a tough time dissecting exactly what went wrong. Over at Hockey Analysis, David Johnson summarized many of the more common explanations I've seen rolling around the Twittersphere. The most popular analytic narrative for LA's collapse is that, essentially, they had enough terrible luck at the margins to knock them off the bubble: a 0.351 winning percentage in one-goal games and a 3-15 record in overtime/shootout games cost them just enough points that they finished two back of Calgary for the last playoff spot. Ben Wendorf also offered an interesting take over at Hockey Graphs, to the effect that lower scoring league-wide increased the impact of random bounces by creating more one-goal games, and that tanking for the McDavid/Eichel lottery created more parity among the non-tanking teams.

I'm not convinced, though, that this is the whole story. First off, it's not clear to me that lower scoring created an unusual number of one-goal games in 2014-15: in 82-game seasons since 2005-06, the NHL has tended to average between 570 and 590 such games, and this season's 589 is on the high side, but not exceptionally so. As for the impact of one-goal games in 2014-15, playoff teams had a winning percentage of 56.3% in them. That disparity is above average (playoff teams normally win roughly 54% of one-goal contests), but it's not easy to interpret: aside from being somewhat trivial (i.e., more wins will tend to be associated with more standings points), it's impossible to say from this number which teams lost because they were unlucky and which lost because they weren't very good to begin with. Also worth noting is that the relationship between one-goal-game win % and playoff probabilities is far from linear. Over the past 10 seasons we have plenty of examples of teams with poor one-goal-game records having strong seasons, and vice versa. To give some of the more dramatic examples:

  • This season’s Blue Jackets missed the playoffs with a 1GG record of 23-8-5.
  • Last season’s Maple Leafs had a 1GG record of 19-8-8. We know how their season turned out.
  • Last season’s Blackhawks had one of the worst 1GG records in the NHL (17-8-15).
  • In the shortened 2012-13 season, the Jets and Flyers had a combined 1GG record of 24-8-6. Both missed the playoffs.
  • The Cup champion Kings of 2011-12 won just 37% of their one-goal games.
  • The 2011-12 Lightning missed the postseason despite the best 1GG record in the NHL.
  • In 2010-11, New Jersey missed the playoffs despite a 21-8-5 1GG record.
  • Nashville missed the playoffs in 2008-09 despite a 22-8-8 record in one-goal contests.
  • In the same season, a 12-7-12 record in these games did not prevent the Blackhawks from reaching the Conference Finals.
  • In 2007-08, the Oilers and Islanders had a combined 1GG record of 48-16-15. Both missed the playoffs.
  • Ottawa made it to the Finals in 2006-07 despite winning just 31.3% of one-goal games that season.

Clearly, then, it's quite possible to be one of the league's better teams despite poor performance in one-goal games. Those of you interested in the shootout side of this can check out some work I did last year (basically, shootouts matter, but not a ton). So I'm not entirely satisfied with this explanation for the Kings' poor results. Fortunately, a deeper dive into the Kings' numbers identifies the problem. On the defensive side, only six teams allowed fewer goals than LA, and no team allowed fewer shot attempts at even strength (score-adjusted) than the Kings' 45.9 per 60 minutes. On offense, though, the story is less positive. Los Angeles attempted 58 shots at even strength per 60 (again score-adjusted), one of the highest rates in the league, but this was coupled with a thoroughly mediocre 7.5% team shooting percentage at 5-on-5; thanks to poor finishing and a middling power play, the Kings ranked just 18th in goals scored this season. So there is at least one metric on which LA looked like the bubble team they were.

The real question about this season's Kings, then, isn't whether poor performance in OT and shootout situations knocked them out of the playoffs, but rather why they were on that bubble in the first place. I think the answer lies in a concept from baseball analytics known as the "run-scoring environment": the notion that the game is higher-scoring in some eras than in others, and that some strategies work better in higher-scoring environments than in lower-scoring ones. I believe an analogous concept carries over to goal-scoring in the NHL. The important thing to remember about scoring environments is that it's more valuable to be good at things others aren't good at than to be exceptional at something everyone's good at. No one questions the Kings' defensive acumen; only the St. Louis Blues have allowed fewer goals over the last four seasons than LA's 667, and in none of those seasons have the Kings averaged more than 50 even-strength Corsi attempts against per 60. Yet in a league that averages just 5.4 goals a game, great defending doesn't differentiate you much from other teams. When goals are tough for everyone to come by, being able to score does more to set you apart from the pack, and some of the NHL's best teams of the last four seasons are among the highest scoring: Pittsburgh tops that list with 894 goals, followed by Chicago (871), Tampa Bay (870), and Boston (854). (This may also provide a clue as to why Anaheim has been so successful in recent years, and why New Jersey hasn't.) The Kings, in contrast, sit 23rd in goal-scoring over that time, and at this point, it's hard to see their miserable 6.8% even-strength shooting as simple puck luck. Their regular-season results since 2011-12 have borne this out: they were a bubble team in 2012 and a mid-seeded playoff squad the last two years, and haven't come within 15 points of a Presidents' Trophy* (even in the 48-game 2013 season) during that time. Given those results and what happened this year, one has to ask whether goal-scoring is a genuine flaw in the Kings that many analysts (myself included) have tended to overlook because of their playoff success and strong Fenwick differential.

* Critics will undoubtedly point to their Stanley Cups in 2012 and 2014. My usual response is that a goalie giving a team nearly 0.950 goaltending for two months isn’t necessarily an indication of their overall quality. Nor is two months of 8.9% shooting for a team that typically performs far below that.


2014-15 NHL Playoffs: First Round Predictions

Hi everyone. I realize I haven’t been very active with this site lately, basically taking the 2014-2015 season off. Unfortunately, life sometimes gets in the way of having fun, and I haven’t really had the time to write every day the way I did last season. Nevertheless, I’m as excited as all of you that playoff hockey is almost back, and I figured this was as good a time as any to make a comeback.

Last season, I created a method for predicting postseason series, which is described here. Obviously, predicting outcomes in short series between evenly-matched opponents is not a simple task, and my method was largely an effort to avoid two common temptations: 1) reducing team comparisons to a single, simplistic metric, and 2) reading too much into late-season hot streaks. Basically, I leaned heavily on home-ice advantage as a predictive factor, and incorporated elements of puck possession, special teams, goaltending and finishing into a single measure of team quality I called modified goals-for % (mGF%). For the 2015 playoffs, I made one adjustment to this method, relating to teams’ expected even-strength Sv%. Instead of using the team’s 5v5 Sv% regressed to a team-specific 3-season average, I used the even-strength Sv% of each team’s expected starting goaltender. I made this change partly to avoid penalizing teams for the play of their backups (who are unlikely to see the ice much in the postseason), and partly because so many playoff teams this season have different starters relative to previous seasons (i.e., it doesn’t seem valid to use Evgeni Nabokov’s final two seasons when gauging how well Jaroslav Halak will play for the Islanders). In cases where a goalie had little to no NHL data in past seasons (e.g., Andrew Hammond), I assumed league-average goaltending that season.
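
(For the curious, here's a rough code sketch of what that goaltender adjustment looks like. This isn't the actual mGF% calculation, which combines several other inputs; the shot threshold, the league-average figure, and the data layout are assumptions I'm making purely for illustration.)

```python
# Hypothetical sketch of the expected-starter Sv% adjustment described above.
# The shot threshold, league-average figure, and inputs are illustrative
# assumptions, not the actual numbers behind mGF%.

LEAGUE_AVG_EV_SV_PCT = 0.922  # placeholder for that season's league-average EV Sv%

def expected_starter_sv_pct(ev_saves, ev_shots, min_shots=500):
    """Expected even-strength Sv% for a team's likely playoff starter.

    Falls back to the league average when the goalie has too little NHL
    history to be meaningful (an Andrew Hammond-type situation).
    """
    if ev_shots < min_shots:
        return LEAGUE_AVG_EV_SV_PCT
    return ev_saves / ev_shots

# A goalie with a real track record vs. a near-rookie
print(expected_starter_sv_pct(1400, 1500))  # ~0.933, uses his own record
print(expected_starter_sv_pct(110, 120))    # 0.922, falls back to league average
```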

Photo credit: Flickr user Reg Natarajan. Use of this image does not imply endorsement.

Eastern Conference

New York Rangers (mGF% 52.5%) vs. Pittsburgh Penguins (mGF% 52.7%)

Many things have changed in the eleven-odd months since these two teams played a thrilling seven-game series in the second round of last year's playoffs. In the wake of blowing a 3-1 series lead, Pittsburgh fired their long-time coach and GM, while the Rangers came within three wins of their first championship in two decades. Since then, New York has won the Presidents' Trophy and comes into this postseason as a very popular Cup pick. Pittsburgh, on the other hand, fought through a series of devastating injuries to their blueline, and needed a win on the season's last day to secure a playoff spot.

On the surface, this looks like an easy call to make. The Rangers have one of the best goaltenders in the business in Henrik Lundqvist, and enter the playoffs largely healthy and playing well. Pittsburgh, on the other hand, will have to make do without the services of key defensemen Kris Letang, Olli Maatta and (probably) Christian Ehrhoff; apart from Paul Martin, the Penguins' defense is an unfortunate combination of abysmal (e.g., Rob Scuderi) and inexperienced. Add in the injury troubles of Evgeni Malkin and Patric Hornqvist down the stretch and it's hard to see how Pittsburgh puts up much of a fight against the Rangers.

Still, there is a case for optimism if you're a Pens fan. A glance at the underlying numbers reveals something curious: despite injuries and lack of depth, Pittsburgh's fundamentals have actually been excellent. With a score-adjusted Fenwick % of 53.8%, they were the NHL's third-best possession team this season, and only three teams were better at shot prevention. On the other hand, the Rangers' possession numbers declined considerably this season, and Lundqvist's brilliance disguised surprisingly poor defensive play; New York ranked 20th in the league in the rate of Corsi attempts against at even strength. In other words, these teams may be more closely matched than they look at a glance. Even so, in any close series, it's usually safe to pick based on home-ice advantage.

NYRPIT_2015

Prediction: It’s much more of a coin flip than people may realize, but New York Rangers in seven games.

Washington Capitals (mGF% 51.7%) vs. New York Islanders (mGF% 53.4%)

If nothing else, this series should be very entertaining. A series of shrewd offseason moves turned the Isles from a perennial laughingstock into the NHL's best possession team (score-adjusted Fenwick % 54.5%), though a late-season skid cost them both the Metropolitan Division title and home ice in the first round. The Capitals, meanwhile, had a very solid season under first-year coach Barry Trotz, and Alex Ovechkin continued his annual habit of effortless 50-goal seasons. Still, the Islanders are one of the most dynamic offensive teams in this year's postseason, based on shot creation. And the Caps can't be looking forward to facing Jaroslav Halak in another playoff series.

WSHNYI_2015

Prediction: Also a coin flip, but New York Islanders in six games.

Montreal Canadiens (mGF% 52.7%) vs. Ottawa Senators (mGF% 49.2%)

These teams met in the postseason a couple seasons ago, in an ugly five-game series that saw Ottawa advance despite barely touching the puck. Since then, much has been made of how porous Montreal’s defense is, and how heavily they rely on goaltender Carey Price, but the same could be said of the Senators. Except, of course, that instead of Price, they’re riding the hot hand with the unproven Andrew Hammond in net. Neither of these teams is particularly adept at puck control, though each is led by a defenseman who is extremely capable in that area. In the final analysis, Price has shown so much quality for such a long time that it’s very hard to pick against him or against the home-ice advantage. The Sens have been a great story these past few months, but I’d have to guess that that story will end in round one.

MTLOTT_2015

Prediction: Montreal in five games.

Tampa Bay Lightning (mGF% 55.8%) vs. Detroit Red Wings (mGF% 51.2%)

Between scorching team shooting and an incredibly effective shot-prevention game, the Lightning have quietly become a very, very good team. Goalie Ben Bishop has had an uneven season after his Vezina-nominated 2013-14, but underestimating the Lightning this year is not recommended. The Red Wings, on the other hand, limp into the postseason with just as many questions in goal. I’d expect a very low-event, low-scoring series, but the Bolts are clearly the better squad.

TBLDET_2015

Prediction: Tampa Bay in five games.

Western Conference

Anaheim Ducks (mGF% 51.8%) vs. Winnipeg Jets (mGF% 50.1%)

Few teams confound hockey analytics like the Ducks, who cruised to their third straight Pacific division title despite mediocre possession numbers and below-average goaltending. It may look like smoke and mirrors, but at some point it’s silly to call a team’s success a lucky streak. Anaheim was once again one of the highest-scoring teams in the NHL in 2014-15, but their underlying numbers suggest they’re a mediocre team in every other respect. Frederik Andersen will lead the Ducks in goal (despite an uninspiring season). At the other end, Winnipeg is in the playoffs for the first time since the franchise moved from Atlanta. The Jets have outstanding defensive numbers, and vaulted into the top ten in puck possession this season. Yet it’s still hard for me to get excited about a team that’s depending on one of the worst starting goalies in the NHL to hold back a high-powered attack like Anaheim’s.

ANAWPG_2015

Prediction: Anaheim in five games.

Vancouver Canucks (mGF% 48.6%) vs. Calgary Flames (mGF% 50.8%)

Between the late start times for East Coast viewers and the quality of the matchup, my guess is that this series will see limited viewership outside of Western Canada. I considered writing something snarky about the Flames overachieving their way out of high picks in a deep draft, but honestly, if my team had spent several seasons in the wilderness I'd be pretty thrilled to see them back in the postseason. And this is probably the best first-round match-up Calgary could've gotten. Sure, they're an awful possession team playing without captain Mark Giordano, but I can't stress enough how little faith I have in the Canucks, an anemic shot-creation squad depending on Ryan Miller in net.

VANCGY_2015

Prediction: Calgary in six games.

St. Louis Blues (mGF% 53.2%) vs. Minnesota Wild (mGF% 52.3%)

Once again, St. Louis will hope that this is the year their high-quality squad breaks out for a deep playoff run. The Blues are essentially the same team they’ve been since 2011-12: excellent in possession and defense, deep down the middle, and going with the good-but-not-great Brian Elliott in goal. Once again, however, they face a difficult first-round match-up. The Wild are a solid possession team that struggled badly until their season was (oddly enough) rescued by Devan Dubnyk. Dubnyk, you may recall, started 2013-14 as Edmonton’s first-string goaltender, and played so poorly that he was traded twice and finished that season in the AHL. He’s been terrific since joining the Wild, but it’s hard to know what to expect against a tough opponent like the Blues.

STLMIN_2015

Prediction: St. Louis in seven games.

Nashville Predators (mGF% 54.6%) vs. Chicago Blackhawks (mGF% 55.8%)

This, for me, is clearly the marquee series of the first round. It's hard to overstate how dramatic a turnaround we've seen in Nashville: Pekka Rinne had an outstanding season after an injury-riddled 2013-14, and seemingly overnight, the Predators appear to be loaded with young talent (which is a very odd sentence to write). Unfortunately for them, their opponent is one of the NHL's best teams. The Blackhawks continue to be what they've been throughout Joel Quenneville's tenure: dominant in possession and incredibly productive offensively. In the end, this is something of an intuition pick (though the numbers back it up): both of these teams are very good, but Chicago is the team with the track record.

NSHCHI_2015

Prediction: The margin is razor-thin in this series, but Chicago in six games.


The Scheduler’s Dilemma: Back-to-Back Games and Fatigue Effects in the NHL Regular Season

In the course of predicting all 1,230 games of the 2013-14 NHL season, one common occurrence made me doubt my model more than any other: back-to-back games. My model did a decent job overall when it came to picking games, but I often wondered whether I should have factored the effects of travel and fatigue into my predictions. Home ice provides a consistent advantage in NHL hockey, and is probably the most reliable predictor in single games, but it stands to reason that its effects should differ depending on whether one or both teams enters the game having played and traveled the night before. Yet it doesn’t seem that anyone has done a comprehensive study of the issue before. So, I pulled game dates for all regular-season contests since the 2005 lockout; after excluding season openers, this left me with a sample of 10,403 games. After playing with date functions in Excel, I was able to identify all the back-to-backs and travel in each team’s schedule.
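
(If you'd rather skip the Excel date gymnastics, the short pandas sketch below does the same back-to-back flagging; the column names and the long-format layout, with one row per team per game, are assumptions rather than my actual spreadsheet.)

```python
import pandas as pd

# Rough pandas equivalent of the Excel date work described above.
# Column names ("team", "game_date", "home") are assumptions about the layout.

def flag_back_to_backs(schedule: pd.DataFrame) -> pd.DataFrame:
    """Add a boolean column marking games played the day after a team's previous game."""
    out = schedule.copy()
    out["game_date"] = pd.to_datetime(out["game_date"])
    out = out.sort_values(["team", "game_date"])
    days_since_last = out.groupby("team")["game_date"].diff().dt.days
    out["back_to_back"] = days_since_last == 1  # season openers come out False (NaN gap)
    return out

# Toy example: one row per team per game
games = pd.DataFrame({
    "team": ["SJS", "SJS", "SJS", "NYR", "NYR"],
    "game_date": ["2013-10-08", "2013-10-10", "2013-10-11", "2013-10-07", "2013-10-08"],
    "home": [True, False, False, True, False],
})
print(flag_back_to_backs(games))
```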

First, I wanted to get a sense of the frequency of back-to-backs in a given season. The table below provides the median number of back-to-backs for each team in the eight 82-game seasons since 2005-06 (2012-13 data, obviously, are excluded). On average, a team can expect to play back-to-backs about 15 times each season, though there are clearly differences among teams. Part of this is likely due to the efforts of league schedulers to minimize back-to-back play for teams that travel more. As such, high-travel teams like Calgary, Colorado, Dallas, Edmonton, and Vancouver average fewer back-to-backs than teams like Buffalo, New Jersey, or the Islanders. In some cases, though, this explanation doesn’t really account for the patterns we see. The Rangers, for example, don’t suffer much at the hands of the schedule-makers, averaging the same number of back-to-backs as San Jose (the team with the most consistently brutal travel itinerary in the NHL) and fewer per season than the Anaheim Ducks, who regularly travel nearly 50,000 miles over 82 games. And the Blackhawks and Blues seem to play a lot of back-to-backs for teams that are regularly around 40,000 travel miles a season.

b2b_tb1

The real question, of course, is how bad it is to play more back-to-backs than other teams. Or, phrased differently, how does back-to-back play affect the expected advantage of the home team? The numbers based on the 10,000+ games in my sample are depicted below.

b2b_tb2

Overall, NHL teams playing on home ice can be expected to win about 55.1% of the time. If both teams are playing the second of back-to-back games, the home team's advantage jumps a percentage point to 56.1%. If both teams are playing the second of a back-to-back after having traveled the night before, as in a home-and-home situation, the home team's advantage is even greater, at 57.1%. This would suggest that the fatigue effect matters more for the road team, and a look at the more typical scenarios only reinforces this. When the home team is playing a back-to-back against a rested opponent, its win probability drops to 53.8%, about 1.3 percentage points below the baseline; if the home side has played and traveled the prior night, its chance of winning drops even further, to 52.8% (2.3 points below average). When the visitors are playing their second game in two nights against a rested home team, though, the home team's win probability rises to 57.9%, or 2.8 points above average.
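
(For anyone who wants to reproduce those splits, a minimal sketch of the scenario breakdown is below; the column names and flags are assumptions, and this isn't the exact spreadsheet logic behind the table above.)

```python
import pandas as pd

# Sketch: home win % by rest scenario, assuming one row per game with
# boolean back-to-back flags already derived (e.g., from flag_back_to_backs above).
# Column names are assumptions, not the actual data layout.

def home_win_pct_by_scenario(games: pd.DataFrame) -> pd.Series:
    """Group games by each side's back-to-back status and return home win rates."""
    scenario = games.apply(
        lambda g: ("home B2B" if g["home_b2b"] else "home rested")
        + " / "
        + ("road B2B" if g["road_b2b"] else "road rested"),
        axis=1,
    )
    return games.groupby(scenario)["home_win"].mean().round(3)

# Toy example with a handful of games
toy = pd.DataFrame({
    "home_b2b": [False, False, True, True, False],
    "road_b2b": [False, True, False, True, True],
    "home_win": [1, 1, 0, 1, 0],
})
print(home_win_pct_by_scenario(toy))
```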

I've also presented some comparisons involving relative days' rest, which are not limited to back-to-backs; the picture here is slightly less clear. A home team with one additional day of rest has a 57% win probability; with two additional days of rest, it has a 58.7% chance of winning. When playing a road team with one more day of rest, the home side's win probability falls to 53.2%. However, the home side still has a 55.3% win probability against a road team with two additional days' rest, and its win probability is actually below average (53.1%) when it has three or more extra days of rest. A road team with three or more additional days of rest relative to the home side has a better chance of winning (47.8%) than in any other scenario. The sample sizes in the bottom three rows of the table are a good deal smaller than those in the other analyses, so it's possible that these estimates aren't reliable. There may be some real reason for these results (maybe road teams benefit more from long layoffs than home teams?), but I'd have to be convinced that there's something more than a small-sample artifact at work.

To summarize, it looks like having an easier NHL travel schedule carries a significant downside: playing more back-to-back games (unless you're the Rangers, in which case you apparently don't have to worry about either issue). Insofar as the most common back-to-back scenario involves a rested home squad, this may explain another piece of the consistent edge that home teams enjoy in the NHL. And more to the point, it looks as though playing more back-to-backs might actually be worse than traveling more. Many teams that logged over 42,000 road miles this past season, like Boston, Anaheim, Colorado, San Jose, and Los Angeles, didn't seem to suffer much from it, and you have to go all the way back to 2009-10 to find a Presidents' Trophy winner with an easy travel schedule. And though it stands to reason that teams with heavier travel may have longer road trips, it's not clear that the relative fatigue has a consistent effect on win probabilities. But back-to-back games are pretty much a bad proposition no matter how you look at them, especially if you play them on the road.


Hockey Analytics in 2014

Now that the day-to-day grind of the 2013-14 NHL season is over, and attention is shifting to things like the draft, coaching hires, and free agency, I've been thinking a bit about the future of analytics in hockey. Recently, Jonathan Willis published an interesting article over at the Edmonton Journal, making the case that data aren't driving decisions by NHL teams as much as some analysts would like to believe. And certainly, the examples he cites support his point. San Jose's reputation as a forward-thinking, analytically-minded team has taken on some water recently with the inexplicable signing of Mike Brown to a multi-year deal, the rumored trades of Joe Thornton and Patrick Marleau, and an interesting exchange (via Twitter) between Sharks beat reporter Kevin Kurz and Derek from Fear the Fin suggesting that the team doesn't realize it has the best shot-creating power play in the NHL. Other examples of this phenomenon include:

  • In 2011, Edmonton started up an internal analytics shop to, as David Staples says, “help the Oilers enter in to the new world of sports analytics”, and comments from GM Craig MacTavish and coach Dallas Eakins suggest that both have an appreciation for the value of numbers. Yet, as Willis notes, moves like claiming Luke Gazdic off waivers, playing Justin Schultz in a top-pairing role, and overpaying for mediocre veterans like Andrew Ference and Nikita Nikitin – not to mention the team’s disastrous defensive play and on-ice results – don’t suggest a team making much use of numbers to guide its decisions.
  • The Pittsburgh Penguins used the work of the Sports Analytics Institute in making the 2011 decision to trade for star winger James Neal, and, in the words of Director of Player Personnel Dan MacKinnon, “I don’t think we’ve made an impact decision since then without consulting the analytics”. The value of a shot-quality analysis from a model that SAI actually refers to as a “black box” is, of course, open to question, but more importantly, it’s hard to believe the Penguins consulted the analytics when trading two draft picks for one of the league’s worst defensemen, trading a top D prospect to rent a declining winger for a couple months in 2013, signing a badly declining Rob Scuderi to a four-year contract, signing fourth-liner Craig Adams to multiple deals, firing the winningest head coach in franchise history after a 107-point season, or trading Neal to Nashville this week.
  • Stats-savvy Toronto fans were thrilled at the suggestion that new team President Brendan Shanahan was willing to embrace possession metrics in charting a direction for the team. For the time being, I’ll reserve judgment, but the decision to retain coach Randy Carlyle and GM Dave Nonis, the faces of the organization’s notorious hostility to numbers, is not a good sign. Nor is the acquisition of Roman Polak from St. Louis.
  • The Tampa Bay Lightning employ Michael Peterson, a full-time statistical analyst who assists the front office and coaching staff in making all sorts of decisions. One imagines that Peterson isn’t too happy with the six-year deal the team just gave Ryan Callahan.
  • New Flyers GM Ron Hextall reportedly views analytics as a “huge part of what we do going forward”. This sounds great, but it’s a bit at odds with the notion that Andrew MacDonald is “going to be a big part of this team moving forward”, and with the trade of a useful player in Scott Hartnell for a . . . less than useful one in R.J. Umberger.

The point of this isn’t to be discouraging or contrarian, but rather to suggest that the adoption of analytic thinking and results by NHL teams is still in its early stages. It’s certainly fair to say that the profile of this work is higher now than it’s ever been before. Thanks in no small part to the Maple Leafs’ wretched 2013-14 season, as well as the popularity of Moneyball (book or film) and the work of people like James Mirtle, Neil Greenberg, and Sean McIndoe (to name just a few), it’s hard to avoid encountering advanced stats in even mainstream hockey writing, and in some cases (the Kings and Sabres come to mind), local TV broadcasts are bringing possession numbers into their coverage. Willis’s point, which I agree with, is that none of this means that teams are actually leveraging the numbers to make decisions. But we do appear to have reached the point where teams can’t publicly brush the numbers aside the way Nonis and Carlyle did a year ago. Coaches and front-office officials who don’t have at least a vaguely positive answer (like Hextall’s, for example) to questions about their use of analytics risk being painted as behind the curve, or (as we saw in Toronto) having their inattention to statistics become a story on its own.

Still, it appears that much of the NHL’s interactions with the advanced-stats community these days involves using numbers to validate pre-existing notions or observations. While it’s not a bad thing for analysts to be involved in the decision-making calculus of NHL teams, even peripherally, it’s hard to imagine unlocking the potential of analytics in such a limited role. The real value of this work lies in using numbers to challenge popular assumptions about the game, and using those insights to build successful teams in less conventional ways. And obviously, this isn’t going to happen if the numbers are only used (in effect) to justify choices after they’ve been made. Much has been made of the reported hiring of Sunny Mehta, a contributor at Vic Ferrari’s old site, as New Jersey’s Director of Hockey Analytics; while he may be an excellent choice for the position, the real question is the level of influence he’ll have in an organization that Lou Lamoriello has ruled (with something of an iron fist) since the late 1980s. This article from Mirtle suggests that Lamoriello isn’t entirely enthusiastic about embracing the insights the numbers can offer, which is potentially a critical problem.

None of this, of course, is cause for pessimism. At this point, it’s simply a matter of trusting that bigger opportunities for analytics will come with time. It may not happen in the league’s more prominent and wealthy markets, but eventually, teams that are desperate to win will start to look for new ways to gain an edge on the competition. In the meantime, some things analysts can do to keep this process moving forward are as follows:

  • Data analysis can be conducted without attention to basic principles of sound statistics, but remember that it isn't a good idea. While the proliferation of analysis software and web-based data sources has undoubtedly been a boon to researchers, one downside is that people without a grasp of concepts like statistical power, sampling error, and non-causal correlation can easily crunch numbers and disseminate the work online. A big reason why decision-makers in NHL front offices are reluctant to trust the work of analysts is that they don't have the training needed to critique statistical work; from their perspective, it's easier to ignore it than to risk making a bad decision based on shoddy analysis. Put another way, analysts need to remember that the burden is on them to prove the value of what they do, and that starts with producing work that's valid and reliable.
  • Don’t give bad advice. Most hockey analysts are fans of the sport as well, and few fans can resist the pull of simplistic narratives or emotional reactions to short-term results (i.e., your favorite team losing in the playoffs). The problem with this is that many of the mistakes of mismanaged teams come down to focusing on short-term results at the expense of seeing a process through. One of the biggest favors the stats community could do for teams would be convincing them that there’s no magical formula to winning in the playoffs, beyond assembling a strong team and hoping the bounces go their way. More generally, tempting as it might be to gravitate towards emotionally satisfying conclusions, and to pick and choose data points to fit them, in the long run this does the cause of hockey analytics no favors.
  • Take advantage of the community of analysts around you. This is pretty self-explanatory: it’s a lot easier to do high-quality work, and to generate better ideas, when you can bounce ideas off of others in the field. While different analysts obviously don’t agree on everything, and have different preferences when it comes to methodology, it’s rarely the case that good work is produced in isolation. Fortunately, thanks to Twitter, jumping into the conversations of analysts and getting the feedback you need has never been easier.
  • Remember that the future is very bright. Hockey analytics have taken a big step forward this season, and with analysts like Corey Sznajder, Eric Tulsky, and Chris Boyle spearheading efforts to produce new data sources based on tracking events by hand, it’s only going to get better. These data sources, as well as ubiquitous video, will make it increasingly possible to connect patterns in the data to specific plays on the ice; this will allow analytics to become more prescriptive, and to incorporate a new dimension of face validity. It might be a long time before NHL GMs are using hockey stats to run their teams, but there’s never been a better time to be involved in them.

How Did the Puck Prediction Model Do in 2013-14?

The 2013-14 NHL season is in the books, and now that we’re wrapping up for the offseason, it’s time for the big reveal. Those of you who have been reading Puck Prediction since the early days know that this past season was devoted to a prospective trial of NHL game prediction. Last summer, I pulled together game-level data from the eight seasons between 2005-06 and 2012-13, breaking down information on shot differential, event rates, and shooting and save percentages to develop an algorithm for predicting games. In order to provide a good comparison, I decided to track “the wisdom of the crowd” via the gambling markets as well. If you read the site during the regular season, you no doubt saw my daily game predictions against those of the gambling markets (via the Odds Shark website). I took interim looks at the model’s performance at the quarter-season, mid-season, and three-quarter-season marks; now, it’s time to look at how things shook out after 1,230 games.

Photo credit: Flickr user Hakan Dahlstrom. Use of this image does not imply endorsement.

First, I promised a detailed description of my algorithm for picking games, and here it is. Throughout the season, I kept track of each team’s all-situation shooting and save percentages, their average all-situation shots-for and shots-against per game (added together to create a rough event rate), and their average all-situation per-game shot differential; coming into each game, eight total values (these four variables for the two teams in question) were involved in making the prediction. For season-opening games, the model used teams’ season-ending values from the prior campaign. These values were then used to calculate an expected goal differential for the game, as follows:

PP model_eqn

Unpacking this a little bit, the model assumes that each team’s Sh% in game i will be a 50% weighted average of their Sh% in games 1 through i – 1 and (1 – their opponent’s team Sv% in games 1 through i – 1). To estimate the number of shots for each team, we assume the total number of shots in the game will be the higher of each team’s average event rate in games 1 through i – 1 (this is more consistent with the data). We then take Team A’s average shot differential and subtract Team B’s differential from it. This difference is then added to the event rate for Team A, and subtracted from the event rate for Team B; halving each of these numbers gives an estimate of total shots for and against. If Team A enters the game with a higher shot differential than Team B, the implication is that they’ll control the majority of the shots in the game, while the opposite is true if B enters the game with a better differential. The Puck Prediction model picked games as follows:

1. If one team has an expected goal differential of 0.3 or higher, pick that team.
2. If neither team has an expected goal differential of at least 0.3 but one team has an expected shot differential of 9 or higher, pick that team.
3. If neither team has an expected goal differential of at least 0.3 or an expected shot differential of at least 9, pick the home team.
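
To make that concrete, here is a short code sketch of how the expected goal differential and the three pick rules fit together. It's a paraphrase of the description above rather than the model's actual code; the variable names, the shared-total-shots reading, and the toy inputs are mine.

```python
# Sketch of the expected-goal-differential calculation and pick rules described
# above. This is my reading of the prose, not the production model; inputs are
# each team's running (games 1 through i-1) all-situation averages.

def expected_goal_differential(a, b):
    """a and b are dicts with keys: sh_pct, sv_pct, event_rate, shot_diff."""
    # Expected Sh% blends a team's own shooting with what its opponent allows
    exp_sh_a = 0.5 * a["sh_pct"] + 0.5 * (1 - b["sv_pct"])
    exp_sh_b = 0.5 * b["sh_pct"] + 0.5 * (1 - a["sv_pct"])

    # Total shots in the game: the higher of the two teams' event rates
    total_shots = max(a["event_rate"], b["event_rate"])

    # Split those shots according to the gap in average shot differentials
    diff = a["shot_diff"] - b["shot_diff"]
    shots_a = (total_shots + diff) / 2
    shots_b = (total_shots - diff) / 2

    return shots_a * exp_sh_a - shots_b * exp_sh_b, diff

def pick_game(home, away, goal_cut=0.3, shot_cut=9):
    """Apply the three pick rules: goal differential, then shot differential, then home ice."""
    gd, sd = expected_goal_differential(home, away)
    if abs(gd) >= goal_cut:
        return "home" if gd > 0 else "away"
    if abs(sd) >= shot_cut:
        return "home" if sd > 0 else "away"
    return "home"

# Toy example with made-up running averages
home = {"sh_pct": 0.095, "sv_pct": 0.915, "event_rate": 59.0, "shot_diff": 4.0}
away = {"sh_pct": 0.088, "sv_pct": 0.905, "event_rate": 56.5, "shot_diff": -2.5}
print(pick_game(home, away))  # "home" (expected goal differential clears 0.3)
```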

As was the case at each interim analysis, and as I anticipated here, this method of picking games was statistically no more or less accurate than the gambling markets in the 2013-14 season. With a record of 728-502, Vegas was 59.2% accurate when it came to picking the NHL regular season; my model was 58.7% accurate, with a nearly identical record of 722-508. This difference was not statistically significant (t=-0.493, p=0.622). I picked against Vegas 148 times this season, and was correct 71 times (48%). The gambling markets were slightly more accurate when picking road teams (59.1% vs. 57.5%) and when picking teams with the greater shot differential entering the game (61.1% vs. 60.0%). When it came to specific teams, Puck Prediction was more accurate than Vegas in games involving Boston (69.5% vs. 67.1%), Buffalo (73.2% vs. 70.7%), Colorado (58.5% vs. 56.1%), Montreal (56.1% vs. 53.7%), New Jersey (58.5% vs. 54.9%), Philadelphia (65.9% vs. 61.0%), and Toronto (64.6% vs. 59.8%). The gambling markets were better when it came to games involving Chicago (59.8% vs. 56.1%), Dallas (59.8% vs. 54.9%), Edmonton (63.4% vs. 59.8%), Minnesota (62.2% vs. 50.0%), Nashville (56.1% vs. 50.0%), the Islanders (52.4% vs. 48.8%), Ottawa (51.2% vs. 48.8%), Vancouver (69.5% vs. 65.9%), and Washington (59.8% vs. 54.9%).
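
(If you want to check that comparison yourself, a two-proportion z-test, which is a different test than the t-test I reported above but a quick and reasonable alternative, reaches the same conclusion on the 728-502 vs. 722-508 records.)

```python
# Quick alternative check on the Vegas-vs-model comparison using a
# two-proportion z-test (not the t-test reported above).
from statsmodels.stats.proportion import proportions_ztest

wins = [728, 722]   # correct picks: Vegas, Puck Prediction
n = [1230, 1230]    # games picked by each
z, p = proportions_ztest(wins, n)
print(f"z = {z:.3f}, p = {p:.3f}")  # small difference, nowhere near significant
```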

Summarizing a project of this magnitude isn’t easy, but a few things are worth noting. The most obvious is that, as I pointed out in my earlier piece, beating the gambling markets consistently is not easy. The fact that my model’s picks agreed with the market 88% of the time should tell you that, by and large, Vegas predictions are not easy to disagree with. Looking at the 148 games for which our predictions differed, 98 of them were “obvious” picks; that is, the expected goal differential was 0.3 or greater, or the expected shot differential was 9 or greater. In those 98 games, Puck Prediction had a record of 53-45 (54.1%), while Vegas managed 45-53 (45.9%). On the other hand, in the 50 “toss-up” games where the pick came down (for me) to home-ice advantage, the model got crushed, with just 36% accuracy (18-32) compared to Vegas (32-18). At 53.7%, home-ice advantage was slightly less reliable in 2013-14 compared to prior NHL seasons, and it looks as though my model was particularly unlucky when I went against Vegas to rely on it. Another thing worth thinking about is the team-specific differences in model accuracy. Given the different approaches NHL teams take to winning games, it’s fair to wonder whether team-specific predictions could offer an improvement over a one-size-fits-all method.

All in all, though, this experience has definitely taught me a lot about the limits of prediction in NHL hockey, as well as the difficulty of beating the gambling markets. I can't promise I'll do another single-game prediction project next season, but it's been fun.
