It’s been a while since I penned an update here, but I’ve spent some time the last couple of weeks looking in detail at the performance of our AFL models, and wanted to share with you some of the findings.
Firstly, a quick update on my results. The last 2-3 months have been busy and haven't left a lot of time for playing Daily Fantasy Sports (DFS), with a lot of focus on Stats Insider (which we're hoping to preview through Fantasy Insider in a few weeks' time). The lineups I have put in haven't done well, and I'm personally looking at a -20% POT (profit on turnover) on the AFL season, giving back around 9% of my profits from the past year. NRL has been better, with a 10% POT, but on much lower volume (and helped quite a bit by the Game Time Insurance at Draftstars).
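For anyone new to the POT jargon: it's simply net profit divided by total entry fees staked. A minimal sketch (the function name is mine, purely for illustration):

```python
def pot(total_returned: float, total_staked: float) -> float:
    """Profit on turnover: net profit as a fraction of total entry fees staked."""
    return (total_returned - total_staked) / total_staked

# e.g. $800 returned from $1,000 in entry fees is a -20% POT
print(f"{pot(800, 1000):.0%}")
```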
So, I wanted to take a look at whether it's the models, the lack of time for entering teams, or both that contribute to the losses. Previously, we've measured the accuracy of our projections by comparing projected point totals against actual totals, and that's a perfectly legitimate way to do it (if you had perfect projections, you'd win every game). By that measure, we're 1.7 points per player better in 2017 than we were in 2016… so, what's going on?
Well, if we accept that no projections are going to be perfect, your win rate and return in DFS contests isn't just related to accuracy, but also to identifying the aspects of a player's performance that are undervalued by others. Last year, we had a few of these, and they meant that even though our projections overall were less accurate, the players we were selecting in our teams were undervalued by the wider user base and thus lower owned. This year, the market's caught up. That's no surprise, with the influx of poker pros and others with computer models, and most of these factors are now being better accounted for.
So, what’s the best way to improve our lineups beyond reducing projection error? My solution was to use a simulator. The simulator uses our projections (and variations on them), as well as our archived salary information and payout tables / contest results to re-run every tournament (that I played in) from the previous two years.
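The core of that re-run loop is simple enough to sketch. Assuming each archived contest carries the real field's scores, the entry fee, and the payout table, one contest replay might look like this (all names here are illustrative, not the actual Fantasy Insider code):

```python
from dataclasses import dataclass

@dataclass
class Entry:
    score: float
    is_ours: bool  # True for a simulated lineup of ours, False for the real field

def rerun_contest(our_scores, field_scores, entry_fee, payouts):
    """Replay one archived contest: rank our simulated lineups against the
    real field, and total what the payout table would have returned.
    `payouts` maps finishing position (1-indexed) to prize money."""
    entries = [Entry(s, True) for s in our_scores] + \
              [Entry(s, False) for s in field_scores]
    entries.sort(key=lambda e: e.score, reverse=True)
    winnings = sum(payouts.get(pos, 0.0)
                   for pos, e in enumerate(entries, start=1) if e.is_ours)
    return winnings - entry_fee * len(our_scores)  # net profit for this contest

# toy example: 2 of our lineups vs a 3-entry field, $10 fee, top 2 positions paid
profit = rerun_contest([120.0, 95.0], [110.0, 100.0, 90.0],
                       10.0, {1: 30.0, 2: 15.0})
```

Summing that profit (and the turnover) across every archived contest gives the simulated POT figures quoted below. A real version would also need to handle tied scores and multi-entry payout splits, which this sketch ignores.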
Using our default projections, and entering 15 teams in every contest that allowed 15 or more entries (and entering the maximum in contests with lower entry limits), you would have achieved a 28% POT entering Draftstars contests in 2016 (and more at Moneyball, whose non-Dream Team scoring gave us a big edge). So far in 2017, the top 15 lineups from our base projections in each contest would have a POT of 1.2%: profitable, but barely, and all of that profit comes from a single 2nd place with a 13th-choice lineup (which of course I didn't enter!). Without that, you'd be close to my personal results.
Changing Things Up
OK, so how do we arrest the decline? Well, there are two ways. One is to find out which factors are being undervalued in 2017; in part 2 of this article (next week!) I'll outline how we went about improving the DFS-specific models to adjust for this new climate, and we plan to release those publicly at the start of July (as part of a bigger shift in this website). The second is to look at how changing the settings on the existing cruncher impacts your results (e.g. should we stack more or less, should we be using unique players or degrading players in existing lineups, should we be using ceiling or standard projections?).
Advanced Settings: The Secret to Success
To answer those questions, I re-ran the simulations with a range of different settings, and below are some of the headline results. I’ve split the simulations between single game contests, and multi-game tournaments, over the two year period.
Single game contests in 2017 with our default settings would have a 1.8% POT. If we modify the 'unique players' and 'stack number' settings alone, we see a dramatic change (n.b. this assumes when stacking that you stack each team equally, e.g. 7 home and 7 away in a 15-max entry contest):
There are two immediate conclusions I draw here. Firstly, 2 unique players seems to be too few in one-match AFL contests. With the variation in player performance, having 2 unique players is a boom-or-bust approach: if your top-ranked players all perform, you'll dominate the leaderboard, but for the majority of us, a 'unique players' setting of 4 or 5 will smooth out the variance and increase your chances of having a team near the top of the leaderboard. Secondly, stacking pays off; we knew this from last year, but it's good to see it in the data. Based on these results, a 7-stack with 4 unique players is the optimal setting, but given the smallish data set I'd say a 5-7 stack with 4-6 unique players are viable settings. The base (i.e. default) ratings outperformed the ceiling in single match contests.
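To make the 'unique players' setting concrete: when generating multiple lineups, each new candidate is only accepted if it differs from every lineup already taken by at least that many players. A small sketch of that check (this is my own illustration of the constraint, not the cruncher's actual implementation):

```python
def meets_uniqueness(candidate, existing_lineups, min_unique):
    """Accept a candidate lineup only if it contains at least `min_unique`
    players not present in each previously generated lineup."""
    return all(len(set(candidate) - set(prev)) >= min_unique
               for prev in existing_lineups)

existing = [["A", "B", "C", "D", "E"]]
meets_uniqueness(["A", "B", "C", "X", "Y"], existing, 2)  # 2 new players: accepted
meets_uniqueness(["A", "B", "C", "D", "Y"], existing, 2)  # only 1 new player: rejected
```

A higher setting forces the optimiser deeper into the player pool, which is why it smooths variance across your set of entries.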
For multiple game contests, I've run our simulations assuming 16 entries per contest with either no stacking or 3 stacks per team (so a maximum of 18 in 6-team contests).
The results here are a bit more complex. Unlike with the single game contests, the ceiling projections outperform the base ones significantly, taking us from 2.59% profit to 14.4% profit using otherwise default settings. Again, stacking is a viable strategy, although less so than in single-game contests. If stacking, 6-7 player stacks seem to be the best bet, but these have (perhaps strangely) done better with less variation among the teams, with 2 unique players outperforming 4 or 6. In 0-stack situations, 4 unique players outperforms 2. In part this might be due to the way the cruncher works: a team with 'at least 4 unique players' will often have 5-6 unique players in tournaments with hundreds of players to choose from (in order to squeeze the most out of the cap), but is more likely to have just 4 in a single game contest with 44.
Finally, we know not everybody can enter 50 lineups in a contest. How would it have gone if you’d just entered 1 stack per team (or 1 0-stack team) in each multi-team GPP?
The short answer is you'd not have done as well in profit terms, but you'd have shown a massive profit on turnover because of a tournament win: a $4875 payout with a Carlton stack on 28 May. Ceiling projections generally outperform base, although the one win could also have been achieved with base projections and a stack. If you're entering fewer teams you'd have been better off stacking, and indeed if you exclude the win your POT drops to around 5-10%, with 6-stacks providing the best result, so I'd still stick with that 6-7 range. The unique players setting is meaningless if you just have one team, given the way the cruncher currently works.
So that’s a wrap for this week, and hopefully there’s some information in there that will help you improve your results at the back-end of the season. Remember, advanced settings are only available to Fantasy Insider premium subscribers.
Is there other data you’d be interested in seeing, or other settings you’d like us to explore? Let us know over in the forum.