allenfrostline

Texas Hold'em Series (6): Range Estimation

2019-11-05


Range is important. A poker player without a mature preflop1 range is as dangerous as a blind soldier firing at random on the battlefield, or a reckless chef throwing anything within reach into his pot. On the other hand, given your opponents’ ranges (think of it as a God’s-eye view), you can do almost whatever you want with your hole cards, no matter how poor they actually are. In practice, the most difficult and most hotly debated aspect of ranging is not its application but the estimation: how can you construct a consistent estimator of your enemy’s range? That is the question this post is concerned with.\(\newcommand{\bs}{\boldsymbol}\newcommand{\R}{\mathbb{R}}\newcommand{\P}{\mathbb{P}}\newcommand{\1}[1]{\unicode{x1D7D9}_{\{#1\}}}\)

Before introducing the estimation, let’s give “range” a rigorous definition. The range, or so-called range chart, is a \(13\times 13\) table that covers every possible preflop hand. It is true that there are in total \(\binom{52}{2}=1{,}326\) distinct hands, but we don’t really need to differentiate 4♦5♦ from 4♠5♠, as both are suited and share the same hand value. Once we identify such equivalent hands, there remain only \(13^2=169\) cases, covering suited, off-suit and paired hands. Therefore, we can record one’s intention to enter a game preflop with any given hand in such a table, which is the formal definition of the range. More mathematically, a range is a matrix whose entries are the probabilities of entering the game given the corresponding hands. We let the upper triangular2 part of the matrix denote suited hands and the rest off-suit ones.

The above beautiful table3 is NOT a range chart. It shows the natural winning probability of each hand in a heads-up game, calculated using my hand evaluation tool in this post. Let’s call it \(\bs{W}\) for now. The table can, however, faithfully represent one’s preference for entering a game: the higher the natural winning rate, the more willingly I would enter. It also shares some straightforward properties with a range chart: the diagonal entries are notably better than their neighbors (they are pairs), and the upper-right entries are higher than the lower-left ones (suited hands have an edge by definition). Furthermore, if we assume the hero plays under the Kelly criterion, i.e. maximizes his (conditional) expected log return, then it is easy to show that in a heads-up game the ideal range is \(\bs{R}_{\text{kelly}}=(2\bs{W}-1)_+\). Hence, it would be unwise to enter the game with a hand like 3♠A♦ (winning rate \(49.3\%\)), however good it may look.
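As a quick illustration, here is a minimal NumPy sketch of the Kelly range above. The winning-rate matrix \(\bs{W}\) is filled with placeholder values (only the \(49.3\%\) figure comes from the text), since the real table is produced by the hand evaluator:

```python
import numpy as np

# Placeholder winning-rate table; the real W comes from the hand
# evaluator referenced in the post. Values here are illustrative only.
W = np.full((13, 13), 0.5)
W[0, 0] = 0.85    # hypothetical: a premium pair wins often heads-up
W[12, 0] = 0.493  # the 3♠A♦-type hand mentioned in the text

# Kelly range: elementwise positive part of 2W - 1.
R_kelly = np.clip(2 * W - 1, 0.0, None)
# R_kelly[12, 0] == 0.0: never enter with a sub-50% hand under Kelly
```

Note how the positive part zeroes out every hand whose natural winning rate is below \(50\%\), which is exactly why 3♠A♦ should be folded.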

Enough for the intro; let’s start estimating. Assume we’re playing against someone, the enemy, with a fixed but unknown range \(\bs{R}\). What we observe are multiple games he’s involved in (enough for a good estimation, if you know how) — sometimes our enemy enters the game, sometimes not. We don’t get to see his hands unless he makes it to showdown, so directly averaging his entering frequency per hand is ruled out right away. What we do have is his overall entering probability \(\pi\in(0,1)\). What else? We also know, regardless of what kind of player he is, the natural probability matrix4 of getting any hand, denoted by \(\bs{P}\), which is given by

\[ \bs{P}_{ij} := \P\{\ \text{hand}\ (i,j)\ \} = \frac{8\cdot\1{i<j} + 12\cdot \1{i=j} + 24\cdot\1{i>j}}{52\times 51} = \frac{2\cdot\1{i<j} + 3\cdot \1{i=j} + 6\cdot\1{i>j}}{663} \]

for all \(i,j=1,2,\ldots,13\). Meanwhile, by definition we have

\[ \bs{R}_{ij}:=\P\{\ \text{enter}\mid\text{hand}\ (i,j)\ \}. \]

Hence, letting \(\bs{\iota}\in\R^{13}\) be the one-vector, we have

\[ \pi := \P\{\ \text{enter}\ \} = \bs{\iota}'(\bs{R}\odot\bs{P})\bs{\iota}\tag{1} \]

where \(\bs{A}\odot\bs{B}\), the Hadamard product, denotes the elementwise product of \(\bs{A}\) and \(\bs{B}\). Next, introduce the conditional hand distribution \(\bs{H}\in\R^{13\times 13}\), which by Bayes’ rule satisfies

\[ \bs{H}_{ij} := \P\{\ \text{hand}\ (i,j)\mid\text{enter}\ \} = \frac{\P\{\ \text{enter}\mid\text{hand}\ (i,j)\}\cdot\P\{\ \text{hand}\ (i,j)\ \}}{\sum_{\forall u,v}\P\{\ \text{enter}\mid\text{hand}\ (u,v)\}\cdot\P\{\ \text{hand}\ (u,v)\ \}} = \frac{(\bs{R}\odot\bs{P})_{ij}}{\bs{\iota}'(\bs{R}\odot\bs{P})\bs{\iota}} \]

or simply

\[ \bs{H} = \frac{\bs{R}\odot\bs{P}}{\bs{\iota}'(\bs{R}\odot\bs{P})\bs{\iota}}.\tag{2} \]
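Eqs. (1) and (2) are easy to sanity-check numerically. The sketch below builds \(\bs{P}\) from the formula above and, for a hypothetical range \(\bs{R}\) (random values, purely for illustration), verifies that \(\bs{H}\) is a proper probability matrix:

```python
import numpy as np

# Natural hand probabilities: suited (upper triangle) 2/663,
# pairs (diagonal) 3/663, off-suit (lower triangle) 6/663.
i, j = np.indices((13, 13))
P = np.where(i < j, 2, np.where(i == j, 3, 6)) / 663

# A hypothetical range, for illustration only.
rng = np.random.default_rng(0)
R = rng.uniform(0.0, 1.0, size=(13, 13))

pi = (R * P).sum()   # eq. (1): overall entering probability
H = R * P / pi       # eq. (2): hand distribution given entry

assert np.isclose(P.sum(), 1.0)  # P is a probability matrix
assert np.isclose(H.sum(), 1.0)  # so is H, by construction
```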

With \(n\) equations there is no way we can estimate the \(2n\) unknowns (\(n=13^2=169\) in this context) in \(\bs{R}\) and \(\bs{H}\). It seems we’ve hit a dead end, so it’s a good time to add an assumption: suppose that, conditional on entering, the probability that the enemy stays until showdown (in which case we do see his cards) is proportional to his entering probability itself. Then

\[ \P\{\ \text{showdown}\mid\text{hand}\ (i,j)\ \} = (\kappa\bs{R}\odot\bs{R})_{ij} \equiv \kappa \bs{R}^{\circ 2}_{ij},\quad\kappa\in(0,1) \]

where \(\bs{R}^{\circ 2}\) is the Hadamard power of \(2\), namely elementwise square of \(\bs{R}\). By Bayes’ rule again we know

\[ \bs{S}_{ij} := \P\{\ \text{hand}\ (i,j)\mid\text{showdown}\ \} = \frac{\P\{\ \text{showdown}\mid\text{hand}\ (i,j)\}\cdot\P\{\ \text{hand}\ (i,j)\ \}}{\sum_{\forall u,v}\P\{\ \text{showdown}\mid\text{hand}\ (u,v)\}\cdot\P\{\ \text{hand}\ (u,v)\ \}} = \frac{(\kappa\bs{R}^{\circ 2}\odot\bs{P})_{ij}}{\bs{\iota}'(\kappa\bs{R}^{\circ 2}\odot\bs{P})\bs{\iota}} = \frac{(\bs{R}^{\circ 2}\odot\bs{P})_{ij}}{\bs{\iota}'(\bs{R}^{\circ 2}\odot\bs{P})\bs{\iota}} \]

or simply

\[ \bs{S} = \frac{\bs{R}^{\circ 2}\odot\bs{P}}{\bs{\iota}'(\bs{R}^{\circ 2}\odot\bs{P})\bs{\iota}}.\tag{3} \]
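A useful feature of eq. (3) is that the unknown scalar \(\kappa\) cancels in the normalization, so \(\bs{S}\) can be computed without knowing it. A small sketch, with a randomly generated \(\bs{R}\) for illustration only:

```python
import numpy as np

# Natural hand probabilities and a hypothetical range (placeholder).
i, j = np.indices((13, 13))
P = np.where(i < j, 2, np.where(i == j, 3, 6)) / 663
R = np.random.default_rng(1).uniform(0.05, 0.95, size=(13, 13))

def showdown_dist(kappa):
    """P{hand | showdown} for a given kappa, via Bayes' rule."""
    num = kappa * R**2 * P
    return num / num.sum()

# kappa cancels in the ratio: any value in (0, 1) yields the same S.
S1, S2 = showdown_dist(0.2), showdown_dist(0.7)
assert np.allclose(S1, S2)
```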

Notice that the probability of our enemy reaching showdown is observable, so we have

\[ \xi := \P\{\ \text{showdown}\ \} = \kappa \bs{\iota}'(\bs{R}^{\circ 2}\odot\bs{P})\bs{\iota}.\tag{4} \]

Combining eqs. (1)–(4) we have \(2n+1\) equations in total, which makes it possible to solve for the \(2n+1\) unknowns: all entries of \(\bs{R}\) and \(\bs{H}\), plus the scalar \(\kappa\). From eq. (3), \(\bs{R}^{\circ 2}\odot\bs{P}\) is proportional to \(\bs{S}\); taking elementwise square roots and pinning down the constant with eqs. (1) and (4) yields the solution:

\[ \begin{cases} \kappa = \displaystyle{\frac{\xi [\bs{\iota}'(\bs{S}\odot\bs{P})^{\circ\frac{1}{2}}\bs{\iota}]^2}{\pi^2}},\\ \bs{R} = \displaystyle{\frac{\pi\bs{S}^{\circ \frac{1}{2}}\odot\bs{P}^{\circ -\frac{1}{2}}}{\bs{\iota}'(\bs{S}\odot\bs{P})^{\circ\frac{1}{2}}\bs{\iota}}} = \displaystyle{\frac{\pi(\bs{S}\odot\bs{P})^{\circ \frac{1}{2}}\odot\bs{P}^{\circ -1}}{\bs{\iota}'(\bs{S}\odot\bs{P})^{\circ\frac{1}{2}}\bs{\iota}}},\\ \bs{H} = \displaystyle{\frac{(\bs{S}\odot\bs{P})^{\circ \frac{1}{2}}}{\bs{\iota}'(\bs{S}\odot\bs{P})^{\circ\frac{1}{2}}\bs{\iota}}} \end{cases} \]

which, as you may have noticed, can be written more succinctly by denoting \(\bs{\Sigma}:=(\bs{S}\odot\bs{P})^{\circ \frac{1}{2}}\) and \(\sigma:=\bs{\iota}'\bs{\Sigma}\bs{\iota}\):

\[ \begin{cases} \kappa = \xi\sigma^2/\pi^2,\\ \bs{R} = \pi\bs{\Sigma}\oslash\bs{P} / \sigma,\\ \bs{H} = \bs{\Sigma} / \sigma \end{cases} \]

where \(\bs{A}\oslash\bs{B}\) is the Hadamard division, namely the elementwise division of the two matrices; it is equivalent to \(\bs{A}\odot\bs{B}^{\circ -1}\). Notice that every known quantity on the RHS of the solution, except for the constant matrix \(\bs{P}\), is a sample average that converges in probability to the corresponding expectation at the usual square-root rate in the number of observed games. Therefore, by the continuous mapping theorem the resulting estimators of \(\kappa\), \(\bs{R}\) and \(\bs{H}\) are consistent as well. We can finally conclude that the problem is properly solved, given the additional assumption made above.
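The whole derivation can be verified with a round trip: pick a hypothetical \(\bs{R}\) and \(\kappa\) (arbitrary values, for testing only), generate the observables \(\pi\), \(\xi\) and \(\bs{S}\) from eqs. (1), (3) and (4), then recover everything with the closed-form solution:

```python
import numpy as np

# Natural hand probabilities, as derived earlier in the post.
i, j = np.indices((13, 13))
P = np.where(i < j, 2, np.where(i == j, 3, 6)) / 663

# Hypothetical ground truth, for testing only.
rng = np.random.default_rng(2)
R_true = rng.uniform(0.05, 0.95, size=(13, 13))
kappa_true = 0.4

# Observables implied by the model.
pi = (R_true * P).sum()            # eq. (1): P{enter}
s = (R_true**2 * P).sum()
S = R_true**2 * P / s              # eq. (3): P{hand | showdown}
xi = kappa_true * s                # eq. (4): P{showdown}

# Closed-form recovery.
Sigma = np.sqrt(S * P)
sigma = Sigma.sum()
kappa_hat = xi * sigma**2 / pi**2
R_hat = pi * Sigma / P / sigma
H_hat = Sigma / sigma

assert np.allclose(R_hat, R_true)
assert np.isclose(kappa_hat, kappa_true)
assert np.isclose(H_hat.sum(), 1.0)
```

The recovery is exact here because the observables are computed noiselessly from the model; with finite samples, \(\pi\), \(\xi\) and \(\bs{S}\) would be replaced by their sample averages and the estimators would only be consistent, as argued above.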


  1. Usually when we talk about ranges, we mean preflop ranges, though in some rare cases it can actually mean something more general. ↩︎
  2. Apparently diagonal entries cannot represent suited hands: two cards of the same rank can’t be of the same suit. ↩︎
  3. It’s not so beautiful if you’re currently using a small screen. ↩︎
  4. By probability matrix I mean a matrix with all entries adding up to \(1\). ↩︎