• ## Texas Hold 'em Series: Hand Evaluation

In the previous post, we considered the probabilities of making one specific hand with the turn/river card. This can be rather useful in specific situations, but it still doesn't apply throughout a game. Poker is essentially a game of incomplete information. Unlike Go, where you can see every stone on the board and thereby "solve" for an optimal move, you never know your opponents' hole cards until the showdown (and even then, players muck). You also have little clue about the undealt community cards. Therefore, in order to evaluate a hand during a poker game, we'd better opt for an online evaluation algorithm rather than treating this as a DP-like problem.

• ## Texas Hold 'em Series: Odds Chart

In this post we're gonna introduce one of the most widely used results in hold 'em: the odds chart.
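As a taste of what such a chart encodes, here is a minimal sketch of the standard "outs" calculation it is built from (the function name is mine, not from the chart itself). After the flop, 47 cards are unseen from your perspective; counting the complement — the chance of missing on every remaining street — gives the exact odds of improving:

```python
from math import comb

def hit_probability(outs, cards_to_come):
    """Exact probability of hitting at least one out.

    After the flop, 47 cards are unseen (52 - 2 hole - 3 board);
    after the turn, 46. We compute the complement: the chance of
    drawing none of the outs in the remaining cards.
    """
    unseen = 47 if cards_to_come == 2 else 46
    miss = comb(unseen - outs, cards_to_come) / comb(unseen, cards_to_come)
    return 1 - miss

# A flush draw on the flop has 9 outs:
flop_to_river = hit_probability(9, 2)   # with both turn and river to come
turn_to_river = hit_probability(9, 1)   # with only the river to come
```

This is also where the popular "rule of 4 and 2" comes from: multiplying the outs by 4% (two cards to come) or 2% (one card) roughly approximates these exact values.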

• ## Texas Hold 'em Series: Poker Hands Dataset

In this post, I'll walk through the whole process of downloading, cleaning, and then browsing one of the world's largest poker hand history datasets, the IRC Poker Database[1], which is a little dated but well known for its huge size. The work we're doing here is meant as preparation for further analysis and model training.

• ## Texas Hold 'em Series: Basic Concepts

Starting from today, I'm gonna write a series of posts on Texas Hold 'em, one of the world's most popular forms of poker. The game is rather complicated, especially considering that its origins date back to the early 20th century. In this post, I will lay out the bare bones of hold 'em. These concepts may sound boring if you are a veteran poker player, but I just want to make sure we're talking in the same language, or building with the same bricks.

• ## Random Projection and Its Expectation

A couple of months ago I was asked the following question during an interview (for confidentiality reasons I'm not gonna disclose the industry or the name of the company): $\newcommand{R}{\mathbb{R}} \newcommand{E}{\text{E}} \newcommand{bs}{\boldsymbol} \newcommand{N}{\mathbb{N}}$

Assume $k$, $n\in\N$ and $k < n$. For a uniformly chosen $k$-dimensional subspace $V\subsetneq\R^n$, define the orthogonal projection onto $V$ as $P:\R^n\to\R^n$. Find $\E[P(\bs{v})]$ where $\bs{v}\in\R^n$ is given.

It was an interesting question, and a totally novel one to me at the time. How do we define a "uniformly" chosen subspace and its corresponding projection? What intuitions hide in this simple-looking question? Despite busy schoolwork and student projects, these thoughts persisted in my mind and drove me to dig into the question from time to time. Curiosity has been aroused, and an appetite is meant to be satisfied.
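Before diving into the theory, a numerical sanity check is easy to set up. A sketch of one common construction (my own, not necessarily the one from the interview): the column span of a Gaussian matrix is rotation-invariant, so orthonormalizing its columns yields a uniformly random $k$-dimensional subspace, from which the projection matrix follows directly:

```python
import numpy as np

def random_projection(n, k, rng):
    """Orthogonal projection onto a uniformly random k-dim subspace of R^n.

    A standard Gaussian matrix's column span is invariant under rotations,
    so the span of the orthonormalized columns is uniform over subspaces.
    """
    g = rng.standard_normal((n, k))
    q, _ = np.linalg.qr(g)      # q: n x k, orthonormal columns spanning the subspace
    return q @ q.T              # P = Q Q^T projects onto that span

rng = np.random.default_rng(0)
n, k = 6, 2
v = rng.standard_normal(n)

# Average P(v) over many random subspaces; by a symmetry argument
# E[P] commutes with every rotation, so E[P] = cI, and tr(E[P]) = k
# forces c = k/n. The empirical mean should concentrate near (k/n) v.
estimate = np.mean([random_projection(n, k, rng) @ v for _ in range(20000)], axis=0)
```

With 20,000 samples the estimate lands within a couple of percent of $(k/n)\bs{v}$, which previews the answer derived in the post.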

• ## Notes on Foreign Exchange

These are the lecture notes on foreign exchange market and theories. $\newcommand{\E}{\text{E}} \newcommand{\P}{\text{P}} \newcommand{\Q}{\text{Q}} \newcommand{\F}{\mathcal{F}} \newcommand{\d}{\text{d}} \newcommand{\N}{\mathcal{N}} \newcommand{\eeq}{\ \!=\mathrel{\mkern-3mu}=\ \!} \newcommand{\eeeq}{\ \!=\mathrel{\mkern-3mu}=\mathrel{\mkern-3mu}=\ \!} \newcommand{\MGF}{\text{MGF}}$

• ## Notes on Stochastic Calculus

This is a brief selection of my notes on the stochastic calculus course. Content may be updated at times. $\newcommand{\E}{\text{E}} \newcommand{\P}{\text{P}} \newcommand{\Q}{\text{Q}} \newcommand{\F}{\mathcal{F}} \newcommand{\d}{\text{d}} \newcommand{\N}{\mathcal{N}} \newcommand{\sgn}{\text{sgn}} \newcommand{\tr}{\text{tr}} \newcommand{\bs}{\boldsymbol} \newcommand{\eeq}{\ \!=\mathrel{\mkern-3mu}=\ \!} \newcommand{\eeeq}{\ \!=\mathrel{\mkern-3mu}=\mathrel{\mkern-3mu}=\ \!} \newcommand{\R}{\mathbb{R}} \newcommand{\MGF}{\text{MGF}}$

• ## Billiard Tournament: Martingale, Kelly Criterion and More

I have recently been playing a billiard game in which you can enter a series of exciting tournaments. In each tournament you pay an entrance fee of, for example, $\$500$, to potentially win a prize of, say, $\$2500$. There are various kinds of tournaments, with entrance fees ranging from $\$100$ up to over $\$10000$. After hundreds of games, my win rate stabilized around $58\%$, which is actually pretty good, as it significantly beats a random draw. A natural question therefore came to mind: is there an optimal strategy?
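The Kelly criterion from the title gives one candidate answer. As a minimal sketch, assuming the tournament can be modeled as a simple binary bet (a simplification the post will refine), the optimal fraction of bankroll to stake is $f^* = p - (1-p)/b$, where $b$ is the net odds:

```python
def kelly_fraction(p, entry, prize):
    """Kelly-optimal fraction of bankroll for a binary bet.

    b is the net odds: profit per unit staked on a win.
    f* = p - (1 - p) / b; a non-positive value means "don't play".
    """
    b = (prize - entry) / entry
    return p - (1 - p) / b

# The $500-entry, $2500-prize tournament at a 58% win rate:
# b = (2500 - 500) / 500 = 4, so f* = 0.58 - 0.42 / 4 = 0.475
f = kelly_fraction(0.58, 500, 2500)
```

That is, under these toy assumptions, Kelly says to risk no more than about $47.5\%$ of the bankroll on each entry — quite aggressive, which hints at why the full story needs more care.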

• ## Deep Learning on macOS with an AMD eGPU?

I've recently sold my Nvidia GTX 1080 eGPU[1] after two months of waiting in vain for a compatible Nvidia video driver for macOS 10.14 (Mojave). Whether it was Apple's fault or Nvidia's, I don't care any more. Right away, I ordered an AMD Radeon RX Vega 64 on Newegg. The card arrived two days later and looked sexy at first sight. It was plug-and-play as expected and performed just as well as its predecessor, whether for serious gaming, video editing, or anything else. I would have given it a 9.5/10 had I not found another issue a couple of days later: wow, there is no CUDA on this card!

• ## To the Arctic Circle (Again)!

It's been more than two years since my last trip to the Arctic Circle, back when I was still studying in the Netherlands. Our adventurous hike in Abisko, amid the endless mountains of northern Europe, still visits my dreams. This time we went to Fairbanks, Alaska, for the aurora and, once again, for the Arctic experience.