“In all speculations — whether it’s investing or wagering — you make money when the collective misprices the odds.”

Many, many years ago, the understanding of an investment was based on the “Present Value” theory. Basically, an investment should be valued based on future cash flows.

In the mid-1950s, Harry Markowitz realized that this theory did not include a factor that considered the impact of risk. Over the course of several years, Markowitz created what is known as Modern Portfolio Theory. MPT mathematically considers the impact of the following: diversification, correlation and risk. To model risk, MPT uses standard deviation. From Wikipedia:

Modern portfolio theory (MPT) proposes how rational investors will use diversification to optimize their portfolios, and how a risky asset should be priced. The basic concepts of the theory are Markowitz diversification, the efficient frontier, capital asset pricing model, the alpha and beta coefficients, the Capital Market Line and the Securities Market Line.

MPT models an asset’s return as a random variable, and models a portfolio as a weighted combination of assets so that the return of a portfolio is the weighted combination of the assets’ returns. Moreover, a portfolio’s return is a random variable, and consequently has an expected value and a variance. Risk, in this model, is the standard deviation of return.

That last line is key. Risk is defined as standard deviation. That is, if you were to create a histogram of daily stock price returns, it should look like a bell curve. That definition of risk has permeated every aspect of modern finance. The industry jargon for standard deviation is called volatility.

Volatility, at its core, is the probability of a price event. For instance, with volatility, you can determine the probability of an asset’s price reaching a certain target level. The implication for options traders is that you can use that method — in combination with a trade’s breakeven price — to calculate a trade’s probability of profit. The nice thing about volatility is that, because it is based on standard deviation, the calculations are relatively easy.
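As a rough illustration of that calculation (this is a generic bell-curve sketch, not the ODDS method; the function names, the zero-drift assumption, and the 252-day year are mine), here is how a volatility number turns into the probability of reaching a price target:

```python
from math import erf, log, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def prob_above(spot: float, target: float, vol: float, days: int) -> float:
    """Bell-curve probability that price finishes above `target` after
    `days` trading days, assuming normally distributed log-returns with
    zero drift and an annualized volatility of `vol`."""
    t = days / 252.0                          # fraction of a trading year
    z = log(target / spot) / (vol * sqrt(t))  # target expressed in standard deviations
    return 1.0 - norm_cdf(z)

# Stock at $100, 25% annualized volatility, 30 trading days, target $105
p = prob_above(100.0, 105.0, 0.25, 30)
```

Run the same calculation against a trade's breakeven price and you have the model's probability of profit.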

I then reversed the process. Instead of calculating a trade’s probability, my method lets you come up with a probability, and use it to find trades that meet that probability objective. I called that process ODDS, which stands for Options and Derivatives Decision Support.

In the late 1980s, while analyzing how the stock market behaved when index values were above or below long-term moving averages, I realized that there was a critical problem with volatility. What if the bell curve did not accurately represent the actual price distribution?

Volatility is based on the assumption that a financial instrument's price follows a Wiener process, or Gaussian random walk. Both are continuous stochastic processes, and in finance the price path is usually modeled as Geometric Brownian Motion. The "Brownian" in GBM honors the person traditionally credited with its discovery: botanist Robert Brown, in 1827.

Brown was studying pollen particles floating in water on a microscope slide and he saw stuff moving around. He tracked the motion and found that the movement of the stuff was random and was not due to the pollen being alive. Many years later, the movement was found to be consistent with a “least squares” mathematical explanation.

Amazingly, in 1900, a French mathematician, Louis Bachelier, used Brownian motion to describe the stochastic process of the stock and option markets. A few years later, Albert Einstein used it to indirectly confirm the existence of atoms and molecules.

So what’s the problem? In the physical sciences, GBM does a very good job of estimating the movement of molecules. And it works well for a whole host of other tasks, especially games of chance.


But in the “speculative” world, where stock prices are largely driven by sentiment (i.e., human emotion), GBM deviates horribly from reality.

With GBM, the probability of a 4 standard deviation event is about 1 in 16,000; a 5 standard deviation event, about 1 in 1.75 million; a 6 standard deviation event, about 1 in 506 million; and a 7 standard deviation event, about 1 in 385 billion.
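Those tail odds fall straight out of the normal distribution's complementary error function; a quick sketch (the function name is mine):

```python
from math import erfc, sqrt

def one_in(n_sigma: float) -> float:
    """Odds ("1 in X") of a move beyond n standard deviations in either
    direction, under a normal distribution."""
    p = erfc(n_sigma / sqrt(2.0))   # two-sided tail probability P(|Z| > n)
    return 1.0 / p

for n in (4, 5, 6, 7):
    print(f"{n} standard deviations: about 1 in {one_in(n):,.0f}")
```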

I haven’t figured out what the odds of a 9.5 standard deviation event are! But here’s the punch line. Back in 2008, we had seven moves greater than 4 standard deviations. Of those, four were greater than 5 standard deviations, three were greater than 6 standard deviations, and two were greater than 7 standard deviations.

Now think of the implications of all of this. How many days have humans been on this planet?? How long has humanity existed in a civilization?

Assuming 5,000 years, that’s about 1.82 million days. In other words, according to GBM (the normal distribution, the Gaussian, and all the other names given to the bell curve), a 5 standard deviation price event should occur about once every 5,000 years!

But in 2008, we had 5 standard deviation moves repeatedly. The same thing happened in 2009, 2002, 2001, 2000, 1998, 1990, 1987 … you get the idea!


Now, as long as this is nothing more than a mathematical oddity, who cares, right? But that’s not the way of the world. Over the course of many decades, people came to realize that in most circumstances, the bell curve did a reasonably good job of estimating the probability of price moves across asset classes. The operative word being “reasonably”. But with the limited data availability and computing capabilities of the past, using standard deviation to model asset prices was the only practical way to calculate probability.

This expanded from a niche in the early 1970s to the de facto standard today. You’ve heard about the banks increasing their leverage relative to the capital on hand during the real estate boom? That’s all because of volatility. It’s all because of the standard deviation used in a function called Value-At-Risk, or VAR.

Back in 2003, the Bush Administration lifted the leverage limits for banks. But this was NOT a U.S.-driven or ideological change. You may have heard of the Basel Accords. They were created in the 1990s so that banks could compete globally. If you gave the Swiss banks an advantage on their capital usage, and restricted the American banks to using their capital “wisely”, then money would flow to the Swiss banks. So there was an international accord that decided on global capital standards. Those capital standards were based on the concept of VAR. And VAR calculations were based on … you got it, volatility, which I just showed grossly underestimates the likelihood of a gigantic event!

That means banks and the ratings agencies around the globe were grossly underestimating the likelihood of risk. They thought the likelihood of an event like October 19, 1987 was once a generation. They were geared up to handle a one-day event like that. The problem is, in 2008, it happened 7 times in one month! They wilted under that kind of persistent asset devaluation.[1]

History lesson is over. Back to my method.

I knew, as far back as the 1980s, that something was wrong with the models, but I did not yet have the tools to perform a rigorous analysis. In 1996, I developed a method that allowed one to determine just how wrong they were. The models were grossly underestimating the likelihood of a gigantic price event, up or down. Given that we had already had a Crash in 1929, the models said a Crash in 1987 was a probabilistic impossibility. I went back and found that, since 1930, the S&P 500 has endured about 11 moves of 8 standard deviations or more. That’s 11 out of roughly 20,000 trading days. Remember, the bell curve says it should be about 1 in 1 quadrillion. BIG DIFFERENCE!

What I did in 1996 was to radically change the way you evaluate probability. Instead of calculating an asset’s standard deviation and plugging it into a model, I chose to simply count how many times a stock moved a certain percentage. I measured instead of modeled. I told a friend of mine about this, and he asked, “Why on earth isn’t this method being used? If it’s so good, why hasn’t anyone done it before?” I answered, “Because it’s too easy!” Counting doesn’t impress anybody. You will never receive the accolades of your peers if all you do is something you learned in kindergarten. The formula is not elegant, and it doesn’t require a lot of higher-level math. It doesn’t satisfy what Rick Bookstaber calls “physics envy”. It simply requires counting, although counting on this scale is beyond tedious.
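The counting idea itself is trivial to sketch (this is my own minimal illustration, not the ODDS implementation):

```python
def odds_of_move(prices: list[float], pct: float) -> float:
    """Empirical probability that a one-day move is at least `pct` percent,
    in either direction, found by counting rather than by fitting a model."""
    moves = [abs(b / a - 1.0) * 100.0 for a, b in zip(prices, prices[1:])]
    hits = sum(1 for m in moves if m >= pct)
    return hits / len(moves)

# Toy series: 4 daily moves, 2 of which are at least 1.5%
history = [100.0, 101.0, 99.0, 99.0, 104.0]
freq = odds_of_move(history, 1.5)   # counted frequency; no bell curve involved
```

Feed it decades of daily prices instead of five points and you get a measured, distribution-free estimate of how often a given move actually happens.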

Back in 1996, the counting process was pretty hard. I had to use a charting program and a spreadsheet. But as computers got faster and data storage got bigger, the tedious tasks got easier. With my pioneering software program ODDS Online, a basic version of the process became automatic.

But there was still one remaining piece of the puzzle. I had the probability part solved, but I couldn’t figure out a way to translate it into valuing a derivative security.

Then, in January 2011, I got it. I created a spreadsheet that allowed me to see the true probabilities, and the true option values, based on actual movement, as opposed to a theoretical model. This process is called Measure, Don’t Model®. Others have tried to replicate this process, but they all have a weakness based on the concept of overlap. My ODDS software is the only options valuation method on earth that has this capability.


There is one more point I want to make. It may not be a popular opinion, but it’s the truth. Whether you realize it or not, options trading is like running a casino. It’s also like running an insurance company. I am going to use a simple example that illustrates what I mean:

Let’s say when you win, you win $1 and when you lose, you lose $4. What are the fair odds?

Well, let’s say the win rate is 70%. Now play the game 10 times. If you win 7 out of 10 times, you’ll have seven $1 wins (total +$7), and three $4 losses (total -$12) for a net loss of -$5. You would not play that game.

Now let’s say the win rate is 90%. Play the game 10 times. If you win 9 out of 10 times, you’ll have nine $1 wins (total +$9), and one $4 loss (total -$4) for a net gain of +$5. The other guy would not play that game.

Now let’s say the win rate is 80%. Play the game 10 times. If you win 8 out of 10 times, you’ll have eight $1 wins (total +$8), and two $4 losses (total -$8) for a net result of $0. The odds are now considered fair.

So with a game that has risk of $4 and reward of $1, the probability of winning has to be 80% for the odds to be fair.

What that implies is that you can look at risk and reward, and determine the fair probability of the game!

Now, here’s why that is important. If people are trading options based on bell curve probabilities, and those probabilities are wrong, then the risk and reward are also wrong! Armed with actual probabilities, we are able to determine what an option’s true risk and reward should be. And because an option’s risk and reward are reflected in its price, we are able to determine an option’s true value.

That is our edge.

Of course, this is just scratching the surface. If you’ve read this far, I know you have a deeper interest, and there is much more that you can learn about this subject. For those of you who have that interest, I’ve created a video course that goes into much greater detail. It’s called The Casino Secret to Profitable Options Trading.

[1] In one instance, Lehman and others were unhappy that VAR wasn’t allowing them to use as much leverage as they (Lehman traders) thought they should, so they changed the volatility calculations to include just four years of data! Four years!! In another instance, Moody’s found that one of their models had a mathematical error. Had they fixed the error immediately, it would have resulted in the downgrade of a wide swath of derivatives. Moody’s solution? Change the volatility assumption!