While the challenges discussed in the first article concerned your risk/reward possibilities and relied on well-known data, the real information gap an options trader faces relates to volatility, specifically implied volatility. The definitions of these parameters are sometimes obscure, and what is more, not all volatilities are created equal. Sometimes (for example, ahead of specific events) the expected volatility is built on different assumptions than usual and requires different arithmetic.
The Black-Scholes (BS) model is popular not because it is always correct, but because it is easy to handle. Volatility, and the assumptions around it, is one of the main sources of difference between the 'real' value of an option and the value the market assigns it using a classic BS model. Needless to say, this disparity creates big opportunities. It doesn't really matter whether your strategy is to sell or to buy options; high-quality data on the volatility level is critical.
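To make the 'implied' part concrete: the market quotes a price, and the implied volatility is the sigma that makes the BS formula reproduce that price. Here is a minimal sketch in Python, using only the standard library; since BS call prices increase monotonically in sigma, a simple bisection recovers the implied volatility (the numbers below are illustrative, not market data):

```python
import math

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call_price(S, K, T, r, sigma):
    # classic Black-Scholes price for a European call
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    # bisection: find the sigma at which the BS price matches the observed price
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call_price(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# round-trip check: price an option at sigma = 0.2, then invert the price
p = bs_call_price(100, 100, 1.0, 0.01, 0.2)
print(implied_vol(p, 100, 100, 1.0, 0.01))  # recovers sigma of about 0.2
```

A data vendor does essentially this inversion (with a faster root-finder) for every quoted contract; the quality of the quotes it starts from is exactly what you are paying for.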
Before you start, do your homework
Practically speaking, you can find many data sources with different levels of granularity about options, and it's important to realize several facts before you dive in:
– In most cases, data (historical and daily) about the options market will cost money. The question is how much, and what you would get in return.
– It's very important to know what you really need. Is it just open/end-of-day data? High/low price points during the trading day? Or much more? The implications of these differences can be costly.
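The gap between those granularities is easy to underestimate. A tiny sketch, using made-up OHLC bars for a hypothetical contract, shows what an end-of-day-only dataset hides: the close-to-open move can look tame while the intraday range was dramatic.

```python
# Hypothetical one-day OHLC bars for an option contract; values are illustrative only.
bars = [
    {"open": 2.10, "high": 2.95, "low": 1.80, "close": 2.15},
    {"open": 2.15, "high": 2.40, "low": 1.20, "close": 2.30},
]

for bar in bars:
    eod_move = abs(bar["close"] - bar["open"])   # all an EOD-only dataset shows
    true_range = bar["high"] - bar["low"]        # what intraday data reveals
    print(f"EOD move: {eod_move:.2f}, intraday range: {true_range:.2f}")
```

If your strategy uses stops, or sells premium and cares about intraday drawdowns, EOD data alone can make a backtest look far safer than reality.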
Each model has its own needs. You can get endless data (hundreds of GBs) with great quality and granularity about options, but if the indirect cost is replacing your PC because you lack storage capacity, it might not be worth it. Speaking of technical resources, another thing to pay attention to is how much data-analysis work is needed to make the dataset you are going to purchase actionable.
We may not be dealing with the formal definition of 'big data', but it can definitely become 'quite a lot of data'. Even if you are an experienced programmer, you had better make a sober assessment of whether you have the resources to handle datasets of this magnitude. As part of the technical evaluation of the dataset, make sure it is available in a format you can work with, such as CSV, which lets you explore the data in Excel spreadsheets or in data analysis platforms like Python/Pandas, R or SQL.
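A quick way to evaluate a vendor's sample file is to load it into Pandas and look at its shape, types and a simple aggregate before committing. A minimal sketch follows; the column names (`quote_date`, `underlying`, `type`, `expiration`, `strike`, `implied_vol`) are hypothetical stand-ins for whatever schema your vendor actually uses, and the inline CSV replaces a real file:

```python
import io
import pandas as pd

# A tiny inline sample standing in for a vendor CSV; columns are hypothetical.
raw = io.StringIO(
    "quote_date,underlying,type,expiration,strike,implied_vol\n"
    "2017-06-01,SPY,call,2017-07-21,240,0.11\n"
    "2017-06-01,SPY,call,2017-08-18,240,0.12\n"
    "2017-06-01,SPY,put,2017-07-21,240,0.13\n"
)
df = pd.read_csv(raw, parse_dates=["quote_date", "expiration"])

# First checks on any new dataset: size and column types
print(df.shape)
print(df.dtypes)

# e.g. average implied volatility per expiry for the calls
calls = df[df["type"] == "call"]
print(calls.groupby("expiration")["implied_vol"].mean())
```

Ten minutes of this kind of poking at a free sample tells you more about whether a dataset is actionable than any product page.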
I started looking for financial data at Quandl. The datasets that contain options information (mostly daily and historical implied volatility values) are available for a premium payment and come with a free trial period. Some web pages where historical and daily options data (including several kinds of prices) can be found are: Historical Option Data, OptionData.net and OptionCast (for daily data), OptionMetrics, and Tick Data.
The CBOE (Chicago Board Options Exchange) page offers relevant statistics, tools and education, but if you want the real thing you should follow the links to CBOE Livevol, where options data and analysis/backtesting tools are available at a range of prices. Prices for those datasets start at several hundred dollars per year (for historical data) and run to several thousand, depending on the level of granularity, the quality of the data, the tools included, etc.
You can also check with your online broker for extra data and tools. Interactive Brokers, for example, offers several hundred third-party solutions that could be relevant here. Since we are talking about a large amount of data, you should check whether the data has been adjusted for changes (or mistakes) that may have occurred over time, or whether it simply reflects regular day-to-day trading ('raw data').
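If you end up with raw, unadjusted data, it pays to run a few mechanical sanity checks before any model touches it. The sketch below flags rows that are almost certainly errors, such as crossed markets (bid above ask) or negative prices; the quote rows and field names are hypothetical, but the checks themselves apply to any options feed:

```python
# Hypothetical raw quotes; the bad rows mimic errors an unadjusted feed can contain.
quotes = [
    {"strike": 100, "bid": 1.20, "ask": 1.35},
    {"strike": 105, "bid": 0.90, "ask": 0.70},   # crossed market -- likely an error
    {"strike": 110, "bid": -0.05, "ask": 0.40},  # negative bid -- clearly bad
]

def suspicious(q):
    # flag rows a cleaning pass should inspect before any backtest uses them
    return q["bid"] < 0 or q["ask"] <= 0 or q["bid"] > q["ask"]

bad = [q for q in quotes if suspicious(q)]
print(len(bad))  # 2
```

If a vendor's 'adjusted' dataset still trips checks like these, that is a useful signal about its quality before you pay for the full history.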
If you want to dig deeper (and find better, more suitable options), there are forums where these matters are discussed, for example Elite Trader and Stack Exchange.