My TFSA Update August 2017 - Does Combining CNNs and GRUs Yield Better Stock Price Predictions?

Last update on Sept. 18, 2017.

Image Credit: MeskPhotography/Shutterstock.com


In this series, I (Jin Choi) talk about my goal of reaching $1 million in my TFSA account by 2033. If you want to know what a TFSA is, I recommend you read my free book. In this post, I’ll detail what came of my efforts to combine two machine learning architectures - Convolutional Neural Networks (CNNs) and Gated Recurrent Units (GRUs) - to make better stock price predictions.


August Results: Down 5.5%

At the end of August, I had $45,907 in my TFSA account, which was down by 5.5% during the month. By comparison, the Canadian stock market went up by 0.7% while the U.S. stock market went up by 0.3% in Canadian dollar terms. Therefore, my portfolio underperformed in August.

The majority of my portfolio still consists of oil and gas stocks. Although I no longer feel as optimistic about oil as I did perhaps a year ago, the oil and gas sector still seems like a good place to put money right now. One of the best times to buy a stock is when nobody else wants to touch it. That’s true of oil stocks today.

In August, oil prices went down from $50.21/bbl to $47.26/bbl. Much of this move was due to Hurricane Harvey. The flooding that accompanied the hurricane forced oil refineries, many of which are located on the Gulf Coast, to close down temporarily. Oil is useless without those refineries to turn it into products like gasoline, so its price went down.

But if you set aside the effect of Harvey, the situation looks rather positive for oil prices. For one thing, we’ve had large inventory drawdowns so far this year, as the chart below shows.

[Chart: U.S. crude oil inventories, showing the large drawdowns in 2017]

For another, U.S. oil production has been growing at a much slower pace than many, including myself, had anticipated. The following chart shows the difference between the U.S. Energy Information Administration (EIA)’s weekly production estimates and its monthly production estimates. The weeklies are based on the EIA’s forecasts, whereas the monthlies are based on “counting” the barrels actually produced.

[Chart: EIA weekly vs. monthly U.S. oil production estimates]

Regular readers may recall that I “threw in the towel” on oil a couple of months ago, primarily because I thought U.S. oil production would surge. Since then, some readers have made good arguments suggesting that I was wrong to give up. I’m not yet ready to change my mind again, but if U.S. production keeps disappointing, I may eventually have to.

Regardless of whether oil prices surge, my long-term plan remains to rely on machine learning to guide my investments, and I continue to put time and effort into investigating machine learning techniques. For the rest of this article, I will share my findings from combining two separate machine learning architectures I’ve investigated in the past: CNNs and GRUs. If you’re not familiar with these architectures, I recommend you read up on them first.


CNNs and GRUs - Are Two Better Than One?

There are two ways of combining CNNs and GRUs. The first way is to use both of them in separate layers of the same overall machine learning architecture.

As an analogy, think of machine learning architectures as Lego structures. Convolutional layers, which form the basis of CNNs, could be the thin wide blocks, whereas GRU layers could be the tall narrow blocks.

So far, by examining CNNs and GRUs separately, it’s as if we’ve decided to build one set of architectures using only thin wide blocks, and another set using only tall narrow blocks. But of course, we can also combine the differently shaped blocks to create totally new architectures, and that’s exactly what I did.

The architectures I investigated first used one or two convolutional layers to detect features of the input. One or two GRU layers then processed those features chronologically, and tried to form “memories”. Finally, the architecture used the last imprinted memory to predict outputs.

As was the case in previous articles on CNNs and GRUs, I used the historical price of Canadian stocks as input, and the performance of each stock one week later as output.
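
To make this concrete, here’s a minimal sketch of what such a hybrid model might look like in Keras. The window length, layer sizes, and training settings are illustrative assumptions on my part, not the exact configuration I tried.

```python
# A minimal sketch of a hybrid CNN-GRU model, assuming a fixed-length window
# of daily price data per stock as input and the one-week-ahead performance
# as output. All sizes below are illustrative assumptions.
import numpy as np
from tensorflow.keras import layers, models

WINDOW = 60      # assumed: 60 trading days of history per sample
N_FEATURES = 1   # assumed: one feature per day (e.g. the daily return)

model = models.Sequential([
    # Convolutional layers detect local features in the price series.
    layers.Conv1D(16, kernel_size=5, activation="relu",
                  input_shape=(WINDOW, N_FEATURES)),
    layers.Conv1D(16, kernel_size=5, activation="relu"),
    # A GRU layer then processes the detected features chronologically,
    # carrying a "memory" forward through time.
    layers.GRU(32),
    # The last GRU state is used to predict the one-week-ahead performance.
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Placeholder data shaped like the real inputs and outputs.
X = np.random.randn(1000, WINDOW, N_FEATURES)
y = np.random.randn(1000)
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
```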

Before I conducted my research, I had high hopes that this way of combining CNNs and GRUs would yield better stock price predictions, since using the same technique for language identification seemed to have yielded good results. Unfortunately, that didn’t turn out to be the case: none of the architectures I tried yielded an R2 (R-squared; a higher number denotes better predictive capability) of more than 3%. This compares to the R2 of roughly 5 to 6% that I’ve seen with pure CNN and GRU architectures.

I’m not sure why combining CNNs and GRUs failed to deliver better predictions. It’s possible that combining them is effective, but I just haven’t tried the right architecture. There are many ways to combine CNNs and GRUs, but due to time and computational constraints, I have only been able to try a small subset of them. On the other hand, it’s also possible that mixing CNNs and GRUs this way is not very effective in generating the particular predictions I’m looking for.

The other way to combine CNNs and GRUs is to use an “ensemble” approach. Instead of mixing convolutional and GRU layers into a single architecture, we train two different architectures - one purely CNN, and the other purely GRU - on the same data set. We then take the forecasts of both architectures into account to yield the final forecast. This is like asking two experts where they think a stock is going to go, and taking the midpoint between their predictions.
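
Mechanically, the ensemble is very simple. The toy sketch below just averages the two sets of forecasts; the numbers are placeholders standing in for actual outcomes and the outputs of trained CNN and GRU models.

```python
# Toy sketch of the ensemble (midpoint) approach. The arrays below are
# placeholders standing in for actual outcomes and model forecasts.
import numpy as np
from sklearn.metrics import r2_score

y_actual = np.array([0.010, -0.020, 0.030, 0.000])   # actual one-week performance
cnn_pred = np.array([0.015, -0.010, 0.020, 0.005])   # pure-CNN forecasts
gru_pred = np.array([0.005, -0.025, 0.025, -0.005])  # pure-GRU forecasts

# The ensemble forecast is the midpoint (simple average) of the two forecasts.
ensemble_pred = (cnn_pred + gru_pred) / 2.0

for name, pred in [("CNN", cnn_pred), ("GRU", gru_pred), ("Ensemble", ensemble_pred)]:
    print(name, round(r2_score(y_actual, pred), 3))
```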

The ensemble approach would generate better predictions if the two “experts” disagreed often enough to keep each other in check. Suppose one of them predicted that the stock price would be $5 next week, while the other predicted $1, yielding a midpoint forecast of $3. Now, suppose that the actual stock price ended up at $2.50. Then the midpoint prediction would have been more accurate than either of the standalone predictions.

However, an ensemble approach would not yield better results if the two “experts” always agreed with each other, regardless of whether they’re right or wrong. For example, if both predict the same stock price of $5 next week and the correct price turns out to be $2.50, then we didn’t gain anything from having two predictions instead of one.

Unfortunately, as it turns out, the latter appears to be the case. Using the ensemble approach yields models with an R2 of between 5% and 6%, which is the same range of R2 shown by pure CNN and GRU models. This suggests that each pure model tends to give the same predictions as the other.

I don’t profess to know why CNN and GRU models tend to give the same answers, but I do have some theories.

One possibility is that the two models are similar enough to each other that they tend to detect the same patterns in the data. Although CNNs and GRUs have different architectures, they both belong to a broader class of machine learning models known as neural networks. It could be that we need to use a very different approach, such as random forests, in order to detect additional patterns.
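
For illustration only, here’s roughly what such a different approach might look like - a random forest regressor fit on flattened windows of daily returns. The feature setup and data below are placeholder assumptions, not something I’ve actually run against the real dataset.

```python
# A rough sketch of a random forest baseline: flattened windows of daily
# returns as features, one-week-ahead performance as the target. The data
# here is random placeholder data, used purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 60))   # 60 daily returns per sample (assumed window)
y = rng.standard_normal(1000)         # one-week-ahead performance

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(X_train, y_train)
print("R2:", round(r2_score(y_test, forest.predict(X_test)), 3))
```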

The other possibility is that the architectures we’ve examined - CNNs and GRUs - are not sophisticated enough to capture the patterns contained in the data. There are many ways to augment these models, and some of them have shown promise in predicting other types of data.

The final possibility is that the CNNs and GRUs I’ve examined have captured almost all of the patterns available for discovery. In other words, perhaps it’s not possible to create a model that scores much higher than an R2 of 5% to 6%.

Academics have long argued that stock prices exhibit a “random walk”. In other words, whether a stock price goes up or down on any given day is impossible to predict. Value investors like Warren Buffett generally concur with this point of view, at least over short horizons of less than a few years.

The only group of investors who disagree with this notion are technical analysts. Using historical price charts as the sole source of information, technical analysts try to predict the daily, weekly, or monthly changes in stock prices. But while it’s possible to make money using technical analysis, almost everyone agrees that it’s hard.

I think the results I’ve shown in this article confirm this view. Clearly, there is some sort of pattern in Canadian stock price data. However, the patterns are neither easily detectable nor reliable enough for one to make large sums of money by exploiting them.

I believe the better way, as I’ve laid out in my last TFSA update, is to predict each company's fundamentals. I’ll turn my attention to this task from now on.

