Can Financial Ratios Predict Next Year's Stock Performance? (Part 2)

Last update on April 30, 2017.


In part 1 of this series, I explained how deep neural networks (DNNs) can discern complex cause-and-effect relationships that traditional statistical models can’t. I also set up the question we’re trying to evaluate using DNNs: is there some combination of financial ratios that can predict individual stock performance over the next 12 months?

In this article, I continue where I left off and present some of the results. I will also discuss some possible explanations behind the results and explore some ideas for the future.

I mentioned in part 1 that there are an unlimited number of ways to configure DNNs. One of the primary configuration choices is the number of hidden layers. Adding more hidden layers allows DNNs to detect more complex relationships between input (financial ratios) and output (stock performance), but it also increases the chance of “overfitting”, a phenomenon we want to avoid (discussed later). Therefore, we usually want as few hidden layers as possible.

To start off, I decided to see whether a DNN with just one hidden layer could make the connection between input and output. However, I quickly found out that such a DNN couldn’t do the job.

The following graph shows the actual stock performance (x-axis) vs. the stock performance predicted by the trained DNN (y-axis) on the ‘train’ data set in green dots. To use an analogy, let’s say that the DNN is like a robot we trained to fly a plane using a flight simulator. In a manner of speaking, the following graph shows the performance of the robot in one of the simulations it has trained on before.

[Figure: actual vs. predicted stock performance on the ‘train’ data set, one hidden layer (fr-1h-train.png)]

If the DNN has the ability to predict stock performances, we would expect the predicted output to come close to the actual output. Therefore, we would expect to see the majority of green dots to appear inside the red ellipse.

Dots appear inside the blue ellipse when the DNN has no idea how to predict output, in which case it errs by predicting some average output. Since the output is the performance of individual stocks relative to the average of all stocks, the average output is close to 0. The blue ellipse encircles the area where the predictions are close to 0.
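This “predict the average” behavior isn’t arbitrary: under squared-error loss, the mean of the training outputs is the best constant guess a model can make when it finds no usable signal. Here’s a minimal sketch of that idea; the relative-performance numbers are made up for illustration.

```python
# Sketch: when a model finds no signal, predicting the mean of the
# training outputs is the best constant guess under squared error.

def mse(predictions, targets):
    """Mean squared error between two equal-length lists."""
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

# Stock performance relative to the market average, so it centers near 0.
targets = [0.12, -0.08, 0.03, -0.05, 0.01, -0.03]

mean_guess = sum(targets) / len(targets)      # close to 0 by construction
constant_model = [mean_guess] * len(targets)

# Any other constant guess scores worse than the mean under squared error.
print(mse(constant_model, targets) < mse([0.10] * len(targets), targets))
```

Since our outputs are performances relative to the market average, that best constant guess sits near 0 — which is exactly where the blue ellipse is.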

While some green dots did in fact appear inside the red ellipse, more dots appeared inside the blue ellipse. This suggested to me that a simple DNN with just one hidden layer can’t discern the relationship between input and output. (For the machine learning nerds out there, I used 80 nodes in the hidden layer and trained it through 50,000 epochs. I tried several activation functions and varied the batch sizes, but the results were similar.)

The next logical step then was to increase the number of hidden layers, and see if that yielded a better model. I didn’t get very good results for models with two or three hidden layers, but I was able to get the following results when I used four hidden layers. As with the one hidden layer example, the graph shows the actual vs. predicted stock performance on the ‘train’ data.

[Figure: actual vs. predicted stock performance on the ‘train’ data set, four hidden layers (fr-returns-train-1.png)]

As you can see, quite a few dots still appear within the blue ellipse. However, the majority of the dots seem to appear within the red ellipse. This means the DNN figured out the relationship between financial ratios and stock performances, right? Not so fast.

As I’ve mentioned before, the above results show the performance of the DNN on the ‘train’ data set. Therefore, the results show the performance of the DNN on input that it had already seen the actual output for during training. I’ve likened this to training a robot on a flight simulator, and getting that robot to perform on a flight simulation that it has already trained on before.

However, I think you’ll agree with me that this is no way to judge how the robot will perform in a real flight. In a real flight, the robot will encounter situations it didn’t encounter in a flight simulation. So at the very least, we should test the robot on flight simulations it hasn’t trained on before.

The following graph shows the actual vs. predicted stock performance on the ‘test’ data set, which the DNN had not seen during training. In other words, it’s a test of the robot in flight simulations it had never encountered before.

[Figure: actual vs. predicted stock performance on the ‘test’ data set (fr-returns-test-1.png)]

I didn’t draw the red ellipse this time, but the same principle applies: if the DNN could predict stock performances, we should see lots of green dots in the diagonal corridor. Judging from the virtual absence of such dots, we can plainly see that the DNN doesn’t do a good job.

But if the DNN can’t in fact predict stock performance, then why could it predict stock performances in the ‘train’ data set? The reason is that the DNN simply memorized some results in the ‘train’ data set.

Let me explain this point using the analogy of the robot learning how to fly using a flight simulator. On a certain simulation, it probably figured out that it would get to the destination if it turned left once and turned right twice. The robot never got to understand why such turns would make the plane go the right way, just that it did. Thus when that particular simulation showed up, the robot would turn left and then turn right twice. However, when it encountered a completely new flight simulation, it had no clue how to navigate.

The same situation happened in the ‘train’ data set. In many instances of inputs, the DNN simply memorized the corresponding output. So when the DNN was asked to predict the output of those inputs, it simply predicted values that were close to the actual outputs. However, the DNN never really figured out a more generally applicable relationship between inputs and outputs.
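The memorization failure mode can be caricatured with a toy lookup-table “model” (this is an illustration, not the actual DNN): it stores every training pair verbatim, so it looks perfect on inputs it has seen, and falls back to guessing the average on anything new. The ratio tuples and returns below are hypothetical.

```python
# Toy illustration of rote memorization: a "model" that stores each
# training input/output pair verbatim looks perfect on the train set
# but falls back to the average (near 0) on unseen inputs.

class MemorizingModel:
    def fit(self, inputs, outputs):
        # "Training" is just storing every example it has seen.
        self.memory = dict(zip(inputs, outputs))
        self.fallback = sum(outputs) / len(outputs)

    def predict(self, x):
        # Known input: recall the memorized answer. Unknown: guess the mean.
        return self.memory.get(x, self.fallback)

# Hypothetical (ratio, ratio) -> next-year relative performance pairs.
train_x = [(5.0, 0.8), (12.0, 2.1), (30.0, 6.5)]
train_y = [0.15, 0.02, -0.10]

model = MemorizingModel()
model.fit(train_x, train_y)

print(model.predict((5.0, 0.8)))   # memorized: 0.15
print(model.predict((9.0, 1.5)))   # unseen: falls back near the mean
```

A DNN with enough free parameters and no regularization can end up doing something functionally similar, just with the lookup table smeared across its weights.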

Thankfully, there are ways to combat the tendency of DNNs toward rote memorization; the technical term for this is ‘regularization’. But when I applied even mild forms of regularization, I found that DNNs with four hidden layers couldn’t fit the data well. For example, the following graph shows the actual vs. predicted stock performance of one such DNN.

[Figure: actual vs. predicted stock performance on the ‘train’ data set, four hidden layers with regularization (fr-returns-train-2.png)]

A DNN with four hidden layers is considered to be pretty complex. So after seeing these results, I concluded that financial ratios, by themselves, had virtually no power to predict stock market returns over the next 12 months.
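For readers curious what regularization looks like mechanically: in its common L2 form, it adds a penalty proportional to the sum of squared weights to the training loss, which makes the large, example-specific weights that memorization relies on expensive. This is a generic sketch with illustrative numbers, not the specific scheme I used.

```python
# Sketch of L2 regularization: the training loss gains a penalty term
# proportional to the sum of squared weights, so memorizing individual
# examples with large weights becomes costly. Numbers are illustrative.

def mse(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)

def l2_penalty(weights, lam):
    return lam * sum(w * w for w in weights)

def regularized_loss(preds, targets, weights, lam=0.01):
    return mse(preds, targets) + l2_penalty(weights, lam)

preds, targets = [0.1, -0.05], [0.12, -0.03]
small_w = [0.2, -0.1, 0.3]
large_w = [5.0, -8.0, 12.0]   # the kind of weights memorization tends to need

# Same predictions, but the large-weight solution pays a bigger penalty.
print(regularized_loss(preds, targets, small_w))
print(regularized_loss(preds, targets, large_w))
```

If the network can only fit the data by cranking its weights up, regularization forces it to give that fit back up — which is exactly what happened here.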

This conclusion may surprise you. It certainly surprised me. But why can’t the financial ratios predict stock performance? I can think of a few possibilities.

One, my DNN models may suck. I may have consistently used only those configurations that yield poor results. While this is possible, I think it’s unlikely. Unless I’m completely off track, I likely would have seen more encouraging signs of DNNs working.

That said, it’s possible that a different type of neural network would yield better results. Recurrent neural networks (RNNs), for example, are known to fit time series data better. A time series is data in which each data point is associated with a point in time; stock performance is an excellent example.

Two, financial ratios may predict stock performance, but only over longer periods of time. In this article, I used financial ratios that have been calculated over the past 12 months to investigate the stock performance over the next 12 months. Perhaps financial ratios calculated over longer time horizons will have better predictive capability. Or, perhaps we’ll need to measure stock performance over longer periods of time. It would be interesting to lengthen each time horizon to say, three years, and see if that yields better results.

Three, the financial markets may have become more “efficient” in recent years. It is now common knowledge that ‘value’ stocks typically outperform the rest of the stock market. The academic papers typically determine what a value stock is by looking at specific financial ratios such as the Price to Book, or the EV to EBIT.

In recent years, with the help of technology, sophisticated investors have implemented trading strategies that take advantage of such findings. In fact, I would be shocked if I were the first person to ever employ machine learning to investigate the relationship between financial ratios and stock performances. Such trading strategies may have moved stock prices to the point where (ironically) such strategies no longer work. In other words, the market may have become more “efficient”.

There are some academics who agree with this point of view. For example, this paper asserts that value investing driven purely by financial ratios hasn’t worked in the period between 2002 and 2015. Note that I’ve only used data since 2007 for my DNNs, so the paper fits with my conclusions.

However, I think it’s too soon to say that such value investing strategies have become ineffective. In fact, I think it’s possible that instead of the market becoming more efficient, it has become gradually more inefficient.

Take the P/E ratio, for example. Stocks with low P/E ratios have historically outperformed the market, at least until the turn of the millennium. The ratio has predictive powers because when the ratio is low, the stock earns more per dollar’s worth of stock.

For example, if the P/E ratio is 5, then the stock earns 1/5 = 20 cents for each dollar’s worth of stock. If the P/E ratio is 10, then it earns 1/10 = 10 cents per dollar. If the earnings of each stock stay the same forever, then the owner of the stock whose P/E is 5 will see more earnings per dollar invested.
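The arithmetic above is just the reciprocal of the P/E ratio, sometimes called the earnings yield:

```python
# Earnings yield: the reciprocal of the P/E ratio, i.e. how many cents
# of earnings each dollar of stock price buys.

def earnings_yield(pe_ratio):
    """Earnings per dollar of stock price (1 / P/E)."""
    return 1.0 / pe_ratio

print(earnings_yield(5))    # 0.2 -> 20 cents per dollar
print(earnings_yield(10))   # 0.1 -> 10 cents per dollar
```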

Once this becomes apparent, investors will generally buy the stock with a P/E of 5, and sell the stock with a P/E of 10. However, there are two reasons why this may not happen.

On the one hand, if the stock with a P/E of 5 sees its earnings decline rapidly, then the stock price may actually decline - and rightfully so. As I’ve written before, growth is a critical component of a company’s valuation, and investing purely on P/E ratios may neglect to incorporate this component.

Low P/E stocks have often performed badly over the last couple of decades. In recent years, tech stocks with high P/E ratios, like Amazon and Google, rose to prominence through high growth. On the other hand, the traditional companies that these tech companies disrupted - newspapers, retailers, etc. - typically have lower P/E ratios. This led to the situation where companies with high P/E ratios outperformed companies with low P/E ratios.

But on the other hand, even if a company with a low P/E ratio outperforms on a rational basis, there’s no mechanism that forces such outperformance, at least in the short term. If investors simply choose not to invest in companies with low P/E ratios, then such stocks will stay cheap.

This phenomenon occurred, for instance, during the internet bubble. Companies with no revenue but plenty of “vision” outperformed companies with real incomes. When I look at Tesla, Snapchat, or marijuana stocks, it makes me think we could be in the midst of another such bubble.

In summary, short-term financial ratios don’t seem to predict short-term stock performance, but that doesn’t tell us whether the market is more or less efficient today. We’ll have to perform more analysis to ascertain that.

But for now, there’s one lesson we can draw from the DNN experiment I outlined in this article: don’t invest your money based simply on some financial ratios. Instead, do your due diligence on other aspects of the company, such as gauging its growth prospects.

Happy investing.
