I've intensively reviewed 45 papers on analyst forecast accuracy. The consistent finding: analysts were generally inaccurate, and their errors skewed optimistic.

Prior research measured this inaccuracy, tried to identify whether the errors followed a pattern, and considered both individual analysts' performance and consensus earnings forecasts.

I see seven issues with this prior research.

Issue #1: Most research scaled error by price, not earnings. What do I mean by this? Look at the equations here:

**(Forecast earnings – Actual earnings) / Share price = SFE (Equation 1)**

**(Forecast earnings – Actual earnings) / |Actual earnings| = SFE (Equation 2)**

In Equation 1, forecast earnings minus actual earnings is divided by share price. That gives what I call "scaled forecast error," and it is the way most people did it. The problem is comparability: share price can swing wildly over time and from stock to stock, so the same forecast miss produces very different scaled errors. Therefore, what I do is scale the error, the difference between forecast and actual earnings, by the absolute value of actual earnings, i.e. Equation 2.
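To make the contrast concrete, here is a minimal Python sketch of the two scaling choices. All numbers are made up for illustration; they are not from the study's data.

```python
# Illustrative sketch of the two scaling choices; all numbers are made up.

def sfe_by_price(forecast, actual, price):
    """Equation 1: error scaled by share price."""
    return (forecast - actual) / price * 100

def sfe_by_earnings(forecast, actual):
    """Equation 2: error scaled by |actual earnings|."""
    return (forecast - actual) / abs(actual) * 100

# The same $0.20-per-share miss on two stocks with different prices:
miss_cheap = sfe_by_price(1.20, 1.00, price=10.0)    # ~2% of price
miss_pricey = sfe_by_price(1.20, 1.00, price=100.0)  # ~0.2% of price

# Scaled by actual earnings, the miss is a comparable ~20% in both cases.
miss_scaled = sfe_by_earnings(1.20, 1.00)
```

Under price scaling, the identical miss looks ten times larger for the low-priced stock, which is exactly the comparability problem described above.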

The second issue that I see is a shifting definition of a forecast time horizon.

If we look at Analyst #1, who was 20% in error twelve months before results came out, and Analyst #4, who was 20% in error one month before results came out, are they equally accurate? No, they're not, because Analyst #4 had eleven more months of information and data. So we're not really comparing apples to apples, and we must make sure that we are.

The next issue I saw was the execution time frame. Many of the assumed trading strategies are unrealistic for fund managers and investors to actually execute. Prior research may assume that a fund manager will trade whenever an analyst changes a recommendation, but the truth is that a fund manager can't just switch in and out of stocks because an analyst said so. They have their own due-diligence process.

Another issue is that the research is overly focused on explaining individual analysts' forecasts, but investors don't have time to keep track of which analyst is outperforming and which is not.

The research has also shown that analysts have been better as reporters than as originators. In fact, some great recent research has shown that analysts are just piggybacking on prior news released by the company and adjusting their earnings estimates accordingly. The news was the actual originator of the information; the analyst just adjusted to it.

There's also what I would call "underutilized research." Some research shows that people simply don't use research. So even when published work points to an opportunity, people aren't reading it and taking advantage of it.

And then there's the cost-benefit problem: analysts are paid a pretty penny to do their job, and that cost is not often offset by the benefit derived over a long period of time.

### Hypothesis

**Financial analysts are optimistically wrong**

**They were more wrong in emerging markets**

### Data

According to the World Bank, there are 109 stock markets in the world, 47,000 listed companies, and a total market capitalization of US$52 trillion.

In this research, we used monthly data on stock exchange-listed companies across the globe that traded at any point during the 12-year period from January 2003 through December 2014. Rather than each individual analyst's estimates, we used consensus estimates sourced from Thomson Reuters I/B/E/S Estimates, which covers about 45,000 companies across 70 markets.

Let's talk about the method and some of the standard definitions. First is the definition of Forecast Error: simply the difference between the forecast earnings and the actual earnings that eventually materialize.

**FE (Forecast Error) = F – A (Forecast earnings minus Actual earnings)**

We can scale that and we call it Scaled Forecast Error (SFE) which is just the FE or Forecast Error relative to something such as share price or actual earnings (A).

**SFE = (FE / |A|) x 100**

Again, I used actual earnings, not share price, to scale the forecast error. Using the absolute value in the denominator ensures that the calculation comes out correctly when A (Actual earnings) is negative.
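Here is a short sketch of why the absolute value in the denominator matters when actual earnings are negative. The numbers are illustrative only.

```python
def scaled_forecast_error(forecast, actual):
    """SFE = (F - A) / |A| * 100; |A| preserves the sign of the error."""
    return (forecast - actual) / abs(actual) * 100

# An analyst forecast +0.50 EPS, but the company actually lost 1.00 per share.
# The forecast was optimistic, and SFE correctly comes out positive:
optimistic = scaled_forecast_error(0.50, -1.00)   # 150.0

# Dividing by A itself (no absolute value) would flip the sign to -150,
# wrongly labeling an optimistic miss as pessimistic.
```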

The other standard definition is Absolute Forecast Error (AFE), which is the absolute value of the difference between the forecast and actual earnings.

**AFE = |F – A|**

There you have the difference between forecast and actual, and then we take the absolute value of it. This eliminates the issue of whether an analyst was optimistic or pessimistic.

Scaled Absolute Forecast Error (SAFE) is absolute forecast error relative to something such as share price or actual earnings. And, yes, you've got it right: I looked at absolute forecast error relative to actual earnings.

**SAFE = (AFE / |A|) x 100**
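A small sketch (made-up numbers) of the difference between SFE and SAFE: signed errors can cancel in an average, while absolute errors cannot.

```python
def sfe(forecast, actual):
    """Signed error: positive means the forecast was optimistic."""
    return (forecast - actual) / abs(actual) * 100

def safe_err(forecast, actual):
    """SAFE: magnitude of the error, direction removed."""
    return abs(forecast - actual) / abs(actual) * 100

# Two analysts off by 0.25 in opposite directions on actual EPS of 1.00:
optimist = sfe(1.25, 1.00)     # +25.0
pessimist = sfe(0.75, 1.00)    # -25.0

# Their signed errors cancel in an average, hiding the inaccuracy...
avg_sfe = (optimist + pessimist) / 2            # 0.0

# ...while SAFE shows that both were 25% off.
avg_safe = (safe_err(1.25, 1.00) + safe_err(0.75, 1.00)) / 2  # 25.0
```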

Now, let's talk about the six-step process to prepare the data. I spent a lot of time figuring this out, and I'm going to walk you through how I did it. First, I created the starting data set. You have to select a minimum company size; otherwise, you have too much data. Second, a company must have at least one EPS forecast; otherwise, there is no forecast to evaluate. And I require at least one target price and one recommendation, because I want full, comprehensive forecast coverage.

**Select minimum size of company**

**Must have ≥1 EPS forecast**

**Must have ≥1 target price and recommendation**

Once we've done that, we've got a short list. Now we apply some filters to this starting data set. First, we remove tiny numbers that produce an extreme scaled absolute forecast error; a very tiny denominator can produce a 10,000% error that is almost meaningless. Next, we have to think about how to truncate extreme scaled absolute forecast errors, meaning cut off the extreme positive and negative numbers. Finally, we need to select the minimum number of analysts. So far I've required at least one EPS forecast, but if we're looking at an average of analysts, one isn't enough; we'd have to have at least two.

**Remove tiny numbers that produce extreme SAFE**

**Truncate extreme SAFE**

**Select minimum # of analysts**

First of all, for minimum size, we started with a market cap greater than or equal to US$50 million. Next, we must have at least one EPS forecast and, as I've said, at least one target price and at least one recommendation.

Next, I removed tiny numbers that produced an extreme scaled absolute forecast error. The cutoff I selected was any tiny number less than or equal to 0.04 that also produced a scaled absolute forecast error greater than 200%. If the tiny number did not produce an error greater than 200%, I left it in.

Then, I truncated the scaled absolute forecast error, which basically means I removed outliers beyond plus or minus 500%. If an analyst was 10,000% wrong, I removed that point because it would skew the data and be meaningless. Finally, I selected a minimum number of analysts; in my case, greater than or equal to 3 EPS forecasts. In summary:

**Select minimum size of company (≥US$50m market capitalization)**

**Must have ≥1 EPS forecast**

**Must have ≥1 target price and recommendation**

**Remove tiny numbers that produce extreme SAFE (≤0.04 with SAFE >200%)**

**Truncate extreme SAFE (remove SFE outliers ±500%)**

**Select minimum # of analysts (≥3 EPS forecasts)**
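The six preparation steps can be sketched as a single filter over per-company records. The field names below are hypothetical; the text does not specify the actual I/B/E/S field layout, so this is only a sketch of the logic.

```python
# Hypothetical record fields; thresholds follow the six steps in the text.
def passes_filters(rec):
    return (
        rec["market_cap_usd_m"] >= 50           # 1. minimum company size
        and rec["n_eps_forecasts"] >= 3         # 2 & 6. >=1 EPS, tightened to >=3
        and rec["n_target_prices"] >= 1         # 3a. at least one target price
        and rec["n_recommendations"] >= 1       # 3b. at least one recommendation
        # 4. drop tiny actual EPS only when it produces an extreme SAFE
        and not (abs(rec["actual_eps"]) <= 0.04 and rec["safe"] > 200)
        # 5. truncate extreme errors outside +/-500%
        and -500 <= rec["sfe"] <= 500
    )

sample = [  # illustrative records only
    {"market_cap_usd_m": 120, "n_eps_forecasts": 5, "n_target_prices": 2,
     "n_recommendations": 3, "actual_eps": 1.10, "safe": 12.0, "sfe": 12.0},
    {"market_cap_usd_m": 30, "n_eps_forecasts": 5, "n_target_prices": 2,
     "n_recommendations": 3, "actual_eps": 1.10, "safe": 12.0, "sfe": 12.0},
    {"market_cap_usd_m": 120, "n_eps_forecasts": 5, "n_target_prices": 2,
     "n_recommendations": 3, "actual_eps": 0.01, "safe": 900.0, "sfe": 900.0},
]
kept = [r for r in sample if passes_filters(r)]
```

Here the second record fails the size filter and the third fails both the tiny-number and truncation filters, so only the first survives.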

What did I end up with?

In the 2014 numbers of my data set, the total number of companies was 7,434, and 66.9% of them came from developed markets.

### Method

We calculated Scaled Forecast Error (SFE), which is FE relative to something such as Share Price (P) or Actual earnings (A).

**SFE = (FE / |A|) x 100**

In this case, weâ€™re going to use actual earnings. We did this for every stock for every year; and then we took an average for each year.
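The method above (SFE per stock per year, then a yearly average across stocks) can be sketched in plain Python. The records are illustrative values, not the study's data.

```python
from collections import defaultdict

def sfe(forecast, actual):
    """SFE = (F - A) / |A| * 100."""
    return (forecast - actual) / abs(actual) * 100

records = [  # (year, stock, forecast_eps, actual_eps) -- illustrative only
    (2008, "AAA", 2.00, 1.00),
    (2008, "BBB", 1.50, 1.25),
    (2009, "AAA", 1.10, 1.00),
]

# Group each stock's SFE by year...
by_year = defaultdict(list)
for year, stock, forecast, actual in records:
    by_year[year].append(sfe(forecast, actual))

# ...then average across stocks within each year.
yearly_avg = {year: sum(errors) / len(errors) for year, errors in by_year.items()}
```

With these made-up inputs, 2008 averages the two stocks' errors of roughly 100% and 20% into about 60%, while 2009 is about 10%.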

### Results

Analysts were 25% optimistically wrong over this 12-year period. During the best years, they were only wrong by 10%. In 2008, the worst year, when earnings collapsed, they were wrong by 55%.

That's interesting right there. At that point, the analysts' forecasts probably didn't move much; instead, what happened was that earnings collapsed.

You can also see that the bottom of this chart is at zero; the axis runs from zero to sixty percent in ten-point increments. Analysts were never pessimistically wrong.

Letâ€™s look at emerging markets.

We found that their error was 35%. Emerging-market analysts had been much more optimistically wrong, especially over the period from 2010 to 2014.

### Action

**Maybe adjust analysts' earnings forecasts down by 20% to 30% to arrive at your own more accurate forecast**

**Adjust down earnings forecasts of analysts in emerging markets even more**

**DISCLAIMER:** This content is for information purposes only. It is not intended to be investment advice. Readers should not consider statements made by the author(s) as formal recommendations and should consult their financial advisor before making any investment decisions. While the information provided is believed to be accurate, it may include errors or inaccuracies. The author(s) cannot be held liable for any actions taken as a result of reading this article.