## Beating the Best Mutual Funds: Fidelity Floating Rate High Income Fund (FFRHX) Portfolio Optimization Part I

Nicholas Bhandari | Our new entry this week explains an optimization method that is unfamiliar to many. Due to its length, it will be broken up into two parts (see part II here). In the first, below, we explain the CVaR optimization method and the reasons for using it. In part II, the actual optimization will take place.

In an effort to move beyond simple equity or fixed-income mutual funds, today we have a leveraged performing-loan mutual fund that we are going to attempt to replicate and beat with a portfolio of ETFs.

The assets FFRHX holds include floating-rate instruments in the lower-investment-grade or sub-investment-grade range. Loans to companies with some performance risk, packaged institutional loans, and sovereign debt are all included in its portfolio. Since these rates float and are predominantly non-investment-grade, expect a fund like this to perform better in solid economic times. The cash holdings, however, which run as high as 15%, lie above the average for this category, showing a level of risk aversion. As a result, this fund has traditionally been more understated than its peers.

This fund brings up an interesting opportunity to address a severely misunderstood area of finance: the non-normality of returns and risk. The Mean-Variance optimization technique we have been using to this point is fraught with well-documented issues, but for a small portfolio of assets in extremely liquid markets, it works adequately. It relies, though, on one extremely unrealistic assumption: that the returns of the investments in question are normally distributed.

This is a problem that received shockingly little attention before the financial crisis, but has now been thrust into the spotlight. The evaporation of trillions of dollars will do that.

The figure above is a histogram of the returns of the S&P 500 over roughly the past 30 years. Superimposed over the histogram is the normal distribution or, as it's more frequently called, the bell curve. It doesn't take a statistician to see some immediate issues: the data is more peaked at the mean and has significantly fatter tails, something we refer to as excess positive kurtosis, or a leptokurtic distribution. The implication is that the normal model underestimates the frequency of both extreme events and events near the mean.

Believe it or not, these are issues we can deal with. The distribution of returns (the blue histogram) moves closer and closer to a normal distribution as the sample size increases, and the data in the tails occurs so infrequently that, for most investors, optimizing along those lines would create a portfolio that overcompensates for risk and returns little to nothing. People who are retiring within 5-10 years, or who use index options and futures, are not in this group and should be using more robust techniques. This is the reason the MV optimization technique is acceptable for a portfolio of equities. Over time I will begin to go over the Mean Absolute Deviation (MAD) method of optimization, my personal favorite, but the math, for now, is beyond the scope of this article.
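The excess kurtosis described above is easy to measure yourself. Here is a minimal sketch using simulated data — a Student-t draw stands in for a fat-tailed return series like the S&P 500's, since the actual data is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def excess_kurtosis(x):
    """Fourth standardized moment minus 3 (zero for a normal distribution)."""
    z = (x - x.mean()) / x.std()
    return np.mean(z**4) - 3.0

normal_returns = rng.normal(0.0, 0.01, 10_000)          # the bell curve
fat_tailed = rng.standard_t(df=5, size=10_000) * 0.01   # leptokurtic stand-in

print(excess_kurtosis(normal_returns))  # near 0
print(excess_kurtosis(fat_tailed))      # well above 0: fatter tails
```

A positive value means the distribution is more peaked at the center and heavier in the tails than the normal curve, which is exactly the pattern the histogram shows.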

Let's now consider, though, the mutual fund at hand: a leveraged floating-rate loan portfolio. The characteristics of this asset class do not follow the price action of equities, something made very clear in the graph below.

This is the histogram of daily returns for the FFRHX Mutual Fund, with (believe it or not) exactly the same normal distribution. Clearly there is not even a passing relationship to the normal distribution. The bulk of the distribution sits to the right, with a long tail of losses on the left, known as a negative skew, and the distribution is so peaked that it dwarfs the superimposed normal distribution. This is a distribution of returns that would be horribly estimated by any optimization technique relying on the normality assumption. If you take only one point from this entry, let it be that optimization techniques are only as good as the assumptions made. We as risk practitioners make our most egregious mistakes by underestimating and misestimating the very thing we are paid to quantify.

So the standard deviation, which is only applicable when the distribution is at least approximately normal, is no longer acceptable as a proxy for risk. We need to consider more advanced methods.

It is likely that you’ve heard of Value at Risk (VaR) in the news recently. Banks use it to estimate the risks they are taking on a day to day basis. The calculations are unimportant, but the basic idea is actually very simple.

This image from Wikipedia summarizes it succinctly. The total area under a distribution plot is 100%, so here we have the red area amounting to 5% of the total distribution, and the blue as the other 95%. VaR gives a dollar figure that your losses will meet or exceed 5% of the time (the percentage can be adjusted to your risk requirements). VaR has significant issues that make it generally unacceptable as a sole representation of risk. Therefore we use an adjusted version known as Conditional Value at Risk (CVaR).
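Historical VaR is simple to compute from a return series: take the loss at the chosen percentile of the observed returns. A sketch on simulated data — the return series and the $1,000,000 portfolio value are illustrative assumptions, not figures from the article:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical daily returns standing in for a real fund's history.
returns = rng.normal(0.0005, 0.01, 5_000)

def historical_var(returns, alpha=0.05):
    """Loss threshold exceeded alpha of the time, as a positive number."""
    return -np.quantile(returns, alpha)

var_5 = historical_var(returns)
portfolio_value = 1_000_000
print(f"5% one-day VaR: ${var_5 * portfolio_value:,.0f}")
```

Reading the output: on roughly 5% of days, the portfolio loses at least that dollar amount. Note the method says nothing about how bad those days get, which is exactly the weakness discussed next.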

For some reason CVaR is labeled Tail Value at Risk (TVaR) in the image above; they are the same thing. The key defeating term for VaR is "or more". VaR, as you can see from the above image, shows that your losses will exceed a certain dollar amount 5% of the time; however, it makes no distinction as to how large that loss can become. You could have a situation where the losses explode once they pass the threshold, as is the case in the image above. In addition, if the distribution is non-normal, which is what we are dealing with, then VaR grossly underestimates the loss potential. To put this in dollar terms, VaR tells you that 5% of the time you can lose $1,000,000.00, but it does not explain that once this threshold is broken, those losses can explode to $100,000,000.00.

CVaR attempts to solve this issue by taking the average of all losses that occur beyond the threshold. So if the losses for one portfolio explode to $100,000,000.00 after the threshold is broken, but another has losses that only grow to $5,000,000.00 after the threshold, CVaR will differentiate between these two while VaR will not. So now, instead of minimizing the standard deviation of our portfolio, we will minimize the CVaR of our portfolio. This optimization technique will solve our problem of non-normality.
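The tail-average idea translates directly into code: compute the VaR threshold, then average every loss at or beyond it. A sketch on simulated fat-tailed data (a Student-t draw as a hypothetical stand-in for a portfolio like FFRHX):

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical fat-tailed daily returns (Student-t stand-in).
returns = rng.standard_t(df=4, size=5_000) * 0.01

def var_cvar(returns, alpha=0.05):
    """Historical VaR and CVaR at level alpha, both as positive loss figures."""
    var = -np.quantile(returns, alpha)
    tail = returns[returns <= -var]   # losses at or beyond the threshold
    cvar = -tail.mean()               # average loss in the tail
    return var, cvar

var, cvar = var_cvar(returns)
print(f"VaR:  {var:.4f}")
print(f"CVaR: {cvar:.4f}")  # always >= VaR; the gap widens as tails fatten
```

Two portfolios with identical VaRs can have very different CVaRs, which is precisely why CVaR, not VaR, is the quantity we will minimize in part II.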

For now I will leave you with a final analogy to hopefully shed light on the concept of extreme events. Borrowing from Nassim Taleb, arguably the world's foremost expert on extreme events or "Black Swans", pretend that there are two countries, Mediocristan and Extremistan. In Mediocristan, an event that occurs 5% of the time will cost us 25% of our net worth, but in Extremistan an event that occurs 0.1% of the time will cost us our life. It will cost you 25% of your wealth to protect against multiple occurrences of the first event, and 76% of your wealth to protect against the second. Which do you choose?

Tune in tomorrow for your answer.

**DISCLOSURE**:
*The views and opinions expressed in this article are those of the authors, and do not necessarily represent the views of equities.com. Readers should not consider statements made by the author as formal recommendations and should consult their financial advisor before making any investment decisions. To read our full disclosure, please go to: http://www.equities.com/disclaimer.*
