Shifting Stars: Fund Ratings Get Volatile as Financial Crisis Fades From the 10-Year Window

While the health of the bull market, the raging fee wars and the ongoing active vs. passive debate continue to capture the money management industry’s attention, something fascinating has quietly taken place on fund analysts’ radars.

May 08, 2019

This is the first installment in a multi-part research series on fund ratings. In this series, we will use the proprietary functionality our clients leverage to develop custom fund rating systems.

Fund rating (or ranking) systems have been in massive flux over the past 12 to 16 months. Why? Because the Global Financial Crisis (GFC), which saw the market hit bottom in March 2009, has been receding from equity funds’ 10-year return windows. That withdrawal has produced a spike in the volatility of fund rankings based on the 10-year measure of performance, compared with prior years’ 10-year windows, which still captured both the GFC and the subsequent extended bull market.

Before we get into the funds, it’s worth looking at the S&P 500 Total Return Index to understand the magnitude of recent absolute changes as the GFC made its exit from the 10-year window.

Between December 2017 and February 2019, the 10-year rolling annualized return of the S&P 500 nearly doubled, from 8.5% to 16.7%.1 That change occurred amidst 2018’s negative performance, the worst since 2008. In the chart below, we show the S&P 500’s rolling 10-year annualized return (dark blue line) and the Morningstar Risk-Adjusted Return statistic (MRAR)2 for the Index (orange line)3 against the backdrop of its monthly performance (light blue bars).
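For readers who want to reproduce a series like this, a minimal sketch of the rolling-return calculation is below. It assumes a monthly total-return series (the `sp500_tr` name and its source are placeholders, not data supplied with this post) and uses pandas; it is an illustration, not the exact code behind the chart.

```python
import pandas as pd

def rolling_annualized_return(monthly_returns: pd.Series, years: int = 10) -> pd.Series:
    """Rolling annualized return from monthly total returns (decimals, e.g. 0.012 = +1.2%)."""
    window = years * 12
    growth = (1 + monthly_returns).rolling(window).apply(lambda x: x.prod(), raw=True)
    return growth ** (1 / years) - 1

# Hypothetical usage with a month-end indexed return series:
# sp500_tr = pd.Series(..., index=pd.date_range("1988-01-31", "2019-02-28", freq="M"))
# ten_year_return = rolling_annualized_return(sp500_tr, years=10)
```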

Despite Q4 2018’s selloff, the past year exhibited the most growth in both the 10-year return and MRAR for the S&P 500 Index since at least 1988, with MRAR increasing in 2018 from 5.5% to 14.3%, more than 2.5 times its starting level. This demonstrates how sensitive both statistics are to large negative returns, even when viewed over a long period of strong market performance.

Like other risk metrics, MRAR places a significant penalty on large losses, hence the spread between the lines: widest during the GFC, then slowly diminishing to an almost negligible level by 2019. The rapid increases in both statistics, and the shift in the spread, are attributable to the GFC’s high concentration of large market losses passing out of the 10-year window.
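For reference, Morningstar’s published methodology defines MRAR as a power-utility average of monthly geometric excess returns over the risk-free rate, with a risk-aversion parameter γ set to 2 for the star ratings. The sketch below follows that formula in its basic form; the risk-free series is a placeholder, and, as the footnotes note for our own charts, results computed this way will not perfectly match Morningstar’s figures.

```python
import pandas as pd

def mrar(monthly_returns: pd.Series, monthly_rf: pd.Series, gamma: float = 2.0) -> float:
    """Morningstar Risk-Adjusted Return over a sample of monthly returns.

    Geometric excess returns over the risk-free rate are penalized for
    variability via a power-utility average, then annualized. gamma = 2 is
    the risk-aversion level Morningstar uses for the star ratings.
    """
    excess = (1 + monthly_returns) / (1 + monthly_rf) - 1  # geometric excess return
    if gamma == 0:
        # Limiting case: annualized geometric mean excess return, no risk penalty
        return (1 + excess).prod() ** (12 / len(excess)) - 1
    avg_utility = ((1 + excess) ** (-gamma)).mean()
    return avg_utility ** (-12 / gamma) - 1

# A rolling 10-year MRAR series, analogous to the rolling return above:
# mrar_10y = pd.Series({d: mrar(sp500_tr.loc[:d].tail(120), t_bill_monthly.loc[:d].tail(120))
#                       for d in sp500_tr.index[119:]})
```

The large-loss penalty falls out of the (1 + excess)^(-γ) term: a single -20% month adds more to the average “disutility” than roughly the same cumulative loss spread over two -10% months, which is why MRAR reacts so sharply when the GFC’s concentrated losses enter or leave the window.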

While MRAR is used in the ubiquitous and influential Morningstar star ratings, these shifts likely also hold true for most risk-adjusted statistics used in custom rating systems employed by institutional investors, fund platforms and wealth management providers seeking to apply their own ratings’ criteria to fund selection.

To demonstrate, the chart below shows a similar pattern in the 10-year Sharpe Ratio for the S&P 500 Index. The benchmark’s Sharpe Ratio doubled, from 0.6 to 1.2, over the past 14 months, a remarkable and rapid rise.
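A corresponding sketch for the rolling Sharpe Ratio, using the standard construction of the mean monthly excess return over its volatility, annualized by √12 (the exact convention behind the chart may differ slightly, so treat this as an approximation):

```python
import numpy as np
import pandas as pd

def rolling_sharpe(monthly_returns: pd.Series, monthly_rf: pd.Series, years: int = 10) -> pd.Series:
    """Rolling annualized Sharpe Ratio from monthly returns and a monthly risk-free series."""
    window = years * 12
    excess = monthly_returns - monthly_rf
    return np.sqrt(12) * excess.rolling(window).mean() / excess.rolling(window).std()

# Hypothetical usage with the placeholder series from above:
# sharpe_10y = rolling_sharpe(sp500_tr, t_bill_monthly, years=10)
```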

Now let’s dive into the equity funds to see how their rankings have been impacted by the withdrawal of the GFC from the 10-year window. For our purposes in this post, we will focus on Morningstar’s Large Blend category, a universe of stock mutual funds that are typically benchmarked to the S&P 500.

The charts below rank the Large Blend category on MRAR,4 the risk-adjusted statistic behind Morningstar’s quantitative star rating system. We take the top and bottom decile funds5 in the category as of March in each year (2016, 2017 and 2018), then track each group’s ranking on the statistic through the following 12 months as the 10-year window rolls forward monthly.
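A rough sketch of that ranking exercise is below. It assumes a hypothetical DataFrame `large_blend_returns` of monthly fund returns (one column per fund) and the placeholder `t_bill_monthly` risk-free series, and it reuses the `mrar` helper sketched earlier; none of these are the actual data or code behind the charts.

```python
import pandas as pd

def percentile_ranks(fund_returns: pd.DataFrame, rf: pd.Series, stat_fn,
                     as_of: pd.Timestamp, months: int = 120) -> pd.Series:
    """Percentile rank (0-100) of each fund on a risk-adjusted statistic
    computed over the trailing `months` ending at `as_of`."""
    window = fund_returns.loc[:as_of].tail(months)
    rf_window = rf.loc[:as_of].tail(months)
    stats = window.apply(lambda col: stat_fn(col, rf_window))
    return stats.rank(pct=True) * 100

# Flag the top and bottom deciles as of a starting month, then track their ranks
# as the 10-year window rolls forward one month at a time:
# start = pd.Timestamp("2018-03-31")
# ranks0 = percentile_ranks(large_blend_returns, t_bill_monthly, mrar, start)
# top_decile = ranks0[ranks0 >= 90].index
# bottom_decile = ranks0[ranks0 <= 10].index
# monthly_ranks = pd.DataFrame({d: percentile_ranks(large_blend_returns, t_bill_monthly, mrar, d)
#                               for d in large_blend_returns.loc[start:].index}).T
```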

The increase in the volume and degree of shifts over the recent 12-month period (rightmost chart) relative to the prior two is remarkable. As mentioned, MRAR is not alone: the observations above would likely be similar for most performance measurement statistics used to rank the same groups over the same period.

The charts below display the same exercise using the Sharpe Ratio.

The observed pattern also holds for the other eight traditional “style box” categories (MPI Research registration required). So how is this all impacting the way funds are perceived and received?

Unsurprisingly, the longest-term (10-year6) Morningstar star ratings (again, based on MRAR) have been unusually volatile in recent months. Over the past 12 months, U.S. equity funds have had their 10-year star rating change more often, and by a larger amount, than in either of the two 12-month periods prior to that.

Over the past year (between Mar 2018 and Feb 2019), more than 300 funds saw a cumulative change of at least two stars in their 10-year rating. That represents a 650% increase over the prior year. It also represents 10% of all the funds tracked.7

Just as no investor or advisor wants to see their top-ranked fund fall two tiers, having it vary within a two-tier range of ratings can be just as concerning. In the chart below, we look at the range8 of distinct 10-year ratings that funds received over the last three one-year periods. Fewer than 30% of funds maintained the same 10-year rating throughout the last 12 months, compared to more than 50% in the earlier two periods.
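Computing that range is straightforward given a monthly history of 10-year star ratings; per footnote 8, it is simply the maximum minus the minimum rating over each 12-month period. A sketch, assuming a hypothetical `ten_year_stars` DataFrame of integer ratings (rows are month-ends, columns are funds):

```python
import pandas as pd

def rating_ranges(star_history: pd.DataFrame) -> pd.Series:
    """Max-minus-min star rating per fund over the rows (months) supplied."""
    return star_history.max() - star_history.min()

# ranges = rating_ranges(ten_year_stars.loc["2018-03":"2019-02"])
# share_unchanged = (ranges == 0).mean()  # funds that kept the same rating all year
# share_two_plus = (ranges >= 2).mean()   # funds whose rating spanned at least two stars
```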

Roughly one in six funds (15%) saw their 10-year rating change by at least two stars sometime within the past 12 months, as compared to 1% and 2% in the two years prior to that. Thus, by February 2019, the 10-year rating for 417 funds had changed by two stars, and 59 funds saw a change of three stars.

Let’s put this in context. Randomly selecting a 10-year, 5-star fund two or three years ago would likely have left you with a 5-star fund a year later, or almost certainly with at least a 4-star fund. Not so over the past year, however, where the randomly selected fund would more likely than not no longer be a 5-star fund, and could quite possibly be a 3-star fund or lower.

What we have seen with the 10-year window echoes what happened with 5-year rankings in 2014, as discussed in our previous post. There, we noted the shrinking dispersion in standard deviation, which also holds true over 10 years. It is worth repeating: “The dispersion, or distribution, in downside deviation between top and bottom ranking funds narrows significantly (as the GFC recedes). This makes it more difficult to distinguish the riskier funds from their more conservative counterparts in the absence of any large downside events.”

When assets are highly correlated, it can be hard to distinguish between them statistically. Significant differences in rank can result from trivial differences in the statistic the rank is based on. Conversely, substantial differences in skill or strategy may be masked by general buoyancy. In both cases, performance during the GFC is a prime differentiator.
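A toy simulation illustrates the point: if many funds behave like the benchmark plus a little idiosyncratic noise, the spread in a statistic such as the Sharpe Ratio is tiny, yet the ranks built on it still span the whole universe. The numbers below are synthetic and unrelated to the post’s data; they exist only to show the mechanism.

```python
import numpy as np

# 200 hypothetical "funds" that are all the same market series plus small idiosyncratic noise
rng = np.random.default_rng(0)
months, n_funds = 120, 200
market = rng.normal(0.008, 0.04, size=months)            # common monthly return
noise = rng.normal(0.0, 0.004, size=(months, n_funds))   # small fund-specific noise
funds = market[:, None] + noise

sharpe = funds.mean(axis=0) / funds.std(axis=0, ddof=1) * np.sqrt(12)
print(f"Sharpe spread across funds: {sharpe.max() - sharpe.min():.2f}")
# The spread is small, yet percentile ranks built on it still run from bottom to top,
# so a trivial difference in the statistic translates into a large difference in rank.
```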

With the GFC out of the window, potentially more aggressive funds are being rewarded in 10-year rank for the higher returns their strategies have generated in the post-Crisis period, without fully accounting for the risk they may have assumed in the process.9 With many investors worried about the longevity of the bull market, analysts would do well to scrutinize funds that have received significant rating increases. Active managers have been pressured to produce results and keep up with the market, a feat few have achieved with any regularity in this prolonged bull market buoyed by unprecedented monetary policy.

What else does this mean?

For one, these changes could trigger a flows cycle, prompting investors to withdraw from downgraded funds and/or allocate to recent winners, not to mention shifting preferences for newly allocated capital, particularly toward active managers. It’s no secret that investment dollars pour into highly rated funds.10 It’s equally true that funds are likely to shed dollars when they lose stars.

Despite the GFC’s removal from all typical rating windows, the observed behavior remains entirely relevant. It raises a number of questions, including how to account for market regimes in fund rankings. It is reasonable to assume that some funds shine in different regimes, and it is also reasonable that some funds’ tilts within their broader strategy will perform differently depending on market regime. The effect of a handful of extreme returns, however, can be long lasting, perhaps past their point of relevance.

Given the implications investors assign to fund rankings, the fact that the GFC has faded from history would ideally have had a minimal effect, assuming that nothing material has changed about the majority of funds. For the most part, a “good” fund doesn’t suddenly become a mediocre or poor one over a handful of months. And if it does, we would want to know about it sooner rather than three, five or 10 years after the fact. That underscores the importance of paying close attention to peer groups and outliers by style.

In our next post, we plan to examine some illustrative examples, and delve into what can be done in terms of peer group refinement, scenario analysis and additional metrics that can be included or modified in custom rating systems to help differentiate in the absence of another crisis.

Appendix: Changes in Fund Rankings for all US Equity Categories in 2016-2018. (MPI Research registration required.)

Footnotes

1. DISCLAIMER: MPI conducts performance-based analysis and, beyond any public information, does not claim to know or insinuate actual strategy, positions or holdings of funds, portfolios or organizations discussed herein, nor are we commenting on the quality or merits of the strategies. This analysis is purely returns-based and does not reflect insights into actual holdings. Deviations between our analysis and the actual holdings, performance and/or management decisions made by funds and/or organizations are expected and inherent in any quantitative analysis. MPI makes no warranties or guarantees as to the accuracy of the statistical analysis contained herein. This analysis should not be interpreted as legal, tax or investment advice, nor does MPI take any responsibility for investment decisions made by any parties based on this analysis. The use of this document is entirely at your own risk. Under no circumstances will Markov Processes International, Inc., and/or its affiliates, employees, third-party data providers and agents be liable for any losses, including but not limited to trading and investment losses, direct, indirect, incidental, consequential, special, exemplary, punitive, or any other monetary or other damages, fees, fines, penalties, or liabilities arising out of or relating in any way to this document, and/or content or information provided herein.
2. MRAR is Morningstar’s risk-adjusted return measure. Unlike a ratio such as the Sharpe Ratio, it is stated as a return, and it forms the basis of the oft-cited and influential Morningstar star ratings, the fund data and research provider’s quantitative rating for nearly any fund with at least a 3-year track record. (It is not used in the company’s forward-looking, qualitative Analyst Rating.) MRAR is an estimated return, based on history, with a penalty applied for volatility.
3. Calculated in MPI Stylus as an illustration. Results will not perfectly match those from Morningstar.
4. Calculated in MPI Stylus as an illustration. Results will not perfectly match the stars assigned by Morningstar.
5. All share classes are ranked; only distinct share classes are plotted.
6. Morningstar assigns separate 3-, 5- and 10-year ratings to funds with sufficient history. The overall Morningstar Rating weights these periodic ratings according to available history; the 10-year rating carries a 50% weight in that calculation.
7. The funds tracked are all share classes that have a 10-year Morningstar Rating for all periods and are in the Large Growth, Large Blend, Large Value, Mid-Cap Growth, Mid-Cap Blend, Mid-Cap Value, Small Growth, Small Blend or Small Value Morningstar categories.
8. The maximum minus the minimum 10-year rating over each 12-month period.
9. By no means do we suggest that this is the case for all funds, or that this is the only reason for changes in fund ranks. There are many reasons beyond this for a fund’s change in ranking, including manager changes, changes in investment approach or risk management practices and, of course, changes in the behavior of the other funds in the universe.
10. The Financial Times article (subscription required) references the April 2018 study “Solar-Powered Fund Flows” by Warren Miller (https://www.flowspring.com/research/Solar-Powered-Fund-Flows). That study, along with Morningstar’s own 2018 research (https://www.morningstar.com/blog/2018/05/29/factors-fund-flows.html), supports earlier academic findings that star ratings, along with other factors such as fund age and expenses, influence fund flows.