When it comes to making financial decisions, savvy folks try to make the most informed choices possible—they scrutinize all the relevant data available, aiming to base decisions on fact, not whim or gut feeling.
For many people, academic studies seem like a great addition to their information arsenal—data collected, analyzed and interpreted by some of the world’s finest minds. Investors and economic policymakers alike have used studies to underpin many decisions over the years, from Modern Portfolio Theory’s guiding portfolio construction to the Laffer Curve’s impact on fiscal policy.
But what happens if widely cited studies are debunked?
That’s the question facing policymakers globally right now, after some University of Massachusetts Amherst researchers poked a few glaring holes in a popular 2010 study on the impact of high public sector debt, which claimed a debt-to-GDP ratio above 90% causes economic stagnation or decline.
In the initial study, the authors aggregated post-WWII economic and fiscal data from around the world and found that when debt-to-GDP reaches 90%, average growth falls from roughly 3% to -0.1%. Those findings held sway for three years, during which a few finance ministers and other economic officials used them to support their fiscal tightening plans. But then the study’s authors released their dataset, and the U-Mass researchers found a few critical errors—including an Excel coding flub, omitted data and an oddly high weighting of one high-debt/low-growth datapoint (New Zealand in 1951). After accounting for these factors, the U-Mass crew re-ran the analysis and found the average growth rate at a 90% debt ratio rises to 2.2%. And in the days following, one of their colleagues examined the data further and found that the debt ratio correlated with past growth, not future growth—debt is a backward-looking indicator. As a result, many pro-austerity politicians are finding their fiscal policies under a bit more scrutiny.
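The weighting quirk is easy to see with a toy calculation. Here is a minimal sketch (hypothetical numbers of my own invention, not the study's actual dataset) of how averaging per country, rather than per country-year, lets a single bad datapoint swing the headline figure:

```python
# Hypothetical growth rates for years when debt/GDP exceeded 90%.
# Three made-up countries; "Country C" stands in for a lone bad year
# like New Zealand's 1951.
high_debt_growth = {
    "Country A": [2.5, 2.7, 2.6, 2.4],  # four years of modest growth
    "Country B": [2.0, 2.2],            # two years of modest growth
    "Country C": [-7.6],                # one sharply negative year
}

# Per-year average: every country-year counts once.
all_years = [g for years in high_debt_growth.values() for g in years]
per_year_avg = sum(all_years) / len(all_years)

# Per-country average: each country's own mean gets equal weight, so
# Country C's single bad year counts as much as Country A's four years.
country_means = [sum(v) / len(v) for v in high_debt_growth.values()]
per_country_avg = sum(country_means) / len(country_means)

print(f"per-year average:    {per_year_avg:.2f}%")     # prints 0.97%
print(f"per-country average: {per_country_avg:.2f}%")  # prints -0.98%
```

Same data, two defensible-sounding weighting schemes, and the conclusion flips from mildly positive growth to contraction—which is why the weighting choice deserved far more scrutiny than it initially got.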
To me, this saga isn’t about who’s right and who’s wrong—it’s about the fallibility of relying on studies in general. Even studies from the most trusted, vaunted researchers can be faulty. Data can be entered incorrectly. Or it can be massaged and mined to suit a particular bias or support a preferred outcome. Methodologies can be flawed or lack real-world applicability. Conclusions can rest on faulty assumptions or confuse coincidence with causation. Rely on a study’s conclusion at face value, without digging into the data, and you could go astray.
This holds true for another study making headlines in recent weeks: a Cass Business School report claiming randomly generated stock portfolios—“effectively simulating the stock-picking abilities of a monkey”—outperform cap-weighted market indices and the fund managers that follow them. Commentators seized on the study, claiming it showed everything from the weakness of cap-weighted indices to the pointlessness of actively managing a stock portfolio.
In reality, though, it showed neither—because of its wonky methodology, it has zero real-world implications.
Here’s why: The study’s authors didn’t compare randomly generated “monkey” portfolios to a real cap-weighted market index, like the S&P 500 or MSCI World. Instead, they built a hypothetical index composed of the 1,000 largest US stocks with a five-plus year performance history on December 31 of each year from 1968 through 2011, rebalanced annually. So 1968’s returns took the 1,000 biggest stocks on 12/31/1968 and aggregated their returns for the full year, 1969’s returns aggregated the full-year returns of the 1,000 biggest US stocks on 12/31/1969, and so on. The “monkey” portfolios were simply portfolios randomly selected from this same universe of stocks.
This methodology suffers from extreme survivorship bias—because it assumes the 1,000 biggest stocks on the year’s last day were held all year, it automatically excludes every firm that failed during the year. It also likely omits many of a given year’s worst performers—those that fell out of the top 1,000 between January and December—in favor of those that grew into it. Real indices don’t work like this: they include all the failures and laggards. Ditto for real portfolios. No investor can gaze into a crystal ball, see which 1,000 companies will have the largest market caps next December, and buy them today—no one can construct a portfolio in hindsight the way the researchers did. Instead, we humans have a much, much larger global universe of stocks to choose from, including all the firms that could fail or get whacked as well as all those that could shoot skyward. There’s no way to replicate an approach that picks only from 1,000 of the US’s better-performing stocks on a forward-looking basis.
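To see how much hindsight selection flatters returns, consider a minimal simulation sketch. The parameters here are invented for illustration (5,000 hypothetical stocks, a 2% outright-failure rate, normally distributed returns), not the study's actual data; the point is only to compare a top-1,000 universe chosen at the start of the year, as a real investor must, with one chosen at the end, as the researchers did:

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

N_STOCKS, TOP = 5000, 1000

# Hypothetical starting market caps and one-year returns.
caps_start = [random.lognormvariate(0, 1) for _ in range(N_STOCKS)]
returns = []
for _ in range(N_STOCKS):
    if random.random() < 0.02:            # a few firms fail outright
        returns.append(-1.0)
    else:
        returns.append(random.gauss(0.08, 0.30))

# End-of-year market caps reflect each stock's return (failures go to zero).
caps_end = [c * (1 + r) for c, r in zip(caps_start, returns)]

def avg_return_of_top(caps):
    """Mean return of the TOP stocks ranked by the given market caps."""
    ranked = sorted(range(N_STOCKS), key=lambda i: caps[i], reverse=True)
    return sum(returns[i] for i in ranked[:TOP]) / TOP

investable = avg_return_of_top(caps_start)  # chosen with January's information
hindsight = avg_return_of_top(caps_end)     # chosen with December's answer key

print(f"top-{TOP} by start-of-year cap: {investable:+.2%}")
print(f"top-{TOP} by end-of-year cap:   {hindsight:+.2%}")
```

The end-of-year universe mechanically excludes every failure (their caps are zero) and tilts toward the year's winners (gainers climb the cap ranking), so its average return comes out reliably higher—no stock-picking skill, simian or otherwise, required.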
Thus, in the real world, leaving stock selection to chance carries a smaller likelihood of success than the study suggests—and greater potential downside. Investors can improve their chances through sound fundamental analysis, aided by a real cap-weighted index’s useful blueprint for broad portfolio construction (e.g., sector and country allocations)—but you won’t get that conclusion from the study.
I’m not saying all studies should be summarily dismissed—plenty have merit. But in order to divine which ones are legit, you have to look past the editorials, abstracts and conclusions, and dive deep into the data and methodology. Only then can you truly make the most informed decision—whether you’re a policymaker or investor.