Eurozone bank stress tests, designed to boost public confidence in banks, showed eight failures in results published Friday.
But many charge regulators with grading on a light curve.
Beyond the grades, the release was rich with useful data, adding transparency.
Governments would do better to skip the grades and just publish the data.
That, in our view, was the goal of recent eurozone bank stress tests, which concluded with a results release on Friday. Covering 90 banks and testing both trading and banking books to determine capital adequacy in the event of a hypothetical downturn, the European Banking Authority (EBA) sought to publish data and grades in an effort to bolster public confidence in banks. When unveiled, the long-awaited and much-delayed results showed eight banks “failed” (though, according to the tests’ organizers, “failed” doesn’t really mean failed).
So there you have it. Eight banks failed—five Spanish cajas (long known to be troubled), two Greek banks (ditto) and one Austrian—with a total capital need of €2.5 billion. Sixteen more came close. Consensus analyst expectations called for 20 failures—so the results were a positive surprise…better than expected…confidence!
But hold the phone. Immediately following the results’ announcement, many began to charge that the EBA’s criteria for “passing” banks were far too lenient—akin to a college professor tweaking a test’s absolute results to reflect the distribution of scores within a class. Grading on a curve—and, skeptics claim, a light curve at that.
The main point of contention appears to be (in a 2010 redux) how the EBA dealt with sovereign debt from the likes of Greece, Portugal and Ireland. And it appears the skeptics have a valid point: The EBA’s test criteria didn’t include the possibility of a peripheral sovereign default (one can quibble over whether that’s right or wrong, depending on your view of eurozone politicians’ efforts to stanch the crisis), and the haircuts applied to Greek debt ranged mostly between 15% and 25%—and applied only to debt held in banks’ trading books. Considering the market currently values some Greek debt at a 50% discount, that seems quite light.
While the grades’ credibility is seriously in doubt, the tests’ real benefit is beneath the surface. Along with the results, the EBA published voluminous data detailing banks’ capital standing and sovereign debt exposure. The fact is, you don’t have to believe the EBA’s grades—they’ve provided sufficient data for investors to keep score using whatever criteria they choose.
Only two business days from the announcement, it’s premature to say whether the release has boosted public confidence in eurozone banks. That questions linger today isn’t a huge surprise. Last year’s eurozone tests met similar criticism and were undercut by subsequent Irish bank capital needs the tests didn’t uncover. Now, the 2009 US stress tests did seemingly boost confidence—despite skepticism—at a time when it was severely lacking. But that was the first foray into stress tests.
The law of diminishing returns seems to apply here: The more times governments go to the stress-test well, the more watered down the confidence boost seems—irrespective of the specifics of the release. There’s just little potential for positive surprise when greater transparency is paired with questionable test criteria permitting most banks to pass (especially when fears aren’t nearly as dire as in 2009). So when that happens, the result is largely a dud—even if the data released (like Friday’s) is tremendously rich.
It seems to us governments even question the value of these stress tests. For example, earlier this year the US conducted tests of systemically important (SIFI), or too-big-to-fail (TBTF), banks to determine whether they had sufficient capital to be permitted to pay dividends. Most passed, yet weeks later, a new global SIFI/TBTF capital buffer was announced—and some at the Fed thought it too low.
All this raises a salient question: Why have the EBA—or any banking authority—announce it as a test? Why not just call it a “banking sector data dump” and publish all the underlying information without a pass/fail grade (or whatever you call it)? Since the pass/fail criteria are mostly what’s questioned—and the data applauded—wouldn’t that quiet most critics?
Bank stress tests are no panacea for warding off threats to the banking sector—or for repeatedly boosting confidence—but the transparency provided is no doubt highly useful for further analysis. What could generate greater investor confidence is to let market participants come to their own conclusions regarding the data. Force-feeding a government report card time and again is unlikely to repeatedly hit the confidence target.