72nd International Atlantic Economic Conference

October 20 - 23, 2011 | Washington, USA

How to rank forecasters in an unbalanced panel: A new approach

Sunday, 23 October 2011: 9:40 AM
John Silvia, Ph.D., Economics Group, Wells Fargo Securities, LLC, Charlotte, NC
Azhar Iqbal, Economic Forecasting, Economics Group, Wells Fargo Securities, LLC, Charlotte, NC
Many media services rank professional forecasters every year; for example, Bloomberg published top-10 forecaster lists for many major macroeconomic variables in 2010. A ranking from Bloomberg is important for a forecaster as a demonstration of forecasting accuracy, and an accurate ranking is equally important for Bloomberg, because accurate rankings bolster Bloomberg’s credibility as a source of market intelligence.

  The current Bloomberg forecasters’ ranking methodology is based on the average forecast error over two years, and the forecaster with the minimum average forecast error is declared the winner. To qualify for the ranking, a forecaster must have made at least 15 of the 24 monthly forecasts.

The Bloomberg methodology, however, has some serious issues. First, because not all forecasters forecast every month, and some have missing values for many months, the data form an unbalanced panel. Moreover, Bloomberg implicitly treats the forecast error as zero when a forecast is not available for a given month. Yet if a forecaster makes a perfect forecast (the forecast equals the actual release), his forecast error is also zero. A missing forecast is therefore scored the same as a perfect forecast. Consequently, a forecaster benefits from skipping a month or two in which forecast errors are historically large. This may discourage forecasting.
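A small numeric sketch, with purely hypothetical forecast errors, illustrates how counting a missing forecast as a zero error rewards absence under a scheme like the one described above:

```python
# Hypothetical absolute forecast errors for two forecasters over three
# months; None marks a month with no forecast submitted. Numbers are
# illustrative, not real Bloomberg data.
errors_full = [0.2, 0.8, 0.2]   # forecast every month, including the hard one
errors_skip = [0.2, None, 0.2]  # skipped the volatile middle month

def avg_error_missing_as_zero(errors):
    # A missing forecast contributes 0 to the error sum, yet the
    # average is still taken over all months -- the implicit treatment
    # criticized in the text.
    n_months = len(errors)
    return sum(e for e in errors if e is not None) / n_months

avg_error_missing_as_zero(errors_full)  # 0.4
avg_error_missing_as_zero(errors_skip)  # about 0.133 -- skipping wins
```

The forecaster who skipped the volatile month ends up with the lower average error, even though both were equally accurate in the months they both forecast.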

The second problem with the current methodology is that a forecaster competes against those who did not submit a forecast. It would be fairer to compete only against those who did submit one.

This paper proposes a new approach that provides a fairer and more accurate forecasters’ ranking. Our methodology is based on ranks, where a lower rank is associated with a lower forecast error. In the first step, we rank only those forecasters who submitted a forecast in a given month; this limits the competition to the forecasters who actually submitted forecasts to Bloomberg for that month. In the second step, for the months in which a forecaster submitted no forecast, we assign that forecaster’s average rank from the months in which he did submit. In this way nobody is rewarded for being absent or penalized for submitting forecasts in times of greater data volatility. Our methodology is more accurate and can be used to rank forecasters in any unbalanced panel.
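The two steps above can be sketched in a few lines of code. This is a minimal illustration under the assumption that each month's data is a mapping from forecaster to absolute forecast error, with absent forecasters simply missing from that month; the forecaster names and error values are invented for the example:

```python
def rank_forecasters(errors_by_month):
    """Rank forecasters month by month, scoring each only on the
    months in which he actually submitted a forecast.

    errors_by_month: list of dicts {forecaster: absolute forecast error};
    a forecaster absent from a month's dict made no forecast that month.
    Returns {forecaster: average rank}, lower is better.
    """
    ranks = {}  # forecaster -> list of monthly ranks from submitted months
    for month in errors_by_month:
        # Step 1: rank only this month's submitters (1 = lowest error).
        ordered = sorted(month, key=month.get)
        for r, name in enumerate(ordered, start=1):
            ranks.setdefault(name, []).append(r)
    # Step 2: the final score is the average rank over submitted months,
    # so a missing month implicitly counts at the forecaster's own
    # average -- absence neither rewards nor penalizes.
    return {name: sum(rs) / len(rs) for name, rs in ranks.items()}

months = [
    {"A": 0.1, "B": 0.3, "C": 0.2},  # A best, C second, B third
    {"A": 0.4, "B": 0.1},            # C submitted no forecast
    {"A": 0.2, "B": 0.5, "C": 0.3},
]
scores = rank_forecasters(months)
# A: ranks [1, 2, 1]; B: ranks [3, 1, 3]; C: ranks [2, 2]
```

Here C is scored only against forecasters who competed in the same months, so skipping the second month leaves C's average rank of 2.0, between A's 1.33 and B's 2.33.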

Key Words: Forecasters’ Ranking; Unbalanced Panel; Bloomberg; Forecast Error.

JEL Classification: C5; C53; E27.