
Analysis of Short-Term Probability of
Precipitation Forecasts
(October 2006 through June 2007)

By ForecastWatch.com, a Service of Intellovations, LLC
July 10, 2007

 

Executive Summary

One-day-out probability of precipitation (POP) forecasts were evaluated for approximately 800 locations in the United States between October 1, 2006 and June 30, 2007. Publicly available POP forecasts were collected from CustomWeather, the National Weather Service, and The Weather Channel, along with a non-public feed from DTN Meteorlogix (collected at the same time as the public forecasts). All three private forecasting companies scored substantially better than the National Weather Service. DTN Meteorlogix had the best scores for the full nine-month period, as well as for the winter months alone.

Importance of POP Forecasts

Many organizations rely on good precipitation forecasts. Concrete pouring and asphalt paving decisions depend on reliable rain forecasts: missing the rain can result in costly re-dos, while forecasting rain when none arrives means lost revenue opportunities. Public works departments and state DOTs rely on accurate snow and ice forecasts to know when to call out crews and pre-treat roads. For them, reliable forecasts are critical for public safety, and for avoiding unnecessary and costly crew callouts.

Accurate precipitation forecasts are similarly important to electrical utilities, airports, golf courses, outdoor sports and recreation, and police/emergency management. Accurate precipitation forecasts add to the bottom line of weather-dependent businesses. And they help cities, counties and other organizations better meet their mission.

How POP Forecasts Are Evaluated

There are two components to measuring the quality of a probability-of-precipitation forecast. The first is accuracy. If, over the forecasts being measured, precipitation occurred the same percentage of the time as forecast, the forecasts are said to be accurate. For example, if it rained 10% of the time the POP forecast called for a 10% chance of rain, the POP forecasts would be accurate. If, on average, there is precipitation three out of every ten days at a given location, a forecaster who predicted a 30% chance of precipitation every day would be accurate. While accurate, that forecast isn't useful.

The second measure of a POP forecast is resolution. A perfectly resolved forecaster would always predict a 0% chance of precipitation for dry days, and a 100% chance for days on which there was rain or snow. The forecaster above who always predicted a 30% chance of precipitation would be said to be completely unresolved. Conversely, a forecaster who predicted a 100% chance of precipitation on dry days and a 0% chance on wet ones would still be perfectly resolved, but completely inaccurate. While resolved, that forecast isn't useful either.

Evaluating a POP forecast fully, therefore, must take both the accuracy and the resolution of the forecast into account. The calculation used here to evaluate POP forecasts, the Brier score, does exactly that. A Brier score ranges from zero to one, with zero being perfectly accurate and perfectly resolved (a 0% POP forecast on every dry day, and a 100% POP forecast on every day with precipitation).
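As a rough illustration (the function and data below are hypothetical, not from the study), the Brier score for a set of POP forecasts is the mean squared difference between each forecast probability and the observed outcome, coded 1 if precipitation occurred and 0 if it did not:

```python
def brier_score(forecast_probs, observed_events):
    """Mean squared error between forecast probabilities (0.0-1.0) and
    observed outcomes (1 if precipitation occurred, 0 if not).
    Lower is better; 0.0 is a perfectly accurate and resolved forecast."""
    if len(forecast_probs) != len(observed_events):
        raise ValueError("each forecast needs a matching observation")
    n = len(forecast_probs)
    return sum((p - o) ** 2 for p, o in zip(forecast_probs, observed_events)) / n

# The always-30% forecaster from the text, at a location where it rains
# three days out of ten: accurate but unresolved.
always_30 = brier_score([0.3] * 10, [1, 1, 1, 0, 0, 0, 0, 0, 0, 0])  # 0.21

# A perfectly accurate and perfectly resolved forecaster scores 0.0.
perfect = brier_score([1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
                      [1, 1, 1, 0, 0, 0, 0, 0, 0, 0])  # 0.0
```

Note that the always-30% forecaster, while accurate, is penalized on every single day, which is how the Brier score captures its lack of resolution.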

Methodology of the Comparison

Brier scores become more meaningful as the number of forecasts and observations used to calculate them grows. This study evaluated POP forecasts for approximately 800 locations within the United States over the period of October 1, 2006 through June 30, 2007. Forecasts were collected at approximately 6pm ET each day, and the next-day, or one-day-out, forecast was recorded at that point. The forecasts were compared against precipitation measured by the National Weather Service. If more than 0.01 inches of precipitation fell, it was considered a precipitation event.

About 200,000 one-day-out POP forecasts were collected over the period for each forecaster. The number actually used is less than the total collected, because occasionally a weather observation station was down for maintenance, or a forecast was invalidated because of errors (for example, on rare occasions the National Weather Service reported precipitation probabilities greater than 100%).
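The pairing and filtering described above could be sketched as follows. This is a minimal, hypothetical illustration: the 0.01-inch threshold and the validity checks come from the text, but the function and data structures are invented here, not taken from the study.

```python
PRECIP_THRESHOLD_INCHES = 0.01  # more than this counts as a precipitation event

def valid_pairs(forecast_pops, observed_inches):
    """Pair each POP forecast with its observation, dropping pairs where
    the station reported no data or the forecast probability is invalid
    (e.g. the occasional greater-than-100% value noted in the text)."""
    pairs = []
    for pop, precip in zip(forecast_pops, observed_inches):
        if precip is None:                        # station down for maintenance
            continue
        if pop is None or not 0.0 <= pop <= 1.0:  # invalid forecast
            continue
        event = 1 if precip > PRECIP_THRESHOLD_INCHES else 0
        pairs.append((pop, event))
    return pairs

# The 110% "probability" and the missing observation are both dropped.
kept = valid_pairs([0.3, 1.10, 0.8, 0.0], [0.00, 0.25, None, 0.02])
# kept == [(0.3, 0), (0.0, 1)]
```

Only the surviving pairs would then feed the Brier score calculation, which is why the "Number of Forecasts" column in the tables below falls short of the possible total.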

Results of Short-Term POP Forecast Comparison

The following tables detail the Brier scores for each weather forecast provider's one-day-out probability of precipitation forecasts. Table 1 shows Brier scores for the nine-month period of October 1, 2006 through June 30, 2007. The second column is the number of forecasts evaluated in calculating the Brier score. The third column is the percentage of possible forecasts that were evaluated, out of all those that could have been had every observation and forecast been collected and considered valid. The fourth column is the calculated Brier score for the period.

 

Provider              Number of Forecasts   Percent of Possible Forecasts   Brier Score
DTN Meteorlogix                   174,743                           78.7%        0.1219
CustomWeather                     195,862                           87.8%        0.1271
The Weather Channel               195,681                           87.9%        0.1382
NWS                               166,737                           77.7%        0.1903

Table 1: Results of nine month short-term POP forecast analysis (lower is better)

 


Many businesses, governments, and individuals are particularly interested in winter forecasts. Preparations for snow, such as changing business processes, salting roads in advance, and keeping employees on standby, carry real costs. Better prediction of winter precipitation means that businesses, governments, and individuals can plan better, save money, and provide better service. Brier scores for the winter months of December 2006 through February 2007 are broken out in Table 2.

 

Provider              Number of Forecasts   Percent of Possible Forecasts   Brier Score
DTN Meteorlogix                    44,092                           62.4%        0.1104
CustomWeather                      64,847                           90.3%        0.1228
The Weather Channel                64,645                           90.2%        0.1351
NWS                                59,561                           85.1%        0.1846

Table 2: Results of short-term POP forecast analysis for winter 2006-2007 (lower is better)

About ForecastWatch.com

ForecastWatch.com has been helping meteorologists improve their weather forecasts for over four years. Created by Intellovations, LLC, a full-service technology consulting company, ForecastWatch.com provides weather forecast accuracy and skill information for over 800 locations within the United States and Canada. ForecastWatch.com has been used by meteorologists and companies whose business depends on the weather to:

  • Evaluate weather forecast providers

  • Improve decision-making where weather forecasts are used as input

  • Improve weather forecasts by providing useful feedback

  • Compare weather forecast providers

  • Educate customers

  • Improve the quality of weather forecast websites

ForecastWatch.com is the only company that provides ongoing weather forecast accuracy and skill information to meteorologists, utilities and energy companies, the agriculture industry, futures traders, and anyone whose reputation or business depends on being right about the weather. In 2003, ForecastWatch.com released the results of the largest public weather forecast accuracy study undertaken to that point. ForecastWatch.com's ongoing weather forecast accuracy information system has been used to perform custom analysis and reporting, and was instrumental in making this comparison possible.