Opinion: SafeStat Not Ready for Prime Time


By Mike Koppenhofer, Watkins Motor Lines,

and Robert Petrancosta, Con-Way Transportation Services

As chairmen of American Trucking Associations’ Safety Policy and Regulations committees, we are concerned by the Federal Motor Carrier Safety Administration’s recent announcement that it would begin reposting for public access SafeStat data and scores.

Although the agency is aware of how critical SafeStat or any “official” evaluation of a motor carrier’s safety record is to a trucking company’s business interests, it is apparently moving forward with public dissemination of a collection of data that is incomplete, inaccurate, untimely and biased.

Publication of SafeStat crash data, which does not meet minimal data quality guidelines, advances neither public nor safety interests. Unfortunately, all publication does is harm the reputations and businesses of many responsible motor carriers — the type of carriers actively involved in ATA and the safety-related committees we chair.

SafeStat is a safety scoring system developed by the Volpe National Transportation Systems Center in the mid-1990s as a mechanism for the Department of Transportation to more efficiently target at-risk motor carriers for compliance reviews. While marginally useful for its original purpose (it has proven to be more efficient than simple random selection), SafeStat has never been useful for providing reliable comparative information about motor carrier safety performance.

SafeStat suffers from a variety of problems. First, as with any formula-based tool, its conclusions are only as good as the data entered into the formula. Large holes in carrier census information and crash data led FMCSA to discontinue publication of the crash data, crash scores and overall SafeStat scores on its Web site in 2004. At that time, a DOT inspector general’s report concluded that more than 270,000, or 42%, of the motor carriers failed to provide the driver and equipment count records necessary to do a proper safety comparison among motor carriers.

Maybe even more important, the IG report also concluded that about one-third of the truck crashes were unreported by the states — with a few states reporting no crashes in a six-month period, thus introducing a geographic bias into the data. These data failings were reiterated in a later 2005 Government Accountability Office report that concluded “[t]he completeness, timeliness, accuracy, and consistency of CMV crash data is currently not meeting generally accepted data quality standards.”

Second, the IG found that even for the crashes and inspections reported, there were significant inaccuracies in the data. In total, the IG report’s sample showed that 13% of crash-related data had significant errors and 7% of the inspection reports had erroneous information, such as misidentification of the motor carrier. This accuracy problem has been confirmed by almost 11,000 motor carrier requests for correction received by FMCSA since its DataQs system was put in place in 2004 to allow motor carriers and states to seek correction of data used in the SafeStat system.

In addition, there are problems with the “algorithm” or model that the agency uses to generate SafeStat scores and carrier safety rankings. Both the IG and GAO reports recognized that problem and an analysis by Oak Ridge National Laboratory’s Center for Transportation confirmed it. ORNL, in a report titled “Review of the Motor Carrier Safety Status Measurement System (SafeStat),” revealed significant deficiencies in the SafeStat algorithm, finding that about 90% of carriers identified as at-risk did not have a high crash risk. ORNL also found that most carriers identified as at-risk by SafeStat were designated as such because of random variations in the underlying data rather than a significant change in carrier risk.

Further, ORNL found that geographic and year-to-year variations in missing and late data are likely to bias the SafeStat rankings and found that other statistical methods were 30% better at making predictions than the current SafeStat algorithm.

Now, in April 2006, FMCSA’s announcement indicates that the agency believes it has fixed SafeStat’s shortcomings sufficiently to repost the crash data and scores for public access. With all due respect, we and the members of the ATA committees we chair believe FMCSA still has work to do. From our own company experience, our communications with industry peers on the committees, and based on an analysis done by ATA, the SafeStat data is still significantly incomplete, still filled with errors, and the SafeStat model remains less than optimal for successfully targeting at-risk carriers.

With respect to data completeness, the agency’s recent suggestion that it is now receiving from the states 99% of truck-involved crash data is not likely to be correct. FMCSA-sponsored studies by the University of Michigan Transportation Research Institute show that, on average, FMCSA’s crash data file, while improving, is still missing a substantial amount of relevant crash information. UMTRI evaluated the accident files in nine states between August 2003 and July 2005 and found widespread underreporting of fatal and nonfatal crashes, with nonfatal crashes continuing to be the biggest problem. In one state, FMCSA received only 9% of the crash data, while overall — in the nine states checked — there was 50% underreporting, on average. This amount of underreporting is consistent with our own company experiences. Accuracy also remains a significant problem.

All this would be concern enough if all SafeStat did was identify carriers for heightened FMCSA attention and possible compliance reviews. Unfortunately, that is not all it does when the data is posted for public viewing. When that happens, it affects a wide range of motor carrier business interests, including shipper and freight broker perceptions and attendant business opportunities, insurance costs, stock values and other related interests. The effect of SafeStat on these business considerations is not lost on FMCSA. The DOT has labeled SafeStat data as “influential,” that is, data which “will have or does have a clear and substantial impact on important public policies or important private sector decisions.”

Influential data, because of its importance, is required by those same guidelines to be objective, that is: “DOT organizations [are to] ensure disseminated information is accurate, clear, complete and unbiased, both as to substance and presentation and in a proper context.” The IG report similarly recognized the impact SafeStat information can have on motor carriers’ business, saying, “[M]otor carriers may lose business or be placed at a competitive disadvantage by inaccurate SafeStat results.” The IG report also noted that “firms involved with motor carriers, such as shippers, insurers and lessors” use SafeStat “when making business decisions” and that it will have “an economic impact on motor carriers.”

Given the problems in the SafeStat system, and the negative impact on motor carrier economic interests from flawed data, it is difficult to understand why FMCSA wishes to rush the reposting of SafeStat crash data and overall scores. Providing incomplete, inaccurate data does not serve the public interest. Interested third parties are given incomplete and misleading information that does not help their decision-making, and, because of legal considerations, they may feel bound to follow the “official” safety evaluation even when they believe it to be wrong — a safety Catch-22: refusing to do business with a carrier that has a solid safety record while entrusting freight to a motor carrier with safety problems, all because of inaccurate, biased or misleading SafeStat information. To the best of our knowledge, FMCSA has not explained the value of disseminating incomplete and inaccurate information to the public.

We believe that FMCSA should acknowledge SafeStat’s continued shortcomings and work as long as needed to properly address them before making the information public. While the SafeStat system will never be perfect, and we are not suggesting it be so, the data and model it relies upon can be substantially improved before the scores and data are reposted. As part of this improvement process, FMCSA should work with stakeholders to establish and communicate a clear benchmark for data quality and completeness. When that benchmark is met, FMCSA should move forward to repost the data. At this point, though, SafeStat is not ready for prime time.

Mike Koppenhofer is director of safety for Watkins Motor Lines, Lakeland, Fla. Robert Petrancosta is director of safety and environmental compliance with Con-Way Transportation Services, Ann Arbor, Mich.

This opinion piece appears in the April 10 print edition of Transport Topics.