Opinion: Enforcement Disparity = Flawed CSA Scores

By Brett Sant

Vice President, Safety and Risk Management

Knight Transportation

This Opinion piece appears in the July 29 print edition of Transport Topics.

Independent studies have confirmed that anomalies and flaws exist in the relationship between crash likelihood and a trucking company’s BASIC scores as calculated under the Federal Motor Carrier Safety Administration’s Compliance, Safety, Accountability program.

Despite this generally accepted fact, FMCSA insists on making the flawed scores available to the public. But why?

The stated reason for peer-based scores is “prioritization,” but prioritization does not explain making the scores visible to the public. So, what does? Is it the creation of an environment leveraging scorn, fear and competitive pressure to compel motor carriers to improve compliance with regulations?

To its credit, FMCSA knows that the issues with the methodology and data must be addressed and corrected before CSA’s final piece — the Safety Fitness Determinations — can be implemented. FMCSA continues to work with stakeholders to refine the program’s Safety Measurement System. We applaud those efforts.

However, the underlying data and their source — roadside inspections and crash reports — remain problematic, and SMS peer comparisons contain a crucial flaw that renders the results questionable at best.

The most basic assumption of a peer comparison is that each subject is evaluated using the same criteria applied in the same way. Regrettably, in the case of the SMS — which replaced the old SafeStat system — regulations applying to all carriers are not enforced uniformly and evenly. Unless variables causing subjects in the same cohort to experience different conditions are accounted for, a peer comparison is statistically meaningless.

FMCSA’s method of normalizing data in most CSA BASICs — the program’s Behavior Analysis and Safety Improvement Categories — attempts only to account for the disparity in inspection frequency, the denominator being the number of relevant inspections. However, the methodology doesn’t account for widely divergent violation-to-inspection ratios between jurisdictions.

Suppose you have a large fleet with terminals in different parts of the country. Your hiring standards, training, equipment, communication, controls and operational practices are the same or very similar, regardless of terminal location or where your trucks operate. How, then, do you explain gross differences in violation-to-inspection ratios between the portions of that fleet operating in, say, Mississippi and Wisconsin?

If your internal practices are consistent, why aren’t your BASIC Measures — the basis for your percentile ranking in any BASIC — consistent when evaluated by location? Minor differences may be explained by terminals’ geographical location, but when BASIC Measures are broken down by jurisdiction, the disparity is often enormous.

Why do widely divergent violation-to-inspection ratios among jurisdictions render peer comparison scoring invalid?

Your Measure in most categories is determined by a simple formula: the sum of time- and severity-weighted violations (points) divided by the number of relevant inspections.

Even assuming all violations are weighted equally for time and severity, the formula can produce a Measure for the same fleet that is much better in one state than in another, solely because of the two states' disparity in violation-to-inspection ratios.
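A rough sketch makes the point; the counts below are hypothetical and are not drawn from actual SMS data. The same fleet, with identical practices, receives more cited violations per inspection in a stricter jurisdiction, and its Measure there is worse purely as a result.

```python
# Hypothetical sketch: one fleet, identical internal practices, two states.
# All violations carry the same time and severity weight, so the Measure
# reduces to cited violations per relevant inspection.

def basic_measure(violation_points: float, relevant_inspections: int) -> float:
    """SMS-style Measure: weighted violation points per relevant inspection."""
    return violation_points / relevant_inspections

# State with aggressive enforcement: 80 violations cited over 100 inspections.
measure_strict_state = basic_measure(80, 100)   # 0.80

# State with lighter enforcement: 30 violations cited over 100 inspections.
measure_lenient_state = basic_measure(30, 100)  # 0.30

# Same carrier, same trucks, same drivers -- yet the Measure in the strict
# state is more than twice as bad, driven only by enforcement intensity.
```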

Factor in severity weighting and the potential disparity grows even larger, further compounding the problem of enforcement disparity.
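To see how severity weighting can widen the gap, extend the same hypothetical: suppose the stricter state also tends to cite higher-severity violations. Again, the weights here are illustrative only, not FMCSA's actual severity tables.

```python
# Hypothetical extension: severity weighting applied to the same inspections.
# Assume the stricter state's citations average a severity weight of 5,
# while the lighter-enforcement state's citations average only 3.

def weighted_measure(violations: int, avg_severity: float, inspections: int) -> float:
    """Measure with an average severity weight applied to cited violations."""
    return (violations * avg_severity) / inspections

measure_strict_state = weighted_measure(80, 5, 100)   # 4.0
measure_lenient_state = weighted_measure(30, 3, 100)  # 0.9

# With severity factored in, the same fleet now looks more than four times
# "worse" in the strict state, even though nothing about its equipment,
# drivers or controls differs.
```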

If you operate heavily in a state with “strong” enforcement activity, you will rate poorly compared with a peer operating largely in a state with weaker enforcement efforts.

Given the apparent impact of enforcement disparity, where and how you operate may matter greatly to your BASIC Measure. The very fact that normalization doesn’t account for the differences among jurisdictions in violation-to-inspection ratios makes current “peer comparisons” invalid, and as such they should not be displayed publicly.

FMCSA acknowledges the reality of inconsistent enforcement, and stakeholders are helping to ensure consistent and effective enforcement. Knight Transportation supports CSA’s stated objectives and is committed to helping FMCSA fulfill its mission of eliminating serious on-road crashes.

Meanwhile, our industry and its concerned stakeholders should:

• Ask FMCSA to share state-by-state data so that violation-to-inspection ratios and average violation weights can be compared, enforcement disparity can be objectively quantified, and an effective method of normalization can be discussed, ensuring all peers are subjected to truly similar conditions.

• Ask that until enforcement disparity is accounted for in the SMS, peer-based scores be removed from public view — or stipulate that, in some cases, they may not be accurate. Make sure the public, including customers, can't use BASIC rankings to assess carrier fitness or safety.

• Continue working with FMCSA and stakeholders to make the SMS more valuable for identifying unsafe carriers — including the hundreds of thousands with no data available.

• Ask FMCSA to publicly display only a carrier's Measure in the various BASICs while continuing to permit carriers to privately see their peer-based rankings, preserving the competitive benefit FMCSA seeks without making flawed comparisons public.

• Allow FMCSA to use flawed peer comparisons privately for enforcement prioritization, but limit the agency's use of peer rankings to that purpose until data and methodology issues are settled. Safety fitness determinations can't be made until enforcement disparity is accounted for and removed from the determination.

• Ask that FMCSA not issue a rulemaking connecting safety fitness determinations to SMS data without according due process to individual carriers, ensuring that any proposed rating based on SMS data is not biased by flawed data or methodology and accurately reflects the carrier's standards, controls, operational methods and actual level of performance.

If safety fitness determinations are based on nonrelative performance — as they should be — and can’t be tied directly to peer-based rankings because of data reliability and methodology flaws, how can FMCSA justify using those scores for any other purpose?

Displaying flawed peer comparisons in public is CSA’s most critical problem and must be corrected.

Knight Transportation, Phoenix, ranks No. 31 on the Transport Topics Top 100 list of the largest for-hire carriers in the United States and Canada.