Here’s my beef! If you use AI naively, it makes things worse

An F&L (The European Freight and Logistics Leaders' Forum) series of short personal comments on the global freight logistics market


On 15 April at F&L, I’ll be speaking about a problem most of the industry just lives with: the chaos of rate data.

Thousands of rate sheets. No standards. Excel, PDFs, scans, screenshots embedded inside other files. Multiple languages. Free-text surcharges buried in paragraphs. Location names that don’t match anything in your system, including custom or modified UN/LOCODEs defined by vendors and shippers for their own internal use.

For large players, this is painful. For small and medium businesses, it is often unmanageable.

AI looks like the answer. In practice, if you use it naively, it makes things worse. It does not just fail; it hallucinates. It generates clean, structured, completely wrong data with high confidence. In freight, that is a real risk.

So the question is not whether AI can read rate sheets. The question is how to design AI workflows that do not lie, and how to do that at a cost that still makes business sense.

In this session, I’ll share a practical approach built for non-technical teams:

  • Breaking the problem into steps instead of one-shot extraction: first classify the file, then interpret it (the staged flow is sketched just after this list)
  • Combining AI with deterministic code, such as Python scripts, for the actual extraction, calculations, and normalization
  • Using AI as a second layer of validation, not just generation, especially for tricky fields like surcharges, routing rules, and location mapping
  • Adding confidence scoring so uncertainty is visible instead of hidden
  • Keeping humans in the loop only for low-confidence cases, so cost and SLA stay under control
  • Using fingerprinting to recognize known file formats even when they vary slightly, for example with different headers or minor layout changes. Once a format has been identified and validated manually, there is often no need to call AI again. That is a major lever for reducing cost per file (a rough sketch follows the constraints below)
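
To make the shape of this concrete, here is a minimal Python sketch of the staged flow. It is illustrative only: classify_file, extract_rows, normalize, and score_row are hypothetical stand-ins for the real AI calls and parsers, and the field names and the 0.9 review threshold are assumptions for the example, not production values.

    from dataclasses import dataclass

    @dataclass
    class RateRow:
        origin: str        # normalized UN/LOCODE, e.g. "BEANR"
        destination: str
        rate_usd: float
        confidence: float  # 0.0 to 1.0, set by the validation pass

    def classify_file(path: str) -> str:
        # Step 1 (AI): answer only "what kind of file is this?"
        # (carrier, layout family), never "what are the rates?".
        return "carrier_x_excel_v2"  # stub for the example

    def extract_rows(path: str, file_kind: str) -> list[dict]:
        # Step 2 (deterministic): a parser chosen for this file kind pulls
        # the raw cells. The model never types numbers into the output.
        return [{"from": "Antwerpen", "to": "Rotterdam", "rate": "1.250,00 EUR"}]  # stub

    def normalize(raw: dict) -> RateRow:
        # Step 3 (deterministic): currency conversion, decimal separators,
        # UN/LOCODE lookup, unit normalization.
        return RateRow("BEANR", "NLRTM", 1362.50, 0.0)  # stub

    def score_row(row: RateRow, raw: dict) -> float:
        # Step 4 (AI as validator): does the structured row match the source
        # text? Returns a confidence score instead of generating new data.
        return 0.97  # stub

    def process(path: str, review_threshold: float = 0.9):
        kind = classify_file(path)
        rows = []
        for raw in extract_rows(path, kind):
            row = normalize(raw)
            row.confidence = score_row(row, raw)
            rows.append(row)
        # Humans only ever see the low-confidence rows, which is what
        # keeps cost and SLA under control.
        auto = [r for r in rows if r.confidence >= review_threshold]
        review = [r for r in rows if r.confidence < review_threshold]
        return auto, review

    if __name__ == "__main__":
        accepted, for_review = process("rates_carrier_x.xlsx")
        print(len(accepted), "rows accepted,", len(for_review), "sent to review")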
The constraints are real. Thousands of messy files. A 4 to 12 hour SLA. Cost has to come in well below manual processing: not 10 to 20 dollars per file, but closer to 4, without sacrificing accuracy.
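
Fingerprinting is the big cost lever, so here is an equally rough sketch of one way it could work, assuming Excel-like input: hash the structural skeleton of a file (normalized headers and sheet names) while ignoring the rate values, so the same vendor template maps to the same fingerprint week after week. The KNOWN_FORMATS cache and the hashing scheme are illustrative assumptions, not the production design, and this naive version only absorbs casing and whitespace differences; tolerating genuinely different headers would need fuzzier matching.

    import hashlib
    import json

    # Fingerprints a human has already validated, mapped to the
    # deterministic parser that handles them. Illustrative only.
    KNOWN_FORMATS: dict[str, str] = {}

    def fingerprint(headers: list[str], sheet_names: list[str]) -> str:
        # Hash the structure, not the content: normalized column headers
        # and sheet names survive weekly rate updates.
        skeleton = {
            "headers": [h.strip().lower() for h in headers],
            "sheets": [s.strip().lower() for s in sheet_names],
        }
        return hashlib.sha256(
            json.dumps(skeleton, sort_keys=True).encode("utf-8")
        ).hexdigest()

    def route(headers: list[str], sheet_names: list[str]) -> str:
        fp = fingerprint(headers, sheet_names)
        if fp in KNOWN_FORMATS:
            # Known, human-validated format: no AI call needed at all.
            return "parser:" + KNOWN_FORMATS[fp]
        # Unknown format: run the full AI pipeline once, have a human
        # validate the result, then register the fingerprint.
        return "ai_pipeline"

The point of hashing structure rather than content is that a vendor's weekly update changes the numbers but not the skeleton, so the second and every later file in that format costs close to nothing.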

The goal is simple. Take unstructured, inconsistent rate data and turn it into something reliable, fast, and scalable.

Because if we do not change how we use AI here, we are not solving the problem.

We are just automating bad data. 

F&L online, 15 April, 14:00–14:30 CET. Ask for a join link.

Anton Barr
April 2026