The date was sometime in late 2023. We had a prototype AI system that we believed could call freight brokers and negotiate load rates. We had tested it extensively in simulated conversations. The metrics looked good.
We set up the call. Our AI — at the time we called it "Agent 1" before we named it Aida — dialed a broker we had worked with for years. We sat in the room listening.
What happened in the first 90 seconds
The broker picked up. The AI introduced itself as a dispatcher for our company — which was true, technically. The broker asked about availability for a Chicago to Memphis dry van load. The AI asked the right question first: "What's your rate on that?" Good start.
The broker said $1,850.
The AI responded: "The proposed compensation of $1,850 is below the current market benchmark for this origin-destination pair. Our target rate for Chicago-Memphis dry van is $1,937.42."
Silence on the line for two full seconds.
"...Is that a real number?" the broker asked.
He had a point. No real dispatcher quotes $1,937.42. Real dispatchers say $1,950 or $2,000. Odd numbers signal that a computer is doing math, not a human making a decision.
The seventeen "let me check on that"s
The broker, to his credit, kept the conversation going. He asked about truck dimensions. The AI said "let me check on that." He asked about pickup flexibility. "Let me check on that." He asked if the driver had a hazmat endorsement. "Let me check on that."
We counted. Seventeen times in a four-minute call.
We ended the call early with an excuse about checking availability and called the broker back ourselves, as humans, to salvage the relationship. He thought it was funny. We were less amused.
What we rebuilt
The "let me check on that" problem was easy to fix: we pre-loaded all relevant truck information before every call. The AI now knows truck specs, available dates, driver endorsements, and equipment details before it dials. No more checking.
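The pre-loading idea can be sketched in a few lines. This is not our production code; the field names and the stubbed lookup are hypothetical, and a real system would pull this from a TMS or dispatch database before the dial, not during the conversation:

```python
from dataclasses import dataclass, field

@dataclass
class CallContext:
    """Everything the agent should know before it dials (hypothetical fields)."""
    truck_dims: str
    equipment: str
    available_dates: list = field(default_factory=list)
    endorsements: list = field(default_factory=list)

def build_call_context(truck_id: str) -> CallContext:
    # Stubbed lookup standing in for a TMS/dispatch-database query.
    trucks = {
        "T-101": CallContext(
            truck_dims="53' x 102\"",
            equipment="dry van",
            available_dates=["2023-11-14"],
            endorsements=["hazmat"],
        ),
    }
    return trucks[truck_id]

# The agent assembles this once, before dialing, so "let me check on that"
# never has to happen mid-call.
ctx = build_call_context("T-101")
```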
The rate quoting problem required rethinking our entire negotiation logic. We implemented what we call the round-number rule: the AI rounds all rate quotes to the nearest $50. $1,937 becomes $1,950. $2,312 becomes $2,300. Real dispatchers round. Computers don't round unless you tell them to.
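The round-number rule itself is a one-liner. A minimal sketch (the function name is ours, not from the production system):

```python
def round_rate(rate: float) -> int:
    """Round a dollar rate to the nearest $50 so quotes sound human.

    Note: Python's round() uses banker's rounding, so exact midpoints
    (e.g. $1,925) round to the even multiple -- fine for this purpose.
    """
    return int(round(rate / 50) * 50)

round_rate(1937)  # -> 1950
round_rate(2312)  # -> 2300
```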
The formal language problem took longer. We spent months training on real broker call transcripts, adjusting the AI's tone, vocabulary, and response patterns. "The proposed compensation is below market benchmarks" became "that's pretty light for this lane — what can you do?" It sounds simple. Getting an AI to say it naturally, with the right pacing and without sounding rehearsed, took about 200 iterations.
Where Aida is today
Today Aida — the AI we named after that disastrous first call — books real loads for real carriers on the ESSE platform. She asks for the broker's rate first. She uses round numbers. She says "yeah" and "let me see what I can do" instead of citing market benchmarks. She leaves professional voicemails. She follows up by email without being asked.
She's not perfect. She occasionally gets confused by brokers who speak very quickly or use heavy regional accents. She sometimes accepts loads at the floor rate when a more aggressive negotiator might have gotten $50 to $100 more.
But she works every night, every weekend, every holiday, without overtime pay or bad days. And she has never, not once, quoted $1,937.42 to anyone.