There’s often a gap between what people claim to “trust” and how they actually rely on systems. Organisations, too, rarely design mechanisms to recalibrate that reliance when circumstances shift. This creates a dangerous dynamic: the model’s perspective becomes the organisation’s default worldview.
In today’s era of constant disruption, tools risk offering “optimal” solutions for a reality that no longer exists. Discussions about AI in shipping still revolve around familiar questions: Are the forecasts accurate? Are the optimisers reliable? While these are important, a new and growing risk demands attention: what happens to human judgment when a system is treated as the operational brain?
Modern planning engines integrate demand forecasts, production schedules, and sailing plans into precise outputs. Kim Sørensen of StormGeo predicts that AI will soon become a standard part of daily voyage planning and optimisation. Risk dashboards now provide real-time insights into war-risk zones, canal draft limits, and congestion. Given the complexity of today’s shipping networks – with their intricate alliances, rotations, feeder links, and hinterland bottlenecks – it’s logical to lean on smarter tools.
However, as systems take on more of the workload, the critical skills needed in crises – scenario-building, challenging assumptions, and improvising under uncertainty – begin to atrophy. A review of 74 studies on automation bias reveals a troubling pattern: when automated systems perform well most of the time, users tend to accept their outputs without question, search less for alternative information, and overlook rare but critical errors, especially under pressure.
Consider a shipper using an AI engine designed to minimise costs and lead times on Asia-Europe routes. Over time, the system identifies the cheapest strategy: concentrating volumes through a few hyper-efficient corridors and a small group of low-cost, high-frequency carriers. This works – until a chokepoint is disrupted. The AI keeps recommending routes that are no longer viable, safe, or insurable. Planners, conditioned to treat every override as something they must defend, hesitate. By the time human judgment intervenes, ships are stuck in the wrong queues, and the scramble for capacity has already begun.
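One way to keep this failure mode visible is to put a live feasibility gate between the optimiser and execution. The sketch below is a minimal, hypothetical illustration of that pattern – the class names, fields, and risk feed are assumptions for this example, not any vendor's actual API: if the recommended corridor's risk status has changed, the plan is escalated to a human planner instead of being executed.

```python
# Hypothetical sketch: gate an optimiser's route recommendation behind a live
# feasibility check, so a stale "cheapest" corridor is escalated to a human
# planner instead of being executed automatically. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Route:
    corridor: str          # e.g. "Suez" or "Cape of Good Hope"
    cost_usd: float
    lead_time_days: float

@dataclass
class RiskStatus:
    insurable: bool
    transitable: bool
    note: str = ""

def recommend_or_escalate(optimised: Route, risk_feed: dict[str, RiskStatus]) -> dict:
    """Return the optimiser's route only if it is still viable; otherwise
    hand the decision back to a human with the reason attached."""
    status = risk_feed.get(optimised.corridor)
    if status is None or not (status.insurable and status.transitable):
        reason = status.note if status else "no current risk data for corridor"
        return {"action": "escalate_to_planner", "proposed": optimised, "reason": reason}
    return {"action": "execute", "proposed": optimised, "reason": "viable"}

# Example: the model still prefers the cheapest corridor after a disruption.
risk_feed = {"Suez": RiskStatus(insurable=False, transitable=True,
                                note="war-risk cover withdrawn by insurers")}
plan = Route(corridor="Suez", cost_usd=1_450_000, lead_time_days=31)
print(recommend_or_escalate(plan, risk_feed))  # escalates rather than executes
```

The point of the gate is not technical sophistication but placement: the check sits outside the optimiser, so a human sees the disagreement before the network commits to it.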
The issue isn’t malicious code – it’s an organisational vulnerability. AI shifts from being a tool to becoming an oracle. This creates a self-inflicted single point of failure within the digital layer of supply chain networks. As platforms increasingly integrate planning, performance, and risk into unified interfaces, the risk of AI dominating decision-making grows. Boards must take deliberate steps to ensure humans retain the ability to interrupt, override, or disable these systems.
Shipping has long understood physical chokepoints – Suez, Panama, Hormuz, Malacca. Boards are well-versed in identifying which canals or terminals could cripple operations if disrupted. But the next generation of chokepoints won’t be geographic; they’ll be digital, embedded in code.
This demands a new approach to AI. The objective shouldn’t be to automate everything but to create systems that are powerful yet resistible – tools that enhance, rather than dull, human judgment. As Kris Vedat of SmartSea aptly notes, “Integration matters more than feature lists, and execution matters more than vision statements.” This mindset is crucial for boards deciding where and how AI fits into the decision-making chain.
Three key questions must guide this agenda (a brief sketch of what answering them could look like in practice follows the list):
- Does the system highlight what has changed and present multiple options, or does it offer only a single “recommendation”?
- Are overrides treated as opportunities for learning, or are they dismissed?
- When the next disruption occurs, will dissent still have a secure place in decision-making?
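The sketch below illustrates, under assumed names and fields, the interaction pattern these questions point to: the system surfaces what has changed, offers ranked alternatives rather than a single answer, and records overrides with reasons so they can be reviewed and learned from rather than discouraged. It is a thought experiment in code, not a description of any existing product.

```python
# Hypothetical sketch of a "resistible" decision interface:
# - show what changed since the last run
# - present ranked options, never a single answer
# - log overrides with reasons as learning signal, not friction
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Option:
    label: str
    cost_usd: float
    rationale: str

@dataclass
class DecisionPacket:
    changed_since_last_run: list[str]      # e.g. ["Panama draft limit reduced"]
    options: list[Option]                  # ranked alternatives for the planner
    overrides: list[dict] = field(default_factory=list)

    def record_override(self, chosen: str, planner: str, reason: str) -> None:
        """Store the override and its rationale so it can be reviewed later
        and fed back into the model, rather than treated as an exception."""
        self.overrides.append({
            "chosen": chosen,
            "planner": planner,
            "reason": reason,
            "at": datetime.now(timezone.utc).isoformat(),
        })

packet = DecisionPacket(
    changed_since_last_run=["Red Sea war-risk zone extended"],
    options=[Option("Suez", 1_450_000, "lowest cost, elevated risk"),
             Option("Cape of Good Hope", 1_720_000, "longer but insurable")],
)
packet.record_override("Cape of Good Hope", "duty planner",
                       "insurer guidance changed after the model run")
```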
The future of shipping depends on building AI systems that empower human judgment, not replace it.

