When the Model Lies: Observability, Risk & AI Transparency
A Canadian traveller, Jake Moffatt, asked Air Canada's website chatbot whether bereavement fares could be claimed after travel. The bot invented a 90-day refund window; relying on it, Mr Moffatt paid CA$1,600 for a ticket that should have cost CA$760, and the airline later refused to honour the promise. In February 2024, a civil tribunal ruled the answer "misleading" and ordered Air Canada to reimburse the fare difference, interest, and costs: more than CA$812 in damages. One hallucination became a court case, caused reputational damage, and generated an estimated CA$1,000,000 in indirect costs. That story is no longer an outlier. LLM errors are creeping into contracts, trading systems, and operational dashboards. The common thread: a lack of deep observability.
