Keep It Simple

Why Emissions Models Should Favour Transparency Over Complexity

Dr. Elliott More

8/3/2025

Artificial intelligence has achieved feats once thought impossible. Deep learning systems, a subset of machine learning that mimics the structure and learning process of the human brain, have vanquished grandmasters in Go, out-bluffed professional poker players, and outpaced radiologists in diagnosing disease from medical scans. With enough labelled images—of sheep and horses, or of tumours and clear lungs—such systems learn to identify what is present with astonishing accuracy.

Yet brilliance can sometimes be brittle. One deep learning algorithm trained to detect pneumonia from X-rays appeared to perform admirably—until its creators examined how it reached its conclusions. Rather than analysing the lungs, the system focused on a marker placed by a particular hospital where pneumonia rates were unusually high. It had not learned to identify pneumonia; it had learned to spot a proxy.

This episode encapsulates the double-edged nature of machine learning. Left to its own devices, an algorithm will exploit whatever patterns help it meet its stated target—even if those patterns are spurious, irrelevant, or unhelpful outside the original context. The model performed well on paper but failed the test of generalisability.

This is not just a philosophical concern. In sustainability reporting and climate modelling—where the stakes are high and the future uncertain—similar temptations to complexity abound. But here, the guiding principle should be an old and simple one: Occam’s razor.

The Razor’s Edge

Occam’s razor is the principle that, when faced with competing explanations or models, the simplest one that still fits the data should be preferred. In practice, simplicity means fewer assumptions, fewer parameters, and clearer logic. This is not a rejection of sophistication, but rather a check on unnecessary complexity.
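The razor can be seen in miniature in a toy model-selection exercise. The sketch below, using entirely made-up numbers, fits the same five data points two ways: with a two-parameter straight line, and with a polynomial flexible enough to pass through every point exactly. On the training data the polynomial looks perfect; on a point it has never seen, the simpler line extrapolates far more sensibly.

```python
# Occam's razor in miniature: a 2-parameter line vs an exact-fit polynomial.
# The data are hypothetical: activity (x) vs emissions (y), roughly y = 2x.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (two parameters)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x

def fit_interpolant(xs, ys):
    """Lagrange polynomial that passes through every training point exactly."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# Five training points: a linear trend plus small, fixed "noise".
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

line = fit_line(xs, ys)
poly = fit_interpolant(xs, ys)

# A held-out point neither model saw: the true process gives ~12 at x = 6.
x_new, y_true = 6, 12.0
print(abs(line(x_new) - y_true))  # small: the line extrapolates sensibly
print(abs(poly(x_new) - y_true))  # large: the exact fit swings away from the trend
```

The polynomial has zero training error and is still the worse model. Preferring the line is not a preference for crudeness; it is a preference for the model whose extra parameters have to earn their keep.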

In emissions modelling, this principle matters more than ever. A growing number of jurisdictions, including Australia, now require companies to disclose their greenhouse gas emissions and scenario analyses under emerging climate-related financial disclosure rules such as AASB S2. These rules demand auditability—that is, companies must be able to show how their emissions estimates were calculated, and how they arrived at their assumptions about future emissions pathways.

This is where AI-powered projections fall short. A black box model, built from reams of data scraped from dozens of sources and filtered through opaque neural networks, may produce plausible estimates. But if no one—including the model’s creators—can explain why it produces those estimates, then it cannot be audited. And if it cannot be audited, it cannot be trusted by regulators, investors or auditors.

Garbage In, Garbage Out

There is a second, more subtle reason to favour simpler models: resistance to bias. Deep learning systems do not just replicate the logic of human cognition; they inherit its flaws too. Just as humans fall prey to availability bias, anchoring bias, and confirmation bias, so too do models trained on biased data. If a human overweights recent or vivid events, so too will an algorithm trained on datasets skewed by media attention or regional anomalies.

Indeed, by taking in more information—market data, policy updates, ESG reports, social media trends—AI systems increase their exposure to noisy, biased, or irrelevant signals. The result is not greater accuracy, but greater vulnerability. As the old programming adage puts it: garbage in, garbage out.

A simpler model, with carefully defined assumptions and transparent logic, does not promise to be “smarter” than its AI counterpart. But it is easier to inspect, test, and improve. Its flaws are knowable and correctable. In emissions forecasting, that is a virtue.

Complexity Is Not Understanding

When projecting trends in emissions factors—which describe the amount of greenhouse gases emitted per unit of activity—the aim is not to dazzle with sophistication but to capture the range of plausible futures in a manner that is both defensible and transparent.

A simple model, based on structured assumptions about grid decarbonisation, vehicle efficiency or industrial electrification, may lack the seductive power of machine learning. But it is more likely to withstand regulatory scrutiny, especially under standards that require organisations to demonstrate the reliability and replicability of their climate disclosures.
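What such a structured model looks like in practice can be sketched in a few lines. The figures below are illustrative assumptions, not published emission factors: a single starting grid factor and one named decline rate per scenario, compounded over the horizon. Every number an auditor might question sits in plain sight.

```python
# A minimal, auditable emissions-factor projection: every assumption is a
# named constant, not a learned weight. All figures are hypothetical.

GRID_EF_2025 = 0.68  # t CO2-e per MWh, illustrative starting grid factor

# Scenario assumptions: annual rate of grid decarbonisation.
SCENARIOS = {
    "slow":    0.02,  # 2% reduction per year
    "central": 0.05,
    "rapid":   0.09,
}

def project_grid_ef(start_ef, annual_decline, years):
    """Compound one documented decline rate over the horizon, year by year."""
    return [round(start_ef * (1 - annual_decline) ** y, 4) for y in range(years + 1)]

for name, rate in SCENARIOS.items():
    path = project_grid_ef(GRID_EF_2025, rate, 10)
    print(f"{name:>8}: 2025 = {path[0]}, 2035 = {path[-1]} t CO2-e/MWh")
```

A reviewer disputing the projection has exactly two things to challenge: the starting factor and the decline rate. Swapping either for a better-evidenced figure takes one edit, and the whole range of plausible futures is just the spread across scenarios.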

Simplicity is not the enemy of rigour. It is often its foundation.

In Defence of the Modest Model

As regulators step up scrutiny and investors demand more credible net-zero plans, the pressure to produce elegant, explainable models will only grow. Firms should resist the lure of inscrutable AI projections in favour of methods that can be interrogated and improved over time.

Deep learning still has a role to play, for instance in cleaning raw records into structured activity data. But when it comes to building the long-term emissions scenarios that underpin public disclosures and capital decisions, Occam’s razor should rule. The simplest model that fits the data is not only good science. It is also good governance.