Sustainability and AI: what boards must decide now
Noela Nakos
Chapter Zero member and former Google Director
Patricia Rodrigues Jenner
Chapter Zero Fellow, Global Non-executive Board Director and co-Chair of InfraNEDs

Boards must see AI through the lens of sustainability and long-term strategic opportunity, and they must understand its economics, governance implications and energy footprint as part of responsible oversight.
Boards should also recognise that AI business models are still opaque and volatile, requiring caution and experimentation.
Andy Wu, Associate Professor at Harvard Business School, points out that most generative AI companies are a long way from profiting from their foundation models – models that have required massive investment. For example, OpenAI says its business will burn a staggering $115bn through 2029.
Today it is hard to articulate the economics of AI. A handful of companies are in an investment arms race driven by fear of underinvestment, being left behind or not being at the geopolitical table. Even Sundar Pichai, Google CEO, says no company is immune if the 'AI bubble' bursts.
What does this mean for boards? They should request scenario analyses rather than linear adoption plans, as AI economics will not evolve predictably.
Many organisations also underestimate how expensive the entire AI ecosystem, including foundation models, may become to use. Wu argues that sustainable economics, rather than scale for its own sake, will determine which AI players endure. Variable pricing models for AI services are already emerging, shifting from traditional fixed subscriptions to more flexible, data-driven approaches propelled by fluctuating computational costs. Boards should prepare for this transition to energy-linked pricing and develop scenarios that accommodate these charging methodologies.
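The shift from fixed to energy-linked charging can be made concrete with a simple scenario sketch. The pricing models and every figure below (seat fees, per-query prices, energy per query, electricity price, provider margin) are invented for illustration, not vendor prices:

```python
# Illustrative scenario: comparing three hypothetical AI pricing models.
# All figures are assumptions for the sketch, not real vendor prices.

def annual_cost_fixed(seats: int, fee_per_seat: float = 360.0) -> float:
    """Traditional fixed subscription: cost is flat regardless of usage."""
    return seats * fee_per_seat

def annual_cost_usage(queries: int, price_per_query: float = 0.002) -> float:
    """Usage-based pricing: cost scales directly with query volume."""
    return queries * price_per_query

def annual_cost_energy_linked(queries: int,
                              wh_per_query: float = 3.0,
                              price_per_kwh: float = 0.30,
                              margin: float = 4.0) -> float:
    """Energy-linked pricing: the provider passes through compute energy
    at a multiple (margin) of the underlying electricity price."""
    kwh = queries * wh_per_query / 1000.0
    return kwh * price_per_kwh * margin

if __name__ == "__main__":
    seats, queries = 100, 5_000_000
    print(f"fixed subscription: ${annual_cost_fixed(seats):,.0f}/year")
    print(f"usage-based:        ${annual_cost_usage(queries):,.0f}/year")
    print(f"energy-linked:      ${annual_cost_energy_linked(queries):,.0f}/year")
```

Under the energy-linked model, a doubling of electricity prices or of per-query energy flows straight through to the bill, which is exactly the exposure a board scenario analysis should stress-test.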
An energy-consuming beast
AI’s rapid growth has been mirrored by a sharp rise in energy use.
As described in the UN Environment Programme report, a typical ChatGPT query needs about 10 times as much electricity to support the required compute power as a conventional Google search. In reality, the type and size of the model, the type of output generated, and countless variables beyond a user’s or system’s control – such as which energy grid the receiving data centre is connected to and what time of day the request is processed – can make one query thousands of times more energy-intensive and emissions-producing than another.
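A rough back-of-envelope calculation shows what a 10x per-query ratio means when annualised. The absolute figures (0.3 Wh for a conventional search, and ten times that for a generative AI query) are illustrative assumptions, not measurements:

```python
# Back-of-envelope annualisation of per-query energy, using the roughly
# 10x ratio cited above. Absolute figures are illustrative assumptions.

SEARCH_WH = 0.3            # assumed energy per conventional search (Wh)
GENAI_WH = SEARCH_WH * 10  # ~10x for a typical generative AI query

def annual_kwh(queries_per_day: int, wh_per_query: float) -> float:
    """Energy of a daily query habit over a year, in kWh."""
    return queries_per_day * wh_per_query * 365 / 1000.0

if __name__ == "__main__":
    daily = 50  # e.g. one employee's assumed daily queries
    print(f"conventional search: {annual_kwh(daily, SEARCH_WH):.1f} kWh/yr")
    print(f"generative AI:       {annual_kwh(daily, GENAI_WH):.1f} kWh/yr")
```

Per employee the numbers look small; multiplied across a large workforce and by the thousands-fold variation between queries described above, they become a material line in an energy budget.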
And that is just text; we are yet to understand the scale of energy demand that is forthcoming with more complex generative AI uses. As pointed out by James O'Donnell and Casey Crownhart, in the future, we won’t simply ping AI models with a question or two throughout the day, or have them generate a photo. Instead, leading labs are racing us toward a world where AI ‘agents’ perform tasks for us without us supervising their every move.
We will speak to models in voice mode, chat with companions for two hours a day and point our phone cameras at our surroundings in video mode. We will give complex tasks to so-called ‘reasoning models’, which work through tasks logically but have been found to require 43 times more energy for simple problems, or ‘deep research’ models that spend hours creating reports for us.
This need for additional compute power has led to an explosion in data centres. For reference, computing power (or ‘compute’, as it is known in Silicon Valley) consists of (1) the compute used to train the model and (2) the compute used to apply that trained model to provide customers and systems with ‘answers’ (known as ‘inferences’). Training is largely a fixed cost, while inference represents a variable cost that scales with usage.
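The fixed/variable split above can be sketched numerically: the one-off training spend is amortised across every query served, while inference adds a cost to each query. All figures below are hypothetical, chosen purely to show how the economics shift with volume:

```python
# Sketch of the fixed/variable compute cost split: training is a one-off
# fixed cost, inference a per-query variable cost. Numbers are hypothetical.

def cost_per_query(training_cost: float,
                   inference_cost_per_query: float,
                   total_queries: int) -> float:
    """Average all-in cost per query: amortised training plus inference."""
    return training_cost / total_queries + inference_cost_per_query

if __name__ == "__main__":
    TRAINING = 100_000_000.0   # assumed one-off training spend ($)
    INFERENCE = 0.001          # assumed variable cost per query ($)
    for volume in (1_000_000, 100_000_000, 10_000_000_000):
        avg = cost_per_query(TRAINING, INFERENCE, volume)
        print(f"{volume:>14,} queries -> ${avg:.4f} per query")
```

At low volumes the fixed training cost dominates; at very high volumes the variable inference cost sets the floor, which is why inference efficiency, not training scale alone, drives long-run AI economics.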
AI is becoming a grid-level issue, not just a digital one. Ben Evans notes that AI alone could add an extra 1% to US power demand growth, on top of the existing ~2% growth. In addition to electricity grid access, power consumption is a key constraint on data centre construction. And Casey Crownhart notes that data centres present a unique challenge because they tend to cluster together, concentrating demand on specific communities and grids.
Beyond the feasibility of supporting this power demand, the key factor that’s going to determine what all this means for climate change is what’s supplying the electricity we’re using. The world’s grids still primarily run on fossil fuels, so every bit of electricity growth comes with planet-warming greenhouse-gas emissions attached.
The need for renewables is clear, as is the need for delivering efficiency in AI systems. Nations that have invested in diversified, resilient energy systems will benefit most. The energy transition needs to move faster: “It’s been almost impossible to build capacity fast enough since ChatGPT launched,” says Microsoft’s CTO, Kevin Scott.
Hydrogen is not the ‘Swiss Army knife’ of the energy transition. The idea of connecting grids to nuclear plants or small modular reactors (SMRs) is brimming with complexity. Neither hydrogen nor nuclear is a silver bullet for this challenge. This is why companies like Google are exploring alternatives, such as moonshot projects like Suncatcher.
Understanding compute demand is key to successful AI adoption
Boards should be asking: what is our AI energy strategy and how exposed are we to future compute constraints, regulation or carbon pricing? Current reporting frameworks do not yet capture AI energy use. Expect regulatory expansion into ‘compute-attributed emissions’.
A core barrier to evaluating data centre sustainability is the lack of public disclosure of actual consumption data. Recent EU energy efficiency legislation now mandates the monitoring and reporting of data centre energy performance. And by 2027, all new data centres in Germany must be powered by renewable energy.
But disclosure alone will not solve the problem. Boards should actively demand transparency from AI providers and data centre operators, and require them to demonstrate plans to reduce energy intensity.
Generative AI is not the Swiss army knife for all problems
As Rama Ramakrishnan points out, generative AI does not suit every problem. Other classic AI approaches, such as predictive AI, may still fulfil many of your business and strategic needs, but determining which technology best suits those needs demands careful analysis and oversight of your business, data and AI strategy.
This brings us to our key point: it is in companies’ best interests to know when to use generative or agentic AI so that it becomes a sustainable business enabler. Agentic AI and AI foundation models are not the answer to every challenge. Board members have a fiduciary duty to understand when generative or agentic AI is truly needed. And when generative AI is the answer, AI maturity requires model-rightsizing: choosing the smallest effective model, not the most fashionable.
This is not about spreading fear; rather, it is about ensuring the ethical, energy-aware use of these game-changing models with myriad applications.
Consider whether you need a trillion-parameter model to achieve your business aim. Large language models can be inefficient for specific business tasks. Small, specialised models often perform better at a fraction of the cost and energy. Boards should require ‘model selection guidelines’ that justify the use of energy-intensive models only where necessary.
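What ‘model selection guidelines’ might look like can be sketched as a simple routing rule: send each task to the smallest model tier that meets the need, escalating only when necessary. The tiers, names, energy ratios and task taxonomy below are all hypothetical, for illustration only:

```python
# A minimal sketch of model-rightsizing guidelines: route each task to
# the smallest effective model tier. Tiers, energy ratios and the task
# taxonomy are hypothetical illustrations, not real benchmarks.

from dataclasses import dataclass

@dataclass
class ModelTier:
    name: str
    relative_energy: float  # energy per task, relative to the small model

TIERS = {
    "small_specialised": ModelTier("small specialised model", 1.0),
    "mid_general": ModelTier("mid-size general model", 20.0),
    "frontier": ModelTier("frontier foundation model", 400.0),
}

def select_tier(task: str) -> ModelTier:
    """Pick the smallest effective tier; escalate only when necessary."""
    narrow_tasks = {"classification", "extraction", "forecasting"}
    broad_tasks = {"summarisation", "drafting"}
    if task in narrow_tasks:
        return TIERS["small_specialised"]
    if task in broad_tasks:
        return TIERS["mid_general"]
    return TIERS["frontier"]  # reserved for open-ended reasoning

if __name__ == "__main__":
    for task in ("classification", "drafting", "open_research"):
        tier = select_tier(task)
        print(f"{task}: {tier.name} ({tier.relative_energy}x energy)")
```

The point of codifying the guideline is auditability: every use of an energy-intensive model leaves a recorded justification, which is exactly the evidence trail a board can review.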
Perhaps we have already become seduced by these large language models, which search everything, use huge quantities of data, drain water resources for cooling, put pressure on the energy grid and have unknown social impacts.
Before the recent hype, smaller models were standard across advanced sectors such as aerospace, cyber and defence. There is nothing wrong with using foundation models for broad reasoning and innovation, but specificity should guide deployment.
AI, accountability and the net zero mandate
The choices boards make now about AI will shape their long-term competitiveness and net zero pathways. AI is not just a technological leap; it is a governance and energy challenge that demands thoughtful oversight.
Ben Evans notes that major technology waves, from the early internet to mobile to generative AI, arrived with hype but ultimately, when the dust settled, reshaped the world.
Boards should plan for AI that becomes foundational, scrutinised and inseparable from carbon accountability. Those who align AI adoption with sustainability, transparency and disciplined governance will shape, not chase, the future.