Why Energy, Data, and Mobility Can No Longer Be Underwritten Separately

The convergence of power, compute density, and physical access is rewriting the risk architecture of AI data centre infrastructure. Investors who assess these layers independently are mispricing the asset class.
Aleksander Meidell-Hagewick
Read time: 15 minutes

A 200 MW AI campus is not a larger version of a 20 MW colocation facility. It is a different category of infrastructure. It draws power at a scale that strains regional grids. It generates heat that air cannot remove. It requires fibre connectivity dense enough to serve rack densities ten times those of the previous generation. And it must be built, supplied, and commissioned within timelines that are shrinking even as complexity grows.

Yet most underwriting frameworks still treat these domains as separate risk factors: energy as a utility procurement question, cooling as a mechanical engineering detail, connectivity as a networking decision, logistics as a construction management problem. Each is assessed in isolation, then aggregated into a composite risk profile. This approach made sense when data centres were general purpose computing facilities with modest power draws. It does not hold for the infrastructure now being deployed.

The shift matters because capital is flowing at speed. More than $30 billion in AI data centre investment has been announced across the GCC alone through 2030 (Analysys Mason). McKinsey estimates that data centres worldwide will require $6.7 trillion in cumulative capital expenditure by 2030 under its base case scenario, with the range spanning $3.7 trillion to $7.9 trillion depending on the pace of AI adoption (McKinsey). The question is not whether the capital will be deployed, but whether it will be deployed into projects whose risk architecture reflects the actual interdependencies of the asset.


Power as a structural constraint, not a procurement exercise

For decades, electricity was among the more predictable variables in a data centre investment case. Grid connections were available within reasonable timescales. Power costs were a known input. The primary question was efficiency, not availability.

That era is over. Deloitte’s 2025 AI Infrastructure Survey found that 72% of data centre and power company executives now consider power and grid capacity to be very or extremely challenging (Deloitte). In EMEA, newly commissioned data centre capacity fell 23% in 2025, not because of weak demand, but because the grid could not keep pace (Techerati). In the United States, interconnection queues now stretch to seven years in some regions (Deloitte).

The scale of projected demand compounds the challenge. The IEA estimates that global data centre electricity consumption could exceed 1,700 TWh by 2035, approximately 4.4% of total global electricity demand (IEA). A RAND Corporation study projected that AI data centres could require 68 GW of additional power capacity by 2027, a near term requirement roughly equivalent to California’s entire installed generating capacity (RAND). The World Resources Institute found that power constraints are extending construction timelines by two to six years in affected markets (WRI).
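
A back-of-envelope calculation puts these figures on a common footing. The sketch below uses only the numbers cited above, plus a utilisation assumption of our own; it is illustrative, not a forecast.

```python
# Rough consistency check on the demand figures cited above.
dc_demand_2035_twh = 1_700     # IEA projection for data centres by 2035
dc_share_of_global = 0.044     # ~4.4% of total global electricity demand

implied_global_twh = dc_demand_2035_twh / dc_share_of_global
print(f"Implied global demand, 2035: {implied_global_twh:,.0f} TWh")
# -> roughly 38,600 TWh, consistent with ~30,000 TWh today plus a decade of growth

added_gw = 68                  # RAND: additional AI capacity needed by 2027
assumed_utilisation = 0.80     # our assumption, not drawn from the source
annual_twh = added_gw * 8_760 * assumed_utilisation / 1_000
print(f"68 GW at 80% utilisation: ~{annual_twh:.0f} TWh per year")
# -> ~477 TWh per year from near term AI capacity alone
```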

The implication for investors is straightforward. Power is no longer a line item in the operating model. It is the primary determinant of whether a project reaches operations on schedule, and in many markets, whether it reaches operations at all. Nixon Peabody’s analysis of data centre real estate development concluded that investors should now treat grid interconnection queue position, upgrade scope, curtailment exposure, and power firmness as core value drivers (Nixon Peabody). In the GCC, where major transmission infrastructure expansions can take several years from planning to energisation, power strategy must be embedded at the point of site origination, not resolved downstream.
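
What embedding those value drivers at origination might look like can be sketched as a simple screening structure. The field names and thresholds below are hypothetical; they mirror the factors named above rather than any published underwriting standard.

```python
from dataclasses import dataclass

@dataclass
class PowerReadiness:
    """Illustrative power screen for a candidate site (all thresholds hypothetical)."""
    queue_years_to_energisation: float   # estimated interconnection wait
    upgrade_capex_usd_m: float           # scope of required grid upgrades, $m
    expected_curtailment_pct: float      # forecast share of energy curtailed
    firm_power_share: float              # share of load under firm supply terms

    def flags(self) -> list[str]:
        issues = []
        if self.queue_years_to_energisation > 3:
            issues.append("interconnection wait exceeds a typical build cycle")
        if self.upgrade_capex_usd_m > 100:
            issues.append("grid upgrade scope large enough to slip the schedule")
        if self.expected_curtailment_pct > 5:
            issues.append("curtailment exposure material to the revenue model")
        if self.firm_power_share < 0.9:
            issues.append("contracted densities not fully backed by firm power")
        return issues

site = PowerReadiness(4.0, 120.0, 6.5, 0.85)
for issue in site.flags():
    print("-", issue)
```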


Cooling as a determinant of offtake viability

The second layer that can no longer be treated independently is thermal management. NVIDIA’s Blackwell architecture GPUs consume up to 1,000 watts per chip, and successor architectures are expected to push further. AI rack densities have risen from the 5 to 15 kW typical of traditional enterprise computing to between 100 and 132 kW for GPU native training infrastructure (MLQ AI). At these levels, air cooling alone cannot maintain acceptable operating temperatures; above approximately 40 to 50 kW per rack, direct liquid cooling to the chip or immersion cooling becomes a prerequisite rather than an option.
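
The arithmetic behind that threshold is straightforward. A minimal sketch, using the per-chip and per-rack figures cited above; the GPU counts and the 1.4x overhead factor for CPUs, networking, and fans are our assumptions:

```python
AIR_COOLING_LIMIT_KW = 45  # midpoint of the ~40-50 kW/rack range cited above

def rack_load_kw(gpus_per_rack: int, watts_per_gpu: float,
                 overhead_factor: float = 1.4) -> float:
    """Approximate rack power: GPU draw plus assumed CPU/network/fan overhead."""
    return gpus_per_rack * watts_per_gpu * overhead_factor / 1_000

for gpus, watts in [(16, 700), (32, 1_000), (72, 1_000)]:
    load = rack_load_kw(gpus, watts)
    verdict = "air may suffice" if load <= AIR_COOLING_LIMIT_KW else "liquid cooling required"
    print(f"{gpus} GPUs at {watts} W -> {load:.0f} kW/rack: {verdict}")
```

Even the middle configuration sits at the edge of the air cooling envelope; the GPU native rack lands far beyond it.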

This is not a marginal engineering question. It determines whether a facility can fulfil the technical specifications embedded in its offtake contracts. The Uptime Institute recorded a 38% increase in average rack power density between 2022 and 2024 (Datacenters.com). Cooling now accounts for roughly 40% of total data centre energy consumption (AIRSYS). As compute density rises, that share intensifies, compressing margins and complicating power budgets.

In the GCC, ambient conditions sharpen these dynamics. Facilities in Saudi Arabia and the UAE contend with summer temperatures exceeding 45°C. One major Saudi operator experienced PUE degradation of 40% during peak summer before retrofitting with closed loop liquid cooling (VLink). GCC operators such as Khazna have responded with adiabatic and indirect free cooling approaches alongside modular designs that reduce energy consumption and water use (DCD). Immersion cooling, which eliminates the need for cooling towers and their associated water consumption, is increasingly relevant in arid environments where water scarcity imposes its own constraint.
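
The link between cooling share and power usage effectiveness (PUE) is simple arithmetic, which is why a hot-climate cooling penalty feeds directly into the power budget. In the sketch below, the 40% cooling share comes from the figure cited above; the 5% residual overhead, the 1.4 baseline PUE, and the 100 MW load are illustrative assumptions.

```python
# PUE = total facility power / IT power.
cooling_share = 0.40      # share of total facility energy, as cited above
other_overhead = 0.05     # assumed residual overhead (distribution, lighting)
it_share = 1 - cooling_share - other_overhead
print(f"Implied PUE: {1 / it_share:.2f}")          # -> ~1.82

# A 40% PUE degradation in peak summer, as in the Saudi case above,
# means the same IT load draws 40% more total power:
baseline_pue = 1.4        # assumed baseline
it_load_mw = 100          # illustrative contracted IT load
print(f"Facility draw: {it_load_mw * baseline_pue:.0f} MW at baseline, "
      f"{it_load_mw * baseline_pue * 1.4:.0f} MW in peak summer")
```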

The underwriting point is this: a site’s cooling architecture is not a detail to be resolved after demand has been contracted. It is a precondition of whether that demand can be served. If a facility cannot maintain contracted power densities at sustained ambient temperatures, the offtake agreement it was designed to anchor becomes undeliverable. This is a structural risk, not an operational one.


Connectivity as a prerequisite, not an amenity

The third domain is fibre and network infrastructure. The observation from Meta’s vice president of network investments captures the issue concisely: without the connectivity linking data centres together, they are expensive warehouses (Light Reading).

The numbers reflect the industry’s recognition of this reality. Investment in new subsea cable projects is expected to reach $13 billion between 2025 and 2027, nearly double the preceding three year period (Light Reading). The GCC is a direct beneficiary. Ooredoo’s Fibre in Gulf (FIG) submarine cable will connect all six GCC states plus Iraq with 720 Tbps capacity, exceeding the combined capacity of all existing and planned Gulf cables (Ooredoo). Google’s Dhivaru cable will create a new trans Indian Ocean route linking Oman to the Maldives and Christmas Island (Subsea Cables).

Connectivity quality directly affects which tenants a facility can attract. Hyperscale and sovereign cloud workloads require redundant, high capacity fibre with direct international routes. The 2024 Red Sea cable disruptions, which severed an estimated 25% of telecommunications traffic between Asia, Europe, and Africa (CSIS), demonstrated that route diversity is not optional for mission critical infrastructure. As one industry analysis noted, subsea cables have become strategic infrastructure on par with energy, transportation, and defence (Subsea Cables).
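
The value of route diversity, and its limits, can be shown with elementary availability arithmetic. All probabilities below are hypothetical; the Red Sea episode is precisely a warning that the independence assumption must itself be stress tested.

```python
p_outage = 0.02   # assumed annual outage probability for a single route

scenarios = {
    "single route":                p_outage,
    "two fully diverse routes":    p_outage ** 2,     # independent failures
    "two routes, shared corridor": p_outage * 0.60,   # assumed 60% chance the
                                                      # second route shares the cut
}
for label, p in scenarios.items():
    print(f"{label}: {p:.2%} annual outage probability")
```

Nominal redundancy through the same corridor recovers almost none of the resilience that genuinely diverse routing provides.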

The GCC’s position at the intersection of Europe, Africa, and Asia makes connectivity both a structural advantage and a necessary condition for the region’s AI infrastructure ambitions. A campus with abundant power and advanced cooling but weak international fibre connectivity will not secure the offtake it needs to justify its capital structure.


Physical access: the risk layer that underwriting models underweight

There is a fourth convergence layer that most investment frameworks treat as secondary: physical logistics, supply chains, and site accessibility. A hyperscale AI campus is a construction project of industrial scale. It requires the coordinated delivery of transformers, switchgear, cooling plant, GPU servers, and fibre infrastructure within compressed timelines. Shortages of transformers, switchgear, and gas turbines are compounding delivery challenges (WRI). Supply chain disruptions and security concerns were cited by 65% and 64% of respondents respectively in Deloitte’s survey (Deloitte).

In the GCC, physical access variables also include sovereign partnership structures, permitting timelines, grid connection lead times, and the practical realities of operating at scale in markets where regulatory and institutional frameworks are being developed with considerable ambition and pace. These interact with energy and cooling constraints in ways that linear risk models do not capture. A site with excellent power availability but constrained permitting may be no more viable than one with abundant land but no grid connection.
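
This is why the binding constraint, not the average, drives viability: a project cannot energise before its slowest domain clears. A minimal critical-path view, with hypothetical lead times:

```python
# Earliest operations date is gated by the slowest domain, not the average.
lead_times_years = {"grid connection": 2.5, "permitting": 4.0,
                    "construction": 3.0, "fibre build": 1.5}

binding = max(lead_times_years, key=lead_times_years.get)
print(f"Average lead time: {sum(lead_times_years.values()) / 4:.1f} years")
print(f"Earliest operations: {lead_times_years[binding]} years, gated by {binding}")
```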

Morgan Lewis’s 2026 data centre outlook identified location, buildout timelines, and load capacity as the key factors shaping project risk, noting that traditional project finance lenders are underwriting large, syndicated loans supported by long term leases, stable power supply arrangements, and other risk mitigation measures (Morgan Lewis). This reflects a growing understanding that construction execution and commercial structuring cannot be separated from the technical characteristics of the underlying facility.


The GCC: convergence as both advantage and execution test

The GCC illustrates these dynamics with particular clarity. Middle East data centre capacity is projected to triple from 1 GW in 2025 to 3.3 GW within five years (PwC). MEED tracks 174 active and planned projects worth more than $93 billion across the region (MEED). Saudi Arabia’s HUMAIN, a PIF backed AI company launched in 2025, is positioning the Kingdom as a significant compute hub through partnerships with AWS, NVIDIA, and Google Cloud (Middle East Institute). In the UAE, Stargate UAE, a 1 GW AI infrastructure cluster being built by G42’s Khazna Data Centres in partnership with OpenAI, Oracle, NVIDIA, Cisco, and SoftBank, is targeting its first 200 MW phase for completion in 2026 (The National).

The region’s advantages are anchored first in sovereign strategic vision: national AI programmes, digital sovereignty mandates, and long term economic diversification objectives that position compute infrastructure as a pillar of national development. These are reinforced by structural enablers including competitive energy pricing, available land, geographic positioning between three continents, and deep sovereign capital pools. Saudi Arabia’s Cloud Computing Special Economic Zone is expected to account for 30% of the Kingdom’s ICT spending by 2030 (PwC). Data sovereignty requirements under Saudi Arabia’s Personal Data Protection Law and the UAE’s Federal Data Law create captive demand for domestic infrastructure (Addleshaw Goddard).

A further variable is the evolving US export control regime governing advanced AI chips. The Biden administration’s January 2025 AI Diffusion Rule, which classified GCC states as Tier 2 destinations subject to GPU quantity limits, was rescinded by the Trump administration, signalling a more permissive approach to Middle Eastern allies while tightening enforcement against China (Data Center Knowledge). The UAE’s authorisation to procure up to 35,000 NVIDIA Blackwell chips through G42 demonstrates the practical impact of these policy shifts. For investors, the regulatory trajectory of GPU export controls is now a material factor in assessing GCC AI infrastructure capacity and timeline risk.

But every one of these advantages must be activated through integrated execution. Power procurement, grid expansion, cooling design, fibre routing, permitting, and construction logistics must be sequenced together. The projects that reach operations successfully will be those where these layers were coordinated from inception, not assembled after capital had already been committed.


Systemic risk and the limits of disaggregated analysis

The insurance market is recognising what infrastructure investors are learning. Lexology’s analysis of data centre insurance found that the sector now requires integration between construction risk, cyber risk, and operational resilience planning, with bespoke provisions for cooling systems, MEP installation, grid connection delays, and technology exposures (Lexology). It also flagged the risk of catastrophic accumulation losses where clustered facilities share the same grid, power source, or water supply.

Ashurst’s analysis of emerging market data centre projects described the sector as combining heavy capital expenditure, complex technology, long lead supply chains, and intensive energy and connectivity dependencies within fast moving regulatory contexts, a combination it characterised as creating an abundant environment for disputes (Ashurst).

The core problem is this: energy, cooling, connectivity, and logistics are not independent variables. They are interdependent components of a single system. Failure in one cascades across the others. A facility with contracted power but inadequate cooling cannot serve its offtake. A campus with advanced cooling and abundant power but weak fibre will not attract hyperscale tenants. A project with all three capabilities but no clear pathway through permitting and grid connection will never reach operations. Assessing these risks separately, then summing them, systematically understates the probability and magnitude of compound failure.
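
The understatement is easy to demonstrate. The Monte Carlo sketch below holds each domain's standalone failure probability fixed at 10% and introduces a shared stress factor, a stand-in for a common grid, water, or supply chain dependency. Every number is hypothetical; only the shape of the result matters.

```python
import random

random.seed(7)
N = 200_000
P_MARGINAL, Q_STRESS, P_STRESSED = 0.10, 0.15, 0.40
# Calibrate the calm-state probability so each domain's marginal stays at 10%:
p_calm = (P_MARGINAL - Q_STRESS * P_STRESSED) / (1 - Q_STRESS)

def failures(correlated: bool) -> int:
    """Count failing domains (power, cooling, fibre, permitting) in one trial."""
    if correlated:
        stressed = random.random() < Q_STRESS
        p = P_STRESSED if stressed else p_calm
    else:
        p = P_MARGINAL
    return sum(random.random() < p for _ in range(4))

for label, corr in [("independent model", False), ("shared stress factor", True)]:
    compound = sum(failures(corr) >= 2 for _ in range(N)) / N
    print(f"P(two or more domains fail), {label}: {compound:.1%}")
# Same marginals, roughly 1.7x the compound failure probability once the
# domains share a common dependency.
```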


What follows from this

The convergence of energy, data, and physical access into a single risk architecture has consequences for how AI data centre infrastructure should be originated, structured, and underwritten.

First, energy strategy must be embedded at the point of site origination, not treated as a procurement exercise conducted after land has been secured. Queue position, power firmness, and grid upgrade timelines are now primary determinants of project viability.

Second, cooling architecture must be validated against the specific compute densities and ambient conditions a facility will operate under. A mismatch between thermal management capability and contracted workload specifications is a structural risk to offtake delivery.

Third, fibre connectivity and international route diversity must be evaluated as preconditions for offtake origination, not supplementary infrastructure to be arranged after tenants have been identified.

Fourth, physical logistics, including supply chain timelines, permitting processes, and sovereign partnership requirements, must be integrated into the capital sequencing model rather than managed as downstream execution tasks.

The investors and operators positioned to capture value in this environment will be those capable of working across all four domains simultaneously, treating demand origination, capital deployment, and physical execution as a single integrated discipline rather than sequential workstreams. In a market where data centres have been reclassified as critical national infrastructure (Securitas), and where they are transitioning from passive energy consumers to active grid stakeholders (ING), the ability to integrate these layers is becoming a condition of relevance.

Regulatory and policy references in this article reflect the state of play as of February 2026. Export control frameworks, data sovereignty requirements, and permitting regimes across the jurisdictions discussed are subject to change.


Sources

1. Analysys Mason, “Accelerated investment in AI data centres in the GCC region” (2025)

2. McKinsey, “The cost of compute: A $7 trillion race to scale data centers” (2025)

3. Ashurst, “Data centre dash: disputes risk in emerging markets” (2025)

4. Deloitte, “Can US infrastructure keep up with the AI economy?” (2025)

5. IEA, “Energy demand from AI” (2025)

6. Techerati, “The Real Constraint is Power” (2025)

7. RAND Corporation, “AI’s Power Requirements Under Exponential Growth” (2025)

8. World Resources Institute, “Powering the US Data Center Boom” (2025)

9. Nixon Peabody, “Energy-first strategies for data center real estate development” (2025)

10. MLQ AI, “AI Data Center Cooling” (2025)

11. Datacenters.com, “Why Liquid Cooling Is the New Standard” (2025)

12. AIRSYS, “Data Center Trends and Cooling Strategies to Watch in 2026” (2026)

13. VLink, “Scaling Data Centers and AI Infrastructure: Saudi and UAE” (2025)

14. Data Center Dynamics, “Why the next generation of AI infrastructure starts in the Middle East” (2025)

15. Light Reading, “Private investments supercharge subsea cable buildouts” (2025)

16. Ooredoo Group, “Fibre in Gulf submarine cable” (2025)

17. Subsea Cables, “Oceans of Data” (2026)

18. CSIS, “The Strategic Future of Subsea Cables: Egypt Case Study” (2026)

19. PwC, “Unlocking the data centre opportunity in the Middle East” (2025)

20. MEED, “The GCC Data Centre Projects Market 2026” (2026)

21. Middle East Institute, “From Crude to Compute: Building the GCC AI Stack” (2025)

22. The National, “Stargate UAE’s first phase to be completed in third quarter of 2026” (2025)

23. Addleshaw Goddard, “The Future of Data Centres in the GCC” (2025)

24. Data Center Knowledge, “AI Chip Export Controls: A New Challenge for Data Centers” (2025)

25. Morgan Lewis, “Data Center 2026 Outlook” (2025)

26. Lexology, “How Data Centres are Reshaping Insurance Markets” (2026)

27. Securitas, “Data centers: The latest pillar of critical infrastructure” (2025)

28. ING, “How data centres can be better integrated into the energy ecosystem” (2025)


This article is published for informational purposes only and does not constitute investment advice, a financial promotion, or an offer of securities. The views expressed reflect analysis of publicly available information and should not be relied upon as the basis for any investment decision.