AI governance gaps are creating ESG risks, world’s largest dataset finds

Analysis of 1,000 companies by the Thomson Reuters Foundation finds lack of transparency on AI governance that could expose companies and investors to regulatory and reputational risks

AI governance has emerged as a critical ESG risk factor as businesses adopt artificial intelligence faster than they are governing it.

The Thomson Reuters Foundation’s AI Company Data Initiative (AICDI) analysed publicly available information from 1,000 companies across 13 sectors worldwide, making it the world’s largest data repository on corporate AI adoption.

It found that while companies are implementing AI systems and tools at unprecedented speed, less than half have established robust governance frameworks to manage associated risks.

Some 48% of the companies sampled disclosed AI strategies or guidelines, yet significant transparency gaps remain around the ESG impacts of AI adoption.

AI governance: an ESG implication for corporate leaders

Of the companies that disclosed an AI strategy or policy, 71% included principles such as ‘ethical’, ‘safe’, ‘secure’ or ‘trustworthy’ AI. However, the data reveals that AI adoption could create ESG-related risks if not properly governed.

  • Environmental: 97% of companies sampled did not consider environmental impact, including energy use and carbon footprint, when deciding which AI systems to deploy. With growing questions about how to measure AI’s contribution to emissions, companies face increasing scrutiny over its climate implications
  • Social: 68% of companies with AI strategies did not assess the societal impact of the technology beyond end users – overlooking wider implications of how AI is changing society 
  • Governance: Of companies with an AI strategy, 76% reported management level oversight of AI, but only 41% made AI policies accessible to employees or required their acknowledgement, potentially creating a gap between policy and practice

Investment analysis missing key insights

The transparency gap on AI governance creates regulatory, reputational, and operational exposures that traditional investment analysis may not capture.  

The data reveals significant variations by sector and region, with implications for investors’ stewardship: 

  • Just 38% of companies sampled in the Americas published an AI policy, compared to over half (53%) in EMEA – despite US dominance in AI innovation 
  • Of companies with AI strategies, 46% of EMEA firms had dedicated AI governance teams – likely driven by EU AI Act compliance – versus only 26% in the Americas 
  • Financials, Communication Services and Information Technology firms were three times more likely to have responsible AI teams or roles than those in the Energy & Materials sector.

Helping companies harness opportunities

To mitigate the risks posed by AI adoption, companies need a thorough understanding of where the technology is being used.

The AI Company Data Initiative helps companies audit where AI is used across their products, operations and services. Through its free assessment tool, companies can:

  • Evaluate their current AI governance maturity 
  • Benchmark performance against industry peers 
  • Identify areas for improvement 
  • Demonstrate transparency and leadership to investors and stakeholders 

The Initiative is grounded in the first global standard on AI ethics, developed by UNESCO. It is open to companies until 30 November 2025.

Companies taking part include Vodafone, Infosys, Telefónica, Iberdrola and BASF. 

“Responsible AI is not only a driver of efficiency but also a powerful enabler for delivering the best digital experience to our customers. It is therefore a source of competitive advantage. At Telefónica, we have established responsible AI frameworks to steer our innovation toward sustainable transformation. This includes participating in the AI Company Data Initiative to gain a comprehensive understanding of both risks and opportunities.” – Joaquina Salado, Head of AI Ethics, Telefónica

Next steps for investors

Investors play a critical role in ensuring that companies disclose AI-related risks and opportunities. They can do this by:

  • Requesting an AI governance brief during due diligence 
  • Seeking specific disclosures on oversight, transparency (including environmental and risk disclosures) and regulatory exposure 
  • Benchmarking responses against sector and regional peers to identify leaders and laggards 

The AI Company Data Initiative is the largest repository of data on corporate AI adoption, combining publicly available and voluntarily disclosed data. 

By joining the investor coalition, investors gain access to the full dataset from 1,000 companies to evaluate AI governance, transparency and value across portfolio holdings.