The world’s largest dataset on corporate AI adoption shows workforce protections and board oversight are lagging behind the pace of AI deployment — with implications for companies and investors alike
The first report from the AI Company Data Initiative (AICDI) reveals a gap between businesses’ ambition to harness the potential of AI and the mechanisms in place to manage its material risks.
Responsible AI in practice: 2025 global insights from the AI Company Data Initiative analysed publicly available information on AI policies from 2,972 companies across 11 sectors. It found that companies are embedding AI into their products, services and operations at a speed that outpaces governance. The result is a widening transparency gap that creates long-term value risk for businesses and investors.
Governance gaps
While 44% of companies in the sample published an AI strategy, the report found it was less clear how oversight of AI works in practice.
For example, 40% of companies publicly shared that they have board- or committee-level oversight of AI. However, fewer than a third (31%) could evidence a dedicated team or resource for AI governance. Only 2.7% publicly disclosed a formal AI model registry, a key tool for tracking which AI systems a company runs and how they are managed.
“While AI provides great opportunities for innovation and efficiency, it is also a major governance challenge, especially if AI decision making is a ‘black box’ with little oversight and accountability. This can present considerable risks to companies and investors. We believe that pension funds have a role to play in encouraging more transparency and clear accountability in corporate behaviour to support long-term economic stability and inclusion that ultimately benefits our members. The AICDI and its tools provide a useful resource in support of these goals.”
Eva Cairns, Head of Responsible Investment, Scottish Widows, AICDI investor signatory
Risks to investment in talent and technology
The findings also reveal a lack of information on how companies are preparing their workforce for an AI-enabled future, potentially undermining investment in the technology.
- Just under a third (31%) of companies publicly stated that they offer some AI-related training or reskilling to employees.
- Only 14% of companies evidenced policies to protect workers from the negative impacts of AI systems.
- Just 2.3% had a dedicated complaints mechanism for AI-related issues, meaning most companies have no early warning system for workforce risks related to AI.
Together, these issues could lead to workforce disruption, talent retention challenges, and labour relations conflicts that could impact operations and margins.
Ethical risks left unmanaged
The policies analysed in the report frequently covered high-level responsible AI principles, but there was less specific evidence of how ethical risks were managed. This matters both for companies seeking to build consumer trust and for responsible investors assessing non-financial risk.
Only 12.4% of companies reported having a policy to ensure human oversight of AI systems, just 7% reported conducting a human rights impact assessment of their AI use, and only 5% conducted ethical impact assessments.
Helping companies harness opportunities
There is currently limited transparency on key responsible AI indicators, meaning firms that take a proactive approach can position themselves as leaders in a fast-moving environment.
To mitigate the risks posed by AI adoption and close governance gaps, companies need a thorough understanding of where the technology is being used.
The AI Company Data Initiative supports companies in self-assessing where AI is used across their products, operations and services. Through the free assessment tool, grounded in the UNESCO Recommendation on the Ethics of AI, companies can:
- Evaluate their current AI governance maturity
- Benchmark performance against industry peers
- Identify areas for improvement
- Demonstrate transparency and leadership to investors and stakeholders
“These findings suggest that the challenge of responsible AI is no longer awareness but ensuring principles translate into practice. Our AI Company Data Initiative provides a comparable, actionable dataset so that companies and investors can identify good practice and material risk.”
Katie Fowler, Director of Responsible Business, Thomson Reuters Foundation
Next steps for investors
For investors, the report points to a category of risk that is not yet consistently priced but could be material.
Data from AICDI is shared with a group of investors with combined assets under management of $1.2 trillion. The initiative’s self-assessment survey is mapped against more than 30 regulations and standards across 10 jurisdictions.
By joining the AI Company Data Initiative’s investor signatory group, investors gain access to the full dataset from nearly 3,000 companies to evaluate AI governance, transparency and value across portfolio holdings.
This report is the product of the world’s largest data repository on corporate AI adoption, the AI Company Data Initiative. Powered by the Thomson Reuters Foundation and grounded in the UNESCO Recommendation on the Ethics of AI, the initiative is a framework to inform responsible investment decisions, and for companies to self-assess their AI adoption and mitigate risks.