We need your help to update the AICDI survey to ensure it keeps abreast of changes to the landscape and continues to reflect investor and business needs – send us your feedback by Friday 8 May
How to get involved
There are several ways you can get involved in the survey consultation process before the deadline of Friday 8 May.
Send us your thoughts
Fill out this short form to quickly give us structured feedback on the proposed changes.
Interviews
Request a 1:1 interview slot with one of our team for detailed feasibility feedback by emailing aicdi@thomsonreuters.com.
Written feedback
Submit written comments using the following template to aicdi@thomsonreuters.com:
- Section / question reference
- Issue observed
- Suggested change
- Feasibility considerations
- Expected benefit (company clarity / investor comparability)
Questions or queries?
- For queries, interview requests, and feedback submissions, contact aicdi@thomsonreuters.com
Overview of key proposed changes
Format changes
- Question 1.7: Do you have a company-wide board, committee, person(s), or similar body designated to review issues of accountability and responsibility, and other ethical issues? – Offer this as a drop-down question instead of yes/no:
- Yes – Board‑level body
- Yes – Board committee
- Yes – Senior management committee
- Yes – Designated individual(s)
- Yes – Multiple bodies or roles
- Not yet
- Question 1.9: Changing from yes/no to open text: How do you take a responsible/ethical approach to AI into consideration when selecting suppliers?
- Question 1.10: Change the format to a drop-down: Do you have a formal AI Model Registry (or equivalent database) that can track the lifecycle, purpose, testing methodologies, bias stats, and types of training data used in AI model/system development? Options: 1. Yes, internal; 2. Yes, publicly available; 3. No
- Question 2.3.a: If so, do you include AI ethics training as a fundamental part of the onboarding process? Select all that apply:
- Yes – Technology / Engineering / IT functions
- Yes – Data, AI, or Analytics functions
- Yes – Product, Innovation, or Digital functions
- Yes – Risk, Legal, Compliance, or Ethics functions
- Yes – Business or operational functions using AI systems
- Yes – Senior management and leadership
- Yes – Organisation‑wide (all employees)
- Not yet
- Question 2.6: Does your company have any of the following mechanisms?
- A redress mechanism for workers negatively impacted by AI
- An internal mechanism for the submission and review of employees’ complaints in relation to AI
Content additions
- Adding a question about AI use cases (section 0)
- Adding a question in section 1 on how companies track and respond to evolving AI regulation, and which legislative processes they follow, e.g. ‘What is your process for monitoring and responding to new or changing AI regulations in the jurisdictions where you operate?’
- Replace question 1.16 with the following questions:
1. Do you monitor the environmental impact of your AI system(s)?
- Not yet
- Yes
- How do you monitor the environmental impact of your AI system(s)?
- The ongoing model energy usage (for inference and/or performance)
- The ongoing model carbon footprint
- The ongoing data centres’ energy usage
- The ongoing data centres’ water usage
- The ongoing data centres’ carbon footprint
- The model training energy usage (before the model was procured/deployed)
- The model training water usage (before the model was procured/deployed)
- The model training carbon footprint (before the model was procured/deployed)
- Overall energy usage for all models within the company
- Overall carbon footprint of all models within the company
- Overall carbon footprint of the entire AI supply chain
- Other – please specify
2. Do you report the environmental impact of your AI system(s) publicly?
3. Do you have measures in place to minimise the energy use and carbon footprint of your AI system(s)?
- Add a sub-question to 1.17: Does your company have a redress mechanism for workers negatively impacted by AI?
- Adding a new question before 2.2: ‘Do you assess the impact of AI systems on workers in your company?’; and a sub-question: ‘Do you consult workers or their representatives before AI tools affecting them are deployed?’
- Adding a new question to section 2: Has organisation‑wide AI deployment influenced workforce planning or role design? Yes/No
- Adding a new question to section 3: Which of the following best describes the company’s approach to managing copyright and intellectual property risks in relation to AI systems?
- No formal approach in place yet
- Ad hoc or informal consideration of IP/copyright risks
- Internal guidelines on use of copyrighted data and AI-generated content
- Formal policy covering AI training data and generated outputs including risk controls and accountability mechanisms
Merged content
- Questions 1.1 and 1.2: merge into a single question: Does your company adhere to any of the following self-regulatory codes of conduct, voluntary frameworks, commitments, guidelines, or internationally recognised technical standards in relation to the ethical development and deployment of AI, recognised by reputable industry bodies or relevant regulatory authorities? For example, the UNESCO Recommendation on the Ethics of AI.
- Question 1.5: making it a subset of question 1.4
Minor content edits
- Question 1.3: adding ‘or internationally’ to the current text: Are you engaging in technological and/or ethical exchanges with leaders from industry, regulation, academia, or civil society in the country where you are based?
- Question 1.12: Simplify wording: ‘Do you assess whether the use of AI systems is proportionate and adequate to the issues and challenges AI systems aim to address?’
- Question 1.15: Simplify the question: Do you conduct any of the following impact assessments? Add ‘no’ and ‘not yet’ to the drop-down options
- Question 1.19: Minor addition for clarification: Do you have a policy for ensuring there is a human overseeing all AI systems that you develop and/or deploy?
- Question 1.19a: simplify: ‘If so, how is this policy implemented and what monitoring tools are available to support this?’
- Question 2.20: Suggest a minor edit: Do you have a feedback mechanism for users of AI systems you develop and/or deploy to flag potential ethical issues?
- Question 2.3: Wording change: Do you provide training modules and appropriate materials to relevant company staff on the ethical standards and considerations of AI systems you use and/or develop?
- Question 3.1a: minor edit for clarification: Do you have processing standards and safeguards for personal data, including sensitive data?
- Question 3.10: More specific wording: Is there a dedicated team investigating emerging cybersecurity risks / risks related to data misuse and privacy / AI-related risks, and developing appropriate mitigation strategies?
Edits/additions to the guidance
- Question 1.6: Add more details to the guidance – explain that public AI literacy can also be enhanced by providing materials, publishing information about AI use on the company website, publishing model cards or transparency reports, or creating content that explains the company’s use of AI. Currently, the examples are quite limited.
- Question 1.11: Add in the guidance that we are looking to understand how they manage the process of attributing ethical and legal responsibility to physical persons or legal entities
- Question 2.5: How does your company ensure that AI tools used in the workplace do not infringe on workers’ rights? – Add more examples in the guidance, e.g. surveys, an anonymous AI-related complaint process, etc. In addition, add in the guidance that the answer needs to be specific and supported by actions/examples. This question could also be merged with Q2.2
- Question 2.8: Add examples of AI-powered HR tools in the guidance
- Question 2.9: Add examples in the guidance to provide more detail on what the accessibility assessment might involve and which disabilities and marginalised groups it should cover
- Question 3.9: What policies, processes, and human resources do you have in place to ensure the safety and security of the AI systems you develop or use and protect them from system manipulation? – Add examples in the guidance: red-teaming of AI systems, processes for responding to adversarial inputs or model manipulation, incident response plans specific to AI.

