How to apply due diligence to ensure responsible AI use
Across sectors, organisations are increasingly integrating AI into their daily operations and value chains. At 2impact, we see this shift both in the organisations we work with and in our own ways of working. At the same time, we are aware that AI comes with risks to both people and the environment.
For this reason, we believe it is important for every company to understand and address these risks. This view is also reflected in the work of leading international organisations. Through its AI Principles, updated in 2024, the OECD has emphasised that AI should be innovative and trustworthy and should respect human rights and democratic values. At the same time, the OECD Guidelines for Multinational Enterprises on Responsible Business Conduct set out how companies can use due diligence to identify and address risks to people, the environment and society. In February 2026, the OECD brought these two strands together by publishing its Due Diligence Guidance for Responsible AI, which helps companies apply established due diligence principles specifically to the development, deployment and use of AI.
Which companies this is relevant for
The OECD Responsible AI Guidance distinguishes three groups of companies:
- Group 1: Suppliers of AI inputs: The upstream part of the AI value chain. These companies provide the capital, goods, and services needed to develop AI, including data, datasets, third-party code, funding, and digital infrastructure such as cloud services.
- Group 2: Enterprises actively involved in the design, development, deployment, and operation of AI systems: Companies directly involved across the AI system lifecycle, including planning, design, model building, testing, deployment, and operation.
- Group 3: Users of the AI system: The downstream part of the AI value chain. These companies use AI systems in their operations, products, or services, for example by using tools such as ChatGPT or Copilot in day-to-day business.
The due diligence actions a company should take depend on the group it belongs to, as we discuss in the rest of this blog.
The six due diligence steps in relation to AI
Based on the OECD guidelines, companies should follow a six-step approach in their due diligence process to identify and address risks in their operations and supply chains. For risks related to AI, the same six steps apply:

Step 1 – Embedding responsible business conduct into policies and management systems
Companies in all groups should develop policies that commit to the OECD AI Principles. This means, for example, committing to use, develop, or provide inputs to AI that respects human rights and democratic values, has beneficial outcomes for people and the planet, and does not pose unreasonable safety and security risks. Policies should also outline how the company plans to implement due diligence in relation to responsible AI. AI policies can be stand-alone or part of broader responsible business conduct policies.
To embed these policies in management systems, companies should, for example, assign overall responsibility for AI due diligence to relevant senior management and distribute responsibility for implementing specific aspects of the policies across relevant departments.
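As a purely illustrative sketch of what this could look like in practice, a company might keep a register of policy commitments and their owners as structured data. The commitments below paraphrase the OECD AI Principles; the owners, departments and review cycles are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class PolicyCommitment:
    """One responsible-AI policy commitment and who owns it."""
    commitment: str           # the commitment itself
    owner: str                # accountable senior manager or function
    departments: list[str]    # teams implementing the commitment
    review_cycle_months: int  # how often implementation is reviewed

# Hypothetical register; owners and review cycles are invented for illustration.
ai_policy_register = [
    PolicyCommitment(
        commitment="AI we use, develop or supply inputs to respects human rights and democratic values",
        owner="Chief Compliance Officer",
        departments=["Legal", "Procurement", "Engineering"],
        review_cycle_months=12,
    ),
    PolicyCommitment(
        commitment="AI systems do not pose unreasonable safety and security risks",
        owner="Chief Information Security Officer",
        departments=["IT Security", "Engineering"],
        review_cycle_months=6,
    ),
]

for entry in ai_policy_register:
    print(f"{entry.owner} is accountable for: {entry.commitment}")
```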
Step 2 – Identifying and assessing actual and potential adverse impacts
Companies should develop an understanding of the actual and potential adverse impacts related to the development and use of AI systems.
Risks associated with AI
AI presents serious environmental and human rights risks. It relies on resource-intensive infrastructure that increases pressure on energy systems, water use, raw material extraction, and local ecosystems, with wider consequences for the climate. It also raises concerns about the people who make these systems possible, since parts of the AI supply chain depend on forms of labour that can be insecure, underpaid, and harmful, including work involving exposure to disturbing content. Alongside this, AI can undermine privacy, shape behaviour in manipulative ways, and reinforce existing inequalities when biased systems influence decisions about people’s lives.
Therefore, all companies should seek to understand the risks posed by AI in their value chains:
- Companies in group 1 should identify the types of AI systems developed by business partners and the risks these systems pose.
- Companies in group 2 need to assess whether their AI systems meet responsible business conduct and labour standards; this assessment should include consulting and engaging stakeholders such as workers and trade unions, impacted communities, and civil society groups to gather information on significant risks.
- Companies in group 3 should identify risks in relation to the AI systems they use.
All companies should review whether high-risk business relationships have due diligence practices in place and prioritise the most significant (salient) risks.
Step 3 – Ceasing, preventing and mitigating adverse impacts
If companies directly cause or contribute to adverse impacts, they should cease or prevent the activities causing them. For companies in group 2, actions to avoid and mitigate risk can be broadly grouped into four areas:
- Responsible sourcing and use of data: assessing and improving data quality and AI performance, while preventing or mitigating risks from data sourced or annotated in harmful ways. For example, companies should implement privacy-preserving and responsible data governance approaches to collecting data and training AI systems, such as data cleaning, on-device processing, and federated learning (a minimal data-cleaning sketch follows directly after this list).
- Transparency, explainability, and traceability: keeping stakeholders informed about the AI system’s functionality, capabilities, and risks, especially after deployment. For example, companies should implement mechanisms to provide clear, accessible, and meaningful explanations of automated decision-making processes, including the logic, main parameters, and potential outcomes of the algorithmic process, tailored to the average user’s understanding.
- Security and robustness: ensuring physical and cybersecurity, and maintaining reliability, repeatability, reproducibility, and predictability throughout the AI lifecycle. For example, companies should establish mechanisms to quickly respond to AI system failures (one such mechanism is sketched at the end of this step).
- Responsible deployment and operation: assessing whether the model is safe to deploy, implementing guardrails, monitoring its use, and retiring it from production where appropriate. For example, companies should establish and integrate feedback processes for end users and relevant stakeholders to report problems and appeal system outcomes (also sketched at the end of this step).
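To make the first point more concrete, below is a minimal sketch of a data-cleaning pass that strips obvious personal data from text before it is stored or used to train a model. The patterns and the scrub_record function are invented for this example; a production pipeline would rely on vetted PII-detection tooling and human review rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only; real PII detection needs far broader coverage.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def scrub_record(text: str) -> str:
    """Replace detected personal data with placeholders before the text
    is stored or used to train or fine-tune a model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

raw = "Contact Jane at jane.doe@example.com or +31 6 1234 5678."
print(scrub_record(raw))
# Contact Jane at [EMAIL REMOVED] or [PHONE REMOVED].
```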
Companies in all three groups that are linked to risks through business relationships should engage with the partner in question and use their leverage to prompt it to address the risks.
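To make the third and fourth points more concrete, the sketch below shows one possible shape for such mechanisms, under stated assumptions: guarded_predict wraps a generic model callable so that failures are logged as incidents and replaced with a safe fallback rather than an unchecked output, and record_feedback gives end users a way to report problems or appeal an automated outcome. All names and the in-memory logs are hypothetical; real deployments would use dedicated incident-management and feedback tooling.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
incident_log: list[dict] = []  # in practice: a monitored incident system
feedback_log: list[dict] = []  # in practice: a user-facing feedback channel

def guarded_predict(model_fn, inputs, fallback):
    """Call the AI system; on failure, log an incident and return a
    safe fallback instead of an unchecked model output."""
    try:
        return model_fn(inputs)
    except Exception as exc:
        incident_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "error": repr(exc),
            "inputs": inputs,
        })
        logging.warning("AI system failure, fallback used: %r", exc)
        return fallback

def record_feedback(user_id: str, decision_id: str, message: str, appeal: bool = False):
    """Let end users report problems with, or appeal, an automated outcome."""
    feedback_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "decision": decision_id,
        "message": message,
        "appeal": appeal,
    })

# Usage: a failing model call falls back to human review instead of crashing.
result = guarded_predict(lambda x: 1 / 0, {"query": "loan application"},
                         fallback="refer to human reviewer")
record_feedback("user-42", "decision-7", "I believe this outcome is wrong", appeal=True)
```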
Step 4 – Tracking implementation and results
Companies in all three groups should track the implementation and effectiveness of their due diligence activities, both within their own organisation and with business relationships, for example by carrying out periodic reviews or audits.
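As a minimal illustration of such tracking, mitigation actions and review outcomes could be recorded in a simple register like the hypothetical one below; the risks, actions and statuses are invented for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MitigationAction:
    risk: str             # the salient risk being addressed
    action: str           # what the company or partner committed to do
    status: str = "open"  # open / in_progress / done
    effective: Optional[bool] = None  # outcome of the periodic review

# Hypothetical entries for illustration only.
actions = [
    MitigationAction("biased screening recommendations", "quarterly bias audit",
                     status="done", effective=True),
    MitigationAction("data centre water use", "supplier efficiency targets",
                     status="in_progress"),
]

done = [a for a in actions if a.status == "done"]
print(f"{len(done)}/{len(actions)} actions completed, "
      f"{sum(1 for a in done if a.effective)} confirmed effective in the last review")
```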
Step 5 – Communicating how impacts are addressed
All companies should publicly communicate all relevant information on due diligence processes, including relevant policies, identified risks, actions taken to address these risks, measures to track implementation and results, and provision of or cooperation in any remediation. Where relevant, reports should also include information on AI system capabilities and limitations, or information on incidents and attempts by AI actors to circumvent safeguards.
Step 6 – Providing for or co-operating in remediation when appropriate
If a company has caused or contributed to adverse impacts, it should seek to restore the affected persons to the situation they would be in had the impact not occurred. For example, if the development of an AI system has harmed people, the company whose activities caused that harm should work to undo the damage, provide financial compensation, or offer medical and psychological support.
Conclusion
Companies should treat AI-related risks as part of their broader responsible business conduct responsibilities. This applies not only to companies developing AI systems, but also to those supplying inputs into AI and those using AI in their operations, products and services.
For businesses, the practical takeaway is that responsible AI requires carrying out the six-step due diligence process. Those that have already implemented due diligence systems for their materials supply chains can extend these to cover AI-related risks and activities. Those just getting started can build AI use and development risks into their due diligence system from the outset.
2impact has extensive experience supporting companies in setting up due diligence systems. Whether you want to apply this to AI or to human rights and environmental risks, don't hesitate to reach out.