
Artificial intelligence

Building trust and compliance into AI-enabled systems

Insights, resources, and advice on trustworthiness and compliance for AI designers, developers, and organizations deploying or using AI systems.

As AI ushers in a new era of productivity and capability, it also poses new risks that must be managed in new ways. AI relies on data, which can itself change over time, leading to results that may be difficult to understand or explain. AI is also ‘socio-technical’: it is affected by a complex and dynamic interplay of human behavioural and technical factors. DNV can help you develop the new risk approach that AI needs – both to ensure compliance with emerging regulations and to manage risks dynamically – so you can access the benefits of AI more rapidly, fully, and confidently.

Recommended practices 

Our resources at your disposal include our Recommended Practice (RP) on AI-enabled systems, which addresses quality assurance of AI-enabled systems and compliance with the upcoming EU AI Act. Other recommended practices developed by DNV cover the building blocks of AI systems – data quality, sensors, algorithms, simulation models, and digital twins – developed through our extensive work on digitalization projects at asset-heavy and risk-intensive businesses worldwide. Cross-cutting all of these digital building blocks is cyber security, where DNV offers world-leading industrial cyber security services.