New research will help protect society from unethical AI

A report from the Universities of Oxford and Bologna will help protect society from unethical AI, supporting organisations in meeting future EU regulations.

A world-first approach to helping organisations comply with future AI regulations in Europe has been published today in a report by the University of Oxford and the University of Bologna. It was developed in response to the proposed EU Artificial Intelligence Act (AIA) of 2021, which seeks to coordinate a European approach to tackling the human and ethical implications of AI.

A one-of-a-kind approach, capAI (conformity assessment procedure for AI) will help businesses comply with the proposed AIA and prevent or minimise the risk of AI behaving unethically and damaging individuals, communities, wider society, and the environment.

Produced by a team of experts at Oxford University’s Saïd Business School and Oxford Internet Institute, and at the Centre for Digital Ethics of the University of Bologna, capAI will help organisations assess their current AI systems to prevent privacy violations and data bias. It will also help them explain AI-driven outcomes and develop and run systems that are trustworthy and AIA-compliant.
