AI and Equality


Leveraging AI Regulation for Inclusive Design: The High-Risk EU AI Act Toolkit (HEAT)

Tomasz Hollanek presents the High-Risk EU AI Act Toolkit (HEAT), an ethical AI framework co-developed by the University of Cambridge and Ammagamma that adopts a pro-justice, feminist-informed approach to stakeholder engagement.

Researcher Tomasz Hollanek from the Leverhulme Centre for the Future of Intelligence at Cambridge delivered an AI & Equality PubTalk on the EU AI Act and its practical application through the High-Risk EU AI Act Toolkit (HEAT). Hollanek, who works at the intersection of AI ethics and critical design, with a particular focus on human-AI interaction, offered a practical guide to this evolving regulatory landscape.

Demystifying the EU AI Act
Hollanek began with the EU AI Act itself, which aims to tackle AI-related risks. He emphasized that although the Act is not yet fully in force, its impact is imminent. A key distinction he highlighted is between “providers” (developers) and “deployers” (users) of AI systems, each with distinct responsibilities. The Act is fundamentally about protecting human rights and values within the EU.

The Act operates on a risk-based classification system, scaling obligations across four tiers: unacceptable, high, limited, and minimal risk. Unacceptable-risk practices are outright prohibited, such as certain facial recognition technologies and manipulative AI. High-risk systems, the primary focus of the HEAT toolkit, include applications in areas such as biometrics, critical infrastructure, education, and employment; providers of these systems face stringent requirements, including conformity assessments and risk management. Limited-risk systems, such as chatbots, must be transparent about AI-generated content, while minimal-risk systems face no new obligations.

HEAT: Your Practical Guide to AI Act Compliance
Hollanek then introduced the toolkit itself. Developed to assist project managers, especially in SMEs, HEAT is designed to make navigating the AI Act less daunting. It’s about turning theory into practice, which Hollanek and his team pursued through a co-design process with industry practitioners.

Hollanek explained that HEAT was born out of the realization that many existing AI ethics toolkits fall short: they often lack depth, reduce ethics to checklists, and fail to provide clear instructions for inclusive participation. HEAT addresses these shortcomings by offering an open-access, customizable tool designed for real-world application. It empowers organizations not only to comply with the EU AI Act but also to champion ethical and just AI development.

HEAT is structured around “spaces”: interconnected areas that teams must address during AI development and deployment. Think of these as essential steps: defining team values, conducting impact assessments, managing risks, and, crucially, engaging with stakeholders. Stakeholder engagement is a central pillar of HEAT, pushing users to actively involve those affected by an AI system, even though the AI Act doesn’t mandate it.

Each space is broken down into tasks, complete with links to the relevant sections of the AI Act, rationales, instructions, and workbooks for documentation. This structure ensures users understand why each step is necessary and how to execute it effectively. HEAT also includes “go further” sections, encouraging users to go beyond basic compliance towards more ethical and sustainable practices.

About Tomasz Hollanek

Dr Tomasz Hollanek is a Research Fellow at the Leverhulme Centre for the Future of Intelligence (LCFI) and an Affiliated Lecturer in the Department of Computer Science and Technology at the University of Cambridge, working at the intersection of AI ethics and critical design. His ongoing research explores the possibility of applying critical design methods – prioritising the goals of social justice and environmental sustainability – in the governance, development, and deployment of AI systems. This includes work on the ethics of human-AI interaction design (in particular, the design of companion chatbots and griefbots) and the In-depth EU AI Act Toolkit, helping developers translate the requirements of the European Union’s AI Act into design practice.

Previously, Tomasz was a Vice-Chancellor’s PhD Scholar at Cambridge and a Visiting Research Fellow at the École Normale Supérieure in Paris. He has contributed to numerous research projects, including the Global AI Narratives Project at LCFI and the Ethics of Digitalization research program at the Berkman Klein Center for Internet & Society at Harvard.

About the author

Amina Soulimani

Programme Associate at Women at the Table and Doctoral Candidate in Anthropology at the University of Cape Town


We’re creating a global community that brings together individuals passionate about inclusive, human rights-based AI.

Join our AI & Equality community of students, academics, data scientists and AI practitioners who believe in responsible and fair AI.