Welcome to our Open Source multidisciplinary and interactive online tool for assessing the trustworthiness of an organization's AI implementation.

The tool is based on the ALTAI (Assessment List for Trustworthy Artificial Intelligence) recommendations published by the European Commission's High-Level Expert Group on AI, and is designed to help organizations ensure their AI systems are transparent, robust, and trustworthy.

Highlight Areas of Risk

In today's fast-paced world, organizations are increasingly adopting AI to streamline operations and improve decision-making. However, AI systems must be developed and implemented with caution, ensuring that they do not compromise fundamental human rights or perpetuate bias and discrimination. Our tool provides a comprehensive assessment of your organization's AI implementation, highlighting areas of strength and areas for improvement.

Recommendations Report

You will also receive detailed suggestions and guidance for improving the trustworthiness of your AI system. This will enable you to build and maintain trust with your customers, employees, and other stakeholders, and mitigate the risks associated with AI implementation.

You are in control

One of the key benefits of our open-source tool is that it can be hosted and fully controlled by your organization. This means that you can maintain complete ownership and control over your data and assessments.

By hosting the tool on your own servers, you can also ensure that it meets your organization's specific security and privacy requirements.

Because the tool is open source, you can modify and adapt it to fit your organization's unique needs.

This flexibility and control make our tool an ideal solution for organizations looking to assess the trustworthiness of their AI systems while maintaining full control over their data and assessments.

Install it here

TRY THE DEMO INSTANCE


    The demo instance is a publicly available instance for trying out the AI Ethics Assessment Tool.

    Projects and accounts on the demo instance are deleted periodically, so you should not rely on it for production use. We cannot guarantee that your projects won't be lost. We recommend hosting your own instance.

Meet the AI4Belgium Ethics & Law advisory board

Nathalie Smuha - Researcher at KU Leuven
Nathalie Smuha is a legal scholar and philosopher at the KU Leuven Faculty of Law, where she examines legal, ethical and philosophical questions around Artificial Intelligence (AI) and other digital technologies.
Nele Roekens - Legal Advisor - Unia • Equality body and human rights institution
Nele is a legal advisor at Unia, an equality body and human rights institution. She specializes in technology and human rights, especially non-discrimination. She is also active at the European level as chair of the working group on AI of the European Network of Human Rights Institutions.
Jelle Hoedemaekers - Standardization Expert at Agoria
Jelle is an expert in AI regulation. He works as an ICT Normalisation expert at Agoria, where he focuses on the standardisation and regulation of new technologies such as Artificial Intelligence. Within Agoria he also works on policies surrounding new technologies. Jelle also co-leads the AI4Belgium work group on Ethics and Law, which looks at the ethical and legal implications of AI on the Belgian ecosystem.
Carl Mörch - Co-manager - FARI • AI Institute for Common Good
I am co-directing FARI - AI for the Common Good Institute. This project is a joint initiative between the Université Libre de Bruxelles (ULB) and the Vrije Universiteit Brussel (VUB). I am also an associate researcher at Algora Lab (UdeM, Mila, Canada) and an adjunct professor (UQAM, Canada). I have developed and published an AI Ethics Tool, and I work on the responsible use of technologies in healthcare.
Rob Heyman - Director - Data & Maatschappij Kennis Centrum
The more digitalised our lives become, the more we receive personalised decisions based on our information. My goal is to uncover how these things work and to help people understand what happens with their data. I find it curious that so little is known about data in the age of big data. My method consists of uncovering the hidden life of data by mapping these processes in easy-to-digest texts, scenarios and visuals. We then use co-creation sessions to compare current practices with the expectations of end-users, regulators and innovators.
Yves Poullet - Former Rector at Namur University
Yves Poullet was rector of the University of Namur (2010-2017). He is a founder and former director of CRIDS (1979-2009). He was also a member of the Privacy Protection Commission for 12 years.
Nathanaël Ackerman - Manager BOSA - AI - Minds Team
Nathanaël Ackerman is the managing director of the AI4Belgium coalition and Digital Mind for Belgium appointed by the Secretary of State for Digitalization. He is also head of the “AI – Blockchain & Digital Minds” team for the Belgian Federal Public Service Strategy and Support (BoSa).

Description

This tool was designed to enable team members with diverse expertise to collaborate and have conversations about key topics related to the trustworthiness of their AI implementation.

Topics assessed

  1. Fundamental rights: This section emphasizes the need to respect fundamental human rights in the development and deployment of AI systems. It includes guidelines for ensuring that AI systems do not violate human dignity, privacy, or other fundamental rights.

  2. Human agency and oversight: This section stresses the importance of human oversight in AI decision-making. It provides guidelines for ensuring that humans remain in control of AI systems and that decisions made by AI systems are explainable and auditable.

  3. Technical robustness and safety: This section provides guidelines for ensuring the technical robustness and safety of AI systems. It covers topics such as system reliability, cybersecurity, and resilience.

  4. Privacy and data governance: This section focuses on the need to protect personal data and ensure proper data governance in the development and deployment of AI systems. It provides guidelines for ensuring that personal data is collected, processed, and stored in a transparent and secure manner.

  5. Transparency: This section stresses the importance of transparency in AI decision-making. It provides guidelines for ensuring that AI decision-making processes are explainable and that users can understand how decisions are made.

  6. Diversity, non-discrimination, and fairness: This section provides guidelines for ensuring that AI systems do not perpetuate bias and discrimination. It covers topics such as data bias, fairness in decision-making, and inclusivity.

  7. Societal and environmental wellbeing: This section emphasizes the need to consider the societal and environmental impact of AI systems. It provides guidelines for ensuring that AI systems are developed and deployed in a way that promotes social and environmental wellbeing.

  8. Accountability: This section provides guidelines for ensuring accountability in AI development and deployment. It covers topics such as legal compliance, risk management, and stakeholder engagement.

With the support of cabinet Michel and cabinet De Sutter.

GitHub