Welcome to our open-source, multidisciplinary, and interactive online tool for assessing the trustworthiness of an organization's AI implementation.
The tool is based on the Assessment List for Trustworthy Artificial Intelligence (ALTAI) recommendations published by the European Commission and is designed to help organizations ensure their AI systems are transparent, robust, and trustworthy.
Highlight Areas of Risk
In today's fast-paced world, organizations are increasingly adopting AI to streamline operations and improve decision-making. However, AI systems must be developed and implemented with caution, ensuring that they do not compromise fundamental human rights or perpetuate bias and discrimination. Our tool provides a comprehensive assessment of your organization's AI implementation, highlighting areas of strength and areas for improvement.
Recommendations Report
You will also receive detailed suggestions and guidance for improving the trustworthiness of your AI system. This will enable you to build and maintain trust with your customers, employees, and other stakeholders, and mitigate the risks associated with AI implementation.
You are in control
One of the key benefits of our open-source tool is that it can be hosted and fully controlled by your organization. This means that you can maintain complete ownership and control over your data and assessments.
By hosting the tool on your own servers, you can also ensure that it meets your organization's specific security and privacy requirements.
Because the tool is open source, you can modify and adapt it to fit your organization's unique needs.
This flexibility and control make our tool an ideal solution for organizations looking to assess the trustworthiness of their AI systems while maintaining full control over their data and assessments.
The demo instance is a publicly available instance for trying out the AI Ethics Assessment Tool.
Projects and accounts on the demo instance are deleted periodically, so you should not rely on it for production use. We cannot guarantee that your projects won't be lost; we recommend hosting your own instance.
This tool was designed to enable team members with diverse expertise to collaborate and discuss key topics related to the trustworthiness of their AI implementation.
Topics assessed
1. Fundamental rights: This section emphasizes the need to respect fundamental human rights in the development and deployment of AI systems. It includes guidelines for ensuring that AI systems do not violate human dignity, privacy, or other fundamental rights.
2. Human agency and oversight: This section stresses the importance of human oversight in AI decision-making. It provides guidelines for ensuring that humans remain in control of AI systems and that decisions made by AI systems are explainable and auditable.
3. Technical robustness and safety: This section provides guidelines for ensuring the technical robustness and safety of AI systems. It covers topics such as system reliability, cybersecurity, and resilience.
4. Privacy and data governance: This section focuses on the need to protect personal data and ensure proper data governance in the development and deployment of AI systems. It provides guidelines for ensuring that personal data is collected, processed, and stored in a transparent and secure manner.
5. Transparency: This section stresses the importance of transparency in AI decision-making. It provides guidelines for ensuring that AI decision-making processes are explainable and that users can understand how decisions are made.
6. Diversity, non-discrimination, and fairness: This section provides guidelines for ensuring that AI systems do not perpetuate bias and discrimination. It covers topics such as data bias, fairness in decision-making, and inclusivity.
7. Societal and environmental wellbeing: This section emphasizes the need to consider the societal and environmental impact of AI systems. It provides guidelines for ensuring that AI systems are developed and deployed in a way that promotes social and environmental wellbeing.
8. Accountability: This section provides guidelines for ensuring accountability in AI development and deployment. It covers topics such as legal compliance, risk management, and stakeholder engagement.