Fostering Artificial Intelligence Trust for Humans towards the optimization of trustworthiness through large-scale pilots in critical domains
Secure Society · Health & Wellbeing · Artificial Intelligence

The increasing requirement for trustworthy AI systems across diverse application domains has become a pressing need, not least because of the critical role that AI plays in the ongoing digital transformation addressing urgent socio-economic needs. Despite numerous recommendations and standards, most AI practitioners and decision makers still prioritize system performance as the main metric in their workflows, often neglecting to verify and quantify core attributes of trustworthiness, including traceability, robustness, security, transparency and usability. In addition, trustworthiness is not assessed throughout the lifecycle of AI system development, so developers often fail to gain a holistic view across different AI risks. Lastly, the lack of a unified, multi-disciplinary AI, Data and Robotics ecosystem for assessing trustworthiness across critical AI application domains hampers the definition and implementation of a robust AI paradigm-shift framework towards increased trustworthiness and accelerated AI adoption.
To address these critical unmet needs, the FAITH innovation action will develop and validate a human-centric trustworthiness optimization ecosystem that enables measuring, optimizing and counteracting the risks associated with AI adoption and trustworthiness in critical domains, namely robotics, education, media, transport, healthcare, active ageing and industrial processes, through seven international Large Scale Pilots (LSPs). Notably, cross-fertilization actions will create a joint outcome that brings together the visions and specificities of all the pilots. To this end, the project will adopt a dynamic risk management approach following EU legislative instruments and ENISA guidelines and deliver tools to be widely used across different countries and settings, while diverse stakeholder communities will be engaged in each pilot, delivering seven sector-specific reports on trustworthiness to accelerate AI take-up.
FAITH aims to provide the practitioners and stakeholders of AI systems not only with a comprehensive analysis of the foundations of AI trustworthiness but also with an operational playbook for assessing and building trustworthy AI systems and for continuously measuring their trustworthiness. The key objectives include:
establish a human-centric trustworthiness management framework aligned with EU socio-ethical values and current standardization instruments to strengthen the EU AI, Data and Robotics ecosystem in application-oriented research
deliver the FAITH AI_TAF for assessing risks related to trustworthiness and validate it in large-scale pilots
effectively design and implement the LSPs, harnessing lessons learned through cross-fertilisation activities to validate the FAITH AI_TAF framework and the entire FAITH ecosystem
apply standardization procedures towards transparency, openness and interoperable data sharing in large-scale case studies
establish a concrete business model and exploitation strategy for sustainable reuse
IT Innovation leads the quality control procedures for FAITH coordination activities, the FAITH System Trust Modeller (STM) piloting technical infrastructure, and the evaluation of methodology (NIST) and policy recommendations.
FAITH draws on IT Innovation's extensive experience in risk modelling and applies the approach to complex artificial intelligence systems within critical domains such as healthcare. The project builds on the Spyderisk methodology and tools.
The FAITH project is a 48-month project funded by the EC Horizon Europe programme.
Coordinator: IDRYMA TECHNOLOGIAS KAI EREVNAS
Website: TO BE PUBLISHED
More information: TO BE PUBLISHED
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No TBD.