AI systems certification in health applications

Alexandre Perera




    Universitat Politècnica de Catalunya (UPC), Spain


    Healthcare systems and applications follow a zero-risk approach, which results in strong, and often lengthy, control and regulatory processes. This zero-risk paradigm has not yet been successfully applied to artificial intelligence (AI) based systems in health. The researchers have designed an auditing toolkit that standardizes quality control for AI health models and includes a database of risks previously identified in healthcare AI systems. Once a new system has been audited, a cryptographic mark serves as a technological seal verifying compliance for each prediction.
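    The abstract does not detail how the cryptographic mark is constructed. As a minimal illustration, assuming an HMAC-based scheme in which an auditing authority holds a secret key, each prediction could be sealed over the audited model identifier, a digest of the input, and the predicted output; all function names and parameters below are hypothetical, not the toolkit's actual API.

```python
import hashlib
import hmac
import json


def seal_prediction(secret_key: bytes, model_id: str,
                    input_digest: str, prediction: str) -> str:
    """Compute an HMAC-SHA256 seal binding a prediction to an audited model.

    The payload is canonicalised with sorted JSON keys so that the same
    (model, input, prediction) triple always produces the same seal.
    """
    payload = json.dumps(
        {"model": model_id, "input": input_digest, "prediction": prediction},
        sort_keys=True,
    ).encode()
    return hmac.new(secret_key, payload, hashlib.sha256).hexdigest()


def verify_seal(secret_key: bytes, model_id: str,
                input_digest: str, prediction: str, seal: str) -> bool:
    """Recompute the seal and compare in constant time."""
    expected = seal_prediction(secret_key, model_id, input_digest, prediction)
    return hmac.compare_digest(expected, seal)


# Hypothetical usage: seal one prediction and let an end user verify it.
key = b"audit-authority-secret"
digest = hashlib.sha256(b"anonymised-patient-record").hexdigest()
seal = seal_prediction(key, "triage-model-v1", digest, "low-risk")
assert verify_seal(key, "triage-model-v1", digest, "low-risk", seal)
# Any tampering with the prediction invalidates the seal.
assert not verify_seal(key, "triage-model-v1", digest, "high-risk", seal)
```

    A production scheme would more likely use asymmetric signatures, so that end users can verify seals with a public key without being able to forge them; the HMAC variant above only sketches the binding between model, input, and prediction.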

    A proof of concept of this toolkit has already been tested in a real use case at the Hospital Sant Joan de Déu, where it reduced both the risk of misprediction and the time for regulatory approval of a new AI-based system from two years to six months.

    The team's current goal is to transfer this set of tools into a minimum viable product through a spin-off company, aiming to reach the market in the short term. Increased trust in AI systems will not only reduce the time and resources needed to bring new scientific advances to clinical practice, but will also ensure traceability and empower end users to verify the compliance of any AI provider in the healthcare sector.