3.106 Institutions should establish an approved and documented framework to review the reliability, fairness, accuracy and relevance of the algorithms, models and Data used prior to deployment of a material Big Data Analytics and AI Application, and on a periodic basis after deployment, to verify that the models are behaving as designed and intended. The framework should cover, where relevant:
a. The various types and frequencies of reviews, including continuous monitoring, re-training, calibration and validation;
b. Scenarios and criteria that would trigger a re-training, calibration, re-development or discontinuation of the model, such as a significant change in input Data or external/economic changes;
c. Review of material Big Data Analytics and AI model outcomes for fairness or unintentional bias (e.g. through monitoring and analysis of false positive and/or false negative rates, as in the sketch following this list); and
d. Review of continuity or contingency measures such as human intervention or the use of conventional processes (i.e. processes that do not use Big Data Analytics and AI).
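The following is a minimal, illustrative sketch (in Python) of the kind of false positive and false negative rate monitoring referred to in point c above. The record fields and group labels are assumptions for illustration; this is not a prescribed implementation.

# Illustrative only: per-group false positive / false negative rates
# to support the fairness review in 3.106(c). The keys "group",
# "label" and "prediction" are hypothetical field names.
from collections import defaultdict

def rates_by_group(records):
    """records: iterable of dicts with 0/1 'label' and 'prediction' and a 'group' key."""
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for r in records:
        c = counts[r["group"]]
        if r["label"] == 1:
            c["pos"] += 1
            if r["prediction"] == 0:
                c["fn"] += 1          # actual positive missed by the model
        else:
            c["neg"] += 1
            if r["prediction"] == 1:
                c["fp"] += 1          # actual negative flagged by the model
    return {
        g: {
            "false_positive_rate": c["fp"] / c["neg"] if c["neg"] else None,
            "false_negative_rate": c["fn"] / c["pos"] if c["pos"] else None,
        }
        for g, c in counts.items()
    }

# Example usage: material divergence between groups would trigger a fairness review.
sample = [
    {"group": "A", "label": 1, "prediction": 1},
    {"group": "A", "label": 0, "prediction": 1},
    {"group": "B", "label": 1, "prediction": 0},
    {"group": "B", "label": 0, "prediction": 0},
]
print(rates_by_group(sample))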
3.107 When the use of a material Big Data Analytics and AI model results in a technical or model-related error or failure, Institutions should:
a. Be able to swiftly detect the error;
b. Establish a process to review the error and rectify it in a timely manner, which may include notifying another function (illustrated in the sketch following this list); and
c. Report the error to relevant stakeholders if material.
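A minimal sketch of how such detection, review routing and materiality-based reporting might be wired around a model call is shown below. The escalation hooks (notify_risk_function, report_to_stakeholders) and the materiality check are hypothetical placeholders, not prescribed interfaces.

# Illustrative only: an error-handling wrapper reflecting 3.107
# (swift detection, review and rectification routing, reporting if material).
import logging

logger = logging.getLogger("bdaai_model")

def notify_risk_function(err):
    # Hypothetical hook: route the error to the function responsible for review.
    logger.warning("Escalated to review function: %s", err)

def report_to_stakeholders(err):
    # Hypothetical hook: report errors assessed as material.
    logger.error("Material model error reported: %s", err)

def guarded_predict(model_fn, features, materiality_check):
    try:
        return model_fn(features)
    except Exception as err:              # swift detection of a technical or model error
        logger.exception("Model error detected")
        notify_risk_function(err)         # trigger review and timely rectification
        if materiality_check(err):
            report_to_stakeholders(err)   # report to relevant stakeholders if material
        return None                       # caller may fall back to a conventional process

# Usage (hypothetical): guarded_predict(scoring_model, applicant_features, lambda e: True)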
3.108 Institutions should establish a robust system for versioning and maintain a record of each version of the material Big Data Analytics and AI model, including but not limited to, where applicable:
a. New Data used;
b. Revisions to the documentation;
c. Revisions to the algorithm;
d. Changes in the way variables are selected and used in the model or, where possible, the names of the variables; and
e. The expected outcome of the newly calibrated, re-trained or re-developed model (see the illustrative record structure following this list).
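A minimal sketch of a version record covering the elements listed above is given below. The field names and example values are assumptions for illustration only, not a prescribed schema.

# Illustrative only: one record per model version, capturing the elements in 3.108(a)-(e).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelVersionRecord:
    version: str
    release_date: date
    new_data_sources: list = field(default_factory=list)        # 3.108(a) new Data used
    documentation_revisions: list = field(default_factory=list) # 3.108(b)
    algorithm_revisions: list = field(default_factory=list)     # 3.108(c)
    variable_changes: list = field(default_factory=list)        # 3.108(d)
    expected_outcome: str = ""                                   # 3.108(e)

# Example entry (values are hypothetical).
record = ModelVersionRecord(
    version="2.1.0",
    release_date=date(2024, 1, 1),
    new_data_sources=["updated transaction history feed"],
    algorithm_revisions=["recalibrated decision threshold"],
    variable_changes=["renamed input variable for clarity"],
    expected_outcome="lower false negative rate at an equivalent false positive rate",
)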