
Design

3.98 Institutions should ensure that the models for their Big Data Analytics and AI Applications are reliable, transparent, and explainable, commensurate with the materiality of those Applications. Accordingly, Institutions, where appropriate, should consider:
  a. Reliability: Implementing measures to ensure material Big Data Analytics and AI Applications are reliable and accurate, behave predictably, and operate within the boundaries of applicable rules and regulations, including any laws on data protection or cyber security;
  b. Transparency: Being transparent about how they use Big Data Analytics and AI in their business processes and, where reasonably appropriate, how the Big Data Analytics and AI Applications function; and
  c. Technical Clarity: Implementing measures to ensure the technical processes and decisions of a Big Data Analytics and AI model can be readily interpreted and explained, so as to avoid the threat of “black-box” models. The level of technical clarity should be appropriate and commensurate with the purpose and materiality of the Big Data Analytics and AI Application (e.g. where the model results have significant implications for decision making).
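As an illustration of the technical clarity expectation in 3.98(c), the minimal sketch below reports permutation importance from scikit-learn to show which inputs drive a trained model's decisions. The estimator, dataset and feature names are illustrative assumptions, not prescribed tooling.

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative data and model; a real Application would use its own
# governed Data and the estimator actually deployed.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(8)])
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much validation performance degrades when
# each input is shuffled. Large drops identify the inputs driving
# decisions, which can then be explained to reviewers and documented.
result = permutation_importance(model, X_valid, y_valid, n_repeats=10, random_state=0)
explanation = pd.Series(result.importances_mean, index=X_valid.columns).sort_values(ascending=False)
print(explanation.round(4))
```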
3.99 Institutions should adopt an effective Data governance framework to ensure that Data used by the material Big Data Analytics and AI model is accurate, complete, consistent, secure, and provided in a timely manner for the Big Data Analytics and AI Application to function as designed. The framework should document the extent to which the Data meets the Institution’s requirements for data quality, gaps in data quality that may exist and steps the Institution will take, where possible, to resolve these gaps over time.
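A minimal sketch of the automated checks that could sit inside such a Data governance framework, covering completeness, consistency, timeliness and uniqueness. The column names, rules and freshness window are illustrative assumptions, not prescribed values.

```python
import pandas as pd

def data_quality_report(df: pd.DataFrame, max_age: pd.Timedelta) -> dict:
    """Summarise completeness, consistency, timeliness and uniqueness for a batch."""
    now = pd.Timestamp.now(tz="UTC")
    return {
        # Completeness: share of non-missing values per column.
        "completeness": (1 - df.isna().mean()).round(3).to_dict(),
        # Consistency: simple rule checks on known fields.
        "non_negative_amounts": bool((df["transaction_amount"] >= 0).all()),
        # Timeliness: is the newest record within the agreed freshness window?
        "timely": bool((now - df["ingested_at"].max()) <= max_age),
        # Uniqueness: duplicated keys usually indicate upstream issues.
        "duplicate_ids": int(df["record_id"].duplicated().sum()),
    }

# Illustrative batch; in practice the report would be produced per feed or run
# and gaps tracked against the Institution's documented quality requirements.
batch = pd.DataFrame({
    "record_id": [1, 2, 2],
    "transaction_amount": [120.0, 35.5, -10.0],
    "ingested_at": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-02"], utc=True),
})
print(data_quality_report(batch, max_age=pd.Timedelta(days=7)))
```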
3.100 Institutions should make regular efforts to ensure Data used to train the material Big Data Analytics and AI model is representative (i.e. that the Data, and the inferences drawn from it, are relevant to the Big Data Analytics and AI Application) and produces predictable, reliable outcomes that meet objectives.
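One way to test representativeness is to compare the distribution of each training feature against recent production Data. The sketch below uses a two-sample Kolmogorov-Smirnov test from SciPy, with synthetic data and an illustrative significance level.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_income = rng.normal(loc=50_000, scale=12_000, size=5_000)  # training-time Data
live_income = rng.normal(loc=58_000, scale=15_000, size=1_000)   # recent production Data

# Two-sample KS test: a small p-value suggests the live population differs
# materially from the Data the model was trained on.
statistic, p_value = ks_2samp(train_income, live_income)
if p_value < 0.05:  # illustrative significance level
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.4f}); "
          "training Data may no longer be representative and should be reviewed.")
else:
    print("No material distribution shift detected for this feature.")
```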
3.101 Institutions should be able to promptly suspend material Big Data Analytics and AI Applications at the Institution’s discretion, such as in the event of a heightened cyber threat, an information security breach, or a malfunction of the model.
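A minimal sketch of a suspension ("kill switch") control: every scoring call first checks a centrally controlled flag so authorised staff can halt the Application immediately. The in-memory flag and class names are illustrative stand-ins for whatever configuration or feature-flag service an Institution actually uses.

```python
import threading

class ModelGateway:
    """Wraps a deployed model so the Application can be suspended on demand."""

    def __init__(self, model):
        self._model = model
        self._suspended = threading.Event()

    def suspend(self, reason: str) -> None:
        # Invoked by authorised risk/security staff, e.g. on a cyber-threat alert.
        self._suspended.set()
        print(f"Application suspended: {reason}")

    def resume(self) -> None:
        self._suspended.clear()

    def predict(self, features):
        if self._suspended.is_set():
            # Fail closed: callers fall back to a manual or rule-based process.
            raise RuntimeError("Model suspended; route the request to manual review.")
        return self._model.predict(features)

class _DummyModel:
    def predict(self, features):
        return [0 for _ in features]

gateway = ModelGateway(_DummyModel())
gateway.suspend("elevated cyber-threat level reported by security operations")
try:
    gateway.predict([[1.0, 2.0]])
except RuntimeError as exc:
    print(exc)
```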
3.102 Institutions should, where relevant, conduct rigorous, independent validation and testing of material trained Big Data Analytics and AI models to ensure the accuracy, appropriateness, and reliability of the models prior to deployment. Institutions should ensure the model is reviewed to identify any unintuitive or false causal relationships. The validation may be carried out by an independent function within the Institution or by an external organisation.
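A minimal sketch of pre-deployment acceptance testing: a candidate model is evaluated on held-out data not used during development and must meet agreed criteria before release. The metrics, thresholds and estimator are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Acceptance criteria agreed with the independent validation function;
# the metrics and thresholds here are illustrative, not prescribed.
ACCEPTANCE_CRITERIA = {"accuracy": 0.80, "roc_auc": 0.85}

X, y = make_classification(n_samples=3000, n_features=10, random_state=1)
X_dev, X_holdout, y_dev, y_holdout = train_test_split(X, y, test_size=0.3, random_state=1)

candidate = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)

# Evaluate on a hold-out set the development team did not use.
scores = {
    "accuracy": accuracy_score(y_holdout, candidate.predict(X_holdout)),
    "roc_auc": roc_auc_score(y_holdout, candidate.predict_proba(X_holdout)[:, 1]),
}
failures = {name: round(value, 3) for name, value in scores.items()
            if value < ACCEPTANCE_CRITERIA[name]}
print("PASS: eligible for deployment" if not failures else f"FAIL: {failures}")
```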
3.103 Institutions should maintain documentation outlining the design of the material Big Data Analytics and AI model, including but not limited to the following, where applicable (an illustrative documentation sketch follows this list):
  a. The input Data source and Data description (types and use of Data);
  b. The Data quality checks and Data transformations conducted;
  c. Reasons and justifications for specific model design and development choices;
  d. Methodology or numerical analyses and calculations conducted;
  e. Results and expected outcomes;
  f. Quantitative evaluation and testing metrics used to determine soundness of the model and its results;
  g. Model usage and implementation;
  h. Form and frequency of model validation, monitoring and review; and
  i. Assumptions or limitations of the model with justifications.
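The items above could be captured in a structured, version-controlled record. The sketch below maps fields one-to-one to (a) through (i) and serialises the record to JSON; it is one possible form rather than a prescribed template, and all example values are illustrative.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ModelDesignRecord:
    data_sources: list[str]                 # (a) input Data source and description
    data_quality_checks: list[str]          # (b) quality checks and transformations
    design_rationale: str                   # (c) reasons for design choices
    methodology: str                        # (d) numerical analyses and calculations
    expected_outcomes: str                  # (e) results and expected outcomes
    evaluation_metrics: dict[str, float]    # (f) quantitative testing metrics
    usage_and_implementation: str           # (g) model usage and implementation
    validation_schedule: str                # (h) form and frequency of validation, monitoring, review
    assumptions_and_limitations: list[str]  # (i) assumptions or limitations with justifications

# Illustrative record; all values are examples only.
record = ModelDesignRecord(
    data_sources=["core banking transactions", "KYC profile attributes"],
    data_quality_checks=["null-rate thresholds", "range checks on amounts"],
    design_rationale="Gradient boosting chosen for tabular data and available explainability tooling.",
    methodology="Binary classification of transaction risk, retrained monthly.",
    expected_outcomes="High-risk transactions flagged for analyst review.",
    evaluation_metrics={"roc_auc": 0.91, "precision_at_1pct": 0.74},
    usage_and_implementation="Batch scoring; results surfaced in the case-management tool.",
    validation_schedule="Independent validation before deployment and annually thereafter.",
    assumptions_and_limitations=["Training data covers 24 months of history only."],
)
print(json.dumps(asdict(record), indent=2))
```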
3.104 Institutions should introduce controls to ensure the confidentiality and integrity of the code used in the material Big Data Analytics and AI Application, so that the code can only be accessed and altered by authorized persons.
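A minimal sketch of one such integrity control: before the Application loads its scoring code, the file's SHA-256 digest is compared against a manifest approved through change management. File names and manifest handling are illustrative, and this complements rather than replaces repository access controls and code review.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_integrity(path: Path, approved_digests: dict[str, str]) -> None:
    expected = approved_digests.get(path.name)
    if expected is None or sha256_of(path) != expected:
        raise RuntimeError(f"{path.name} failed integrity check; refusing to load.")

# Illustrative usage: the manifest would be produced by the release/approval
# process and stored where the runtime host cannot modify it.
code_file = Path("scoring_model.py")
code_file.write_text("def score(features):\n    return sum(features)\n")
approved = {code_file.name: sha256_of(code_file)}  # captured at approval time

verify_integrity(code_file, approved)              # passes for the approved code

code_file.write_text("def score(features):\n    return 0  # tampered\n")
try:
    verify_integrity(code_file, approved)          # detects an unauthorised change
except RuntimeError as exc:
    print(exc)
```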
3.105 Institutions should identify and monitor the unique risks arising from use of the material Big Data Analytics and AI Application and establish appropriate controls to mitigate those risks.