Responsible AI

Responsible AI Framework for MALENA

Complex AI systems, such as large language models based on neural networks, risk becoming black boxes when it is difficult to understand how their results are generated. To trust the output of machine learning (ML) algorithms, individuals and organizations must know that the AI in use is fair, reliable, unbiased, explainable, and will not cause harm. By embedding ethical principles into the development and deployment of AI applications and processes, developers can build ML systems that merit trust.

IFC seeks to apply a responsible approach to developing and deploying MALENA based on the principles of trustworthiness and explainability. IFC identified best practices for governing MALENA's data and models through a review of the literature and of existing responsible AI frameworks.

For MALENA, the approach entails three elements: grounding features that allow users to trace output back to the source documents, a model evaluation dashboard covering six metrics, and a machine learning operations process that ensures traceability and auditability.

Grounding Features

Trustworthiness in MALENA is enabled through a grounding feature in the user interface that allows users to trace each model prediction back to its source document, providing full transparency back to the input data.
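As an illustrative sketch of the pattern only (the names and data structures below are hypothetical, not MALENA's implementation), a grounded prediction can carry references to the passages it was derived from, so the user interface can render citations alongside each output:

```python
from dataclasses import dataclass

@dataclass
class SourcePassage:
    """A passage from an input document that supports a prediction."""
    document_id: str   # identifier of the source document
    page: int          # page where the passage appears
    text: str          # the supporting passage itself

@dataclass
class GroundedPrediction:
    """A model output paired with the evidence it was derived from."""
    prediction: str
    sources: list[SourcePassage]  # lets users trace output to input data

def render_with_citations(result: GroundedPrediction) -> str:
    """Format a prediction with inline references to its source passages."""
    citations = "; ".join(
        f"{s.document_id}, p. {s.page}" for s in result.sources
    )
    return f"{result.prediction} [Sources: {citations}]"

# Hypothetical example: a prediction traced to one ESG report passage.
example = GroundedPrediction(
    prediction="The project includes a community grievance mechanism.",
    sources=[SourcePassage("ESIA-2023.pdf", 47, "A grievance mechanism ...")],
)
print(render_with_citations(example))
```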

Model Evaluation Dashboard

Explainability in MALENA is supported by a model evaluation dashboard covering six metrics, giving users and reviewers ongoing visibility into model performance.
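The six metrics are not enumerated in this section. As a minimal sketch assuming a standard binary classification setup, a six-metric dashboard payload could be computed with scikit-learn as follows; every metric choice below is an illustrative assumption, not MALENA's actual set:

```python
from sklearn.metrics import (
    accuracy_score, precision_score, recall_score,
    f1_score, roc_auc_score, matthews_corrcoef,
)

def evaluation_dashboard(y_true, y_pred, y_score):
    """Compute an illustrative six-metric evaluation summary.

    These six metrics are assumptions for the sketch; the actual
    MALENA dashboard may track a different set.
    """
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        "roc_auc": roc_auc_score(y_true, y_score),
        "mcc": matthews_corrcoef(y_true, y_pred),
    }

# Toy example: binary labels with predicted classes and scores.
metrics = evaluation_dashboard(
    y_true=[1, 0, 1, 1, 0],
    y_pred=[1, 0, 0, 1, 0],
    y_score=[0.9, 0.2, 0.4, 0.8, 0.1],
)
for name, value in metrics.items():
    print(f"{name}: {value:.3f}")
```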

Machine Learning Operations

The MALENA machine learning operations (“MLOps”) process supports traceability and auditability. Model metadata and artifacts, including training data, experiment details, and model performance metrics, are stored and governed within a model registry.
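As a hedged sketch of what such a registry might record (the classes, fields, and paths below are assumptions for illustration, not MALENA's actual tooling), each model version can bundle its training-data reference, experiment details, and performance metrics so that any prediction can be audited back to a specific run:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelRecord:
    """One entry in a model registry; fields are illustrative assumptions."""
    model_name: str
    version: str
    training_data_uri: str   # pointer to the exact training dataset
    experiment_params: dict  # hyperparameters and run configuration
    metrics: dict            # evaluation results for this version
    registered_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class ModelRegistry:
    """Minimal in-memory registry supporting traceability and audit."""

    def __init__(self):
        self._records: list[ModelRecord] = []

    def register(self, record: ModelRecord) -> None:
        """Add a new model version to the registry."""
        self._records.append(record)

    def audit_trail(self, model_name: str) -> list[ModelRecord]:
        """Return every registered version of a model, oldest first."""
        return [r for r in self._records if r.model_name == model_name]

# Hypothetical usage: register one version, then inspect its audit trail.
registry = ModelRegistry()
registry.register(ModelRecord(
    model_name="malena-classifier",
    version="1.0.0",
    training_data_uri="s3://bucket/training/v1/",  # hypothetical path
    experiment_params={"learning_rate": 2e-5, "epochs": 3},
    metrics={"f1": 0.91},
))
for record in registry.audit_trail("malena-classifier"):
    print(record.version, record.registered_at, record.metrics)
```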