A company makes forecasts each quarter to decide how to optimize operations to meet expected demand. The company uses ML models to make these forecasts.
An AI practitioner is writing a report about the trained ML models to provide transparency and explainability to company stakeholders.
What should the AI practitioner include in the report to meet the transparency and explainability requirements?

A. Code for model training
B. Partial dependence plots (PDPs)
C. Sample data for training
D. Model convergence tables
Partial dependence plots (PDPs) are visual tools that show the relationship between a feature (or a set of features) and a machine learning model's predicted outcome. By illustrating how different input variables affect the model's predictions, they give stakeholders transparency into the model's behavior and make it explainable.
Option B (Correct): 'Partial dependence plots (PDPs)': PDPs show how the model's predictions change as the values of input features vary, giving stakeholders a clearer understanding of the model's decision-making process.
Option A: 'Code for model training' is incorrect because providing the raw code for model training may not offer transparency or explainability to non-technical stakeholders.
Option C: 'Sample data for training' is incorrect as sample data alone does not explain how the model works or its decision-making process.
Option D: 'Model convergence tables' is incorrect. While convergence tables can show the training process, they do not provide insights into how input features affect the model's predictions.
AWS AI Practitioner Reference:
Explainability in AWS Machine Learning: AWS provides various tools for model explainability, such as Amazon SageMaker Clarify, which includes PDPs to help explain the impact of different features on the model's predictions.