Deal of The Day! Hurry Up, Grab the Special Discount - Save 25% - Ends In 00:00:00 Coupon code: SAVE25
Welcome to Pass4Success

- Free Preparation Discussions

IAPP AIGP Exam - Topic 1 Question 42 Discussion

Actual exam question for IAPP's AIGP exam
Question #: 42
Topic #: 1
[All AIGP Questions]

Business A sells software that provides users with writing and grammar assistance. Business B is a cloud services provider that trains its own AI models.

* Business A has decided to add generative AI features to their software.

* Rather than create their own generative AI model, Business A has chosen to license a model from Business B.

* Business A will then integrate the model into their writing assistance software to provide generative AI capabilities.

* Business A is most concerned that its writing assistance software could recommend toxic or obscene text to its users.

Which of the following governance processes should Business A take to best protect its users against potentially inappropriate text?

Show Suggested Answer Hide Answer
Suggested Answer: B

Business A is integrating a generative AI model licensed from a third party (Business B) and is primarily concerned with the risk of toxic or obscene outputs being delivered to users. In this scenario,testing and validationof the AI model for such content risks is the most direct and effective governance strategy.

According to theAI Governance in Practice Report 2025, organizations thatdeployAI must engage inperformance monitoring protocolsand ensure systems perform adequately for theirintended purposes, including filtering harmful content:

''Operational governance... development of: Performance monitoring protocols to ensure systems perform adequately for their intended purposes.'' (p. 12)

''Product governance... includes: System impact assessments to identify and address risk prior to product development or deployment.'' (p. 11)

Furthermore, under theEU AI Act, which sets the global standard many organizations aim to align with, there is a clear obligation to test and monitor systems for potential harmful behavior:

''The act imposes regulatory obligations... such as establishing appropriate accountability structures,assessing system impact, providing technical documentation,establishing risk management protocols and monitoring performance...'' (p. 7)

Option B directly reflects this best practice ofpre-deployment testing and validationto ensure that the model aligns with Business A's minimum content safety requirements.

Let's now evaluate the incorrect options:

A . Fine-tuning on verified user-generated textmay improve model alignment but does not guarantee that the model will generalize correctly, especially if Business A lacks access to model internals (common in third-party licensing scenarios). Fine-tuning also introduces its own risks and may be contractually restricted.

C . A user reporting featureisreactive, not preventive. While helpful for long-term monitoring and mitigation, it does not prevent the initial harm of toxic outputs, which isBusiness A's primary concern.

D . Requesting documentation from Business Bis useful for transparency and risk management, but it does not replaceindependent verificationthat the model meets Business A's content safety standards.

Thus,testing the model's behavior for unacceptable outputs before deploymentis the most aligned approach with AI governance best practices and obligations.


Contribute your Thoughts:

0/2000 characters
Option B seems like the most practical choice. Testing is key!
upvoted 0 times
...
Loreta
21 days ago
I think asking for documentation on the training data is important too. Option D could give insights into potential biases or issues in the model.
upvoted 0 times
...
Sanda
26 days ago
I feel like having a user reporting feature is crucial. It could help catch issues that the model might miss, so maybe option C is worth considering.
upvoted 0 times
...
Sherill
1 month ago
I'm not entirely sure, but I think fine-tuning the model with verified text could help. Option A might be risky if the training data isn't diverse enough.
upvoted 0 times
...
Stefanie
1 month ago
I remember we discussed the importance of testing AI models to ensure they meet safety standards. Option B seems like a solid choice.
upvoted 0 times
...

Save Cancel