
IAPP AIGP Exam - Topic 3 Question 36 Discussion

Actual exam question for IAPP's AIGP exam
Question #: 36
Topic #: 3
[All AIGP Questions]

Scenario:

An enterprise is evaluating multiple third-party generative AI tools to integrate into its platform. As part of its AI governance policy, it is assessing the most effective methods to reduce risks related to bias, data misuse, and liability when using third-party solutions.

All of the following are commonly adopted processes and policies in reducing potential risks introduced by third-party AI tools or applications EXCEPT:

Suggested Answer: B

The correct answer is B. Allowing PII to be freely entered into prompts without safeguards is considered a major privacy and security risk and is not a responsible governance practice.

From the AIGP ILT Guide -- Generative AI & Third-Party Risk Management:

"Use of personal or sensitive information in AI prompts can result in unintended exposure, regulatory breaches, and downstream liability."

The AI Governance in Practice Report 2024 highlights:

"PII should be minimized or protected by design. Prompt engineering should prevent entry of personally identifiable data unless legally and technically safeguarded."

A, C, and D are established best practices under responsible AI procurement and use.
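The safeguard the explanation describes, blocking or stripping personally identifiable data before a prompt reaches a third-party tool, can be sketched as a simple pre-processing guard. This is a minimal illustration, not an exam-endorsed implementation: the pattern names and regexes below are hypothetical examples of regex-based redaction, and real deployments would need broader PII coverage (names, addresses, IDs) than a few patterns can provide.

```python
import re

# Hypothetical pre-processing guard: redact common PII patterns from a
# prompt before it is sent to a third-party generative AI tool.
# Pattern coverage here is illustrative, not exhaustive.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_pii(prompt: str) -> str:
    """Replace each PII match with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

text = "Contact jane.doe@example.com or 555-123-4567 about SSN 123-45-6789."
print(redact_pii(text))
# prints: Contact [EMAIL REDACTED] or [PHONE REDACTED] about SSN [SSN REDACTED].
```

The design point matches the quoted guidance: minimizing PII "by design" means the redaction runs on every prompt automatically, rather than relying on users to police their own input.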

===========


Contribute your Thoughts:

Brett
29 days ago
Yes, B should never be allowed. It opens too many doors for misuse.
upvoted 0 times
...
Katheryn
1 month ago
B is definitely the worst option. We need to protect privacy!
upvoted 0 times
...
Sang
1 month ago
I feel like A is also a concern, but not as much as B.
upvoted 0 times
...
Mari
1 month ago
Definitely B. It contradicts data protection principles.
upvoted 0 times
...
Walker
2 months ago
Agreed! Allowing PII in prompts is risky.
upvoted 0 times
...
Celestina
2 months ago
D is essential for keeping everything in check!
upvoted 0 times
...
Leonor
2 months ago
Wait, are we really okay with using public info like that?
upvoted 0 times
...
Leoma
3 months ago
A and C are must-haves for responsible AI use!
upvoted 0 times
...
Alease
3 months ago
B seems risky. Why allow PII in prompts?
upvoted 0 times
...
Shonda
3 months ago
Definitely need those liability clauses in contracts!
upvoted 0 times
...
Sherita
3 months ago
Haha, B? Seriously? That's like inviting hackers to come and steal your data. Not a chance!
upvoted 0 times
...
Kirk
3 months ago
I'd go with B as the exception. The rest seem like pretty standard AI risk mitigation practices.
upvoted 0 times
...
Dick
3 months ago
Definitely B. Incorporating PII is a huge no-no for any responsible AI governance policy.
upvoted 0 times
...
Lilli
4 months ago
Reviewing new use cases by a governance body sounds familiar, but I can't recall if that's always done. Maybe D is the exception?
upvoted 0 times
...
Juan
4 months ago
I think requiring a bias audit is a good practice, like we practiced in our case studies. So, C seems correct to me.
upvoted 0 times
...
Natalie
4 months ago
This is a good one. I'm going to make sure I understand the context first - an enterprise evaluating third-party AI tools and wanting to reduce risks. Then I'll carefully analyze each option to find the one that doesn't fit.
upvoted 0 times
...
Lauran
4 months ago
Okay, I think I've got this. The question is asking about the process or policy that is NOT commonly adopted to reduce risks from third-party AI tools. I'll need to evaluate each option and identify the one that's the outlier.
upvoted 0 times
...
Lorrie
5 months ago
B is a terrible idea. Letting PII into the prompts? That's just asking for trouble.
upvoted 0 times
...
Caprice
5 months ago
This question is tricky! I think B is the odd one out.
upvoted 0 times
...
Willetta
5 months ago
I remember discussing how important it is to have liability clauses in contracts, so I think A is definitely a common practice.
upvoted 0 times
...
Chauncey
5 months ago
I'm not entirely sure, but allowing PII in prompts seems risky. I feel like that could lead to data misuse, which we studied in class.
upvoted 0 times
...
Jamey
5 months ago
Hmm, I'm a bit confused by the wording here. Are they asking about the processes and policies that are commonly adopted, or the ones that are not commonly adopted? I need to read this carefully to make sure I don't miss anything.
upvoted 0 times
...
Roxanne
6 months ago
This seems like a pretty straightforward governance question. I'm going to focus on understanding the key risks the enterprise is trying to mitigate and then identify the option that doesn't align with those.
upvoted 0 times
Winfred
8 days ago
Right! B stands out as the odd one out here.
upvoted 0 times
...
Denny
13 days ago
I agree. Options A, C, and D all focus on governance.
upvoted 0 times
...
Dorothy
18 days ago
Definitely. It goes against the idea of reducing data misuse.
upvoted 0 times
...
Rashida
23 days ago
Yeah, incorporating PII seems risky.
upvoted 0 times
...
Desmond
4 months ago
Sounds like a tricky situation. What do you think about option B?
upvoted 0 times
...
...
