Welcome to Pass4Success


SISA CSPAI Exam - Topic 5 Question 6 Discussion

Actual exam question for SISA's CSPAI exam
Question #: 6
Topic #: 5

Which framework is commonly used to assess risks in Generative AI systems according to NIST?

Suggested Answer: A

Contribute your Thoughts:

Casie
2 months ago
C is just wrong, financial risks are only part of it!
Chi
2 months ago
Wait, is it really A? Seems too straightforward.
Quinn
2 months ago
I thought it was B at first, but A makes more sense.
Sheron
3 months ago
Totally agree, A is the best choice here!
Ilene
3 months ago
Definitely A! The AI RMF is the way to go.
Celia
3 months ago
I feel like the other options don’t really apply to AI systems, but I’m not entirely confident about the specifics of the AI RMF.
Lawanda
4 months ago
I’m a bit confused because I thought NIST had multiple frameworks, but I can’t recall if they specifically focused on generative AI.
Lashaunda
4 months ago
I remember practicing a question about AI risk assessments, and I feel like the AI RMF was highlighted as a key framework.
Wenona
4 months ago
I think the AI Risk Management Framework is the right answer, but I’m not completely sure if it’s the only one mentioned by NIST.
Florinda
4 months ago
I'm a bit confused by this question. Assessing risks in Generative AI seems like a complex topic, and I'm not familiar with the specific NIST guidance on it. I'll have to make an educated guess here.
Cathern
4 months ago
The question mentions NIST, so I'm guessing the answer is related to a NIST framework. A general IT risk assessment or focusing only on financial risks doesn't seem quite right. I'll go with the AI RMF option.
Kate
4 months ago
Hmm, I'm not too sure about this one. I know NIST has guidance on AI risk, but I can't recall the exact framework they recommend. I'll have to think this through carefully.
Pansy
5 months ago
I think the AI Risk Management Framework (AI RMF) is the answer here. It's specifically designed to assess risks in Generative AI systems according to NIST.
Deandrea
5 months ago
Using outdated models? That's like trying to assess the risks of a nuclear reactor using a model from the Stone Age. Yikes!
Blair
5 months ago
Financial risks? That's like trying to put a price tag on Skynet taking over the world. C'mon, man!
Natalya
5 months ago
Option B sounds like a cop-out. You can't just ignore the unique risks of AI systems.
Gracia
2 months ago
Option A seems like the best choice for real assessment.
Eugene
2 months ago
We need frameworks that address those unique issues.
Cammy
2 months ago
Exactly! Ignoring AI risks is a big mistake.
Zachary
3 months ago
I totally agree! AI has its own set of challenges.
Lamonica
5 months ago
I think the answer is A) The AI Risk Management Framework (AI RMF) for evaluating trustworthiness.
Alesia
6 months ago
I think the AI Risk Management Framework (AI RMF) for evaluating trustworthiness is the correct answer. NIST seems to be focused on that recently.
Xuan
5 months ago
I agree, the AI Risk Management Framework (AI RMF) is the way to go.
