New Year Sale 2026! Hurry Up, Grab the Special Discount - Save 25% - Ends In 00:00:00 Coupon code: SAVE25
Welcome to Pass4Success

- Free Preparation Discussions

Isaca AAISM Exam - Topic 3 Question 6 Discussion

Actual exam question for Isaca's AAISM exam
Question #: 6
Topic #: 3
[All AAISM Questions]

An attacker crafts inputs to a large language model (LLM) to exploit output integrity controls. Which of the following types of attacks is this an example of?

Show Suggested Answer Hide Answer
Suggested Answer: A

According to the AAISM framework, prompt injection is the act of deliberately crafting malicious or manipulative inputs to override, bypass, or exploit the model's intended controls. In this case, the attacker is targeting the integrity of the model's outputs by exploiting weaknesses in how it interprets and processes prompts. Jailbreaking is a subtype of prompt injection specifically designed to override safety restrictions, while evasion attacks target classification boundaries in other ML contexts, and remote code execution refers to system-level exploitation outside of the AI inference context. The most accurate classification of this attack is prompt injection.


AAISM Exam Content Outline -- AI Technologies and Controls (Prompt Security and Input Manipulation)

AI Security Management Study Guide -- Threats to Output Integrity

Contribute your Thoughts:

0/2000 characters
Alberta
10 hours ago
Wait, can you really exploit LLMs like that?
upvoted 0 times
...
Ivette
6 days ago
Evasion sounds right to me, D) Evasion.
upvoted 0 times
...
Audra
11 days ago
Haha, "exploit output integrity controls"? Sounds like a fancy way to say "trick the AI into saying something bad."
upvoted 0 times
...
Kristian
16 days ago
Remote code execution? Nah, that's too advanced for this. Evasion is more likely.
upvoted 0 times
...
Theola
21 days ago
Hmm, I'd say it's a jailbreaking attempt. Gotta break out of those pesky output controls!
upvoted 0 times
...
Deonna
26 days ago
Prompt injection, for sure! That's the classic way to mess with those LLMs.
upvoted 0 times
...
Quentin
1 month ago
I feel like prompt injection is the best fit, especially since it involves crafting inputs. But what if it's more about bypassing controls?
upvoted 0 times
...
Arlene
1 month ago
Evasion sounds familiar, but I can't recall the specifics. I might be mixing it up with other types of attacks we covered.
upvoted 0 times
...
Diego
1 month ago
I remember practicing a question about jailbreaking, but I don't think that's the right answer here. It feels more like a manipulation of the input.
upvoted 0 times
...
Celeste
2 months ago
I think this might be related to prompt injection, but I'm not entirely sure. We discussed it in class, and it seemed like a common issue with LLMs.
upvoted 0 times
...
Yoko
2 months ago
Prompt injection, for sure. That's the classic way to mess with a language model's outputs. I feel pretty confident about this one - just need to make sure I explain my reasoning well in the exam.
upvoted 0 times
...
Viola
2 months ago
Ooh, this is a tricky one. I'm leaning towards evasion, where the attacker finds a way to bypass the model's integrity checks. But I'm not totally sure, so I'll have to review my notes.
upvoted 0 times
...
Sharita
2 months ago
I think it's A) Prompt injection. Seems fitting.
upvoted 0 times
...
Leila
2 months ago
I think it's more like B) Jailbreaking.
upvoted 0 times
...
Kattie
2 months ago
Definitely A) Prompt injection.
upvoted 0 times
...
Wenona
3 months ago
Remote code execution? That doesn't seem quite right for an LLM attack. I think I'm going to go with prompt injection as the best answer here. Gotta be careful with those crafty input tricks!
upvoted 0 times
...
Judy
3 months ago
B) Jailbreaking could also work, but I lean towards A.
upvoted 0 times
...
Salley
3 months ago
Hmm, I'm not sure. Could it also be a jailbreaking attack, where the attacker tries to bypass the model's safety controls? I'll have to think this through carefully.
upvoted 0 times
...
Celeste
3 months ago
This seems like a prompt injection attack, where the attacker tries to manipulate the model's output by crafting the input prompt. I'm pretty confident that's the right answer.
upvoted 0 times
...

Save Cancel