Deal of The Day! Hurry Up, Grab the Special Discount - Save 25% - Ends In 00:00:00 Coupon code: SAVE25
Welcome to Pass4Success

- Free Preparation Discussions

Isaca AAISM Exam - Topic 3 Question 6 Discussion

Actual exam question for Isaca's AAISM exam
Question #: 6
Topic #: 3
[All AAISM Questions]

An attacker crafts inputs to a large language model (LLM) to exploit output integrity controls. Which of the following types of attacks is this an example of?

Show Suggested Answer Hide Answer
Suggested Answer: A

According to the AAISM framework, prompt injection is the act of deliberately crafting malicious or manipulative inputs to override, bypass, or exploit the model's intended controls. In this case, the attacker is targeting the integrity of the model's outputs by exploiting weaknesses in how it interprets and processes prompts. Jailbreaking is a subtype of prompt injection specifically designed to override safety restrictions, while evasion attacks target classification boundaries in other ML contexts, and remote code execution refers to system-level exploitation outside of the AI inference context. The most accurate classification of this attack is prompt injection.


AAISM Exam Content Outline -- AI Technologies and Controls (Prompt Security and Input Manipulation)

AI Security Management Study Guide -- Threats to Output Integrity

Contribute your Thoughts:

0/2000 characters
Audra
6 days ago
A) Prompt injection is the best choice. Clear rationale!
upvoted 0 times
...
Reid
11 days ago
D) Evasion is too vague for this.
upvoted 0 times
...
Johnna
17 days ago
C) Remote code execution? Not really.
upvoted 0 times
...
Nadine
22 days ago
I agree, A) Prompt injection makes sense.
upvoted 0 times
...
Lindy
27 days ago
Not sure about this one, seems a bit far-fetched.
upvoted 0 times
...
Alberta
2 months ago
Wait, can you really exploit LLMs like that?
upvoted 0 times
...
Ivette
2 months ago
Evasion sounds right to me, D) Evasion.
upvoted 0 times
...
Audra
2 months ago
Haha, "exploit output integrity controls"? Sounds like a fancy way to say "trick the AI into saying something bad."
upvoted 0 times
...
Kristian
2 months ago
Remote code execution? Nah, that's too advanced for this. Evasion is more likely.
upvoted 0 times
...
Theola
2 months ago
Hmm, I'd say it's a jailbreaking attempt. Gotta break out of those pesky output controls!
upvoted 0 times
...
Deonna
2 months ago
Prompt injection, for sure! That's the classic way to mess with those LLMs.
upvoted 0 times
...
Quentin
3 months ago
I feel like prompt injection is the best fit, especially since it involves crafting inputs. But what if it's more about bypassing controls?
upvoted 0 times
...
Arlene
3 months ago
Evasion sounds familiar, but I can't recall the specifics. I might be mixing it up with other types of attacks we covered.
upvoted 0 times
...
Diego
3 months ago
I remember practicing a question about jailbreaking, but I don't think that's the right answer here. It feels more like a manipulation of the input.
upvoted 0 times
...
Celeste
3 months ago
I think this might be related to prompt injection, but I'm not entirely sure. We discussed it in class, and it seemed like a common issue with LLMs.
upvoted 0 times
...
Yoko
3 months ago
Prompt injection, for sure. That's the classic way to mess with a language model's outputs. I feel pretty confident about this one - just need to make sure I explain my reasoning well in the exam.
upvoted 0 times
...
Viola
3 months ago
Ooh, this is a tricky one. I'm leaning towards evasion, where the attacker finds a way to bypass the model's integrity checks. But I'm not totally sure, so I'll have to review my notes.
upvoted 0 times
...
Sharita
4 months ago
I think it's A) Prompt injection. Seems fitting.
upvoted 0 times
...
Leila
4 months ago
I think it's more like B) Jailbreaking.
upvoted 0 times
...
Kattie
4 months ago
Definitely A) Prompt injection.
upvoted 0 times
...
Wenona
4 months ago
Remote code execution? That doesn't seem quite right for an LLM attack. I think I'm going to go with prompt injection as the best answer here. Gotta be careful with those crafty input tricks!
upvoted 0 times
...
Judy
4 months ago
B) Jailbreaking could also work, but I lean towards A.
upvoted 0 times
...
Salley
5 months ago
Hmm, I'm not sure. Could it also be a jailbreaking attack, where the attacker tries to bypass the model's safety controls? I'll have to think this through carefully.
upvoted 0 times
...
Celeste
5 months ago
This seems like a prompt injection attack, where the attacker tries to manipulate the model's output by crafting the input prompt. I'm pretty confident that's the right answer.
upvoted 0 times
Rebbecca
1 day ago
I think it's definitely prompt injection.
upvoted 0 times
...
...

Save Cancel