Welcome to Pass4Success


Isaca AAIA Exam - Topic 2 Question 11 Discussion

Actual exam question for Isaca's AAIA exam
Question #: 11
Topic #: 2

An organization's system development process has been enhanced with AI. Which of the following features presents the GREATEST risk?

Suggested Answer: D
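For context on the suggested answer: the risk commenters flag in option D (AI-generated code with no human oversight) is commonly mitigated with a human-in-the-loop merge gate. Below is a minimal, hypothetical sketch of such a policy check; the `Change` fields and the approval rule are illustrative assumptions, not from ISACA material or any specific tool.

```python
from dataclasses import dataclass

@dataclass
class Change:
    """A proposed code change entering the delivery pipeline (illustrative)."""
    ai_generated: bool    # produced by an AI coding assistant
    human_approvals: int  # distinct human reviewer sign-offs
    tests_passed: bool    # automated test suite result

def merge_allowed(change: Change, min_human_approvals: int = 1) -> bool:
    """Block AI-generated code that lacks human oversight."""
    if not change.tests_passed:
        return False
    # The scenario in option D: AI-authored code with zero human review.
    if change.ai_generated and change.human_approvals < min_human_approvals:
        return False
    return True

# An AI-authored change with no reviewer is rejected even when tests pass.
print(merge_allowed(Change(ai_generated=True, human_approvals=0, tests_passed=True)))  # False
print(merge_allowed(Change(ai_generated=True, human_approvals=1, tests_passed=True)))  # True
```

The point of the sketch is that automated tests alone do not close the gap: the gate still requires at least one human sign-off before AI-generated code can merge.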

Contribute your Thoughts:

Paulene
9 hours ago
I agree with D, total reliance on AI is a bad idea.
upvoted 0 times
Sharen
6 days ago
I think B is a bigger issue. Non-tech users might misinterpret results.
upvoted 0 times
Laurel
11 days ago
D seems super risky, no human checks at all?
upvoted 0 times
Mitsue
16 days ago
D all the way! AI writing code without any human oversight? That's like letting a toddler drive a bus.
upvoted 0 times
Florinda
21 days ago
C is the riskiest option. Personalizing apps without human input? Yikes, that could go horribly wrong.
upvoted 0 times
Thurman
26 days ago
I'd have to go with B. Non-technical users validating AI results? That's a recipe for disaster.
upvoted 0 times
Sommer
1 month ago
I think we had a practice question about resource allocation and it didn’t seem as risky as the others. A might be less concerning.
upvoted 0 times
Oliva
1 month ago
I feel like all AI-generated code without oversight is a huge red flag. It’s like giving up control completely.
upvoted 0 times
Oliva
1 month ago
I’m not sure, but I think having non-technical users validating AI results could lead to misunderstandings. Maybe B is a big risk too?
upvoted 0 times
Bernardine
2 months ago
Personalizing applications could be risky if the AI isn't properly trained. I'd want to know more about how that process works.
upvoted 0 times
Torie
2 months ago
Validating AI results is always a concern, especially for non-technical users. That could lead to some serious issues down the line.
upvoted 0 times
Vanna
2 months ago
I'd probably start by considering which option poses the most significant security or compliance concerns. Letting AI generate code without any human review seems like a major red flag.
upvoted 0 times
Vinnie
2 months ago
I remember we discussed how AI can sometimes make decisions that aren't fully transparent, so D seems risky.
upvoted 0 times
Elsa
2 months ago
D is definitely the biggest risk. No human oversight on AI-generated code? That's a disaster waiting to happen.
upvoted 0 times
Sanda
3 months ago
Agreed, D is risky. AI can make mistakes without checks.
upvoted 0 times
Arlie
3 months ago
Wait, AI generates all the code? That sounds dangerous!
upvoted 0 times
Ma
3 months ago
I'm a bit confused by the wording of the question. Can the AI really allocate resources without any human oversight? That seems like a big risk to me.
upvoted 0 times
Annamaria
3 months ago
Hmm, this seems like a tricky one. I'd want to think through the potential risks of each option carefully.
upvoted 0 times
Arminda
2 months ago
I think D is the biggest risk. No human oversight is dangerous.
upvoted 0 times
