Welcome to Pass4Success


Isaca AAIA Exam - Topic 2 Question 11 Discussion

Actual exam question for Isaca's AAIA exam
Question #: 11
Topic #: 2

An organization's system development process has been enhanced with AI. Which of the following features presents the GREATEST risk?

Suggested Answer: D

Contribute your Thoughts:

Deeanna (1 day ago): I still think D is the worst. Full AI control is too much.

Kaycee (6 days ago): I see your point, but C could lead to privacy issues.

Yolando (11 days ago): True, but D really stands out. We need human input in coding.

Susy (17 days ago): I feel B is a concern too. Non-technical users might misinterpret results.

Grover (22 days ago): I think D is the biggest risk. No human oversight? That's scary.

Deonna (27 days ago): C could be risky too; personalization might lead to bias.

Paulene (2 months ago): I agree with D; total reliance on AI is a bad idea.

Sharen (2 months ago): I think B is a bigger issue. Non-technical users might misinterpret results.

Laurel (2 months ago): D seems super risky. No human checks at all?

Mitsue (2 months ago): D all the way! AI writing code without any human oversight? That's like letting a toddler drive a bus.

Florinda (2 months ago): C is the riskiest option. Personalizing apps without human input? Yikes, that could go horribly wrong.

Thurman (2 months ago): I'd have to go with B. Non-technical users validating AI results? That's a recipe for disaster.

Sommer (3 months ago): I think we had a practice question about resource allocation, and it didn't seem as risky as the others. A might be less concerning.

Oliva (3 months ago): I feel like all AI-generated code without oversight is a huge red flag. It's like giving up control completely.

Oliva (3 months ago): I'm not sure, but I think having non-technical users validate AI results could lead to misunderstandings. Maybe B is a big risk too?

Bernardine (3 months ago): Personalizing applications could be risky if the AI isn't properly trained. I'd want to know more about how that process works.

Torie (3 months ago): Validating AI results is always a concern, especially for non-technical users. That could lead to some serious issues down the line.

Vanna (3 months ago): I'd probably start by considering which option poses the most significant security or compliance concerns. Letting AI generate code without any human review seems like a major red flag.

Vinnie (4 months ago): I remember we discussed how AI can sometimes make decisions that aren't fully transparent, so D seems risky.

Elsa (4 months ago): D is definitely the biggest risk. No human oversight on AI-generated code? That's a disaster waiting to happen.

Sanda (4 months ago): Agreed, D is risky. AI can make mistakes without checks.

Arlie (4 months ago): Wait, AI generates all the code? That sounds dangerous!

Ma (5 months ago): I'm a bit confused by the wording of the question. Can the AI really allocate resources without any human oversight? That seems like a big risk to me.

Annamaria (5 months ago): Hmm, this seems like a tricky one. I'd want to think through the potential risks of each option carefully.
    Arminda (4 months ago, in reply): I think D is the biggest risk. No human oversight is dangerous.
