Welcome to Pass4Success


Amazon AIF-C01 Exam - Topic 2 Question 17 Discussion

Actual exam question for Amazon's AIF-C01 exam
Question #: 17
Topic #: 2

An accounting firm wants to implement a large language model (LLM) to automate document processing. The firm must proceed responsibly to avoid potential harms.

What should the firm do when developing and deploying the LLM? (Select TWO.)

Suggested Answer: B
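Several commenters below point to fairness metrics for model evaluation. As a minimal illustrative sketch (toy data, not from the exam itself), one widely used metric is the demographic parity difference: the gap in positive-prediction rates between groups.

```python
def demographic_parity_difference(preds, groups):
    """Absolute gap in positive-prediction rates between groups."""
    rate = {}
    for g in set(groups):
        members = [p for p, m in zip(preds, groups) if m == g]
        rate[g] = sum(members) / len(members)
    values = sorted(rate.values())
    return values[-1] - values[0]

# Hypothetical binary decisions from a model, split across two groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.75 vs 0.25 -> 0.5
```

A value near 0 means the model treats the groups similarly on this axis; a large gap is a signal to investigate before deployment.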

Contribute your Thoughts:

Portia
3 months ago
Prompt engineering might be useful, but not a top priority.
Emilio
3 months ago
Overfitting is a big no-no, for sure!
Shayne
3 months ago
Wait, can adjusting the temperature really help with bias?
Wilson
4 months ago
I think modifying training data is crucial too.
Shonda
4 months ago
Definitely need to include fairness metrics!
Quiana
4 months ago
I recall a practice question where we focused on avoiding overfitting. It might not directly relate to responsible deployment, but it’s still important for model performance.
Vallie
4 months ago
I feel like modifying the training data to mitigate bias is crucial. It aligns with what we practiced in our case studies, but I’m torn between A and C for the second option.
Johnetta
4 months ago
I’m not entirely sure about the temperature parameter. I think it’s more about controlling creativity in responses rather than addressing potential harms.
Caitlin
5 months ago
I remember we discussed the importance of fairness metrics in our last class. It seems like A and C could be good choices for ensuring responsible deployment.
Felix
5 months ago
Prompt engineering, huh? That's an interesting one. I wonder how that would apply to responsible AI development. I'll make sure to consider that as a possibility.
Melodie
5 months ago
Adjusting the temperature parameter? I'm not sure what that means in this context. I'll have to research that a bit more before deciding. The other options make more sense to me.
Marcos
5 months ago
Okay, let's see. Fairness metrics and mitigating bias are definitely important. I'm not as sure about the other options, but I'll give it my best shot.
Dong
5 months ago
Hmm, I'm a bit unsure about this one. There are a lot of options, and I'm not sure which two are the most important. I'll have to think it through carefully.
Joye
5 months ago
This seems like a straightforward question about responsible AI development. I'm confident I can identify the key steps the firm should take.
Jaime
10 months ago
Adjusting the temperature parameter? Is this model going to be serving up hot takes or crunching numbers?
Gearldine
9 months ago
Consider the ethical implications of automating document processing.
Ahmad
9 months ago
Implement safeguards to prevent bias and misinformation.
Katie
9 months ago
Ensure the LLM is trained on diverse and representative data.
Lynna
10 months ago
Prompt engineering, huh? Sounds like they're trying to train the model to be a poet. As long as it can crunch those numbers, I'm good.
Nathan
9 months ago
E) Apply prompt engineering techniques.
Bernardine
9 months ago
C) Modify the training data to mitigate bias.
Jeannetta
10 months ago
A) Include fairness metrics for model evaluation.
Andra
11 months ago
I believe modifying the training data to mitigate bias is also crucial in this case.
Lili
11 months ago
Whoa, hold up! Modifying the training data? Isn't that like cheating on your homework? I hope they know what they're doing.
Kelvin
10 months ago
E) Apply prompt engineering techniques.
Valentin
10 months ago
C) Modify the training data to mitigate bias.
Mirta
10 months ago
A) Include fairness metrics for model evaluation.
Raina
11 months ago
Adjusting the temperature parameter? What is this, a cooking class? I'd rather they focus on avoiding overfitting and applying prompt engineering.
Tiara
11 months ago
Definitely need to include fairness metrics and modify the training data. Bias is a big concern with these models, so they have to do it right.
Tayna
11 months ago
I agree with Aimee. It's important to ensure the model is fair and unbiased.
Aimee
11 months ago
I think the firm should include fairness metrics for model evaluation.
