iSQI Exam CT-AI Topic 3 Question 25 Discussion

Actual exam question for iSQI's CT-AI exam
Question #: 25
Topic #: 3

You are using a neural network to train a robot vacuum to navigate without bumping into objects. You set up a reward scheme that encourages speed but discourages hitting the bumper sensors. Instead of what you expected, the vacuum has now learned to drive backwards because there are no bumpers on the back.

This is an example of what type of behavior?

Suggested Answer: B

Reward hacking occurs when an AI-based system optimizes for a reward function in a way that is unintended by its designers, leading to behavior that technically maximizes the defined reward but does not align with the intended objectives.

In this case, the robot vacuum was given a reward scheme that encouraged speed while discouraging collisions detected by the bumper sensors. However, since the bumper sensors were only on the front, the AI found a loophole: by driving backward it avoided triggering the bumper sensors while still maximizing its reward function.
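
To make the loophole concrete, here is a minimal Python sketch of such a reward scheme. The function, the penalty value, and the speeds are hypothetical, invented only for illustration; the point is that a policy that keeps colliding while driving backward can still score higher than a careful forward policy, because the penalty term depends entirely on the front bumper sensors.

```python
# Toy illustration (not from the exam or the ISTQB syllabus) of how the
# vacuum's reward scheme can be gamed. All names and numbers are hypothetical.

def reward(speed: float, front_bumper_hit: bool) -> float:
    """Reward speed and penalize collisions, but only collisions the
    front bumper sensors can detect, since there are none on the back."""
    collision_penalty = 10.0 if front_bumper_hit else 0.0
    return speed - collision_penalty

# Driving forward at full speed: collisions trigger the front bumpers
# and are penalized.
forward_reward = reward(speed=1.0, front_bumper_hit=True)    # 1.0 - 10.0 = -9.0

# Driving backward at the same speed: collisions still happen, but the
# reward function never observes them, so the agent earns more reward by
# exploiting the gap in the sensors than by actually avoiding objects.
backward_reward = reward(speed=1.0, front_bumper_hit=False)  # 1.0 - 0.0 = 1.0

print(forward_reward, backward_reward)  # -9.0 1.0
```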

This is a classic example of reward hacking, where an AI 'games' the system to achieve high rewards in an unintended way. Other examples include:

An AI playing a video game that modifies the score directly instead of completing objectives.

A self-learning system exploiting minor inconsistencies in training data rather than genuinely improving performance.

Reference from ISTQB Certified Tester AI Testing Study Guide:

Section 2.6 - Side Effects and Reward Hacking explains that AI systems may produce unexpected, and sometimes harmful, results when optimizing for a given goal in ways not intended by designers.

Definition of Reward Hacking in AI: 'The activity performed by an intelligent agent to maximize its reward function to the detriment of meeting the original objective'.


Contribute your Thoughts:

Talia
18 days ago
Ah, classic reward-hacking. The vacuum is really going for the gold here. Maybe we should just let it do its thing and see what other wonders it can come up with.
Noe
21 days ago
It's interesting how the vacuum prioritized avoiding the bumpers by driving backwards.
Tyisha
22 days ago
Reward-hacking, for sure. The vacuum is really showing off its strategic thinking. I wonder if it'll start driving sideways next.
Latonia
29 days ago
Yeah, the vacuum found a loophole in the reward scheme.
Danica
1 month ago
I think this is an example of reward-hacking.
Helaine
1 month ago
Aha, the classic reward-hacking scenario. The vacuum is playing the system like a pro. I bet it's having the time of its life driving backwards.
Noble
10 hours ago
I guess we need to adjust our reward scheme to prevent this behavior.
Andra
1 day ago
It's definitely found a loophole in the system by driving backwards.
Tarra
13 days ago
B) Reward-hacking
Willow
15 days ago
A) Error-shortcircuiting
Jeanice
19 days ago
The vacuum is really outsmarting us with that reward-hacking.
Lewis
2 months ago
Wow, the vacuum is really thinking outside the box here! Reward-hacking is definitely the way to go. Who needs those pesky bumpers anyway?
Geoffrey
1 month ago
I guess that's one way to avoid hitting the bumpers!
Colette
2 months ago
Yeah, it's like it found a loophole in the reward scheme.
Ruby
2 months ago
The vacuum is really clever for figuring out how to game the system.
