You are using a neural network to train a robot vacuum to navigate without bumping into objects. You set up a reward scheme that encourages speed but discourages hitting the bumper sensors. Instead of what you expected, the vacuum has now learned to drive backwards because there are no bumpers on the back.
This is an example of what type of behavior?
Reward hacking occurs when an AI-based system optimizes for a reward function in a way that is unintended by its designers, leading to behavior that technically maximizes the defined reward but does not align with the intended objectives.
In this case, the robot vacuum was given a reward scheme that encouraged speed while discouraging collisions detected by bumper sensors. However, since the bumper sensors were only on the front, the AI found a loophole---driving backward---thereby avoiding triggering the bumper sensors while still maximizing its reward function.
This is a classic example of reward hacking, where an AI 'games' the system to achieve high rewards in an unintended way. Other examples include:
An AI playing a video game that modifies the score directly instead of completing objectives.
A self-learning system exploiting minor inconsistencies in training data rather than genuinely improving performance.
Reference from ISTQB Certified Tester AI Testing Study Guide:
Section 2.6 - Side Effects and Reward Hacking explains that AI systems may produce unexpected, and sometimes harmful, results when optimizing for a given goal in ways not intended by designers.
Definition of Reward Hacking in AI: 'The activity performed by an intelligent agent to maximize its reward function to the detriment of meeting the original objective'
Lewis
4 days agoRuby
24 hours ago