Which of the following is correct regarding the layers of a deep neural network?
A deep neural network (DNN) is a type of artificial neural network that consists of multiple layers between the input and output layers. The ISTQB Certified Tester AI Testing (CT-AI) Syllabus outlines the following characteristics of a DNN:
Structure of a Deep Neural Network:
A DNN comprises at least three types of layers:
Input layer: Receives the input data.
Hidden layers: Perform complex feature extraction and transformations.
Output layer: Produces the final prediction or classification.
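The three-layer structure above can be sketched as a minimal forward pass. This is an illustrative toy (layer sizes, weights, and activations are assumptions, not from the syllabus), showing input, hidden, and output layers in the smallest "deep" configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 4 input features, one hidden layer of 8 units,
# 3 output classes -- the minimum structure described above.
W1 = rng.normal(size=(4, 8))   # input  -> hidden weights
W2 = rng.normal(size=(8, 3))   # hidden -> output weights

def forward(x):
    """One forward pass: input layer -> hidden layer -> output layer."""
    hidden = np.maximum(0, x @ W1)        # ReLU feature extraction (hidden layer)
    logits = hidden @ W2                  # output layer
    exp = np.exp(logits - logits.max())   # softmax: final classification
    return exp / exp.sum()

probs = forward(np.array([0.5, -1.2, 3.0, 0.1]))
print(probs)  # one probability per output class
```

Removing `W1` and the hidden computation would leave only input and output layers, which is exactly why option A fails.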
Analysis of Answer Choices:
A (Only input and output layers): Incorrect, as a DNN must have at least one hidden layer.
B (At least one internal hidden layer): Correct, as a neural network must have hidden layers to be considered deep.
C (Minimum of five layers required): Incorrect, as there is no strict definition that requires at least five layers.
D (Output layer is not connected to other layers): Incorrect, as the output layer must be connected to the hidden layers.
Thus, Option B is the correct answer, as a deep neural network must have at least one hidden layer.
Certified Tester AI Testing Study Guide Reference:
ISTQB CT-AI Syllabus v1.0, Section 6.1 (Neural Networks and Deep Neural Networks)
ISTQB CT-AI Syllabus v1.0, Section 6.2 (Structure of Deep Neural Networks).
Which ONE of the following options does NOT describe a challenge for acquiring test data in ML systems?
SELECT ONE OPTION
Challenges for Acquiring Test Data in ML Systems: Compliance needs, the changing nature of data over time, and sourcing data from public sources are significant challenges. Data being generated quickly is generally not a challenge; it can actually be beneficial as it provides more data for training and testing.
Reference: ISTQB_CT-AI_Syllabus_v1.0, Sections on Data Preparation and Data Quality Issues.
A local business has a mail pickup/delivery robot for their office. The robot currently uses a track to move between pickup/drop off locations. When it arrives at a destination, the robot stops to allow a human to remove or deposit mail.
The office has decided to upgrade the robot to include AI capabilities that allow the robot to perform its duties without a track, without running into obstacles, and without human intervention.
The test team is creating a list of new and previously established test objectives and acceptance criteria to be used in the testing of the robot upgrade. Which of the following test objectives will test an AI quality characteristic for this system?
AI-based systems have specific quality characteristics, including evolution, autonomy, and adaptability. A test objective that evaluates whether an AI system evolves to improve performance over time directly aligns with AI quality characteristics.
Explanation of Answer Choices:
Option A: The robot must evolve to optimize its routing.
Correct. Evolution is an AI quality characteristic that ensures the system learns from past experiences and adapts to improve efficiency.
Option B: The robot must recharge for no more than six hours a day.
Incorrect. This is an operational constraint rather than an AI-specific quality characteristic.
Option C: The robot must record the time of each delivery, which is compiled into a report.
Incorrect. Logging data does not relate to AI quality characteristics like adaptability or autonomy.
Option D: The robot must complete 99.99% of its deliveries each day.
Incorrect. This is a performance target rather than an AI quality characteristic.
ISTQB CT-AI Syllabus Reference:
Evolution as an AI Quality Characteristic: 'Check how well the system learns from its own experience. Check how well the system copes when the profile of data changes (i.e., concept drift)'.
Thus, Option A is the best choice as it directly tests an AI quality characteristic (evolution) in the upgraded autonomous robot.
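The concept-drift check quoted in the syllabus excerpt can be illustrated with a minimal sketch. The function name, the accuracy figures, and the tolerance are all assumed for illustration; the idea is simply to compare model accuracy on a recent data window against a baseline and flag drift when it degrades:

```python
# Minimal concept-drift check (assumed names, values, and threshold):
# flag drift when recent accuracy falls more than `tolerance` below baseline.
def drift_detected(baseline_acc, recent_acc, tolerance=0.10):
    return (baseline_acc - recent_acc) > tolerance

# e.g., accuracy on the first month of deliveries vs. the latest week
print(drift_detected(0.95, 0.92))  # False: within tolerance
print(drift_detected(0.95, 0.70))  # True: the data profile has changed
```

A test objective for evolution would exercise exactly this kind of monitoring: does the robot's routing keep working as the office layout and delivery patterns change?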
AI-enabled medical devices are used nowadays for automating certain parts of the medical diagnostic process. Since these are life-critical processes, the relevant authorities are considering bringing about suitable certifications for these AI-enabled medical devices. This certification may involve several facets of AI testing (I - V).
I. Autonomy
II. Maintainability
III. Safety
IV. Transparency
V. Side Effects
Which ONE of the following options contains the three MOST required aspects to be satisfied for the above scenario of certification of AI-enabled medical devices?
SELECT ONE OPTION
For AI-enabled medical devices, the most required aspects for certification are safety, transparency, and side effects. Here's why:
Safety (Aspect III): Critical for ensuring that the AI system does not cause harm to patients.
Transparency (Aspect IV): Important for understanding and verifying the decisions made by the AI system.
Side Effects (Aspect V): Necessary to identify and mitigate any unintended consequences of the AI system.
Why Not Other Options:
Autonomy and Maintainability (Aspects I and II): While important, they are secondary to the immediate concerns of safety, transparency, and managing side effects in life-critical processes.
You are using a neural network to train a robot vacuum to navigate without bumping into objects. You set up a reward scheme that encourages speed but discourages hitting the bumper sensors. Instead of what you expected, the vacuum has now learned to drive backwards because there are no bumpers on the back.
This is an example of what type of behavior?
Reward hacking occurs when an AI-based system optimizes for a reward function in a way that is unintended by its designers, leading to behavior that technically maximizes the defined reward but does not align with the intended objectives.
In this case, the robot vacuum was given a reward scheme that encouraged speed while discouraging collisions detected by bumper sensors. However, since the bumper sensors were only on the front, the AI found a loophole: driving backward avoids triggering the bumper sensors while still maximizing the reward function.
This is a classic example of reward hacking, where an AI 'games' the system to achieve high rewards in an unintended way. Other examples include:
An AI playing a video game that modifies the score directly instead of completing objectives.
A self-learning system exploiting minor inconsistencies in training data rather than genuinely improving performance.
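The loophole can be made concrete with a toy version of the reward scheme. The penalty value and distances are assumptions for illustration; the point is that the reward function only sees *sensed* collisions, so backward driving scores higher even when the same collisions occur:

```python
# Hypothetical reward scheme (names and numbers are illustrative):
# reward = distance covered minus a penalty per *sensed* bumper activation.
BUMP_PENALTY = 10.0

def episode_reward(distance, sensed_bumper_hits):
    return distance - BUMP_PENALTY * sensed_bumper_hits

# Driving forward: the front bumper registers the collisions.
forward_reward = episode_reward(distance=20.0, sensed_bumper_hits=3)

# Driving backward: the same collisions occur, but there is no rear
# bumper, so none are sensed -- the reward function cannot see them.
backward_reward = episode_reward(distance=20.0, sensed_bumper_hits=0)

print(forward_reward, backward_reward)  # backward scores higher
```

The agent maximizes the defined reward (backward driving) while defeating the designers' intent (collision avoidance), which is the essence of reward hacking.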
Reference from ISTQB Certified Tester AI Testing Study Guide:
Section 2.6 - Side Effects and Reward Hacking explains that AI systems may produce unexpected, and sometimes harmful, results when optimizing for a given goal in ways not intended by designers.
Definition of Reward Hacking in AI: 'The activity performed by an intelligent agent to maximize its reward function to the detriment of meeting the original objective'