CASE STUDY
A global marketing agency is adapting a large language model ("LLM") to generate content for an upcoming marketing campaign for a client's new product: a hard hat designed for construction workers of any gender to better protect them from head injuries.
The marketing agency is accessing the LLM through an application programming interface ("API") developed by a third-party technology company. They want to generate text to be used for targeted advertising communications that highlight the benefits of the hard hat to potential purchasers. Both the marketing agency and the technology company have taken reasonable steps to address AI governance.
The marketing company has:
* Entered into a contract with the technology company with suitable representations and warranties.
* Completed an impact assessment on the LLM for this intended use.
* Built technical guidance on how to measure and mitigate bias in the LLM.
* Enabled technical aspects of transparency, explainability, robustness and privacy.
* Followed applicable regulatory requirements.
* Created specific legal statements and disclosures regarding the use of the AI on its client's advertising.
The technology company has:
* Provided guidance and resources to developers to address environmental concerns.
* Built technical guidance on how to measure and mitigate bias in the LLM.
* Provided tools and resources to measure bias specific to the LLM.
* Enabled technical aspects of transparency, explainability, robustness and privacy.
* Mapped and mitigated potential societal harms and large-scale impacts.
* Followed applicable regulatory requirements and industry standards.
* Created specific legal statements and disclosures regarding the LLM, including with respect to IP and rights to data.
The marketing company and its tech provider have taken reasonable steps to govern the AI's use, including legal disclosures, impact assessments, and bias mitigation. However, the company wants to take one more step to improve governance and reduce risks related to ongoing oversight and accountability.
While the marketing agency took steps to mitigate its risks, the best additional step would be to:
The correct answer is D. Forming a dedicated governance committee ensures continuous oversight, role clarity, and accountability throughout the AI lifecycle.
From the AIGP ILT Guide -- Governance Structures:
''Organizations using AI in high-impact scenarios should establish a governance body responsible for oversight of risk, compliance, and ethical alignment.''
Also reflected in AI Governance in Practice Report 2025:
''Committees support cross-functional decision-making, provide guidance for updates, and maintain accountability. This is especially critical for high-stakes applications like marketing to diverse audiences.''
Options A, B, and C are valid supplementary actions, but D offers a long-term and systematic governance mechanism.
CASE STUDY
Please use the following to answer the next question:
You have recently assumed the role of AI Governance leader for a California-based medical technology company. The organization primarily serves hospitals and has recently expanded to include walk-in clinics located within local pharmacies.
The company's core business focuses on diagnostic assistance powered by a large language model (LLM) and back-office process optimization using agentic AI, including chatbots, medical record request handling, scheduling and billing.
In preparation for its next round of funding, the board has asked you to prepare an AI Risk report to demonstrate to investors how the company is addressing AI-related risks. In preparing the report you learn that last year the company generated 30 million dollars in gross revenue across the US, EU, India, and South Korea and that vendors are engaged for various activities, including model testing and providing third-party AI solutions for chatbots.
Which of the following would provide you the best information addressing quality principles pertaining to the functioning of the AI agents and LLM?
The correct answer is D because it directly reflects core data and model quality principles such as accuracy, performance consistency, and real-world effectiveness across different user groups. AI governance frameworks emphasize that quality must be evaluated based on whether outputs are accurate, complete, and fit for purpose in real-world conditions. Measuring accuracy by user group also supports fairness and bias detection, which are essential components of trustworthy AI. Option D captures outcome-based performance and aligns with continuous monitoring expectations across the AI lifecycle. In contrast, options A and C focus more on operational or technical metrics, while B reflects user sentiment rather than objective quality. According to AI governance principles, high-quality AI systems require ongoing evaluation of outputs against real-world results to ensure reliability, validity, and safe deployment.
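The "accuracy by user group" idea in the explanation above can be made concrete with a small sketch. This is an illustrative example only, not from the AIGP materials: the group labels, predictions, and `accuracy_by_group` helper are all hypothetical, but the technique (disaggregating an outcome metric by user segment to surface performance gaps) is the one the answer describes.

```python
# Hypothetical sketch: measuring output accuracy per user group to surface
# performance gaps across segments (all labels and data are illustrative).
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples.
    Returns a dict mapping each group to its accuracy."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / totals[g] for g in totals}

sample = [
    ("hospital", "flu", "flu"),
    ("hospital", "cold", "flu"),
    ("walk_in_clinic", "flu", "flu"),
    ("walk_in_clinic", "cold", "cold"),
]
print(accuracy_by_group(sample))  # {'hospital': 0.5, 'walk_in_clinic': 1.0}
```

A large gap between groups (as in this toy data) would be the signal that triggers the fairness and bias-detection review the explanation refers to.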
Business A sells software that provides users with writing and grammar assistance. Business B is a cloud services provider that trains its own AI models.
* Business A has decided to add generative AI features to their software.
* Rather than create their own generative AI model, Business A has chosen to license a model from Business B.
* Business A will then integrate the model into their writing assistance software to provide generative AI capabilities.
* Business A is most concerned that its writing assistance software could recommend toxic or obscene text to its users.
Which of the following governance processes should Business A take to best protect its users against potentially inappropriate text?
Business A is integrating a generative AI model licensed from a third party (Business B) and is primarily concerned with the risk of toxic or obscene outputs being delivered to users. In this scenario, testing and validation of the AI model for such content risks is the most direct and effective governance strategy.
According to the AI Governance in Practice Report 2025, organizations that deploy AI must engage in performance monitoring protocols and ensure systems perform adequately for their intended purposes, including filtering harmful content:
''Operational governance... development of: Performance monitoring protocols to ensure systems perform adequately for their intended purposes.'' (p. 12)
''Product governance... includes: System impact assessments to identify and address risk prior to product development or deployment.'' (p. 11)
Furthermore, under the EU AI Act, which sets the global standard many organizations aim to align with, there is a clear obligation to test and monitor systems for potential harmful behavior:
''The act imposes regulatory obligations... such as establishing appropriate accountability structures, assessing system impact, providing technical documentation, establishing risk management protocols and monitoring performance...'' (p. 7)
Option B directly reflects this best practice of pre-deployment testing and validation to ensure that the model aligns with Business A's minimum content safety requirements.
Let's now evaluate the incorrect options:
A. Fine-tuning on verified user-generated text may improve model alignment but does not guarantee that the model will generalize correctly, especially if Business A lacks access to model internals (common in third-party licensing scenarios). Fine-tuning also introduces its own risks and may be contractually restricted.
C. A user reporting feature is reactive, not preventive. While helpful for long-term monitoring and mitigation, it does not prevent the initial harm of toxic outputs, which is Business A's primary concern.
D. Requesting documentation from Business B is useful for transparency and risk management, but it does not replace independent verification that the model meets Business A's content safety standards.
Thus, testing the model's behavior for unacceptable outputs before deployment is the most aligned approach with AI governance best practices and obligations.
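The pre-deployment testing and validation described above can be sketched as a simple safety test suite. This is a minimal, hypothetical illustration: `run_safety_suite`, `stub_model`, and the placeholder blocklist are all assumptions, not part of any real product or the AIGP materials; real deployments would use a proper toxicity classifier rather than keyword matching.

```python
# Hypothetical sketch of a pre-deployment content-safety check: run a fixed
# prompt suite through the licensed model and collect any outputs that
# violate the content policy. BLOCKED_TERMS is a placeholder blocklist.
BLOCKED_TERMS = {"offensive_term_1", "offensive_term_2"}

def violates_policy(text):
    """Crude keyword check standing in for a real toxicity classifier."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def run_safety_suite(generate, prompts):
    """Return the list of (prompt, output) pairs that violate policy."""
    failures = []
    for prompt in prompts:
        output = generate(prompt)
        if violates_policy(output):
            failures.append((prompt, output))
    return failures

# Stub standing in for Business B's licensed model behind an API:
def stub_model(prompt):
    return "Here is a polite suggestion for: " + prompt

failures = run_safety_suite(stub_model, ["rewrite my email", "suggest a headline"])
print("PASS" if not failures else f"FAIL: {len(failures)} violations")  # prints PASS
```

The point of the sketch is the gating step: the model ships only if the suite passes, which is the preventive control that distinguishes option B from the reactive user-reporting option C.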
A shipping service based in the US is looking to expand its operations into the EU. It utilizes an in-house developed multimodal AI model that analyzes all personal data collected from shipping senders and recipients, and optimizes shipping routes and schedules based on this data.
As they expand into the EU, all of the following descriptions should be included in the technical documentation for their AI model EXCEPT?
The EU AI Act outlines what must be included in technical documentation for high-risk systems. These requirements are designed to support conformity assessment, transparency, and traceability.
From the AI Governance in Practice Report 2025:
''It mandates drawing up technical documentation... must include a general description of the AI system, the intended purpose, and a detailed description of the elements and development process.'' (p. 34)
''Documentation... includes training, testing, evaluation procedures, and appropriateness of performance metrics.'' (p. 34--35)
The risk management system is addressed separately through a risk management plan, not within the technical documentation itself.
Thus:
A, C, and D are explicitly required in the technical documentation.
B, while important, is part of the risk management process, not a required section of technical documentation.
You asked a generative AI tool to recommend new restaurants to explore in Boston, Massachusetts that have a specialty Italian dish made in a traditional fashion without spinach and wine. The generative AI tool recommended five restaurants for you to visit.
After looking up the restaurants, you discovered one restaurant did not exist and two others did not have the dish.
This information provided by the generative AI tool is an example of what is commonly called?
In the context of AI, particularly generative models, 'hallucination' refers to the generation of outputs that are not based on the training data and are factually incorrect or non-existent. The scenario described involves the generative AI tool providing incorrect and non-existent information about restaurants, which fits the definition of hallucination. Reference: AIGP BODY OF KNOWLEDGE and various AI literature discussing the limitations and challenges of generative AI models.