CASE STUDY
A global marketing agency is adapting a large language model ("LLM") to generate content for an upcoming marketing campaign for a client's new product: a hard hat designed for construction workers of any gender to better protect them from head injuries.
The marketing agency is accessing the LLM through an application programming interface ("API") developed by a third-party technology company. They want to generate text to be used for targeted advertising communications that highlight the benefits of the hard hat to potential purchasers. Both the marketing agency and the technology company have taken reasonable steps to address AI governance.
The marketing agency has:
* Entered into a contract with the technology company with suitable representations and warranties.
* Completed an impact assessment on the LLM for this intended use.
* Followed technical guidance on how to measure and mitigate bias in the LLM.
* Enabled technical aspects of transparency, explainability, robustness and privacy.
* Followed applicable regulatory requirements.
* Created specific legal statements and disclosures regarding the use of the AI on its client's advertising.
The technology company has:
* Provided guidance and resources to developers to address environmental concerns.
* Built technical guidance on how to measure and mitigate bias in the LLM.
* Provided tools and resources to measure bias specific to the LLM.
* Enabled technical aspects of transparency, explainability, robustness and privacy.
* Mapped and mitigated potential societal harms and large-scale impacts.
* Followed applicable regulatory requirements and industry standards.
* Created specific legal statements and disclosures regarding the LLM, including with respect to IP and rights to data.
The technology company has also addressed environmental concerns and societal harms.
All of the following results would be considered biased outputs from this AI system EXCEPT:
The correct answer is A. Sending ads to construction companies (business entities) rather than individual workers is a business-targeting decision, not an inherently biased AI output.
From the AIGP ILT Participant Guide -- Bias & Fairness Module:
"Biased outputs often include stereotyping, exclusion of underrepresented groups, or reinforcing harmful societal assumptions."
Examples like insufficient representation of minority groups or gender stereotyping in visuals or language are typical manifestations of bias.
AI Governance in Practice Report 2024 also notes:
"Bias in generative models may manifest in representation gaps, stereotyping, or unequal performance across demographic groups."
Option A, by contrast, describes a distribution strategy, not a bias generated by the AI model.